For a long time IBM's AI strategy has been to do normal business, and claim it's AI-related in order to make themselves sound cooler.
This is the same thing. Any layoffs happening today don't really have anything to do with AI. The company just needs to do layoffs, and saying "we have layoffs because of AI" sounds better than "we have layoffs because revenues are worse than we expected".
Same thing at the company I recently worked at: a maker of software for car dealerships & manufacturers.
The product marketing mentions AI.
I asked a staff data scientist (who has been with the company for several years) if AI is used in our products.
"No."
It's quite amazing how pervasive fraud & pseudo-fraud is in the American economy. Regulators seem to turn a blind eye to so much of it. A recent example I saw was food adulteration, with things such as sawdust [0]
[0] "31 Foods You're Eating That Contain Sawdust" https://www.prevention.com/food-nutrition/healthy-eating/a20...
I don't think adding cellulose to processed food in order to prevent caking is food adulteration. It's clearly listed in the ingredients and is perfectly safe.
I do think it's fun to point out in the ingredients list; sometimes it's a little unexpected, like on a bag of chips. "Mmm, sawdust" is kind of funny because it sounds bad but is completely harmless.
The company I work for just jumped on the bandwagon and is actually searching for people with ML/AI experience. If you can use Spark, TensorFlow, Scikit and Keras, you actually have a better chance of getting the job than a Ph.D. who knows only one of the frameworks. (That's the way many corporations work, sadly.)
The only place "intelligence" appears in AI is in the name. These are mathematical or logical models which resemble the behaviour of an intelligent being, and if you throw ML into the mix, the models can actually learn on their own. But they are not creative. They can do amazing things, but they have no comprehension of WHY they do them and (usually) no notion of truthfulness. They just repeat what they were trained on or extrapolate from it (often wrongly, because there is no critical thinking or fact-checking in contemporary models).
I see a lot of tell-tale signs of another bubble which is going to burst in a couple of years like it did in the 1980s.
Nothing to worry about. If someone's job security is endangered, they can either switch employer or do something else.
But if you can claim experience with these frameworks, enjoy the ride. Companies are going to pay you whatever you ask.
Sure. But I'm a software engineer who is finishing a Ph.D. in applied informatics (coincidentally in the area of time-series prediction using ML). I'm not a manager.
When I mentioned "AI Winter" in front of them, they didn't know what I was talking about. But they created a nice corporate ladder which anybody can climb, based on years of experience with the aforementioned frameworks. Python experience was required; Scala + Spark was an advantage.
I don't know what you are planning to do. But ... I'm buying a huge load of popcorn and I will laugh my ass off when this bubble bursts in a couple of years.
(not parent but -)
It depends on area, generally:
Provider APIs (ChatGPT through OpenAI's API, etc.), LangChain, Hugging Face transformers, and Pinecone/other vector DBs are absolutely taking off.
Lots of specialist ML models, which required specific data collected carefully for a business task, are no longer needed. Until now, most ML role time and research time was spent collecting data and training specialist models.
General models like ChatGPT or pretrained image models now do better than a fresh model trained from scratch, or even a finetuned small/medium model (e.g. BERT/T5), ever will.
The specialist ML pipeline (data --> train --> deploy --> manage/MLOps/drift) used tech like PyTorch, TensorFlow, MLflow... and, at the more applied levels (e.g. deployment), transformers, scikit-learn, and Keras. However, these are being replaced wholesale at many companies by LangChain, the Hugging Face inference API (for vision tasks), and Pinecone/other vector DBs.
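To make the vector-DB part concrete, here's a minimal sketch of the retrieval pattern those products productionize: embed documents once, embed the query, rank by cosine similarity. The embedding model name and documents are illustrative, it assumes the pre-1.0 openai Python client with OPENAI_API_KEY set, and Pinecone et al. essentially replace the numpy search at scale:

```python
import numpy as np
import openai  # pre-1.0 client; reads OPENAI_API_KEY from the environment

def embed(text: str) -> np.ndarray:
    # One embedding call per text; the model choice here is just an example.
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return np.array(resp["data"][0]["embedding"])

# Illustrative "knowledge base"; in production these live in a vector DB.
docs = ["refund policy ...", "shipping times ...", "warranty terms ..."]
doc_vecs = np.stack([embed(d) for d in docs])

def search(query: str, k: int = 2) -> list[str]:
    # Cosine-similarity ranking; a vector DB does this at scale with ANN indexes.
    q = embed(query)
    sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
    return [docs[i] for i in np.argsort(sims)[::-1][:k]]
```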
LangChain is really just a smart way to wrap and order API calls to OpenAI/ChatGPT/other providers, with some prebuilt use cases (a sketch of the pattern it wraps is below). Right now there is less on the metrics/output side than with, let's say, "bring-your-own-data" ML models, where you could measure things like precision.
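A minimal sketch of that chaining pattern without the framework; the two-step task, helper names, and model are invented for illustration, and it again assumes the pre-1.0 openai client:

```python
import openai  # pre-1.0 client; reads OPENAI_API_KEY from the environment

def call_llm(prompt: str) -> str:
    # One provider API call: the unit a framework like LangChain wraps.
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp["choices"][0]["message"]["content"]

def summarize_then_classify(document: str) -> str:
    # A two-step "chain": the output of step 1 is templated into step 2.
    summary = call_llm(f"Summarize in two sentences:\n\n{document}")
    return call_llm(f"Classify this summary as POSITIVE or NEGATIVE:\n\n{summary}")
```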
Now, the old guard of ML (PyTorch, TensorFlow) is still used for training new models, open-source replication attempts, etc. But newer frameworks like JAX have not really taken off, as they arrived just as the community was switching to using providers rather than training its own models.
There remains a subset still powerful for communication with the C-suite: using simple models like k-means to show clusters with readable axes. Those teams tend to use scikit-learn or R. But this is more classic data science than ML.
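For illustration, a minimal sketch of that pattern with scikit-learn; the segment centers and feature names are invented for the example:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Three synthetic customer segments with business-readable features:
# columns are [avg_order_value_usd, orders_per_month].
segment_centers = np.array([[50, 2], [200, 1], [80, 10]])
X = np.concatenate(
    [rng.normal(c, scale=[10, 0.5], size=(100, 2)) for c in segment_centers]
)

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
# The fitted centers land directly in business units, so the plot or table
# needs no explanation in front of the C-suite.
print(km.cluster_centers_)
```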
There are also areas of AI so far relatively unaffected by ChatGPT etc - time-series prediction (like OP's, so it's less surprising they are using the old-guard technologies), game-engine AIs, non-discrete data, recommendation algorithms, and some computer-vision algorithms (especially active learning). Some players, like Hugging Face (the commercial company behind the transformers Python module), are sort of in between, given they serve both data-trained and the newer models.
> The company I work for just jumped on the bandwagon and is actually searching for people with ML/AI experience. If you can use Spark, TensorFlow, Scikit and Keras
This has nothing to do with the question, though. No one is hiring people who know Spark and TensorFlow to replace jobs. The kind of job replacement OP is asking about will potentially come from having your company sign a huge contract with Azure or whatever, hook up a bunch of LLM agents and APIs to it, and lay off 90% of your client support department. It won't be "we heard about AI on the news so we hired someone who knows Keras". Companies do that, but they have been doing it for many years - it isn't new or interesting (and many companies figured out how to do it well along the way, too).
> But if you can claim experience with these frameworks, enjoy the ride. Companies are going to pay you whatever you ask.
No, they won't. The pay is similar to other software development. If at some companies it's higher, it might be something like 10% higher, definitely not "pay you whatever you ask".
You're extrapolating too much from your company and your comment seems to be based on things that were relevant 5+ years ago, not on what's been happening in the field in the past 12 months.
Our marketing department has already started replacing content creators with ChatGPT. While I don't think it will replace that many jobs right away, it will start replacing more junior roles, which will make it even more difficult to break into many industries.
>I think if you can replace employees with ChatGPT you probably didn't need them to begin with. They weren't doing valuable work anyway.
That's a large number of employees. Labor markets are way less rational and efficient than we think.
Snark aside, I disagree somewhat. There are plenty of valuable but menial tasks (e.g. cut-and-dried CRUD app development) where >90% of the work can be done with LLMs; see the sketch after this list for the kind of code I mean. The question is, will this cause companies to ...
1. ... keep their current workforce, but produce 10 times the number of CRUD apps? Is there (induced) demand for that?
2. ... keep their current workforce, continue to produce the same number of CRUD apps as today, but with far more features and sophistication than the current product?
3. ... cut 90% of their workforce and continue to deliver the status quo?
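For reference, the kind of cut-and-dried endpoint I mean; an LLM will generate something like this near-verbatim from a one-line prompt (Flask and the in-memory store are just illustrative choices, and update is omitted for brevity):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
items: dict[int, dict] = {}  # in-memory store; a real app would use a DB
next_id = 1

@app.post("/items")
def create_item():
    global next_id
    item = {"id": next_id, **request.get_json()}
    items[next_id] = item
    next_id += 1
    return jsonify(item), 201

@app.get("/items/<int:item_id>")
def read_item(item_id):
    return jsonify(items[item_id]) if item_id in items else ("", 404)

@app.delete("/items/<int:item_id>")
def delete_item(item_id):
    items.pop(item_id, None)
    return "", 204
```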
Doing 3) will result in losing your competitive edge when everybody else is using AI to improve their product. That's similar to not using enterprise software at all because pen and paper or Office 365 will do the trick for now, until it's too late.
I believe that's an overgeneralization - a lot of jobs, when done badly, don't immediately show up in quarterly revenues (the usual definition of "value", in my experience). Companies can make their software unmaintainable, cause a major security issue, ruin their brand, or lose customer trust without noticing it until it's too late.
But if management is up for having a job done badly, they usually achieve that regardless of whether an LLM is involved.
Edit: I guess I actually agree with you, it's just that companies can have peculiar ideas of what they consider valuable.
Anecdotally, I know quite a few copywriters who've seen their clientele dry up, especially those deep in SEO and content marketing, thought leadership, that sort of thing.
I have no data to support this, but I have seen plenty of LinkedIn headlines switch to "Prompt engineer":
https://www.linkedin.com/search/results/people/?keywords=pro...
Wouldn't you have to first automate the job? Speculative layoffs because something is going to be automated "any day now" seem unwise to me; how will the task be completed tomorrow?
Or is the idea that businesses already automated stuff and the management is so incompetent that IBM has to send around a pamphlet to remind them to lay off the people now sitting on their hands?
The idea in those cases is not so much about automating jobs, but rather retaining fewer people while still getting the same productivity because the tools are helping.
As an avid user of Copilot and ChatGPT, those tools definitely make me faster. Sure, I gotta review everything they write. But it's already easier to review Copilot code directly in my VSCode window than that of a junior or mid-level developer on GitHub.
I definitely wouldn't replace an employee with AI tools, though, and I don't see that changing...
Also: whether we can replace less experienced devs and not drive the remaining ones crazy, that remains to be seen.
If you are being effective at managing a business, you'd be keeping track of employee utilization at some level. What I'm saying is, if your employee utilization suddenly drops for reasons that it won't recover from, you should consider layoffs. If it hasn't dropped, you haven't really replaced a job with AI yet and are putting the cart before the horse.
Companies have shown they'll do speculative layoffs or hiring. It's possible it's just an excuse in this case, but they might also genuinely believe the AI hype.
I have a friend (really) who is no longer replacing front-end devs who quit at their mid-stage startup. Says that he’s getting enough lift from ChatGPT-4 to make up enough of the gap that so far it’s not worth replacing them as the team can pick up the slack.
Question for OP - How big is the company you work for?
I work for a company that has around 50 technical employees, and I'd say use of LLMs is putting pressure on the company against hiring more employees rather than actually laying people off.
My armchair estimate is LLMs make the employees ~5% more efficient "on average" (not everyone is using it or using it effectively), which is 15-30 minutes more efficient per day. That would mean you'd start thinking about laying off 5 people if you're a company with 100 technical employees. If you're a smaller company, it would be premature to lay off employees based solely off of the impact of LLMs.
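A back-of-envelope check of those numbers, assuming an 8-hour day:

```python
efficiency_gain = 0.05
minutes_saved_per_day = efficiency_gain * 8 * 60  # 24 min, inside the 15-30 range
headcount_equivalent = efficiency_gain * 100      # ~5 of 100 technical employees
print(minutes_saved_per_day, headcount_equivalent)
```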
Great question, and one to be skeptical of. IBM doing layoffs and saying they will be an AI company is, well, the slumbering giant trying to keep itself relevant and squeeze costs to pump the stock.
> The IBM layoffs articles have been passed around executive management recently.
That IBM interview was highly speculative, and was arguably as much a smoke screen for layoffs they wanted to do anyway as it was a prediction of their future AI plans. No one at IBM has been laid off due to AI yet either; they simply "expect" they can do it in years to come, which may well be true.
I don't think this is common yet anywhere serious, as realistically you aren't going to be replacing an IC with an LLM yet, despite the hype, with very few exceptions.
Arvind Krishna is as much trying to associate IBM with the current AI investor craze as he is making a sensible statement about the future of work in that interview, and it should be seen as the investor marketing it is. IBM have done this in the past too - remember the Watson AI ads with Bob Dylan? Now no one remembers the Watson brand.
Planning to reduce headcount by 7,800 people because you have awesome AI technology coming down the pipeline sounds a lot better to some investors' ears than firing 7,800 people because the company isn't performing well, and investors are often rewarding AI news handsomely in the stock market these days.
I can't even remember the last time I saw Arvind or senior IBM staff being interviewed in the mainstream media at all before he uttered the word AI.
I think you'd have to be specific about what kind of staff. Developers? Users?
ChatGPT has only been out for about six months. Even if there are big layoffs coming, going from released technology, to implementation in a customer domain, to being comfortable laying off significant staff in that time frame seems extremely aggressive. I would guess layoffs in that time frame are actually for other reasons, though "AI" could possibly be used to obscure the true ones.
Preliminary, but it looks like everyone?
AI-boosted productivity = fewer employees needed.
The only blocker for mass company adoption right now is waiting for Azure secured GPT endpoints (up to our standards anyways), but we are talking to our Microsoft rep.
Skeptical of layoffs happening for non-"tech" companies due to AI, but lower headcount for next year is happening in at least some orgs for sure, especially at entry levels. Cf. Bill Gates' comment about AI being like having a white-collar worker made available.
I'm skeptical. It doesn't even sound like they've actually started on whatever AI/ML projects he's talking about. They're having profitability problems and needed to cut costs. I assume the AI mention here is just fluff so the layoff news doesn't sound so bad.
What he is saying is that they're reevaluating strategic priorities; this is a statement akin to Meta saying they're laying off VR/metaverse engineers and hiring AI engineers.
Yeah, true; I should have been more specific, that the mention of AI there is "we're changing priorities to develop AI-based offerings", not "we are replacing staff with AI".
AI won't directly lead to large-scale layoffs. It's more of a trend where companies begin to hire fewer people. They might not replace those who retire or leave.
First should come the training to use AI; then should come the layoffs of those who don't use it.
But it sounds like your company is not interested in training, and would rather hire from outside first. So: fire now, hire AI-enhanced staff next.
I have an aversion to such companies. But the other kinds of companies are not firing staff because of AI. Instead, they are increasing staff workload, a fallout of lots of staff finding better jobs.
These days a lot of companies seem to be in "we have layoffs because everyone else is doing it and shareholders want short-term gains" mode.
This is why we need to pay CEOs so much: it's important to have top talent to ape the decisions that "everyone else" is making.
Frameworks are constantly evolving and setting up a CRUD app is probably easier today than 10 years ago.
The only real blocker is waiting on Azure secured GPT endpoints, plus some contract legalese from our Microsoft reps about protecting our data.
I don't dispute that. It just might also be a lie.
However, mass unemployment will never happen, if you know humans. There is going to be a lot of outrage, and regulators will regulate it like drugs!
The tool has somehow become less impressive over time.