I see things like two-sentence menu summaries in Uber Eats that are completely off in tone.
A quick sample from my app right now:
“Authentic Caribbean Flavours. Jerk Chicken, Curry Goat, and more. A vibrant culinary journey awaits.” - local Caribbean place
“Customisable burgers with 250,000+ toppings. Hand-cut fries and rich milkshakes await.” - Five Guys
“Authentic Indian cuisine bursting with rich flavours. Perfect for late-night cravings” - local Indian
Everything is Authentic, or Rich, or whatever.
---
They’re investing in the wrong bits of AI. I’m sure they’re A/B testing these soulless, often inaccurate blurbs, but I just cannot see how investing money in them actually sells more product.
On the other hand, if they had a coherent product vision, and trusted their engineers to use AI how they see fit, then I’m sure they would be more successful, and it would be cheaper.
Yeah, what's going on in these cos is that a PM is tasked with finding ways of integrating AI into the product, and if someone's payroll depends on it they _will_ find ways to integrate AI into the product. "Hey, I couldn't find any ways to integrate AI into the product" is not an acceptable response.
And it's not just Uber. My weather app has an AI weather summary these days.
Aside from the hilarious "250,000+ toppings" error, these summaries seem... fine? I would be unsurprised to learn that a human came up with them, even. Seems like pretty common/standard marketing copy.
So, I've just read a few dozen student reports, which I'm 95% sure were mostly generated by AI.
The problem isn't one page of one report. It's not even one whole report. But the more you read, the more irritating it gets. It's hard not to notice the AIisms, and once you know them, it gets really obvious. And I know some people will say 'Oh, I say X' for any particular X, but the thing that people don't do is use the same construction at least twice a page, every page, forever.
Now, I can imagine there ends up being a bit of a battle, where AIs try to learn to write 'less AI', but for now, it's very obvious if you read enough AI generated stuff.
>It's hard not to notice the AIisms, and once you know them, it gets really obvious.
Maybe I haven't read enough Uber Eats descriptions to notice, but at least from the sampling above it doesn't seem too obviously AI. There might be a lot of clichéd wording, but it's not even clear whether it's worse than human reviews/descriptions.
Maybe each one is fine in isolation - what doesn’t come across from the sample is that every single one is practically the same. If you have Uber Eats, open up the app and look through the summaries for a bunch of restaurants and you’ll see what I mean.
And besides that, this just feels like something nobody asked for that probably doesn’t sell more food compared to, for example, more pictures.
It seems to have been at least slightly improved, but YouTube video summaries suffered from this to an almost comical degree not long ago. The AI voice is already pretty recognizable and stilted; then you constrain it to avoid saying anything negative or spoilery about the video, and (presumably) don't let it remember past output. No surprise it's extremely repetitive. With humans you're at least getting different people's voices, on different days, who remember that they just wrote about how the last one was a "unique look highlighting the importance of design".
Don't worry, they'll also use AI to add more pictures, which will all look strangely similar while bearing at best passing resemblance to anything you might actually receive after placing an order.
I think that's exactly the point. It's the distillation of the most common marketing copy possible, and when that tone is applied everywhere it becomes very same-y, like those cookie cutter neighborhoods where every house is the exact same. Which to some extent defeats the purpose of marketing as it doesn't stand out at all, just sanitized sameness. It's boring and a bit creepy.
But why does Uber need to spend $3.4B on injecting a useless blob of text between me and an overpriced burger delivered by a struggling illegal immigrant in a smoke-belching jalopy?
I know the counter-argument: "This will increase sales." You know what else would increase sales? Spending the $3.4B to replace the above with a uniformed delivery service similar to UPS. That job could pay benefits.
My last job did something similar. An AI blurb feature was researched and built, and costs a good chunk of resources to run, for no reason other than being able to tell investors AI was being used.
I proposed a solution using simple heuristics that would have produced the same output, would have been cheaper to build, and would cost next to nothing to run, but being economical, efficient, and boring doesn't make exciting PowerPoint slides.
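For the curious, such a heuristic could be as simple as a fixed template filled from structured data the platform already has. A minimal sketch, with every field name and template invented for illustration (not the commenter's actual design):

```python
# Hypothetical template-based blurb generator: no model inference,
# just restaurant metadata slotted into a fixed sentence shape.
def blurb(cuisine, top_items, tags):
    items = ", ".join(top_items[:2])          # two best-selling dishes
    opener = f"{cuisine} favourites like {items}."
    extras = " ".join(f"Known for being {t}." for t in tags[:1])
    return f"{opener} {extras}".strip()

print(blurb("Caribbean", ["Jerk Chicken", "Curry Goat"], ["family-run"]))
# Caribbean favourites like Jerk Chicken, Curry Goat. Known for being family-run.
```

Crude, but it runs for fractions of a cent and never hallucinates a topping count.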
In most large tech companies the senior leaders want to run some vanity projects so having a research arm makes that possible. They can screw around without impacting product teams.
For the same reason the shoe industry spends billions sponsoring athletes and sports teams to hawk their gear. It's to build layer upon layer of abstractions to move the conversation away from how the sausage is made, and towards something that could justify their own bloated salaries, like promoting "sporting excellence" or "tech innovation".
Product reviews from real people are useful because they are allowed to say negative things.
Once you bypass the real reviews for a summary, all those useful negative signals get glossed over because the host platform doesn’t want to piss off the restaurants by propagating those negative comments.
They’re investing in the wrong bits overall, not just the wrong bits of AI. No matter how many features they come up with after spending billions of dollars, customers are not any more likely to order food than they already are. The money is better spent reducing their atrocious fees and making sure the restaurant isn’t marking up every single menu item by 25%.
The linked article is not about usage of AI in the product. It's about blowing the budget on AI coding tools, which is a much more interesting topic to discuss given how heavily they are being pushed by some companies.
If AI coding tools were delivering the benefits promised by AI vendors, then Uber would be dropping staff, not the tools themselves.
I stopped ordering after I realized most places were using AI photos and descriptions. It’s worse than the stock images they’d use in the old days. It’s actively lying about what an item is.
> if they had a coherent product vision, and trusted their engineers to use AI how they see fit, then I’m sure they would be more successful
Out of curiosity, what do you think might be a successful application for AI in Uber's business? It seems like this is the sort of thing AI applications end up being. Does it actually get better than this?
You misunderstand. AI cannot fail. It can only _be_ failed. In this case, it was failed by the restaurant industry's lack of actual diversity. _They_ need to do better, not AI.
> According to The Information, Chief Technology Officer Praveen Neppalli Naga said Uber is now "back to the drawing board" after a surge in the use of AI coding tools, particularly Anthropic's Claude Code, has blown past internal expectations.
Of usage costs?
> The payoff is starting to show. Around 11% of Uber's live backend code updates are now written by AI agents, up sharply in just a few months. These systems power everything from ride-matching to pricing and bug fixes.
That's not a payoff.
What is the immediate cost of those code updates, what is the quality, how do they affect longer-term maintenance, how does that compare to doing it without "AI", etc.
Are these articles written to inform or to hype?
> UNLOCKED: 5 NEW TRADES EVERY WEEK. Click now to get top trade ideas daily, plus unlimited access to cutting-edge tools and strategies to gain an edge in the markets.
There's my answer. Here's a helpful uBlock Origin filter:
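(The actual filter didn't survive in this copy of the thread; for anyone unfamiliar with the syntax, uBlock Origin cosmetic filters take the general shape below, where the site and selector are placeholders rather than the commenter's rule:)

```
! placeholder only, not the commenter's actual filter
example.com##.promo-banner
```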
Only 11%!? Slackers. My team's project is 100% coding agent generated as pushed for by our dear leaders. Yes I'm very scared for when it all crashes down and really hope I'm not there when it does.
Yes, yahoo “journalism” is garbage. The primary source of this story is paywalled, so I can’t actually see what it said, but this AI (or otherwise crappy) summary is worthless.
>Despite spending $3.4 billion on research and development, the company has already exhausted its planned AI budget just months into 2026.
This, and the rest of the article, does not seem to support the claim that they spent $3.4B on AI. The text implies that the R&D budget for the entire company is $3.4 billion (which sounds vaguely reasonable given that market cap), and that the portion of it earmarked for AI is already spent. I have no idea what the AI spend is (although I assume it's not small), and the article doesn't provide any number either.
Those are extremely different things (unless there's evidence that 100% of R&D is spent on AI) and that headline seems to be intentionally misleading.
Based on my direct experience of similar budgets, you are exactly correct. AI coding tools don’t incur costs like this. AI costs are dominated by runtime and offline inference as part of business- and customer-facing workloads.
AI isn’t cheap, but what is especially not cheap is trying to get results that exceed ~80% in quality. Developers can tolerate gaps, customers won’t.
I'm coming around to it being like getting a pair of industrial-grade yak clippers. Yes, there will be a lot of shiny yaks, but the market for shiny yaks is small.
I'm a hot dog chef with over 20 years of experience. Credited with inventing 274 hot dog styles. International awards. World-renowned industry figure.
My entire team, very competent hot dog experts, was laid off after a hot dog cooking machine could do what took us 3 months, in just one day. I've been out of a job for 12 months. The reason? All hot dog making has been offloaded to Claudog Hotdog. "Sorry. Hot dog manual cooking is a thing of the past", one recruiter told me.
I'm working as a software engineer as we speak. I keep applying to hot dog related positions but I get no interviews. Even positions significantly below my pay grade and skillset. No one is hiring. Hot dog cooking is over. We are entering a new era.
I'd take these options from several companies (all selling hotdogs) and wrap them up in Collateral Hotdog Obligations which I'd then offer to investors.
The main question is: what is demand elasticity for software?
If it is low, and lower prices won’t generate much new demand, we should expect AI to improve engineering productivity and companies to reduce staff.
If it is high, then we should see companies hire more engineers, increase output and lower prices (and earn more).
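The two branches above can be made concrete with the standard price-elasticity formula, e = (%ΔQ)/(%ΔP), on invented numbers:

```python
def elasticity(q0, q1, p0, p1):
    """Price elasticity of demand: percent change in quantity
    divided by percent change in price. Toy numbers only."""
    return ((q1 - q0) / q0) / ((p1 - p0) / p0)

# Price halves, demand grows only 20% -> |e| < 1 (inelastic):
# the same revenue needs fewer engineer-hours.
print(elasticity(100, 120, 10, 5))   # -0.4

# Price halves, demand triples -> |e| > 1 (elastic):
# worth hiring more engineers and shipping more software.
print(elasticity(100, 300, 10, 5))   # -4.0
```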
Bureaucracy creates work so long as it owns the production function. In software that's typically through system upgrades, new APIs, etc. The system will grow in internal complexity to its carrying capacity. You'd need someone who understands how to replace parts in order to prune, but they don't really have the incentive. This effect is reduced where software is less essential to the product, but any software-heavy product (particularly one with a moat) will be more susceptible.
Companies try to manage it via CI/CD, outsourcing and internal competition, but no, companies can't magically reduce staff. They can, however, inject fear, which is good for reducing overt bureaucratic games, but actually increases covert bureaucracy and reduces knowledge-sharing, making the problem worse.
Only when incentives are aligned - when developers have an (equity) stake in growing the company - can the culture be open and efficient.
Every time the cost of software development has gone down due to higher level tools we've gotten higher software budgets, more software developers, and more released software. Demand appears to be effectively infinite.
oh man uber is acquiring the company I work for [1] and we currently really like Claude ... but if Codex is better so be it. I just really, really, really like Claude Code as a front end. Guess I'll have to make it talk Codex instead.
"My delivery service CEO told me the AI keep eating his tokens so I asked how many tokens he has and he said he just goes to the token shop and gets a new batch of tokens afterwards so I said it sounds like he’s just feeding tokens to the AI and then his laid off workers started crying."
If it is anything like my company: sign enormous deals with AI startups that have existed for 8 months and provide little more than wrappers around someone else's model. Then hire three different firms that do the same thing, because each division has to prove how much more AI it is than the others. Have a handful of internal engineers who have no idea what they are doing, but get approval to build and run an internal B200 server farm. Ensure any big jobs are done through some kind of white-glove offering from Amazon/Azure that removes complexity but charges astronomical rates.
This makes it sound like they spent $3.4B on tooling, but is it actually on salaries? Hardware?
Probably 5k-6k hires in the department, at say $350k/employee costs, is $2.1B which still leaves a ton of extra costs somewhere. Are they sending $1B to Anthropic?
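A back-of-envelope check of those (assumed, not official) numbers:

```python
# All figures are the parent comment's guesses, not Uber's actual accounts.
headcount = 6_000
fully_loaded_cost = 350_000            # per employee, per year
payroll = headcount * fully_loaded_cost
print(payroll)                          # 2100000000, i.e. $2.1B
print(3_400_000_000 - payroll)          # 1300000000 of R&D left unexplained
```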
Hi everyone. We are going to rate your performance directly on how much money you cost the company to do your job. The more money you spend doing your job, the better your performance review.
Large companies have been correlating token spend with performance and incentivizing it accordingly, thus creating needless token spend for now. Goodhart's Law and all that.
I think the article is very deceptive in its framing. They spent 9% more, not $3.4B more. That’s roughly $300m more, which is frankly not that much. Firms spent way more than that on cloud adoption, or before that web adoption, in prior cycles.
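For reference, the arithmetic behind that estimate, assuming the $3.4B figure is the post-increase total:

```python
# If $3.4B is 9% above the prior year, the prior-year figure and the
# absolute increase work out to:
current = 3.4e9
prior = current / 1.09
print(round(prior / 1e9, 2))            # 3.12  (prior-year R&D, in $B)
print(round((current - prior) / 1e6))   # 281   (increase, in $M -- the "~$300m")
```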
This article is very unclear. It says that "Despite spending $3.4 billion on research and development, the company has already exhausted its planned AI budget just months into 2026" - does this mean the budget for AI coding tools is $3.4B? Or the budget on R&D (product development) at Uber?
Then later, "Uber's R&D expenses rose 9% to $3.4 billion in 2025, and the company expects that figure to keep climbing—suggesting AI may be as much a cost driver as a productivity lever" - so what's the 2026 budget? (From a quick googling Uber operates on a December 31-ending financial year).
The article says "Chief Technology Officer Praveen Neppalli Naga said Uber is now "back to the drawing board" after a surge in the use of AI coding tools". But then it says "The financial pressure is already building. Uber's R&D expenses rose 9% to $3.4 billion in 2025, and the company expects that figure to keep climbing". So are they "back to the drawing board" (pulling back on the tooling), or plowing on and continuing to grow the costs?
The article goes on to say "The payoff is starting to show. Around 11% of Uber's live backend code updates are now written by AI agents, up sharply in just a few months. These systems power everything from ride-matching to pricing and bug fixes.".
So what am I to draw from this? What actually was the budget? And was it blown? And if so, what is the consequence? This is just such a bizarre piece.
Holy misleading headlines Batman. They're not spending $3.4B on solely tokens for Anthropic are they? I don't think so...
If anything the CTO is just saying, we're blowing through token budgets way faster than expected as the uptake is so immense. I think that's right from what I've seen. Once people get it, they start using AI for everything. Obviously that's not going to be sustainable forever. I do think we're going to see a lot of adaptive routing in the future to cheaper models for more mundane tasks, whereas right now everything is getting routed to Opus regardless of real need.
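A toy sketch of that adaptive-routing idea, with hypothetical model names and a deliberately crude complexity heuristic (a real router would score tasks far more carefully):

```python
CHEAP, EXPENSIVE = "small-fast-model", "opus-class-model"   # made-up names

def route(task: str) -> str:
    """Send obviously hard work to the expensive model; default to cheap."""
    hard_markers = ("refactor", "architecture", "debug", "migrate")
    if any(marker in task.lower() for marker in hard_markers):
        return EXPENSIVE
    return CHEAP

print(route("rename a variable"))             # small-fast-model
print(route("Refactor the pricing service"))  # opus-class-model
```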
Why would that not be sustainable forever? Over the long run the price per token is likely to decline as both hardware and software gets more efficient.
Look, we're using Uber Eats to order food for "free food Tuesdays" in our office.
I'm struggling to not puke using their interface, and a couple of times I gave up ordering even though it was free.
Every click can take 2-5 seconds to be processed, without any indication. Menus glitch. I once got 2 copies of my order because I rage-clicked the "Finish" button several times.
So you're trying to do high-end AI when you can't make a basic fucking form-based webapp work?!? What do you expect?
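Incidentally, the double order from rage-clicking is a textbook missing-idempotency bug. One common fix (sketched with illustrative names, not Uber's actual API) is a client-generated idempotency key that the server dedupes on, so retries and repeat clicks map to the same order:

```python
import uuid

_orders_by_key: dict[str, str] = {}   # hypothetical server-side store

def submit_order(idempotency_key: str, cart: list) -> str:
    """Create an order, or return the existing one for a repeated key."""
    if idempotency_key in _orders_by_key:      # duplicate click or retry
        return _orders_by_key[idempotency_key]
    order_id = str(uuid.uuid4())
    _orders_by_key[idempotency_key] = order_id
    return order_id

key = str(uuid.uuid4())                    # generated once per checkout
first = submit_order(key, ["burger"])
second = submit_order(key, ["burger"])     # rage-click with the same key
print(first == second)                     # True: only one order created
```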
Does order of toppings matter?
They didn't. $3.4B was their total R&D cost. Don't blame AI for your human hallucination.
If we're using AI and we're still getting gobbledygook nothingburger marketing word soup, then what are we doing here?
No, not everything IS rich and authentic. And no, it's not awaiting me!
One would hope Uber could manage one-sentence AI summaries (regardless of their quality) for less than $3.4 billion.
Token maxxing? It might explain high costs if you are actively encouraging developers to spend as many tokens as possible.
(Other inputs from days of yore: number of people that report to you, budget allocation to your team. Nothing new under the sun!)
So what can you do?
Buy as many hot dogs as you can. Buy stock in hot dog companies.
Well, there’s your problem. Get back to making the hotdogs!
NP-hard
[1] it's public knowledge https://investor.uber.com/news-events/news/press-release-det...
Weird and uninformative article.
6 months later
Hi everyone. We are over budget.
It's a poorly written junk article upvoted based on Uber/Anthropomorphic sentiment. I recommend flagging it.