At first glance it's like Hollywood movies announcing they're the best-selling of all time while ignoring inflation. In other words, a ratchet just to get clicks.
However, this is relevant because this is an investor report meant to help people forecast, and this stat helps calibrate readers' expectations of just how fast a product can scale in this day and age, using a comparison of products in the same category that offered the same step change in value when they launched.
My gripe is not with the relevance of the data; it's with the chosen comparison. Comparing Google at the beginning of the internet revolution to now, with billions of internet-enabled devices across the world, is not a fair comparison and does not give any meaningful insight.
But that's precisely the point, and it does give insight. Google scaled off of existing infrastructure like computers. Computers scaled off of existing infrastructure like electricity.
The point is to compare the current era of scaling to the previous era and see how much faster it is.
It's not comparing Google to OpenAI. It's comparing the environment that produced Google to the environment that produced OpenAI.
It’s kind of obvious that new eras will produce faster scaling. But what if you ran the numbers and it wasn’t true?
There are plenty of times when the obvious turns out to be something different. This isn't one of those times, but that's the point of research: to back up common sense with evidence.
Also, it is very different to know that it is faster versus knowing it is 5.5x faster. The 5.5x might not be completely accurate, but it's more in-depth than your intuition alone.
There is wisdom in simple, profound statements that open up new lines of thought. But there is also wisdom in doing research to quantify and make concrete the things you already know.
One example of research being wisdom is demographics. It's one thing to know that there are more whites than blacks in the US; it's another to know that there are 200m whites and 40m blacks. The numbers add precision and also validate or clarify your thinking. For instance, maybe you thought blacks should be the second-largest demographic since they have been here longest. Not so: Hispanics are at around 60m. Or maybe you knew that already. But if you want to argue with others about demographic growth and what is actually happening in immigration, knowing the numbers is wisdom, while going off of intuition leads to "they took my job" hot takes.
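For what it's worth, a multiple like 5.5x typically falls out of comparing time-to-milestone figures. A toy sketch with made-up month counts (the report's actual inputs aren't reproduced here):

```python
# Toy illustration of how a "5.5x faster" multiple can be derived:
# compare how long two products took to reach the same user milestone.
# These month counts are made up for the example, not the report's data.
incumbent_months = 132  # hypothetical: older product's time to the milestone
newcomer_months = 24    # hypothetical: newer product's time to the same milestone

speedup = incumbent_months / newcomer_months
print(f"{speedup:.1f}x faster to the same milestone")  # 5.5x faster
```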
If you continue reading, they compare ChatGPT with more companies than just Google. TikTok and Fortnite are also included, for example; both came much later, so I'm guessing you'll find those a bit fairer as comparisons.
It's not a contest that needs to be adjudicated fairly; it's a report on the state of today, looking forward. So yes, the gazillion phones are definitely relevant.
Yes? With internet access more prevalent than ever, it is expected that new product categories will have faster adoption. This demonstrates how much faster, using ChatGPT and Google as proxies for their respective product categories.
I also wonder how much of ChatGPT's usage is basically cheating on homework. Nearly 100% of the ChatGPT users I know use it mostly to do their homework for them.
Maybe this will turn out to be a valuable user segment, but I'm not sure.
Yes. The question isn't which team is better. The question is how fast they can grow. Similar companies in Pakistan and America will grow disparately. Same for companies on the Internet in 1998 versus, practically, 2022. That isn't fair. But nobody cares about fair; we're measuring what's true. It's fair, from those data, to conclude that the latter should beat the former's record, whether separated by space or time.
(You correctly conclude, from that slide, that Google grew in a less favourable environment than OpenAI et al. You just need to take it one step further into the potential rate for growth and disruption today versus in the past. Put another way, Google could be disrupted quicker than it could disrupt.)
This may be my single biggest pet peeve. I think of this problem more broadly every time I see some new movie is the highest grossing movie of all time. No shit, Sherlock. More people, more screens, more movie theaters, inflation... the record is always going to be broken.
I hate this so much I actually ran the numbers and saw that per capita box office revenues have remained generally stable since the 1980s.
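For anyone who wants to replicate that kind of check, the adjustment is just deflating by CPI and dividing by population. A minimal sketch with illustrative (not exact) figures:

```python
# Back-of-envelope version of the "per capita, inflation-adjusted" check
# described above. The figures below are illustrative placeholders, not
# the actual box office data.

# (year, nominal US box office in $B, US population in millions, CPI index)
rows = [
    (1985, 3.7, 238, 107.6),
    (2000, 7.7, 282, 172.2),
    (2019, 11.4, 328, 255.7),
]

BASE_CPI = 255.7  # deflate everything to 2019 dollars

for year, gross_b, pop_m, cpi in rows:
    real_gross_b = gross_b * BASE_CPI / cpi          # inflation-adjusted gross
    per_capita = real_gross_b * 1e9 / (pop_m * 1e6)  # real dollars per person
    print(f"{year}: ${per_capita:.2f} per person (2019 dollars)")
```

With numbers of roughly this shape, the per-person figure barely moves across decades even as the nominal gross triples, which is the point the parent is making.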
I saw another one recently that said something like "ChatGPT has 350 million unique visits per month; if it were a country, it would be the 3rd largest in the world." Or something along those lines.
Observations and personal opinions, after glancing through the slides:
* The entertainment industry will be transformed drastically. Music and movies will be transformed by AI to such an extent that the next generation will find it hard to believe how the industry operated.
* The moonshot will be biological research and research in general. When a breakthrough happens, it will transform our health for the better in astonishing ways.
* In terms of direct adoption, the urban-rural divide is vast.
* Less democratic countries will have an advantage over the democratic countries in terms of fast execution, unless the latter manage to integrate private-public operations effectively.
* I've liked Mary Meeker's reports since the heyday of TechCrunch. This report has a lot of details that I did not know. Nevertheless, I didn't see a single point that stood out.
> Less democratic countries will have an advantage over the democratic countries in terms of fast execution, unless the latter manage to integrate private-public operations effectively.
This argument is a classic. But on what time frame is it supposed to be operative? Is it backed by any empirical data to test the claim?
Less involvement of edge nodes in the decision process also encourages "don't give a shit about suggesting improvements" and "utter whatever lie the system expects, to avoid accusations of dissidence".
There are actual pragmatic benefits to democratic systems; it's not only idealistic motivations that argue in their favor.
They will continue in the American West/Southwest, which has decent, predictable road infrastructure, weather, and driving behavior... not a lot of pedestrians or snow in Phoenix (though some pedestrians in SF).
It will be a while before Philly or Boston gets Waymo.
Mary's reports are always interesting reads. That said, the Internet (and now AI) is a significantly less mysterious area than it used to be so I think there are, as a consequence, fewer surprises lurking than there used to be.
There are exceptionally few less democratic countries functional enough to take great advantage of AI's potential, much less to execute in some sort of super-fast manner compared to the democratic nations. You can count those less democratic nations on one hand.
It's overwhelmingly the case that affluence and national wealth go hand in hand with greater democracy; there is a tight correlation (with exceptions, of course). All you need to do is look at the top ~50 nations in terms of GDP per capita or median wealth per adult, then look at the bottom 50.
Less democratic nations will be left even further behind as the richer democratic nations race ahead, as they have been doing for most of the post-WW2 era. The richer democratic nations will have the resources to make the required enormous investments. The more malevolent less democratic nations will of course use good-enough AI to do malicious things; not much about that will change. Their power position won't fundamentally change, however.
>> The entertainment industry will be transformed drastically. Music and movies will be transformed by AI to such an extent that the next generation will find it hard to believe how the industry operated.
As someone who was very worried about how this would impact artistic output, I've started to change my mind. It seems younger people are extremely sensitive to AI content and happy to call out anything that could conceivably be AI-generated as slop. People want real art created by real people with real skill. They want to be able to connect with the art, and with the artists in person. The muzak industry is in trouble, but I no longer think music in general will be replaced by AI. We'll see improvements to software instruments, plugins, etc., but AI improving the tools is a different prospect from fully AI-generated music.
I disagree. In my experience the best art is almost always created, or at least masterminded, by a single person or a small team. AI will allow these visionaries to create high-fidelity content without engaging with the Hollywood machine. I view it as an incredible democratization of art that will allow the best ideas to rise, in conjunction with apps like TikTok that can quickly find the most entertaining content posted by anyone and share it widely.
In music, I think it's similar to the claim that people using samples aren't real musicians. It's a take many will have, but many others will have no problem enjoying stuff created with the help of AI.
Why is it lol? I'd make a comment about "kids these days .." but even more than that, I'm amazed by the lack of curiosity or even the shamelessness in making this comment and expecting others to agree with you.
Am I in the minority that finds these slides hard to parse?
One stylistic quirk is its liberal use of `=` and `+` for things that aren't equivalence or summation, which keeps throwing me off.
Does she ever do a presentation of these, talking through the slides with commentary? If there's a recording of that, I'd 100% clear half a day's worth of meetings to watch it.
At some point I eyeballed a comparison to WhatsApp/Instagram/Snapchat growth, and IIRC although it was within the same order of magnitude, it still didn’t reach the rate of growth of those hypergrowth social apps.
I am genuinely curious if lay people will pay for AI. I know people who spend literally hours daily on YouTube and complain about the ads and don’t want to pay the quarter a day to get rid of em.
Will these people pay $50 a month for GPT? We will see.
ChatGPT already has over 20 million paying subscribers, even in its early state.
But asking if average people will pay for AI is the wrong question. It's like asking if average people will pay for Salesforce or Oracle. Even if consumers don't pay for AI en masse, their employers will as the value proposition is a lot clearer.
OpenAI is not at all profitable right now, which is their plan. They have from the start been focusing on hypergrowth to become one of the very biggest players. The monetization phase comes later, probably still a few years from now. Of course, it remains to be seen whether they can actually capture a significant amount of the value when the time comes. There are many other strong players: Google and Microsoft have massive advantages with respect to distribution and data collection, and Meta as well. And they have existing profitable businesses, so they can afford to give away "AI" for a long time. The cost to serve users right now is quite high; that will likely go down by 10x over the next 10 years (assuming energy prices don't go haywire).
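Taking the parent's 10x-over-10-years assumption at face value, the implied steady annual decline is easy to back out:

```python
# If cost-to-serve falls by 10x over 10 years at a constant rate, each year
# costs shrink by the tenth root of 10, i.e. roughly 21% per year.
total_factor = 10
years = 10

annual_factor = total_factor ** (1 / years)  # ~1.259x cheaper each year
annual_decline = 1 - 1 / annual_factor       # ~0.206, i.e. ~21% cost drop/year

print(f"~{annual_factor:.2f}x cheaper per year, a ~{annual_decline:.0%} annual decline")
```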
We really don't know, in part because they're cooking all kinds of costs into R&D. My hunch is we'll recapitulate the dark fibre of the 1990s, with OpenAI et al burning capital to develop models others profit off. But that's predicated on the assumption that large models can be effectively distilled into cheap-to-run small ones. If that doesn't prove right, or the frontier keeps advancing for decades, the operating model of a high-R&D industry could resemble Intel (and semiconductors broadly) instead.
For knowledge workers in Western companies, 20 USD/month for a ChatGPT-style tool seems likely worth it for employers to front.
I think over time we will see this included in "standard office tools" like email/messaging/videoconf/documents. Which would make Microsoft and Google very well positioned.
But probably everyone will adopt it, so it may not give a competitive advantage to any given company.
The enabling technologies of the World Wide Web economic explosion were the web server and the browser. Both are now broadly available for free. The money is made in the applications enabled by them.
I think it will be similar in AI. The AI will be free or cheap; the money will be made in its application. I won’t be surprised if my kid looks back on Anthropic and OpenAI the way we look back on Sun and Netscape.
It amazes me people still trot out the same "where are the immediate profits?" line after being wrong, wrong, and wronger about Uber, Facebook, Amazon, and Google, all of which became absurdly profitable in the long term. Every one of those had a tremendous number of naysayers poo-pooing the company's strategy of not immediately pursuing profitability. They were wrong in every case then, and they're wrong now.
I'd like to hear more discussion of AI being applied in ways that are "good enough". So much focus is on it having to be 100% or it sucks. There are use cases where it provides a lot of value and doesn't have to be perfect to replace tasks done by an imperfect employee who sometimes misses details too. Audit the output with a human, like Taco Bell (Yum) is doing with AI drive-through orders. Are most of the day-to-day questions a person asks so critical in nature that hallucinations cause any more issues than bad advice from a person, an inaccurate Wikipedia or news article, or mishearing? Tolerance of correctness proportional to the importance of the task, I guess. I wouldn't publish government health policy citing hallucinated research or devise tariff algorithms with it, but I'm cool with my generative pumpkin bars recipe accidentally having a tbsp/tsp error I'd notice while making them.
I think we see this a lot with software development AI; the tab complete only has to be “good enough” to be worth tweaking. Often “good enough” first pass from the AI is a few motions on the keyboard away from shippable.
Now with headless agents (like CheepCode[0], the one I built) that connect directly to the same task management apps that we do as human programmers, you can get “good enough” PRs out of a single Linear ticket with no need to touch an IDE. For copy changes and other easy-to-verify tweaks this saves developers a lot of overhead checking out branches, making PRs, etc so they can stay focused on the more interesting/valuable work. At $1/task a “good enough” result is well worth it compared to the cost of human time.
> I'd like to hear more discussion of AI being applied in ways that are “good enough”
Search. AI works about as well as a clueless first-year analyst. You need to double check its work. But there is still value added in its compiling together sources and providing a reasonably-accurate summary of, at the very least, some of their arguments.
Also basic web development. Most restaurateurs I know no longer deem it necessary to hire a web developer. (Happy side effect: more PDF menus instead of over-engineered nonsense.)
I think biochemistry might be the biggest beneficiary of "good enough" AI. Much of the expensive and slow work of discovery (like creating a new drug or making sense of some crazy complicated protein structure) is already being dramatically sped up, and while I'm on the conservative side of estimates, it is close to a mathematical certainty that we will see groundbreaking new drugs and discoveries from the biochemistry field, not in the distant future but within a few years.
For the other stuff like automating white collar jobs, good enough might not suffice due to the intricate dependencies and implicit contracts formed naturally out of human groups.
Creative jobs will be the most impacted by "good enough", depending on the number of features. For 2D art it's almost certainly over (unless you add a text feature to it, like manga). You can see it with each added feature: general photography first, then stock photography, and now product photography have been made redundant overnight. For example, the latest Flux image editor negates the need to hire a photo editor, photographer, camera equipment, lighting, or a product artist. Veo3 is not quite there, but it handles speech in video generation in a way other models did not and is getting closer to replacing videographers. I think 3D modeling is the next frontier following this trend, but it is still quite difficult, as it involves mesh generation/texture/rigging/animation/physics that must also come with shaders and interaction with other 3D models.
Software engineering falls somewhat in the creative field but also shares the complexity of white-collar jobs, for the same reason that will prevent it from being completely automatable with "good enough".
The hallucination issue is less of an issue and an old trope. The truly challenging enemies of "good enough" AI are "not enough context" and "poor context compression and recall". The problems I listed for white-collar and software engineering jobs are context problems. Context compression cannot be stable while the former is unsolved, and then fast, efficient recall of context cannot take place on top of poor compression, and so on.
This is just my observation of how things are progressing. I do feel we will see something different from LLMs altogether that could solve some of the context issues, but a major misalignment of incentives is what I think would prevent an AGI know-all-see-all type of deal. For example, you might not have any incentive to share all the essential context with the AI, because you might become irrelevant and want it to stay in the dark. Or you might have a union or some social organization legislating a monopoly for human knowledge/skill workers in a field.
But perhaps THE most difficult problem, even after we solve the context problem, is the inability of the god-AGI to be awake or conscious, which is absolutely critical in many real-world applications.
I like to focus more on the very near-term impact of what AI is currently doing in the labs, and its impact on humans, than on worrying about who will address all of the other problems and when.
Whether we get a UBI-first socialist world order or a continuation of technological feudalism with the poors still using GPTs while the rich sell the energy and chips (software would almost be worthless on its own by then) is the least of my concern.
I'm an optimist and I'm very excited for the very-near and immediate impact of our currently available AI tools doing the "good enough" in very positive ways.
I think some more white-collar jobs might be affected, not just creative ones. There is a substantial number of jobs where the end result needs to be of a certain quality, but all context can be inferred or provided up front, and checking and correcting a result is quicker than producing it manually. Think e.g. law or translation. Translators, proofreaders, and others are already feeling the squeeze.
In other cases, like software development, there is a split between tasks of a narrow scope and those of a wide scope. Creating one-shot pieces of software is kind of a solved issue now. Maintaining some relatively self-contained piece of software might soon turn into a task for single maintainers that review AI PRs. The more the bottleneck is context tracking, as opposed to producing code, the less useful the AI. I am uncertain, however, how the millions of devs in the world are distributed on this continuum.
I am also skeptical about legal protections or unionization, as many of these jobs are quite suited to international competition.
Based only on the headings and titles of the charts on the first few pages? I smell an AI writing. I mean, do these titles sound like an intelligent human wrote them?
"Charts paint thousands of words..."
"Leading USA-Based LLM User" -- what does that even mean? With a value of "800MM" where MM is what units?
"AI Usage + Cost + Loss Growth = Unprecedented" -- how can you add three things that don't appear to be commensurable? Also, what is "Loss Growth" and how does it add to "Usage"?
There are plenty more examples in the charts later. The Overview section, while not so obviously AI-written, has an over-the-top enthusiasm and loose structure that makes me squeamish. I don't trust it, but that's just me I guess.
Super helpful assembly of lots of data on point - a bit too much to digest quickly. Mary Meeker is great.
Some limitations:
- Unhelpful modernity-scale trend/hype-lines. Everyone knows prospects are big and real.
- No significant coverage of robotics, factory automation? (TAM for physical products is 15X search+streaming+SaaS)
- No insight? No new categories, surprising predictions, critical technologies identified?
Surprises:
- AI productivity improvements are marginal, esp. relative to the concern over jobs
- US ratio of top public companies by market cap increased from ~50% in 1995 to ~85% in 2025. Seems big; or is it an artifact of demographics of retirement investments? Or is it less significant due to growing private capital markets?
What I would like addressed: The AI means of production seem very capital-intensive, even as the marginal cost of consumption is SaaS-scalable (i.e., big producers, small consumers). I have some concern that AI development directions are decided in relatively few companies (which are biased toward SaaS over manufacturing, where consumers are closer to producers in size). This increases the likelihood of a generational whiff (a mistake I suspect China won't make).
As an aside, I wish Elon Musk would pivot xAI out of SaaS AI (and science AI), focusing exclusively on manufacturing robotics -- dogfooding at Tesla, SpaceX, and even Boring -- with the simpler autonomy of controlled environments but the hard problem of not custom-building everything every time. They're well positioned, and he could learn some discipline from working with downstream and upstream partners on par (instead of slavish employees, fan investors, and dull consumers or slow governments as customers). He'd redeem himself as a builder of stuff that builds, so we can make infrastructure for generations to come.
Corporates will always talk about the current in-vogue thing.
They talked about blockchain.
They talked about crypto.
They talked about anything that, left unmentioned, would make investors feel the board was behind the times.
> 72 Enterprise AI Adoption = Rising Priority…Bank of America – Erica Virtual Assistant (6/18)
Ok, fine. People are using it. Two important questions: did the users have other choices, over which they chose this? And did they feel happier than with other methods?
Without this data, this is just feeding the hype train.
Above is the press release. AI comes up 3 times, including in the title. The release talks about an AI-driven platform to streamline operations for franchisees. Ok, fine. But how exactly does AI enhance operations at the franchisee level? Especially if all they are providing as part of the SaaS is "Backed by artificial intelligence, Byte by Yum! offers franchisees leading technology capabilities with advantaged economics made possible by the scale of Yum!"
I mean, sure, if at their corporate office, they are using data analysis to predict demand and allocate resources or raw materials to improve profitability, ok. But that can be done quite effectively with statistical analysis.
Where exactly AI comes into the picture is unclear.
Yet the slides show this as some sort of monumental achievement, especially highlighting "25,000 restaurants are using at least 1 product". Sure, they are going to use it if that is what the franchise owner provides them, probably at additional cost.
Yum! Brands has 61,000 restaurants per their website. It looks like they rolled out a new solution and about a third of locations have adopted the new platform so far; others may be in line to do the same. Is this related to AI, or to regular software changes/updates/revamps?
This strikes so close to the infamous Dropbox comment [1].
The common interactions restaurants have with technology is in bookings and online menu presentation. The latter historically required hiring a lightweight web dev. That’s now irrelevant.
The person you're responding to is a pretty prolific poster here. Often about financial matters. And well before good text models would have been useful.
If even the hyperscalers like OpenAI aren't making a profit, when exactly do companies adopting AI start making money? I wonder if we'll see the hyperscalers start raising prices and squeezing customers when investors finally start expecting a return, and if that will put the brakes on adoption as a result.
It depends a lot on investor expectations. If they think the opportunity for growth and market expansion is coming to an end, they will push for higher returns.
The graph on page 6 makes it look like Waymo has overtaken "Ride Share" in general—but the chart is based on the data from page 302, which only compares Waymo and Lyft.
Waymo is clearly growing fast, but it's not bigger than Uber—yet. Super impressive that it's already surpassed Lyft. From personal experience, it also seems to have driven Uber prices down materially.
The Indian School of Business, one of the highest-ranked business schools in India, is offering leadership courses with a touch of AI for as little as 10 lakh rupees (about 11,000 USD). For a majority of IT folk in India, that is a very reasonable amount for a course from a very reputed institute.
Many IT folk here are scared to the core about their jobs and there has been a mass movement towards AI certificates. While the courses teach the basics, none of them are at a university level. Most of the students are only users of tools though.
Would you call them AI developers? Meh. They get by. Most of the work in India is back-office work anyway, and these AI engineers end up doing data related tasks mostly.
Very few are actually building worthwhile AI stuff.
"Share of total current users/years in" on slide 3 seems like a nonsense metric to me, far from interesting or relevant. Yes, we know there was hype growth for LLMs while Google has a long history of growth. This chart gives no useful info and pretends to reveal something while just being a repeat of "USA-based LLM users" on the same page.
Looks like yet another "numbers going up" report; I'm not going to spend time reading 300+ slides like that.
Maybe I'm being unfair, but I don't have time to read every single report, and I'm only willing to read exceptional ones with no (or very limited) marketing slop.
I don't think AI is a sham, but I would like to see more effort to quantify AI's improvements to productivity and practical applications - rather than countless charts detailing the increasing watts and flops and GPUs and dollars being dedicated to AI. I have zero doubt that companies are spending a lot of resources on AI. It's difficult to exist in society and not recognize that.
There were a few practical use cases mentioned here, but the only quantified impact to worker productivity from LLMs that I noticed was a single study of call center agents from 2023 that showed a 14% increase in cases handled per hour.
340 slides (not even dense text!) is not much if the content is worth reading.
(Yesterday evening I read about 400 pages of text as entertainment; that is not much. Though admittedly, this specific one failed as entertainment and was not content worth reading.)
Is this really relevant? Google was formed when there weren't a gazillion phones doing searches a million times a day.
ChatGPT was formed recently, when every stratum of society and every country in the world has double-digit internet penetration.
Also, quantity has a quality of its own.
Practices learned during school do enter the workforce. Messaging during class turned into messaging during meetings.
So, if 100% try using it to do their work toil for them...
Meeker is illustrating a material change in the environment.
Instead, we have this monstrosity of metrics that make no sense.
Data scientists bring out relevant, thought-provoking metrics. They work for the people who work for the people who are the target audience here.
With all due respect, I’ve seen more corporate drivel and slide-show sugar out of data scientists than VCs. (Largely as a product of attention span.)
I saw another one recently that said something like "ChatGPT has 350 millions unique visits per month, if it were a country, it would be the 3rd largest in the world"
Or something along those lines.
* The entertainment industry will be transformed drastically. Music and movies will be transformed by AI to such an extent that the next generation will find it hard to believe how the industry operated.
* The moonshot will be biological research and research in general. When a breakthrough happens, it will transform our health for the better in astonishing ways.
* In terms of direct adoption, the urban-rural divide is vast.
* Less democratic countries will have an advantage over the democratic countries in terms of fast execution, unless the latter manage to integrate private-public operations effectively.
* I've liked Mary Meeker's reports since the heyday of TechCrunch. This report has a lot of details I did not know. Nevertheless, I didn't see a single point that stood out.
This argument is a classic. But on what time frame is it supposed to operate? Is it backed by any empirical data to test the claim?
Less involvement of edge nodes in the decision process also encourages attitudes like "why bother suggesting improvements" and "utter whatever lie the system expects to avoid accusations of dissidence."
There are actual pragmatic benefits to democratic systems; it's not only idealistic motivation that argues in their favor.
Having never ridden in a Waymo, I’m curious if anyone sees a reason those trends won’t continue in SF or replicate elsewhere.
It will be a while before Philly or Boston gets Waymo.
It's overwhelmingly the case that affluence and national wealth go hand in hand with greater democracy; there is a tight correlation (though of course there are exceptions). All you need to do is look at the top ~50 nations in terms of GDP per capita or median wealth per adult, then look at the bottom 50.
Less democratic nations will be left even further behind, as the richer democratic nations race ahead as they have been doing for most of the post WW2 era. The richer democratic nations will have the resources to make the required enormous investments. The more malevolent less democratic nations will of course make use of good-enough AI to do malicious things, not much about that will change. Their power position won't fundamentally change however.
As someone who was very worried about how this would impact artistic output I've started to change my mind. It seems younger people are extremely sensitive to AI content and happy to call out anything that could conceivably be generated by AI as slop. People want real art created by real people with real skill. They want to be able to connect with their art and connect with them in person. The muzak industry is in trouble but I no longer think music in general will be replaced by AI. We'll see improvements to software instruments, plugins etc but AI improving the tools is a different prospect than fully AI generated music.
In music I think it's similar to the claim that people using samples aren't real musicians. It's a take many will have, but many others will have no problem enjoying stuff created with the help of AI.
One stylistic quirk is its liberal use of `=` and `+` for things that aren't equivalence or summation, which keeps throwing me off.
Does she ever do a presentation of these, talking through and commenting on the slides? If there's a recording of that, I'd 100% clear half a day's worth of meetings to watch it.
Her use is not the math sense, it's the conceptual sense. In AI space, a common example is:
Or think of it as alchemy:

This is also interesting to see. ChatGPT currently has about 20 million paying subscribers and 400 million weekly active users.
Did 100 million people use ChatGPT within a few months of launch? Sure, but that was because of massive hype and word-of-mouth viral moments.
Is it comparable to Internet usage, especially on a "years since launch" x-axis? Definitely not.
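For what it's worth, the chart's construction itself is easy to reproduce: shift each adoption series so year zero is its launch year, then read off time-to-milestone. A minimal sketch — the user counts below are rough public estimates, not the report's data, and whether this alignment is meaningful is exactly what's in dispute here:

```python
# Sketch: align adoption milestones on "years since launch" rather than
# calendar years. Figures are rough public estimates, for illustration only.
milestones = {
    # product/technology: (launch_year, {calendar_year: estimated users})
    "ChatGPT": (2022, {2022: 1e6, 2023: 100e6}),
    "Internet": (1991, {1995: 16e6, 2000: 361e6}),
}

def years_to(name, target_users):
    """Years from launch until the series first reaches target_users."""
    launch, series = milestones[name]
    for year in sorted(series):
        if series[year] >= target_users:
            return year - launch
    return None  # milestone not reached within the recorded series

for name in milestones:
    print(f"{name}: ~{years_to(name, 100e6)} years to 100M users")
```

The alignment normalizes for calendar era, but not for the installed base of devices at launch — which is the objection being raised.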
> 59 AI User Adoption (ChatGPT as Proxy) = Materially Faster + Cheaper vs. Other Foundational Technology Products
Another pointless comparison.
I am genuinely curious if lay people will pay for AI. I know people who spend literally hours daily on YouTube and complain about the ads and don’t want to pay the quarter a day to get rid of em.
Will these people pay $50 a month for gpt? We will see.
But asking if average people will pay for AI is the wrong question. It's like asking if average people will pay for Salesforce or Oracle. Even if consumers don't pay for AI en masse, their employers will as the value proposition is a lot clearer.
We really don't know, in part because they're cooking all kinds of costs into R&D. My hunch is we'll recapitulate the dark fibre of the 1990s, with OpenAI et al burning capital to develop models others profit off. But that's predicated on the assumption that large models can be effectively distilled into cheap-to-run small ones. If that doesn't prove right, or the frontier keeps advancing for decades, the operating model of a high-R&D industry could resemble Intel (and semiconductors broadly) instead.
But probably everyone will adopt it - so it may not give a competitive advantage to any given company.
Also, access to market leading AI is not going to cost $20/mo when everything is said and done.
[1] https://blogs.worldbank.org/en/developmenttalk/half-global-p...
I think it will be similar in AI. The AI will be free or cheap; the money will be made in its application. I won’t be surprised if my kid looks back on Anthropic and OpenAI the way we look back on Sun and Netscape.
Just like paying for a smartphone or data or broadband.
Now with headless agents (like CheepCode[0], the one I built) that connect directly to the same task management apps that we do as human programmers, you can get “good enough” PRs out of a single Linear ticket with no need to touch an IDE. For copy changes and other easy-to-verify tweaks this saves developers a lot of overhead checking out branches, making PRs, etc so they can stay focused on the more interesting/valuable work. At $1/task a “good enough” result is well worth it compared to the cost of human time.
[0] https://cheepcode.com
Search. AI works about as well as a clueless first-year analyst. You need to double check its work. But there is still value added in its compiling together sources and providing a reasonably-accurate summary of, at the very least, some of their arguments.
Also basic web development. Most restaurateurs I know no longer deem it necessary to hire a web developer. (Happy side effect: more PDF menus instead of over-engineered nonsense.)
For the other stuff like automating white collar jobs, good enough might not suffice due to the intricate dependencies and implicit contracts formed naturally out of human groups.
Creative jobs will be the most impacted by "good enough," depending on the number of features. For 2D art it's almost certainly over (unless text is part of the work, as in manga). You can see that with each added feature another niche falls: general photography first, then stock photography, and now product photography, made redundant overnight. For example, the latest Flux image editor negates the need to hire a photo editor, photographer, camera equipment, lighting, or product artist. Veo 3 isn't quite there, but it handles speech in video generation in a way other models didn't, getting closer to replacing videographers. I think 3D models are the next frontier, following the trend, but they're still quite difficult: they involve mesh generation, texturing, rigging, animation, and physics, and must also come with shaders and interaction with other 3D models.
Software engineering falls somewhat within the creative field but also shares the complexity of white-collar jobs, for the same reasons that will prevent it from being completely automatable with "good enough."
The hallucination issue is less of an issue and an old trope. The truly challenging enemies of "good enough" AI are "not enough context" and "poor context compression and recall." The problems I listed for white-collar and software engineering jobs are context problems. Context compression cannot be stable while the context problem remains unsolved, and fast, efficient recall of context then cannot take place because of the poor compression, and so on.
This is just my observation of how things are progressing. I do feel we will see something different from LLMs altogether that could solve some of the context issues, but a major misalignment of incentives is what I think would prevent an AGI know-all-see-all type of deal. For example, you might have no incentive to share all the essential context with the AI, because you might become irrelevant, so you want it to stay in the dark. Or you might have a union or some social organization legislating a monopoly for human knowledge and skilled workers in a field.
But perhaps THE most difficult problem, even after we solve the context problem, is the inability of the God AGI to be awake or conscious, which is absolutely critical in many real-world applications.
I like to focus more on the very near impact of what AI is currently doing in the labs and its impact on humans than worrying about who and when all of the other problems are going to be addressed.
Whether we get a UBI-first socialist world order or a continuation of technological feudalism with the poors still using GPTs while the rich sell the energy and chips (software would almost be worthless on its own by then) is the least of my concern.
I'm an optimist and I'm very excited for the very-near and immediate impact of our currently available AI tools doing the "good enough" in very positive ways.
In other cases, like software development, there is a split between tasks of a narrow scope and those of a wide scope. Creating one-shot pieces of software is kind of a solved issue now. Maintaining some relatively self-contained piece of software might soon turn into a task for single maintainers that review AI PRs. The more the bottleneck is context tracking, as opposed to producing code, the less useful the AI. I am uncertain, however, how the millions of devs in the world are distributed on this continuum.
I am also skeptical about legal protections or unionization, as many of these jobs are quite suited to international competition.
"Charts paint thousands of words..."
"Leading USA-Based LLM User" -- what does that even mean? With a value of "800MM" where MM is what units?
"AI Usage + Cost + Loss Growth = Unprecedented" -- how can you add three things that don't appear to be commensurable? Also, what is "Loss Growth" and how does it add to "Usage"?
There are plenty more examples in the charts later. The Overview section, while not so obviously AI-written, has an over-the-top enthusiasm and loose structure that makes me squeamish. I don't trust it, but that's just me I guess.
Just millions. ("M" is the Roman numeral for a thousand, so "MM" is a thousand thousands, i.e., a million.)
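If you deal with this finance shorthand a lot, it's mechanical enough to automate. A tiny hypothetical parser (the function name and suffix table are made up for illustration):

```python
# Convert finance-style magnitude suffixes to plain numbers.
# "M" = thousand (Roman numeral), so "MM" = thousand thousands = one million.
SUFFIXES = {"M": 1_000, "MM": 1_000_000, "B": 1_000_000_000}

def parse_amount(text: str) -> float:
    """Parse strings like '800MM' into 800_000_000.0."""
    # Check longer suffixes first so "MM" isn't misread as "M".
    for suffix in sorted(SUFFIXES, key=len, reverse=True):
        if text.endswith(suffix):
            return float(text[: -len(suffix)]) * SUFFIXES[suffix]
    return float(text)

print(parse_amount("800MM"))  # 800000000.0
```

So the slide's "800MM" is 800 million users, not 800 of some mystery "MM" unit.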
Some limitations:
- Unhelpful modernity-scale trend/hype-lines. Everyone knows prospects are big and real.
- No significant coverage of robotics, factory automation? (TAM for physical products is 15X search+streaming+saas)
- No insight? No new categories, surprising predictions, critical technologies identified?
Surprises:
- AI productivity improvements are marginal, esp. relative to the concern over jobs
- US share of top public companies by market cap increased from ~50% in 1995 to ~85% in 2025. Seems big; or is it an artifact of the demographics of retirement investments? Or is it less significant given growing private capital markets?
What I would like addressed: The AI means of production seem very capital-intensive, even as marginal cost of consumption is Saas-scalable (i.e., big producers, small consumers). I have some concern that AI development directions are decided in relatively few companies (which are biased to Saas over manufacturing, where consumers are closer to producers in size). This increases the likelihood of a generational whiff (a mistake I suspect China won't make).
As an aside, I wish Elon Musk would pivot xAI out of Saas AI (and science AI), focusing exclusively on manufacturing robotics -- dogfooding at Tesla, SpaceX and even Boring -- with the simpler autonomy of controlled environments but the hard problem of not custom building everything every time. They're well positioned, he could learn some discipline from working with downstream and upstream partners on par (instead of slavish employees, fan investors, and dull consumers or slow governments as customers). He'd redeem himself as a builder of stuff that builds, so we can make infrastructure for generations to come.
Corporates are going to talk about the current in-vogue thing always.
They talked about Blockchain.
They talked about crypto.
They talked about anything that, when left unmentioned, made investors feel the board was behind the times.
> 72 Enterprise AI Adoption = Rising Priority…Bank of America – Erica Virtual Assistant (6/18)
OK, fine, people are using it. Two important questions: did the users have other choices, over which they chose this? And were they happier with it than with other methods?
Without that data, this is just feeding the hype train.
This does not pass the smell test.
https://www.yum.com/wps/portal/yumbrands/Yumbrands/news/pres...
Above is the press release. AI comes up three times, including in the title. The release talks about an AI-driven platform to streamline operations for franchisees. OK, fine. But how exactly does AI enhance operations at the franchisee level? Especially if all they are providing as part of the SaaS is "Backed by artificial intelligence, Byte by Yum! offers franchisees leading technology capabilities with advantaged economics made possible by the scale of Yum!"
I mean, sure, if at their corporate office, they are using data analysis to predict demand and allocate resources or raw materials to improve profitability, ok. But that can be done quite effectively with statistical analysis.
Where exactly AI comes into the picture is unclear.
Yet the slides show this as some sort of monumental achievement, especially highlighting that "25000 restaurants are using at-least 1 product". Sure, they are going to use it, if that is what the franchise owner provides them, probably at additional cost.
Yum Brands has 61,000 restaurants per their website. It looks like they rolled out a new solution and about a third have adopted the new platform so far; others may be in line to do the same. Is this related to AI, or to regular software changes, updates, and revamps?
It's replacing drive-through attendants [1].
[1] https://investors.yum.com/news-events/financial-releases/new...
This strikes so close to the infamous Dropbox comment [1].
The common interactions restaurants have with technology is in bookings and online menu presentation. The latter historically required hiring a lightweight web dev. That’s now irrelevant.
[1] https://news.ycombinator.com/item?id=9224
Waymo is clearly growing fast, but it's not bigger than Uber yet. Super impressive that it's already surpassed Lyft. From personal experience, it also seems to have driven Uber prices down materially.
Page 6: https://www.bondcap.com/report/pdf/Trends_Artificial_Intelli...
Page 302: https://www.bondcap.com/report/pdf/Trends_Artificial_Intelli...
I am sure a majority of those are from India.
A small industry has formed here imparting "AI" training, very cheaply, online, with courses lasting anywhere from a few months to 2 years.
https://www.shiksha.com/online-courses/artificial-intelligen...
The Indian School of Business, one of the highest-ranked business schools in India, is offering leadership courses with a touch of AI for as little as 10 lakh rupees (about 11,000 USD). For a majority of IT folk in India, that is a very reasonable amount for a course from a very reputed institute.
Many IT folk here are scared to the core about their jobs, and there has been a mass movement toward AI certificates. While the courses teach the basics, none of them is at a university level. Most of the students are only users of tools, though.
Would you call them AI developers? Meh. They get by. Most of the work in India is back-office work anyway, and these AI engineers mostly end up doing data-related tasks.
Very few are actually building worthwhile AI stuff.
Looks like yet another "numbers going up" report; I'm not going to spend time reading 300+ slides like that.
Maybe I'm being unfair, but I don't have time to read every single report, and I'm only willing to read exceptional ones with no (or very limited) marketing slop.
There were a few practical use cases mentioned here, but the only quantified impact on worker productivity from LLMs that I noticed was a single 2023 study of call center agents showing a 14% increase in cases handled per hour.
Many of the slides are packed with details.
Who, exactly, has time to read all that stuff?
Only AI models, I'd guess.
(Yesterday evening I read about 400 pages of text as entertainment; that's not much. Though admittedly this specific report failed as entertainment and was not content worth reading.)