23 comments

  • teiferer 8 hours ago
    > absent the AI boom we would probably have lower interest rates [and] electricity prices, thus some additional growth in other sectors.

    In other words, the AI hype comes at the cost of lower growth rates in other sectors of the economy?

    It makes sense, since investor money is spent exactly once. If it goes to AI then it doesn't go elsewhere. And if it didn't go to AI then it would go elsewhere.

    Sad for folks outside tech. But at least they can AI generate cat pictures now, and watch their tech friend use AI tooling to write software.

    • oytis 6 hours ago
      Sad for folks inside tech not interested in/working on AI either
      • re-thc 42 minutes ago
        > Sad for folks inside tech not interested in/working on AI either

        Not true. With this much money and more coming, other roles will benefit too. The whole tech sector will grow - maybe less than AI-specific roles, but still.

        It's better than the alternative of no or negative growth.

        Some comments assume the funds exist and will be spent elsewhere in the US or the markets they refer to but maybe not. If no AI, US funds could invest in Vietnam (that is receiving a FTSE market upgrade), China, EU or just about anywhere else.

        Don't assume you'd benefit by wishing AI begone. When you wanted crypto to go away, you got AI.

      • benterix 6 hours ago
        I upvoted you but strictly speaking it is not true. "AI" is such a broad term. You probably meant GenAI like LLMs, and even here there are some genuinely useful applications.

        But in general, there is a lot of extremely fascinating stuff, both to exploit and explore, for example in the context of traditional (non-transformer-based) ML/DL methods. The methods are getting better year by year, and the hardware needed to do anything useful is getting cheaper.

        So while it's true that after the initial fascination developers might not be that interested in GenAI, and some even deliberately decided not to use these tools at all in order to keep their skills fresh and avoid problems related to constant review fatigue, many tech folks are interested in AI in a wider context and are getting good results.

        • amonith 5 hours ago
          Not the parent commenter, but why would you assume that he meant LLMs specifically? I'm one of the "tech people not interested in AI" and I mean everything around AI/ML. I just like writing OG code man. I like architecture, algorithms, writing "feeling good" code. Like carpenters who just like to work with wood I like to work with code.
          • oytis 5 hours ago
            Yes, same feeling about ML really. Whether you are working with classic ML or LLMs, it's all about trial and error without predictable results, which just feels like sloppy (pun unintended) engineering by programmers' standards.
            • benterix 4 hours ago
              But this just doesn't correspond to reality. Most interesting algorithms in optimization etc. are metaheuristics, as precise solutions are either provably out of reach or not yet known. In the meantime, we get excellent results with "close-enough" solutions. Yes, the pedantic aspect of my soul may suffer, and we will always strive towards better and better solutions, but I guess we accepted over a century ago that approximate solutions are extremely valuable.
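
              A minimal sketch of that "close-enough" idea in plain Python (made-up coordinates, and a simple greedy heuristic standing in for the fancier metaheuristics):

                import itertools, math, random

                random.seed(0)
                cities = [(random.random(), random.random()) for _ in range(8)]

                def dist(a, b):
                    return math.hypot(a[0] - b[0], a[1] - b[1])

                def tour_length(order):
                    # Length of the closed tour visiting cities in the given order.
                    return sum(dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
                               for i in range(len(order)))

                # Exact answer: try every permutation (only feasible for toy sizes).
                best_exact = min(itertools.permutations(range(len(cities))), key=tour_length)

                # Heuristic shortcut: always hop to the nearest unvisited city.
                unvisited, tour = set(range(1, len(cities))), [0]
                while unvisited:
                    nxt = min(unvisited, key=lambda c: dist(cities[tour[-1]], cities[c]))
                    tour.append(nxt)
                    unvisited.remove(nxt)

                print(f"optimal: {tour_length(best_exact):.3f}  greedy: {tour_length(tour):.3f}")

              The greedy tour is usually a bit longer than the optimal one, but it arrives in milliseconds rather than factorial time, which is exactly the trade being described.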
          • trod1234 27 minutes ago
            One must naturally make assumptions when responding to something that is poorly defined or communicated. That's just how it is. That's an issue for the original poster, not the responder.

            The terminology of AI has a strong link with LLMs/GenAI. Quite reasonable.

            As for code/architecture/infrastructure, I like those things too. You do have to shape your communications to the audience you are talking to, though. A lot of the products have eliminated the demand for such jobs, and it's a false elimination, so there will be an overcorrection later in a whipsaw, but by that time I'll have changed careers because the jobs weren't there. I'm an architect with 10+ years of experience, and not a single job offer in 2 years despite tens of thousands of submissions in that time.

            If there is no economic opportunity you have to go where the jobs are. When executives play stupid games based in monopoly to drive wages down, they win stupid prizes.

            Sometime around 2 years is the max time-frame before you get brain drain for these specialized fields, and when that happens those people stop contributing to the overall public parts of the sector entirely. They take their expertise, and use it for themselves only, because that is the only value it can provide and there's no winning when the economy becomes delusional and divorced from reality.

            You have AI eliminating demand for specialized labor that requires at least 5 years of experience to operate competently, AI flooding the communication space with jammed speech (for hiring, through a mechanism similar to RNA interference), and professional certificate providers retiring all benefits and the long-lasting certificates that prove competency, on top of the coordinated layoffs by big tech in the same time period. That eliminates the certificate path as a viable option for the competent but un-accredited through university.

            You've got a dead industry. It's dead, but it doesn't know it yet. Such is the problem with chaotic whipsaws and cascading failures that occur on a lag. By the time the problem is recognized, it will be too late to correct (because of hysteresis).

            Such aggregate stupidity in collapsing the labor pool is why there is a silent collapse going on in the industry, and why so many people cannot find work.

            The level of work that can be expected now in such places, because of such ill will by industry, is abysmal.

            Given such fierce loss and arbitrarily enforced competition, who in their right mind would actually design resilient infrastructure properly, knowing it will chug away for years without issue after they lay you off with no intent towards maintenance (making money all that time)?

            A time is fast approaching where you won't find the people competent enough to know how to do the job right, at any price.

          • nosianu 5 hours ago
            I see my instructions for the LLM still as code. Just in human language and not a particular programming language. I still have to specify the algorithm, and I still have to be specific - the fuzzier my instructions, the more likely it is that I end up having to correct the LLM afterwards.

            There is so much framework stuff. When I started coding I could mostly concentrate on the algorithm; now I have to do so much framework stuff that I feel like telling the LLM really only the actual algorithm, minus all the overhead, is much more "programming" than today's programming with the many, many layers of "stuff" layered around what I actually want to do.

            I find it a bit ironic, though, that our way out of the excessive complexity is an even more complex tool. Then again, programming in large, longer-running projects already felt like it had plenty of elements that reminded me of how evolution works in biology, already leading to systems that are hard or even impossible to comprehend (like https://news.ycombinator.com/item?id=18442637), so the new direction is not that big of a surprise. We'll end up more like biology and medicine some day, with probabilistic methods and less direct knowledge and understanding of the ever more complex systems, and evolution of those systems based on "survival" (it does what it is supposed to most of the time, we can work around the bugs, there is no way to debug in detail, survival of the fittest - what doesn't work is thrown away, what passes the tests is released).

            Small systems that are truly "engineered" and thought through will remain valuable, but increasingly complex systems go the route shown by these new tools.

            I see this development as part of a path towards being able to create and deal with ever more complex systems, not (or only partially) as a replacement for the way we create current ones. That AI (and what will develop out of it) can be used to create current systems too is a (for some, or many, nice) side effect, but I see the main benefit in the start of a new method to deal with ever more complexity.

            I only ever see single-person or single-team short-term experiences of LLM use for development. Obviously, since it is so new. But one important task of the tooling will only partially be to help that one person, or even team, produce something that can be released. Much more important will be the long term, like that decades-long software dev process they ended up with in my link above, where a lot of developers passing through over time are still able to extend it and fix issues years later. Right now that is solved in ways that are far from fun, with many developers staying in those teams only as long as they must, or being H1Bs who have little choice. If this could be done in a higher-level way, with whatever "AI for software dev" will turn into over the next few decades, it could help immensely.

            • benterix 4 hours ago
              > There is so much framework stuff. When I started coding I could mostly concentrate on the algorithm; now I have to do so much framework stuff that I feel like telling the LLM really only the actual algorithm, minus all the overhead, is much more "programming" than today's programming with the many, many layers of "stuff" layered around what I actually want to do.

              I was wondering about this a lot. While it's a truism that generalities are always useful whereas the specifics get deprecated with time, I was trying to dig deeper into why certain specifics age quickly whereas others seem to last.

              I came up with the following:

              * A good design that allows extending or building on top of it (UNIX, Kubernetes, HTML)

              * Not being owned by a single company, no matter how big (negative examples: Silverlight, Flash, Delphi)

              * Doing one thing, and being excellent at it (HAproxy)

              * Just being good at what needs to be done in a given epoch, gaining universality, building ecosystem, and just flowing with it (Apache, Python)

              Most things in the JS ecosystem are quite short-lived dead ends, so if I were a frontend engineer I might consider some shortcuts with LLMs, because what's the point of learning something that might not even exist a year from now? OTOH, it would be a bad architectural decision to use stuff that you can't be sure will still be supported 5 years from now, so...

            • WhyOhWhyQ 4 hours ago
              I predict the useful activity of writing LLM boilerplate will have a far shorter shelf life than the activity of writing code has had.
    • zerosizedweasle 7 hours ago
      That’s the most unforgivable part of this. How this is starving and destroying the rest of the economy
      • trod1234 11 minutes ago
        Opportunity costs don't destroy the economy.

        Money-printing does destroy the economy on a lag, specifically when production falls so catastrophically short that the whole thing shows itself to be a Ponzi without tangible value or benefit, value being based entirely in human action.

        When that happens, it's basically slave labor silently extracted from the population through inflation. Such things historically always trigger other cascading failures.

    • nosianu 5 hours ago
      > since investor money is spent exactly once

      So I'm nitpicking here, but this seems to me to be an important nitpick: This is not true because money circulates.

      The distinction is that one should not stop at looking only at the first-level effects, but look at the entire fields the money streams flow through.

      It remains true that money flows in specific areas, but it is on a higher level than only the immediate first level spending, so the analysis has to be different too.

      • vlovich123 45 minutes ago
        That only works in the long term if the investments pay off, generating enough returns to then create a new generation of entrepreneurs and investors who stimulate the economy further. If they don’t, that money kind of vanishes - it pays for salaries partially, but those aren’t generally enough to stimulate meaningful amounts of angel investing. It also buys capex equipment that depreciates in value (and presents a fixed amount of sales for the manufacturers of said equipment, not reliably repeating sales over a long time period).
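
        A rough sketch of that depreciation point with assumed, purely illustrative numbers (the capex/salary split and the yearly value loss are made up):

          # Of $100 invested, suppose $70 buys hardware and $30 pays salaries.
          capex, salaries = 70.0, 30.0
          annual_value_loss = 0.35  # assumed yearly drop in hardware resale value

          for year in range(1, 6):
              resale = capex * (1 - annual_value_loss) ** year
              print(f"year {year}: gear worth ~${resale:.0f}, salaries long since spent (${salaries:.0f})")

          # If the returns never materialize, little of the original $100 comes back
          # to fund a next round of investment; most of it sits in cheapening metal.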
      • datadrivenangel 22 minutes ago
        If I invest in lighting money on fire, there is no money to circulate.

        Paying my staff to light money on fire means that some of the money will circulate, and my brilliant idea to decarbonize the economy by replacing coal with printed currency will result in some benefits (much much less than the costs), but fundamentally it is not a productive endeavor.

        The AI data center build-out is much more useful than purely lighting money on fire, but if we overpay for more than it is actually ultimately worth, then it still was a bad idea.

      • lqstuart 5 hours ago
        Like, for example, NVIDIA investing billions back into OpenAI so that they can buy more of NVIDIA’s hardware
        • nosianu 5 hours ago
          That money will still flow elsewhere.
          • tylerhou 2 hours ago
            Money circulates but resources do not. A human hour spent constructing a data center can’t then be used to build an apartment building.
          • llamatastic 4 hours ago
            Your house is in flames? Don't worry that energy will flow elsewhere
    • bko 5 hours ago
      Yes, without [popular activity] there would be more [resources the activity uses] at a lower price.

      At some point you have to grant people agency and accept that people spend money and time on things that are valuable to them.

      • mcphage 3 hours ago
        > you have to grant people agency and accept that people spend money and time on things that are valuable to them

        We're talking about companies here, not people. And yet, it is kinda true—companies spend time and money on things that are valuable to the people at high level positions in the company and board. But that isn't the same as companies spending time and money on things that are valuable to the company.

        • bko 2 hours ago
          ChatGPT currently has ~ 120 to 190 million daily active users and ~ 800 million weekly active users. It's the fastest growing product in history, blowing others out of the water. I think investment is warranted
          • datadrivenangel 20 minutes ago
            Hey if you have a few trillion dollars to invest, my "Free dollar per user daily giveaway" app will be even faster growing. ChatGPT is great, but giving things away is ultimately philanthropy, and the OpenAI investors expect returns, not a tax write-off.
          • trod1234 9 minutes ago
            You're just looking at a pool walking down a concentration gradient. When everyone has access, and such access eliminated more jobs than it created, do you think the investment was warranted? Value is only ever based in human action; when the basis for the reasoning is shown to be false or removed, it becomes something similar to tulip mania.
    • rich_sasha 7 hours ago
      That's right.

      By the same argument, I don't think it's right to say without AI, GDP growth would be flat. That cash would likely go into other investments.

      The question is if AI will make a return on the investment or not.

      • BoredPositron 7 hours ago
        If you look at the last 10 years you have to admit that the statement wouldn't be true. No tech company spent like they do now; they rather opted for stock buybacks.
        • zerosizedweasle 7 hours ago
          This is just a very expensive buyback. At its heart this is all Nvidia pushing the share price up through a roundabout version of the same mechanism. This started out as a useful idea, and it’s mutated into something that is destroying the economy.
          • pj_mukh 5 hours ago
            Except NVIDIA is turning the roundabout investments into real silicon, data centers and extraneous software investments (see: IsaacSim, autonomy stack etc). They may not have the returns all the investors are expecting but they are an extreme net good for the ecosystem.

            I think the comparison to stock buybacks is ludicrous.

        • rich_sasha 4 hours ago
          Buybacks are essentially dividends. The cash went to investors who did something with it.

          Did they park it in bonds over the past 10 years? I doubt it. Interest rates were ~0, VC funding was crazy, the money taps were open. They would have been less open without these buybacks.

          It's not necessarily 1:1, it seems people are more willing to spend the cash on AI than they were on other things. But it's not 1:0 either.

        • graemep 6 hours ago
          If the tech companies spent the money on stock buybacks, it would not have disappeared. It would have been reinvested elsewhere.
          • surgical_fire 6 hours ago
            Or they would be parked in things such as gold, bonds, etc.
            • graemep 6 hours ago
              That requires buying those investments, which means the person who sold them has to invest that money somewhere. It still does not disappear.

              I love the way I keep getting downvoted on HN whenever I say anything about a subject about which I know a lot more about than the average person here (usually investment and finance).

              • tyleo 3 hours ago
                In my experience, it’s usually tone more than content.

                In your comment I’m replying to, the first paragraph contributes meaningfully to the discussion; the second sounds a bit like lashing out, which might be why people react negatively.

              • potato3732842 4 hours ago
                >I love the way I keep getting downvoted on HN whenever I say anything about a subject about which I know a lot more about than the average person here (usually investment and finance).

                That's the inherent nature of these voting based online platforms. They reward what the user base wants to hear over what is correct. This is especially apparent in matters with inherent nuance and uncertainty.

              • surgical_fire 6 hours ago
                > That requires buying those investments, which means the person who sold them has to invest that money somewhere. It still does not disappear.

                I also didn't say that the money disappears, I said the money may just end up getting parked in the modern day equivalent of dragon hoards. There are plenty of things to park idle money in, in hopes of returns.

                I was just pointing out that the idea that it would just become investment in other parts of the economy is naive.

                > I love the way I keep getting downvoted

                I hadn't downvoted you, but I will do so now. I always downvote people that are butthurt about internet points.

                • graemep 5 hours ago
                  > I also didn't say that the money disappears, I said the money may just end up getting parked in the modern day equivalent of dragon hoards. There are plenty of things to park idle money in, in hopes of returns.

                  You miss the point. What does the person who sold the assets in which the money is "parked" do with it? If they buy bonds, what does the seller of the bonds do with the money? Leave it in a bank account? The bank will lend the money to someone who will either spend the money (stimulating the economy) or reinvest it. They might buy another asset. If that asset is newly issued shares or bonds, the money will then go to a company planning to reinvest it. Anything else just pushes it another step to another person.

                  Eventually it goes back into the economy.

                  > I was just pointing out that the idea that it would just become investment in other parts of the economy is naive.

                  The naive assumption is that "parked" money somehow leaves the economy. It's "parked" from the point of view of the person making the investment, but it has to go somewhere.

                  > I hadn't downvoted you, but I will do so now. I always downvote people that are butthurt about internet points.

                  How mature and charmingly expressed!

                  My point is that there is a lot of Dunning–Kruger in HN discussions of economics and finance.

                  • surgical_fire 4 hours ago
                    > You miss the point. What does the person who sold the assets in which the money is "parked" do with it? If they buy bonds, what does the seller of the bonds do with the money? Leave it in a bank account? The bank will lend the money to someone who will either spend the money (stimulating the economy) or reinvest it. They might buy another asset. If that asset is newly issued shares or bonds, the money will then go to a company planning to reinvest it. Anything else just pushes it another step to another person.

                    I miss no point. I understand quite well that "parked money" still exists. What you ignore is that value is sometimes "destroyed": investments that underperform or go into the red, loans that default, crashes in real estate, etc. If money is invested in stocks and the stock's value goes into freefall, the nominal amount of money that existed previously in the economy is the same, and everyone is still poorer because of it.

                    The massive AI hype is pumping a bull run in a very small sector of the economy (whether this is a bubble is not something I can answer). A lot of money is moving around a small subset of companies pumping revenues of one another in a circular fashion, which increases the value of those stocks (thus creating economic growth, real or otherwise). Without this mechanism, this value wouldn't have been created. It's anyone's guess how things would perform without it.

                    During a crash, the same amount of money that existed prior to the crash is still there. The crash still happens and the country can still go into recession.

                    > How mature and charmingly expressed!

                    Thank you. I, too, think I am mature and charming.

                    • graemep 2 hours ago
                      You are shifting what you talked about. The comment I replied to was about tech companies putting money into stock buybacks instead of AI. You are now talking about AI being a bubble.

                      You also failed to understand that money put into an investment has to go somewhere pretty much immediately. If someone defaults on a loan they must have used the money, so someone else has it, so it is still not destroyed.

                      • surgical_fire 1 hour ago
                        > You are now talking about AI being a bubble.

                        I explicitly did not talk about AI being a bubble.

                        You may understand economics (at least you say so). But reading comprehension is not your forte.

    • csomar 7 hours ago
      Is it good for folks in tech? Most of the money is spent on energy and silicon.
      • zerosizedweasle 7 hours ago
        It’s good for a tiny subsection of people in tech, for most people this is very destructive
    • re-thc 2 hours ago
      > It makes sense, since investor money is spent exactly once. If it goes to AI then it doesn't go elsewhere. And if it didn't go to AI then it would go elsewhere.

      This assumes only "the US" exists in this world. The AI hype would have been a thing regardless.

      If the money doesn't go to the US it'd go to China or somewhere else. Just like with batteries you'd just lose the market if you don't invest.

    • throw94i4485 7 hours ago
      What else would the US spend money on? Real estate, crypto and wars! At least AI does go somewhere!
      • zerosizedweasle 7 hours ago
        You really think that in the entire economy (particularly in such a large country) there isn’t anything else worth investing in, and that basically everyone outside the small group of people involved in AI should, what, just be starved out of the economy?
        • pj_mukh 7 hours ago
          But that’s not how investing works? It’s money looking for returns not social good (that’s what taxes are for).

          In that way AI is 1000% better than crypto or real estate speculation.

          • mancerayder 6 hours ago
            How would you differentiate between real estate investment versus speculation? Money goes towards building new housing, say, or buying some houses in a neighborhood to renovate, or a new apartment building, is that speculation or investment or both?

            How is new housing supply to arrive?

            • pj_mukh 6 hours ago
              I don’t think speculation was the right word. Scarcity exploitation (something current investment firms love doing) is better and building new housing is not that.
          • hvb2 7 hours ago
            Investing is choosing how to put your money to work just like how you choose what vendor to give your business.

            Here's a thought experiment. If you could invest today in a company that will result in the destruction of your town (say a mining company) but you got 1% higher return compared to others, you're saying that's a perfect investment and would do it right away?

            And if the answer is yes to the above, you can make that a lot darker if you want. See how far your belief goes.

            • vladms 6 hours ago
              Economics works on a larger scale than a region or a sector. I am sure that the people who were copying books in the Middle Ages would not have invested in the printing press, but someone else did, and it was still good for society even if not for that specific group.

              Society works by balancing the interests of various groups, and there will be people with different opinions than yours, including some you don't like.

            • pj_mukh 6 hours ago
              That’s what regulations are for. If AI was known to have the destructive capabilities of strip mining, with real evidence of harm then it should be regulated and would have much lower returns regardless of your morals.
            • surgical_fire 6 hours ago
              > If you could invest today in a company that will result in the destruction of your town (say a mining company) but you got 1% higher return compared to others, you're saying that's a perfect investment and would do it right away?

              If you look at how the money people have always behaved, that's exactly what would happen.

          • zerosizedweasle 7 hours ago
            This isn’t an opportunity if OpenAI is signing deals in a single year (2025) in which it commits to pay a bigger number than the entire defense budget, despite being deeply unprofitable. It’s a scam. The bubble makes it seem like it’s a good investment. And yes, I do believe there are better investments outside the AI bubble.
            • pj_mukh 6 hours ago
              “I do believe there are better investments outside the AI bubble.”

              Then make them! I don’t have the confidence you have in this “scam”. For sure the valuations are inflated but I don’t think this infrastructure investment will be a waste.

        • throw94i4485 7 hours ago
          What else can yield like 10% per annum to beat 5% from government bonds?

          And AI is not "the small groups of people". It is a revolution, comparable to printing press and internet.

          • coldtea 6 hours ago
            More like compared to the invention of nuclear weapons or advertising
            • throw94i4485 2 hours ago
              Yes, when applying for jobs, I will just use a nuclear weapon on my desk. Very practical, 100%.
      • djd20 7 hours ago
        Solar panels for example...
        • throw94i4485 7 hours ago
          I am sure that is counted under "evil AI datacenters"
        • shellwizard 6 hours ago
          Not for the next 4 years, drill baby, drill
    • pj_mukh 7 hours ago
      I see this take a lot, but it confuses me. There is no guarantee that LP’s would take that money and instead invest it in <tech I respect that is not AI>.

      They would just as likely hoover up housing around the country or some such insanity to capitalize on the scarcity.

      VC is actually a pretty effective vehicle to separate rich people from their money so society can try crazy things. You just don’t agree with this particular adventure and frankly there will never be the perfect alternative adventure.

      • mordae 7 hours ago
        Most people wouldn't vote or participate in this investment craze. Maybe we shouldn't let unelected lucky nobodies decide how we invest our time for us.
        • pj_mukh 7 hours ago
          I’m not saying don’t tax them or let them influence politics.

          I’m saying letting them go to space or turning sand into intelligence is infinitely better than buying land and charging us rent (what most rich people have done in history).

          • alanbernstein 7 hours ago
            They are hoping to invest in the companies that will be the (next) electronic equivalent of rent-seeking land owners, as faang are now. It's better since land is physically necessary to live, but only marginally so.
            • pj_mukh 7 hours ago
              Happy to wield the hammer of Lina Khan to stop the monopolizing and rent seeking.
      • piva00 7 hours ago
        Isn't the core issue that this deluge of spending on the hype inflates the valuations of adjacent companies (Meta, Google, Tesla, Nvidia, etc.), which people's pensions and savings get directed towards since they become the growth stocks in the index, and when it inevitably corrects there are second/third-order effects on non-rich people?

        > They would just as likely hoover up housing around the country or some such insanity to capitalize on the scarcity.

        Another core issue: in a hyper-financialised economy, the money doesn't get invested in what would be best for society; it keeps chasing either risky endeavours or gets parked in presumed safe assets (such as housing), inflating those asset classes. Where are the incentives to invest in foundational areas which do compound into resilient growth for a society, like infrastructure: energy, transportation, etc.? It feels like without government direction to spend on big projects there's simply no appetite from the private, hyper-financialised system to do the work, unless there's potential to get 10-100x returns. Is that good for society at large?

        If hyper-financialisation is not helping the overall economy, and society to become better, why the hell should we still (in the Western world) pursue that? If all it can do is increasingly chase the extremes: hyper-growth vs extremely safe assets, is it any good anymore?

        • pj_mukh 7 hours ago
          I think if you want the public to accept that the government will be a better shepherd of this money than a decentralized smattering of individuals then the government should provide evidence for this, once it actually opens up from a dysfunctional shutdown.

          At least in the American context everything from California High Speed rail to bloated defense spending has shown that VC’s are much better shepherds of their own money.

          • piva00 7 hours ago
            Completely agree, the American government has become incompetent in delivering any real big project to its citizenry, it went through a whole ideological process of gutting its abilities to do so.

            It was designed to lose this ability and to lean on private enterprise to do anything, but in the past the government was able to roll out highways, go to the moon, build dams, bridges, power plants.

            If both the government and VCs are now unreliable to shepherd capital to direct it to the improvement of society at large you might need to rethink the whole system, and work to nudge it into a better path.

    • fragmede 7 hours ago
      Some of that “AI-driven growth” might just be the economy treading water against the tariff headwinds.
    • eunos 7 hours ago
      If the AI is indeed a bubble and burst not so long after this... We might even have a Warhammer 40k style anti AI movement
      • KronisLV 7 hours ago
        I mean, most of my friends (especially the artists but also software devs) seem to hate AI with a passion: sometimes because of the ethical bankruptcy, other times because of the amount of slop it produces and how in your face it is due to the hype cycle, other times due to a belief that it more or less leads to brainrot and atrophy of cognitive abilities.
        • grues-dinner 5 hours ago
          > how in your face it is due to the hype cycle,

          I'm really struggling with this one. I think AI (generative and not) is surely fascinating. I should by rights be all up in it. I could definitely get it; I don't think I'm stupid in terms of technology. That's regardless of the damage the laser-focus on one thing might (or might not) be doing to the rest of the industry (and the effect on society, which, to be honest, I am conflicted on whether we can blame on the technology). And yet so much of it is all so... tedious and fake somehow, and just keeping up with headlines is exhausting, let alone engaging with every LinkedIn "next huge thing that if you don't do you should find a bridge to live under soon".

          It's like that guy who tells you constantly how rich and cool he is. Bro, if you're that cool, let your cool speak for itself. But I'm not sure I want to lend you a grand for your new car.

          • rsynnott 5 hours ago
            It's all very much the crypto bubble all over again, at this point. Same hype, same "get in now before you're left behind" (this is almost a sure signal that something is an unsustainable bubble; sustainable growth doesn't require this type of scaremongering recruitment), same level of completely unrealistic promises, same grifters (in some cases, literally the same people).
        • cafebabbe 7 hours ago
          I'm a big AI supporter.

          I'm just waiting for the slop to be so metastasized that our terminally ill "social networks" finally die, alongside with it.

          Of course i'll be proven terribly wrong. But hey. hope.

        • vladms 6 hours ago
          Hating a tool? And software developers with emotions? (I get it for the artists :-p)

          In my opinion AI makes visible more structural issues that were always there but that we could ignore: people addicted to various stuff (be it substances, social networks or watching sports), social communities disappearing (no more going to the pub, stay at home with your TV), growing inequality (because capital is not taxed like labor), strange beliefs (all the conspiracy theories, which existed before) and others.

          Find a use for the new tool to improve the situation if you can, but I think that hating tools can lead you on dark paths.

        • allie1 7 hours ago
          The slop is real. Especially when I see promoters of platforms for vibe coders. They don't understand the implications of lack of security in potentially viral apps. It's easy to consider them as WMDs.

          People have the same password across services. They share personal information. In a geopolitical climate as today's, where the currency of war is disruption, it can wreak havoc.

  • lifeisstillgood 8 hours ago
    It’s worth noting that pretty much all the growth in AI / Data centres is an accounting trick as well.

    Nvidia just announced it’s investing X billion dollars into OpenAI who will turn around and spend 98% of that on Nvidia chips, so GDP rises, stocks rise but actual free market activity? Not so much

    • B-Con 7 hours ago
      GDP is a known silly metric, but it's easy to define and measure, so we keep using it.

      I'm often reminded of the quote: "A man marries his housekeeper and that country’s GDP falls".

      GDP is uncomfortably linked to the granularity of measurement as well as the number of times money changes hands to accomplish a task. Split a pipeline over more business boundaries and suddenly GDP is "bigger" despite no change in value or utility.

      • csomar 7 hours ago
        GDP is not a silly metric, it measures economic activity that the government can tax; which is what matters to governments at the end of the day.
        • lm28469 1 hour ago
          In France they count illegal drug trafficking and illegal prostitution in the GDP, I doubt these are taxed...
        • rsynnott 5 hours ago
          Unless you have a transaction tax, this isn't really so. As a contrived example, company A buys a thing from company B and sells it to company C, who sells it to company B, all at the same price. There's no profit anywhere in this system (so no tax), but there is economic activity (so, GDP).
          • csomar 4 hours ago
            Many countries charge a flat x% on revenue (not profit). There is also sales tax (VAT), which eventually has to be paid by the final consumer. There are also other taxes derived from the activity (real estate, employees, etc.). So hardly any company can "operate" without paying any taxes.

            Corporate income tax is usually a small slice of the overall taxation of a country.

            What GDP measures (and what I meant) is the visible part of the economy that the government has knowledge of; and therefore can (not necessarily do) tax.

        • logicchains 6 hours ago
          That's not true; it also measures government spending (e.g. on building infrastructure), which cannot be taxed because it makes no sense for the government to tax its own spending. Gross Private Product is the GDP equivalent that only includes taxable (private) activity.
          • csomar 6 hours ago
            I am pretty sure government spending is taxed in most countries. Both companies and employees pay taxes. Maybe you meant government revenue? (but then that’s the tax itself!)
            • dragonwriter 5 hours ago
              Some government spending is immune to some taxes which would otherwise be paid on similar private transactions (which taxes and transactions this applies to are different in different jurisdictions.)
            • NoahZuniga 5 hours ago
              Yep, government employees still pay income tax, and government contractors pay taxes on profits.
    • ChadNauseam 7 hours ago
      It just means that Nvidia is selling the chips for equity instead of selling them for cash. That difference does not turn it into an accounting trick
    • whazor 8 hours ago
      Building physical data centres and GPUs will cost some real money.
      • usrnm 7 hours ago
        GPUs aren't built in the US, though. I wonder, what percentage of all the stuff in a typical datacenter is actually made in the US
        • allie1 7 hours ago
          The margins collected by Nvidia end up in the US. And Nvidia and its employees get paid in the US. Only a small part of the revenue is outside the US.
        • re-thc 7 hours ago
          > GPUs aren't built in the US, though. I wonder, what percentage of all the stuff in a typical datacenter is actually made in the US

          Does that matter? So many things these days aren't physically made in the US. So US companies don't get the profits and aren't needed?

          The actual "made in" part is a small fraction of the total earnings. See how much an iPhone costs to make vs how much it gets sold for.

          • usrnm 7 hours ago
            And now look at how making iPhones in China helped to grow the whole Chinese electronics industry, as opposed to inflating some virtual numbers for the US
            • hollerith 2 hours ago
              It's not just virtual numbers. Apple's investors and employees (mostly in California) keep about half of the price of every iPhone sold, which is more than people in China get or keep.
            • re-thc 2 hours ago
              > helped to grow the whole Chinese electronics industry

              Why do you want it? It wouldn't have grown in the US. The protests would have erupted before anything happened. There's huge amounts of contamination, pollution, deaths, low wages, over-work, etc...

              Also US focused on designing the electronics and the ecosystem around it. Are you saying there's no industry around AMD, Broadcom, Qualcomm, etc that are fabless but hire vast amounts of people?

              > as opposed to inflating some virtual numbers for the US

              You have App Store / SaaS (i.e. developers) and a lot more.

              Would you prefer an average developer salary vs below-minimum US wage factory assembler?

            • Propelloni 7 hours ago
              And now look how the "virtual numbers" allow the USA to drag the rest of the world through the mud on a nosering.

              The USA was the winner in all this, but apparently the people don't feel it. What might have gone wrong? (hint: wealth distribution, not manufacturing)

              • usrnm 5 hours ago
                Well, the US definitely is dragging the rest of the world, the question is where and whether it's somewhere the rest of the world really wants to go. I think, we'll see soon enough
    • aurareturn 7 hours ago
      It isn't an accounting trick because the workers have to be paid and that money must be coming from somewhere.

      In this case, it is coming from investors like Microsoft, Softbank, Saudis, ChatGPT subscriptions, etc.

    • imtringued 7 hours ago
      They also spent a good chunk of that money on AMD stock, which is a trade that instantly became profitable the moment they announced it, without them ever doing anything.
  • kubb 8 hours ago
    Isn’t it funny that there are like 3 numbers that matter to people when evaluating economic conditions?

    Inflation, unemployment, GDP.

    It’s like we’re incapable of nuance on a societal level.

    • toast0 8 hours ago
      A broad overview number is going to lack nuance, yes.

      There are a ton more nuanced measures, many of which get reported too, but they don't make a splash as broad overview economic health indicators because the observations need to be paired with an explanation, and then you're in the business section and not on the front page.

    • scrollop 8 hours ago
      Matter to whom?

      If you're an economist or analyst, I have a feeling it's a little more nuanced than you're stating.

      If you mean "for the average person" or "what the media reports for the average person", then, "duh".

      The normal distribution of IQ in the general population would cause a general media company to limit the complexity of the data it reports.

      • kubb 8 hours ago
        To central banks, in politics and in the media.
        • panick21_ 2 hours ago
          This is just factually false. If you think central banks only look at those, you are either uninformed or willingly ignorant.

          For the media it might be a bit more true if you only consider short-term news media. But what do you expect them to do, a 3h 'state of the economy' every day? Tons of other media covers lots of things.

        • throwaway290 8 hours ago
          do you live in the same reality as me?

          Tons of articles posted here talk about economy topics but not inflation or GDP or unemployment

          As to banks, there's a LOT more stuff considered in monetary policy, like domestic consumption/investment, gov spending/taxes, net exports/imports.

          populist politics focus on unemployment and inflation but even Trump campaigned on reducing government spending which in fact is bad for unemployment but apparently more important to Americans regardless

    • lmm 8 hours ago
      More like we know that people can always find an obscure enough number to support what they want to do. There's a lot of value in having simple metrics that are hard to game.
      • kubb 8 hours ago
        Inflation is easily gamed by manipulating the basket of goods used to determine it.

        Unemployment can be manipulated by only including people who "actively search" for a job, tweaking the definition of "actively", and by counting part-timers as employed.

        GDP is gamed by circular money movements.
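
        A toy illustration of the basket point, with made-up price changes and weights (nothing official):

          # Same underlying price changes, two different basket weightings.
          price_change = {"rent": 0.10, "food": 0.06, "electronics": -0.05}

          basket_a = {"rent": 0.40, "food": 0.35, "electronics": 0.25}
          basket_b = {"rent": 0.25, "food": 0.30, "electronics": 0.45}  # re-weighted

          def headline(weights):
              # Weighted average of the price changes = the "headline" inflation figure.
              return sum(weights[item] * price_change[item] for item in weights)

          print(f"basket A: {headline(basket_a):.1%}")
          print(f"basket B: {headline(basket_b):.1%}")

        Same prices, different weights, a noticeably different headline number - which is all the "gaming" needs.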

        • lmm 8 hours ago
          Well sure, but the more complex your measurements are the worse you make it. Rather than looking for more nuance, we should look for stricter approaches to avoid those issues.
          • carlmr 8 hours ago
            A lot of our economy relies on "number goes up" to tell people to invest

            The metric doesn't matter as long as it goes up most of the time.

        • re-thc 8 hours ago
          > Inflation is easily gamed by manipulating the basket of goods used to determine it.

          Just turn the tariffs on and off again.

    • grues-dinner 5 hours ago
      Reducing companies to a stock price has always seemed similar to me. Especially when they so often jump about by big percentages in a day.

      There's 10,000 people and 200 facilities in 32 countries and suddenly that's all worth 10% more, no, 6% less, wait, it's holding in a reverse double-sigma split backflip indication, it'll be up when the markets open. Head explode.gif.

    • apexalpha 7 hours ago
      In my country we also use poverty indicators as well as disposable income for middle class.

      Pretty sure most other countries do, too.

    • swyx 8 hours ago
      you want what, 5y5y breakevens?
    • NoMoreNicksLeft 8 hours ago
      It's ok. Domestic steel and aluminum production were at an all time high, finished goods production nearly doubled, and we rolled several million tons of newly built ships off the dry docks and into the water. Oh wait, those numbers were so shit that we should actively deny their existence let alone relevance.
  • simianwords 8 hours ago
    I don’t know why everyone’s confident that the investments won’t pay off. Every such post on HN is like this. 800 million weekly active users from ChatGPT implies that people actually like LLMs and likely the growth will keep increasing. Every signal points to this - so investment in data centers makes sense?
    • wowohwow 8 hours ago
      History doesn't repeat itself, but it rhymes. There are callbacks to the infrastructure laid out during the dotcom bubble, e.g. Cisco and all of the networking infrastructure. Likewise, internal memos at Oracle indicate they're losing money on their hardware. Anecdotal and hand-wavy, but there are plenty of signals out there that it won't pay off. I'm not arguing one way or the other, but there are plenty of arguments for it not succeeding in a way that's required to justify the mind-boggling growth we've been experiencing.
      • simianwords 7 hours ago
        The mind boggling growth you should also focus on is consumer demand of LLMs. Can you account for it and then re-analyse?
        • overfeed 6 hours ago
          Surely, Oracle could make up for its unit losses with volume, just like SNL's "Change Bank"
          • simianwords 6 hours ago
            why do you assume unit losses? they can just price it a bit higher
        • wongarsu 6 hours ago
          The technology obviously has merits. When the bubble pops LLMs won't go away, just like the dotcom crash didn't spell the end of the internet.

          The issue isn't that there aren't good business models or value creation, it's that anything related to AI currently has valuations that are unsustainably high given the current limits of the technology. That leads to economic activity that just couldn't exist without those valuations. And once the hype cools down the valuations will go closer to reality, leaving a lot of companies unviable, and many more will have to severely cut back spending to remain viable.

          Or maybe the entire AI market pulls a Tesla and just stays at valuations that aren't justified by normal market fundamentals. Or maybe the technology adapts fast enough to keep up with the hype and can actually deliver on everything that's promised. This doesn't have to come down, it's just very likely that it will.

      • re-thc 8 hours ago
        > internal memos at Oracle indicate they're losing money on their hardware

        That doesn't mean much does it? Oracle is a huge company. Not just a cloud either. Companies often offer discounts or promotions; so? There could be plenty of managed services, managed databases, CRM and many more that make up for it.

        Whilst I'm not sure if Oracle's stock price is right - the memo was more like a way to pressure the stock down for whatever reason.

      • mike_hearn 6 hours ago
        History doesn't necessarily repeat nor rhyme. For 15 years now there have been people wrongly calling a bubble. The justifications change but looking back we can say they were wrong - or at least, a "bubble" that never pops and people get bored of talking about might have just been actual, real economic growth.

        September 2020: 2020 Tech Stock Bubble (Sunpointe Investments, tech in general)

        https://sunpointeinvestments.com/2020-tech-stock-bubble/

        August 2017: "When Will The Tech Bubble Burst?" (NY Times)

        https://www.nytimes.com/2017/08/05/opinion/sunday/when-will-...

        March 2015: Why This Tech Bubble is Worse Than the Tech Bubble of 2000 (Mark Cuban, bubble is social media)

        https://blogmaverick.com/2015/03/04/why-this-tech-bubble-is-...

        May 2011: The New Tech Bubble (Economist, bubble is "web companies")

        https://memex.naughtons.org/where-angels-dare-to-tread-the-n...

        And of course I haven't even bothered listing all the people who said cryptocurrency is a bubble. That's 15+ years of continuous bubble-calling.

        At some point you have to say that if the thing supposedly inflating the tech bubble changes four or five times over a period that lasts a big chunk of a century, then maybe it's not a bubble but simply that economic growth comes from only two sources: a bigger population and technological progress. If technological progress becomes concentrated in a "tech industry" then it's inevitable that people will start claiming there is a "tech bubble" even if that doesn't make much sense as a concept. It's sort of like claiming there's a "progress bubble". I mean, sure, there can and will be bankruptcies and retrenchments, as there always are at the frontier of progress. But that doesn't mean there's going to be a mega-collapse.

    • grues-dinner 8 hours ago
      Building the datacentres may be massively "productive" in GDP terms (servers! GPU! Power plants! Electrical inspection!), but when they're completed, that activity will cease. Datacentres don't employ many people directly and mostly consume only electricity.

      So if the action of datacentre building shows up as essentially the only GDP growth, but what later happens in the datacentres fails to take its place or exceed it, there will be a dip.

      Whether LLMs grinding away can prop up all GDP growth from now on remains to be seen. People use them when they're free, but people also collected AOL discs for tree decorations because they were free.

      • simianwords 8 hours ago
        You don’t seem to have got my point. You say the investment won’t pay off but why not? This is quite a big assumption to make considering there’s huge evidence that people like LLMs.
        • grues-dinner 7 hours ago
          I'm not saying they won't (the "if" is an important word in the sentence), it's just that datacentre growth is not the same as AI growth. It's related, but it is, by definition, temporary as datacentres are soon completed.

          There's obviously evidence people use LLMs. That's not necessarily the same as people paying a noticeable fraction of all their money to use them in perpetuity. And even if "normal" people do start taking out $50 subscriptions as a matter of course, commoditisation could push that price down as could "dumping" of cheap models from overseas. A breakthrough in being able to run "good enough" models very cheaply, or even locally, would also make expensive cloud AI subscriptions a hard sell. And expensive subscriptions are the only way this pans out.

          It hasn't yet been shown that AI is a gas that will fill all the available capacity, and keep it filled. If bread were 10 times cheaper, would you buy 10x as much? That has more or less happened to food availability in the West over the last 200 years and OpenBread and BunVidia don't dominate the economy.

          None of that is sure to happen, and maybe the AI hype train is right and huge expensive LLMs specifically drive a gigantic productivity boom¹ and are worth, say, 0.2*GDP forever. But if it isn't, and it turns out $5 a month gets you all people actually need, it's going to be untidy.

          ¹: in which case, why is GDP not growing from the AI we already have?

          • simianwords 7 hours ago
            It is very interesting to see two completely different positions on the economics of AI arrive at the same conclusion that AI is a bubble:

            1. LLMs are so economically unfeasible that companies won't be able to make a profit, and investing in datacenters will turn out to be a bad bet because AI companies themselves are a bubble.

            2. LLMs will become so cheap that datacenters will be useless and people will just use local models, so investing in datacenters is a bad bet.

            I see both positions in this thread, so which one is true?

            • grues-dinner 7 hours ago
              It's a spectrum of possibilities upon which, as yet, no one is sure where it will land. Invested proponents say it will eat the world. Sceptics say it's all a crock of shit. Anyone who says they know the answer, doesn't.

              The two positions there aren't really different; they're mostly that the profitability of AI can be eroded from several sides. One: the cost to run (power and hardware) being high and being unable to recover it from revenue. Two: commoditisation and efficiencies (which can also be operational convenience rather than only about power) driving down costs and therefore also revenue, and being unable to compensate by selling more AI more cheaply. Three: AI didn't actually help as many people make money as hoped, and thus they don't want to pay, also depressing revenue.

              In the middle is the three-axis happy AI place where costs are not too high, but also AI is too hard to have someone else do it cheaper, and it's useful enough to be paid for.

              My guess is AI ending up roughly as impactful overall as cloud computing. A big industry, makes a lot of money, touches and enables very many businesses but hasn't replaced the entire economy, profitable especially if you can stake out a moat, with low-margin battlegrounds for the price-sensitive.

              • simianwords 6 hours ago
                I agree with all your points. But I still do feel it is interesting that the vibe is at both ends of the spectrum - both highly efficient and highly expensive.

                Maybe it just works out like CPUs or, as you said, cloud computing? CPUs got cheaper, demand increased and more people use them, but everyone still made a profit.

                • grues-dinner 6 hours ago
                  It just goes to show how little information there really is about what will happen here. Literally no one knows. It's uncharted territory, but it's filled with map-sellers.

                  Another good example is railways in the US. That was a huge, huge boom, around a fifth of GDP. No one knew what the railway-based economy of the 1900s would look like but it was surely going to be spectacular. All that money! The speed! The cargo! All those people! Railways absolutely were a commerce multiplier, and made stacks of revenue very quickly and got investment from around the world to build build build. But, eventually, the (over)building was done, there were bankruptcies and consolidations and it ultimately did not become the dominant industry. And yet, it's still a big industry that enables a lot of other economic activity. Trains are still expensive to operate, but moving goods is pretty cheap. Obviously there's a natural physical monopoly at play there that AI doesn't have so again, who knows.

                  Which leads to another thought that the AI investors, both commercial and national, maybe should eventually have: is there an automobile or airliner to their railway?

        • csoups14 7 hours ago
          You don't think it's a bigger assumption that plowing hundreds of billions of dollars into a novel and often misunderstood technology is going to net out positive? We're not talking "people like feeds of cat photos" money and risk here, we're talking about a bubble with the potential to tank the entire economy. On top of that, it's also a highly disruptive technology that, should it ramp as quickly as it might, will lead to massive amounts of societal strife at a time when we're already stretched a little thin, to say the least. "What do you mean Credit Default Swaps don't work? They've been functioning perfectly well these past few years and so far the impacts have only been positive. Tons of people are getting mortgages on houses they weren't in the past, it's the American dream!"
        • piva00 7 hours ago
          People like LLMs, will they pay for LLMs to offset the investments? Will they pay enough so these companies can stay afloat when the slush funds inevitably run dry?

          Will LLMs create a significant shift in productivity where its usage will create enough overall value to the economy to justify the hoovering of capital from other industries?

          Those are the unknowns. I don't think many people are saying that LLMs have no value like NFTs; it's that the money being pushed onto this novelty is such an absurd amount that it might pull down everything else if/when it's discovered that there won't be enough value generated to compensate for trillions of USD in investments. Hence the comparison to the dotcom bubble: we came out of that with the infrastructure for the current internet even though it was painful for a lot of people when it crashed. Will we have a second internet-esque revolution after this whole thing crashes?

          The technology is definitely valuable, and quite fantastic from a technical standpoint. But is it going to lift all the other boats in the rest of the economy like the internet did? No one can tell that yet.

    • lm28469 1 hour ago
      > 800 million weekly active users from ChatGPT implies that people actually like LLM’s

      800m total users, 25m paying customers... Most people use free accounts and would likely never pay any substantial amount of money for them

      https://www.theverge.com/openai/640894/chatgpt-has-hit-20-mi...
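
      A quick back-of-the-envelope in Python, treating the 800M/25M figures above as given; the ~$20/month plan price is my assumption for illustration, not a number from the article:

        # user figures as quoted in the comment above; plan price is an assumed illustration
        weekly_active = 800e6
        paying = 25e6
        plan_price = 20  # $/month, assumed

        conversion = paying / weekly_active
        implied_subscription_revenue = paying * plan_price * 12  # $/year, subscriptions only

        print(f"paid conversion ~ {conversion:.1%}")                                   # ~3.1%
        print(f"implied subscriptions ~ ${implied_subscription_revenue / 1e9:.0f}B/yr")  # ~$6B/yr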

    • empiko 8 hours ago
      The question is if it is economically feasible. People can enjoy generating funny AI images to share with their friends, but it might not be economically feasible to invest $100B to give them this toy. There is a question of how much value using GenAI generates for the economy.
      • simianwords 7 hours ago
        The cost of LLMs has gone down more than 30-fold in the last 1-2 years. At what point would you consider it economically feasible? I think this is a question worth asking so that we can tackle the fundamental economics of LLMs.
        • empiko 7 hours ago
          It becomes feasible when people are willing to pay more than it costs to run it. But I think this will be a pretty uphill battle, as many use cases are hard to monetize (e.g. proofreading) and for many use cases you will feel pressure from smaller models (you don't need the most expensive SOTA model to generate an email). There is probably just a very limited number of use cases in the goldilocks zone of difficulty: hard enough that people will be willing to pay for them, AND still solvable by AI. I think programming might be one of them.
          • simianwords 7 hours ago
            > It becomes feasible when people are willing to pay more than it costs to run it.

            This is what I want to challenge. At what point do you think people will pay more than it costs? Let's try to come up with a number, because the price of LLMs has dropped more than 30-fold in the last 2 years.

            It may continue to drop, and AI companies will continue to run at a loss because new things will be unlocked by the new efficiencies, and the same debate over LLM economics will continue.

            I think it is already profitable and people are more than willing to pay for the actual costs.

            • empiko 5 hours ago
              > I think it is already profitable and people are more than willing to pay for the actual costs.

              If people are willing to pay for the costs, where are the profitable AI companies?

              • simianwords 1 hour ago
                They don't make a profit because they reinvest in research and development.
        • ehnto 7 hours ago
          We would also need to know how many people are willing to pay for it at the consumer level. Additionally, businesses will need to see real ROI from using it.

          At the moment everyone is trying their best to implement it, but it remains to be seen whether it actually increases a company's profitability. Time will tell, and I think there are a lot of things obfuscating the reality right now that the market will eventually make clear.

          Additionally the economics of training new models may not work out, another detail that's currently obfuscated by venture capital footing the bill.

        • linhns 6 hours ago
          And yet 90% of AI companies do not turn a profit. If it were economically feasible, big corps would start cashing in immediately and halt development. They have gone deep on AI, so now they must go deeper to get to that cashing-in point.
    • nylonstrung 3 hours ago
      Forget profitability, the major Western LLM providers still have profoundly negative gross margins. What would that 800M userbase look like if free tier users had to pay the bill for inference and training costs?

      I think it will only be economically sound as a business if you're Google and can start serving ads OR when we switch over from GPUs to wildly more efficient TPUs/ASICs

      All the data center CapEx is going into compute that will be obsolete once that happens

    • techpression 8 hours ago
      They like them when they're free and/or cheap. But will that be sustainable? The answer to that question is far less certain. Maybe ads will save them, maybe royalties, maybe price hikes. But it's far from certain at least.
      • simianwords 8 hours ago
        I really don't get this. People actually pay for, use and enjoy LLMs. My company has paid for Gemini and ChatGPT subscriptions and people actually use them.

        Why do you automatically assume that people won’t pay for it?

        • fabian2k 8 hours ago
          Because they would probably have to pay a lot more to make this profitable at the levels current valuations and investments indicate. And the barrier to paying is much higher for private individuals than for companies. I don't think anyone has a really solid handle on the economics here yet, as the field is changing very quickly.

          But there is a big difference here compared to most software companies: the product costs significant money per additional customer and per unit of usage.

          There is a real product here. And you can likely earn money with it. But the question is "how much money?", and whether these huge data center investments will actually pay off.

          • simianwords 8 hours ago
            > Because they probably would have to pay a lot more to make this profitable at the levels current valuations and investments indicate.

            I keep hearing this, but it is very unlikely to be true. The cost of LLMs has gone down more than 30-fold in the past year. How much further should it go down before you consider it economically feasible?

            • fabian2k 7 hours ago
              Why are they building so many data centers then? That is all cost that has to be earned back. And using them as agents creates much higher costs per interaction than just chatting. We also don't know if the current prices are in any way economical, or how they relate to actual development and inference costs.
              • simianwords 7 hours ago
                Do you think people should not have invested in data centers because of Moore's law, which also applied to CPUs? The same mechanics apply there - it turns out that when things get more efficient, demand increases and more possibilities are unlocked.
              • raincole 7 hours ago
                When Moore's Law was still effective, did you ask why people produced chips?
        • techpression 7 hours ago
          Because all the numbers point towards it being incredibly far away from being profitable? We also pay for Google Workspace, and at 10 euro a month we get Gemini Pro. So while we might pay for it, it's more of a free add-on; we would've paid 10 without it too.

          You can also do a simple analysis of the Anthropic Max plan and how it successively gets more and more limited. They don't have the OpenAI VC flow to burn, so I believe it's an indicator of what's to come, and I could of course be wrong.

          • simianwords 7 hours ago
            It's not profitable because of massive reinvestment into R&D.

            If you want to question the fundamental economics of LLMs themselves, then how efficient should LLMs get before you decide they're cheap enough to be economically viable? 2 times more efficient? 10 times? They have already gotten more than 30 times more efficient over the last 2 years.

            • techpression 7 hours ago
              And Claude is more expensive than ever, efficiency gains and all. Those investments don't necessarily pay off, and historical performance is not indicative of future performance.

              I don't think it's a matter of efficiency at current pricing but of increased pricing. It would be a lot more sane if the use cases became more advanced and fewer people used them, because building enormous data centers to house NVIDIA hardware so that people can chat their way to a recipe for chocolate cake is societal insanity.

              • simianwords 7 hours ago
                > And Claude is more expensive than ever, efficiency gains and all

                This is not true for Claude, and not for any other LLM either.

                > I don’t think it’s a matter of efficiency at current pricing but increased pricing.

                I don't know what this means - efficiency determines price.

                > It would be a lot more sane if the use cases became more advanced and less people used them, because building enormous data centers to house NVIDIA hardware so that people can chat their way to a recipe for chocolate cake is societal insanity.

                Do you think same thing could have been said during the internet boom? "It would be more sane if the use cases become more advanced and less people used them, because building enormous data centers to house INTEL hardware so that people can use AOL is societal insanity".

                • techpression 6 hours ago
                  Weird how Sonnet 3.7 cost the same (when released) as Sonnet 4.5. That is with all those efficiency gains you speak about. 4.5 is even more expensive on bigger prompts.

                  Efficiency doesn't determine price, companies do. Efficiency gains tend to go to returns, not lower prices.

                  The internet scaled very well; AI hasn't so far. You can have millions of users on a single machine doing their business, but you need a lot of square footage for millions of users working with LLMs. It's not even in the same ballpark.

                  Did we build many single-company data centers the scale of Manhattan before AI?

                  • simianwords 6 hours ago
                    > Weird how Sonnet 3.7 cost the same (when released) as Sonnet 4.5

                    Then I think we agree that while the cost remained the same, the performance dramatically increased.

                    FWIW Sonnet 3.7 costs 2.5x as much as GPT-5 while also being slightly worse.

                    • techpression 6 hours ago
                      Well, with a 30x increase in efficiency and far from 30x more performance, that amounts to a price increase in this context; the efficiency gains clearly don't trickle down to customers.

                      As for OpenAI I don’t think anyone is working on the API side of things since GPT-5 has had months of extreme latency issues.

    • schnitzelstoat 7 hours ago
      Yeah, I don't think ChatGPT is going to take all our jobs, but it might replace Google for a lot of searches.

      That alone will be a monumental shakeup for the industry.

    • throwaway290 4 hours ago
      "People actually like LLM's"

      People also like pizza. How many million weekly active consumers of pizza? How about rice?

      Really what people like here is cheap stuff and having a job that pays money to buy it. ChatGPT so far loses boatloads of money. Soon they jack up prices, add ads, and people realize that it was all trained on them & threatens their job. So really right now ChatGPT is sweating hard to make itself too big to fail.

      https://news.ycombinator.com/item?id=45511368

    • csomar 7 hours ago
      We have hit a plateau for many months in the performance of LLMs. Anthropic recently released 4.5, and while it improved in some contexts, it failed to write a commit message for me a few times on a workflow I had. 3.5 to 4 had close to zero failure rates on this workflow, and it was surprising to see 4.5 fail. It seems that gains on certain benchmarks can affect quality elsewhere.

      LLMs are very useful; I can't see myself going back to the old way of doing things. But the amounts invested expect a major breakthrough that we are not anywhere near. It's a gamble, and that's what innovation is; but you gamble a small portion of your wealth, not your house, and you certainly do not gamble a huge country like the US on a single thing.

    • catmanjan 8 hours ago
      Imagine how many millions would drive a Ferrari if they gave them away for free
      • simianwords 8 hours ago
        It’s not true that people are only using it because it’s free.

        It’s actually quite interesting to see these contradictory positions play out:

        1. LLMs are useless and everyone is making a stupid bet on them. The users of LLMs are fooled into using them and the companies are fooled into betting on them

        2. LLMs are getting so cheap that the investments in data centers won't pay off because apparently they will get good enough to run on your phone

        3. LLMs are bad: bad for the environment, bad for the brain, bad because they displace workers, and bad because they make rich people richer

        4. AI is only kept up because there's a conspiracy by Nvidia, Oracle, and OpenAI to keep it propped up (something something circular economy)

        5. AI is so powerful that it should not be built or humanity would go extinct

        • catmanjan 5 hours ago
          It is true that none of the LLM providers are profitable though, so there is some number above free that they need to charge and I am not convinced that number is compelling
          • simianwords 5 hours ago
            None of the LLM providers being profitable is exactly the situation I would expect. On the contrary, them being profitable would be absurd! Why wouldn't they put the money back into R&D and marketing?
            • catmanjan 4 hours ago
              I'm not well versed in the accounting terminology, but whatever the word is for the operating costs, I am not convinced consumers will ever pay enough to cover them
              • simianwords 2 hours ago
                Do you think that if LLMs become 10 times more efficient it might cover the costs? What efficiency increase would you think is enough?
      • re-thc 8 hours ago
        > Imagine how many millions would drive a Ferrari if they gave them away for free

        0.

        Ferrari is a luxury sports brand. What's the point of it if it flooded the streets?

        • high_na_euv 7 hours ago
          Good looking, powerful, reliable car?
  • Animats 9 hours ago
    US annual population growth is about 0.5%. So per capita GDP not counting data centers went down.
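
    A minimal sketch of that arithmetic (the ~0.1% ex-datacenter GDP growth figure is the estimate discussed elsewhere in this thread, not an official series):

      # rough per-capita check; both rates are approximate thread figures
      gdp_growth_ex_datacenters = 0.001  # ~0.1%/year, growth excluding data centers
      population_growth = 0.005          # ~0.5%/year

      # for small rates this is roughly GDP growth minus population growth
      per_capita_growth = (1 + gdp_growth_ex_datacenters) / (1 + population_growth) - 1
      print(f"{per_capita_growth:.2%}")  # ~ -0.40%, i.e. per-capita GDP ex-datacenters shrank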
    • stinkbeetle 8 hours ago
      Yes, and GDP per capita with them went up. And removing a source of GDP that has been shrinking might make GDP growth look even higher. What does it tell us? Anything?
    • Tarsul 7 hours ago
      I think this year the population actually goes down. So per capita GDP growth is even bigger than 0.1%!
    • mytailorisrich 8 hours ago
      That's what has been happening in the UK for several years.

      GDP per capita is more important than overall GDP to see the trend in prosperity, IMHO.

  • freetonik 7 hours ago
    Reminds me of many discussions in Ireland like "if we didn't have big tech giants, we'd have a government budget deficit". Sure, but we do have big tech giants. If I didn't have a job, I'd be poor.
    • defrost 7 hours ago
      The very real concern here is that the AI bubble inflating the economic tent and providing all those jobs for vibers has all the nutritional value of that famous potato crop in Ireland just before the blight struck.
  • netfortius 8 hours ago
    Source: Harvard - an institution this administration doesn't like, so data will probably be categorized as fake news.
  • skywal_l 7 hours ago
    Zuckerberg et al. are expecting a bust so they can consolidate on the cheap. We've seen this again and again. Nothing new here.
  • zundunka 8 hours ago
    They estimate data centers/software are responsible for 92% of GDP growth? If that's true, then we are in for some bad news soon.
  • throw-10-8 5 hours ago
    Not a surprise to anyone trying to fundraise outside the AI space.

    AI sucked the air out of the room for almost no return, the crash is going to be something to behold.

  • stkdump 8 hours ago
    I am surprised that Taiwan and China are not mentioned in the article or in these comments. Given the threat that China will get to Taiwan and capture control of the unrivaled TSMC, investing in silicon while we can is pure and simple de-risking in a de-globalizing world.
  • grafmax 4 hours ago
    Apparently we address the climate crisis by investing entirely in new energy consumption.
  • ottomanbob 7 hours ago
    Wow, deflation is becoming so obvious that even the geniuses at Harvard are noticing.
  • justlikereddit 8 hours ago
    Behold the miracle! AI is already saving the economy and we don't even have AGI yet :^)
  • aussieguy1234 7 hours ago
    so much for the AI productivity boost...
  • floppiplopp 8 hours ago
    Oh well, I guess we have another once-in-a-lifetime economic crisis ahead of us. Ready your tax dollars, prepare the bailouts! We've got some billionaires to save.
  • epistasis 8 hours ago
    See also this Bloomberg article on how circular the deals are:

    https://www.bloomberg.com/news/features/2025-10-07/openai-s-...

    This is another way that bubbles form: a cabal of cross-dealing giants without solid revenue to ground the valuations is a very scary position.

    I believe that a lot of AI is real, but the realness of AI's impact on the economy does not prevent a bubble. The dot com bubble didn't make the internet any less real or impactful on everyone's lives. So it feels like very scary times ahead.

    Also, the devaluation of the dollar is an extremely tricky situation for the US. Morgan Stanley puts it at 10% less value in 2025, and another 10% drop by the end of 2026:

    https://www.morganstanley.com/insights/articles/us-dollar-de...

    I was never scared by the inflation during Biden because it was global, the US was doing so much better than the rest of the world, and it seemed like we were on track to put the economy in the right position. But now, it feels like the US is intentionally entering a recession and choosing a future of poverty.

    • hiimkeks 8 hours ago
      Cory also has a giant writeup about the whole situation, including more links.

      https://pluralistic.net/2025/09/27/econopocalypse/

    • rixed 8 hours ago

        > the devaluation of the dollar is an extremely tricky situation for the US
      
      For many, the revaluation of the dollar is actually the scary scenario. See for instance this (maybe too) subtle analysis from Yanis Varoufakis:

      https://unherd.com/2025/02/why-trumps-tariffs-are-a-masterpl...

      • epistasis 12 minutes ago
        Though I agree that dollar revaluation also causes problems, I find approximately zero to agree with in that article. A low dollar hurts me today, not the elites as the article claims. A low dollar does little to hurt the elites, whose wealth is managed to ride that out without impact. And tariffs in particular hurt working people, not the elites. We went through this a century ago; it's one of the better understood things in economics. The pain doesn't start today with the tariffs, it starts next year.
    • grumpy-de-sre 7 hours ago
      Over the last few years I've been drifting away from focusing on pure technology plays. Somehow I feel like maybe that might save my bacon someday ... or maybe AI will completely consume the world and we'll all be left behind.
    • mgh2 8 hours ago
    • ciconia 8 hours ago
      Very cheap cloud hosting coming soon to a data center near you...
  • throawayonthe 6 hours ago
    (U.S. GDP)
  • pavlov 9 hours ago
    AI bubble masking the self-inflicted tariff recession.
  • jeisc 7 hours ago
    Under Nazi rule, Germany had full employment because not having a job was a ticket to a concentration work camp
  • mg 8 hours ago
    So $400B per year is currently spent on datacenter buildout?

    To see if that is too much or not, we have to put that number in relation to the value those datacenters will create in the future.

    If global GDP is $100T and labor is 50% of that, then the current TAM for intelligence is around $50T.

    How much of that has to be automated per year to justify $400B of investment? For a 10% ROI it would be around 1%, right? But the datacenters will not be the only cost of generating artificial work. We also need energy, software and robots. So let's say 2%.

    So it comes down to the question whether those datacenters built for $400B will automate 2% of global GDP in the foreseeable future.

    And there is another option: That the TAM increases. That we use the new possibilities we have to build more products and services. And see global GDP grow. 2% AI driven global GDP growth would also justify the $400B datacenter buildout.

    So let's think about a mix: 1% labor automation and 1% GDP growth per year via AI. That would be needed to justify continued spending of $400B per year for the AI buildout.
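
    Spelled out as a quick Python sketch, using the same round numbers as above (nothing here is a forecast, and the "spend plus a ~10% return" reading of the ROI step is my interpretation):

      # round figures from this comment, not real data
      buildout_per_year = 400e9   # annual data center capex, $400B
      global_gdp = 100e12         # global GDP, $100T
      labor_share = 0.50
      intelligence_tam = global_gdp * labor_share  # ~$50T

      # value the buildout has to create each year: the spend plus a ~10% return,
      # then roughly doubled to also cover energy, software and robots
      required_value = buildout_per_year * 1.10
      required_share = required_value / intelligence_tam  # ~0.9% of the labor TAM
      all_in_share = 2 * required_share                   # ~1.8%, rounded to ~2% above

      print(f"labor automation needed: {required_share:.1%}, all-in: {all_in_share:.1%}")
      # equivalently, a mix of ~1% labor automation and ~1% AI-driven GDP growth per year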