Arm AGI CPU

(newsroom.arm.com)

194 points | by RealityVoid 4 hours ago

33 comments

  • tombert 1 hour ago
    The name of this CPU is bordering on securities fraud. When people see the term "AGI" now, they are assuming "Artificial General Intelligence", not "Agentic AI Infrastructure".

    Of course people don't realize that, and people will buy ARM stock thinking they've cracked AGI. The people running Arm absolutely know this, so this name is what we in the industry call a "lie".

    • torginus 1 hour ago
      Considering AGI has been degraded into a generic feelgood marketing word, I can't wait to get my AGI-scented deodorant.
      • bensyverson 50 minutes ago
        You can already drink AGI! Oh sorry, AG1. The resemblance must be a complete coincidence.
        • krogenx 43 minutes ago
          Pretty sure in that case AG stands for Athletic Greens.

          I think the name change also came before the AI hype.

      • SecretDreams 1 hour ago
        > I can't wait to get my AGI-scented deodorant.

        Old spice for me, thanks!

    • imglorp 47 minutes ago
      The marketers did this for 5G also, calling their product 5G before it was actually deployed, only because it came after 4G and they wanted to ride the upcoming 5G buzz.

      It seems marketing /depends/ on conflating terms and misleading consumers. Shakespeare might have gotten it wrong with his quip about lawyers.

      https://www.pbs.org/newshour/economy/att-to-drop-misleading-...

    • bhouston 1 hour ago
      If you showed someone 5 years ago what our computers can do with the latest LLMs now, they would probably say it sure looks a lot like AGI.

      We have to keep redefining AGI upwards, or nitpicking it, to show that we haven't achieved it.

      I would argue that LLMs are actually smarter than the majority of humans right now. LLMs do not have quite the agency that humans have, but their intelligence is pretty decent.

      We don't have clear ASI yet, but we definitely are in an AGI era.

      I think we are missing ego/motivations in the AGI, and them having self-sufficiency independent of us, but that is just a bit of engineering that would actually make them more dangerous; it isn't really a significant scientific hurdle.

      • tombert 1 hour ago
        Ok, but it's not AGI. People five years ago would have been wrong. People who don't have all the information are often wrong about things.

        ETA:

        You updated your comment, which is fine but I wanted to reply to your points.

        > I would argue that LLMs are actually smarter than the majority of humans right now. LLMs do not have quite the agency that humans have, but their intelligence is pretty decent.

        I would actually argue that they are decidedly not smarter than even dumb humans right now. They're useful but they are glorified text predictors. Yes, they have more individual facts memorized than the average person, but that's not the same thing; Wikipedia, even before LLMs, also had many more facts than the average person, but you wouldn't say that Wikipedia is "smarter" than a human, because that doesn't make sense.

        Intelligence isn't just about memorizing facts, it's about reasoning. The recent Esolang benchmarks indicate that these LLMs are actually pretty bad at that.

        > We don't have clear ASI yet, but we definitely are in an AGI era.

        Nah, not really.

        • bhouston 1 hour ago
          > They're useful but they are glorified text predictors.

          There is a long history of people arguing that intelligence is actually the ability to predict accurately.

          https://www.explainablestartup.com/2017/06/why-prediction-is...

          > Intelligence isn't just about memorizing facts, it's about reasoning.

          Initially, LLMs were basically intuitive predictors, but with chain of thought and more recently agentic experimentation, we do have reasoning in our LLMs that is quite human-like.

          That said, there is definitely a bias towards training set material, but that is also the case with the large majority of humans.

          For the Esolang benchmarks, I would be curious how much adding a SKILLS.md file for each language would boost performance?

          I am pretty confident that we are in the AGI era. It is unsettling and I think it gives people cognitive dissonance, so we want to deny it and nitpick it, etc.

          • tombert 1 hour ago
            > There is a long history of people arguing that intelligence is actually the ability to predict accurately.

            There sure is, and in psychological circles it appears there's an argument that that is not the case.

            https://gwern.net/doc/psychology/linguistics/2024-fedorenko....

            > Initially, LLMs were basically intuitive predictors, but with chain of thought and more recently agentic experimentation, we do have reasoning in our LLMs that is quite human-like.

            If you handwave the details away, then sure, it's very human-like, though the reasoning models just kind of feed the dialog back to themselves to get something more accurate. I use Claude Code like everyone else, and it will get stuck on the strangest details that humans simply wouldn't.

            > For the Esolang benchmarks, I would be curious how much adding a SKILLS.md file for each language would boost performance?

            Tough to say since I haven't done it, though I suspect it wouldn't help much, since there's still basically no training data for advanced programs in these languages.

            > I am pretty confident that we are in the AGI era. It is unsettling and I think it gives people cognitive dissonance, so we want to deny it and nitpick it, etc.

            Even if you're right about this being the AGI era, that doesn't mean that current models are AGI, at least not yet. It feels like you're actively trying to handwave away details.

            • bhouston 59 minutes ago
              > though the reasoning models just kind of feed the dialog back to themselves to get something more accurate.

              Much of our reasoning is based on stimulating our sensory organs, either via imagination (self-stimulation of our visual system) or via subvocalization (self-stimulation of our auditory system), etc.

              > it will get stuck on the strangest details that humans simply wouldn't.

              It isn't a human. It is AGI, not HGI.

              > It feels like you're actively trying to handwave away details.

              Maybe. I don't think so though.

        • saganus 1 hour ago
          What does AGI look like in your opinion?

          Personally, I've used LLMs to debug hard-to-track code issues and AWS issues among other things.

          Regardless of whether that was done via next-token prediction or not, it definitely looked like AGI, or at least very close to it.

          Is it infallible? Not by a long shot. I always have to double-check everything, but at least it gave me solid starting points to figure out said issues.

          It would've taken me probably weeks to figure those out without LLMs, instead of the 1 or 2 hours it took.

          In that context, I have a hard time imagining what a "real" AGI system would look like that isn't the current one.

          Not saying current LLMs are unequivocally AGI, but they are darn close for sure IMO.

          • root_axis 38 minutes ago
            If we had AGI we wouldn't need to keep spending more and more money to train these models; they could just solve arbitrary problems through logic and deduction like any human. Instead, the only way to make them good at something is to encode millions of examples into text or find some other technique to tune them automatically (e.g. verifiable reward modeling with computer systems).

            Why is it that LLMs can ace nearly every written test known to man, but need specialized training in order to do things like reliably type commands into a terminal or competently navigate a computer? A truly intelligent system should be able to 0-shot those types of tasks, or in the absolute worst case, 1-shot them.

          • tombert 54 minutes ago
            > What does AGI look like in your opinion?

            Being able to actually reason about things without exabytes of training data would be one thing. Hell, even with exabytes of training data, doing actual reasoning for novel things that aren't just regurgitating things from Github would be cool.

            Being able to learn new things would be another. LLMs don't learn; they're pretrained models (it's in the name: GPT) that you send inputs into and get outputs from. RAG is cool, but it's not really "learning"; it just eats a bit more context in order to give a facsimile of learning.

            Going to the extreme of what you're saying, then `grep` would be "darn close to AGI". If I couldn't grep through logs, it might have taken me years to go through and find my errors or understand a problem.

            I think they're very neat, but ultimately pretty straightforward input-output functions.

            • adamsb6 40 minutes ago
              Why should implementation matter at all? You should be able to classify a black box as AGI or not.

              Well, I guess you lose artificial if there’s a human brain hidden in the box.

      • dubcanada 1 hour ago
        A human can think logically, with reason; that's not to say they are smart or smarter. But LLMs cannot. You can convince an LLM that anything is correct and it will believe you. You can't convince a human of just anything.

        I can't argue that LLMs don't know an absolutely insane amount of information about everything. But you can't just say LLMs are smarter than most humans. We've already decided that smartness is not about how much data you know, but about thinking about that data with logical reasoning, including the fact that it may or may not be true.

        I can run an LLM through absolutely incorrect data and tell it that data is 100% true. Then ask it questions about that data and get those incorrect results as answers. That's not easy to do with humans.

      • nananana9 48 minutes ago
        My definition of AGI hasn't changed - it's something that can perform, or learn to perform, any intellectual task that a human can.

        5 years ago we thought that language is the be-all and end-all of intelligence and treated it as the most impressive thing humans do. We were wrong. We now have these models that are very good at language, but still very bad at tasks that we wrongly considered prerequisites for language.

      • root_axis 56 minutes ago
        > If you showed someone 5 years ago what our computers can do with the latest LLMs now, they would probably say it sure looks a lot like AGI.

        Would they? Perhaps if you only showed them glossy demos that obscure all the ways in which LLMs fail catastrophically and are very obviously nowhere even close to AGI.

        Certainly, they wouldn't expect that an AI able to score 150 on an IQ test is unable to play a casual game of chess because it isn't coherent enough to play without making illegal moves.

        • bykhun 54 minutes ago
          > Certainly, they wouldn't expect that an AI able to score 150 on an IQ test is unable to play a casual game of chess because it isn't coherent enough to play without making illegal moves.

          To be fair, I am pretty sure Claude Code will download and run Stockfish if you task it to play chess with you. It's not like a human who read 100 books about chess but never played would be able to play well with their eyes closed while someone whispers board positions into their ear.

          • root_axis 30 minutes ago
            There are a lot of problems with this analogy, but even if you were to take a photo of the board after every move and send it to the model, it would still be unable to play competently.
      • hermanzegerman 1 hour ago
        No they aren't

        ChatGPT Health failed hilariously badly at just spotting emergencies.

        A few weeks ago, most of them failed hilariously badly at the question of whether you should drive or walk to the service station if you want to wash your car.

        • xp84 1 hour ago
          Idk about the health story, but in my use, ChatGPT has dramatically improved my understanding of my health issues and given sound and careful advice.

          The second question sounds like a useless and artificial metric to judge on. The average person might miss such a “gotcha” logical quiz too, for the same reason - because they expect to be asked “is it walking distance.”

          No one has ever relied on anyone else’s judgment, nor an AI, to answer “should I bring my car to the carwash.” Same for the ol’ “how many rocks shall I eat?” that people got the AI Overview tricked with.

          I’m not saying anything categorically “is AGI” but by relying on jokes like this you’re lying to yourself about what’s relevant.

        • bhouston 1 hour ago
          I would accuse you of nitpicking. My experience is that LLMs are generally as smart as the average human 90%+ of the time. A lack of perfection, to me, doesn't mean it isn't AGI.
          • phkahler 1 hour ago
            >> My experience is that LLMs are generally as smart as the average human 90%+ of the time. A lack of perfection, to me, doesn't mean it isn't AGI.

            In my experience, they contain more information than any human but they are actually quite stupid. Reasoning is not something they do well at all. But even if I skip that, they cannot learn. Inference is separate from training, so they cannot learn new things other than by trying to work with words in a context window, and even then they will only be able to mimic rather than extrapolate anything new.

            It's not the lack of perfect, it's the lack of reasoning and learning.

            • bhouston 1 hour ago
              I 100% agree that learning is missing. We make up for it in SKILLS.md and README.md files and RAGs of various types. And we train the LLMs to deal with these structures.

              I've seen a lot of reasoning in the latest models while engaging in agentic coding. It is often decent at debugging and experimentation, but around 30% of the time it goes down wrong paths and just adds unnecessary complexity via misdiagnosis.

      • rootusrootus 1 hour ago
        > LLMs are actually smarter than the majority of humans right now

        I consider myself a bit of a misanthrope but this makes me an optimist by comparison.

        Even stupid people are waaaaaay smarter than any LLM.

        The problem is the continued habit humans have of anthropomorphizing computers that spit out pretty words. It’s like Eliza only prettier. More useful for sure. Still just a computer.

        • bhouston 1 hour ago
          > Still just a computer.

          I don't believe in a separation of mind and spirit. So I do think that fundamentally, outside of a reliance on quantum effects in cognition (some have theorized this but it isn't proven), its processes can be replicated in a fashion in computers. So I think that intelligence likely can be "just a computer" in theory, and I think we are in the era where this is now true.

          • tombert 1 hour ago
            I don't believe in "spirits" from the get go. I think it's certainly theoretically possible that we could mimic human thought with a computer (quantum or otherwise) but I do not think that the LLMs we have now are doing that. I'd say that what we have right now is "just a computer".

            This doesn't mean they aren't useful, I like Claude a lot, but I don't buy that it's AGI.

      • flowardnut 1 hour ago
        "look, it completely lied about params that don't exist in a CLI!"
        • bhouston 1 hour ago
          AGI doesn't mean perfect. It means human-like, and the latest models are pretty human-like in terms of their fallibility and capabilities.
    • rootbear 14 minutes ago
      This sort of thing really bugs me! Marketing departments appropriate an existing term and use it in some new, often deceptive way. This goes all the way back to when IBM released “The IBM Personal Computer”, at a time when “personal computer” was a category name. Then Microsoft released Windows, when “windows” was a generic term for windowing systems. Intel did it with their “core” architecture. The list goes on.

      (Disclosure: I am a casual investor in ARM.)

    • kergonath 1 hour ago
      AGI is a poorly-defined concept anyway. It’s just vibes, nothing descriptive.
    • 0x3f 1 hour ago
      > Of course people don't realize that, and people will buy ARM stock thinking they've cracked AGI.

      Doesn't seem like a very credible assertion. Picking stocks in this way would remove you from the market pretty quickly.

      • PessimalDecimal 1 hour ago
        Didn't random companies add blockchain to their names just a few years ago and get 30+% jumps in stock price immediately?
        • 0x3f 29 minutes ago
          > Just because the stock goes up doesn't mean anyone was tricked. People invest in sentiment, in momentum, in all kinds of second order effects.
      • wiml 1 hour ago
        Yes, that's how fraud works a lot of the time. It removes you from the market but not until after it's removed your money. And there's an endless supply of new people ready to make the same mistake after you've learned your lesson.
      • tombert 1 hour ago
        I didn't say it would be a wise decision to pick stocks that way, but this has already happened: https://en.wikipedia.org/wiki/Long_Blockchain_Corp.

        Does an iced tea company changing their name to Long Blockchain make any sense? No, not really, it's pretty stupid actually, but it managed to bump the stock by apparently 380%.

        The stock market can be pretty dumb sometimes. Let's not forget the weird GME bubble.

        • 0x3f 30 minutes ago
          You're making claims not found in evidence. Just because the stock goes up doesn't mean anyone was tricked. People invest in sentiment, in momentum, in all kinds of second order effects.

          GME was hardly a trick either. If you actually read the subreddits at the time they were all perfectly aware of the nature of the thing. They literally go around calling it degenerate behavior (i.e. risky, frothy, baseless).

          Why is the assumption that you are smarter than everyone else? That you can interpret the world but everyone else needs protecting?

    • Ucalegon 1 hour ago
      Marketing is marketing; nothing about it was ever about being factual when there is a total addressable market to go after and dollars to be made! This is in line with much of the other marketing that exists in the AI space as it stands now, not to mention the use of AGI within the space currently.
      • tombert 1 hour ago
        Sure, but there are plenty of cases where a deceptive name has been considered enough to at least warrant an investigation: https://en.wikipedia.org/wiki/Long_Blockchain_Corp.

        I'm not saying anything is going to happen, Arm Holdings has a lot more money and lawyers than Long Blockchain did, but I'm just saying that it's not weird to think that a deceptive name could be considered false advertising.

        • Ucalegon 58 minutes ago
          That would not hold up considering that they consistently use 'agentic' in their press release and make no mention of 'artificial general intelligence'. Just because two things have the same acronym does not mean that they stand for the same thing. Marketing being cheeky is not a crime.
          • tombert 51 minutes ago
            It's not "being cheeky". They know that the holy grail for AI is AGI. They know that people are going to see the acronym AGI and assume Artificial General Intelligence. They know that people aren't going to read the full article.

            This isn't just a crass joke or a pun, it's outright deception. I'm not a lawyer, maybe it wouldn't hold up in court, but you cannot convince me that they aren't doing this on purpose.

            • Ucalegon 36 minutes ago
              Of course they did it on purpose, but that's not illegal. They are not at fault for individuals not reading what the acronym stands for; the intent they place within the press release is very, very clear. They are not obligated or liable for others' lack of due diligence.
    • LeifCarrotson 1 hour ago
      Those in the industry don't call it a lie, they call it "marketing".

      It's those out of the industry who call them lies.

      • tombert 1 hour ago
        Touché. I guess I should have said "I call it a lie".
    • monegator 1 hour ago
      In case you haven't noticed, this whole thing has been a grift since 2022. It's kind of amazing that nobody thought of making AGI processors before
    • alfalfasprout 1 hour ago
      the whole AI space is rife with much worse examples of what could be considered securities fraud tbh
  • aurareturn 2 hours ago
    This is just a Neoverse CPU that Arm will manufacture themselves at TSMC and then sell directly to customers.

    It isn't an "AI" CPU. There is nothing AI about it. There is nothing about it that makes it more AI than Graviton, Epyc, Xeon, etc.

    This was already revealed in the Qualcomm vs Arm lawsuit a few years ago. Qualcomm accused Arm of planning to sell their CPUs directly instead of just licensing. Arm's CEO at the time denied it. Qualcomm ended up being right.

    I wrote a post here on why Arm is doing this and why now: https://news.ycombinator.com/item?id=47032932

    • jasoneckert 1 hour ago
      This was exactly my first thought when I saw the title. And after reading the contents of the blog, it's pretty clear that ARM is laser-focused on getting a piece of their customers' cake by competing with them. This is likely why they are riding the AI hype train hard with their ill-suited name (AGI).

      Unfortunately for them, I think hardware vendors will see past the hype. They'll only buy the platform if it is very competitively priced (i.e., much cheaper) since fortune favours long-lived platforms and organizations like Apple and Qualcomm.

    • benob 1 hour ago
      This reminds me of Intel talking about faster web browsing with the new Pentium
  • steve1977 2 hours ago
    I think the interesting bit is actually this:

    For the first time in our more than 35-year history, Arm is delivering its own silicon products

    • HerbManic 57 minutes ago
      I can imagine a lot of ARM engineers, frustrated at seeing their cores used in stupid ways for decades, being excited to finally flex what they can do (outside of Apple).
    • djmips 32 minutes ago
      But really, how different is TSMC from VLSI making the ARM1? By your logic I would say that ARM has already delivered its own silicon product.
    • joshstrange 1 hour ago
      Agreed, it will be _very_ interesting to see what waves this causes. It would be like TSMC deciding to make and sell their own CPUs, now ARM is directly competing with some of their clients.
      • jballanc 59 minutes ago
        Eh, I'm not so sure it'll be that big a deal. The whole supply chain is so twisted and tangled all the way up and down. Shuffling out one piece doesn't seem like it will, on its own, be so major. Samsung made the chips for the iPhone, then made their own phone, then Apple designed their own chips made by TSMC, now Apple is exploring the possibility of having Samsung make those chips again.

        Also, it takes a willful ignorance of history for ARM to claim this is the first time they've manufactured hardware. I mean, maaaaybe, teeeeechnically that's true, but ARM was the Acorn RISC Machine, and Acorn was in the hardware business...at least as much as Apple was for the first iPhone.

    • brcmthrowaway 1 hour ago
      Do they need to hire Design Verification engineers for this?

      That's a huge cost compared to the average RTL jockey.

      • lizknope 38 minutes ago
        ARM already had tons of DV engineers. No company would license the RTL or any IP unless it has already been run through millions of simulations in DV.
    • lenerdenator 1 hour ago
      What would be the real advantage of doing that?
  • rafram 2 hours ago
    AGI (Agentic AI Infrastructure) is joining CSS (Compute Subsystems) in their lineup, apparently. Who’s naming this stuff?
    • LikesPwsh 2 hours ago
      The same people who abbreviate "generative" AI in a way that misleadingly conflates it with "general" AI.

      Fraud is just the default lifestyle of marketers.

    • LollipopYakuza 2 hours ago
      So Artificial General Intelligence and Cascading Style Sheets are not joining forces?
      • lenerdenator 1 hour ago
        If there's ever a singularity as a result of AGI, it will likely look at CSS and decide that extermination is simply too good for the human race.
      • rafram 1 hour ago
        Always have been :)
  • throwa356262 3 hours ago
    AGI = Agentic AI Infrastructure

    In case you were thinking about some other abbreviation...

    • conductr 2 hours ago
      Missed opportunity to call it AAII and market it as twice as powerful as regular AI.
      • jayd16 1 hour ago
        We put AI in our AI so the AI is already baked in.
      • flopsamjetsam 1 hour ago
        A^2I^2 or (AI)^2
    • ux266478 2 hours ago
      I think this is a poetic encapsulation of the AI industry at this point. A beautifully poignant vignette.
    • RealityVoid 3 hours ago
      It's... really something. Not good. Something.
    • bee_rider 2 hours ago
      It’s like they decided to moon all the onlookers while jumping the shark…

      I don’t know if it was intentional or they were so far out over their skis that they got their bathing suit caught, but it’s impressive either way.

    • ww520 1 hour ago
      Should have called it A^3I^2 - Arm Agentic Artificial Intelligence Infrastructure.
    • monegator 2 hours ago
      what lengths are they going to, just to say we have achieved AGI... now who's moving the goalpost?
    • charcircuit 2 hours ago
      AGI stands for Artificial General Intelligence.
      • lock1 2 hours ago
        Pretty sure it stands for "Artificial abbreviation & hype GeneratIon" nowadays
      • hagbard_c 2 hours ago
        Are you sure it doesn't stand for Advanced Guessing Instrument? That's what the results often seem to indicate, after all.
    • hootz 2 hours ago
      What a terrible, terrible name.
    • esafak 2 hours ago
      The coast is clear to come up with your own expansion for AI!
    • lupajz 3 hours ago
      I mean, they could at least use AI to figure out how to name their AI product.
      • embedding-shape 2 hours ago
        > I work at ARM, we're launching a new CPU optimized for LLM usage. We're thinking of calling it "Arm Agentic AI Infrastructure CPU", or "Arm AGI CPU" for short. Do you think this is a good idea?

        > No. I would not use it as the product name. “AGI CPU” will be read as artificial general intelligence, not “agentic AI infrastructure,” so it invites confusion and sounds hypey.

        Too bad these executives seemingly don't have access to ChatGPT.

      • _ache_ 2 hours ago
        They did ask AI if AGI was a great name. It said that it was the greatest name possible. It's bold, aspirational, and ... polarizing?!

        Oh god! Mistral tells me it's highly polarizing, will make a buzz, and is risky, but that people will at least know that ARM is doing CPUs again now (maybe I put in too much context).

      • foolproofplan 2 hours ago
        Maybe they did, and that's why they got this slop?
    • artyom 2 hours ago
      Not bait at all
    • SilverElfin 2 hours ago
      They pathetically don’t mention what it stands for anywhere in this press release. Deceptive marketing at worst, shameless AI-washing at best.
    • WhrRTheBaboons 2 hours ago
      I would've gone for Agentic Neural Infrastructure personally

      ARMANI for short /s

  • mkl 2 hours ago
    This is like naming your kid World President Smith.
    • rboyd 2 hours ago
      This could work. Right? https://psycnet.apa.org/record/2002-12744-001

      My realtor's last name is House

      • conductr 2 hours ago
        > Studies 1-5 showed that people are disproportionately likely to live in places whose names resemble their own first or last names (e.g., people named Louis are disproportionately likely to live in St. Louis).

        When I lived in Austin, it seemed like a third of boys born were being named Austin. I presume many of them will end up living there as adults, but not because of this particular bias; that they were raised there and have family there seems to be a more likely driver.

        • chrisweekly 2 hours ago
          "Nominative determinism" is everywhere once you look for it. My vet's last name is McStay.
          • krrrh 1 hour ago
            I just listened to an interview with Carl Trueman about his new book which criticizes transhumanism.
      • hn_acc1 1 hour ago
        Seems more likely this falls under the replication crisis umbrella. My wife's favorite numbers are my birthday (mm-dd), which is a small reason she fell in love with me. Neither of those numbers are related to her birthday. My favorite number(s) do not overlap with my birthday. Maybe my mm-dd values just aren't low enough, like 02-02?
      • technothrasher 2 hours ago
        > Studies 1-5 showed that people are disproportionately likely to live in places whose names resemble their own first or last names

        There are several cities in the US that share my last name. I don't live near any of them.

        > Study 6 extended this finding to birthday number preferences.

        D'oh!

      • tombert 1 hour ago
        My urologist, and I swear I'm not making this up, has the last name "Wiener".
        • rootbear 3 minutes ago
          My friend M. Goode’s father was a urologist named Dr. P. Goode. For real.
      • IshKebab 2 hours ago
        Reporting bias.
  • HeyMeco 31 minutes ago
    The non marketing fluff version of the press release can be found here: https://news.ycombinator.com/item?id=47506641
  • RealityVoid 4 hours ago
    Arm apparently now sells their own CPUs.
  • JSR_FDED 36 minutes ago
    This can’t come fast enough, I’ll finally be able to use CSS.
  • moritzwarhier 1 hour ago
    I miss the all-capitals ARM spelling.

    Seeing "Arm AGI" spelled out on a page with an "arm" logo looks slightly cheesy.

    But maybe it's actually a good fit for the societal revolution driven by AGI, comparable to the one driven by the DOT.com RevoLut.Ion. (dot com).

    Anyways, it sounds like an A.R.M. branded version of the AppleSilicon revolution?

    But maybe that's just my shallow categorization.

  • yabutlivnWoods 3 hours ago
    How fun would it be if, due to improved chips handling more model state, RAM needs were reduced and Sama couldn't make good on all those RAM purchases he booked?

    A VC without a degree and no grasp of hardware engineering failed up when all he had to do was noodle numbers in an Excel sheet.

    He is so far behind the hardware scene he thinks it's sitting still and RAM requirements will be a nice linear path to AGI. Not if new chips optimized for model streaming crater RAM needs.

    Hilarious how last decade's software geniuses are being revealed as incompetent finance engineers whose success was all due to ZIRP offering endless runway.

    • gtowey 2 hours ago
      The thing they are good at is bullshitting and selling hype. Which, as we see here, doesn't mean they are actually going to be good at running a business. Smart leaders understand they are not omnipotent and omniscient, so they surround themselves with people who know how to get things done. Weak, narcissistic leaders think they're the smartest one in the room and fail.

      Unfortunately failing upwards is still somehow common, probably because the skill of parting fools from their money is still valuable.

      • thereitgoes456 2 hours ago
        No, he is also good at networking. When OpenAI was mission-driven and Sam was more respected, he could convince the most talented people to work for him.

        Now the talent is going to other places for a variety of reasons, not all due to Sam (one of which is little room for options to grow). However, it's hard to believe his tanking reputation is not badly hurting the company. Other than Jakub and Greg, I believe there are not many top-tier people left; those in top positions are there because they are yes-men to Sam.

    • mhjkl 2 hours ago
      What RAM? OpenAI booked the silicon wafers, they can print anything they want on them. I wouldn't call them "far behind" on hardware when OpenAI are actively buying Cerebras chips.
      • yabutlivnWoods 1 hour ago
        Yes, exactly; he is behind in that he has to buy others' chips with little say in how they work.

        Apple and Google control their own designs.

        Sama is 100% an outsider, merely a customer. The chip insiders are onto his effort to pivot out of meme-stock hyping into owning a chunk of their fiefdom. They laughed off his claims a couple of years ago as insane VC gibberish (third-hand paraphrase from my social network in chip and hardware land).

        No way he can pivot and print whatever. Relative to the hardware industry, he is one of those programmers who can say just enough to get an interview but whiffs the code challenge.

        He has no idea where the bleeding edge is so he will just release dated designs. Chip IP is a moat.

        Plus, a bunch of RAM companies would be left hanging; no orders, no wafers. Sama risks being Jimmy Hoffa'd for imploding the asset values of other billionaires.

  • varenc 22 minutes ago
    "AGI" continues to lose all meaning.
  • papichulo2023 3 hours ago
    What does "Built for rack-scale agentic efficiency" even means?
    • throwa356262 2 hours ago
      If you read past the marketing talk, this is basically a massively multicore system (136 cores) with significantly reduced power usage (300W).

      Where does "agentic" come into this? Arm's explanation is that future agentic workloads will be both CPU- and GPU-bound, thus the need for significant CPU efficiency.

    • inerte 2 hours ago
      It's volume of tokens consumed x number of agents x rack space. Basically agentic computation density.
    • ray_v 3 hours ago
      We just say words now that sound good for marketing but have no real meaning.
      • girvo 6 minutes ago
        > now

        I’d argue we have always done that, and in fact it’s basically the definition of marketing!

    • thewebguyd 2 hours ago
      Big "but mongodb is web scale" vibes
    • varispeed 3 hours ago
      It's a code phrase for "let's go to the utility room to cross-pollinate ideas."
    • r_lee 2 hours ago
      I was gonna say it's just big DCs in marketing yap, but really, wtf does that mean?
    • otabdeveloper4 2 hours ago
      It's when LLM agents are inefficient that you need a whole rack of servers to get shit done.
    • sdwvit 2 hours ago
      Translation: “Can you give us some money pretty please?”
  • bt1a 1 hour ago
    Oh wow, already in use by Meta, OpenAI, and more?? https://www.arm.com/products/cloud-datacenter/arm-agi-cpu/ec...

    The TDP to memory bandwidth & capacity ratio of these blades is in a class of its own, yes?

  • ahmedfromtunis 1 hour ago
    Poor TSMC (and ASML)! They were already struggling with capacity to fulfill orders from their established customers. With ARM now joining the party, I don't know how they're going to cope.

    Edit: The new CPU will be built with the soon-to-be-former leading edge process of 3nm lithography.

    • bigyabai 1 hour ago
      TSMC has multiple fabs being constructed, they'll be okay. The biggest losers here are AMD, Intel and Apple who will be forced to pay AI-hype prices to mass-produce boring consumer hardware.
  • midnightdiesel 2 hours ago
    What a product name choice! I wasn’t expecting ARM to pivot to selling snake oil.
  • bobmcnamara 1 hour ago
    6GB/s/core

    That's... not much, right? Maybe it's a lot multiplied across N cores? But I really hope each individual core isn't limited to that.

    Edit: 17 minutes to sum RAM?
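
    (Back-of-the-envelope, assuming something like 6 TB of RAM per node, which is a guess on my part: 6 TB / 6 GB/s ≈ 1,000 s, or roughly 17 minutes, for one core to stream through all of it.)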

    • jeffbee 1 hour ago
      It isn't obvious to me whether they intended this as the maximum single-core performance or just the proportional share of 844GB/s across 136 cores. Implementations of Neoverse V2 by Nvidia and Amazon hit 20-30GB/s in single-threaded work.
  • wewewedxfgdf 51 minutes ago
    Seems like hubris to use this name.
  • vsgherzi 1 hour ago
    Is this a CPU that's meant for AI training, or is it more for serving inference? I don't quite get why I would want to buy an Arm CPU over an Nvidia GPU for AI applications.
  • josemanuel 2 hours ago
    Interesting that Jensen Huang joined in the congratulations for this new product!
  • twostorytower 2 hours ago
    And the stock is down >2% today
  • zackmorris 30 minutes ago
    It only took a quarter century, but I'm glad that somebody is finally adding a little multicore competition since Moore's law began failing in the mid-2000s.

    I looked around a bit, and the going rate appears to be about $10,000 per 64 cores, or around $150 per core. Here is an Intel Xeon Platinum 8592+ 64 Core Processor with 61 billion transistors:

    https://www.itcreations.com/product/144410

    So that's about 6 million transistors per dollar, or 1 billion transistors for roughly $160.

    It looks like Arm's 136 core Neoverse V3 has between 150 and 200 billion transistors, so at that rate it should cost around $25,000-33,000, with each blade carrying 2 of those chips. It doesn't say how much memory the blades come with, but that's a secondary concern.

    Note that this is way too many cores for 1 bus, since by Amdahl's law, more than about 4-8 cores per bus typically results in the remaining cores getting wasted. Real-world performance will be bandwidth-limited, so I would expect a blade to perform about the same as a 16-64 core computer. But that depends on mesh topology, so maybe I'm wrong (AI thinks I might be):

      Intel Xeon Scalable: Switched from a Ring to a Mesh Architecture starting with Skylake-SP to handle higher core counts.
      
      Arm Neoverse V3 / AGI: Uses the Arm CMN-700 (Coherent Mesh Network), which is a high-bandwidth 2D mesh designed specifically to link over 100 cores and multiple memory controllers.
    
    I find all of this to be somewhat exhausting. We're long overdue for modular transputers. I'm envisioning small boards with 4-16 cores between 1-4 GHz and 1-16 GB of memory approaching $100 or less with economies of scale. They would be stackable horizontally and vertically, to easily create clusters with as many cores as one desires. The cluster could appear to the user as an array of separate computers, a single multicore computer running in a unified address space, or various custom configurations. Then libraries could provide APIs to run existing 3D, AI, tensor and similar SIMD code, since it's trivial to run SIMD on MIMD but very challenging to run MIMD on SIMD. This is similar to how we often see Lisp runtimes written in C/C++, but never C/C++ runtimes written in Lisp.

    It would have been unthinkable to design such a thing even a year ago, but with the arrival of AI, that seems straightforward, even pedestrian. If this design ever manifests, I do wonder how hard it would be to get into a fab. It's a chicken and egg problem, because people can't imagine a world that isn't compute-bound, just like they couldn't imagine a world after the arrival of AI.

    Edit: https://news.ycombinator.com/item?id=47506641 has Arm AGI specs. Looks like it has DDR5-8800 (12x DDR5 channels) so that's just under 12 cores per bus, which actually aligns well with Amdahl's law. Maybe Arm is building the transputer I always wanted. I just wish prices were an order of magnitude lower so that we could actually play around with this stuff.
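
    As a rough sanity check on the bandwidth side (assuming each DDR5 channel is the standard 64-bit, i.e. 8-byte, bus):

      12 channels x 8,800 MT/s x 8 B ≈ 844.8 GB/s aggregate
      844.8 GB/s / 136 cores ≈ 6.2 GB/s per core

    which lines up with the ~6 GB/s/core figure quoted elsewhere in the thread.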

  • oxag3n 1 hour ago
    Why not ASI? They aim too low.
  • SilverElfin 2 hours ago
    Call this an “AGI CPU” just feels like the most out of touch, terrible marketing possible. Maybe this is unfair but it makes me think ARM as a whole is incompetent just because it is so tasteless.

    > Arm has additionally partnered with Supermicro on a liquid-cooled 200kW design capable of housing 336 Arm AGI CPUs for over 45,000 cores.

    Also just bad timing on trying to brag about a partnership with Supermicro, after a founder was just indicted on charges of smuggling Nvidia GPUs. Just bizarre to mention them at all.

  • rvz 3 hours ago
    Meta are heavily invested in building their own chips with ARM to reduce their reliance on Nvidia as everyone is going after their (Nvidia) data center revenues.

    This is why Meta acquired a chip startup [0] months ago.

    [0] https://www.reuters.com/business/meta-buy-chip-startup-rivos...

  • torusle 2 hours ago
    ARM riding the "everything is AI" train.

    So sad.

  • myhf 1 hour ago
    finally, a CPU capable of making API calls to cloud providers
  • einpoklum 1 hour ago
    If I try to cut through the hype, it seems the main features of this processor, or rather processor + memory controller + system architecture, are < 100 ns for accessing anything in system memory and 6 GB/sec for each of a large-ish number of cores, so a (much?) higher overall bandwidth than what we would see in a comparable Intel x86_64 machine.

    Am I right or am I misunderstanding?

  • nurettin 3 hours ago
    I was wondering who convinced ARM to manufacture hardware. Turns out it was Meta.
    • cmrdporcupine 2 hours ago
      Now if only they would go back to being "Acorn RISC Machines" and make a nice desktop home computer again...

      One can dream.

      • wmf 1 hour ago
        DGX Spark is pretty nice. It could be cheaper if they removed the NIC though.
        • cmrdporcupine 1 hour ago
          I have the ASUS variant. I like it well enough.

          I see the NIC as a form of future proofing, but we'll see.

          My Ryzen 9 mini-PC from 2 years ago outperforms this thing in raw CPU, though.

    • walterbell 2 hours ago
      Nuvia/Qualcomm lawsuit and Softbank.
    • redwood 2 hours ago
      Fabless. Like AMD and Nvidia. So I would think about it more as branding and packaging than manufacturing.
      • anvuong 2 hours ago
        Huh, many companies use TSMC, in fact probably all of them, including Intel, yet only a few dominate in performance. There is much more to designing chips than what you just listed.
        • i_am_a_peasant 1 hour ago
          Intel uses its own fabs for certain IP, TSMC for others, yeah. As far as I've seen, the latest and greatest Panther Lake stuff is made in Intel's Arizona fabs.
      • IshKebab 2 hours ago
        There's a big difference between just providing IP and actually doing the physical design, manufacturing and packaging. You can't just send your RTL to TSMC and magically get packaged chips back.

        I haven't ever ordered an ARM SoC but I also wouldn't be surprised if there were significant parts that they left up to integrators before - PLLs, pads, SRAM etc.

  • jeffbee 2 hours ago
    Many of these words are unexplained. "Memory and I/O on the same die". Oh? What does this mean? All of the DRAM in the photo/render is still on sticks. Do they mean the memory controller? Or is there an embedded DRAM component?
    • ahoka 2 hours ago
      All processors have memory on the same die.
      • jeffbee 1 hour ago
        How much, what kind, and what is your source?

        All mainstream server CPUs have a megabyte or two of SRAM on a core, of course.

  • DeathArrow 1 hour ago
    Now every product will have the AI buzzword in its name, just like 25 years ago when product names started with the letter "e", for "electronic".

    So we will see AI Toilet Paper launching in the next months.

  • vova_hn2 3 hours ago
    I found this article extremely frustrating to read. Maybe I lack some required prior knowledge and I am not the target audience for this.

    > built on the Arm Neoverse platform

    What the heck is "Arm Neoverse"? No explanation given, link leads to website in Chinese. Using Firefox translating tool doesn't help much:

    > Arm Neoverse delivers the best performance from the cloud to the edge

    What? This is just a pile of buzzwords, it doesn't mean anything.

    The article doesn't seem to contain any information on how much it costs or any performance benchmarks to compare it with other CPUs. It's all just marketing slop, basically.

    • nicoburns 3 hours ago
      > The ARM Neoverse is a group of 64-bit ARM processor cores licensed by Arm Holdings. The cores are intended for datacenter, edge computing, and high-performance computing use. The group consists of ARM Neoverse V-Series, ARM Neoverse N-Series, and ARM Neoverse E-Series.

      https://en.wikipedia.org/wiki/ARM_Neoverse

    • snek_case 2 hours ago
      I feel like this is most products in the AI space lately. More marketing fuzz than substance. Hard to figure out what the thing even does.