With the same data augmentation / 'test time training' setting, vanilla Transformers do pretty well, close to the "breakthrough" HRM reported. From a brief skim, this paper uses similar settings to compare itself on ARC-AGI.
I, too, want to believe in smaller models with excellent reasoning performance. But first understand what ARC-AGI tests for, what the general setting is -- the one that commercial LLMs use to compare against each other -- and what the specialised setting is that HRM and this paper use for evaluation.
The naming of that benchmark lends itself to hype, as we've seen in both HRM and this paper.
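(For concreteness, the "data augmentation / test time training" setting above usually means blowing up the handful of demonstration grids per task with symmetry transforms and colour relabelling. The sketch below is the general idea only, not the exact HRM/TRM pipeline; the helper name and parameters are made up.)

    import numpy as np

    def augment_pairs(pairs, n_aug=8, seed=0):
        # Expand ARC-style (input, output) grid pairs with random dihedral
        # transforms plus a consistent colour permutation (a sketch only).
        rng = np.random.default_rng(seed)
        out = []
        for _ in range(n_aug):
            k = rng.integers(0, 4)        # rotate by 0/90/180/270 degrees
            flip = rng.integers(0, 2)     # optionally mirror
            perm = rng.permutation(10)    # relabel the 10 ARC colours
            def t(g):
                g = np.rot90(np.asarray(g), k)
                return perm[np.fliplr(g) if flip else g]
            out += [(t(x), t(y)) for x, y in pairs]
        return out

    # toy task with a single demonstration: "recolour 1 -> 2"
    demo = [(np.array([[0, 1], [1, 0]]), np.array([[0, 2], [2, 0]]))]
    print(len(augment_pairs(demo)))       # 8 augmented copies of the one demonstration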
Not exactly "vanilla Transformer", but rather "a Transformer-like architecture with recurrence".
Which is still a fun idea to play around with - this approach clearly has its strengths. But it doesn't appear to be an actual "better Transformer". I don't think it deserves nearly as much hype as it gets.
With recurrence: The idea has been around: https://arxiv.org/abs/1807.03819
There are reasons why it hasn't really been picked up at scale, and the method tends to do well on synthetic tasks.
The TRM paper addresses this blog post. I don't think you need to read the HRM analysis very carefully; the TRM has the advantage of being disentangled compared to the HRM, making ablations easier. I think the real value of the arcprize HRM blog post is to highlight the importance of ablation testing.
I think ARC-AGI was supposed to be a challenge for any model. The assumption being that you'd need the reasoning abilities of large language models to solve it. It turns out that this assumption is somewhat wrong. Do you mean that HRM and TRM are specifically trained on a small dataset of ARC-AGI samples, while LLMs are not? Or which difference exactly do you hint at?
> Do you mean that HRM and TRM are specifically trained on a small dataset of ARC-AGI samples, while LLMs are not? Or which difference exactly do you hint at?
Yes, precisely this. The question is really what is ARC-AGI evaluating for?
1. If the goal is to see if models can generalise to the ARC-AGI evals, then models being evaluated on it should not be trained on the tasks. Especially IF ARC-AGI evaluations are constructed to be OOD from the ARC-AGI training data. I don't know if they are. Further, there seems to be usage of the few-shot examples in the evals to construct more training data in the HRM case. TRM may do this via other means.
2. If the goal is that, even _having seen_ the training examples and created more training examples (after having peeked at the test set), these evaluations should still be difficult, then the ablations show that you can get pretty far without universal/recurrent Transformers.
If 1, then I think the ARC-prize organisers should have better rules laid out for the challenge. From the blog post, I do wonder how far people will push the boundary (how much can I look at the test data to 'augment' my training data?) before the organisers say "This is explicitly not allowed for this challenge."
If 2, the organisers of the challenge should have evaluated how much of a challenge it would actually have been when allowing extreme 'data augmentation', and maybe realised it wasn't that much of a challenge to begin with.
I tend to agree, given the outcome of both the HRM and this paper, that the ARC-AGI folks do seem to allow this setting, _and_ that the task isn't as "AGI complete" as it sets out to be.
I should probably also add: It's long been known that Universal / Recursive Transformers are able to solve _simple_ synthetic tasks that vanilla transformers cannot.
Just check out the original UT paper, or some of its follow ups: Neural Data Router, https://arxiv.org/abs/2110.07732; Sparse Universal Transformers (SUT), https://arxiv.org/abs/2310.07096. There is even theoretical justification for why: https://arxiv.org/abs/2503.03961
The challenge is actually scaling them up to be useful as LLMs as well (I describe why it's a challenge in the SUT paper).
It's hard to say, with the way ARC-AGI is allowed to be evaluated, if this is actually what is at play. My gut tells me, given the type of data that's been allowed in the training set, that some leakage of the evaluation has happened in both HRM and TRM.
But because as a field we've given up on actually carefully ensuring training and test don't contaminate, we just decide it's fine and the effect is minimal. Especially with LLMs, a test-set example leaking into the dataset is merely a drop in the bucket (I don't believe we should be dismissing it this way, but that's a whole 'nother conversation).
With challenge-targeted models like these, that leakage becomes a much larger proportion of what influences the model's behaviour, especially if the open evaluation sets are there for everyone to look at and simply generate more from. Now we don't know if we're generalising or memorising.
Makes me think once again about the similarity between Finite Impulse Response[1] filters (traditional LLMs) and Infinite Impulse Response[2] filters (recursive models). Not that it's a very good or original analogy.
Anyway, with FIR you typically need many, many times the coefficients to get similar filter cutoff performance as what a few IIR coefficients can do.
You can convert an IIR to an FIR using, for example, the window design method[3], where if you use a rectangular window function you essentially unroll the recursion but stop after some finite depth.
Similarly, it seems that if you unroll the TRM you end up with the traditional LLM architecture of many repeated attention+FF blocks, minus the global feedback part. And unlike a true IIR, the TRM does implement a finite cut-off, so in that sense it is more like a traditional FIR/LLM than the structure suggests.
So it would perhaps be interesting to compare the TRM network to a similarly unrolled version.
Then again, maybe this is all mad ramblings from a sleep-deprived mind.
[1]: https://en.wikipedia.org/wiki/Finite_impulse_response
[2]: https://en.wikipedia.org/wiki/Infinite_impulse_response
[3]: https://en.wikipedia.org/wiki/Finite_impulse_response#Window...
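(A toy illustration of the truncation point above, assuming only numpy/scipy: a one-pole IIR smoother versus the FIR you get by chopping its impulse response off after N taps, i.e. a rectangular window.)

    import numpy as np
    from scipy.signal import lfilter

    # One-pole IIR low-pass: y[n] = (1 - a) * x[n] + a * y[n-1]
    a = 0.9
    b_iir, a_iir = [1 - a], [1, -a]

    # "Unroll" it: the impulse response is (1 - a) * a**n. Keeping only the
    # first N taps (a rectangular window) gives an ordinary FIR filter.
    N = 32
    h_fir = (1 - a) * a ** np.arange(N)

    x = np.random.default_rng(0).standard_normal(1000)
    y_iir = lfilter(b_iir, a_iir, x)
    y_fir = lfilter(h_fir, [1.0], x)     # plain convolution with the truncated response

    print("max |difference|:", np.max(np.abs(y_iir - y_fir)))   # small, since a**32 ~ 0.03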
Wow, so not only are the findings from https://arxiv.org/abs/2506.21734 (posted on HN a while back) confirmed, they're generalizable? Intriguing. I wonder if this will pan out in practical use cases; it'd be transformative.
Also would possibly instantly void the value of trillions of pending AI datacenter capex, which would be funny. (Though possibly not for very long.)
https://arcprize.org/blog/hrm-analysis
This here looks like a stripped down version of HRM - possibly drawing on the ablation studies from this very analysis.
Worth noting that HRMs aren't generally applicable in the same way normal transformer LLMs are. Or, at least, no one has found a way to apply them to the typical generative AI tasks yet.
I'm still reading the paper, but I expect this version to be similar - it uses the same tasks as HRMs as examples. Possibly quite good at spatial reasoning tasks (ARC-AGI and ARC-AGI-2 are both spatial reasoning benchmarks), but it would have to be integrated into a larger more generally capable architecture to go past that.
That's a good read, also shared by another poster above, thanks! If I'm reading this right, it contextualizes, but doesn't negate, the findings from that paper.
I've got a major aesthetic problem with the fact that LLMs require this much training data to get where they are, namely "not there yet"; it's brute force by any other name, and just plain kind of vulgar. More importantly, though, it won't scale much further. Novel architectures will have to feature in at some point, and I'll gladly take any positive result in that direction.
Evolution is brute force by any other name. Nothing elegant about it. Nonetheless, here you are.
Poor sample efficiency of the current AIs is a well known issue - but you should keep in mind what kind of grisly process was required to give you the architecture that makes you as sample efficient as you are.
We don't know yet what kind of architectural quirks enable this sample efficiency in the human brain. It could be something like a non-random initialization process that confers the right inductive biases, a more efficient optimizer, recurrent background loops... or just more raw juice.
It might be that one biological neuron is worth 10000 LLM weights, and a big part of how the brain is so sample efficient is that it's hilariously overparametrized.
Yeaaaaaah, I kinda doubt there's much coming from evolutionary biases.
If it's a matter of clever initialization bias, it's gotta be pretty simple to survive the replication via DNA and procedural generative process in the meat itself, alongside all of the other stuff which /doesn't/ differentiate us from chimpanzees. Likely simple enough that we would just find something similar ourselves through experimentation. There's also plenty of examples of people learning Interesting Unnatural Stuff using their existing hardware (eg, echolocation, haptic vision, ...) which suggests generality of learning mechanisms in the brain.
The brain implements some kind of fairly general learning algorithm, clearly. There's too little data in the DNA to wire up 90 billion neurons the way we can just paste 90 billion weights into a GPU over a fiber optic strand. But there's a lot of innate scaffolding that actually makes the brain learn the way it does. Things like bouba and kiki, instincts, all the innate little quirks and biases - they add up to something very important.
For example, we know from neuroscience that humans implement something not unlike curriculum learning - and a more elaborate version of it than what we use for LLMs now. See: sensitive periods. Or don't see sensitive periods - because if you were born blind, but somehow regained vision in adulthood, it'll never work quite right. You had an opportunity to learn to use the eyes well, and you missed it.
Also, I do think that "clever initialization" is unfortunately quite plausible. Unfortunately - because yes, it has to be simple enough to be implemented by something like a cellular automaton, so the reason why we don't have it already is that the search space of all possible initializations a brain could implement is still extremely vast and we're extremely dumb. Plausible - because of papers like this one: https://arxiv.org/abs/2506.20057
If we can get an LLM to converge faster by "pre-pre-training" it on huge amounts of purely synthetic, algorithmically generated meaningless data? Then what are the limits of methods like that?
No, it's not.
That analysis provided a very non-abrasive wording of their evaluation of HRM and its contributions. The comparison with a recursive / universal transformer on the same settings is telling.
"These results suggest that the performance on ARC-AGI is not an effect of the HRM architecture. While it does provide a small benefit, a replacement baseline transformer in the HRM training pipeline achieves comparable performance."
I think they would just adopt this idea and use it to continue training huge but more capable models.
> Also would possibly instantly void the value of trillions of pending AI datacenter capex
GPU compute is not just for text inferencing. The demand for video generation is something I don't think we'll saturate for quite a while, even with breakthroughs.
It doesn't matter how much compute you have; you'll always be able to saturate it one way or another with AI, and having more compute will forever be an advantage.
If a breakthrough in AI happens you'll get multiplied benefits, not losses.
https://en.wikipedia.org/wiki/Jevons_paradox
" With only 7M parameters,
TRM obtains 45% test-accuracy on ARC-AGI-
1 and 8% on ARC-AGI-2, higher than most
LLMs (e.g., Deepseek R1, o3-mini, Gemini 2.5
Pro) with less than 0.01% of the parameters"
That is very impressive.
Side note: Superficially reminds me of Hierarchical Temporal Memory from Jeff Hawkins' "On Intelligence". Although this doesn't have the sparsity aspect, its hierarchical and temporal aspects are related.
https://en.wikipedia.org/wiki/Hierarchical_temporal_memory https://www.numenta.com
Hierarchical Reasoning Model (HRM) is a novel approach using two small neural networks recursing at different frequencies.
This biologically inspired method beats Large Language models (LLMs) on hard puzzle tasks such as Sudoku, Maze, and ARC-AGI while trained with small models (27M parameters) on small data (around 1000 examples). HRM holds great promise for solving hard problems with small networks, but it is not yet well understood and may be suboptimal.
We propose Tiny Recursive Model (TRM), a much simpler recursive reasoning approach that achieves significantly higher generalization than HRM, while using a single tiny network with only 2 layers.
With only 7M parameters, TRM obtains 45% test-accuracy on ARC-AGI-1 and 8% on ARC-AGI-2, higher than most LLMs (e.g., Deepseek R1, o3-mini, Gemini 2.5 Pro) with less than 0.01% of the parameters.
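(A loose sketch, in PyTorch, of what "a single tiny network used recursively" can look like. This is only the general shape of the idea, not the TRM architecture from the paper; the module name, sizes and update rule here are invented for illustration.)

    import torch
    import torch.nn as nn

    class TinyRefiner(nn.Module):
        # One small 2-layer block, applied recursively to refine a latent z and an
        # answer y. A generic sketch of the idea only -- not the paper's exact TRM.
        def __init__(self, d=64, n_refine=6):
            super().__init__()
            self.n_refine = n_refine
            self.block = nn.Sequential(nn.Linear(3 * d, d), nn.ReLU(), nn.Linear(d, d))
            self.readout = nn.Linear(d, d)

        def forward(self, x):
            y = torch.zeros_like(x)                           # current answer guess
            z = torch.zeros_like(x)                           # latent scratchpad
            for _ in range(self.n_refine):
                z = self.block(torch.cat([x, y, z], dim=-1))  # rethink given input + guess
                y = y + self.readout(z)                       # refine the answer
            return y

    model = TinyRefiner()
    print(model(torch.randn(8, 64)).shape,                    # torch.Size([8, 64])
          sum(p.numel() for p in model.parameters()), "parameters")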
"With only 7M parameters, TRM obtains 45% test-accuracy on ARC-AGI-1 and 8% on ARC-AGI-2, higher than most LLMs (e.g., Deepseek R1, o3-mini, Gemini 2.5 Pro) with less than 0.01% of the parameters."
Well, that's pretty compelling when taken in isolation. I wonder what the catch is?
It won't be any good at factual questions, for a start; it will be reliant on an external memory. Everything would have to be reasoned from first principles, without knowledge.
My gut feeling is that this will limit its capability, because creativity and intelligence involve connecting disparate things, and to do that you need to know them first.
Though philosophers have tried, you can't unravel the mysteries of the universe through reasoning alone. You need observations, facts.
What I could see it being good for is a dedicated reasoning module.
Basic English is about 2000 words. So a small-scale LLM that would be capable of reasoning in Basic English, and of transforming a problem in normal English to Basic English by automatically including the relevant word/phrase definitions from a dictionary, could easily beat a large LLM (by being more consistent).
I think this is where all reasoning problems of LLMs will end up. We will use an LM to transform a problem in informal English (human language) into a formal logical language (possibly fuzzy and modal), from that possibly into an even simpler logic, then we will solve the problem in the logical domain using traditional reasoning approaches, and convert the answer back to informal English. That way, you won't need to run a large model during the reasoning. Larger models will only be useful as fuzzy K-V stores (attention mechanism) to help drive heuristics during the reasoning search.
I suspect the biggest obstacle to AGI is philosophical: we don't really have a good grasp/formalization of human/fuzzy/modal epistemology. Even if you look at the formalization of mathematics, it's mostly about proofs, but we lack an understanding of what, e.g., an interesting mathematical problem is, or how to even express in formal logic that something is a problem, or that experiments suggest something, that one model has an advantage over another in this respect, that there is a certain cost associated with testing a hypothesis, etc. Once we figure out what we actually want in epistemology, I am sure the algorithm required will be greatly reduced.
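(A minimal sketch of that pipeline with the natural-language-to-logic step done by hand and z3 standing in for the "traditional reasoning approaches"; the toy problem and variable names are made up.)

    from z3 import Int, Solver, sat

    # Informal English: "Alice is 3 years older than Bob; together they are 27."
    # An LM would emit the constraints; the solver does the actual reasoning.
    alice, bob = Int("alice"), Int("bob")

    s = Solver()
    s.add(alice == bob + 3, alice + bob == 27, bob >= 0)

    if s.check() == sat:
        m = s.model()
        # ...and the answer gets translated back into informal English.
        print(f"Alice is {m[alice]} and Bob is {m[bob]}.")   # Alice is 15 and Bob is 12.
    else:
        print("The constraints are unsatisfiable.")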
Take the knowledge the average human has about integrating visual information with texture of an object. Nearly every adult can take a quick glance around a room and have a good idea what it will feel like to run your fingers along its surface, or your lips, or even your tongue, and be able to describe the experience. We have this knowledge because when we were infants and toddlers, everything we encountered was picked up, pulled towards our mouth, and touched by our hands. An AGI inside a computer cannot have that experience today, so it will lack the foundations of intelligence that humans have built up by interacting with the real world.
At some point it will become possible to either collect that data or simulate an experience sufficiently accurately to mimic the development a human child goes through. Until that happens, true AGI will be out of reach as it will have deficiencies the average human does not.
That said, a lot of people will try to get to that point using other means, and they'll probably get pretty close, albeit with really weird hallucinations in the corner cases.
We'll need a memory system, an executive function/reasoning system as well as some sort of sense integration - auditory, visual, text in the case of LLMs, symbolic probably.
A good avenue of research would be to see if you could glue OpenCyc to this for external "knowledge".
If we could somehow weave a reasoning tool directly into the inference process, without having to use the context for it, that'd be something. Perhaps compile to weights and pretend this part is pretrained…? No idea if it's feasible, but it'd definitely be a breakthrough if AI had access to z3 in hidden layers.
LLMs are fundamentally a dead end.
Github link: https://github.com/SamsungSAILMontreal/TinyRecursiveModels
Why not go nuts with it and put it in the speculative decoding algorithm.
I implemented HRM for educational purposes and got good results for path finding. But then I started to do ablation experiments and came to the same conclusions as the ARC-AGI team (the HRM architecture itself didn’t play a big role): https://github.com/krychu/hrm
This was a bit unfortunate. I think there is something in the idea of latent space reasoning.
Overall I really like these transformer RNNs. They are basically EBMs learning an energy landscape that falls into a solution, relaxing a discrete problem into a smooth convex one. Reminds me of other iterative methods like neural cellular automata and flow matching / diffusion. This method looks promising for control problems: just tumble your way down the state space, where each step is constrained to be a valid action.
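(A toy version of that "tumble down the energy landscape" picture, not anything from the paper: a discrete pick-2-of-4 problem relaxed into a smooth energy and solved by plain gradient descent in PyTorch.)

    import torch

    # Discrete problem: pick exactly 2 of 4 items to maximise total score.
    # Relax x in {0,1}^4 to a continuous vector and descend a smooth energy.
    scores = torch.tensor([0.9, 0.1, 0.7, 0.2])
    x = torch.full((4,), 0.5, requires_grad=True)         # start mid-way in the relaxation

    opt = torch.optim.SGD([x], lr=0.05)
    for _ in range(1000):
        opt.zero_grad()
        energy = (-(scores * x).sum()                     # prefer high-scoring items
                  + 2.0 * (x.sum() - 2.0) ** 2            # soft "exactly two" constraint
                  + 2.0 * (x ** 2 * (1 - x) ** 2).sum())  # push entries towards 0 or 1
        energy.backward()
        opt.step()                                        # one small tumble down the landscape

    print((x.detach() > 0.5).int())   # picks items 0 and 2, the two highest scores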
You'll first need to frame Towers of Hanoi as a supervised learning problem. I suspect the answer to your question will differ depending on what you pick as the input-output pairs to train the model on.
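(As an illustration of how much that framing matters, here is one of many possible choices: pair each intermediate board state with the optimal next move generated by the standard recursive solution. The state encoding below is an arbitrary pick.)

    def hanoi_moves(n, src=0, aux=1, dst=2):
        # Optimal move sequence for n disks as (from_peg, to_peg) pairs.
        if n == 0:
            return []
        return (hanoi_moves(n - 1, src, dst, aux)
                + [(src, dst)]
                + hanoi_moves(n - 1, aux, src, dst))

    def hanoi_dataset(n):
        # Supervised pairs: (state, optimal next move). A state is a tuple of three
        # tuples listing the disks on each peg, largest at the bottom.
        pegs = [list(range(n, 0, -1)), [], []]
        data = []
        for move in hanoi_moves(n):
            data.append((tuple(tuple(p) for p in pegs), move))
            pegs[move[1]].append(pegs[move[0]].pop())
        return data

    for state, move in hanoi_dataset(3)[:3]:
        print(state, "->", move)
    # ((3, 2, 1), (), ()) -> (0, 2)
    # ((3, 2), (), (1,)) -> (0, 1)
    # ((3,), (2,), (1,)) -> (2, 1)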
So what happens when we figure out how to 10x both scale and throughput on existing hardware by using it more efficiently? Will gigantic models still be useful?
Of course! We still have computers the size of the mainframes that ran on vacuum tubes. They are just built with vastly more powerful hardware and are used for specialized tasks that supercomputing facilities care about.
But it has the potential to alter the economics of AI quite dramatically