From a systems engineering standpoint, the purpose of LLMs is to construct, verify, and "push down" abstractions and deterministic layers. Deterministic layers are what can cope reliably with the law of medium numbers.
Anthropic published a profile on what we're building at Kepler. Sharing because the architectural argument (LLM for intent, deterministic code for retrieval and computation, every number traceable to source) is the part I'd actually want HN to push on. Happy to answer questions in the thread.
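To make the split concrete, here's a rough sketch of the shape of the flow (illustrative only; the names, data, and helpers below are invented for this comment, not our actual code). The LLM only turns the question into a structured plan; everything that touches data or does arithmetic is plain deterministic code that records where each number came from:

    from dataclasses import dataclass

    # Hypothetical sketch, not the real system: the LLM handles intent,
    # deterministic code handles retrieval, and every value keeps a source.

    @dataclass
    class Plan:
        metric: str   # e.g. "revenue"
        entity: str   # e.g. "ACME Corp"
        period: str   # e.g. "2023-Q4"

    @dataclass
    class Value:
        number: float
        source: str   # citation back to the document/table the figure came from

    def llm_parse(question: str) -> Plan:
        """LLM step: intent only; it never produces the numbers themselves."""
        # In practice this would call a model and validate its output against
        # a schema; hard-coded here to keep the example self-contained.
        return Plan(metric="revenue", entity="ACME Corp", period="2023-Q4")

    def retrieve(plan: Plan) -> Value:
        """Deterministic step: look the figure up and keep its provenance."""
        table = {
            ("revenue", "ACME Corp", "2023-Q4"): Value(12_400_000.0, "10-K p.47, table 3"),
        }
        return table[(plan.metric, plan.entity, plan.period)]

    def answer(question: str) -> Value:
        plan = llm_parse(question)   # non-deterministic, but only ever intent
        return retrieve(plan)        # deterministic, every number traceable

    print(answer("What was ACME's Q4 2023 revenue?"))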
Very interesting. What size team does it take to build this, including analysts, project managers, product managers, etc.? How long did you spend on analysis before building, and how long until the first customer was using it?
Well... You have a 'tool' that you cannot trust. It's present everywhere due to the unholy alliance between the LLM companies and the exhilarated office-worker cretins who "use" them to do "workflows". Now they fuck up stuff. Sounds like friction to me, or do you value the LLMs as a net positive? Why should I do something to fix their problems instead?
You're misunderstanding something about the problem space they're describing. The deterministic infra is for an underlying "execution layer"; the LLMs are providing utility by figuring out how to express English language queries in terms of the primitives of that verifiable layer. That way, you can describe your results deterministically even though the process of arriving at them was not necessarily deterministic.
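Put differently (toy sketch with invented names, not their system): the artifact the LLM emits is a small plan expressed in the execution layer's primitives, and replaying that plan over the same data always gives the same, checkable answer, regardless of how the LLM arrived at it:

    # Toy sketch, invented names: the LLM's output is a replayable plan over
    # verified primitives, so the result can be checked without trusting the LLM.

    PRIMITIVES = {
        "lookup": lambda data, key: data[key],
    }

    def execute(plan, data):
        """Replaying the same plan over the same data always yields the same answer."""
        op, arg = plan
        return PRIMITIVES[op](data, arg)

    data = {"q4_revenue": 12_400_000.0}
    plan = ("lookup", "q4_revenue")   # what the LLM emits, in the layer's primitives

    assert execute(plan, data) == execute(plan, data)   # reproducible by construction
    print(execute(plan, data))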
Oh. I may have misread, indeed. So it's like, still LLM bullshit, but with really strongly worded .md instruction files begging them to please be correct?
We are living in an age of hot air.
On the one hand, very encouraging to see plain old deterministic infra w/o using slop machines.
On the other hand, this is a recognition that LLMs are just additional friction in the system that we would be better off without in the first place!