But… why not just write pseudocode, or any language you actually know, and ask the AI to port it to the language you want? That's a serious question, by the way: is there some use case here I'm not seeing where learning this new syntax and running this actually helps, instead of being extra steps nobody needs?
I've been thinking about something along these lines, but coupled with deterministic inference. At each "macro" invocation you'd also include hash-of-model and hash-of-generated-text. (Note: determinism doesn't require temperature 0, as long as you can control the RNG seed. But there are a lot of other things that make determinism hard.)
You could take it a step further and have a deterministic agent inside a deterministic VM, and you can share a whole project as {model hash, vm image hash, prompt, source tree hash} and have someone else deterministically reproduce it.
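A minimal sketch of what that reproduction tuple could look like, in Python. Everything here is hypothetical: the file layout, the choice of SHA-256, and the JSON manifest format are my assumptions, not anything a real tool defines.

```python
import hashlib
import json
from pathlib import Path

def sha256_file(path: Path) -> str:
    """Stream a file through SHA-256 so large model weights fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def sha256_tree(root: Path) -> str:
    """Hash a source tree: each file's relative path plus its content hash,
    visited in sorted order so the result is deterministic."""
    h = hashlib.sha256()
    for p in sorted(root.rglob("*")):
        if p.is_file():
            h.update(p.relative_to(root).as_posix().encode())
            h.update(bytes.fromhex(sha256_file(p)))
    return h.hexdigest()

def manifest(model: Path, vm_image: Path, prompt: str, src: Path) -> str:
    """The {model hash, vm image hash, prompt, source tree hash} tuple as JSON."""
    return json.dumps({
        "model": sha256_file(model),
        "vm_image": sha256_file(vm_image),
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "source_tree": sha256_tree(src),
    }, sort_keys=True)
```

Someone verifying a reproduction would regenerate the project inside the VM and check that their manifest matches yours byte-for-byte; any nondeterminism in inference (kernel scheduling, floating-point reduction order, etc.) shows up as a hash mismatch.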
Is this useful? Not sure. One use case I had in mind as a mechanism for distributing "forbidden software". You can't distribute software that violates DMCA, for example, but can you distribute a prompt?
Deterministic inference is mechanically indistinguishable from decompression or decryption, so if there's a way to one-weird-trick DMCA, it's probably not this.
You’d think that, but it seems like big business and governments are treating inference as somehow special. I dunno, maybe low temperatures can highlight this weird situation?
Temperature is an easy knob to twist, after all. Somebody (not me I’m too poor to pay the lawyers) should do a search and find where the crime starts.
I've found that writing pseudocode in a markdown file with little to no definitions (I may put a few non-obvious notes in the CLAUDE/AGENTS files) and telling the agent what language to turn it into generally works.
So instead of auto-completing bits of LLM-generated code into the codebase, you preprocess it in. I can imagine a lot of devs won't like the ergonomics of that, but I like the idea that you can keep both original .glp and generated source files in version control.
I'd strongly recommend going over the README by hand. What you currently have is redundant and disorganized, and header sizes/depths don't make a lot of sense. The "manual build" instructions should also describe the dependencies that the install script is setting up.
The language feels like a solution in search of a problem, and the mostly-generated README reduces my confidence in the quality of the project before I've even learned that much about it.
One example:
> Best of all, they work together. You can store your .glp blueprints in a Docker container—creating software that is immortal in both environment and logic.
This is nonsensical. The entire point of a container is that it contains only what's necessary to run the underlying software; it's just the production filesystem. Why would I put LLM prompts that aren't used at runtime into a container?
What other language-agnostic methods of describing complex systems is your project inspired by? In competition with?
---
By using this tool, a programmer or team is sending the message that:
"We expect LLM generated code to remain a deeply coupled part of our delivery process, indefinitely"
But we didn't know about LLMs 5 years ago. What is the argument for defining your software in a way that depends on such a young technology? Most of the "safety" features here are related to how unsafe the tech itself still is.
"Nontrivial LLM driven rewrites of the code are expected, even encouraged"
Why is the speedy rewriting of a system in a new language such a popular flex these days? Is it because it looks impressive, and LLMs make it easy? It's so silly.
And if the language allows for limiting the code the LLM is allowed to modify, how is it going to help us keep our overall project language-agnostic?