One of the most important papers in software engineering, which I believe everyone in this profession should read and internalize.
Every time I see another startup trying to use LLMs for code generation, I sigh in despair. As AI technology improves and becomes better at producing code, what looks like a win in the short term will end up producing more and more code written without a human going through the necessary thought processes and problem-solving steps to build the theory of the software, as described in this paper.
It's also why it's critically important for companies to do what they can to retain the people who built the software in the first place, or at least ensure there's enough continuity as new people join the team, so they can build their mental model by working alongside the original developers.
> without a human going through the necessary thought processes and problem solving steps to build the theory of the software as described in this paper
We might not be there yet (well, we definitely are not), but it does not seem out of the question that within a generous 10 years we will have systems that can leverage graphs, descriptive language, interpreters, and so on to plan out, document, iterate on, and refine the structure of a problem and its architectural solution in tandem with developing the solution itself, to a very effective level, given a sufficient explanation of the goals/problem - or, phrased another way, following the initial theory of the problem formulated by the human. The kind of documentation produced by such systems can also be more easily ingested by other non-human systems, potentially remedying some of the challenges humans have with outlining, documenting, and transferring the theory of a problem.
And what prevents a human from doing code review on such a system’s outputs? Maybe your point was that the sheer expense of a human’s time is the barrier, especially given that you were talking about companies using LLMs to speed up their code production (read: eliminate cost centers). But in that case, the errors that come from poorly designed, procedurally generated codebases just read like bad project management to me, for which the chickens will ultimately come home to roost; the companies that can successfully integrate such codegen engines while still maintaining strong design principles, maintainability, simplicity, etc. ought to outcompete their competitors’ slop in the long run, right?
Having said all that, I think the more important loss is that the human fails to build as much intuition for the problem space themselves by not being on the ground, in the weeds, solving the problems with their own solutions, and thus will struggle to develop their own effective theories of the problem (as indicated by the title of the article in the first place).
What you're describing is the siren call of No Code, which has been tempting manager-types for decades and which has so far failed every single time.
The trouble with No Code is that your first paragraph is already my job description: I plan out and document and refine the structure of a problem and its architectural solution while simultaneously developing the system itself. The "sufficient explanation of the goals/problem" is the code—anything less is totally insufficient. And once I have the code, it is both the fully-documented problem and the spec for the solution.
I won't pretend to know the final end state for these tools, but it's definitely not that engineers will write natural-language specs and the LLMs will translate them, because code (in varying degrees of high- and low-level languages) is the preferred language for solution specification for a reason. It's precise, unambiguous, and well understood by all engineers on a project. There is no need that gets filled by swapping that out for natural language unless you're taking engineers out of the loop entirely.
> The "sufficient explanation of the goals/problem" is the code—anything less is totally insufficient.
Somewhat in that spirit, I like Gerald Sussman's characterization of software development as "problem solving by debugging almost-right plans", e.g. in https://www.youtube.com/watch?v=2MYzvQ1v8Ww
> First, we want to establish the idea that a computer language is not just a way of getting a computer to perform operations, but rather that it is a novel formal medium for expressing ideas about methodology. Thus, programs must be written for people to read, and only incidentally for machines to execute.
I mostly agree with what you were saying, but I don’t think I was advocating for “no code” entirely, and certainly not for the elimination of engineers.
I was trying to articulate the idea that code generation tools will become increasingly sophisticated and capable, but still be tools that require operation by engineers for maximal effect. I see them as just another abstraction mechanism that will exist within the various layers that separate a dev from the metal. That doesn’t mean the capabilities of such tools are limited to where they are today, and it doesn’t mean that programmers won’t need to learn new ways of operating their tools.
I also hinted at it, but there’s nothing to say that our orchestration of such systems needs to be done in natural language. We are already skilled at representing procedures and systems in code, like you said; there’s no reason to think we wouldn’t be adept at learning new languages specialized for specifying higher-order designs to codegen systems in a more compact but still rigorous form. It seems reasonable to think that we will start developing DSLs and the like for communicating program and system design to codegen systems in a precise manner. One obvious way of thinking about that is specifying interfaces and test cases in a rigorous manner and letting the details be filled in (see the sketch below) - obviously attempts at that today exhibit lots of poor implementation decisions inside the methods, but that is not a universal phenomenon that will always hold.
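To make that concrete, here is a minimal sketch in Python (the names are invented, not anything from the thread): the engineer pins down the interface and an acceptance test, and a codegen system would be asked to supply a conforming implementation.

from dataclasses import dataclass
from typing import Protocol

@dataclass(frozen=True)
class Route:
    stops: tuple[str, ...]
    total_km: float

class RoutePlanner(Protocol):
    def plan_route(self, start: str, end: str) -> Route:
        """Return a route from start to end; the body is what gets generated."""
        ...

def test_route_endpoints(planner: RoutePlanner) -> None:
    # The test is part of the rigorous spec: any generated implementation must pass it.
    route = planner.plan_route("A", "C")
    assert route.stops[0] == "A"
    assert route.stops[-1] == "C"
    assert route.total_km >= 0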
The DSL paradigm is generally how I go about using LLMs on new projects, i.e. use the LLM to design a language that best represents the abstractions and concepts of the project - and once the language is defined, the LLM can express use cases with the DSL and ultimately convert them into an existing high-level language like Python.
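As a rough illustration of that workflow (the DSL statement and every name below are made up, not from any real project): a use case written in the project DSL might be lowered by the LLM into plain Python along these lines.

# Hypothetical DSL statement for one use case:
#   notify owner when invoice.status becomes "overdue"
#
# One possible Python lowering of that statement. `notify` and the
# invoice object are placeholders standing in for project-specific code.
def notify(recipient, message):
    # Placeholder delivery mechanism; a real project would send an email, etc.
    print(f"to {recipient}: {message}")

def on_invoice_status_change(invoice, old_status, new_status):
    # Fire only on the transition the DSL statement names.
    if new_status == "overdue" and old_status != "overdue":
        notify(invoice.owner, f"Invoice {invoice.id} is overdue")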
That is a great idea. I’ve used ChatGPT to help me define the names of the functions of an API. Next time I face a problem that calls for a DSL, I will give it a try.
Earlier an HN user had given an example of using Prolog as an intermediate DSL in the prompt to an LLM so as to transform English declarative -> Imperative code - https://news.ycombinator.com/item?id=41549823
In general, we already have plenty of mechanisms for specifying interfaces/API specs, tests, relationships, etc. in a declarative but more formal manner than natural language, which would probably all work, and I can only imagine we will continue to see more options developed that are tailored to this use case.
https://www.goodreads.com/book/show/4594604-computing
is out of print, as is _Concise survey of computer methods_, and rather pricey.
Oddly, _Knowing and the Mystique of Logic and Rules_ (which has an even lengthier title after a colon...) has four entries at Goodreads, is listed under "P. Naur", and is even pricier, quite expensive on Amazon:
https://www.amazon.com/Knowing-Mystique-Logic-Rules-Statemen...
even as an ebook.
It would be more influential if it were affordably in print....
It was reprinted elsewhere, in an agile book (which one?), which this (more readable than the linked) copy [1] is from. I think the other one might be from another edition of the same book. I ordered Computing: A Human Activity a few weeks ago; it's still in shipping. I probably got the cheapest remaining copy.
[1] https://pablo.rauzy.name/dev/naur1985programming.pdf
I don't think using AI to write code precludes learning deeply about the problem domain and even the solution. However, it could lead to those problems depending on how it's done. But done well you can still have a very knowledgeable team that understands the domain and large portions of the code, I believe anyway.
I think software engineers will drift towards only understanding the domain and creating tasks and then reviewing code written by AI. But the reviews will be necessary and will matter, at least for a while.
Respectfully, this seems upside down to me. Tools incorporating LLMs will be the knowledge repository for s/w projects of the future, and will capture and then summarize ideas, create mocks and finally render code (on command with guidance and iterations involving teams). My point being that the LLM era will be a deeper realization of code as theory building.
As relevant as ever, arguably more relevant than ever as more programs are being written and need to be adapted, in more and more complex domains.
Note what Naur means by Theory here. Quoting from the paper:
"What will be considered here is the suggestion that the programmers' knowledge properly should be regarded as a theory, in the sense of Ryle [Gilbert Ryle, The Concept of Mind, 1946]. Very briefly, a person who has or possesses a theory in this sense knows how to do certain things and in addition can support the actual doing with explanations, justifications, and answers to queries, about the activity of concern."
This is not "theory" in the sense we sometimes encounter in colloquial speech in the sense of (exclusively) "assumption", especially not with the connotation "unjustified assumption". It is also not a set of rules:
"The dependence of a theory on a grasp of certain kinds of similarity between situations and events of the real world gives the reason why the knowledge held by someone who has the theory could not, in principle, be expressed in terms of rules. In fact, the similarities in question are not, and cannot be, expressed in terms of criteria, no more than the similarities of many other kinds of objects, such as human faces, tunes, or tastes of wine, can thus be expressed."
Yet, it plays a central role in programming:
"For a program to retain its quality it is mandatory that each modification is firmly grounded in the theory of it. Indeed, the very notion of qualities such as simplicity and good structure can only be understood in terms of the theory of the program, since they characterize the actual program text in relation to such program texts that might have been written to achieve the same execution behaviour, but which exist only as possibilities in the programmer's understanding."
This has so many implications for software team design.
Like hiring that one unicorn dev to solve X hard problem isn't a great "theory building" exercise. It can build theories for that one person, but without feedback they're never tested and never adopted by the whole team.
So you actually NEED juniors, 'stupid' questions, outside points of view, and ways of openly and scientifically evaluating theories instead of defaulting to the authority of supposed experts. You also need to retain seniors who have context and a good historical working definition of the problem.
But a lot of teams are focused on just the next problem and "shipping it", rather than using "shipping" to help the team develop a better theory of the problem.
The value isn't what's shipped; it's the working knowledge of the team.
The value of a product tends to be measured by the number of features shipped, the quality of service, and time to market. But the knowledge of the team is hard to evaluate and hard to sell to a manager.
It is good if a developer already has that knowledge, since he is more productive then. But when he explicitly puts effort into gaining knowledge, he does not deliver during that time, so maybe he should not be paid for it.
I can't imagine a relationship between a manager and a developer where knowledge is valued higher than delivery. It could work only if the manager also believes in this value, and I think he could believe in it only if he is sure that the project will pay off in the long run. In a fast-changing world, he puts the value of delivery and satisfying stakeholders on a higher rung.
This also explains the unreasonable effectiveness of solo programmers and small teams, and why the famous adage is so true: adding programmers to a late project makes it even later.
Doesn’t declarative programming, and by extension functional programming, adhere more to the ethos of ‘Programming as Theory Building’?
I recently started building mobile apps using Flutter after a decade of developing apps in imperative programming languages, and I’m really in love with the declarative nature of Flutter.
Similarly for web development: I’ve always loved HTML, so HTMX has been a boon for me. I’m using Go for the backend, but I’ve been wondering whether I should move on to a proper functional programming language like Elixir with Phoenix, since I’m liking declarative programming so much.
It seems to me that one consequence of the "Theory Building View" is that, instead of focusing on delivering the artifact or the documentation of said artifact, one should instead focus on documenting how the artifact can be re-implemented by somebody else. Or, in other words, optimise for the "revival" of "dead" programs.
This seems especially relevant in open source, or in blog posts / papers, where we rarely have teams which continuously transfer theories to newcomers. Focusing on documenting "how it works under the hood" and helping others re-implement your ideas also seems more useful to break silos between programming language communities.
For example a blog post that introduces some library in some programming language and only explains how to use its API to solve some concrete problems is of little use to programmers that use other programming languages, compared to a post which would explain how the library works on a level where other programmers could build a theory and re-implement it themselves in their language of choice.
I also feel like there's a connection between the "Theory Building View" and the people who encourage rewriting your software. For example, in the following interview[0] Joe Armstrong explains that he often wrote a piece of code and the next day threw it away and rewrote it from scratch. Perhaps this has to do with the fact that after your first iteration you have a better theory, and are therefore in a better position to implement it in a better way?
I also believe there's some connection to program size here. In the early days of Erlang it was possible to do a total rewrite of the whole language in less than a week. New language features were added in one work session: if you couldn’t get the idea out of your brain and code it up in that time, then you didn’t do it, Joe explained[1] (17:10).
In a later talk[2] he elaborated saying:
“We need to break systems down into small understandable components with message passing between them and with contracts describing what’s going on between them so we can understand them, otherwise we just won’t be able to make software that works. I think the limit of human understandability is something like 128KB of code in any language. So we really need to box things down into small units of computation and formally verify them and the protocols in particular.”
I found the 128KB figure interesting. It reminds me of Forth, where you are forced to fit your code into blocks (1024 chars, or 16 lines of 64 characters).
Speaking of Forth, Chuck Moore also appears to be a rewriter. He said[3] something similar:
“Instead of being rewritten, software has features added. And becomes more complex. So complex that no one dares change it, or improve it, for fear of unintended consequences. But adding to it seems relatively safe. We need dedicated programmers who commit their careers to single applications. Rewriting them over and over until they’re perfect.” (2009)
Chuck re-implemented his Forth many times; in fact, Forth’s design seems to be centered around being easily re-implementable on new hardware (this was back when new CPUs had new instruction sets). Another example is Chuck’s OKAD, his VLSI design tools, about which he comments:
“I’ve spent more time with it than any other; have re-written it multiple times; and carried it to a satisfying level of maturity.”
Something I’m curious about is: what would tools and processes that encourage the "Theory Building View" look like?
[0]: https://vimeo.com/1344065#t=8m30s
[1]: https://dl.acm.org/action/downloadSupplement?doi=10.1145%2F1...
[2]: https://youtu.be/rQIE22e0cW8?t=3492
[3]: https://www.red-gate.com/simple-talk/opinion/geek-of-the-wee...
> It seems to me that one consequence of the "Theory Building View" is that, instead of focusing on delivering the artifact or the documentation of said artifact, one should instead focus on documenting how the artifact can be re-implemented by somebody else. Or, in other words, optimise for the "revival" of "dead" programs.
Arguably, this is the entire spirit of academia, which mildly serves as a counterexample, or at least illustrates the challenges with what you are describing - even in a setting where the stated goal is reproducibility, you still have a replication crisis. Though to be fair, I think part of the problem there is that, like you said, people focus too much on “documenting the artifact” and not on “documenting how to produce the artifact,” but that is often because the process is “merely” technical and not theoretical (and thus not publishable), despite being where most of the hard work, problem solving, and edge-case resolution happened.
Edit: oh, and I would also mention that the kind of comment you’ve described, which focuses on why some process exists in the form it does in order to better explain how it does what it does, aligns closely with Ousterhout’s notion of a good comment in A Philosophy of Software Design.
I couldn't easily count the number of re-writes for my current project, but it keeps getting better, and each new iteration has had an updated architecture allowing for new features. When I re-wrote it as a Literate Program (first a .dtx, now a "normal" .tex) things got much more expressive and easier to work with.
How good are LLMs at reducing code? For example, will they recognize a common problem and build an abstraction around it? I imagine that the solutions they produce tend to have a lot of repetition with small differences that could be improved by abstraction.
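To illustrate what I mean (my own sketch, not the output of any particular model): generated code often has the first shape below, where a reviewer would more likely push for the second.

# Repetitive shape, with small differences, as often seen in generated code:
def validate_name(value):
    if not isinstance(value, str) or not value.strip():
        raise ValueError("name must be a non-empty string")

def validate_email(value):
    if not isinstance(value, str) or not value.strip():
        raise ValueError("email must be a non-empty string")

# Abstracted shape: one helper captures the common rule.
def validate_non_empty_str(field, value):
    if not isinstance(value, str) or not value.strip():
        raise ValueError(f"{field} must be a non-empty string")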
The best programmers eventually become experts in a problem domain they’ve worked on, because to teach a computer to automate a process well requires thinking like an expert and resolving incoherences. Weak programmers complain stakeholders don’t know what they want or that there’s no spec; I have a hunch these are going to be replaced by AI.
I think you nerds need to stop reading obsolete academic fad papers from 1985. Imagine if your girlfriend was unironically reading articles of Cosmo from 1985 to figure out what to wear.
A computer program is a "model" of some thing. For example:
float m = 1e10f;    /* mass */
float a = 9.8f;     /* acceleration */
float F = m * a;    /* force, via F = ma */
Another example:
float paycheque;
if (employee.is_employed) {          /* "employee is still employed" */
    paycheque = getSalary(employee);
} else {
    paycheque = 0.00f;
}
Fashion changes quickly over time, while good models of real-life processes are infrequently supplanted.
For your argument to work, you need to prove that the original article is closer to a 1985 Cosmo article than it is to something like Clayton Christensen's 1995 article on Disruptive Innovation, which remains relevant today (or disprove one of the premises in my comment).
But: “And what prevents a human from doing code review on such a system’s outputs?” One word: cost.
At least in my experience, at least right now, it is more effort to review and correct than to do it from scratch.
Programming as Theory Building (1985) - https://news.ycombinator.com/item?id=38907366 - Jan 2024 (12 comments)
Programming as Theory Building (1985) [pdf] - https://news.ycombinator.com/item?id=37263121 - Aug 2023 (36 comments)
Programming as Theory Building (1985) [pdf] - https://news.ycombinator.com/item?id=33659795 - Nov 2022 (1 comment)
Naur on Programming as Theory Building (1985) [pdf] - https://news.ycombinator.com/item?id=31500174 - May 2022 (4 comments)
Naur on Programming as Theory Building (1985) [pdf] - https://news.ycombinator.com/item?id=30861573 - March 2022 (3 comments)
Programming as Theory Building (1985) - https://news.ycombinator.com/item?id=23375193 - June 2020 (35 comments)
Programming as Theory Building (1985) [pdf] - https://news.ycombinator.com/item?id=20736145 - Aug 2019 (11 comments)
Peter Naur – Programming as Theory Building (1985) [pdf] - https://news.ycombinator.com/item?id=10833278 - Jan 2016 (15 comments)
Naur’s “Programming as Theory Building” (2011) - https://news.ycombinator.com/item?id=7491661 - March 2014 (14 comments)
Programming as Theory Building (by Naur of BNF) - https://news.ycombinator.com/item?id=121291 - Feb 2008 (2 comments)
https://futureofcoding.org/episodes/061.html
http://literateprogramming.com/
https://pages.cs.wisc.edu/~remzi/Naur.pdf