Considerably better than most such articles that I have read, but I think if the Forth community wants to get people into Forth, it really needs to stop talking about how it can fit in a boot sector and about the REPL; the former is of no interest or use to most programmers, and the latter is probably a major cause of the misconception that Forth code is impossible to read.
What I see as the real strength of Forth is that if you write your program in source files, there is no abstraction. You stick your word definitions in those source files and let the application you are writing dictate the words you define, instead of relying on the dictionary you built up in the REPL, and things quickly start becoming easy and the code remains readable. It might seem like a lot of work and endless reinvention of the wheel, but if you start with a featureful Forth like gforth it does not take that much time or effort once you have the basics down; you can build complex applications with gforth as easily as you can build up a full Forth from that minimal Forth that fits in your boot sector.
The main thing is learning that application development with Forth is top-down design and bottom-up writing. You break the application up into its major functions, break those into smaller functions, break those into words, and then break those words into the simple, short words that everyone in the Forth community says you should write. Then you start writing code. I am just starting to get the hang of Forth and it is surprisingly quick and powerful once you start getting the sense of it.
Do you have any references to quick and powerful Forth examples of responding to a web request or manipulating text? You know, typical stuff we deal with every day.
Not sure if that is the best example, but go to the "network" section of the 8th manual (https://8th-dev.com/manual.html) and you can see plenty of examples of connection stuff. Also cool things in the map, graph, console, hardware, DB (database), and nuklear (GUI) sections.
It is commercial though (albeit with a free tier, iirc), so that may or may not be attractive for you if you wanted to see all the source. For me, I just wanted to spend time playing around with a well-polished ~forth that had all these things built in, so it was fine for my more limited use cases. Coming from Python as my daily driver, I found it really easy to pick up the tooling and have fun building some super simple toy apps. The default data structure is basically JSON, which is a pretty unconventional Forth approach, but it just clicked with me as I'm used to Python dictionaries. You might also be able to do all that with gforth, but I'm not sure (referring to the ease of use from high-level data structures).
There was also a Forth-like language written in C# that was open source, I think, and pretty cool. It might have been RetroForth, which is available in various formats and has been talked about on here many times. I think the source comes with the zip file, but I haven't looked in years. I assume it has some utility libraries for doing normal things.
I'm familiar enough with Forth, picked it up in the 80's with Blazin' Forth on a C64 with Leo Brodie's book at my side. Played a little with the boot environment on some Sun boxes, made some simple contributions to another Forth project on x86 that I can't remember the name of. I never got the wide-eyed wonder some people seem to have about it though, let alone saw it as a silver bullet. The tax bracket code at the bottom of the article is a pretty good illustration of why: it's a really elegant sort of macro assembler, but I'm not really interested in writing whole apps in macro assembly.
Oh, cool! SmithForth[0] is how I originally learned about x86-64 microarchitecture. It's a Forth that bootstraps off hand-coded x86-64 opcodes. I decided to go the other direction and decompile the binary by hand. It really is a beautiful piece of code. Highly recommended reading.
Also, if you're excited by Forth and Lisp, you might like Forsp[1]. It uses a call-by-push-value evaluation strategy in a way that really makes the language feel like a true hybrid of the two. The implementation is also brilliantly simple.
Anyway, thank you for the article. I lurk on the DuskOS mailing list and wish I could find out where the Forthers gather IRL so I could osmote more of their world into my own.
There is an aspect of the history of Forth and C I have been trying to wrap my head around.
The early B compiler was reported to generate threaded code (like Forth). The threaded code was abandoned fairly early in the port from the PDP7 to the PDP11, as it was deemed too slow to write an operating system in.
At which point Unix and C lost a very interesting size optimization, with the net result that Forth was more portable, and actually ported between machines, circa 1970, while Unix had to wait until circa 1975 with Unix V6.
I have been trying to go back through the history to see if I could understand why threaded code was deemed too slow. Today, most of an executable is code that is not run often and would probably benefit from a smaller representation (less memory and cache space, making the system faster overall). So this is a practical question even today.
I found a copy of Unix for the PDP7, reproduced from old printouts that were typed back in. If I have read the assembly correctly, the B compiler was not using an efficient form of threaded code at all.
The PDP7 is an interesting machine. Its cells were 18 bits wide and the address bus was 12 bits wide, which meant there was room for an opcode and a full address in every cell.
As I read the B compiler, it was using a form of token threading with everything packed into a single 18-bit cell. The basic operations of B were tokens, and an if token encoded a full address in the cell. Every token had to be decoded via a jump table; the address of the target code was then plugged into a jump instruction which was immediately run.
Given the width of the cells, I wonder what the conclusions about the performance of B would have been if subroutine threading, or a similar technique using jmp instructions, had been used.
Does anyone know if Forth suffers measurably in inner loops from having to call words that perform basic operations?
Is this where a Forth programmer would be accustomed to write the inner loop in assembly to avoid the performance penalty?
I can probably shed some light on that. I've used Forth on 8 bit platforms (6502, 6809), 16 bit platforms (80286) and 32 bit platforms (68K), as well as assembly, and on the 16 and 32 bit platforms C. Putting these side-by-side and assuming roughly equivalent programmer competence levels, at the time assembler would win out; C would get you to maybe half the speed of assembly on a good day, and Forth was about 10x slower than that.
Which was still incredibly fast for the day, given that Forth was compiled to an intermediary format with the Forth interpreter acting as a very primitive virtual machine. This interpretation step had considerable overhead; especially in inner loops with few instructions, the overhead would be massive. For every one instruction doing actual work you'd have a whole slew of them assigned to bookkeeping and stack management. What in C would compile to a few machine instructions (which a competent assembly programmer of the time would be able to significantly improve upon) would result in endless calls to lower and lower levels.
There were later Forth implementations that improved on this by compiling to native code but I never had access to those when I was still doing this.
For a lark I wrote a Forth in C rather than bootstrapping it through assembly, and it performed quite well. Forth is ridiculously easy to bring up; it is essentially a few afternoons' work to go from zero to highway speeds on a brand new board that you have a compiler for, which is one of the reasons it is still a favorite for initial board bring-up.
One area where Forth usually beat out C by a comfortable margin was code size; Forth code tends to be extremely compact (and devoid of any luxury). On even the smallest microcontrollers (the 8051, for instance, and later Microchip parts and such) you could get real work done in Forth.
It is not so much the parts of the code that run infrequently that contribute to poor performance, but the very tiny <1% of all code that does run frequently, which should be running completely in cache. So code size doesn't have an enormous impact on speed of execution.
The overhead of threading seems pretty obvious: call and return instructions are expensive compared to the cost of the one equivalent instruction that would have been executed in a compiled implementation. And placing arguments on a stack means that all operands have to be read from and written to memory, incurring additional ferocious overhead, whereas a compiler would enregister values, particularly in performance-critical code. It is not unreasonable to expect that Forth code is going to run at least an order of magnitude slower than compiled code.
Some of those could be partially fixed in hardware. Examples:
- returns can run in as good as zero time
- when calling a word, the CPU could prefetch the cache line containing the next word to be called, on the assumption that it would be called soon (an assumption that would be correct for many “almost leaf” calls)
- the top of the stack could be kept in register-speed memory.
“The internal structure of the NC4016 is designed for single clock cycle instruction execution. All primitive operations except memory fetch, memory store, and long literal fetch execute in a single clock cycle. This requires many more on-chip interconnection paths than are present on the Canonical Stack Machine, but provides much better performance.
[…]
The NC4016 subroutine return bit allows combining a subroutine return with other instructions in a similar manner. This results in most subroutine exit instructions executing "for free" in combination with other instructions. An optimization that is performed by NC4016 compilers is tail-end recursion elimination. Tail-end recursion elimination involves replacing a subroutine call/subroutine exit instruction pair by an unconditional branch to the subroutine that would have been called.”
(¿Almost?) all modern hardware is designed for other language paradigms, though, so these won’t help with hardware you can buy.
> Does anyone know if Forth suffers measurably in inner loops from have to call words that perform basic operations?
Yes, the slowdown is of the order of 8× for DTC, ITC, and bytecode ("token threading"). Eliminating the jump table reduces the overhead a bit, but it's still order 8×.
The B compiler bundled a copy of the bytecode interpreter into each executable; that might have made it less appealing as a size optimization. For a big enough program it would still have won.
Subroutine threading is really just compact native code, but it still suffers from typically about 4× overhead for basic operations like dup, @, +, or exit (the traditional name for the runtime effect of ;). The primitive operations these execute are typically one or two cycles on a RISC such as a Cortex-M4, while a subroutine call and return are two more cycles, often plus two to four cycles of pipeline bubble (if the processor doesn't have good enough branch prediction). Presumably on the PDP-7 a subroutine call would have needed an additional memory cycle to store the return address into memory and another one to fetch it, plus two more memory cycles to fetch the call and return instructions. (I'm not familiar with the -7's instruction set, so correct me if I'm wrong.)
With respect to dup, though, commonly dup, drop, swap, and over represent operations that don't appear in optimized native code—they just tell the following operations which data to operate on, a purpose which is normally achieved by operand fields in native code. So the runtime overhead of stack-bytecode interpretation is worse than it appears at first: each bytecode instruction takes time on the order of 4× or 8× the time of a native instruction doing the same thing, but you have to run about twice as many bytecode instructions because about half of them are stack manipulation. So your total slowdown is maybe 8× or 16×.
You may also be interested in looking at the program dc, which IIRC was one of the programs Unix was originally written to run. It's a stack bytecode designed to be written by hand, like HP desk calculators of the time but with arbitrary precision.
> The project's README then proceeds into lengthy detail about how to implement Forth in C.
It seems people have more fun implementing Forth interpreters/compilers than actually using Forth. Same with CHIP-8: it's all about making emulators.
Then you can have fun implementing Forth in Forth, with a metacompiler that compiles itself!
Here's some of Mitch Bradley's beautiful code from OpenFirmware, his Forth kernel meta-compiler written in Forth, which supports 8, 16, 32, and 64 bit, big-endian and little-endian architectures, as well as direct, indirect, and token threaded code, with or without headers, etc.
The OpenFirmware kernel is a Forth meta-compiler, which can compile itself on any architecture, and also cross-compile for different target architectures.
It has cross-architecture extensions to FORTH (like \16 \32 comments and /n /n* generically typed words) that make it possible to write platform, word size, byte order, and threading independent code, and to compile images (stripped or with headers) for embedded systems (like the OLPC boot ROMs) and new CPU architectures (like Sun's transition from 68K to SPARC), and to share code with more powerful development environments.
What I like about Forth is that it can be expressed at the lowest level of computation, and that it can be used to bridge that to the highest level of computation. For example, Forth only requires about 12 opcodes to run, which can be implemented in a few dozen chips. But now that you have that, since it's Turing-complete, you can now pull across a lisp or C compiler, and build a working operating system from there. Granted, that would be a lot of work, but it's relatively straightforward work, and that's always impressed me.
The article is lame in multiple ways, and also eForth was written by Bill Muench. Dr Ting adapted Muench's version to use assembly language bootstrapping instead of metacompilation. Bootstrapping is possibly easier for beginners to understand, but metacompilation is part of Forth's fiendish cleverness and it's a shame for an aficionado to miss out on it.
Oh, I'll have to correct this! I've only seen eForth mentioned with Dr Ting's name all over. Thank you.
The metacompilation part is really nice. Did the self-modification section of the essay not convey that to you? Because that's what it was :s I'll have to revise it.
I really want this essay to be definitive, so even after 4 revisions there is still some way to go. All the comments have been extremely helpful to further reach that goal :)
You should also look at how cmForth's metacompilation worked. It's even more fiendish, but relied on cmForth's separate interpreter and compiler dictionaries, which apparently was annoying to use. https://dl.acm.org/doi/10.1145/382125.382916
I didn't notice anything in the article that made me think of metacompilation, but maybe I missed it and should re-read.
Added: you should also look at Bill Muench's version of eForth including its metacompiler. I'm not that big a fan of eForth for practical use (it's TOO minimal) but wow it is simple.
This is a pretty good explanation. I think it maybe undersells the importance of the REPL a bit. (I'm not a Forth expert, but I did write StoneKnifeForth, a self-compiling compiler in a Forth subset, and I've frequently complained about the quality of Forth explainers.)
>I think it maybe undersells the importance of the REPL a bit.
How so? Or would you agree that "value" is perhaps a more suitable word than "importance"? For me, these articles have such a tendency to fixate on the strengths of Forth that they have reduced Forth to those strengths in the eyes of many. TFA does a fair job of avoiding this and shows Forth more as a powerful and flexible general-purpose language than a very niche language, but I think it still focuses a bit much on the strengths at the cost of the general.
You are a Forth expert compared to me and probably most of HN, so try and keep that in mind with your response if you could.
Edit: I am probably asking for insight into your workflow with Forth, but maybe not?
You can see my workflow with Forth in https://asciinema.org/a/621404, which should help reinforce my point that I'm not an expert.
What I mean is that Forth as a programming language is kind of... not great? Like, it's kind of hard to read and hard to write.
For years I thought this might be just a question of familiarity, but now I'm resigned to the fact that I will probably not learn to read Forth as easily as I can read conventional infix syntax within my natural lifetime. I wrote my first RPN programs on an HP-38E calculator in about 01985, I wrote an RPL program to search my address book on my HP-48GX in the 90s, I wrote a parametric CAD system for laser cutting in PostScript, I wrote a quasi-Forth compiler that compiles itself to machine code, and I think I just have to give up on being able to read
o ->s dup * o ->c dup * + sqrt 400 / amp !
as easily as
long amp = sqrt(o->s*o->s + o->c*o->c) / 400.0;
It might still be a problem of familiarity, rather than some kind of objective truth, but it's one I'm going to have to live with. (I'm absolutely sure that the reason I have to sound out Greek words letter by letter instead of reading them instantly the way I do in English, Spanish, French, or Portuguese is 100% a question of familiarity; it's completely implausible that Greek is objectively harder to read. I just have the Latin alphabet wired into my brain by decades of constant practice.)
There's a familiarity problem I flatter myself to think is separate, with Forth's vocabulary; for example, within takes its arguments in the order x min max, and because I've programmed much less in Forth than in other languages, I always have to look things like that up, whereas I know the order of arguments to read() or strcat() without having to do so.
It's not a huge difference from C; Forth has more metaprogramming and reflection power than C, but the syntax is less readable, and it's more error-prone in a variety of ways (parameter passing, recursion, types). Presumably infallible programmers would prefer Forth to C, since those weaknesses would not affect them, and they'd have less code to write. I am far from an infallible programmer.
But a lot of those weaknesses are because Forth is designed as a single language for the whole system: assembler, high-level programming, editor commands, debugger, "shell" commands, the whole works. So you have things like ? which is simply defined as : ? @ . ; and seems kind of goofy from a programming-language perspective—why would you dedicate a precious single-character word to printing out the value of a memory location? How often do you want to do that in the middle of your program? Why not just write @ . instead? Wouldn't ? be more valuable in a switch/case statement or something?
However, in the context where Forth grew up, the sibling of DTSS BASIC and DEBUG.COM and DDT, it makes perfect sense; if you've just tested a word (subroutine) that is supposed to change the value of a variable x, you want to be able to say x ? rather than typing out the whole x @ . phrase. It sounds trivial, but it's actually really important, especially if you can't touch-type, as most programmers couldn't at the time. BASIC did the same thing for the same reason: instead of typing print x or even printx you could type ?x to see the value of x.
> The small overhead [of extra punctuation] is tolerable, though sucky, when you program, because you write the piece of code once and while you're doing it, you're concentrating on the task and its specifics, like the language syntax. When you're interacting with a command shell though, it's a big deal. You're not writing a program – you're looking at files, or solving equations, or single-stepping a processor. I have a bug, I'm frigging anxious, I gotta GO GO GO as fast as I can to find out what it is already, and you think now is the time to type parens, commas and quotation marks?! Fuck you! By which I mean to say, short code is important, short commands are a must.
So, Forth is designed so that you can use it as a command language and a high-level programming language and an assembly language. It's like Robert A. Heinlein's ideal unspecialized Renaissance-man language: it can change a diaper, plan an invasion, butcher a hog, program a computer, etc. This (necessarily in my view) involves some compromises—the best possible result will often be worse as a high-level programming language than a language that's designed for just that, and worse as a command language than a language that's designed for just that, and maybe worse as an assembly language too.
You can make a convincing argument for the general case of this with a 2×2 matrix of candidate language design features:
╭────────────┬──────────────────┬──────────────────╮
│ │ good for │ bad for │
│ │ command language │ command language │
├────────────┼──────────────────┼──────────────────┤
│ good for │ │ │
│ high-level │ 0 │ 1 │
│ language │ │ │
├────────────┼──────────────────┼──────────────────┤
│ bad for │ │ │
│ high-level │ 2 │ 3 │
│ language │ │ │
╰────────────┴──────────────────┴──────────────────╯
The argument is simply that the set of candidate language design features that go in boxes 1 and 2 is not exactly the empty set. It would be an astounding coincidence if it were, wouldn't it? And every time you add a feature from box 1 to your language, you make it better as a programming language and worse as a command language, and vice versa for box 2. Omitting a feature from box 1 makes your language better as a command language and worse as a programming language, and vice versa for box 2.
The more difficult argument to make is that the compromises are substantial. A skeptic might wonder whether the only compromises are trivial things like the ? I mentioned above. I think it's an argument Yossi has made well in the post I linked above, which has nothing specifically to do with Forth. Also, though, I think that a lot of Forth's design decisions that are unorthodox for programming languages, such as its lack of typing, its lack of syntax, and its lack of stack frames with local variables, are easily understood as accommodations for interactive use, and I think that they do in fact make it substantially worse as a programming language. This is highly debatable, and debated, but it is my current point of view.
In my view, the REPL somewhat makes up for Forth's weaknesses as a programming language in two ways: first, by allowing you to interactively test your code as you write it, and second, by freeing you from having to write user interface code that does things like parse command lines.
There are a few different ways that Forth encourages writing your code as a ravioli-code soup of tiny one-line definitions. Single-line definitions are easier to test interactively, and statically allocating your local variables allows you to share them between multiple definitions, which reduces the required parameter passing (the abstraction penalty). Implicit parameter passing also reduces the syntactic abstraction penalty of subroutine calls. And, barring inlining compiler optimizations, Forth is faster at calling subroutines than any other language (arguably except for other Forth-like things like FOCAL), reducing the abstraction penalty at runtime as well.
This is both good and bad. Ravioli code is more flexible, because you can call existing definitions in new contexts, but harder to understand, because the definition you're editing might be called from a context you aren't seeing. If you were infallible, this greater composability would enable you to bootstrap from nothing to whatever application you wanted to build with less total code. This makes Forth's drawbacks less serious for throwaway code (which doesn't need to be understood or maintained) and for infallible programmers.
Independent of any of this, the REPL is a huge advantage if you're exploring an unknown hardware platform that might be buggy. You probably need one, whether Forth or something else.
So, that's why I think the REPL is very important for understanding Forth—both its virtues and its vices. It is valuable, but it also makes UX demands on other parts of the language which makes them worse in other ways.
What... what are those "development effort estimates"? They seem to assume an average rate of approximately 11 lines of code written per day which seems a bit too low if you ask me.
Besides, once a C compiler is written for one platform, porting it to another one takes significantly less time than writing from scratch (especially if the compiler is written with portability in mind).
> They seem to assume an average rate of approximately 11 lines of code written per day which seems a bit too low if you ask me.
I didn't calculate it but if you're just dividing one number by the other then you're assuming the final code arrived fully-formed in one go - don't forget about refactoring, debugging, testing, etc. etc.
Yes, 11 lines of debugged, delivered source lines of code written per day is within the normal range. The basic COCOMO model used by David A. Wheeler's "SLOCCount" estimates person-months as 2.4 * (KSLOC**1.05), which was calibrated with painfully-won experience on a variety of software projects in the 80s. If you approximate that as 2.4 months per kSLOC, it works out to 13.7 lines of code per calendar day, or 18.5 lines of code per workday. Other studies found lower speeds. Like 10: https://softwareengineering.stackexchange.com/questions/4506...
Every programmer thinks this is stupid the first time they see it, because we can all remember times we wrote 50 or 100 or 200 or 300 lines of code in a day, and that code worked. But other days we write zero lines of code, because we spend them in meetings, or debugging code, or arguing about the protocol design, or testing hardware we need to support, or testing our code. And sometimes we'll spend a week refactoring the codebase to reduce the number of lines of code (https://www.folklore.org/Negative_2000_Lines_Of_Code.html) so those 200 or 300 lines of code may not end up in the delivered product—if they were ever intended to, since test stubs and test scripts also don't count as delivered code.
And that's how we get to Peter Naur's "Programming as Theory Building": https://ratfactor.com/papers/naur1 because writing the code clearly isn't the bottleneck. It's figuring out how things have to work that bottlenecks us. It's more like proving a theorem than writing a novel. The code is a product of the process, but it's more like the exhaust of an engine than its power output.
And that's why the exponent of kSLOC is 1.05 instead of 1: bigger systems are harder to add to.
> Besides, once a C compiler is written for one platform, porting it to another one takes significantly less time than writing from scratch (especially if the compiler is written with portability in mind).
Why would you think that's different for any other language, like oh for example Forth?
I often use a heavily Forth-inspired script language in my bigger C# projects. I have a hidden REPL and can input scripts. I like how easy it is to produce results with such a low vocabulary. Also, there is no expression parsing.
I like the spirit of this article, but I find it strange that they open their article by quoting me, but then don't include Dusk OS's C compiler in the list.
Fairly counting SLOC is a tricky problem, but my count is 1119 lines of code for the C compiler (written in Forth, of course); that's less than an 8th of the count of chibicc, described as the smallest.
(typing from phone) i simply had not known. it is a great example of a short path to a c dialect then! thank you for your work on Dusk and Collapse! i will add a paragraph about it to the essay. this took me many days to write and many revisions and still i see it isn't perfect!
note the point of that section was really that anyone using gcc or clang should acknowledge the real cost when using them.
Possibly this is why it wasn't mentioned? There are enough differences in that list that I can't imagine any existing C library would compile unchanged.
After using cc<< for non-trivial programs, it's about as quirky as the Plan 9 C compiler, the lack of multi-dimensional arrays is the one thing that trips me up the most with cc<<
Relying on binutils and an assembler is fine. I don't think it really affects the internal complexity of the compiler much, but having textual assembly to look at can be handy for debugging the compiler, so it might reduce the human effort to get it working.
By design, it's not a fully compliant ANSI C compiler, so it's never going to be complete, but it's complete enough to, for example, successfully compile Plan 9's driver for the Raspberry Pi USB controller with a minimal porting effort.
So, Dusk's compiler is not apple-to-apple comparable to the other, but comparable enough to give a ballpark idea that its code density compares very, very favorably.
Indeed, and it might be why the author didn't try, but I still find it odd to not at least mention how small this C compiler is, even if it is to say that it's not apples-to-apples comparable. I mean, 8x smaller than the smallest C compiler listed is still something notable...
https://8th-dev.com/manual.html
Not sure if that is the best example, but go to the "network" section and you can see plenty of examples of connection stuff. Also cool things in the map, graph, console, hardware, DB (database), and nuklear (GUI) sections.
It is commercial though (albeit with a free tier iirc), so that may or may not be attractive for you if you wanted to see all source. For me, I just wanted to spend time playing around with a well polished ~forth that had all these things builtin, so fine for my more limited use cases. Coming from Python as my daily driver, I found it really easy to pickup the tooling and have fun building some super simple toy apps. The most default data structure is basically JSON, which is a pretty unconventional forth approach, but just clicked with me as I'm used to Python dictionaries. You might also be able to do all that with gForth, but not sure (referring to the ease of use from high level data structures).
There was also a Forth-like language written in C#, I think, that was open source and pretty cool. It might have been RetroForth, which is available in various formats and has been discussed here many times. I think the source comes with the zip file, but I haven't looked in years. I assume it has some utility libraries for doing normal things.
some CLI program
https://8th-dev.com/
As for the manual's omission, I'm guessing it's a typo. I want to say it used to be there, but haven't looked in a long time.
PS. Didn't know 8th existed; looks cool.
Also, if you're excited by Forth and Lisp, you might like Forsp[1]. It uses a call-by-push-value evaluation strategy in a way that really makes the language feel like a true hybrid of the two. The implementation is also brilliantly simple.
Anyway, thank you for the article. I lurk on the DuskOS mailing list and wish I could find out where the Forthers gather IRL so I could osmote more of their world into my own.
[0]:https://dacvs.neocities.org/SF/
[1]:https://xorvoid.com/forsp.html
The early B compiler was reported to generate threaded code (like Forth). The threaded code was abandoned fairly early in the port to the PDP11 from the PDP7, as it was deemed too slow to write an operating system in.
At which point Unix and C lost a very interesting size optimization, with the net result that Forth was portable between machines circa 1970 while Unix had to wait until circa 1975 with Unix V6.
I have been trying to go back through the history to see if I could understand why threaded code was deemed too slow. Today the greater part of an executable is code that is not run often and would probably benefit from a smaller representation (less memory and cache space, making the system faster overall). So this is a practical question even today.
I found a copy of Unix for the PDP7, reconstructed from old printouts that were typed back in. If I have read the assembly correctly, the B compiler was not using an efficient form of threaded code at all.
The PDP7 is an interesting machine. Its cells were 18 bits wide and the address bus was 12 bits wide, which meant there was room for an opcode and a full address in every cell.
As I read the B compiler, it was using a form of token threading with everything packed into a single 18-bit cell. The basic operations of B were tokens, and an if token was encoded with a full address in the cell. Every token had to be decoded via a jump table; the address of the target code was then plugged into a jump instruction which was immediately run.
Given the width of the cells, I wonder what the conclusions about the performance of B would have been if subroutine threading, or a similar technique using jmp instructions, had been used instead.
Does anyone know if Forth suffers measurably in inner loops from having to call words that perform basic operations?
Is this where a Forth programmer would be accustomed to writing the inner loop in assembly to avoid the performance penalty?
Which was still incredibly fast for the day, given that Forth was compiled to an intermediate format with the Forth interpreter acting as a very primitive virtual machine. This interpretation step had considerable overhead; especially in inner loops with few instructions, the overhead would be massive. For every instruction doing actual work you'd have a whole slew of them assigned to bookkeeping and stack management. What in C would compile to a few machine instructions (which a competent assembly programmer of the time would be able to improve upon significantly) would result in endless calls to lower and lower levels.
There were later Forth implementations that improved on this by compiling to native code but I never had access to those when I was still doing this.
For a lark I wrote a Forth in C rather than bootstrapping it through assembly, and it performed quite well. Forth is ridiculously easy to bring up; it is essentially a few afternoons' work to go from zero to highway speed on a brand-new board that you have a compiler for. Which is one of the reasons it is still a favorite for initial board bring-up.
One area where Forth usually beat out C by a comfortable margin was code size; Forth code tends to be extremely compact (and devoid of any luxury). On even the smallest microcontrollers (the 8051 for instance, and later, Microchip parts and such) you could get real work done in Forth.
The overhead of threading seems pretty obvious: call and return instructions are expensive compared to the cost of the one equivalent instruction that would have been executed in a compiled implementation. And placing arguments on a stack means that all operands have to be read from and written to memory, incurring additional ferocious overhead, whereas a compiler would enregister values, particularly in performance-critical code. It is not unreasonable to expect that Forth code will run at least an order of magnitude slower than compiled code.
- returns can run in as good as zero time
- when calling a word, the CPU could prefetch the cache line containing the next word to be called, on the assumption that it would be called soon (an assumption that would be correct for many “almost leaf” calls)
- the top of the stack could be kept in register-speed memory.
For an example, see https://users.ece.cmu.edu/~koopman/stack_computers/sec4_4.ht...:
“The internal structure of the NC4016 is designed for single clock cycle instruction execution. All primitive operations except memory fetch, memory store, and long literal fetch execute in a single clock cycle. This requires many more on-chip interconnection paths than are present on the Canonical Stack Machine, but provides much better performance.
[…]
The NC4016 subroutine return bit allows combining a subroutine return with other instructions in a similar manner. This results in most subroutine exit instructions executing "for free" in combination with other instructions. An optimization that is performed by NC4016 compilers is tail-end recursion elimination. Tail-end recursion elimination involves replacing a subroutine call/subroutine exit instruction pair by an unconditional branch to the subroutine that would have been called.”
(¿Almost?) all modern hardware is designed for other language paradigms, though, so these won’t help with hardware you can buy.
Yes, the slowdown is of the order of 8× for DTC, ITC, and bytecode ("token threading"). Eliminating the jump table reduces the overhead a bit, but it's still order 8×.
The B compiler bundled a copy of the bytecode interpreter into each executable; that might have made it less appealing as a size optimization. For a big enough program it would still have won.
Subroutine threading is really just compact native code, but it still suffers from typically about 4× overhead for basic operations like dup, @, +, or exit (the traditional name for the runtime effect of ;). The primitive operations these execute are typically one or two cycles on a RISC such as a Cortex-M4, while a subroutine call and return are two more cycles, often plus two to four cycles of pipeline bubble (if the processor doesn't have good enough branch prediction). Presumably on the PDP-7 a subroutine call would have needed an additional memory cycle to store the return address into memory and another one to fetch it, plus two more memory cycles to fetch the call and return instructions. (I'm not familiar with the -7's instruction set, so correct me if I'm wrong.)
With respect to dup, though, commonly dup, drop, swap, and over represent operations that don't appear in optimized native code—they just tell the following operations which data to operate on, a purpose which is normally achieved by operand fields in native code. So the runtime overhead of stack-bytecode interpretation is worse than it appears at first: each bytecode instruction takes time of the order of 4× or 8× that of a native instruction doing the same thing, but you have to run about twice as many bytecode instructions because about half of them are stack manipulation. So your total slowdown is maybe 8× or 16×.
You may also be interested in looking at the program dc, which IIRC was one of the programs Unix was originally written to run. It's a stack bytecode designed to be written by hand, like HP desk calculators of the time but with arbitrary precision.
It seems people have more fun implementing Forth interpreters/compilers than actually using Forth. Same with CHIP-8: it's all about writing emulators.
Here's some of Mitch Bradley's beautiful code from OpenFirmware, his Forth kernel meta-compiler written in Forth, which supports 8, 16, 32, and 64 bit, big-endian and little-endian architectures, as well as direct, indirect, and token threaded code, with or without headers, etc:
kernel.fth: https://github.com/MitchBradley/openfirmware/blob/master/for...
metacompile.fth: https://github.com/MitchBradley/openfirmware/blob/master/for...
The OpenFirmware kernel is a Forth meta-compiler, which can compile itself on any architecture, and also cross-compile for different target architectures.
It has cross-architecture extensions to FORTH (like \16 \32 comments and /n /n* generically typed words) that make it possible to write platform, word size, byte order, and threading independent code, and compile images (stripped or with headers) for embedded systems (like the OLPC boot ROMs) and new CPU architectures (like Sun's transition from 68K to SPARC), and share code with more powerful development environments.
The metacompilation part is really nice. Did the self-modification section of the essay not convey that to you? Because that's what it was :s I'll have to revise it.
I really want this essay to be definitive, so even after 4 revisions there is still some way to go. All the comments have been extremely helpful to further reach that goal :)
I didn't notice anything in the article that made me think of metacompilation, but maybe I missed it and should re-read.
Added: you should also look at Bill Muench's version of eForth including its metacompiler. I'm not that big a fan of eForth for practical use (it's TOO minimal) but wow it is simple.
Code-as-structure could be more conveniently expressed as language data structures nowadays.
[1] https://discord.gg/9DveEJ42
How so? Or would you agree that value is perhaps a more suitable word than importance? For me, these articles have such a tendency to fixate on the strengths of Forth that they have reduced Forth to those strengths in the eyes of many. TFA does a fair job of avoiding this and shows Forth more as a powerful and flexible general-purpose language than as a very niche language, but I think it still focuses a bit much on the strengths at the expense of the general picture.
You are a Forth expert compared to me and probably most of HN, so try and keep that in mind with your response if you could.
Edit: I am probably asking for insight into your workflow with Forth, but maybe not?
What I mean is that Forth as a programming language is kind of... not great? Like, it's kind of hard to read and hard to write.
For years I thought this might be just a question of familiarity, but now I'm resigned to the fact that I will probably not learn to read Forth as easily as I can read conventional infix syntax within my natural lifetime. I wrote my first RPN programs on an HP-38E calculator in about 01985, I wrote an RPL program to search my address book on my HP-48GX in the 90s, I wrote a parametric CAD system for laser cutting in PostScript, I wrote a quasi-Forth compiler that compiles itself to machine code, and I think I just have to give up on being able to read
as easily as […]. It might still be a problem of familiarity, rather than some kind of objective truth, but it's one I'm going to have to live with. (I'm absolutely sure that the reason I have to sound out Greek words letter by letter instead of reading them instantly the way I do in English, Spanish, French, or Portuguese is 100% a question of familiarity; it's completely implausible that Greek is objectively harder to read. I just have the Latin alphabet wired into my brain by decades of constant practice.)

There's a familiarity problem I flatter myself to think is separate, with Forth's vocabulary; for example, within takes its arguments in the order x min max, and because I've programmed much less in Forth than in other languages, I always have to look things like that up, whereas I know the order of arguments to read() or strcat() without having to do so.
It's not a huge difference from C; Forth has more metaprogramming and reflection power than C, but the syntax is less readable, and it's more error-prone in a variety of ways (parameter passing, recursion, types). Presumably infallible programmers would prefer Forth to C, since those weaknesses would not affect them, and they'd have less code to write. I am far from an infallible programmer.
But a lot of those weaknesses are because Forth is designed as a single language for the whole system: assembler, high-level programming, editor commands, debugger, "shell" commands, the whole works. So you have things like ? which is simply defined as : ? @ . ; and seems kind of goofy from a programming-language perspective—why would you dedicate a precious single-character word to printing out the value of a memory location? How often do you want to do that in the middle of your program? Why not just write @ . instead? Wouldn't ? be more valuable in a switch/case statement or something?
However, in the context where Forth grew up, the sibling of DTSS BASIC and DEBUG.COM and DDT, it makes perfect sense; if you've just tested a word (subroutine) that is supposed to change the value of a variable x, you want to be able to say x ? rather than typing out the whole x @ . phrase. It sounds trivial, but it's actually really important, especially if you can't touch-type, as most programmers couldn't at the time. BASIC did the same thing for the same reason: instead of typing print x or even printx you could type ?x to see the value of x.
Similarly, the lack of syntax is important for things like editor interaction, or interactively poking at hardware registers, or whatever. As Yosef Kreinin wrote in https://yosefk.com/blog/i-cant-believe-im-praising-tcl.html:
> The small overhead [of extra punctuation] is tolerable, though sucky, when you program, because you write the piece of code once and while you're doing it, you're concentrating on the task and its specifics, like the language syntax. When you're interacting with a command shell though, it's a big deal. You're not writing a program – you're looking at files, or solving equations, or single-stepping a processor. I have a bug, I'm frigging anxious, I gotta GO GO GO as fast as I can to find out what it is already, and you think now is the time to type parens, commas and quotation marks?! Fuck you! By which I mean to say, short code is important, short commands are a must.
So, Forth is designed so that you can use it as a command language and a high-level programming language and an assembly language. It's like Robert A. Heinlein's ideal unspecialized Renaissance-man language: it can change a diaper, plan an invasion, butcher a hog, program a computer, etc. This (necessarily in my view) involves some compromises—the best possible result will often be worse as a high-level programming language than a language that's designed for just that, and worse as a command language than a language that's designed for just that, and maybe worse as an assembly language too.
You can make a convincing argument for the general case of this with a 2×2 matrix of candidate language design features:
The argument is simply that the set of candidate language design features that go in boxes 1 and 2 is not exactly the empty set. It would be an astounding coincidence if it were, wouldn't it? And every time you add a feature from box 1 to your language, you make it better as a programming language and worse as a command language, and vice versa for box 2. Omitting a feature from box 1 makes your language better as a command language and worse as a programming language, and vice versa for box 2.

The more difficult argument to make is that the compromises are substantial. A skeptic might wonder whether the only compromises are trivial things like the ? I mentioned above. I think it's an argument Yossi has made well in the post I linked above, which has nothing specifically to do with Forth. Also, though, I think that a lot of Forth's design decisions that are unorthodox for programming languages, such as its lack of typing, its lack of syntax, and its lack of stack frames with local variables, are easily understood as accommodations for interactive use, and I think that they do in fact make it substantially worse as a programming language. This is highly debatable, and debated, but it is my current point of view.
In my view, the REPL somewhat makes up for Forth's weaknesses as a programming language in two ways: first, by allowing you to interactively test your code as you write it, and second, by freeing you from having to write user interface code that does things like parse command lines.
There are a few different ways that Forth encourages writing your code as a ravioli-code soup of tiny one-line definitions. Single-line definitions are easier to test interactively, and statically allocating your local variables allows you to share them between multiple definitions, which reduces the required parameter passing (the abstraction penalty). Implicit parameter passing also reduces the syntactic abstraction penalty of subroutine calls. And, barring inlining compiler optimizations, Forth is faster at calling subroutines than any other language (arguably except for other Forth-like things like FOCAL), reducing the abstraction penalty at runtime as well.
This is both good and bad. Ravioli code is more flexible, because you can call existing definitions in new contexts, but harder to understand, because the definition you're editing might be called from a context you aren't seeing. If you were infallible, this greater composability would enable you to bootstrap from nothing to whatever application you wanted to build with less total code. This makes Forth's drawbacks less serious for throwaway code (which doesn't need to be understood or maintained) and for infallible programmers.
Independent of any of this, the REPL is a huge advantage if you're exploring an unknown hardware platform that might be buggy. You probably need one, whether Forth or something else.
So, that's why I think the REPL is very important for understanding Forth—both its virtues and its vices. It is valuable, but it also makes UX demands on other parts of the language which make them worse in other ways.
Besides, once a C compiler is written for one platform, porting it to another one takes significantly less time than writing from scratch (especially if the compiler is written with portability in mind).
I didn't calculate it but if you're just dividing one number by the other then you're assuming the final code arrived fully-formed in one go - don't forget about refactoring, debugging, testing, etc. etc.
Every programmer thinks this is stupid the first time they see it, because we can all remember times we wrote 50 or 100 or 200 or 300 lines of code in a day, and that code worked. But other days we write zero lines of code, because we spend them in meetings, or debugging code, or arguing about the protocol design, or testing hardware we need to support, or testing our code. And sometimes we'll spend a week refactoring the codebase to reduce the number of lines of code (https://www.folklore.org/Negative_2000_Lines_Of_Code.html) so those 200 or 300 lines of code may not end up in the delivered product—if they were ever intended to, since test stubs and test scripts also don't count as delivered code.
And that's how we get to Peter Naur's "Programming as Theory Building": https://ratfactor.com/papers/naur1 because writing the code clearly isn't the bottleneck. It's figuring out how things have to work that bottlenecks us. It's more like proving a theorem than writing a novel. The code is a product of the process, but it's more like the exhaust of an engine than its power output.
And that's why the exponent of kSLOC is 1.05 instead of 1: bigger systems are harder to add to.
We had a discussion of this here 17 years ago https://news.ycombinator.com/item?id=333650 and maybe see also Forth authority Phil Koopman's take at https://betterembsw.blogspot.com/2010/05/only-10-lines-of-co... and Brian's take at https://blog.ndepend.com/mythical-man-month-10-lines-per-dev....
Why would you think that's different for any other language, like oh for example Forth?
Fairly counting SLOC is a tricky problem, but my count is 1119 lines of code for the C compiler (written in Forth, of course); that's less than 1/8th the count of chibicc, described as the smallest.
Note the point of that section was really that anyone using gcc or clang should acknowledge the real cost of using them.
https://git.sr.ht/~vdupras/duskos/tree/master/item/fs/doc/co...
is shortcutting different from short circuiting?
So, Dusk's compiler is not apples-to-apples comparable to the others, but comparable enough to give a ballpark idea that its code density compares very, very favorably.
It has the most charming illustrations I've ever seen in a text book.
[0] https://www.forth.com/starting-forth/
https://howerj.github.io/subleq.htm
The same, but multiplexing instructions (it runs much faster):
https://howerj.github.io/muxleq.htm