Physical substrate: Electronics
Computation theory: Turing machines
Smallest logical/physical parts: Transistors, Logic gates
Largest logical/physical parts: Chips
Lowest level programming: ARM/x64 instructions
First abstractions of programming: Assembly, C compiler
Software architecture: Unix kernel, Binary interfaces
User interface: Screen, Mouse, Keyboard, GUI
Does there exist a computer stack that changes all of these components? Or, at the least, one that uses electronics but substitutes something else for Turing machines and above?
1. Transmeta: https://en.wikipedia.org/wiki/Transmeta
2. Cell processor: https://en.wikipedia.org/wiki/Cell_(processor)
3. VAX: https://en.wikipedia.org/wiki/VAX (Was unusual for its time, but many concepts have since been adopted)
4. IBM z/Architecture: https://en.wikipedia.org/wiki/Z/Architecture (This stuff is completely unlike conventional computing, particularly the "self-healing" features.)
5. IBM TrueNorth processor: https://open-neuromorphic.org/blog/truenorth-deep-dive-ibm-n... (Cognitive/neuromorphic computing)
The Wikipedia link has it as its first reference, but it's honestly worth linking directly here. I highly recommend trying to get someone to give you a TSO/E account and picking up HLASM.
Not TSO/E, rather just plain old TSO. Not HLASM, rather its predecessor Assembler F (IFOX00). Still, if you get the hang of the 1970s version, the 2020s version is just adding stuff. And some of the stuff it adds is less unfamiliar (like Unix and Java).
Having a simulated 3033 running at 10+ MIPS is pretty nice though. (:
I've never used it, but there's a hacked up version that adds 31-bit addressing [0].
It is truly a hack though – porting 24-bit MVS to XA is a monumental task, not primarily due to address mode (you can always ignore new processor modes, just like how a 286-only OS will happily run on a 386 without realising it is doing so), but rather due to the fundamental incompatibility in the IO architecture.
The "proper" way to do it would be to have some kind of hypervisor which translates 370 IO operations to XA – which already exists, and has so for decades, it is called VM/XA, but sadly under IBM copyright restrictions just like MVS/XA is. I suppose someone could always write their own mini-hypervisor that did this, but the set of people with the time, inclination and necessary skills is approximately (if not exactly) zero.
So instead the hacky approach is to modify Hercules to invent a new hybrid "S/380" architecture which combines XA addressing with 370 IO. Given it never physically existed, I doubt Hercules will ever agree to upstream it.
Also, IIRC, it doesn't implement memory protection/etc for above-the-line addresses, making "MVS/380" essentially a single-tasking OS as far as 31-bit code goes. But the primary reason for its existence is that GCC can't compile itself under 24-bit since doing so consumes too much memory, and for that limited purpose you'd likely only ever run one program at a time anyway.
I guess the other way of solving the problem would have been to modify GCC to do application-level swapping to disk - which is what a lot of historical compilers did to fit big compiles into limited memory. But I guess making those kinds of radical changes to GCC is too much work for a hobby. Or pick a different compiler altogether – GCC was designed from the beginning for 32-bit machines, and alternatives would probably fare better in very limited memory – but people had already implemented 370 code generation for GCC (contemporary GCC supports 64-bit and 31-bit; I don't think contemporary mainline GCC supports 24-bit code generation any more, but people use an old version or branch which did). I wonder about OpenWatcom, since that's originally a 370 compiler, and I believe the 370 code generator is still in the source tree, although I'm not sure if anybody has tried to use it.
[0] https://mvs380.sourceforge.net/
To your note about a hypervisor though: I did consider going this route. I already find VM/370 something of a more useful system anyway, and having my own VM/XA of sorts is an entertaining prospect.
Primarily it would just intercept SIO/etc. instructions and replace them with the XA equivalents.
Another idea that comes to mind: you could locate the I/O instructions in MVS 3.8J and patch them over with branches to some new "I/O translation" module. The problem with that, I think, is that while the majority of I/O goes through a few central places in the code (IOS, invoked via the SVC call made by EXCP), there are I/O instructions splattered everywhere in less central parts of the system (VTAM, TCAM, utilities, etc.).
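A toy scan like this (my own sketch; SIO=0x9C, TIO=0x9D, HIO=0x9E per the 370 Principles of Operation) would find candidate patch sites, though telling instructions apart from data is exactly the hard part:

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Naive scan of a module image for S/370 I/O opcodes. Instructions
       are halfword-aligned, hence the step of 2. Every hit would need
       vetting before being overwritten with a branch to a translation
       stub - data bytes can look like opcodes too. */
    static void find_io_sites(const uint8_t *image, size_t len)
    {
        for (size_t i = 0; i < len; i += 2)
            if (image[i] >= 0x9C && image[i] <= 0x9E)
                printf("possible SIO/TIO/HIO at offset %06zX\n", i);
    }

    int main(void)
    {
        /* made-up image: LR 3,4 ; SIO ... ; BC ... */
        const uint8_t image[] = { 0x18, 0x34, 0x9C, 0x00, 0x47, 0xF0 };
        find_io_sites(image, sizeof image);
        return 0;
    }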
That makes me feel a little less confident in an MVS 3.8j patching effort.
Z is definitely interesting given its history with the IBM 360 and its 24-bit address space (24-bit micros existed in the 1980s, such as the 286, but I never had one that was straightforward to program in 24-bit mode), which, around the time the VAX came out, got expanded to 31 bits:
https://en.wikipedia.org/wiki/IBM_System/370-XA
To be clear, we are talking about 24-bit physical or virtual addressing (machines with a 24-bit data word were quite rare: mainly DSPs, plus some non-IBM mainframes like the SDS 940):
286’s 16-bit protected mode was heavily used by Windows 3.x in Standard mode. And even though 386 Enhanced Mode used 32-bit addressing, from an application developer viewpoint it was largely indistinguishable from 286 protected mode, prior to Win32s. And then Windows NT and 95 changed all that.
286’s protected mode was also heavily used by earlier DOS extenders, OS/2 1.x, earlier versions of NetWare and earlier versions of Unix variants such as Xenix. Plus 32-bit operating systems such as Windows 9x/Me/NT/2000/XP/etc and OS/2 2.x+ still used it for backward compatibility when running older 16-bit software (Windows 3.x and OS/2 1.x)
Other widely used CPU architectures with 24-bit addressing included anything with a Motorola 68000 or 68010 (32-bit addressing was only added from the 68020 onwards, while the 68012 had 31-bit addressing). So that includes early versions of classic MacOS, AmigaOS, Atari TOS - and also the Apple Lisa, various early Unix workstations, and umpteen random operating systems running on 68K which you may have never heard of (everything from CP/M-68K to OS-9 to VERSAdos to AMOS/L).
ARM1 (available as an optional coprocessor for the BBC Micro) and ARM2 (used in the earliest RISC OS systems) were slightly more than 24-bit, with 26-bit addressing. And some late pre-XA IBM mainframes actually used 26-bit physical addressing despite only having 24-bit virtual addresses - rather similar to how 32-bit Intel processors ended up with 36-bit physical addressing via PAE.
I didn't think the segment registers were a big hassle in 8086 real mode; in fact, I thought it was fun to do crazy stuff in assembly, such as using segment register values as long pointers to large objects (with 16-byte granularity). I think the segment registers would have felt like more of a hassle if I were writing larger programs (e.g. 64K data + 64K code + 64K stack gets you further towards utilizing a 640K address space than it does towards a 16M address space).
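For anyone who hasn't done it, the whole trick is that a segment register is just the upper 16 bits of a 20-bit, paragraph-aligned base. A minimal sketch in C of what the 8086 does in hardware:

    #include <stdint.h>
    #include <stdio.h>

    /* 8086 real-mode address arithmetic: a segment selects a 16-byte
       aligned "paragraph", so physical = segment * 16 + offset,
       giving a 1 MB (20-bit) space. */
    static uint32_t linear(uint16_t seg, uint16_t off)
    {
        return ((uint32_t)seg << 4) + (uint32_t)off;
    }

    int main(void)
    {
        /* Many seg:off pairs alias the same linear address: */
        printf("%05X\n", (unsigned)linear(0xB800, 0x0000)); /* B8000: CGA text buffer */
        printf("%05X\n", (unsigned)linear(0xB000, 0x8000)); /* B8000 again */
        return 0;
    }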
I recently discovered
https://en.wikipedia.org/wiki/Zilog_eZ80
and think that this 2001 design is close to an ideal 24-bit micro, in that both regular and index registers are extended to 24 bits: you have long pointers and all the facilities to work in a 24-bit "problem space", based on an architecture that is reasonable to write compilers for. It would be nice to have an MMU so you could have a real OS - even something with bounds registers would please me - but with many reasonably priced dev boards like
https://www.tindie.com/products/lutherjohnson/makerlisp-ez80...
it is a dream you can live. I don't think anything else comes close, certainly not
https://en.wikipedia.org/wiki/WDC_65C816
where emulating long pointers would have been a terrible hassle and which didn't do anything to address the compiler hostility of the 6502.
---
Now if I wanted the huge register file of the old 360 I'd go to the thoroughly 8-bit AVR-8, where I sometimes have enough registers for the inner loop and interrupt handler variables. I use 24-bit pointers on AVR-8 to build data structures stored in flash for graphics and such, and since even 16-bit operations are spelled out, 24-bit is a comfortable stop on the way to larger things.
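In practice that looks something like this (sketch assuming avr-gcc; pgm_read_byte_far and pgm_get_far_address are real avr-libc facilities, but the sprite table is made up, and this only compiles for an AVR target):

    #include <avr/pgmspace.h>
    #include <stdint.h>

    /* Data parked in flash, not SRAM. */
    const uint8_t sprite[] PROGMEM = { 0x3C, 0x42, 0x81, 0x81, 0x42, 0x3C };

    uint8_t sprite_byte(uint8_t i)
    {
        /* A plain 16-bit pointer can't reach above 64 KB of flash, so
           the "far" accessors take a 24/32-bit address instead. */
        uint32_t base = pgm_get_far_address(sprite);
        return pgm_read_byte_far(base + i);
    }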
As an application programmer it wasn't much different from 16-bit real mode. Windows 3.x, OS/2 1.x and 16-bit DOS extenders gave you APIs for manipulating the segments (GDT/LDT/etc). You'd say you want to allocate a 64KB segment of memory, and it would give you a selector number you could load into your DS or ES register – not fundamentally different from DOS. From an OS programmer's perspective it was more complex, of course.
It was true that with 16-bit real mode you could relatively easily acquire >64KB of contiguous memory; in 16-bit protected mode that was more difficult to come by. (Although the OS could allocate you adjacent selector numbers – but I'm not sure if 16-bit Windows / OS/2 / etc. actually offered that as an option.)
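Under 16-bit Windows the dance looked roughly like this (hypothetical fragment; GlobalAlloc/GlobalLock are the real Win16 APIs, error handling omitted, and it only compiles with a 16-bit toolchain that understands the far keyword):

    #include <windows.h>

    void demo(void)
    {
        /* Ask for a 64 KB segment; what comes back identifies an LDT
           selector rather than a real-mode paragraph address. */
        HGLOBAL h = GlobalAlloc(GMEM_MOVEABLE, 65536L);
        char far *p = (char far *)GlobalLock(h);  /* selector:offset pointer */
        p[0] = 42;
        GlobalUnlock(h);
        GlobalFree(h);
    }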
That said, 16-bit GDTs/LDTs have a relatively sensible logical structure. Their 32-bit equivalents were a bit of a mess due to backward compatibility requirements (the upper bits of the base and limit being stored in separate fields from the lower bits). And while the 386 has a much richer feature set than the 286, those added features bring a lot of complexity that 286 OS developers didn't need to worry about, even if you try your hardest (as contemporary OSes such as Linux and 32-bit Windows do) to avoid the 386's more esoteric features (such as hardware task switching and call gates).
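To make the "bit of a mess" concrete, here's the 386 descriptor layout as a C struct (mirroring the documented Intel format; the scattered base/limit fields are the backward-compatibility scar tissue):

    #include <stdint.h>

    /* 8-byte 386 segment descriptor. Bits 0-15 of base and limit sit
       where the 286 put them; the 386 extensions were bolted on around
       the edges of the formerly reserved word. */
    struct seg_descriptor {
        uint16_t limit_0_15;        /* limit bits 0-15  (286-era field) */
        uint16_t base_0_15;         /* base bits 0-15   (286-era field) */
        uint8_t  base_16_23;        /* base bits 16-23  (286-era field) */
        uint8_t  access;            /* present, DPL, type               */
        uint8_t  limit_16_19_flags; /* limit bits 16-19 + G/D/L/AVL (386) */
        uint8_t  base_24_31;        /* base bits 24-31  (386 addition)  */
    };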
Pretty much. The main issue with modern Unx on a VAX is memory size & performance, which combine to make native compiling under recent gcc versions "problematic", so cross building in gcc-10 or 12 is much easier.
The profusion of (from today's perspective) whacky addressing modes has made maintaining gcc for VAX more effort than it would be otherwise, but it's still there and in use for one modern UNx https://wiki.netbsd.org/ports/vax/ :)
You can download https://opensimh.org/ to get a VAX emulator and boot up to play
Simh also emulates a selection of other interesting and unusual architectures https://opensimh.org/simulators/
Whatever happened to them ...
They had a somewhat serious go at being the "third wheel" back in the late 1990s to early 2000s.
However, between Transmeta's idea and shipping a product, Intel's Israel team came up with the Intel Core series: MUCH more energy-efficient, much better performance per clock, and ideal for lower-power platforms like laptops.
Sadly, Transmeta no longer had a big enough advantage, sales decreased, and I heard many of the engineers ended up at Nvidia, which did use some of their ideas in an Nvidia product.
Funny how that came about. Talent finds a way. Now they're all sitting in a $3T ship.
https://fmt.ewi.utwente.nl/media/59.pdf
https://www.usenix.org/system/files/login/articles/546-mirtc...
http://doc.cat-v.org/plan_9/IWP9/2007/11.highperf.lucho.pdf
"computational model": finite state machine, Turing machine, Petri nets, data-flow, stored program (a.k.a. Von Neumann, or Princeton), dual memory (a.k.a. Harvard), cellular automata, neural networks, quantum computers, analog computers for differential equations
"instruction set architecture": ARM, x86, RISC-V, IBM 360
"instruction set style": CISC, RISC, VLIW, MOVE (a.k.a TTA - Transport Triggered Architecture), Vector
"number of addresses": 0 (stack machine), 1 (accumulator machine), 2 (most CISCs), 3 (most RISCs), 4 (popular with sequential memory machines like Turing's ACE or the Bendix G-15)
"micro-architecture": single cycle, multi-cycle, pipelines, super-pipelined, out-of-order
"system organization": distributed memory, shared memory, non uniform memory, homogeneous, heterogeneous
With these different dimensions for "computer architecture" you will have different answers for which was the weirdest one.
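To make the "number of addresses" dimension concrete, here's the same d = a + b*c emulated in each style (toy C; the instruction sequences in the comments are generic, not any particular ISA):

    #include <stdio.h>

    int main(void)
    {
        int a = 2, b = 3, c = 4, d, acc, t;
        int stack[8], sp = 0;

        /* 0-address (stack machine): PUSH a; PUSH b; PUSH c; MUL; ADD; POP d */
        stack[sp++] = a;
        stack[sp++] = b;
        stack[sp++] = c;
        sp--; stack[sp - 1] = stack[sp - 1] * stack[sp];  /* MUL */
        sp--; stack[sp - 1] = stack[sp - 1] + stack[sp];  /* ADD */
        d = stack[--sp];

        /* 1-address (accumulator): LOAD b; MUL c; ADD a; STORE d */
        acc = b; acc *= c; acc += a; d = acc;

        /* 2-address (typical CISC): MOV t,b; MUL t,c; ADD t,a; MOV d,t */
        t = b; t *= c; t += a; d = t;

        /* 3-address (typical RISC): MUL t,b,c; ADD d,t,a */
        t = b * c; d = t + a;

        printf("%d\n", d);  /* 14, whichever style computed it */
        return 0;
    }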
Not 'weird' but any architecture that doesn't have an 8-bit byte causes questions and discussion.
E.g. the Texas Instruments DSP chip family for digital signal processing: they're all about deeply pipelined FFT computations with floats and doubles, not piddling about with 8-bit ASCII. There are no hardware-level bit operations to speak of, and the smallest addressable memory size is either 32 or 64 bits.
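You can see this from portable C, because it shows up in the basic type sizes. On a typical PC this prints 8/1/4; on a word-addressed TI DSP such as the C3x/C4x family (my example - check your target's manual) it would print 32/1/1:

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        /* On a word-addressed DSP, char is one 32-bit word: sizeof(char)
           is still 1 by definition, but CHAR_BIT is 32 and sizeof(int)
           is 1 as well. */
        printf("CHAR_BIT     = %d\n", CHAR_BIT);
        printf("sizeof(char) = %d\n", (int)sizeof(char));
        printf("sizeof(int)  = %d\n", (int)sizeof(int));
        return 0;
    }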
It's the response to the observation that most of the transistors in a computer are idle at any given instant.
There's a full rabbit hole's worth of advantages to this architecture once you really dig into it.
Description https://esolangs.org/wiki/Bitgrid
Emulator https://github.com/mikewarot/Bitgrid
Surprising that nobody has mentioned it yet: https://www.greenarraychips.com/ - albeit perhaps not weird, just different
There are secondary consequences of breaking computation down to a directed acyclic graph of binary logic operations. You can guarantee runtime, as you know a priori how long each step will take. Splitting up computation to avoid the complications of Amdahl's law should be fairly easy.
I hope to eventually build a small array of Raspberry Pi Pico modules that can emulate a larger array than any one module can handle. Linear scaling is a given.
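For anyone curious, a single cell is tiny to emulate. A minimal sketch from my reading of the esolangs page (4 input bits, 4 output bits, one 16-entry truth table per output - treat the details as my assumptions):

    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        uint16_t lut[4];  /* one 16-bit truth table per output bit */
    } Cell;

    /* Evaluate one cell: 'in' is a 4-bit input vector, the result is 4
       output bits. Constant time per cell, which is where the runtime
       guarantee comes from. */
    static unsigned cell_eval(const Cell *c, unsigned in)
    {
        unsigned out = 0;
        for (int i = 0; i < 4; i++)
            out |= ((c->lut[i] >> (in & 0xFu)) & 1u) << i;
        return out;
    }

    int main(void)
    {
        Cell c = { { 0x6666, 0, 0, 0 } };  /* output 0 = in0 XOR in1 */
        for (unsigned in = 0; in < 4; in++)
            printf("in=%u -> out=%u\n", in, cell_eval(&c, in));
        return 0;
    }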
Regarding Amdahl's law and avoiding its complications, this fits:
https://duckduckgo.com/?q=Apple-CORE+D-RISC+SVP+Microgrids+U...
(Not limited to SPARC, conceptually it's applicable almost anywhere else)
From the software-/programming-/compiler-side this fits right on top of it:
https://duckduckgo.com/?q=Dybvig+Nanopass
(Also conceptually, doesn't have to be Scheme, but why not? It's nice.)
Website: https://www.greenarraychips.com/home/documents/index.php#arc...
PDF with technical overview of one of their chips: https://www.greenarraychips.com/home/documents/greg/PB003-11...
Discussed:
* https://news.ycombinator.com/item?id=23142322
* https://comp.lang.forth.narkive.com/y7h1mSWz/more-thoughts-o...
> The name, from "transistor" and "computer", was selected to indicate the role the individual transputers would play: numbers of them would be used as basic building blocks in a larger integrated system, just as transistors had been used in earlier designs.
https://en.wikipedia.org/wiki/Transputer
let us also not forget The Itanic
Mill CPU (so far only patent-ware but interesting nevertheless): https://millcomputing.com/
zk-STARK virtual machines:
https://github.com/TritonVM/triton-vm
https://github.com/risc0/risc0
They're "just" bounded Turing machines with extra cryptography. The VM architectures have been optimized for certain cryptographic primitives so that you can prove properties of arbitrary programs, including the cryptographic verification itself. This lets you e.g. play turn-based games where you commit to make a move/action without revealing it (cryptographic fog-of-war):
https://www.ingonyama.com/blog/cryptographic-fog-of-war
The reason why this requires a specialised architecture is that in order to prove something about the execution of an arbitrary program, you need to arithmetize the entire machine (create a set of equations that are true when the machine performs a valid step, where these equations also hold for certain derivatives of those steps).
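A toy illustration of what "arithmetize" means (my own example, not any real STARK VM): take a machine whose only operation is acc <- acc + 1. Its execution trace is valid iff every adjacent pair of rows satisfies one polynomial constraint over a finite field:

    #include <stdint.h>
    #include <stdio.h>

    #define P 97u  /* toy prime modulus; real systems use large fields */

    /* Transition constraint: next - (cur + 1) == 0 (mod P) at every
       step. A prover convinces a verifier this polynomial identity
       holds without the verifier replaying the whole trace. */
    static int trace_is_valid(const uint32_t *trace, int steps)
    {
        for (int i = 0; i + 1 < steps; i++)
            if (trace[i + 1] % P != (trace[i] + 1) % P)
                return 0;  /* constraint violated at step i */
        return 1;
    }

    int main(void)
    {
        uint32_t good[] = {0, 1, 2, 3, 4};
        uint32_t bad[]  = {0, 1, 5, 6, 7};
        printf("%d %d\n", trace_is_valid(good, 5), trace_is_valid(bad, 5));  /* 1 0 */
        return 0;
    }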
The basic limit is the Curie point of the cores, and the source of clock drive signals.
https://en.m.wikipedia.org/wiki/Magnetic_logic
However... it's unclear how thermally conductive the "atmosphere" there is; it might make heat engines unworkable, no matter how powerful.
It was quite a while ago and my memory is hazy tbh, but I put some quick notes here at the time: https://liza.io/alife-2020-soft-alife-with-splat-and-ulam/
Though I think it will be difficult to shift the narrative from the better-performance-at-all-costs mindset toward something like this. For almost every application, you'd probably be better off worrying about integrity than raw speed.
Now working on the T2 Tile Project https://www.youtube.com/@T2TileProject
https://en.wikipedia.org/wiki/Intel_iAPX_432
Lightmatter: matrix multiply via optical interferometers
Parametron: coupled oscillator phase logic
Rapid single flux quantum logic: high-speed pulse logic
Asynchronous logic
https://en.wikipedia.org/wiki/Unconventional_computing
https://beza1e1.tuxen.de/articles/accidentally_turing_comple...
https://gwern.net/turing-complete
https://web.archive.org/web/20210214020524/https://stedolan....
Removing all but the mov instruction from future iterations of the x86 architecture would have many advantages: the instruction format would be greatly simplified, the expensive decode unit would become much cheaper, and silicon currently used for complex functional units could be repurposed as even more cache. As long as someone else implements the compiler.
A C compiler exists already, based on LCC, and it's called the movfuscator.
https://github.com/xoreaxeaxeax/movfuscator
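The core trick, in C clothing (my own toy analogue, not lifted from the movfuscator source): a compare-and-branch becomes a computed address, so selection is just a mov from a table:

    #include <stdio.h>

    /* Branchless max, assuming 32-bit int: store both candidates (two
       movs), derive an index from the sign bit of the difference, and
       read the answer back with a single indexed load (another mov).
       No jump needed. (Caveat: a - b can overflow for extreme inputs;
       fine for a sketch.) */
    static int max_no_branch(int a, int b)
    {
        int table[2] = { a, b };
        unsigned sel = (unsigned)(a - b) >> 31;  /* 1 if a < b, else 0 */
        return table[sel];
    }

    int main(void)
    {
        printf("%d %d\n", max_no_branch(3, 7), max_no_branch(9, 2));  /* 7 9 */
        return 0;
    }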
https://en.wikipedia.org/wiki/Burroughs_Large_Systems
They had an attached scientific processor to do vector and array computations.
https://bitsavers.org/pdf/burroughs/BSP/BSP_Overview.pdf
I knew some of the engineers at Unisys who were still supporting ClearPath and Libra. Even they thought Burroughs was weird...
There were interesting procedure names in the Master Control Program (yes, Tron fans, the real MCP). JEDGARHOOVER was central to system-level security. I taught the customer-facing MCP class for a few years.
In the early days they gave you the source code and it wasn't uncommon for people to make patches and share them around. Everyone sent patches into the plant and in a release or two you would see them come back as official code.
https://en.m.wikipedia.org/wiki/Reservoir_computing
https://news.ycombinator.com/item?id=11425533 https://www.cs.virginia.edu/~robins/Computing_Without_Clocks...
memristor - https://www.computer.org/csdl/magazine/mi/2018/05/mmi2018050...
There is also Soap Bubble Computing, or various forms of annealing computing (like quantum annealing or adiabatic quantum computation), where you set up your computation as the optimal value of a physical system you can define.
https://en.wikipedia.org/wiki/Barrelfish_(operating_system)
https://barrelfish.org/publications/TN-000-Overview.pdf
Machine code including instructions and data was all printable characters, so you could punch an executable program right onto a card, no translation needed. You could put a card in the reader, press a button, and the card image would be read into memory and executed, no OS needed. Some useful utilities -- list a card deck on the printer, copy a card deck to tape -- fit on a single card.
https://en.wikipedia.org/wiki/IBM_1401
https://en.m.wikipedia.org/wiki/Water_integrator
Computation Theory: Cognitive Processes
Smallest parts: Neurons
Largest parts: Brains
Lowest level language: Proprietary
First abstractions of programming: Bootstrapped / Self-learning
Software architecture: Maslow's Theory of Needs
User Interface: Sight, Sound
144 small computers in a grid that can communicate with each other
http://phys.org/news/2012-04-scientists-crab-powered.html
Yes, real crabs
If you are looking for strangeness, the 1990s-to-early-2000s microcontrollers had I/O ports, but every single I/O port was different. None of them had a standard, so we couldn't (for example) plug in a 10-pin header and connect the same peripheral to any of the I/O ports on a single microcontroller, much less to any microcontroller in a family of microcontrollers.
I think Transport-triggered architecture (https://en.wikipedia.org/wiki/Transport_triggered_architectu...) is something still not fully explored.
As for weird, try this: ENIAC instructions modified themselves. Back then, an "instruction" (they called them "orders") included the addresses of the operands and destination (which was usually the accumulator). So if you wanted to sum the numbers in an array, you'd put the address of the first element in the instructions, and as ENIAC repeated that instruction (a specified number of times), the address in the instruction would be auto-incremented.
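In modern terms the idiom looks like this (toy C emulation, my own sketch): the order carries its operand address, and the machine bumps that field on each repetition:

    #include <stdio.h>

    struct order { int opcode; int addr; };  /* opcode ignored in this toy */

    int main(void)
    {
        int mem[5] = {10, 20, 30, 40, 50};
        struct order add = {1, 0};  /* "add mem[addr] to the accumulator" */
        int acc = 0;
        for (int rep = 0; rep < 5; rep++) {
            acc += mem[add.addr];
            add.addr++;  /* the machine auto-increments the address field */
        }
        printf("%d\n", acc);  /* 150 */
        return 0;
    }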
Or how about this: a computer with NO 'jump' or 'branch' instruction? The ATLAS-1 was a landmark of computing, having invented most of the things we take for granted now, like virtual memory, paging, and multi-programming. But it had NO instruction for altering the control flow. Instead, the programmer would simply _write_ to the program counter (PC). Then the next instruction would be fetched from the address in the PC. If the programmer wanted to return to the previous location (a "subroutine call"), they'd be obligated to save what was in the PC before overwriting it. There was no stack, unless you count a programmer writing the code to save a specific number of PC values, and adding code to all subroutines to fetch the old value and restore it to the PC. I do admire the simplicity -- want to run code at a different address? Tell me what it is and I'll just go there, no questions asked.
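And the Atlas-style call/return, emulated (again a toy sketch, not real Atlas code): the PC is just a variable you store to, and the callee returns by restoring a hand-saved copy:

    #include <stdio.h>

    enum { MAIN = 0, SUB = 100, DONE = 200 };

    int main(void)
    {
        int pc = MAIN;
        int saved_pc = 0;
        for (;;) {
            switch (pc) {
            case MAIN:
                saved_pc = pc + 1;  /* remember where to come back to */
                pc = SUB;           /* "call": just write the PC */
                break;
            case MAIN + 1:
                pc = DONE;
                break;
            case SUB:
                puts("in subroutine");
                pc = saved_pc;      /* "return": restore the saved PC */
                break;
            case DONE:
                return 0;
            }
        }
    }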
Or maybe these shouldn't count as "weird", because no one had yet figured out what a computer should be. There was no "standard" model (despite Von Neumann) for the design of a machine, and cost considerations plus new developments (spurred by wanting better computers) meant that the "best" design was constantly changing.
Consider that post-WWII, some materials were hard to come by. So much so that one researcher used a Slinky (yes, the toy) as a memory storage device. And had it working. They wanted acoustic delay lines (the standard of the time), but the Slinky was more available. So it did the same job, just with a different medium.
I've spent a lot of time researching these early machines, wanting to trace the path each item in a now-standard model of an idealized computer took to get there. It's full of twists and turns, dead ends and unintentional invention.
https://arstechnica.com/information-technology/2020/05/gears...
There is also the analog computer The Analog Thing https://the-analog-thing.org/
Compute with mushrooms, compute near black holes, etc.
great collection of interesting links - kudos to all! :=)
idk ... but isn't the "general" architecture of most of our computers "von neumann"!?
* https://en.wikipedia.org/wiki/Von_Neumann_architecture
but what i miss from the various lists is the "transputer" architecture / ecosystem from INMOS - a concept of heavily networked arrays of small cores from the 1980s
about transputers
* https://en.wikipedia.org/wiki/Transputer
about INMOS
* https://en.wikipedia.org/wiki/Inmos
i had the chance to take a look at a "real life" ATW - atari transputer workstation - back in the day at my university / CS department :))
mainly used with the Helios operating-system
* https://en.wikipedia.org/wiki/HeliOS
to be programmed in occam
* https://en.wikipedia.org/wiki/Occam_(programming_language)
the "atari transputer workstation" ~ more or less a "smaller" atari mega ST as the "host node" connected to an (extendable) array of extension-cards containing the transputer-chips:
* https://en.wikipedia.org/wiki/Atari_Transputer_Workstation
just my 0.02€
That's something I was also curious about and it turns out Arduinos use the Harvard architecture. You might say Arduinos are not really "computers" but after a bit of googling I found an Apple II emulator running on an Arduino and, well, an Apple II is generally accepted to be a computer :)
if i remember it correctly, the main difference of the "harvard" architecture was that it uses separate data and program/instruction buses ...
* https://en.wikipedia.org/wiki/Harvard_architecture
i think texas instruments 320x0 signal processors used this architecture ... back in - you guessed it! - the 1980s.
* https://en.wikipedia.org/wiki/TMS320
ah, they use a modified harvard architecture :))
* https://en.wikipedia.org/wiki/Modified_Harvard_architecture
cheers!
https://en.wikipedia.org/wiki/Logic_gate#Non-electronic_logi...
https://spectrum.ieee.org/superconductor-ics-the-100ghz-seco...
https://research.ece.cmu.edu/piperench/
https://dttw.tech/posts/rJHDh3RLb
https://en.wikipedia.org/wiki/Anton_(computer)
The original 2019 post by Garbage [1] attracted the most comments. But in a reply to one of the subsequent posts [2] I talk a bit about actually coding for the thing. :)
[1] https://news.ycombinator.com/item?id=7616831
[2] https://news.ycombinator.com/item?id=20565779
Usagi Electric 1-Bit Breadboard Computer P.01 – The MC14500 and 555 Clock Circuit https://www.youtube.com/watch?v=oPA8dHtf_6M&list=PLnw98JPyOb...