Ask HN: Weirdest Computer Architecture?

My limited understanding of “the stack” is:

  Physical substrate: Electronics
  Computation theory: Turing machines
  Smallest logical/physical parts: Transistors, Logic gates
  Largest logical/physical parts: Chips
  Lowest level programming: ARM/x64 instructions 
  First abstractions of programming: Assembly, C compiler
  Software architecture: Unix kernel, Binary interfaces
  User interface: Screen, Mouse, Keyboard, GNU
Does there exist a computer stack that changes all of these components?

Or at the least uses electronics but substitutes something else for Turing machines and above.

81 points | by bckr 43 days ago

52 comments

  • runjake 43 days ago
    Here are some architectures that might interest you. Note these are links that lead to rabbit holes.

    1. Transmeta: https://en.wikipedia.org/wiki/Transmeta

    2. Cell processor: https://en.wikipedia.org/wiki/Cell_(processor)

    3. VAX: https://en.wikipedia.org/wiki/VAX (Was unusual for its time, but many concepts have since been adopted)

    4. IBM zArchitecture: https://en.wikipedia.org/wiki/Z/Architecture (This stuff is completely unlike conventional computing, particularly the "self-healing" features.)

    5. IBM TrueNorth processor: https://open-neuromorphic.org/blog/truenorth-deep-dive-ibm-n... (Cognitive/neuromorphic computing)

    • nrr 42 days ago
      For those really wanting to dig into z/Architecture: <https://www.ibm.com/docs/en/module_1678991624569/pdf/SA22-78...>

      The Wikipedia link has it as its first reference, but it's honestly worth linking directly here. I highly recommend trying to get someone to give you a TSO/E account and picking up HLASM.

      • skissane 39 days ago
        I put MVS 3.8J in a Docker image: https://github.com/skissane/mvs38j

        Not TSO/E, just plain old TSO. Not HLASM, but its predecessor Assembler F (IFOX00). Still, if you get the hang of the 1970s version, the 2020s version just adds stuff, and some of what it adds is actually more familiar (like Unix and Java).

        • nrr 38 days ago
          About the only thing that's truly limiting about using such an old release of MVS is the 24-bit addressing and maybe the older pre-XA I/O architecture.

          Having a simulated 3033 running at 10+ MIPS is pretty nice though. (:

          • skissane 38 days ago
            > About the only thing that's truly limiting about using such an old release of MVS is the 24-bit addressing

            I've never used it, but there's a hacked up version that adds 31-bit addressing [0].

            It is truly a hack though – porting 24-bit MVS to XA is a monumental task, not primarily due to address mode (you can always ignore new processor modes, just like how a 286-only OS will happily run on a 386 without realising it is doing so), but rather due to the fundamental incompatibility in the IO architecture.

            The "proper" way to do it would be to have some kind of hypervisor which translates 370 IO operations to XA – which already exists, and has so for decades, it is called VM/XA, but sadly under IBM copyright restrictions just like MVS/XA is. I suppose someone could always write their own mini-hypervisor that did this, but the set of people with the time, inclination and necessary skills is approximately (if not exactly) zero.

            So instead the hacky approach is to modify Hercules to invent a new hybrid "S/380" architecture which combines XA addressing with 370 IO. Given it never physically existed, I doubt Hercules will ever agree to upstream it.

            Also, IIRC, it doesn't implement memory protection/etc for above-the-line addresses, making "MVS/380" essentially a single-tasking OS as far as 31-bit code goes. But the primary reason for its existence is that GCC can't compile itself under 24-bit since doing so consumes too much memory, and for that limited purpose you'd likely only ever run one program at a time anyway.

            I guess the other way of solving the problem would have been to modify GCC to do application-level swapping to disk - which is what a lot of historical compilers did to fit big compiles into limited memory. But I guess making those kinds of radical changes to GCC is too much work for a hobby. Or pick a different compiler altogether – GCC was designed from the beginning for 32-bit machines, and alternatives would probably fare better in very limited memory – but people had already implemented 370 code generation for GCC (contemporary GCC supports 64-bit and 31-bit; I don't think contemporary mainline GCC supports 24-bit code generation any more, but people use an old version or branch which did). I wonder about OpenWatcom, since that's originally a 370 compiler, and I believe the 370 code generator is still in the source tree, although I'm not sure if anybody has tried to use it.

            [0] https://mvs380.sourceforge.net/

            • nrr 38 days ago
              Yeah, I've wondered what the lift would be to backport XA I/O to MVS 3.8j, among other things, but given that it's a pretty pervasive change to the system, I'm not surprised to learn that it's pretty heavy.

              To your note about a hypervisor though: I did consider going this route. I already find VM/370 something of a more useful system anyway, and having my own VM/XA of sorts is an entertaining prospect.

              • skissane 38 days ago
                It arguably doesn't require anything as remotely complex/feature-rich as full VM/XA: it wouldn't need to support multiple virtual machines, or complicated I/O virtualisation.

                Primarily just intercept SIO/etc instructions, and replace them with the XA equivalent.

                Another idea that comes to mind: you could locate the I/O instructions in MVS 3.8J and patch them over with branches to some new "I/O translation" module. The problem with that, I think, is that while the majority of I/O goes through a few central places in the code (IOS invoked via SVC call made by EXCP), there are I/O instructions scattered everywhere in less central parts of the system (VTAM, TCAM, utilities, etc).

                • nrr 38 days ago
                  I leaned into the "well, what if my own VM/XA" because, uh, VM/CMS has the odd distinction among IBM's operating systems of the era of being both (1) source available and (2) site-assemblable. I've gone through my fair share of CMS and CP generations, which felt like a more complete rebuild of those nuclei than the MVS sysgens I've done.

                  That there makes me feel a little less confident in an MVS 3.8j patching effort.

    • PaulHoule 39 days ago
      I wouldn't say the VAX was unusual even though it was a pathfinder in that it showed what 32-bit architectures were going to look like. In the big picture the 386, 68040, SPARC and other chips since then have looked a lot like a VAX, particularly in how virtual memory works. There's no fundamental problem with getting a modern Unix to run on a VAX except for all the details.

      Z is definitely interesting for its history with the IBM 360 and its 24-bit address space, which, around the time the VAX came out, got expanded to 31 bits. (24-bit micros existed in the 1980s, such as the 286, but I never had one that was straightforward to program in 24-bit mode.)

      https://en.wikipedia.org/wiki/IBM_System/370-XA

      • skissane 39 days ago
        > 24 bit micros existed in the 1980s such as the 286 but I never had one that was straightforward to program in 24 bit mode

        Making clear that we are talking about 24-bit physical or virtual addressing (machines with a 24-bit data word were quite rare, mainly DSPs, also some non-IBM mainframes like the SDS 940):

        286’s 16-bit protected mode was heavily used by Windows 3.x in Standard mode. And even though 386 Enhanced Mode used 32-bit addressing, from an application developer viewpoint it was largely indistinguishable from 286 protected mode, prior to Win32s. And then Windows NT and 95 changed all that.

        286’s protected mode was also heavily used by earlier DOS extenders, OS/2 1.x, earlier versions of NetWare and earlier versions of Unix variants such as Xenix. Plus 32-bit operating systems such as Windows 9x/Me/NT/2000/XP/etc and OS/2 2.x+ still used it for backward compatibility when running older 16-bit software (Windows 3.x and OS/2 1.x)

        Other widely used CPU architectures with 24-bit addressing included anything with a Motorola 68000 or 68010 (32-bit addressing was only added with the 68020 onwards, while the 68012 had 31-bit addressing). So that includes early versions of classic MacOS, AmigaOS, Atari TOS - and also Apple Lisa, various early Unix workstations, and umpteen random operating systems which ran under 68K which you may have never heard of (everything from CP/M-68K to OS/9 to VERSAdos to AMOS/L).

        ARM1 (available as an optional coprocessor for the BBC Micro) and ARM2 (used in the earliest RISC OS systems) were slightly more than 24-bit, with 26-bit addressing. And some late pre-XA IBM mainframes actually used 26-bit physical addressing despite only having 24-bit virtual addresses. Rather similar to how 32-bit Intel processors ended up with 36-bit physical addressing via PAE

        • PaulHoule 39 days ago
          I understand that 286 protected mode still made you mess around with segment registers; if you wanted to work with 24-bit long pointers you had to emulate that behavior with the segment registers, and it was a hassle.

          I didn't think the segment registers were a big hassle in 8086 real mode; in fact, I thought it was fun to do crazy stuff in assembly, such as using segment register values as long pointers to large objects (with 16-byte granularity). I think the segment registers would have felt like more of a hassle if I were writing larger programs (e.g. 64k data + 64k code + 64k stack gets you further towards utilizing a 640k address space than it does towards a 16M address space).
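
          A rough sketch of that real-mode trick: the 8086 forms a 20-bit linear address as segment*16 + offset, so a bare segment value can serve as a paragraph-granular (16-byte) pointer into the 1 MB space. The helper names here are invented, just for illustration:

            #include <stdint.h>
            #include <stdio.h>

            /* 8086 real mode: linear address = segment * 16 + offset (20 bits). */
            static uint32_t linear(uint16_t seg, uint16_t off) {
                return ((uint32_t)seg << 4) + off;
            }

            /* Treat a bare segment value as a "long pointer" with 16-byte
               granularity: object n starts at paragraph base + n * paras. */
            static uint16_t object_seg(uint16_t base, uint16_t n, uint16_t paras) {
                return base + n * paras;
            }

            int main(void) {
                uint16_t seg = object_seg(0x1000, 3, 64); /* 3rd 1 KB (64-paragraph) object */
                printf("object 3 starts at linear 0x%05X\n", linear(seg, 0));
                return 0;
            }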

          I recently discovered

          https://en.wikipedia.org/wiki/Zilog_eZ80

          and think that 2001 design is close to an ideal 24-bit micro, in that both regular and index registers are extended to 24 bits: you have long pointers and all the facilities to work in a 24-bit "problem space", based on an architecture that is reasonable to write compilers for. It would be nice to have an MMU so you could have a real OS (even something with bounds registers would please me), but with many reasonably priced dev boards like

          https://www.tindie.com/products/lutherjohnson/makerlisp-ez80...

          it is a dream you can live. I don't think anything else comes close, certainly not

          https://en.wikipedia.org/wiki/WDC_65C816

          where emulating long pointers would have been a terrible hassle and which didn't do anything to address the compiler hostility of the 6502.

          ---

          Now if I wanted the huge register file of the old 360, I'd go to the thoroughly 8-bit AVR-8, where I sometimes have enough registers for my inner loop and interrupt handler variables. I use 24-bit pointers on AVR-8 to build data structures stored in flash for graphics and such, and since even 16-bit operations are spelled out, 24-bit is a comfortable stop on the way to larger things.
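
          For flavor, a host-side sketch of the packed 24-bit pointer idea (the struct layout and helper names are an illustration of the concept, not avr-libc API):

            #include <stdint.h>
            #include <stdio.h>

            /* A 24-bit "far" pointer stored as three bytes, as you might embed
               in a flash-resident data structure on an 8-bit MCU with more
               than 64 KB of flash. */
            typedef struct { uint8_t b[3]; } ptr24;

            static ptr24 pack24(uint32_t addr) {
                ptr24 p;
                p.b[0] = (uint8_t)addr;
                p.b[1] = (uint8_t)(addr >> 8);
                p.b[2] = (uint8_t)(addr >> 16);
                return p;
            }

            static uint32_t unpack24(ptr24 p) {
                return (uint32_t)p.b[0] | ((uint32_t)p.b[1] << 8)
                                        | ((uint32_t)p.b[2] << 16);
            }

            int main(void) {
                ptr24 p = pack24(0x01ABCD);   /* an address above the 64 KB line */
                printf("round-trip: 0x%06X\n", unpack24(p));
                return 0;
            }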

          • skissane 38 days ago
            > I understand the 286 protected mode still made you mess around with segment registers, if you wanted to work with 24-bit long pointers you would have to emulate that behavior with the segment registers and it was a hassle.

            As an application programmer it wasn't much different from 16-bit real mode. Windows 3.x, OS/2 1.x and 16-bit DOS extenders gave you APIs for manipulating the segments (GDT/LDT/etc). You'd say you wanted to allocate a 64KB segment of memory, and it would give you a selector number you could load into your DS or ES register – not fundamentally different from DOS. From an OS programmer's perspective it was more complex, of course.

            It was true that with 16-bit real mode you could relatively easily acquire >64KB of contiguous memory, in 16-bit protected mode that was more difficult to come by. (Although the OS could allocate you adjacent selector numbers–but I'm not sure if 16-bit Windows / OS/2 / etc actually offered that as an option.)

            That said, 16-bit GDTs/LDTs have a relatively sensible logical structure. Their 32-bit equivalents were a bit of a mess due to backward compatibility requirements (the upper bits of the base and limit being stored in separate fields from the lower bits). And while the 386 has a much richer feature set than the 286, those added features bring a lot of complexity that 286 OS developers didn't need to worry about, even if you try your hardest (as contemporary OSes such as Linux and 32-bit Windows do) to avoid the 386's more esoteric features (such as hardware task switching and call gates).

    • ramses0 39 days ago
      Excellent summary, add "Water Computers" to the mix for completeness. https://www.ultimatetechnologiesgroup.com/blog/computer-ran-...
    • Bluestein 43 days ago
      > Transmeta

      Whatever happened to them ...

      They had a somewhat serious go at being "third wheel" back in the early 2000s, mid 1990s?

        PS. Actually considered getting a Crusoe machine back in the day ...
      • runjake 42 days ago
        They released a couple processors with much lower performance than the market expected, shut that down, started licensing their IP to Intel, Nvidia, and others, and then got acquired.
      • sliken 39 days ago
        They had a great, promising plan, and Intel was focused entirely on the Pentium 4, which had high clocks for bragging rights, a long pipeline (related to the high clocks), and high power usage.

        However, between Transmeta's idea and shipping a product, Intel's Israel team came up with the Intel Core series: MUCH more energy efficient, much better performance per clock, and ideal for lower-power platforms like laptops.

        Sadly, Transmeta no longer had a big enough advantage, sales decreased, and I heard many of the engineers ended up at Nvidia, which did use some of their ideas in an Nvidia product.

      • em-bee 39 days ago
        i did get a sony picturebook with a transmeta processor. the problem was that as a consumer i didn't notice anything special about it. for transmeta to make it they would have had to either be cheaper or faster or use less power to be attractive for consumer devices.
    • mass_and_energy 39 days ago
      The use of the Cell processor in the PlayStation 3 was an interesting choice by Sony. It was the perfect successor to the PS2's VU0 and VU1, so if you were a game developer coming from the PS2 space and were well-versed in the concept of "my program's job is to feed the VUs", you could scale that knowledge up to keep the cores of the Cell working. The trick seems to be in managing synchronization between them all.
    • lormayna 42 days ago
      Why didn't the Cell processor have success in AI/DL applications?
  • jecel 40 days ago
    "Computer architecture" is used in several different ways and that can lead to some very confusing conversations. Your proposed stack has some of this confusion. Some alternative terms might help:

    "computational model": finite state machine, Turing machine, Petri nets, data-flow, stored program (a.k.a. Von Neumann, or Princeton), dual memory (a.k.a. Harvard), cellular automata, neural networks, quantum computers, analog computers for differential equations

    "instruction set architecture": ARM, x86, RISC-V, IBM 360

    "instruction set style": CISC, RISC, VLIW, MOVE (a.k.a TTA - Transport Triggered Architecture), Vector

    "number of addresses": 0 (stack machine), 1 (accumulator machine), 2 (most CISCs), 3 (most RISCs), 4 (popular with sequential memory machines like Turing's ACE or the Bendix G-15)

    "micro-architecture": single cycle, multi-cycle, pipelines, super-pipelined, out-of-order

    "system organization": distributed memory, shared memory, non uniform memory, homogeneous, heterogeneous

    With these different dimensions for "computer architecture" you will have different answers for which was the weirdest one.
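
    To make the "number of addresses" dimension concrete, here is a toy 0-address (stack) machine evaluating d = a + b*c; a 1-address machine would do the same through a single accumulator, a 2-address machine with destructive register operations, and a 3-address machine with one instruction per operation. The opcode names are invented for illustration:

      #include <stdio.h>

      /* Toy 0-address (stack) machine: operands are implicit, always the top of the stack. */
      enum op { PUSH, ADD, MUL, HALT };
      struct insn { enum op op; int imm; };

      static int run(const struct insn *p) {
          int stack[16], sp = 0;
          for (;; p++) {
              switch (p->op) {
              case PUSH: stack[sp++] = p->imm; break;
              case ADD:  sp--; stack[sp - 1] += stack[sp]; break;
              case MUL:  sp--; stack[sp - 1] *= stack[sp]; break;
              case HALT: return stack[sp - 1];
              }
          }
      }

      int main(void) {
          /* d = a + b*c with a=2, b=3, c=4: push a, push b, push c, MUL, ADD */
          struct insn prog[] = {
              {PUSH, 2}, {PUSH, 3}, {PUSH, 4}, {MUL, 0}, {ADD, 0}, {HALT, 0}
          };
          printf("d = %d\n", run(prog));   /* prints d = 14 */
          return 0;
      }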

  • defrost 42 days ago
    Setun: a three-valued (ternary) logic computer instead of the common binary: https://en.wikipedia.org/wiki/Setun

    Not 'weird' but any architecture that doesn't have an 8-bit byte causes questions and discussion.

    E.g. the Texas Instruments DSP chip family for digital signal processing: they're all about deeply pipelined FFT computations with floats and doubles, not piddling about with 8-bit ASCII .. there are no hardware-level bit operations to speak of, and the smallest addressable memory size is either 32 or 64 bits.

  • mikewarot 43 days ago
    BitGrid is my hobby horse. It's a Cartesian grid of cells, each a 4-bit-in, 4-bit-out LUT (look-up table), latched in alternating phases to eliminate race conditions.

    It's the response to the observation that most of the transistors in a computer are idle at any given instant.

    There's a full rabbit hole's worth of advantages to this architecture once you really dig into it.

    Description https://esolangs.org/wiki/Bitgrid

    Emulator https://github.com/mikewarot/Bitgrid
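
    A minimal toy sketch of the idea; the wiring conventions and the single "wire" program below are placeholders for illustration, not taken from the actual emulator:

      #include <stdint.h>
      #include <stdio.h>

      #define W 4
      #define H 4

      /* Each cell: a 16-entry LUT mapping its 4 input bits (one from each
         neighbor) to 4 output bits (one toward each neighbor).
         Bit layout used here: 0 = north, 1 = east, 2 = south, 3 = west. */
      typedef struct {
          uint8_t lut[16];   /* low 4 bits of each entry are the outputs */
          uint8_t out;       /* latched 4-bit output */
      } cell;

      static cell grid[H][W];

      /* Gather a cell's 4 input bits from its neighbors' latched outputs (edges read 0). */
      static uint8_t inputs(int y, int x) {
          uint8_t in = 0;
          if (y > 0)     in |= ((grid[y-1][x].out >> 2) & 1) << 0; /* north cell's S output */
          if (x < W - 1) in |= ((grid[y][x+1].out >> 3) & 1) << 1; /* east cell's W output  */
          if (y < H - 1) in |= ((grid[y+1][x].out >> 0) & 1) << 2; /* south cell's N output */
          if (x > 0)     in |= ((grid[y][x-1].out >> 1) & 1) << 3; /* west cell's E output  */
          return in;
      }

      /* One half-step: latch new outputs for cells of one checkerboard color only,
         so every cell being updated reads only cells that are stable this phase. */
      static void half_step(int phase) {
          for (int y = 0; y < H; y++)
              for (int x = 0; x < W; x++)
                  if (((x + y) & 1) == phase)
                      grid[y][x].out = grid[y][x].lut[inputs(y, x)] & 0x0F;
      }

      int main(void) {
          /* Program every cell as a "wire": copy the north input to the south output. */
          for (int y = 0; y < H; y++)
              for (int x = 0; x < W; x++)
                  for (int in = 0; in < 16; in++)
                      grid[y][x].lut[in] = (uint8_t)((in & 1) << 2);

          grid[0][0].out = 1 << 2;            /* inject a pulse at the top of column 0 */
          for (int t = 0; t < 4; t++) {
              half_step(1); half_step(0);     /* one full two-phase step */
              for (int y = 0; y < H; y++)     /* watch the pulse ripple down column 0 */
                  putchar((grid[y][0].out & (1 << 2)) ? '1' : '0');
              putchar('\n');
          }
          return 0;
      }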

  • PeterStuer 38 days ago
    In the '80s our lab lobbied the university to get a CM-1. We failed and they got a Cray instead. The Connection Machine was a really different architecture aimed at massively parallel execution: https://en.wikipedia.org/wiki/Connection_Machine
  • sshine 39 days ago
    These aren't implemented in hardware, but they're examples of esoteric architectures:

    zk-STARK virtual machines:

    https://github.com/TritonVM/triton-vm

    https://github.com/risc0/risc0

    They're "just" bounded Turing machines with extra cryptography. The VM architectures have been optimized for certain cryptographic primitives so that you can prove properties of arbitrary programs, including the cryptographic verification itself. This lets you e.g. play turn-based games where you commit to make a move/action without revealing it (cryptographic fog-of-war):

    https://www.ingonyama.com/blog/cryptographic-fog-of-war

    The reason why this requires a specialised architecture is that in order to prove something about the execution of an arbitrary program, you need to arithmetize the entire machine (create a set of equations that are true when the machine performs a valid step, where these equations also hold for certain derivatives of those steps).
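
    As a toy illustration of what "arithmetize the machine" means (not how Triton VM or RISC Zero actually do it): take a trivial VM whose only step is "add 1 to the accumulator", record its execution trace, and check that every adjacent pair of rows satisfies the transition constraint next - cur - 1 = 0 over a prime field. A real STARK turns constraints like this into polynomial identities and proves them succinctly; here we just check them directly:

      #include <stdint.h>
      #include <stdio.h>

      #define P 97u   /* a toy prime field, just for illustration */

      /* The toy "increment VM": one register, one step = add 1 (mod P). */
      static uint32_t step(uint32_t acc) { return (acc + 1) % P; }

      /* Transition constraint: rows i and i+1 form a valid step iff
         (next - cur - 1) mod P == 0. "Arithmetization" means expressing
         "the machine took a legal step" purely as field equations like this. */
      static int constraint_holds(uint32_t cur, uint32_t next) {
          return (next + P - ((cur + 1) % P)) % P == 0;
      }

      int main(void) {
          uint32_t trace[8];
          trace[0] = 5;                       /* boundary constraint: the initial value */
          for (int i = 1; i < 8; i++) trace[i] = step(trace[i - 1]);

          int valid = 1;
          for (int i = 0; i + 1 < 8; i++)
              valid &= constraint_holds(trace[i], trace[i + 1]);

          printf("trace valid: %s\n", valid ? "yes" : "no");
          return 0;
      }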

  • amy-petrik-214 42 days ago
    There was some interesting funk in the '80s: the Lisp machine: https://en.wikipedia.org/wiki/Lisp_machine (these were very hot in 1980s-era AI), and the Connection Machine: https://en.wikipedia.org/wiki/Connection_Machine (a gorillion one-bit processors in a supercluster)

    let us also not forget The Itanic

  • jy14898 39 days ago
    Transputer

    > The name, from "transistor" and "computer", was selected to indicate the role the individual transputers would play: numbers of them would be used as basic building blocks in a larger integrated system, just as transistors had been used in earlier designs.

    https://en.wikipedia.org/wiki/Transputer

    • jy14898 39 days ago
      Weird for its time, not so much today
  • mikewarot 42 days ago
    I thought magnetic logic was an interesting technology when I first heard of it. It's never going to replace semiconductors, but if you want to compute on the surface of Venus, you just might be able to make it work there.

    The basic limits are the Curie point of the cores and the source of the clock drive signals.

    https://en.m.wikipedia.org/wiki/Magnetic_logic

    • mikewarot 42 days ago
      Vacuum tubes would be the perfect thing to generate the clock pulses, as they can be made to withstand the temperatures, vibrations, etc. I'm thinking a nuclear reactor to provide heat via thermopiles might be the way to power it.

      However... it's unclear how thermally conductive the "atmosphere" there is; it might make heat engines unworkable, no matter how powerful.

      • dann0 40 days ago
        The AMULET project built asynchronous versions of the ARM microprocessor. Maybe one could design away the clock like these do? https://en.wikipedia.org/wiki/AMULET_(processor)
        • mikewarot 40 days ago
          In the case of magnetic logic, the multi-phase clock IS the power supply. Vacuum tubes are quite capable of operating for years in space, if properly designed. I assume the same could be done for the elevated pressures and temperatures on the surface of Venus. As long as you can keep the cathode significantly hotter than the anode, to drive thermionic emission in the right direction, that is.
  • drakonka 40 days ago
    This reminds me of a talk I went to at the 2020 ALIFE conference, in which the speaker presented an infinitely-scalable architecture called the "Movable Feast Machine". He suggested relinquishing hardware determinism - the hardware can give us wrong answers and the software has to recover, and in some cases the hardware may fail catastrophically. The hardware is a series of tiles with no CPU. Operations are local and best-effort, determinism not guaranteed. The software then has to reconcile that.

    It was quite a while ago and my memory is hazy tbh, but I put some quick notes here at the time: https://liza.io/alife-2020-soft-alife-with-splat-and-ulam/

    • theideaofcoffee 39 days ago
      I was hoping someone was going to mention Dave Ackley and the MFM. It has really burrowed down into my mind and I start to see applications of it even when I'm not looking out for it. It really is a mindfuck and I wish it were a bit more top of mind for people. I really think it will be useful when computation becomes even more ubiquitous than it is now, when we'll have to think even more about failure and make it a first-class citizen.

      Though I think it will be difficult to shift the better-performance-at-all-costs mindset toward something like this. For almost every application, you'd probably be better off worrying about integrity than raw speed.

    • sitkack 39 days ago
      Dave Ackley

      Now working on the T2 Tile Project https://www.youtube.com/@T2TileProject

  • ithkuil 43 days ago
    CDC 6000 was a barrel processor: https://en.m.wikipedia.org/wiki/Barrel_processor

    Mill CPU (so far only patent-ware, but interesting nevertheless): https://millcomputing.com/

  • dwrodri 38 days ago
    If you really want to see some esoteric computer architecture ideas, check out Mill Computing: https://millcomputing.com/wiki/Architecture. I don't think they've etched any of their designs into silicon, but very fascinating ideas nonetheless.
  • nailer 43 days ago
    The giant global computers that are Solana mainnet / devnet / testnet. The programs are compiled from Rust into (slightly tweaked) eBPF binaries, and state updates every 400ms, using VDFs to sync clocks between the leaders that are allowed to update state.
  • muziq 43 days ago
    The Apple 'Scorpius' thing they bought the Cray in the '80s to emulate: RISC, multi-core, but it could put all the cores in lockstep to operate as pseudo-SIMD. Or failing that, the 32-bit 6502 successor, the MCS65E4: https://web.archive.org/web/20221029042214if_/http://archive...
  • AstroJetson 39 days ago
    Huge fan of the Burroughs Large Systems Stack Machines.

    https://en.wikipedia.org/wiki/Burroughs_Large_Systems

    They had an attached scientific processor to do vector and array computations.

    https://bitsavers.org/pdf/burroughs/BSP/BSP_Overview.pdf

  • CalChris 39 days ago
    Intel's iAPX 432, begun in 1975. Instructions were bit-aligned; it was stack-based, with 32-bit operations, segmentation, capabilities, .... It was so late and slow that the 16-bit 8086 was created.

    https://en.wikipedia.org/wiki/Intel_iAPX_432

    • wallstprog 39 days ago
      I thought it was a brilliant design, but it was dog-slow on the hardware of the time. I keep hoping someone will revive the design for current silicon; it would be a good impedance match for modern languages and OSes.
  • mac3n 43 days ago
    FPGA: non-sequential programming

    Lightmatter: matrix multiply via optical interferometers

    Parametron: coupled oscillator phase logic

    rapid single flux quantum logic: high-speed pulse logic

    asynchronous logic

    https://en.wikipedia.org/wiki/Unconventional_computing

  • yen223 43 days ago
    A lot of things are Turing-complete. The funniest one to me is PowerPoint slides.

    https://beza1e1.tuxen.de/articles/accidentally_turing_comple...

    https://gwern.net/turing-complete

    • jasomill 42 days ago
      I prefer the x86 MOV instruction:

      https://web.archive.org/web/20210214020524/https://stedolan....

      Removing all but the mov instruction from future iterations of the x86 architecture would have many advantages: the instruction format would be greatly simplified, the expensive decode unit would become much cheaper, and silicon currently used for complex functional units could be repurposed as even more cache. As long as someone else implements the compiler.
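
      The core trick behind mov-only computation (per the linked paper, and the M/o/Vfuscator compiler that later implemented the joke) is that memory addressing gives you selection: a "conditional" becomes an unconditional load whose address depends on data. A rough C rendering of the idea:

        #include <stdio.h>

        /* "x = cond ? a : b" with no branch: the condition (assumed to be 0 or 1)
           just indexes a two-entry table. A mov-only compiler applies this flavor
           of transformation everywhere, turning control flow into data-dependent
           addressing. */
        static int select_by_mov(int cond, int a, int b) {
            int table[2];
            table[0] = b;          /* mov [table+0], b            */
            table[1] = a;          /* mov [table+4], a            */
            return table[cond];    /* mov result, [table+cond*4]  */
        }

        int main(void) {
            printf("%d %d\n", select_by_mov(1, 10, 20), select_by_mov(0, 10, 20));
            return 0;
        }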

  • metaketa 42 days ago
    HVM, using interaction nets as an alternative to Turing computation, deserves a mention. Google: HigherOrderCompany
  • mbfg 38 days ago
    I've got to believe x86 is in the running. We don't think of it because it is the dominant architecture, but it's kind of crazy.
  • 0xdeadbeer 43 days ago
    I heard of counter machines on Computerphile https://www.youtube.com/watch?v=PXN7jTNGQIw
  • BarbaryCoast 38 days ago
    Look at the earliest computers, that is, those around the time of ENIAC. Most were electro-mechanical; some were entirely relay machines. I believe EDSAC was the first practical stored-program _electronic_ digital computer.

    As for weird, try this: ENIAC instructions modified themselves. Back then, an "instruction" (they called them "orders") included the addresses of the operands and destination (which was usually the accumulator). So if you wanted to sum the numbers in an array, you'd put the address of the first element in the instructions, and as ENIAC repeated that instruction (a specified number of times), the address in the instruction would be auto-incremented.

    Or how about this: a computer with NO 'jump' or 'branch' instruction? The ATLAS-1 was a landmark of computing, having invented most of the things we take for granted now, like virtual memory, paging, and multi-programming. But it had NO instruction for altering the control flow. Instead, the programmer would simply _write_ to the program counter (PC). Then the next instruction would be fetched from the address in the PC. If the programmer wanted to return to the previous location (a "subroutine call"), they'd be obligated to save what was in the PC before overwriting it. There was no stack, unless you count a programmer writing the code to save a specific number of PC values, and adding code to all subroutines to fetch the old value and restore it to the PC. I do admire the simplicity -- want to run code at a different address? Tell me what it is and I'll just go there, no questions asked.
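
    A sketch of what a calling convention looks like on such a machine: with no jump or call instructions and no stack, the caller saves the return address in a known place and then writes the subroutine's address into the PC; the subroutine "returns" by writing that saved value back into the PC. The toy instruction set below is invented for illustration:

      #include <stdio.h>

      /* A toy machine with no jump instruction: the only way to change
         control flow is to write to the PC, which is just memory cell 0. */
      enum { PC = 0, LINK = 1, ACC = 2 };            /* well-known cells */
      enum op { SET, COPY, ADD, PRINT, HALT };
      struct insn { enum op op; int dst, src; };

      static int mem[16];

      static void run(const struct insn *prog) {
          mem[PC] = 0;
          for (;;) {
              struct insn i = prog[mem[PC]++];       /* fetch from wherever PC points */
              switch (i.op) {
              case SET:   mem[i.dst] = i.src;          break;
              case COPY:  mem[i.dst] = mem[i.src];     break;
              case ADD:   mem[i.dst] += mem[i.src];    break;
              case PRINT: printf("%d\n", mem[i.dst]);  break;
              case HALT:  return;
              }
          }
      }

      int main(void) {
          /* The subroutine at address 5 doubles ACC, then "returns" by copying
             the saved return address back into the PC. The caller saves that
             return address by hand: that is the whole calling convention. */
          struct insn prog[] = {
              /* 0 */ {SET,   ACC,  21},
              /* 1 */ {SET,   LINK,  3},   /* save the return address by hand     */
              /* 2 */ {SET,   PC,    5},   /* "call": just write to the PC        */
              /* 3 */ {PRINT, ACC,   0},   /* execution resumes here afterwards   */
              /* 4 */ {HALT,  0,     0},
              /* 5 */ {ADD,   ACC, ACC},   /* subroutine body: ACC += ACC         */
              /* 6 */ {COPY,  PC,  LINK},  /* "return": write saved address to PC */
          };
          run(prog);                       /* prints 42 */
          return 0;
      }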

    Or maybe these shouldn't count as "weird", because no one had yet figured out what a computer should be. There was no "standard" model (despite Von Neumann) for the design of a machine, and cost considerations plus new developments (spurred by wanting better computers) meant that the "best" design was constantly changing.

    Consider that post-WWII, some materials were hard to come by. So much so that one researcher used a Slinky (yes, the toy) as a memory storage device. And had it working. They wanted acoustic delay lines (the standard of the time), but the Slinky was more available. So it did the same job, just with a different medium.

    I've spent a lot of time researching these early machines, wanting to trace the path of each item in the now-standard model of an idealized computer. It's full of twists and turns, dead ends and unintentional invention.

  • GistNoesis 39 days ago
    https://en.wikipedia.org/wiki/Unconventional_computing

    There is also soap bubble computing, and various forms of annealing computing (like quantum annealing or adiabatic quantum computation), where you encode your computation as the optimal state of a physical system you can define.

  • elkekeer 39 days ago
    The multi-core Propeller processor by Parallax (https://en.wikipedia.org/wiki/Parallax_Propeller), in which the cores (called cogs) all run concurrently but take turns accessing the shared hub in round-robin fashion: first the first cog gets its slot, then the second, then the third, etc.
  • variadix 39 days ago
    From the creator of Forth https://youtu.be/0PclgBd6_Zs

    144 small computers in a grid that can communicate with each other

  • vismit2000 40 days ago
    How about a water computer? https://youtu.be/IxXaizglscw
  • Joker_vD 39 days ago
    IBM 1401. One of the weirdest ISAs I've ever read about, with basically human readable machine code thanks to BCD.
    • jonjacky 39 days ago
      Yes, it had a variable word length - a number was a string of decimal digits of any length, with a terminator at the end, kind of like a C character string.

      Machine code including instructions and data was all printable characters, so you could punch an executable program right onto a card, no translation needed. You could put a card in the reader, press a button, and the card image would be read into memory and executed, no OS needed. Some useful utilities -- list a card deck on the printer, copy a card deck to tape -- fit on a single card.

      https://en.wikipedia.org/wiki/IBM_1401
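
      A rough illustration of that variable-length decimal arithmetic (the real 1401 marked word boundaries with a word-mark bit on each storage position rather than a sentinel byte; the C-string-style terminator here is just to show the analogy):

        #include <stdio.h>

        /* A "word" is however many decimal digits you give it, ended by a
           terminator. Digits are stored least-significant first so the
           carry loop stays simple. */
        static void add_decimal(const char *a, const char *b, char *out) {
            int carry = 0, i = 0, j = 0, k = 0;
            while (a[i] || b[j] || carry) {
                int s = carry;
                if (a[i]) s += a[i++] - '0';
                if (b[j]) s += b[j++] - '0';
                out[k++] = (char)('0' + s % 10);
                carry = s / 10;
            }
            out[k] = '\0';
        }

        int main(void) {
            char sum[64];
            add_decimal("599", "52", sum);   /* 995 + 25, least-significant digit first */
            printf("%s\n", sum);             /* prints 0201, i.e. 1020 */
            return 0;
        }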

  • jareklupinski 39 days ago
    Physical Substrate: Carbon / Water / Sodium

    Computation Theory: Cognitive Processes

    Smallest parts: Neurons

    Largest parts: Brains

    Lowest level language: Proprietary

    First abstractions of programming: Bootstrapped / Self-learning

    Software architecture: Maslow's Theory of Needs

    User Interface: Sight, Sound

    • theandrewbailey 39 days ago
      The big problem is that machines built using these technologies tend to be unreliable. Sure, they are durable, self-repairing, and can run for decades, but they can have a will of their own. While loading a program, there is a non-zero chance that the machine will completely ignore the program and tell you to go f*** yourself.
  • trealira 42 days ago
    The ENIAC, the first computer, didn't have assembly language. You programmed it by fiddling with circuits and switches. Also, it didn't use binary integers, but decimal ones, with 10 vacuum tubes to represent the digits 0-9.
  • RecycledEle 41 days ago
    Using piloted pneumatic valves as logic gates blew my mind.

    If you are looking for strangeness, microcontrollers from the 1990s to early 2000s had I/O ports, but every single I/O port was different. None of them had a standard so that you could (for example) plug in a 10-pin header and connect the same peripheral to any of the I/O ports on a single microcontroller, much less any microcontroller in a family of microcontrollers.

  • supercoffee 39 days ago
    I'm fascinated by the mechanical fire control computers of WW2 battleships.

    https://arstechnica.com/information-technology/2020/05/gears...

  • sshb 39 days ago
    This unconventional computing magazine comes to mind: http://links-series.com/links-series-special-edition-1-uncon...

    Compute with mushrooms, compute near black holes, etc.

  • t312227 39 days ago
    hello,

    great collection of interesting links - kudos to all! :=)

    idk ... but isn't the "general" architecture of most of our computers "von neumann"!?

    * https://en.wikipedia.org/wiki/Von_Neumann_architecture

    but what i miss from the various lists is the "transputer" architecture / ecosystem from INMOS - a concept of heavily networked arrays of small cores from the 1980s

    about transputers

    * https://en.wikipedia.org/wiki/Transputer

    about INMOS

    * https://en.wikipedia.org/wiki/Inmos

    i had the opportunity to take a look at a "real life" ATW - atari transputer workstation - back in the day at my university / CS department :))

    mainly used with the Helios operating-system

    * https://en.wikipedia.org/wiki/HeliOS

    to be programmed in occam

    * https://en.wikipedia.org/wiki/Occam_(programming_language)

    the "atari transputer workstation" ~ more or less a "smaller" atari mega ST as the "host node" connected to an (extendable) array of extension-cards containing the transputer-chips:

    * https://en.wikipedia.org/wiki/Atari_Transputer_Workstation

    just my 0.02€

    • madflame991 39 days ago
      > but isn't the "general" architecture of most of our computers "von neumann"!?

      That's something I was also curious about and it turns out Arduinos use the Harvard architecture. You might say Arduinos are not really "computers" but after a bit of googling I found an Apple II emulator running on an Arduino and, well, an Apple II is generally accepted to be a computer :)

  • dsr_ 39 days ago
    There are several replacements for electronic logic; some of them have even been built.

    https://en.wikipedia.org/wiki/Logic_gate#Non-electronic_logi...

  • osigurdson 39 days ago
    I'm not sure what the computer architecture was, but I recall the engine controller for the V22 Osprey (AE1107) used odd formats like 11 bit floating point numbers, 7 bit ints, etc.
    • CoastalCoder 39 days ago
      Why past tense? Does the Osprey no longer use that engine or computer?
  • joehosteny 39 days ago
    The PipeRench run-time reconfigurable FPGA out of CMU:

    https://research.ece.cmu.edu/piperench/

  • solardev 43 days ago
    Analog computers, quantum computers, light based computers, DNA based computers, etc.
  • dongecko 42 days ago
    Motorola used to have a one bit microprocessor, the MC14500B.
  • ksherlock 42 days ago
    The tinker toy computer doesn't even use electricity.
  • 29athrowaway 39 days ago
    The Soviet Union's water integrator: an analog, water-based computer for solving partial differential equations.

    https://en.m.wikipedia.org/wiki/Water_integrator

  • ranger_danger 39 days ago
    9-bit bytes, 27-bit words... middle endian.

    https://dttw.tech/posts/rJHDh3RLb

  • jacknews 39 days ago
    Of course there are things like the molecular mechanical computers proposed/popularised by Eric Drexler etc.

    I think Transport-triggered architecture (https://en.wikipedia.org/wiki/Transport_triggered_architectu...) is something still not fully explored.

  • gjvc 43 days ago
    Rekursiv