The Therac-25 Incident (2021)

(thedailywtf.com)

257 points | by lemper 7 hours ago

35 comments

  • benrutter 6 hours ago
    > software quality doesn't appear because you have good developers. It's the end result of a process, and that process informs both your software development practices, but also your testing. Your management. Even your sales and servicing.

    If you only take one thing away from this article, it should be this one! The Therac-25 incident is a horrifying and important part of software history; it's really easy to think type systems, unit testing and defensive coding can solve all software problems. They definitely can help a lot, but the real failure in the story of the Therac-25, from my understanding, is that it took far too long for incidents to be reported, investigated and fixed.

    There was a great Cautionary Tales podcast about the device recently[0]. One thing mentioned was that, even aside from the catastrophic accidents, Therac-25 machines were routinely seen by users to show unexplained errors, but these issues never made it to the desk of someone who might fix them.

    [0] https://timharford.com/2025/07/cautionary-tales-captain-kirk...

    • ChrisMarshallNY 1 hour ago
      I worked for a company that manufactured some of the highest-Quality photographic and scientific equipment that you can buy. It was expensive as hell, but our customers seemed to think it was worth it.

      > It's the end result of a process

      In my experience, it's even more than that. It's a culture.

      • franktankbank 1 hour ago
        A culture of high-quality engineering, no doubt. Made up of: high quality engineers!
        • ChrisMarshallNY 1 hour ago
          Yes, but some of them were the most stubborn bastards I've ever worked with.
        • anonymars 1 hour ago
          Isn't that exactly the opposite of the point being made?

          > software quality doesn't appear because you have good developers

          • ChrisMarshallNY 1 hour ago
            Not really.

            Good developers are a necessary ingredient of a much larger recipe.

            People think that a good process means you can toss in crap developers, or that great developers mean that you can have a bad process.

            I worked for a 100-year-old Japanese engineering company that had a decades-long culture of Quality. People stayed at that company for their entire career, and most of them were top-shelf people. They had entire business units dedicated to process improvement and QA.

            It was a combination of good talent, good process, and good culture. If any one of them sucks, so does the product.

    • vorgol 5 hours ago
      I was going to recommend that exact podcast episode but you beat me to it. Totally worth listening to, especially if you're interested in software bugs.

      Another interesting fact mentioned in the podcast is that the earlier (manually operated) version of the machine did have the same fault. But it also had a failsafe fuse that blew so the fault never materialized. Excellent demonstration of the Swiss Cheese Model: https://en.wikipedia.org/wiki/Swiss_cheese_model

      • bell-cot 2 hours ago
        >> the real failure in the story of the Therac-25 from my understanding, is that it took far too long for incidents to be reported, investigated and fixed.

        > the earlier (manually operated) version of the machine did have the same fault. But it also had a failsafe fuse that blew so the fault never materialized.

        #1 virtue of electromechanical failsafes is that their conception, design, implementation, and failure modes tend to be orthogonal to those of the software. One of the biggest shortcomings of Swiss Cheese safety thinking is that you too often end up using "neighbor slices from the same wheel of cheese".

        #2 virtue of electromechanical failsafes is that running into them (the fuse blew, or whatever) is usually more difficult for humans to ignore. Or at least it's easier to create processes and do training that actually gets the errors reported up the chain. (Compared to software - where the worker bees all know you gotta "ignore, click 'OK', retry, reboot" all the time, if you actually want to get anything done.)

        But, sadly, electromechanical failsafes are far more expensive than "we'll just add some code to check that" optimism. And PHBs all know that picking up nickels in front of the steamroller is how you get to the C-suite.

    • pjmlp 2 hours ago
      The worst part is that many developers think that because they don't work on high-integrity systems, such quality levels don't apply to them.

      Wrong: any software failure can have huge consequences for someone's life, or for a company, by preventing some critical flow from taking place, corrupting data in someone's personal, professional or medical records, blocking a payment for goods that had to be acquired at that moment or never,....

      • ozim 1 hour ago
        Hey don’t blame developers.

        It is the business that requests features ASAP to cut costs, and then there are customers who don't want to pay for „ideal software" but rather want every piece of software for free.

        Most devs and QA workers I know want to deliver best quality software and usually are gold plating stuff anyway.

        • pjmlp 1 hour ago
          Being a real Software Engineer, one of those who actually hold the proper title, possibly with the final examination, means being able to deliver the best product within the given set of constraints.

          Also, speaking out when the train is visibly heading for a wall.

    • AdamN 5 hours ago
      This is true, but there also need to be good developers. It can't just be great process and low-quality developer practices. There need to be: 1/ high-quality individual processes (development being one of them), 2/ high-quality delivery mechanisms, 3/ feedback loops to improve that quality, 4/ out-of-band mechanisms to inspect and improve the quality.
      • Fr3dd1 5 hours ago
        I would argue that a good process always has a good self-correction mechanism built in. This way, the work done by a "low quality" software developer (and that includes almost all of us at some point in time) is always taken into account by the process.
        • quietbritishjim 5 hours ago
          Right, but if everyone is low quality then there's no one to do that correction.

          That may seem a bit hypothetical but it can easily happen if you have a company that systematically underpays, which I'm sure many of us don't need to think hard to imagine, in which case they will systematically hire poor developers (because those are the only ones that ever applied).

          • ZaoLahma 4 hours ago
            Replace the "hire poor developers" with "use LLM driven development", and you have the rough outline for a perfect Software Engineering horror movie.

            It used to be that the poor performers (dangerous hip-shootin' code commitin' cowpokes) were limited in the amount of code that they could produce per time unit, leaving enough time for others to correct course. Now the cowpokes are producing ridiculous amounts of code that you just can't keep up with.

          • pjmlp 2 hours ago
            The correction is done by the "lucky" souls doing the onsite, customer facing roles, for the offshoring delivery. Experience from a friend....
          • anal_reactor 4 hours ago
            Sad truth is that the average dev is average, but it's not polite to say this out loud. This is particularly important at scale - when you are big tech, at some point you hit a wall and no matter how much you pay you can't attract any more good devs, simply because all the good devs are already hired. This means that corporate processes must be tailored for the average dev, and exceptional devs can only exist in start-ups (or hermetically closed departments). The side effect of that is that the whole job market promotes the skill of fitting into a corporate environment over the skill of programming. So as a junior dev, for me it makes much more sense to learn how to promote my visibility during useless meetings, rather than learn a new technology. And that's how the bar keeps getting lower.
            • ozim 1 hour ago
              Huh, sad truth?

              But average construction worker is also average and average doctor as well.

              The world cannot be running on the „best of the best" - just wrap your head around the fact that the whole economy and most human activities are run by average people doing average stuff.

        • franktankbank 1 hour ago
          The process that makes this work would be so onerous to create. Do you think you could do this to make a low-quality machinist able to build a high-quality technical part? What would it look like? Quite a lot like machine code, which doesn't really reduce the requirements, does it? It just shifts the onerous requirement somewhere else.
        • rcxdude 3 hours ago
          This only works with enough good developers involved in the process. I've seen how the sausage is made, and code quality is often shockingly low in these applications, just in ways that don't set off the metrics (or they do, but they can bend the process to wave them away). Also, the process often makes it very hard to fix latent problems in the software, so it rarely gets better over time, either.
        • varjag 3 hours ago
          My takeaway from observing different teams over the years is that talent is by a huge margin the most important component. Throw a team of A performers together and it really doesn't matter what process you make them jump through. This is how a waterfall team got mankind to the Moon with handwoven core memory but an agile team 10x the size can't fix the software for a family car.
          • scott_w 2 hours ago
            You conflated, misrepresented and simply ignored so many things in your statement that I really don’t know where to start rebutting it. I’d say at least compare SpaceX to NASA with space exploration but, even then, I doubt you have anywhere near enough knowledge of both programmes to be able to properly analyse, compare and contrast to back up your claim. Hell, do you even know if SpaceX or Tesla are even using an agile methodology for their system development? I know I don’t.

            That’s not to say talent is unimportant, however, I’d need to see some real examples of high talent, no process, teams compared to low talent, high process, teams, then some mixture of the groups to make a fair statement. Even then, how do you measure talent? I think I’m talented but I wouldn’t be surprised to learn others think I’m an imbecile who only knows Python!

            • varjag 1 hour ago
              > Hell, do you even know if SpaceX or Tesla are even using an agile methodology for their system development?

              What I've been saying is methodology is mostly irrelevant, not that waterfall is specifically better than agile. Talent wins over the process but I can see how this idea is controversial.

              > I’d need to see some real examples of high talent, no process, teams compared to low talent, high process, teams, then some mixture of the groups to make a fair statement. Even then, how do you measure talent?

              Yep, even if I made it my life's mission to run a formal study on programmer productivity (which I clearly won't) that wouldn't save the argument from nitpicking.

    • 0xDEAFBEAD 2 hours ago
      Honestly I wish instead of the Therac-25, we were discussing a system which made use of unit testing and defensive coding, yet still failed. That would be more educational. It's too easy to look at the Therac-25 and think "I would never write a mess like that".
      • roeles 1 hour ago
        One instance that crosses my mind often is the Airbus A320 incident at Hamburg in 2008. Everything was done right there, but the requirements were wrong.

        Despite all the procedures and tests, the software still managed to endanger the lives of the passengers.

      • wat10000 2 hours ago
        The lesson is not to write a mess like that. It might seem obvious, but it has to be learned.
    • sonicggg 3 hours ago
      Not sure why the article is focusing so much on software development. That was just a piece of the problem. The entire product had design flaws. When the FDA got involved, the company wasn't just told to make software updates.
      • speed_spread 2 hours ago
        Yet it doesn't take much to swamp a team of good developers. A poorly defined project, mismatched requirements, sent to production too early and then put in support mode with no time planned to plug the holes... There's only so much smart technicians can do when the organization is broken.
    • aaron695 3 hours ago
      [dead]
  • elric 5 hours ago
    One of the commenters on the article wrote this:

    > Throughout the 80s and 90s there was just a feeling in medicine that computers were dangerous <snip> This is why, when I was a resident in 2002-2006 we still were writing all of our orders and notes on paper.

    I was briefly part of an experiment with electronic patient records in an ICU in the early 2000s. My job was to basically babysit the server processing the records in the ICU.

    The entire staff hated the system. They hated having to switch to computers (this was many years pre-iPad and similarly sleek tablets) to check and update records. They were very much used to writing medications (what, when, which dose, etc.) onto bedside charts, which were very easy to consult and very easy to update. Any kind of data loss in those records could have fatal consequences. Any delay in getting to the information could be bad.

    This was *not* just a case of doctors having unfounded "feelings" that computers were dangerous. Computers were very much more dangerous than pen and paper.

    I haven't been involved in that industry since then, and I imagine things have gotten better since, but still worth keeping in mind.

    • jacquesm 4 hours ago
      Now we have Chipsoft, arguably one of the worst players in the entire IT space, which has a near monopoly (around me, anyway) on IT for hospitals. They charge a fortune, produce crap software, and the larger they get the less choice there is for the remainder. It is baffling to me that we should be enabling such hostile players.
      • misja111 4 hours ago
        I worked for them in the early 2000's. There was nothing wrong with the people working there, except for the two founders, a father and son. They were absolutely ruthless. And as so often, that ruthless mentality was what enabled them to gain dominance over the market. I could tell some crazy stories about how they ran the company but better not because it might get me sued. But if you understand Dutch, you can read more about them e.g. here: https://www.quotenet.nl/zakelijk/a41239366/chipsoft-gerrit-h...
      • skinwill 4 hours ago
        Around here we have Epic. If you want a good scare, look up their corporate Willy Wonka-esque jail/campus and their policy of zero remote work.
    • greazy 4 hours ago
      It's still an issue. I've heard stories of EMR systems going down, forcing staff to use pen and paper. It boggles my mind that such systems don't have redundancy.

      These are commercial products being deployed.

      • elric 1 hour ago
        I have a few pet theories of why software in the medical space is so often shitty and insanely expensive. One of them is that working with doctors is often very unpleasant, which makes building software for them unpleasant, which drives up the price. I mean some of the ones I worked with were terribly nice, especially the ICU docs and neurologists, but a large majority of them were major aholes.

        The other theory is that there are so many bureaucratic hoops to jump through in order to make anything in the medical space that no one does it willingly.

        • siva7 1 hour ago
          It's not only the doctors. I have the gut feeling from my previous stint that people who like to work in the medical space are more often than not "difficult".
  • OskarS 4 hours ago
    It's interesting to compare this with the Post Office Scandal in the UK. Very different incidents, but reading this, there is arguably a root assumption in both cases that people made, which is that "the software can't be wrong". For developers, this is a hilariously silly thing, but non-developers looking at it from the outside don't have the capability or training to understand that software can be this fragile. And they look at a situation like the Post Office scandal and think "Either this piece of software we paid millions for and was developed by a bunch of highly trained engineers is wrong, or these people are just ripping us off". Same thing with the Therac-25: this software had worked on previous models, and the rest of the company just had the unspoken assumption that it simply wasn't possible that there was anything wrong with it, so testing it specifically wasn't needed.
    • jwr 3 hours ago
      No, this is not a "hilariously silly thing" for developers. In fact, I'd say that most developers place way too much trust in software.

      I am a developer and whatever software system I touch breaks horribly. When my family wants to use an ATM, they tell me to stand at a distance, so that my aura doesn't break things. This is why I will not get into a self-driving car in the foreseeable future — I think we place far too much confidence in these complex software systems. And yet I see that the overwhelming majority of HN readers are not only happy to be beta-testers for this software as participants in road traffic, but also are happy to get in those cars. They are OK with trusting their life to new, complex, poorly understood and poorly tested software systems, in spite of every other software system breaking and falling apart around them.

      [anticipating immediate common responses: 1) yes, I know that self-driving car companies claim that their cars are statistically safer than human drivers, this is beside the point here. One, they are "safer" largely because they drive so badly that other road participants pay extra attention and accommodate their weirdness, and two, they are still new, complex and poorly understood systems. 2) "you already trust your life to software systems" — again, beside the point, and not quite true, as many software systems are built to have human supervision and override capability (think airplanes), and others are built to strict engineering requirements (think brakes in cars) while self-driving cars are not built that way.]

      • crazygringo 1 minute ago
        > but also are happy to get in those cars. They are OK with trusting their life to new, complex, poorly understood and poorly tested software systems

        Because the alternative isn't bug-free driving -- it's a human being. Who maybe didn't sleep last night, who might have a heart attack while their foot is on the accelerator, who might pull over and try to sexually assault you.

      • pfdietz 2 hours ago
        I wonder if this is a desired outcome of fuzzing, the puncturing of the idea that software doesn't have bugs. This goes all the way back to the very start of fuzzing with Barton Miller's work from ~1990.
    • brazzy 3 hours ago
      > there is arguably a root assumption in both cases that people made, which is that "the software can't be wrong"

      I think in this case, the thought process was based on the experience with older, electro-mechanical machines, where the most common failure mode was parts wearing out.

      Since software can, indeed, not "wear out", someone made the assumption that it was therefore inherently more reliable.

      • balamatom 2 hours ago
        I think the "software doesn't wear out" assumption is just a conceivable excuse for the underlying "we do not question" assumption. A piece of software can be like a beautiful poem, but the kind of software most people are familiar with is more like a whole lot of small automated bureaucracies.

        Bureaucracy being (per Graeber 2006) something like the ritual where by means of a set of pre-fashioned artifacts for each other's sake we all operate at 2% of our normal mental capacities and that's how modern data-driven, conflict-averse societies organize work and distribute resources without anyone being able to have any complaints listened to.

        >Bureaucracies public and private appear—for whatever historical reasons—to be organized in such a way as to guarantee that a significant proportion of actors will not be able to perform their tasks as expected. It also exemplifies what I have come to think of the defining feature of a utopian form of practice, in that, on discovering this, those maintaining the system conclude that the problem is not with the system itself but with the inadequacy of the human beings involved.

        In most places where a computer system is involved in the administration of a public service or something of that caliber, was it a grassroots effort, "hey, computers are cool and awesome, let's see what they change"? No, it's something that's been imposed in the definitive top-down manner of XX century bureaucracies. Remember the cohort of people who used to become stupid the moment a "thinking machine" was powered on within line of sight (before the last uncomputed generation retired and got their excuse to act dumb for the rest of it)? Consider them in view of the literally incomprehensible number of layers that any "serious" piece of software consists of; layers which we're stuck producing more of, when any software professional knows the best kind of software is less of it.

        But at least it saves time and the forest, right? Ironically, getting things done in a bureaucratic context with less overhead than filling out paper forms or speaking to human beings, makes them even easier to fuck up. And then there's the useful fiction of "the software did it" that e.g. "AI agents" thing is trying to productize. How about they just give people a liability slider in the spinup form, eh, but nah.

        Wanna see a miracle? A miracle is when people hype each other into pretending something impossible happened. To the extent user-operated software is involved in most big-time human activities, the daily miracle is how it seems to work well enough for people to be able to pretend it works any good at all. Many more than 3 such cases. But of course remembering the catastrophic mistakes of the past can be turned into a quaint fun-time activity. Building things that empower people to make fewer mistakes, meanwhile, is a little different from building artifacts for non-stop "2% time".

  • NoSalt 5 minutes ago
    Is there a way to get the "gist" of the article, the lesson to be learned without reading the full article? I got to the screaming part and couldn't read any more.
  • isopede 6 hours ago
    I strongly believe that we will see an incident akin to Therac-25 in the near future. With as many people running YOLO mode on their agents as there are, Claude or Gemini is going to be hooked up to some real hardware that will end up killing someone.

    Personally, I've found even the latest batch of agents fairly poor at embedded systems, and I shudder at the thought of giving them the keys to the kingdom to say... a radiation machine.

    • SCdF 5 hours ago
      The Horizon (UK Post Office accounting software) incident killed multiple postmasters through suicide, and bankrupted and destroyed the lives of dozens or hundreds more.

      The core takeaway developers should have from Therac-25 is not that this happens just on "really important" software, but that all software is important, and all software can kill, and you need to always care.

      • hahn-kev 5 hours ago
        From what I've read about that incident I don't know what the devs could have done. The company sure was a problem, but so were the laws basically saying a computer can't be wrong. No dev can solve that problem.
        • V__ 3 hours ago
          > Engineers are legally obligated to report unsafe conduct, activities or behaviours of others that could pose a risk to the public or the environment. [1]

          If software "engineers" want to be taken seriously, then they should also have the obligation to report unsafe/broken software and refuse to ship unsafe/broken software. The developers are just as much to blame as the post office:

          > Fujitsu was aware that Horizon contained software bugs as early as 1999 [2]

          [1] https://engineerscanada.ca/news-and-events/news/the-duty-to-...

          [2] https://en.wikipedia.org/wiki/British_Post_Office_scandal

        • siva7 47 minutes ago
          Then you haven't read deeply enough into the Horizon UK case. The lead devs have to take major blame for what happened, as they lied to the investigators and could have helped prevent some suicides early on if they had had the courage. These devs are the worst kind, namely Gareth Jenkins and Anne Chambers.
        • sim7c00 4 hours ago
          As you point out, this was a messup on a lot of levels. It's an interesting effect though, not to be dismissed: how your software works and how it's perceived and trusted can impact people psychologically.
        • fuckaj 4 hours ago
          Given whole truth testimony?
      • maweki 4 hours ago
        But there is still a difference here. Provenance and proper traceability would have allowed the subpostmasters to show their innocence and prove the system fallible.

        In the Therac-25 case, the killing was quite immediate and it would have happened even if the correct radiation dose was recorded.

        • scott_w 2 hours ago
          I’m not sure it would. Remember that the prosecutors in this case were outright lying to the courts about the system! When you hit that point, it’s really hard to even get a clean audit trail out in the open any more!
    • grues-dinner 3 hours ago
      Non-agentic AI is already "killing" people by some definitions. There's a post about someone being talked into suicide on the front page right now, and they are 100% going to get used for something like health insurance and benefits where avoidable death is a very possible outcome. Self-driving cars are also full of "AI" and definitely have killed people already.

      Which is not to say that software hasn't killed people before (Horizon, Boeing, probably loads of industrial accidents and indirect process control failures leading to dangerous products, etc, etc). Hell, there's a suspicion that austerity is at least partly predicated on a buggy Excel spreadsheet, and with about 200k excess deaths in a decade (a decade not including Covid) in one country, even a small fraction of those being laid at the door of software is a lot of Theracs.

      AI will probably often skate away from responsibility in the same way that Horizon does: by being far enough removed and with enough murky causality that they can say "well, sure, it was a bug, but them killing themselves isn't our fault"

      I also find AI copilot things do not work well with embedded software. Again, people YOLOing embedded isn't new, but it might be about to get worse.

    • the-grump 5 hours ago
      The 737 MAX MCAS debacle was one such failure, albeit involving a wider system failure and not purely software.

      Agreed on the future but I think we were headed there regardless.

      • jonplackett 5 hours ago
        Yeah reading this reminded me a lot of MCAS. Though MCAS was intentionally implemented and intentionally kept secret.
    • sim7c00 4 hours ago
      Talk to anyone in those industries about 'automation' on medical or critical infra devices and they will tell you NO. No touching our devices with your rubbish.

      I am pretty confident they won't let Claude touch it if they don't even let deterministic automations run...

      That being said, maybe there are places. But this is always the sentiment I got: no automating, no scanning, no patching. The device is delivered certified and any modifications will invalidate that. Any changes need to be validated and certified.

      It's a different world than making apps, that's for sure.

      Not to say mistakes aren't made and change doesn't happen, but I don't think people designing medical devices will be going YOLO mode on their dev cycle anytime soon... give the folks in safety-critical system engineering some credit.

      • throwaway0261 4 hours ago
        > but I don't think people designing medical devices will be going YOLO mode on their dev cycle anytime soon

        I don't have the same faith in corporate leadership as you, at least not when they see potentially huge savings by firing some of the expensive developers and using AI to write more of the code.

    • Maxion 5 hours ago
      > Personally, I've found even the latest batch of agents fairly poor at embedded systems

      I mean, even in simple CRUD web apps where the data models are more complex, and where the same data has multiple structures, the LLMs get confused after the second data transformation (at most).

      E.g. you take in data with the field created_at, store it as created_on, and send it out to another system as last_modified.
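
      A toy sketch of that shape in Python (record contents invented for illustration, field names from the example above):

          # Inbound payload from the upstream API
          inbound = {"id": 42, "created_at": "2025-01-01T12:00:00Z"}

          # Internal storage model renames the timestamp field
          stored = {"id": inbound["id"], "created_on": inbound["created_at"]}

          # Outbound payload for the downstream system renames it again
          outbound = {"id": stored["id"], "last_modified": stored["created_on"]}

          # Three names, one value - exactly the sort of mapping that gets
          # lost track of after a transformation or two.
          assert outbound["last_modified"] == inbound["created_at"]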

  • haunter 4 hours ago
    My "favorite" part:

    > One failure occurred when a particular sequence of keystrokes was entered on the VT100 terminal that controlled the PDP-11 computer: If the operator were to press "X" to (erroneously) select 25 MeV photon mode, then use "cursor up" to edit the input to "E" to (correctly) select 25 MeV Electron mode, then "Enter", all within eight seconds of the first keypress and well within the capability of an experienced user of the machine, the edit would not be processed and an overdose could be administered. These edits were not noticed as it would take 8 seconds for startup, so it would go with the default setup

    Kinda reminds me how everything is touchscreen nowadays, from car interfaces to industry-critical software.
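
    A minimal sketch of the kind of shared-state race that quote describes, with a hypothetical setup task and edit handler (not AECL's actual code):

        import threading, time

        mode = "X"        # operator's initial, erroneous selection
        result = {}

        def setup_task():
            # Stand-in for the ~8 second hardware setup: the mode is sampled
            # once at the start and never re-checked before the beam fires.
            captured = mode
            time.sleep(8)
            result["configured_mode"] = captured

        def operator_edit(new_mode):
            # The edit updates the screen state, but the already-running
            # setup task never sees it, so the correction is silently lost.
            global mode
            mode = new_mode

        t = threading.Thread(target=setup_task)
        t.start()
        time.sleep(1)        # operator notices the mistake a moment later
        operator_edit("E")   # correction typed well within the 8 seconds
        t.join()
        print(result)        # {'configured_mode': 'X'} while the display shows "E"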

    • hiccuphippo 1 hour ago
      And we have a concept, optimistic updates, for making the UI look responsive while the updates happen in the background and reconcile later. I can only hope they know when not to use it.
  • rendaw 34 minutes ago
    So reading about this, my current company sounds exactly the same. And the one before it, and the one before that.

    Critical issues happen with customers, blame gets shifted, a useless fix is proposed in the post mortem and implemented (add another alert to the waterfall of useless alerts we get on call), and we continue to do ineffective testing. Procedural improvements are rejected by the original authors who were then promoted and want to keep feeling like they made something good and are now in a position to enforce that fiction.

    So IMO the lesson here isn't that everyone should focus on culture and process, it's that you won't have the right culture and process, and (apparently) laws and regulation can overcome the lack of culture and process.

  • michaelt 6 hours ago
    I'd be interested in knowing how many of y'all are being taught about this sort of thing in college ethics/safety/reliability classes.

    I was taught about this in engineering school, as part of a general engineering course also covering things like bathtub reliability curves and how to calculate the number of redundant cooling pumps a nuclear power plant needs. But it's a long time since I was in college.

    Is this sort of thing still taught to engineers and developers in college these days?

    • FuriouslyAdrift 51 minutes ago
      A big thing that was emphasized in my computer engineering courses at Purdue in the early 90s with regard to machine interfaces was hysteresis. A machine has a RANGE of behaviors throughout its operating area that might not be accounted for in your programming, and you must take that into consideration (i.e. a robotic arm or electric motor doesn't just 'stop' instantly).

      Analog systems do not behave like computers.

    • wocram 5 hours ago
      This was part of our Systems Engineering class, something like this: https://web.mit.edu/6.033/2014/wwwdocs/assignments/therac25....
    • BoxOfRain 5 hours ago
      I was taught about it in university as a computer science undergrad, and have thought about it often since I ended up working in medtech.
    • aDyslecticCrow 5 hours ago
      I'm curious too, so I made a poll. I for sure wasn't taught it in computer science at uni; I only heard about it vaguely online.

      https://strawpoll.com/NMnQNX9aAg6

    • lgeek 5 hours ago
      It was taught in a first year software ethics class on my Computer Science programme. Back in 2010. I'm wondering if they still do
      • firesteelrain 2 hours ago
        I was taught Computer Ethics back in the early 2000s as part of my CS degree.
    • 3D30497420 5 hours ago
      I studied design and I wish we'd had a design ethics class, which would have covered instances like this.
  • ChrisMarshallNY 1 hour ago
    I worked for hardware manufacturers for most of my career, as a software guy.

    In my experience, hardware people really dis software. It's hard to get them to take it seriously.

    When something like this happens, they tend to double down on shading software.

    I have found it very, very difficult to get hardware people to understand that software has a different ruleset and workflow, from hardware. They interpret this as "cowboy software," and think we're trying to weasel out of structure.

    • scottLobster 11 minutes ago
      From the sound of it there was water in the hydraulic system. There's no software fix for that; the wheel wasn't coming down regardless of what it was commanded to do.
  • MerrimanInd 33 minutes ago
    Every mechanical engineer educated in the USA knows the names of two famous collapses: the Tacoma Narrows Bridge and the Hyatt Regency walkways in Kansas City, MO. With an engineering ethics class being part of nearly every undergrad curriculum, these are two of the classic examples for us. I'm curious; do software engineers learn stories like the Therac-25 in their degrees?
    • scottLobster 13 minutes ago
      I was a Computer Engineer, so not quite the same, but we got taught about Therac-25 in our Engineering Ethics class when I took it over a decade ago.

      Unfortunately Computer Science is still in its too-cool-for-school phase; see OpenAI recently being sued over encouraging a suicidal teenager to kill themself. You'd think it would be common sense for that to be a hard stop outside of the LLM processing the moment a conversation turns to subjects like that, but nope.

  • Tenemo 2 hours ago
    The full 1993 report linked in the article has an interesting statement regarding software developer certification in the "Lessons learned" chapter:

    > Taking a couple of programming courses or programming a home computer does not qualify anyone to produce safety-critical software. Although certification of software engineers is not yet required, more events like those associated with the Therac-25 will make such certification inevitable. There is activity in Britain to specify required courses for those working on critical software. Any engineer is not automatically qualified to be a software engineer — an extensive program of study and experience is required. Safety-critical software engineering requires training and experience in addition to that required for noncritical software.

    After 32 years, this didn't go the way the report's authors expected, right?

    • firesteelrain 2 hours ago
      To add: safety-critical software is not something you pick up in a classroom; it is something built over years of disciplined practice. There are standards like DO-178 for avionics and IEC 61508 for industrial systems, but how rigorously they are applied often depends on cost and project constraints. That said, when failures happen, the audit trail will not matter to the people harmed. The history of safety engineering shows that almost every rule exists because someone was hurt first.
  • siva7 1 hour ago
    > With AECL's continued failure to explain how to test their device

    They can't. There was a single developer, he left, no tests existed, and no one understood the mess well enough to confidently make changes. At this point you can either lie your way through the regulators or scrap the product altogether.

    I've seen these kinds of devs and companies running their software in regulated industries, just like in the Therac incident, only now we are in the year 2025. I left because I understood that it's a criminal charge waiting to happen.

  • rossant 5 hours ago
    The first commenter on this site introduces himself as "a physician who did a computer science degree before medical school." He is now president of the Ray Helfer Society [1], "an honorary society of physicians seeking to provide medical leadership regarding the prevention, diagnosis, treatment and research concerning child abuse and neglect."

    While the cause is noble, the medical detection of child abuse faces serious issues with undetected and unacknowledged false positives [2], since ground truth is almost never knowable. The prevailing idea is that certain medical findings are considered proof beyond reasonable doubt of violent abuse, even without witnesses or confessions (denials are extremely common). These beliefs rest on decades of medical literature regarded by many as low quality because of methodological flaws, especially circular reasoning (patients are classified as abuse victims because they show certain medical findings, and then the same findings are found in nearly all those patients—which hardly proves anything [3]).

    I raise this point because, while not exactly software bugs, we are now seeing black-box AIs claiming to detect child abuse with supposedly very high accuracy, trained on decades of this flawed data [4, 5]. Flawed data can only produce flawed predictions (garbage in, garbage out). I am deeply concerned that misplaced confidence in medical software will reinforce wrongful determinations of child abuse, including both false positives (unjust allegations potentially leading to termination of parental rights, foster care placements, imprisonment of parents and caretakers) and false negatives (children who remain unprotected from ongoing abuse).

    [1] https://hs.memberclicks.net/executive-committee

    [2] https://news.ycombinator.com/item?id=37650402

    [3] https://pubmed.ncbi.nlm.nih.gov/30146789/

    [4] https://rdcu.be/eCE3l

    [5] https://www.sciencedirect.com/science/article/pii/S002234682...

  • tedggh 2 hours ago
    TL;DR

    The Therac-25 was a radiation therapy machine built by Atomic Energy of Canada Limited in the 1980s. It was the first to rely entirely on software for safety controls, with no hardware interlocks. Between 1985 and 1987, at least six patients received massive overdoses of radiation, some fatally, due to software flaws.

    One major case in March 1986 at the East Texas Cancer Center involved a technician who mistyped the treatment type, corrected it quickly, and started the beam. Because of a race condition, the correction didn’t fully register. Instead of the prescribed 180 rads, the patient was hit with up to 25,000 rads. The machine reported an underdose, so staff didn’t realize the harm until later.

    Other hospitals reported similar incidents, but AECL denied overdoses were possible. Their safety analysis assumed software could not fail. When the FDA investigated, AECL couldn’t produce proper test plans and issued crude fixes like telling hospitals to disable the “up arrow” key.

    The root problem was not a single bug but the absence of a rigorous process for safety-critical software. AECL relied on old code written by one developer and never built proper testing practices. The scandal eventually pushed regulators to tighten standards. The Therac-25 remains a case study of how poor software processes and organizational blind spots can kill—a warning echoed decades later by failures like the Boeing 737 MAX.

    • thijson 1 hour ago
      I remember my computer science professor talking about this, and how critical safety can be in software. Another example he gave was the refueling machine at a nuclear power plant, which had fallen off its tracks and broken the pipe that goes through the reactor due to a software bug. He also mentioned the software in his pacemaker.

      Engineers in other fields need to sign off on designs, and can be held liable if something goes wrong. Software hasn't caught up to that yet.

  • softwaredoug 3 hours ago
    Safety problems are almost never about one evil / dumb person and frequently involve confusing lines of responsibility.

    Which makes me very nervous about AI-generated code and people who don't claim human authorship. Scapegoating the AI for a bug that creeps in isn't gonna cut it in a safety situation.

  • throwaway0261 3 hours ago
    One of the comments said this:

    > That standard [IEC 62304] is surrounded by other technical reports and guidances recognized by the FDA, on software risk management, safety cases, software validation. And I can tell you that the FDA is very picky, when they review your software design and testing documentation. For the first version and for every design change.

    > That’s good news for all of us. An adverse event like the Therac 25 is very unlikely today.

    This is a case where regulation is a good thing. Unfortunately I see a trend lately where almost any regulation is seen as something stopping innovation and business growth. There is room for improvement and some areas are over-regulated, but we don't want a "DOGE" chainsaw taken to regulations without knowing what the consequences are.

  • vemv 5 hours ago
    My (tragically) favorite part is, from wikipedia:

    > A commission attributed the primary cause to generally poor software design and development practices, rather than singling out specific coding errors.

    Which to me reads as "this entire codebase was so awful that it was bound to fail in some way or other".

    • rgoulter 4 hours ago
      Hmm. "poor software design" suggests a high risk that something might go wrong; "poor development practice" suggests that mistakes won't get caught/remedied.

      By focusing on particular errors, there's the possibility you'll think "problem solved".

      By focusing on process, you hope to catch mistakes as early as possible.

    • ycombobreaker 1 hour ago
      Sibling reply notes the "process" is the problem, and I would second that. I would also like to add that it's perfectly possible to produce a high-quality code base with poor practices. This can happen with very small, expert teams. However, certain qualities become high-variance, which becomes a hazard over time.
  • mdavid626 5 hours ago
    Some sanity checks are always a good idea before running such a destructive action (IF beam_strength > REASONABLY_HIGH_NUMBER THEN error). Of course the UI bug is hard to catch, but the sanity check would have prevented this completely and the machine would just end up in an error state, rather than killing patients.
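
    A minimal sketch of that kind of independent guard in Python (the limit, names, and structure are illustrative, nothing here is from the Therac code):

        MAX_SAFE_DOSE_RADS = 200   # illustrative ceiling, not a real clinical limit

        class DoseSanityError(Exception):
            pass

        def fire_beam(requested_dose_rads, activate_beam):
            # Independent check evaluated immediately before activation,
            # regardless of what the UI and setup code believe they configured.
            if requested_dose_rads > MAX_SAFE_DOSE_RADS:
                raise DoseSanityError(
                    f"requested {requested_dose_rads} rads exceeds the safe limit")
            activate_beam(requested_dose_rads)
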
    • b_e_n_t_o_n 5 hours ago
      Invariants are so useful to enforce, even for toy projects. They should never be triggered outside of dev, but if they are, sometimes it's better to just let it crash.
      • bzzzt 4 hours ago
        Making sure the beam is off before crashing would be better though.
  • snkline 1 hour ago
    I was kinda shocked by the results of his informal survey, because this was a big focus of my ethics course in college. I guess a lot of developers either didn't get a CS degree, or their degree program didn't involve an ethics course.
  • haddonist 5 hours ago
    Well There's Your Problem podcast, Episode 121: Therac-25

    https://www.youtube.com/watch?v=7EQT1gVsE6I

  • 0xDEAFBEAD 2 hours ago
    > any bugs we see would have to be transient bugs caused by radiation or hardware errors.

    Can't imagine that radiation might be a factor here...

  • rokkamokka 6 hours ago
    I was taught this incident in university many years ago. It's undeniably an important lesson that shouldn't be forgotten
  • mellosouls 5 hours ago
    TIL TheDailyWTF is still active. I'd thought it had settled to greatest hits only some years ago.
    • greatgib 4 hours ago
      This story is kind of old. But also I'm suspicious that this was AI-generated content due to this weird paragraph (one developer becoming "they"):

         It's worth noting that there was one developer who wrote all of this code. They left AECL in 1986, and thankfully for them, no one has ever revealed their identity. And while it may be tempting to lay the blame at their feet—they made every technical choice, they coded every bug—it would be wildly unfair to do that.
      • pie_flavor 3 hours ago
        'They' is a correct singular form for a person of unknown gender. Modern writing overwhelmingly uses it instead of 'he or she', but it has always been correct, has been predominant for a long time, and furthermore it doesn't have anything to do with AI, nor was AI viable as an authoring tool when this article was written, nor is Remy ever going to sell out. What a bizarre comment.
      • edot 4 hours ago
        Isn’t that the pronoun to use when you’re unsure of gender? This article didn’t feel AI-y to me.
      • semv3r 4 hours ago
        Singular "they" has been used since at least the 14th century—was generative AI commonly available then? https://en.wikipedia.org/wiki/Singular_they
      • tbossanova 2 hours ago
        That is 100% standard english, dude. I feel like I might have read that exact sentence 20 years ago...
  • linohh 5 hours ago
    In my university this case was (and probably still is) the subject of the first lecture in the first semester. A lot to learn here, and it's one of the prime examples of how the DEPOSE model [Perrow 1984] works for software engineering.
  • Forgret 4 hours ago
    What surprised me most was that only one developer was working on such an unpredictable technology, whereas I think I need at least 5 developers to be able to discuss options.
    • throwaway0261 3 hours ago
      One of the benefits of regulations in these areas is that they require proper tests and documentation. This often requires more than one person to handle the load. We don't want to go back to the 80s YOLO mode just because we need to "move faster".

      BTW: Relevant XKCD: https://xkcd.com/2347/

  • armcat 2 hours ago
    Therac-25 was part of the mandatory "computer ethics" course at my uni, as part of the Computer Science programme, circa early 2000s.
  • amelius 5 hours ago
    > The Therac-25 was the first entirely software-controlled radiotherapy device.

    This says it all.

  • autonomousErwin 5 hours ago
    This reminds me of the 2003 Belgian election that was impossibly skewed by a supernova light years away sending charged particles which managed to get through our atmosphere (allegedly) and flip a bit. Not the only case where it's happened.
    • jve 5 hours ago
      On the bright side, wow, those computers are really sturdy: takes a whole supernova to just flip a bit :)
      • kijin 5 hours ago
        Well the thing is, millions of stars go supernova in the observable universe every single day. Throw in the daily gamma ray burst as well, and you've got bit flips all over the place.
  • napolux 6 hours ago
    The most deadly bug in history. If you know any other deadly bug, please share! I love these stories!
    • kgwgk 5 hours ago
      Several people killed themselves over this: https://www.wikipedia.org/wiki/British_Post_Office_scandal

      https://www.theguardian.com/uk-news/2024/jan/09/how-the-post...

      One member of the development team, David McDonnell, who had worked on the Epos system side of the project, told the inquiry that “of eight [people] in the development team, two were very good, another two were mediocre but we could work with them, and then there were probably three or four who just weren’t up to it and weren’t capable of producing professional code”.

      What sort of bugs resulted?

      As early as 2001, McDonnell’s team had found “hundreds” of bugs. A full list has never been produced, but successive vindications of post office operators have revealed the sort of problems that arose. One, named the “Dalmellington Bug”, after the village in Scotland where a post office operator first fell prey to it, would see the screen freeze as the user was attempting to confirm receipt of cash. Each time the user pressed “enter” on the frozen screen, it would silently update the record. In Dalmellington, that bug created a £24,000 discrepancy, which the Post Office tried to hold the post office operator responsible for.

      Another bug, called the Callendar Square bug – again named after the first branch found to have been affected by it – created duplicate transactions due to an error in the database underpinning the system: despite being clear duplicates, the post office operator was again held responsible for the errors.
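
      The Dalmellington description maps onto a familiar failure shape: retrying a non-idempotent operation. A toy illustration in Python (amounts and schema invented for the example, not Horizon's):

          ledger = []

          def confirm_cash_receipt(amount):
              # Non-idempotent: every call appends a fresh movement with no
              # request ID to deduplicate on, so a frozen screen that re-sends
              # the confirmation on each keypress multiplies the recorded cash.
              ledger.append({"type": "cash_received", "amount": amount})

          for _ in range(4):              # user hammers Enter on a frozen screen
              confirm_cash_receipt(100)

          total = sum(entry["amount"] for entry in ledger)
          print(total)                    # 400 recorded, 100 actually received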

      • BoxOfRain 5 hours ago
        More heads should have rolled over this in my opinion, absolutely despicable that they cheerfully threw innocent people in prison rather than admit their software was a heap of crap. It makes me so angry this injustice was allowed to prevail for so long because nobody cared about the people being mistreated and tarred as thieves as long as they were 'little people' of no consequence, while senior management gleefully covered themselves in criminality to cover for their own uselessness.

        It's an archetypal example of 'one law for the connected, another law for the proles'.

    • benrutter 5 hours ago
      Probably many bugs rather than a single one, but the botched London Ambulance dispatch software from the 90s is probably one of the most deadly software issues of all time, although there aren't any estimates I know of that try to quantify the number of lives lost as a result.

      http://www0.cs.ucl.ac.uk/staff/a.finkelstein/papers/lascase....

    • A1kmm 5 hours ago
      Not even close. Israel apparently has AI bombing target intel & selection systems called Gospel and Lavender - https://www.theguardian.com/world/2024/apr/03/israel-gaza-ai.... Claims are these systems have a selectivity of 90% per bombing, and they were willing to bomb up to 20 civilians per person classified by the system as a Hamas member. So assuming that is true, 90% of the time, they kill one Hamas member, and up to 20 innocents. 10% of the time, they kill up to 21 innocents and no Hamas members.

      Killing 20 innocents and one Hamas member is not a bug - it is callous, but that's a policy decision and the software working as intended. But when it is a false positive (10% of the time), due to inadequate / outdated data and inadequate models, that could reasonably be classified as a bug - so all 21 deaths for each of those bombings would count as deaths caused by a bug. Apparently (at least earlier versions of) Gospel were trained on positive examples that mean someone is a member of Hamas, but not on negative examples; other problems could be due to, for example, insufficient data, and interpolation outside the valid range (e.g. using pre-war data about how quickly cell phones are traded, or people's movements, when behaviour is different post-war).

      I'd therefore estimate that deaths due to classification errors from those systems is likely in the thousands (out of the 60k+ Palestinian deaths in the conflict). Therac-25's bugs caused 6 deaths for comparison.

    • NitpickLawyer 6 hours ago
      The MCAS related bugs @ Boeing led to 300+ deaths, so it's probably a contender.
      • solids 5 hours ago
        Was that a bug or a failure to inform pilots about a new system?
        • thyristan 5 hours ago
          In the same vein one could argue that Therac-25 was not actually a software bug but a hardware problem. Interlocks that could have prevented the accidents, and that were present in earlier Therac models, were missing. The software was written with those interlocks in mind. Greedy management/hardware engineers skipped them for the -25 version.

          It's almost never just software. It's almost never just one cause.

          • actionfromafar 5 hours ago
            Just to point it out even clearer - there's almost never a root cause.
        • AdamN 5 hours ago
          Both - and really MCAS was fine but the issue was the metering systems (Pitot tubes) and the handling of conflicting data. That part of the puzzle was definitely a bug in the logic/software.
          • mnw21cam 3 hours ago
            It wasn't pitot tubes that had the hardware problem, it was the angle of attack sensor. The software was poorly designed to believe the input from just one fallible angle of attack sensor.
          • phire 4 hours ago
            That wasn't a bug.

            They deliberately designed it to only look at one of the Pitot tubes, because if they had designed it to look at both, then they would have had to implement a warning message for conflicting data.

            And if they had implemented a warning message, they would have had to tell the pilots about the new system, and train them how to deal with it.

            It wasn't a mistake in logic either. This design went through their internal safety certification, and passed.

            As far as I'm aware, MCAS functioned exactly as designed, zero bugs. It's just that the design was very bad.

          • kijin 5 hours ago
            Remember the Airbus that crashed in the middle of the Atlantic because one of the pilots kept pulling on his yoke, and the computer decided to average his input with normal input from the other pilot?

            Conflict resolution in redundant systems seems to be one of the weakest spots in modern aircraft software.

            • sgerenser 3 hours ago
              Air France 447: https://en.m.wikipedia.org/wiki/Air_France_Flight_447

              Inputs were averaged, but supposedly there’s at least a warning: Confused, Bonin exclaimed, "I don't have control of the airplane any more now", and two seconds later, "I don't have control of the airplane at all!"[42] Robert responded to this by saying, "controls to the left", and took over control of the aircraft.[84][44] He pushed his side-stick forward to lower the nose and recover from the stall; however, Bonin was still pulling his side-stick back. The inputs cancelled each other out and triggered an audible "dual input" warning.

        • NitpickLawyer 5 hours ago
          I would say plenty of both. They obviously had to inform the pilots, but the way the system didn't reset permanently after 2-3 (whatever) sessions of "oh, the pilot trimmed manually, after 10 seconds we keep doing the same thing" was a major major logic blunder. Failure all across the board, if only from the perspective of end-to-end / integration testing if nothing else.

          Worryingly, lack of e2e / full integration testing was also the main cause of other Boeing blunders, like the Starliner capsule.

        • fuckaj 4 hours ago
          Not a bug. A non airworthy plane they tried to patch up with software.
          • reorder9695 4 hours ago
            The plane was perfectly airworthy without MCAS, that was never the issue. The issue was that it handled differently enough at high angles of attack from the 737NG that pilots would've needed additional training or possibly a new type rating without MCAS changing the trim in this situation. The competition (the Airbus NEO family) did not need this kind of new training for existing pilots, so airlines being required to do this for new Boeing but not Airbus planes would've been a huge commercial disadvantage.

            [edit as I can't reply to the child comment]: The FAA and EASA both looked into the stall characteristics afterwards and concluded that the plane was stable enough to be certified without MCAS, and while it did have more of a tendency to pitch up at high angles of attack, it was still an acceptable amount.

            • fuckaj 4 hours ago
              I may have understood wrong, but I thought it was possible to get into an unrecoverable stall?
    • bobmcnamara 2 hours ago
      In Dhahran, Saudi Arabia, on February 25, 1991, a Patriot missile failed to intercept an Iraqi Scud, causing the deaths of 28 American soldiers.

      The Patriot missile system kept time as a count of tenths of a second, converted using a truncated fixed-point representation of 1/10, so as uptime extended the accumulated clock error grew, eventually to the point where the computed time was off far enough that the range gate calculation failed and the target was dropped.

      The fix was being deployed earlier that year but this unit hadn't been updated yet.

      https://www.cs.unc.edu/~smp/COMP205/LECTURES/ERROR/lec23/nod...
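
      A back-of-the-envelope version of the commonly cited analysis (the bit width, tick size, and ~100 hours of uptime are taken from the published write-ups of the incident, not from the Patriot source):

          FRACTION_BITS = 23                    # fraction bits assumed from the write-ups
          stored_tenth = int(0.1 * 2**FRACTION_BITS) / 2**FRACTION_BITS

          error_per_tick = 0.1 - stored_tenth   # ~9.5e-8 s lost on every 0.1 s tick
          ticks = 100 * 3600 * 10               # roughly 100 hours of uptime
          drift = error_per_tick * ticks        # accumulates to ~0.34 s

          # At Scud closing speeds, a third of a second of clock error puts the
          # range gate hundreds of metres away from where the target actually is.
          print(round(drift, 3), "seconds of accumulated clock error")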

    • danadam 5 hours ago
      Some Google Pixel phones couldn't dial emergency numbers (still can't?). I don't know if there were any deadly consequences of that.

      https://www.androidauthority.com/psa-google-pixel-911-emerge...

    • throwaway0261 3 hours ago
      There was a news story from Norway last year where a car allegedly accelerated by itself, causing the car to fall off the second floor of a parking garage and kill the driver.
      • mnw21cam 3 hours ago
        There are plenty of "car allegedly accelerated by itself" incidents, and usually the root cause is the driver mistakenly pressing the accelerator pedal when they think they're pressing the brake pedal. And then swearing blind afterwards that they were braking as hard as they possibly could but the car kept surging forwards.
    • echelon 6 hours ago
      The 737 Max MCAS is arguably a bug. That killed 346 people.

      Not a "bug" per se, but texting while driving kills ~400 people per year in the US. It's a bug at some level of granularity.

      To be tongue in cheek a bit, buggy JIRA latency has probably wasted 10,000 human years. Those are many whole human lives if you count them up.

      • b_e_n_t_o_n 5 hours ago
        > To be tongue in cheek a bit, buggy JIRA latency has probably wasted 10,000 human years. Those are many whole human lives if you count them up.

        These kind of calculations always make me wonder...say someone wasted one minute of everybody's life, is the cost ~250 lives? One minute? Somewhere in between?

  • voxadam 4 hours ago
    (2021)
  • rvz 6 hours ago
    We're more likely to get an incident like this very quickly if we continue with the cult of 'vibe-coding' and throwing basic software engineering principles out of the window, as I said before. [0]

    Take this post-mortem [1] as a great warning; it highlights exactly what could go horribly wrong if the LLM misreads comments.

    What's even scarier is that each time I stumble across a freshly minted project on GitHub with a considerable amount of attention, not only is it 99% vibe-coded (very easy to detect), it also completely lacks any tests.

    Makes me question whether the user prompting the code in the first place even understands how to write robust and battle-tested software.

    [0] https://news.ycombinator.com/item?id=44764689

    [1] https://sketch.dev/blog/our-first-outage-from-llm-written-co...

    • voxadam 4 hours ago
      The idea of 'vibe-coding' safety critical software is beyond terrifying. Timing and safety critical software is hard enough to talk about intelligently, even harder to code, harder yet to audit, and damn near impossible to debug, and all that's without neophyte code monkeys introducing massive black boxes full of poorly understood voodoo to the process.
  • auggierose 5 hours ago
    Wondering if that "one developer" is here on HN.
    • Forgret 4 hours ago
      Hahaha, it would be interesting, maybe he just commented on the post here?