Remy Porter recounts the Therac-25 incident: a fully software-controlled radiotherapy machine whose safety failures caused multiple overdoses and deaths in the mid-1980s. The March 1986 overdose at the East Texas Cancer Center (ETCC) was one of six accidents between 1985 and 1987 in which patients were harmed or killed while the machine reported underdoses.

The root cause was not a hardware fault but a race condition in the software, compounded by a flawed safety process. When a clinician typed quickly to correct an input error (for example, entering "x" for X-ray mode when electron mode was intended) and then started the beam, the UI sometimes failed to recalculate the activation sequence in time, allowing a high-energy beam to reach the patient through a misconfigured machine; the first code sketch below illustrates the failure mode. The Therac-25 lacked reliable hardware interlocks: safety depended entirely on software that was never adequately tested or validated for complex human–computer interactions. The system ran on PDP-11 assembly, with separate concurrent processes for input handling, beam alignment, and dosage, written largely by a single developer. AECL assumed the software was safe because it had been in use for years, ignoring software decay and the need for rigorous testing of software changes.

As incidents accumulated, the FDA demanded a Corrective Action Plan (CAP), but AECL's CAP revisions were repeatedly criticized for incomplete test plans and inadequate validation of software changes. AECL's interim "fix" for the editing race was a letter instructing hospitals to disable the up-arrow key by removing its keycap and taping over the contacts, a drastic and frightening workaround that underscored how safety was being protected through process hacks rather than robust engineering. In January 1987 another overdose occurred due to a different bug: a shared one-byte variable that gated a safety check was incremented on each pass rather than simply set, so it rolled over to zero every 256th pass; if the operator issued a command at that instant, the check was skipped and the beam could fire at full energy while the turntable was not properly positioned (see the second sketch below).

Ultimately, the software was fixed and regulatory changes were introduced to prevent similar failures. Porter emphasizes that the Therac-25 story is a systemic failure, not just the fault of a lone programmer: faulty processes, inadequate testing, and organizational culture allowed life-threatening software failures to slip through. He uses it to urge readers to examine how their own development processes ensure safety and quality at scale, pointing to modern parallels in safety-critical software and calling for rigorous testing and governance. The article also cites a deeper, definitive technical account for those who want a more thorough reconstruction.
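
To make the editing race concrete, here is a minimal sketch in C rather than the original PDP-11 assembly. The names (`beam_mode`, `configure_beam`) are hypothetical, and the multi-task interleaving is compressed into a straight-line simulation; what it preserves is the essential flaw, that a long-running setup routine sampled shared state once, so a fast correction landing inside the setup window was silently lost.

```c
/* Minimal sketch of the editing race (hypothetical names; the real code
 * was PDP-11 assembly with several cooperating tasks). */
#include <stdio.h>

typedef enum { MODE_XRAY, MODE_ELECTRON } beam_mode_t;

/* Shared state, written by the keyboard-handling task. */
static volatile beam_mode_t beam_mode;

/* Long-running setup: configures the beam for the mode it was handed.
 * On the real machine this phase took several seconds. */
static void configure_beam(beam_mode_t mode) {
    printf("beam configured for %s\n",
           mode == MODE_XRAY ? "X-ray (high energy)" : "electron");
}

int main(void) {
    beam_mode = MODE_XRAY;            /* operator mistypes 'x' for X-ray  */
    beam_mode_t snapshot = beam_mode; /* setup task samples the mode once */
    beam_mode = MODE_ELECTRON;        /* fast up-arrow correction arrives
                                         while setup is already underway  */
    configure_beam(snapshot);         /* BUG: the stale snapshot wins;
                                         the correction is silently lost  */
    return 0;
}
```

A hardware interlock would have caught the mismatch between the configured beam and the corrected prescription regardless of which task won the race; as the article notes, the Therac-25 had none to fall back on.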
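
The January 1987 bug is even simpler to sketch. Assuming a hypothetical one-byte counter named `class3` standing in for the shared flag, incrementing it instead of setting it makes the "is a position check required?" test fail once every 256 passes:

```c
/* Minimal sketch of the rollover bug (hypothetical names). The real code
 * incremented a shared one-byte variable instead of setting it, so every
 * 256th pass it wrapped to zero and the safety check was skipped. */
#include <stdint.h>
#include <stdio.h>

static uint8_t class3 = 0;   /* one byte, as on the original hardware;
                                nonzero means "position must be verified" */

/* Executed on every pass through the setup loop. */
static void setup_pass(void) {
    class3++;                /* BUG: increment, rather than class3 = 1 */
}

static int position_check_required(void) {
    return class3 != 0;      /* wraps to 0 on every 256th pass */
}

int main(void) {
    for (int pass = 1; pass <= 512; pass++) {
        setup_pass();
        if (!position_check_required())
            printf("pass %d: check skipped -- beam may fire with the "
                   "turntable out of position\n", pass);
    }
    return 0;
}
```

Setting the flag to a constant (`class3 = 1`) instead of incrementing it closes the window entirely; the rollover exists only because a boolean was treated as a counter.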