# Normalization of Deviance

Normalization of deviance is a phenomenon in which deviant behavior or practices become accepted as normal within an organization, often leading to increasingly risky situations and sometimes to catastrophic failure. While the concept has been studied in fields like healthcare and aviation, it is rarely discussed in software development despite its clear relevance.

---

## Anecdotes of Normalized Deviance in Tech Companies

- **High employee turnover:** A company combining aspects of Valve and Netflix, reputed for its great culture, loses about half of its new hires in the first year, and this is considered normal.
- **Excessive secrecy:** Some teams refuse to report hardware bugs to vendors so that competitors can't benefit from the fixes; some intentionally publish unreproducible research results; strict anti-leak policies produce extreme employee paranoia.
- **Toxic relationships:** In one office, two team managers had a decade-long feud that kept them out of the same room for years.
- **Misaligned culture and practice:** In some companies, version control wasn't enforced for months; builds broke multiple times a day, which was rationalized as acceptable because it affected everyone equally; recruiter bias persisted despite initiatives to increase diversity; and massive budgets were poorly allocated, with expensive items taking months to be approved.
- **Common adoption of questionable tools:** Widespread use of the Python `@flaky` decorator, which reports a test as passing if any rerun passes, masking flaky-test problems rather than fixing them (see the sketch after this section).
- **Low reliability:** Companies whose infrastructure many others depend on achieve only "two nines" (99%) uptime, due to poor practices stemming from an early focus on growth and a disregard for risk (see the downtime arithmetic after this section).

---

## Causes and Dynamics Behind Normalization of Deviance

- **Rules seen as stupid or inefficient:** Procedures are bypassed because they slow down operations (e.g., skipping staging or testing during deployment).
- **Imperfect and uneven knowledge:** New employees learn deviant but accepted workflows, and repeated exposure normalizes them.
- **"For the good of the patient" rationale:** Breaking rules is justified by an immediate benefit (e.g., bypassing safety protocols to avoid service degradation).
- **Overconfidence and entitlement:** A "rules don't apply to me" mentality leads to pushback against security measures and access restrictions.
- **Fear of speaking up:** Cultural or interpersonal barriers keep people from raising concerns or giving critical feedback.
- **Leadership hiding problems:** Issues get diluted or filtered on the way up the management chain to avoid embarrassment or risk.

---

## Examples from Other Industries

- In healthcare, alarms ignored or safety equipment disabled out of annoyance has led to tragic patient deaths.
- Medical staff's non-compliance with hand-washing, despite clear evidence that it reduces mortality, parallels tech's failure to consistently follow best practices.

---

## Insights on Software Industry Culture

- Many tech companies prioritize rapid growth over operational safety and maintenance, leading to long-term problems; efforts to fix these problems often fail due to misaligned incentives.
- Larger companies find it difficult to change culture because decision-making and rewards are dispersed; smaller companies can react faster with direct leadership involvement.
- Industry-wide, best practices are often copied blindly ("cargo culting") without understanding why they worked in their original context.
- Good engineering culture requires ongoing vigilance, attention to weak signals, and safe channels for feedback.
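As a minimal sketch of the `@flaky` anecdote above (assuming pytest and the `flaky` package; the test name, body, and failure rate are hypothetical stand-ins for a real race condition):

```python
# Requires: pip install flaky pytest; run with pytest.
import random

from flaky import flaky


@flaky(max_runs=3, min_passes=1)  # rerun up to 3 times; a single pass is reported as success
def test_racy_lookup():
    # Hypothetical intermittent failure: fails about half the time.
    # The decorator reruns on failure, so the suite stays green roughly
    # 7 times out of 8, and the underlying bug is never surfaced.
    assert random.random() < 0.5
```

Rerunning can be a reasonable stopgap for a known-noisy test, but adopting it broadly hides exactly the weak signals this piece argues for noticing.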
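To make the "two nines" figure concrete, here is the back-of-the-envelope downtime arithmetic (plain Python; the only assumption is a 365-day year):

```python
# Downtime budget implied by each availability level.
HOURS_PER_YEAR = 24 * 365

for nines in (2, 3, 4, 5):
    availability = 1 - 10 ** -nines          # e.g. 2 nines -> 0.99
    downtime_hours = HOURS_PER_YEAR * (1 - availability)
    print(f"{nines} nines ({availability:.5f}): "
          f"{downtime_hours:7.2f} hours of downtime per year")
```

Two nines allows about 87.6 hours (3.65 days) of outage per year, a strikingly low bar for infrastructure that many other companies depend on.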
---

## Recommendations and Solutions

From John Banja's work and observations, applied to software development:

- **Pay attention to weak signals:** Give proper weight to new employees' concerns before they acclimate to poor standards.
- **Resist unreasonable optimism:** Avoid assuming that shortcuts are safe.
- **Teach difficult conversations:** Empower employees to speak up.