How thousands of ‘overworked, underpaid’ humans train Google’s AI to seem smart
Author: Varsha Bansal
Date: Thu 11 Sep 2025

---

Overview

This article exposes the harsh realities faced by thousands of contracted workers who train and moderate Google’s AI systems, such as the Gemini chatbot and AI Overviews. These workers, employed through subcontractors like GlobalLogic and Accenture, endure demanding, often distressing working conditions, yet receive little recognition or adequate support.

---

The Human Workforce Behind Google's AI

Workers like Rachael Sawyer, a technical writer, discovered their roles involved rating and moderating AI-generated content rather than creating it. Tasks include checking summaries, moderating violent or explicit AI outputs, and quality-rating responses under very tight deadlines. Thousands of such AI raters work primarily in the US, earning $16/hour (generalists) to $21/hour (super raters). GlobalLogic, Google’s primary contractor, grew from 25 super raters in 2023 to nearly 2,000 at its peak.

---

Work Conditions and Challenges

Raters often encounter disturbing content unexpectedly, without prior warning or mental health support. Mounting performance pressure has cut task times drastically, from 30 minutes to as little as 10–15 minutes per task. Raters face unclear, frequently changing guidelines and receive little information about how the AI is used or to what end. They must sometimes evaluate AI outputs outside their expertise, including complex fields such as astrophysics and medicine. Group consensus processes for rating can create social dynamics in which more dominant raters sway outcomes.

---

Moderation, Safety, and Policy Changes

Despite public scrutiny and incidents of bizarre responses (e.g., suggesting glue on pizza dough), Google continues to loosen moderation guardrails on its AI. In 2024, Google allowed the AI to repeat user-provided hate speech or explicit content, provided the AI itself did not generate it. Internal contractor guidelines reportedly downgraded safety enforcement when harmful content was not AI-generated but merely regurgitated. Google maintains that quality raters provide external feedback without directly impacting its algorithms.

---

Worker Sentiment and Industry Implications

Many raters feel invisible, undervalued, and morally conflicted about supporting AI products they view as unsafe or unnecessary. Layoffs reduced the workforce from about 2,000 to 1,500 in 2025. Some workers avoid using AI themselves and discourage others from trusting AI outputs. Experts warn that “speed eclipses ethics,” highlighting how profit motives undermine AI safety and worker well-being.

---

Quotes

“AI isn’t magic; it’s a pyramid scheme of human labor.” – Adio Dinika, Distributed AI Research Institute
“We had no idea where it was going, how it was being used or to what end.” – Anonymous AI rater
“Speed eclipses ethics. The AI safety promise collapses the moment safety threatens profit.” – Adio Dinika
“AI is built on the backs of overworked, underpaid human beings.” – Rachael Sawyer

---

Context and Related Issues

Human data workers, including annotators and raters, form the middle layer of AI’s supply chain, bridging massive internet data and high-tech engineering. There is growing concern about the ethical treatment of AI workers, the opacity of AI development, and the prioritization of market speed over safety. Related Guardian report: “Tech companies are stealing our books, music and films for AI. It’s brazen theft and must be stopped.”
---

Visuals

Illustration: AI models are trained on vast swathes of data from every corner of the internet, by humans (Rita Liu/The Guardian).
Photograph: User testing Google Gemini AI at the MWC25 tech show in Barcelona, March 2025 (Bloomberg/Getty Images).

---

Conclusion

The human cost behind cutting-edge AI technology remains largely hidden. Contract workers absorb distressing content, shrinking deadlines, and shifting guidelines with little recognition or support, even as the AI products they help train reach millions of users.