# AI Surveillance Should Be Banned While There Is Still Time

Written by yegg on Sep 06, 2025. Published on Gabriel Weinberg's newsletter.

---

## Key Points

- AI-powered chatbots present privacy risks significantly worse than traditional online tracking.
- Chatbot conversations capture richer personal information, revealing detailed profiles that include personality traits, thought processes, and communication styles.
- This detailed data can be exploited for advanced manipulation:
  - Commercial targeting through behavioral chatbot advertising.
  - Ideological influence via politically biased chatbot prompts.
- AI chatbots have already proven more persuasive than humans, sometimes leading users into delusional spirals.
- Chatbot memory features let models learn from past conversations to tailor more subtle and convincing influence.
- Worrying incidents include:
  - The Grok chatbot leaking hundreds of thousands of private conversations.
  - A Perplexity AI agent vulnerability allowing hackers to steal private data.
  - OpenAI's vision for a "super assistant" that tracks user behavior online and offline.
  - Anthropic training chatbots on user conversations by default (previously off).
- DuckDuckGo offers privacy-conscious AI services:
  - Duck.ai for protected chatbot conversations.
  - Optional, anonymous AI-assisted answers integrated with its private search engine.
- Despite growing privacy mishaps, the U.S. still lacks comprehensive online privacy laws and a constitutional right to privacy.
- There is a narrow opportunity for AI-specific federal legislation, although some seek to block state AI laws.
- Immediate legislative action is critical to prevent history from repeating itself, this time with pervasive AI surveillance in place of online tracking.

---

## Discussion

The article highlights the growing privacy dangers posed by AI chatbots compared with prior online tracking methods. Unlike brief search queries, chatbot interactions encourage users to share detailed personal information, enabling nuanced psychological profiles. These profiles can be exploited in unprecedented ways for commercial gain or ideological manipulation.

Examples of current failures and risks include:

- **Grok chatbot leaks:** Users' supposedly private conversations were exposed publicly.
- **Perplexity vulnerabilities:** Hackers could access sensitive personal information.
- **OpenAI's surveillance vision:** Plans for AI that tracks all user activity, online and offline.
- **Anthropic's data usage:** Training AI on user chats by default to improve models.

DuckDuckGo's approach showcases the potential for privacy-respecting AI, demonstrating that protected, anonymous AI interaction is feasible. Without federal regulation, however, bad actors and companies will continue to exploit user data with impunity.

---

## Conclusion

AI surveillance poses a rapidly escalating threat that surpasses previous online privacy concerns. Privacy-protective measures for AI chatbot interactions should be mandated by law, and Congress must act swiftly to avoid normalizing AI tracking and to ensure that privacy-preserving AI tools become the standard. Meanwhile, privacy-focused companies like DuckDuckGo continue to develop tools that let consumers benefit from AI without suffering privacy harms.

---

## Visual

An original cartoon by Dominique Lizaambard (left), updated for AI by AI (right), illustrates how AI-related privacy concerns have evolved.

---

For ongoing insights and updates on AI privacy issues and privacy-first technology solutions, subscribe to Gabriel Weinberg's newsletter.