On August 28, 2025, Anthropic announced updates to its Consumer Terms and Privacy Policy for Claude users. The changes introduce an opt-in choice to allow data from chats and coding sessions to be used to improve Claude and to strengthen safeguards against misuse. The update applies to the Claude Free, Pro, and Max plans (including Claude Code) but not to Claude for Work, Claude Gov, Claude for Education, or API usage (e.g., via Amazon Bedrock or Google Vertex AI).

Participation is voluntary: the choice can be made during signup or at any time in Privacy Settings. Existing users will receive an in-app notification and have until September 28, 2025 to decide; if they accept, the new terms take effect immediately and apply to new or resumed chats and coding sessions. After September 28, users must select a model-training preference to continue using Claude.

Users who allow their data to be used for training are subject to an extended retention period of five years, covering both training data and any feedback they provide about Claude's responses. Users who opt out keep the current 30-day retention period.

Anthropic emphasizes privacy protections: sensitive data is filtered or obfuscated, and user data is not sold to third parties. The data-training preference can be changed at any time in Privacy Settings.

An accompanying FAQ explains what is changing, the deadlines, how to change preferences, what happens when training data is enabled or disabled, and why the retention period is being extended. The updates are intended to improve Claude's safety, accuracy, and capabilities (coding, analysis, reasoning) while giving users ongoing control over their data.