Anthropic will start training its AI models on chat transcripts

You can choose to opt out.

Anthropic will start training its AI models on user data, including new chat transcripts and coding sessions, unless users choose to opt out. It's also extending its data retention policy to five years, again for users who don't opt out.
All users will have to make a decision by September 28th. For users who click "Accept" now, Anthropic will immediately begin training its models on their data and retaining that data for up to five years, according to a blog post published by Anthropic on Thursday.
The setting applies to "new or resumed chats and coding sessions." Even if you agree to let Anthropic train its AI models on your data, it won't do so with previous chats or coding sessions that you haven't resumed. But if you do continue an old chat or coding session, all bets are off.
The updates apply to all of Claude's consumer subscription tiers, including Claude Free, Pro, and Max, "including when they use Claude Code from accounts associated with those plans," Anthropic wrote. But they don't apply to Anthropic's commercial usage tiers, such as Claude Gov, Claude for Work, Claude for Education, or API use, "including via third parties such as Amazon Bedrock and Google Cloud's Vertex AI."
New users will have to select their preference via the Claude signup process. Existing users must decide via a pop-up, which they can defer by clicking a "Not now" button, though they will be forced to make a decision on September 28th.
It's worth noting, though, that many users may quickly hit "Accept" without reading what they're agreeing to.
The pop-up that users will see reads, in large letters, "Updates to Consumer Terms and Policies," and the lines below it say, "An update to our Consumer Terms and Privacy Policy will take effect on September 28, 2025. You can accept the updated terms today." There's a big black "Accept" button at the bottom.
In smaller print below that, a few lines say, "Allow the use of your chats and coding sessions to train and improve Anthropic AI models," with an on/off toggle next to it. It's automatically set to "On." Presumably, many users will immediately click the large "Accept" button without changing the toggle switch, even if they haven't read it.
If you want to opt out, you can flip the toggle to "Off" when you see the pop-up. If you already accepted without realizing it and want to change your decision, navigate to Settings, then the Privacy tab, then the Privacy Settings section, and toggle the "Help improve Claude" option to "Off." Consumers can change their decision at any time via their privacy settings, but the new choice only applies to future data; you can't take back data the system has already been trained on.
"To protect users' privacy, we use a combination of tools and automated processes to filter or obfuscate sensitive data," Anthropic wrote in the blog post. "We do not sell users' data to third parties."