
LinkedIn, the professional networking platform owned by Microsoft, is preparing to roll out a new data usage policy under which user-generated content will be used to train artificial intelligence models, beginning November 3, 2025. The policy, which affects users in the United States, the European Union, the United Kingdom, and Switzerland, has already been communicated to members via email. According to LinkedIn, the change is intended to improve key AI-powered services, such as job matching, recruiter tools, and AI-assisted content generation features, which the platform has been steadily expanding over the past year.
The company clarified that only data from publicly visible sources, such as open profiles and shared posts, will be used in this process. Sensitive information, including private messages and content with restricted visibility, will not be included. Still, LinkedIn has made participation the default by automatically enabling the new "Data for Generative AI Improvement" setting on all accounts. For users concerned about data privacy, opting out is possible, but it requires manual action in account settings.
To opt out, members can open LinkedIn, navigate to Settings > Data Privacy > Data for Generative AI Improvement, and toggle the option off, or use a direct link provided by the platform. LinkedIn stresses that once data has been processed by its AI systems, it cannot be retroactively removed from training sets, so opting out only prevents future use. This approach underscores an ongoing tension between companies developing AI features and users who wish to retain control over their personal information. As AI adoption continues to accelerate across professional and social platforms, users are increasingly being placed in the position of having to actively manage how their data is used.

