Another major tech company is once again training AI models using user data by default, without prior notification to users.
Similar to what Meta and X’s Grok have done, LinkedIn is automatically opting its users into AI training, which also extends to models used by unspecified “affiliates.”
As a Microsoft-owned platform, LinkedIn supplies user data for AI training by its parent company, which also has significant ties to OpenAI, the developer of ChatGPT.
However, following press inquiries after the initial announcement, LinkedIn clarified that while user data will not be used to train OpenAI's base models, it will be shared with Microsoft for use in Microsoft's own OpenAI-based AI software.
LinkedIn explained: “The artificial intelligence models that LinkedIn uses to power generative AI features may be trained by LinkedIn or another provider. For example, some of our models are provided by Microsoft’s Azure OpenAI service.”
Greg Snapper, a LinkedIn spokesperson, added: “We leverage OpenAI models made available through Microsoft’s Azure AI Service like other customers or users of that API service.
And when we use the models provided through that service, we do not send data back to OpenAI for the training of their models.”
Mariano delli Santi, legal and policy officer at the Open Rights Group, a U.K.-based privacy advocacy group, criticized the default enrollment, arguing that opt-out schemes are inadequate to protect users' rights.
LinkedIn also addressed privacy concerns, stating that, when training its generative AI models, it aims to limit the use of personal data in the datasets by employing privacy-enhancing technologies to redact or remove personal data.
Additionally, it confirmed that data from the European Union (EU), European Economic Area (EEA), or Switzerland is not being used to train “content-generating AI models.”
The EEA encompasses all 27 EU member states, as well as Iceland, Liechtenstein, and Norway.
For users in regions where LinkedIn has begun using personal data for AI training, there is a straightforward way to opt out. By visiting the data privacy section in settings, users can toggle off the option labeled “Use my data for training content creation AI models.”
Despite this, privacy advocates remain concerned about LinkedIn's decision to automatically enroll users in the training of several AI models.
Delli Santi said: “The opt-out model proves once again to be wholly inadequate to protect our rights:
the public cannot be expected to monitor and chase every single online company that decides to use our data to train AI. Opt-in consent isn’t only legally mandated but a common-sense requirement.”
He urged the U.K.’s privacy regulators to take immediate action against LinkedIn and other companies that behave as though they are above the law.