Microsoft-owned LinkedIn has paused training its generative AI models on data from UK users after the UK's data protection regulator, the Information Commissioner’s Office (ICO), raised concerns.
On Friday, the ICO confirmed that LinkedIn had “suspended” AI model training on data from UK users, adding that it was “pleased” with the move.
“We are pleased that LinkedIn has reflected on the concerns we raised about its approach to training generative AI models with information relating to its UK users,” said the ICO’s Executive Director, Stephen Almond.
LinkedIn said it welcomes the opportunity to engage further with the ICO.
Many big tech firms, including LinkedIn, Meta, and X (formerly known as Twitter), seek to use content posted on their platforms to help develop their generative AI tools’ capabilities.
In a statement on the platform, LinkedIn said that user control over personal data is a top priority for the company. To address the ICO’s concerns, it has now given UK users the option to opt out of having their data used for AI model training.
A LinkedIn spokesperson said that the company believes users should have control over their data.
“We’ve always integrated automation in LinkedIn products, and we’ve made it clear that users can choose how their data is used,” the spokesperson added.
LinkedIn users post a wide range of content on the platform, from details of their personal lives to job applications and advice for others seeking similar roles.
Generative AI tools, such as chatbots like OpenAI’s ChatGPT or image generators like Midjourney, rely on these vast troves of user-generated content to train models that produce more accurate, human-like output.