LinkedIn’s AI Data Policy: A Case for Better Transparency and User Control

2024-10-02

By Kevin Shepherdson


In a move that raised eyebrows across its user base, LinkedIn introduced a new setting in mid-September that allows user data to be utilised for generative AI training. The catch? Users (except those in certain jurisdictions such as the EU and UK) were opted in by default, with little to no notification, prompting concerns about privacy and data usage. This decision has sparked a broader discussion about transparency, user consent, and how social media companies handle personal data.

Source: LinkedIn Preferences

LinkedIn’s Approach of Opting Users In by Default

LinkedIn’s decision to automatically opt users into its AI data training programme, except for those in the EU, EEA, UK, and Switzerland, has raised concerns among users, primarily because many felt blindsided by the lack of direct notification.

From a data protection standpoint, this raises some legitimate concerns. Under the PDPC’s Advisory Guidelines on the Use of Personal Data in AI Recommendation and Decision Systems, users must be notified when their personal data is being used in AI systems, and their consent should be obtained unless exceptions apply.

In this case, while LinkedIn’s documentation on its website describes varied uses for the data that trains its generative AI models, such as content generation, feedback and improvement, and safety, security and compliance, not all of these purposes may fall under an exception to consent.

LinkedIn’s failure to notify users in Singapore about this policy shift may therefore have fallen short of meeting these principles.

Moreover, this approach of opting users in by default gives little consideration to user autonomy. Data usage for purposes like AI training, which involves new technologies and novel risks, should be treated with extra care. LinkedIn could have taken a more user-friendly approach by providing transparent, direct communication and offering an opt-in option rather than an opt-out.

Interestingly, the opt-out does not affect “LinkedIn or its affiliates' development of AI models … used to personalise your LinkedIn experience”.


Source: LinkedIn Help

Retention and Other Concerns

Another potential issue involves how long user data, once integrated into LinkedIn’s AI system, is retained.  

Data, once used to train an AI model, may not be easily disgorged. Unless LinkedIn offers a clear explanation of how user data can be removed from its AI systems, questions will remain about how long that data will linger, even after a user has deleted their account and left the platform. Users may have to contend with the possibility of their data being permanently retained on LinkedIn’s platform, or worse, potentially being surfaced to other users with a few well-crafted prompts.

This of course raises concerns about data retention (under the current PDPA, an organisation may retain personal data only for as long as it is necessary for a legal or business purpose). In addition, users will have to grapple with their personal data being used in ways they did not explicitly agree to (an unauthorised purpose), and even with data accuracy, where the data initially used to train the AI model may have become outdated.

A Personal Decision to Opt Out

As a LinkedIn user, I have chosen to opt out of having my data used for AI training. This decision stems from a few key concerns:

1. Lack of Clear Notification: LinkedIn did not effectively inform users about this major change, eroding trust.

2. Data Usage and Privacy: It is not fully clear how personal and professional data will be used in AI training, or what the implications are.

3. Autonomy: I prefer to take an active role in deciding how my data is used, and opting out of helping LinkedIn train its AI gives me the control I need over my professional identity.

For me, transparency and trust are non-negotiable, especially when it comes to how my personal data is used. Until LinkedIn provides clearer details about data usage, control options, and tangible user benefits from this AI training, I won’t be opting in.

Social Media Platforms Should Update Data Collection Policies

Social media platforms hold an immense amount of their users’ personal data. Many will recall the backlash WhatsApp faced some years ago when it announced that users’ personal data would be shared with its parent company Facebook (now Meta).

Social media companies must handle data privacy with care, ensuring they demonstrate transparency and accountability. There are several best practices they should adopt when updating their data collection policies:

1. Proactive Transparency: Users should be informed in clear and simple terms about how their data will be used, especially when new technologies like AI are involved. According to Singapore’s Model AI Governance Framework, companies are encouraged to ensure consumers understand that AI-enabled features are in use. LinkedIn’s current approach may not sufficiently meet this standard.

2. Opt-In, Not Opt-Out: For significant changes, especially those involving the use of personal data in AI training, an opt-in model should be the norm. This allows users to make informed decisions rather than being automatically enrolled. Opt-out mechanisms, while easier for companies, may not always reflect genuine user consent.

3. User Education: Companies should provide accessible, easy-to-understand information about how their AI models are trained and what the implications are. Some users might opt in willingly if they understood the potential benefits these companies offer through their AI models, and had reassurance about data protection.

4. Global Consistency: Social media platforms should apply the highest data protection standards worldwide, not just in regions with strict regulations like the EU. LinkedIn’s decision to exclude users in GDPR-regulated regions from its AI training programme highlights an inconsistency in privacy protection.

5. Engagement with Regulators: Engaging with data protection authorities beforehand can prevent misunderstandings and ensure best practices are followed. In LinkedIn’s case, it suspended data usage for UK users only after concerns were raised by the Information Commissioner’s Office (ICO).

6. Published Audits and Accountability: Regular audits of data usage practices, especially concerning AI training, should be conducted and published. This would provide users with reassurance and keep companies accountable.

Striking a Balance Between Innovation and Privacy

Source: Straits Interactive


As AI continues to evolve, it’s crucial for companies like LinkedIn to strike the right balance between innovation and user privacy. Transparency, informed consent, and user control should be at the heart of data usage policies. While AI holds incredible potential, maintaining trust is key. Companies must remember that without user trust, even the most advanced technology may face pushback.

LinkedIn’s recent changes serve as a reminder for all social media platforms: users want to be part of the conversation, not passive participants in data experiments.

