Meta has announced that it will pause its plans to use data from users in the European Union and the U.K. to train its artificial intelligence systems.
This decision follows significant pushback from the Irish Data Protection Commission (DPC), which oversees Meta’s compliance in the EU. The DPC, acting on behalf of several EU data protection authorities, engaged intensively with Meta before the company agreed to pause its plans. The U.K.’s Information Commissioner’s Office (ICO) also asked Meta to delay its plans until its concerns were addressed.
“The DPC welcomes Meta’s decision to pause its plans,” the DPC stated on Friday. The organization emphasized ongoing cooperation with Meta to address these issues.
GDPR and Privacy Challenges
While Meta has been using user-generated content for AI training in markets like the U.S., the stringent GDPR regulations in Europe have presented hurdles. Last month, Meta began notifying users about upcoming privacy policy changes that would allow it to use public content on Facebook and Instagram for AI training, including interactions, status updates, photos, and captions. Meta argued that this was necessary to accommodate the diverse languages and cultural references in Europe.
However, these changes, set to take effect on June 26, sparked concerns. Privacy advocacy group NOYB filed 11 complaints, arguing that Meta’s plans violated GDPR, particularly its requirement of opt-in consent for data processing.
Meta relied on the “legitimate interests” provision of GDPR to justify its actions, a strategy it has previously used to support targeted advertising. However, this legal basis was contested, especially given the complexity of opting out of data use. Meta claimed to have sent over 2 billion notifications about the changes, but these were easily overlooked among regular notifications, making it difficult for users to be fully aware or take action.
Those who did notice faced a cumbersome objection process: on Facebook’s website, the “right to object” form was buried several steps deep within the privacy settings. This friction likely deterred many users from opting out.
Meta’s Perspective and Future Implications
Meta’s policy communications manager, Matt Pollard, defended the approach, citing a balance between processing public data at scale and respecting user rights. He pointed to a blog post explaining why the company believes “legitimate interests” is the appropriate legal basis.
In response to the DPC’s request, Meta expressed disappointment. Stefano Fratta, Meta’s global engagement director for privacy policy, argued that the pause hinders European innovation and AI development. He maintained that Meta complies with European laws and emphasized the company’s transparency compared with its industry counterparts.
Meta’s situation highlights broader issues in the AI industry, where companies are eager to leverage vast amounts of data under existing regulations. Reddit, for example, stands to earn over $200 million by licensing data to firms like OpenAI and Google. However, these practices have led to significant fines and criticism for using copyrighted content without proper permissions.
Efforts to streamline data use for AI training often bypass the need for explicit user consent, making opting out challenging. Google and OpenAI are working on tools to allow content creators to opt out, with solutions expected in the coming years.
While Meta’s plans in Europe are on hold, the company will likely try again after further consultations with the DPC and ICO, probably with a revised user-permission process.
Stephen Almond, the ICO’s executive director for regulatory risk, emphasized the importance of trust in AI development. He assured ongoing monitoring of major AI developers to ensure the protection of user information rights in the U.K.
See also: Yahoo Prepares AI Summaries For Homepage Following News App Revamp