Elon Musk’s social media platform, X (formerly Twitter), is facing intense scrutiny across Europe after it was found to be quietly using user data to train AI models without proper consent—a direct challenge to the stringent data protection rules of the European Union (EU).
The controversy erupted after users in the EU noticed that X had begun harvesting their post data to train its AI chatbot, Grok, without clear notification or consent. This move has triggered formal complaints in nine EU countries, raising questions about the company’s compliance with the General Data Protection Regulation (GDPR).
The GDPR, Europe’s landmark privacy law, imposes stringent requirements on how companies handle personal data, including obtaining clear consent for data processing. Failure to comply can lead to significant penalties, with fines reaching up to 4% of global annual revenue or €20 million, whichever is higher. In X’s case, the company could face substantial financial and reputational damage if regulators find it has violated the rules.
Max Schrems, the prominent privacy advocate and chairman of noyb (None of Your Business), a non-profit focused on digital rights, has been leading the charge against X’s practices. Schrems argues that X’s reliance on “legitimate interest” as a legal basis for processing personal data is insufficient under EU law. He points out that asking for consent is not just a legal obligation, but a basic measure that companies routinely implement for various other uses of data.
“Companies that interact directly with users simply need to show them a yes/no prompt before using their data,” Schrems commented. “They do this regularly for lots of other things, so it would definitely be possible for AI training as well.”
While the Irish Data Protection Commission (DPC), X’s lead regulator in the EU, has initiated legal action, Schrems and noyb argue that more needs to be done. Notably, users have no way to ensure that their data, once absorbed into a trained AI model, can be deleted—a direct challenge to fundamental GDPR rights such as the right to be forgotten.
The broader implications of this case extend beyond X. It shines a spotlight on the growing tension between tech companies’ ambitions to develop cutting-edge AI technologies and the need to protect user privacy. With over 60 million people in the EU potentially affected by X’s actions, the stakes are high, not just for Musk’s platform but for any tech company operating in Europe.
In recent months, Meta, Facebook’s parent company, faced a similar challenge when it paused plans to process user data for AI training following regulatory intervention and privacy complaints. These incidents underscore a critical reality for tech companies: the era of unregulated data use is over, particularly in regions with strong privacy laws like the EU.
For companies operating in AI, these developments serve as a reminder that compliance with privacy regulations is not optional—it’s a business imperative. The balance between innovation and regulation will be key to shaping the future of AI, and companies that fail to prioritize user consent risk not only regulatory penalties but also the trust of their users.
As the situation unfolds, it will be crucial for tech leaders to closely monitor the outcomes of these privacy challenges. The decisions made in these cases could set new precedents for how AI and data privacy intersect, with implications for industries ranging from social media to healthcare.
In the age of AI, user data is invaluable. But without a foundation of trust and transparency, that data may become a liability rather than an asset.