Brazil follows Europe in blocking Meta from using public posts for AI training.
First in Europe and now in Brazil: Meta’s new privacy policy, which allows it to mine people’s public social media posts to train its artificial intelligence (AI) models, has again been officially blocked.
Brazil’s national data protection authority determined earlier this week that the new policy poses “the imminent risk of serious and irreparable or difficult-to-repair damage to the fundamental rights of the affected data subjects,” the agency said in the nation’s official gazette.
Brazil is one of Meta’s biggest markets, with Facebook alone having around 102 million active users in the country, the agency said in a statement.
Blocked in Europe, too
The social media company has also encountered resistance to its privacy policy update in Europe, where it recently put on hold its plans to use people’s public posts to train AI systems – training that was supposed to begin last week.
The policy changes in Europe would have included publicly shared posts, images, image captions, comments and stories from users over 18 on Facebook and Instagram. It did not extend to private messages.
The Irish Data Protection Commission (DPC) intervened, sending Meta a request on behalf of other European bodies asking the company to delay training its large language models (LLMs) – the kind that power chatbots such as OpenAI’s ChatGPT – on this type of data.
Meta argued that without local data, it would only be able to provide Meta AI users “with a second-rate experience,” with AI products that “won’t accurately understand important regional languages, cultures or trending topics,” but it delayed the rollout nonetheless.
The company would have made Llama, its AI model, and the Meta AI assistant available if it had access to that data.
Meta said its approach in Europe “complies with laws and regulations,” and that it is “more transparent than many of [their] industry counterparts”.
Method ‘complies with privacy laws’
A spokesperson for Meta told the Associated Press in a statement the company is “disappointed” and insists its method “complies with privacy laws and regulations in Brazil”.
“This is a step backwards for innovation, competition in AI development and further delays bringing the benefits of AI to people in Brazil,” the spokesperson added.
Users can opt out, Meta said in that statement.
Despite that option, there are “excessive and unjustified obstacles to accessing the information and exercising” the right to opt out, the agency said in its statement.
Meta did not provide sufficient information to allow people to be aware of the possible consequences of using their personal data for the development of generative AI, it added.
Decision will ‘hurt transparency’ on how data is being used
Hye Jung Han, a Brazil-based researcher for Human Rights Watch, said in an email Tuesday that the regulator’s action “helps to protect children from worrying that their personal data, shared with friends and family on Meta’s platforms, might be used to inflict harm back on them in ways that are impossible to anticipate or guard against”.
The decision regarding Meta will “very likely” discourage other companies from being transparent about their use of data in the future, said Ronaldo Lemos, of the Institute of Technology and Society of Rio de Janeiro, a think-tank.
“Meta was severely punished for being the only one among the Big Tech companies to state clearly and in advance in its privacy policy that it would use data from its platforms to train artificial intelligence,” he said.
The company must demonstrate compliance within five working days of being notified of the decision; the agency set a daily fine of 50,000 reais (€8,330) for failure to do so.