Meta says it has detected coordinated social media campaigns that used AI-generated content, including one supporting Israel’s actions in its war with Hamas in Gaza.
Meta has revealed a batch of “covert influence operations” it has disrupted across social media and claims some of these campaigns likely used AI to spread deceptive content.
The tech giant revealed details of six of these operations in its latest Adversarial Threat Report, a quarterly update on how the company detects and counters security threats. Five of the operations were based in Israel, China, Iran, Croatia and Bangladesh, while the sixth was of “unknown origin”.
Meta describes these campaigns as “coordinated inauthentic behaviour” or CIB, a concept the company first used publicly more than six years ago when it shared details on a Russian covert operation. The company describes CIB as “coordinated efforts to manipulate public debate for a strategic goal” and said fake accounts are central to these operations.
“In each case, people coordinate with one another and use fake accounts to mislead others about who they are and what they are doing,” Meta said in the report. “When we investigate and remove these operations, we focus on behaviour, not content – no matter who’s behind them, what they post or whether they’re foreign or domestic.”
The report noted that some of these operations used generative AI to bolster their deception, including AI-generated video news readers, text and images. Meta claims it has not detected AI-boosted tactics that “would impede our ability to disrupt the adversarial networks behind them”.
“We found and removed many of these campaigns early, before they were able to build audiences among authentic communities,” Meta said.
“While we continue to monitor and assess the risks associated with evolving new technologies like AI, what we’ve seen so far shows that our industry’s existing defences, including our focus on behaviour (rather than content) in countering adversarial threat activity, already apply and appear effective.”
Campaign details
In the case of the Israel network, Meta said the campaign included comments praising the country’s handling of its war with Hamas in Gaza. These comments appeared below posts from global news organisations and US lawmakers.
“This network’s accounts posed as locals in the countries they targeted, including as Jewish students, African Americans and ‘concerned’ citizens,” Meta said. “Their comments included links to the operation’s websites and were often met with critical responses from authentic users calling them propaganda.”
Meta said this network primarily targeted audiences in the US and Canada. The company attributed the campaign to Stoic, a Tel Aviv-based political marketing firm, and said it had sent the firm a cease-and-desist letter.
The report also details an Iran-based network focused on Israel’s war with Hamas in Gaza and a China-based network that targeted the global Sikh community. The Bangladesh- and Croatia-based networks both targeted domestic audiences in their own countries, while the network of unknown origin targeted audiences in Moldova and Madagascar.
The rise of AI content
This is not the first time AI-generated content has been used to support deceptive online campaigns. A report by Microsoft earlier this year claimed threat actors linked with the Chinese government used AI-generated content in attempts to “influence and sow division” in multiple countries.
With the rise of powerful chatbots and deceptive deepfake content, some experts believe AI could be used to influence elections. A study last year also suggested that AI models can convince people of false information more effectively than humans can.