The Facebook and Instagram owner says its defences were able to prevent AI-driven misinformation operations from gaining an online foothold
Despite fears that artificial intelligence (AI) could influence the outcome of elections around the world, the United States technology giant Meta said it detected little impact across its platforms this year.
That was in part due to defensive measures designed to prevent coordinated networks of accounts, or bots, from grabbing attention on Facebook, Instagram and Threads, Meta president of global affairs Nick Clegg told reporters on Tuesday.
“I don’t think the use of generative AI was a particularly effective tool for them to evade our trip wires,” Clegg said of actors behind coordinated disinformation campaigns.
In 2024, Meta says it ran several election operations centres around the world to monitor content issues, including during elections in the US, Bangladesh, Brazil, France, India, Indonesia, Mexico, Pakistan, South Africa, the United Kingdom and the European Union.
Most of the covert influence operations it has disrupted in recent years were carried out by actors from Russia, Iran and China, Clegg said, adding that Meta took down about 20 “covert influence operations” on its platforms this year.
Russia was the number one source of those operations, with 39 networks disrupted in total since 2017, followed by Iran with 31 and China with 11.
Overall, the volume of AI-generated misinformation was low and Meta was able to quickly label or remove the content, Clegg said.
That was despite 2024 being the biggest election year ever, with some 2 billion people estimated to have gone to the polls around the world, he noted.
“People were understandably concerned about the potential impact that generative AI would have on elections during the course of this year,” Clegg told journalists.
In a statement, he said that “any such impact was modest and limited in scope”.
AI content, such as deepfake videos and audio of political candidates, was quickly exposed and failed to sway public opinion, he added.
In the month leading up to Election Day in the US, Meta said it rejected 590,000 requests to generate images of President Joe Biden, then-Republican candidate Donald Trump and his running mate, JD Vance, Vice President Kamala Harris and Governor Tim Walz.
In an article in The Conversation, titled “The apocalypse that wasn’t”, Harvard academics Bruce Schneier and Nathan Sanders wrote: “There was AI-created misinformation and propaganda, even though it was not as catastrophic as feared.”
However, Clegg and others have warned that disinformation has moved to social media and messaging websites not owned by Meta, especially TikTok, where some studies have found evidence of fake AI-generated videos featuring politically related misinformation.
Propaganda on social platforms such as Facebook was not as ‘catastrophic’ as feared, academics say [Michael M Santiago/Getty Images/AFP]
Meta has itself been the source of public complaints on various fronts, caught between accusations of censorship and the failure to prevent online abuses.
Earlier this year, Human Rights Watch accused Meta of silencing pro-Palestine voices amid increased social media censorship since October 7.
Meta says its platforms were mostly used for positive purposes in 2024, to steer people to legitimate websites with information about candidates and how to vote.
While it said it allows people on its platforms to ask questions or raise concerns about election processes, “we do not allow claims or speculation about election-related corruption, irregularities, or bias when combined with a signal that content is threatening violence”.
Source: Al Jazeera and news agencies