In Davos, leaders voiced concern over the World Economic Forum's top risk for 2024: AI-generated misinformation. OpenAI aims to combat political misuse of its tools, addressing the challenge of deepfakes influencing elections. But traditional social media platforms like Facebook face an uphill battle against deceptive content: deepfake ads featuring UK Prime Minister Rishi Sunak keep resurfacing on Facebook despite takedowns. As the 2024 elections approach, the responsibility for curbing deepfakes' impact lies with social media leaders, not just AI developers.
By Parmy Olson
As global leaders descended on Davos this week, many fretted about the World Economic Forum’s top risk for 2024: AI-generated misinformation.
OpenAI is “quite focused” on stopping the political misuse of tools like ChatGPT, its CEO Sam Altman said at a Bloomberg event at the conference on Tuesday. His company teased new tools for thwarting misinformation in a big US election year. That effort is laudable, but leaders at the Swiss talking shop must remember how significant a role good old-fashioned social media still plays in disrupting elections and democracy itself.
No matter how realistic the details on that fake photo of Joe Biden, or how convincing that phony audio is of a British politician, AI’s real impact will be determined by how far deepfakes spread on platforms such as Facebook, Twitter and TikTok. And that spread will be harder to track than during the previous US elections of 2016 and 2020. This time, the stuff that doesn’t go viral could also be the problem.
On Facebook, for instance, 100 video ads reached tens of thousands of people in the UK through most of December, showing a deepfake video of Prime Minister Rishi Sunak supporting a stock market scam. The video starts with a BBC newsreader claiming that Sunak is earning “colossal sums” of money from a project with Elon Musk, and then cuts to the prime minister at a lectern, earnestly saying he can “vouch for the reliability of this investment platform.”
It’s all a scam, of course. Victims click through to a website and sign up for something called “quantum AI,” before getting rung by a call-center operator who encourages them to send over money in bitcoin, according to Marcus Beard, a former communications officer with the British government turned consultant, who conducted research on the deepfake ads. The scammers call back a few weeks later to tell the victim their investment has grown, and encourage them to put in more. Then they disappear.
Facebook says most of the deepfake ads that Beard found were disabled. The problem is they keep coming. A cursory search for “Sunak” in Facebook’s ad library on Wednesday morning found a handful of new, active ads, including one in which Sunak says his $5 billion quantum AI platform “trades stocks and does it very well.” Within a few hours, those too had been labeled inactive.
“A lot of these ads only got 1,000 to 5,000 impressions,” says Beard, who believes as many as 400,000 people saw the fake Sunak ads in December. “But add that up and it gets big quite quickly.” It’s also only recently that such advertisements have started featuring the British prime minister, he says.
It’s easy to presume nefarious political actors and ideologues are the ones driving misinformation, but often it’s just people trying to make a buck. Yet they can still shape voters’ opinions about someone like Sunak, perhaps deepening any existing suspicion that he is untrustworthy. Ahead of the 2016 US election, a group of teenagers in Macedonia made thousands of dollars in ad revenue from posting fake articles about the Clintons and Donald Trump, including one of their greatest hits, “Pope Francis Shocks World, Endorses Donald Trump for President.” Fake stories favoring Trump were shared 30 million times in the three months leading to the 2016 election, nearly four times the number of pro-Hillary Clinton shares, according to one Stanford University study. Many of those articles went viral on Facebook.
Today, Facebook is de-ranking news stories. A Fox News article posted by your uncle, trailing a long thread of heated comments, would once have sat at the top of your feed; now users tend to see random, entertaining videos from people they don’t know on Reels, Facebook’s clone of TikTok, instead.
These short-form videos are where misinformation can become dispersed and almost impossible to track. “We’re worried about these [deepfakes] being picked up by TikTok influencers,” says Beard. Influencers on TikTok and Reels typically make money through sponsored posts, which can spur them to post provocative content to grow their audience. What happens next is what Beard calls “content laundering,” where the deepfake hops from one social network to another.
Case in point: Last November, a TikTok influencer posted a video of themselves shaking their head at an audio clip of the mayor of London, seemingly unaware the clip was fake. TikTok took the video down — but it found its way to Twitter and Facebook, where it stayed up. (Facebook doesn’t remove deceptive audio of politicians, instead applying warning labels, arguing, wrongly, that this is a better way to fight misinformation.)
Twitter, now known as X, is meanwhile slow to enforce its own policies. It forbids political misinformation, but it chose not to remove a separate, AI-generated clip of British Labour Party leader Keir Starmer’s voice because it wanted “proof” the audio was fake, according to a person with knowledge of the matter. The Starmer audio has been debunked by both leading political parties in the UK and by the independent fact-checking organization Full Fact. The clip remains up on X.
Despite all the talk around the Iowa caucus, this critical year for democracy is only warming up. As more elections get underway, there will be a mad scramble for attention from political parties, influencers, news organizations and online scammers. Facebook made a laudable decision to make its ad library transparent, and Twitter and TikTok should do the same. But Mark Zuckerberg’s company also needs to update its policy on deceptive audio and do a better job of policing deepfake ads that exploit politicians.
The deepfake content we’re seeing now will soon enough turn into a flurry. Keeping a lid on that won’t fall on OpenAI’s Altman, but on Zuckerberg, Musk and others overseeing the social platforms teeming with fakery.
© 2024 Bloomberg L.P.