Deepfakes: Blurring reality in the age of AI

As artificial intelligence advances, the line between reality and fiction blurs, thanks to the rise of deepfake technology. Originally confined to adult videos, it’s now used to spread lies and tarnish reputations. By training algorithms on real footage, deepfakes convincingly blend faces and voices into new contexts. The technology’s accessibility skyrocketed when open-source code surfaced in 2017. With vast content libraries, deepfakes grow more convincing, raising concerns about financial market manipulation, political destabilisation, and privacy violations. Detection remains a challenge, but efforts to combat deepfakes are underway, offering hope amidst the threat to our digital reality.

How Faking Videos Got So Easy and Why It’s a Threat

By Nate Lanxon

(Bloomberg) — Now that artificial intelligence allows anyone with a smartphone to conjure up lifelike images and sound from seemingly nothing, it’s getting harder to tell if what you see and hear online is reality or fiction. At first, so-called deepfake technology was mostly used in adult videos where a celebrity’s face is mapped onto the body of a porn actor without their consent. Increasingly, however, it is being used to spread falsehoods and damage reputations: intelligence services spread fake clips of leaders of rival nations to discredit them; politicians publish doctored videos of their opponents slurring their words or saying things they never said. Governments alarmed at the potential for industrial-scale disinformation that could undermine democracy have begun to fight back.

1. How are deepfakes made?

While manipulation of digital files using Photoshop and other apps is nothing new, deepfakes are accomplished using a form of AI. An algorithm is trained to recognize patterns in real video recordings of a particular person, a process known as deep learning. It’s then possible to swap an element of one video, such as the person’s face, into another piece of content. The manipulations are most misleading when combined with voice-cloning technology, which breaks down an audio clip of someone speaking into half-syllable chunks that can be reassembled into new words that appear to be spoken by the person in the original recording. It’s the same method that’s used to create voice assistants like Apple’s Siri and Amazon’s Alexa. 
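
To make the mechanics concrete, here is a minimal sketch of the shared-encoder, two-decoder autoencoder design behind early face-swap tools, written in Python with PyTorch (an assumption of this sketch, not something named in the article). The class names, layer sizes and 64x64 crop are illustrative; real systems add face detection, alignment, blending and much larger networks.

```python
# Minimal face-swap autoencoder sketch: one shared encoder, one decoder
# per identity. Illustrative only; not any production tool's code.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 face crop into a latent code shared by both people."""
    def __init__(self, latent=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Rebuilds a face from the latent code; each person gets their own decoder."""
    def __init__(self, latent=256):
        super().__init__()
        self.fc = nn.Linear(latent, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),  # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(), # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

# Training would teach each decoder to reconstruct its own person's faces.
# At inference, encoding person A and decoding with B's decoder transfers
# A's pose and expression onto B's face.
encoder, decoder_a, decoder_b = Encoder(), Decoder(), Decoder()
face_a = torch.rand(1, 3, 64, 64)      # stand-in for a real aligned face crop
swapped = decoder_b(encoder(face_a))   # A's expression, rendered as B
print(swapped.shape)                   # torch.Size([1, 3, 64, 64])
```

Because the encoder is trained on both people, it learns features common to any face, such as pose, expression and lighting, which is roughly what makes the decoder swap work.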

2. How did deepfake technology take off?

Motherboard, a Vice publication, reported in 2017 that a Reddit user called “deepfakes” had made an algorithm for creating fake videos publicly available as open-source code. Previously, the technology was the domain of academics and researchers, but now anyone could use it. Reddit banned the user “deepfakes,” but the practice spread. The launch of OpenAI’s Dall-E in 2021 and ChatGPT in 2022 took the possibilities to a new level. Today’s AI media generation tools can produce text, photorealistic images and — increasingly — video, just from a few keywords tapped into an instant messaging-like interface.

3. How have deepfakes progressed? 

The bigger the library of content a deep-learning algorithm is fed, the more realistic the fake can be. Apple recorded 10 to 20 hours of speech to create Siri. Actor-director Jordan Peele made a minute-long deepfake in 2018 appearing to show former US President Barack Obama using an obscenity to refer to his successor, Donald Trump. Peele imitated Obama’s voice and used 56 hours of sample video recordings of the former president. Those sample sizes are infinitesimal compared with what AI companies are now applying their new tools to: the entire corpus of material freely available on the web, from YouTube to Wikipedia to stock image libraries. The simplest way to understand the difference this makes is to refer back to that viral Obama clip: A person had to manipulate a video that already existed and provide a real vocal performance; today, someone can simply ask a machine to create a video of the former president and it will appear.

4. What are some examples of deepfakes?

  • In May, an image purportedly of the Pentagon on fire circulated online, sending US stocks into a brief dip. Experts said the picture, showing a pillar of smoke next to the military complex, had the hallmarks of being generated by AI.
  • In March, social media users shared AI-generated images that appeared to show Trump being arrested by New York City law enforcement. The person who created many of the images circulating on social media confirmed they were produced using the AI tool Midjourney.
  • In February, manipulated audio spread online claiming that Nigerian presidential candidate Atiku Abubakar was planning to rig that month’s vote.
  • In 2022, a minute-long video emerged online appearing to show Ukrainian President Volodymyr Zelenskiy telling his soldiers to lay down their arms and surrender to Russia.
  • Former US House Speaker Nancy Pelosi appeared to be slurring her words in a doctored video in 2019 that circulated widely on social media.

5. What’s the danger here?

While many deepfakes produced today are still quite shoddy and easy to detect, the fear is that they will eventually become so convincing that it’s impossible to distinguish the real from the fabricated. Imagine fraudsters manipulating stock prices by producing fake videos of chief executives issuing corporate updates; or falsified videos depicting a presidential candidate molesting children, a police chief inciting violence against a minority group, or soldiers committing war crimes. High-profile individuals such as politicians and business leaders are especially at risk, given how many recordings of them are in the public domain. For ordinary people, especially women, the technology makes revenge porn possible even if no actual naked photo or video exists. Once a video goes viral on the internet, it’s almost impossible to contain. An additional concern is that spreading awareness about deepfakes will make it easier for people who truly are caught on tape doing or saying objectionable or illegal things to claim that the evidence against them is bogus. Some people are already claiming the deepfake defense in court, saying video material used against them may have been manufactured.

6. How do you spot a deepfake?

The kind of machine learning that produces deepfakes can’t easily be reversed to detect them. Researchers have identified clues that might indicate a video is inauthentic — for example, if the speaker hasn’t blinked for a while, or seems slightly jerky — but such details could easily slip a viewer’s notice. By enhancing the colour saturation on a video of a person, it’s possible to detect his or her pulse from the almost invisible change in facial skin; a video stitched together from a mishmash of clips would show irregular or non-existent blood flow.
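
As a rough illustration of that pulse idea, known as remote photoplethysmography and the principle behind detectors such as Intel’s FakeCatcher (though not their actual code), the sketch below looks for a dominant frequency at plausible heart rates in a face’s green channel. It assumes NumPy and a clip already cropped to the face; the band limits and array shapes are illustrative.

```python
# Pulse-based deepfake check, sketched: a real face's skin colour flickers
# at the heart rate; stitched-together fakes often show no clean peak.
import numpy as np

def dominant_pulse_hz(frames, fps=30.0):
    """Return the strongest frequency in the face's green-channel signal.

    `frames` is a float array of shape (n_frames, height, width, 3),
    already cropped to the face region.
    """
    green = frames[:, :, :, 1].mean(axis=(1, 2))  # mean green value per frame
    green = green - green.mean()                  # drop the constant offset
    spectrum = np.abs(np.fft.rfft(green))
    freqs = np.fft.rfftfreq(len(green), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)        # 42 to 240 beats per minute
    return freqs[band][np.argmax(spectrum[band])]

# Synthetic demo: a faint 1.2 Hz (72 bpm) flicker buried in noise is recovered.
t = np.arange(300) / 30.0                         # 10 seconds at 30 fps
clip = np.random.rand(300, 8, 8, 3)
clip[:, :, :, 1] += 0.05 * np.sin(2 * np.pi * 1.2 * t)[:, None, None]
print(dominant_pulse_hz(clip))                    # prints ~1.2
```

A shipping detector would do far more, for example tracking the face across frames and comparing blood-flow patterns between facial regions, but the sketch shows why a genuine face leaves a periodic trace that a composite often lacks.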

7. Is anything being done about it?

A handful of startups such as Netherlands-based Sensity AI and Estonia-based Sentinel are developing deepfake detection technology, as are many of the big tech companies. Intel Corp. launched its FakeCatcher product last November as part of its work in responsible AI. It can detect fakes with 96% accuracy, according to the company. The US Defense Department is also developing tools to counter deepfakes, and it’s a key concern of regulators during their many meetings with AI industry bosses. With the federal government deadlocked, state legislatures have been quicker to advance laws that aim to tackle the immediate harms of AI. Nine states have enacted laws that regulate deepfakes, mostly in the context of pornography and election influence, and at least four other states have bills at various stages of the legislative process. The European Union’s AI Act, which is still being negotiated, would require companies to label deepfakes as such.

8. Are there benevolent uses?

Yes. Scottish firm CereProc creates digital voices for people who lose their own through disease, and vocal cloning could serve an educational purpose by recreating the sound of historical figures. A project at North Carolina State University synthesized one of Martin Luther King Jr.’s unrecorded speeches. CereProc created a version of the last address written by President John F. Kennedy, who was assassinated before delivering it. The John F. Kennedy Library rejected the recording, though, saying it didn’t meet its guidelines. 

— With assistance from Jillian Deutsch, Diana Li and Isaiah Poritz.

The Reference Shelf

  • QuickTake explainers on regulating AI in the US, why AI is the next flashpoint in the US-China rivalry, a cheat sheet to AI buzzwords, and the tech behind generative AI tools.
  • Deepfakes also pose a threat to financial markets, says Securities and Exchange Commission Chair Gary Gensler.
  • How Google and Microsoft are supercharging AI deepfake pornography.
  • Bloomberg Law says US regulators are wrestling with how to stop deepfakes affecting the 2024 presidential election.
  • A Bloomberg video about Lyrebird, the AI company that puts words in your mouth.

© 2023 Bloomberg L.P.
