AI-driven deceptions: The rise of surreal advertising and deepfake influencers – Parmy Olson

In the age of generative artificial intelligence, advertising blurs the line between embellishment and fabrication. Instacart, DoorDash, and others employ AI-generated imagery, showcasing unrealistic culinary creations. A Willy Wonka exhibit in Glasgow promised a fantastical experience through AI-generated posters, but attendees encountered a stark reality. Deepfakes extend beyond entertainment; influencers like Olga Loiek find their likenesses used to endorse products without consent. As regulators and platforms grapple with this emerging issue, disclaimers and regulations may offer a way forward in navigating the surreal landscape of AI-generated marketing.


By Parmy Olson

Advertising has always walked a thin line between embellishment and fabrication. In the new age of generative artificial intelligence, the latter is becoming easier. Making an online ad no longer requires the careful staging of well-lit photographs, because images can now be generated and enhanced in fantastical ways. Consumers need to sharpen their wits as we move from unnaturally juicy hamburgers to depictions of people and food that aren't physically plausible. Consider the bizarre pasta concoction that Instacart, the grocery-delivery service, used in its recent marketing.

Instacart has since deleted its Frankenstein's monster of foods and recipes that don't (or probably shouldn't) exist, including fare like "watermelon popsicles with chocolate chips," which appears to have been conjured with new image-generation tools. But it wasn't alone. Restaurants that sell food exclusively through delivery apps like DoorDash and Just Eat Takeaway.com NV's Grubhub have also used images of unidentifiable breaded objects atop their pasta, according to 404 Media.

Topping them all was a recent Willy Wonka exhibition in Glasgow, Scotland, whose AI-generated posters suggested ticket holders would stroll through a vivid world of ceiling-high lollipops and chocolate bars. They instead entered a bleak, grey warehouse scattered with a few cheap props.

Generative AI has allowed for even more sinister marketing, something Olga Loiek found out the hard way last December. The 20-year-old student was dabbling in the art of being a YouTube influencer when she discovered dozens of video advertisements of her hawking candy on Chinese social media sites. Loiek doesn’t speak the language, but her unauthorized likeness did.

A raft of other influencers and celebrities have been cloned to endorse everything from language apps to self-help courses, all without their permission. It's surprising, though, that Loiek was picked to front a promotion at all: she was relatively green on YouTube, having posted only eight videos in the month before the deepfaked clips started cropping up. Loiek thinks her cloners may have been drawn to her "Slavic" looks as a way to appeal to Chinese consumers who support Russia. "This audience might like my avatar… and in the end they're more likely to buy the product," she says. The deepfakes, which she says numbered in the hundreds, found their way to the Chinese Instagram-style platform Xiaohongshu and the video-sharing site Bilibili. Here's an example:

Xiaohongshu: "Chinese is really so difficult"

Loiek’s efforts to report the videos to both companies went nowhere. (When she posted a YouTube video about her experience, some of the deepfakes finally started coming down.) Scroll through Xiaohongshu long enough and you’ll find many other videos of suspiciously artificial influencer promotions. And the issue isn’t limited to Chinese apps. Last year, ByteDance Ltd.’s TikTok hosted an ad in which podcaster Joe Rogan and Stanford University neuroscientist Andrew Huberman were cloned to sell supplements for men.

History is littered with innovations that were exploited by unscrupulous marketers. The telephone opened the floodgates to robocalls, and email to spam. Generative AI seems to have opened the door to a new era of fantasy, typified by alien-looking shellfish.

It is bad enough for people like Loiek to have their identities stolen and publicized without permission. But even low-level fakery, like the unappealing, inauthentic food, poses a new challenge for consumers. One way to address the problem is for consumers to become far more skeptical of ads on web platforms. Social media networks like TikTok and Meta Platforms Inc.'s Instagram will need to improve their methods of detection, and regulators should step in.

The UK's main advertising regulator banned two L'Oreal SA ads in 2011 over complaints of "excessive airbrushing" on its models. But that was the era of Photoshop. Now the Advertising Standards Authority is carefully reviewing the use of generative AI, a spokesman for the regulator tells me, which could lead to new guidelines for advertisers this year. The technology shouldn't be used, for example, to exaggerate a product's efficacy, the spokesman said.

The US Federal Trade Commission says it’s also “focusing intensely” on the problem. 

Disclaimers could be one way to tackle the issue. Back in 2021, the Norwegian government amended its laws so that advertisers and influencers had to disclose their use of digitally altered images of people. The goal was to target unrealistic beauty standards, but similar forced disclaimers on AI-generated ads could increase public awareness of entirely conjured “photos” or “videos.”

Of course, policymakers can’t do much to stop whoever cloned Olga Loiek. That seems to be the crux of the problem. “I will keep doing it,” she says of her nascent YouTube channel. “But I think there has to be some regulation in place. I just don’t know who to reach out to.”
