🔒 Deepfake tech means Zoom worries have gone from “are you on mute” to “are you real” – Parmy Olson

In a new frontier of corporate fraud, deepfakes have infiltrated Zoom calls, with a finance worker in Hong Kong unwittingly transferring $25 million to scammers posing as colleagues. As AI-powered video tools advance, hackers exploit real-time deepfakes, challenging established security measures. Experts advise using visual cues, monitoring lip sync, employing multi-factor authentication, opting for secure channels, updating software, and avoiding unknown platforms to combat deepfake threats. The incident underscores the urgent need for heightened awareness as skepticism becomes crucial in navigating the evolving landscape of deceptive video technology.


By Parmy Olson

Is the boss who’s giving you an order real or just realistic? Deepfakes are now taking Zoom calls to another level of awkwardness, by making us question whether our co-workers are genuine. A finance worker in Hong Kong transferred more than $25 million to scammers after they posed as his chief financial officer and other colleagues on a video conference call, marking perhaps the biggest known corporate fraud using deepfake technology to date. The worker had been suspicious about an email requesting a secret transaction, but the scammers looked and sounded so convincing on the call that he sent the money.

Corporate IT managers have spent more than a decade trying, often fruitlessly, to train office workers to spot phishing emails and resist the urge to click on dodgy attachments. Often hackers and fraudsters need just one person out of hundreds to inadvertently download the malware needed to tunnel into a corporate network. With AI-powered video tools, they’re moving into territory we have considered safe, underscoring how quickly deepfake technology has developed in just the last year. While it sounds like science fiction, such elaborate frauds are now relatively easy to set up, ushering us into a new age of skepticism.

The fraud in Hong Kong almost certainly used real-time deepfakes, meaning that the fake executive mirrored the scammer as they listened, talked and nodded during the meeting. According to David Maimon, a criminology professor at Georgia State University, online fraudsters have been using real-time deepfakes on video calls since at least last year for smaller-scale fraud including romance scams.

Maimon posted a video to LinkedIn showing a demo from developers who are selling deepfake video tools to potential fraudsters. In it, you can see the real image of a man on the left and his fake persona on the right, a beautiful young woman scamming the male victim in the middle.

This is uncharted territory for most of us, but here’s what the Hong Kong victim could have done to spot the deepfake, and what we’ll all need to do in the future for sensitive video calls:

  1. Use visual cues to verify who you’re talking to. Deepfakes still can’t do complex movements in real time, so if in doubt, ask your video conference counterpart to write a word or phrase on a piece of paper and show it on camera. You could ask them to pick up a nearby book or perform a unique gesture, like touching their ear or waving a hand, all of which can be difficult for deepfakes to replicate convincingly in real time.
  2. Watch the mouth. Look out for discrepancies in lip syncing or weird facial expressions that go beyond a typical connection glitch.
  3. Employ multi-factor authentication. For sensitive meetings, consider a secondary verification step via email, SMS or an authenticator app to make sure the participants are who they claim to be.
  4. Use other secure channels. For critical meetings that will involve sensitive information or financial transactions, you and the other meeting participants could verify your identities through an encrypted messaging app like Signal or confirm decisions such as financial transactions through those same channels.
  5. Update your software. Make sure that you’re using the latest version of your video conferencing software in case it incorporates security features to detect deepfakes. (Zoom Video Communications did not reply to questions about whether it plans to make such detection technology available to its users.)
  6. Avoid unknown video conferencing platforms. Especially for sensitive meetings, use well-known platforms like Zoom or Google Meet that have relatively strong security measures in place.
  7. Look out for suspicious behavior and activity. Some strategies stand the test of time. Be wary of urgent requests for money, last-minute meetings that involve big decisions, or changes in tone, language or a person’s style of speaking. Scammers often use pressure tactics, so be wary of any attempt to rush a decision, too.

Some of these tips could go out of date over time, especially visual cues. As recently as last year, you could spot a deepfake by asking the speaker to turn sideways to see them in profile. Now some deepfakes can convincingly move their heads side to side.

For years fraudsters have hacked into the computers of wealthy individuals, hoovering up their personal information to help them get through security checks with their bank. But at least in banking, managers can create new processes to force their underlings to tighten up security. The corporate world is far messier, with an array of different approaches to security that allow fraudsters to simply cast their nets wide enough to find vulnerabilities.

The more people wise up to the possibility of fakery, the less chance the scammers will have. We’ll just have to pay the price as the discomfort of conference calls becomes ever more agonizing, and the old Zoom clichés about your peers being on mute morph into requests for them to scratch their noses.  


© 2024 Bloomberg L.P.
