Imagine you meet someone new. Be it on a dating app or social media, you chance across each other online and get to talking. They’re genuine and relatable, so you quickly take it out of the DMs to a platform like Telegram or WhatsApp. You exchange photos and even video call each other. You start to get comfortable. Then, suddenly, they bring up money.
They need you to cover the cost of their Wi-Fi access, maybe. Or they’re trying out this new cryptocurrency. You should really get in on it early! And then, only after it’s too late, you realize that the person you were talking to was in fact not real at all.
They were a real-time AI-generated deepfake hiding the face of someone running a scam.
This scenario might sound too dystopian or science-fictional to be true, but it has happened to countless people already. With the spike in the capabilities of generative AI over the past few years, scammers can now create realistic fake faces and voices to mask their own in real time. And experts warn that those deepfakes can supercharge a dizzying variety of online scams, from romance to employment to tax fraud.
David Maimon, the head of fraud insights at identity verification firm SentiLink and a professor of criminology at Georgia State University, has been tracking the evolution of AI romance scams and other kinds of AI fraud for the past six years. “We’re seeing a dramatic increase in the volume of deepfakes, especially in comparison to 2023 and 2024,” Maimon says.
“It wasn’t a whole lot. We’re talking about maybe four or five a month,” he says. “Now, we’re seeing hundreds of these on a monthly basis across the board, which is mind-boggling.”
Deepfakes are already being used in a variety of online scams. One finance worker in Hong Kong, for example, paid $25 million to a scammer posing as the company’s chief financial officer in a deepfaked video call. Some deepfake scammers have even posted instructional videos on YouTube, carrying a disclaimer that they are for “pranks and educational purposes only.” Those videos usually open with a romance scam call, in which an AI-generated handsome young man talks to an older woman.
More traditional deepfakes—such as a pre-rendered video of a celebrity or politician, rather than a live fake—have also become more prevalent. Last year, a retiree in New Zealand lost around $133,000 to a cryptocurrency investment scam after seeing a Facebook advertisement featuring a deepfake of the country’s prime minister encouraging people to buy in.
Maimon says SentiLink has started to see deepfakes used to open bank accounts in order to lease apartments or commit tax refund fraud. He says an increasing number of companies have also encountered deepfakes in video job interviews.
“Anything that requires folks to be online and which supports the opportunity of swapping faces with someone—that will be available and open for fraud to take advantage of,” Maimon says.
Part of the reason for this increase is that the barriers to creating deepfakes are getting lower. Easily accessible AI tools can generate realistic faces, and others can animate those faces or build full-length videos from them. Scammers often use images and videos of real people, deepfaked to slightly change their faces or alter what they’re saying, to target their loved ones or hijack their public influence.
Matt Groh, a professor of management at Northwestern University who researches people’s ability to detect deepfakes, says that point-and-click generative AI tools make it much easier to make small, believable changes to already-existing media.
“If there’s an image of you on the internet, that would be enough to manipulate a face to look like it’s saying something that you haven’t said before or doing something you haven’t done before,” Groh says.
It’s not just fake video that you need to be worried about. With a few clips of audio, it’s also possible to make a believable copy of somebody’s voice. One study in 2023 found that humans failed to detect deepfake audio over a quarter of the time.
“Just a single image and five seconds of audio online mean that it’s definitely possible for a scammer to make some kind of realistic deepfake of you,” Groh says.
Deepfakes are becoming more pervasive in contexts other than outright scams. Social media has been flooded over the past year with AI-generated “influencers” stealing content from adult creators by deepfaking new faces onto their bodies and monetizing the resulting videos. Deepfakes have even bled over into geopolitics, like when the mayors of multiple European capital cities held video calls with a fake version of the mayor of Kyiv, Ukraine. People have started using deepfakes for personal reasons, like bringing back a dead relative or creating an avatar of a victim to testify in court.
So, if deepfakes are everywhere, how do you spot one? The answer is not technology. A number of technology companies, including OpenAI, have launched deepfake detection tools. Researchers have also proposed mechanisms to detect deepfakes based on things like light reflected in a person’s eyes or inconsistent facial movements, and have started investigating how to implement them in real time.
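To make the eye-reflection idea concrete, here is a minimal sketch of how such a check might work. It is a toy illustration of the intuition behind published corneal-highlight research, not a production detector: it assumes OpenCV’s bundled Haar cascades for eye detection, and the function name, brightness cutoff, and 64×64 patch size are arbitrary choices of ours.

```python
import cv2
import numpy as np

def eye_highlight_similarity(image_path, brightness_cutoff=220):
    """Compare the bright specular highlights in the two detected eyes.
    Returns an IoU-style score in [0, 1], or None if both eyes aren't found."""
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    eye_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")
    eyes = eye_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(eyes) < 2:
        return None  # need both eyes to compare reflections
    # Keep the two largest detections, ordered left to right.
    eyes = sorted(eyes, key=lambda e: e[2] * e[3], reverse=True)[:2]
    eyes = sorted(eyes, key=lambda e: e[0])
    masks = []
    for x, y, w, h in eyes:
        patch = cv2.resize(gray[y:y + h, x:x + w], (64, 64))
        # The brightest pixels are candidate corneal reflections.
        _, mask = cv2.threshold(patch, brightness_cutoff, 255,
                                cv2.THRESH_BINARY)
        masks.append(mask > 0)
    union = np.logical_or(masks[0], masks[1]).sum()
    if union == 0:
        return None  # no highlights found in either eye
    # Eyes lit by the same sources tend to show matching reflections
    # in a real photo; many generated faces do not.
    return np.logical_and(masks[0], masks[1]).sum() / union
```

A score near 1 means the two eyes reflect the light sources almost identically, as you would expect in a genuine photo; a very low score is only a weak red flag, which is part of why cues like this have not solved the detection problem.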
But those models often cannot reliably detect different kinds of AI fakes. OpenAI’s model, for example, is designed to flag only content generated with the company’s own DALL-E 3 tool, not images from other generation models.
There’s also the risk that scammers will game AI detectors, repeatedly tweaking their content until it fools the software.
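The mechanics of that cat-and-mouse game are simple to sketch. In the hypothetical illustration below, `detector` is a stand-in for any model that returns a “probability this is fake” score, and the loop keeps random perturbations that lower it; real attacks are more sophisticated (gradient-based, for instance), but the feedback loop is the same.

```python
import numpy as np

def evade_detector(image, detector, threshold=0.5, step=2.0,
                   max_iters=500, seed=0):
    """Random-search sketch: keep noise that lowers the detector's
    'fake' score until it drops below the decision threshold."""
    rng = np.random.default_rng(seed)
    img = image.astype(np.float32)
    score = detector(img)
    for _ in range(max_iters):
        if score < threshold:
            break  # the detector now labels the fake as "real"
        noise = rng.normal(0.0, step, size=img.shape)
        candidate = np.clip(img + noise, 0.0, 255.0)
        new_score = detector(candidate)
        if new_score < score:  # keep only changes that fool the model more
            img, score = candidate, new_score
    return img, score
```

Because the attacker only needs the detector’s score as feedback, publicly available detection tools effectively hand scammers a practice target.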
“The major thing we have to understand is that the technology we have right now is not good enough to detect those deepfakes,” Maimon says. “We’re still very much behind.”
For now, as video deepfakes become more common, the best way to detect one still relies on humans. Studies of deepfake detection show that people are better at telling whether videos are real or fake than they are with audio-only or text content, and in some cases they even outperform leading detection models.
Groh’s team conducted a study that found taking more time to determine whether an image was real or fake led to a significant increase in accuracy: up to eight percentage points for just 10 seconds of viewing time.
“This sounds almost so simple,” Groh says. “But if you spend just a couple extra seconds, that leads to way higher rates of being able to distinguish an image as real or fake. One of the ways for any regular person to just be a little bit less susceptible to a scam is to ask, ‘Does this look actually real?’ And if you just do that for a few extra seconds, we’re all going to be a little bit better off.”
Deepfakes’ popularity could be a double-edged sword for scammers, Groh says. The more widespread they are, the more people will be familiar with them and know what to look for.
That familiarity has paid off in some cases. Last summer, a Ferrari executive received a call from someone claiming to be the company’s CEO. The caller convincingly emulated the CEO’s voice but abruptly hung up when the executive tried to verify their identity by asking what book the CEO had recommended just days earlier. The CEO of WPP, the world’s biggest advertising agency, was also unsuccessfully targeted by a similar deepfake scam.
“I think there’s a balancing act going on,” Groh says. “We definitely have technology today that is generally hard for people to identify. But at the same time, once you know that there’s a point-and-click tool that allows you to transform one element into something else, everyone becomes a lot more skeptical.”