
Elon Musk deepfakes drive billions in fraud losses

She first saw the ad on Facebook. Then again on TikTok. After seeing Elon Musk pitch investment opportunities over and over, Heidi Swan decided it must be real.

“Looked just like Elon Musk, sounded just like Elon Musk, and I thought it was him,” Swan said.

She contacted the company behind the ad and opened an account with more than $10,000. The 62-year-old healthcare worker thought she was making a smart cryptocurrency investment backed by a businessman and investor worth billions of dollars.

But Swan soon learned she had been duped by a new wave of high-tech thieves using artificial intelligence to create deepfakes.

Even now, looking back at the videos and realizing they were fakes, Swan still thinks they look convincing.

“They still look like Elon Musk,” she said. “They still sound like Elon Musk.”

Heidi Swan / Photo: CBS News Texas

Deepfake scams are on the rise in the US

As artificial intelligence technology develops and becomes more accessible, these types of scams are becoming more common.

AI-generated content accounted for more than $12 billion in U.S. fraud losses last year, a figure that could reach $40 billion by 2027, according to the consulting firm Deloitte.

Both the Federal Trade Commission and the Better Business Bureau warn that deepfake scams are on the rise.

A study by the artificial intelligence company Sensity found that Elon Musk is the celebrity most commonly used in deepfake scams. One likely reason is his wealth and reputation as an entrepreneur. Another is the sheer number of interviews he has given; the more content about someone exists online, the easier it is to create convincing deepfakes of them.

Anatomy of a deepfake

At the University of North Texas at Denton, Professor Christopher Meerdo is also using artificial intelligence. But he uses it to create art.

“It’s not a substitute for the creative arts,” Meerdo said. “It will just complement them and change our understanding of what we can do in the creative space.”

While Meerdo sees artificial intelligence as a way to innovate, he also sees its dangers.

Meerdo showed the CBS News Texas I-Team how scammers can take real video and use artificial intelligence tools to spoof a person’s voice and mouth movements, making it seem like they’re saying something completely different.

Advances in technology are making deepfake videos easier to create. For someone familiar with AI, all it takes is one still image and a video recording.

To demonstrate this, Meerdo took a video of investigative reporter Brian New to create a deepfake of Elon Musk.

These AI-generated videos are far from perfect, but they only need to be convincing enough to fool an unsuspecting victim.

“If you’re really trying to scam people, I think you can do some really bad things with it,” Meerdo said.

How to recognize a deepfake?

Some deepfakes are easier to detect than others; there may be signs such as unnatural lip movements or strange body language. But as the technology improves, it will become increasingly difficult to tell just by looking.

There are a growing number of websites claiming to be able to detect deepfakes. Using three known deepfake videos and three genuine ones, the CBS News Texas I-Team subjected five of these websites to an unscientific test: Deepware, Attestiv, DeepFake-O-Meter, Sensity and Deepfake Detector.

In total, these five online tools correctly identified the tested videos nearly 75% of the time. The I-Team reached out to the companies with the results; their answers are below.

Deepware

Deepware, a free-to-use website, initially failed to flag two of the fake videos the I-Team tested. In an email, the company said the clips used were too short and that, for best results, uploaded videos should be between 30 seconds and one minute long. Deepware correctly identified all of the longer videos. According to the company, its 70% detection rate is considered good for the industry.

The FAQ section on the Deepware website states: “Deepfakes are not a solved problem yet. Our results show the likelihood of a particular video being a deepfake or not.”

Deepfake Detector

Deepfake Detector, a tool that costs $16.80 per month, identified one of the fake videos as having a “97% natural voice.” The company, which specializes in detecting AI-generated voices, said in an email that factors such as background noise or music can affect results, but that its accuracy rate is about 92%.

In response to a question about recommendations for general consumers, the company wrote: “Our tool is user-friendly. Average consumers can easily upload an audio file to our website or use our browser extension to analyze the content directly. The tool will provide analysis to help determine whether a video may contain deepfake elements using probabilities, making it accessible even to those unfamiliar with AI technology.”

Attestiv

Attestiv flagged two real videos as “suspicious.” According to the company’s CEO Nikos Vekiarides, false positives can be caused by factors such as graphics and editing. Both genuine videos labeled as “suspicious” contained graphics and editing. The site offers a free service, but it also has a paid tier where consumers can customize parameters and calibrations for deeper analysis.

While acknowledging that Attestiv is not perfect, Vekiarides noted that as deepfakes become increasingly difficult to detect with the naked eye, such websites are needed as part of the solution.

“Our tool can determine if something is suspicious, and then you can check it with your own eyes and say, ‘I really think this is suspicious,’” Vekiarides said.

DeepFake-O-Meter

DeepFake-O-Meter is another free tool, supported by the University at Buffalo and the National Science Foundation. It determined that two of the genuine videos had a high probability of being AI-generated.

In an email, the creator of the open-source platform said a limitation of deepfake detection models is that video compression can lead to problems with video and audio synchronization, as well as inconsistent mouth movements.

In response to a question about how general users can use the tool, the creator emailed: “Currently, the main result shown to users is the probability value that a sample was generated, across various detection models. This can be used as a reference if multiple models give the same answer with confidence (for example, more than 80% for AI-generated or less than 20% for real video). We are currently developing a clearer way to display results, as well as new models that can output comprehensive detection results.”

Sensity

Sensity’s deepfake detector correctly identified all six clips and displayed a heat map showing where AI manipulation was most likely.

The company offers a free trial of its service and told the I-Team that while the tool is currently aimed at private and government organizations, its future goal is to make the technology available to everyone.
