Real AI fraud alert: Why our voice, our face – and our money – could be next in line.
You might want to think twice before trusting that FaceTime call from your mum – or that urgent voicemail from your boss. According to OpenAI CEO Sam Altman, the age of deepfake fraud isn’t coming. It’s already here – and it sounds exactly like you. And, he warns, this is just the beginning of a global fraud crisis.
During a recent event in Washington, DC, Altman issued an ominous warning: generative AI will soon allow bad actors to perfectly imitate people’s voices, faces, and even personalities – and use them to scam you out of your money, your data, or both. Anyone will be able to do it.
‘Right now, it’s a voice call; soon it’s going to be a video or FaceTime that’s indistinguishable from reality,’ Altman told US Federal Reserve vice chair Michelle Bowman.
So, what’s actually going on here – and should you be worried?
Voiceprints and video fakes: The new weapons of fraud
Altman’s concern centres on the fact that some banks and companies still use voiceprint authentication – that is, they let you move money or access accounts just by recognising your voice. But with today’s AI tools, it takes just a few seconds of audio to clone someone’s voice. There are now dozens of apps – some free – that can do it.
Scammers are already calling people and recording their voices when they answer the phone. It just takes one sample for them to be able to produce a realistic version of your voice saying anything they want.
Combine that with increasingly realistic AI-generated video, and you’ve got a perfect storm: scammers can now create entirely fake FaceTime or video calls that look and sound like your spouse, your boss, or your child. You’re not just getting a suspicious email anymore – you’re getting a fake person.
Real-world scams: When your ‘son’ isn’t really your son
These warnings aren’t theoretical. Here are some examples of how AI fraud is already unfolding:
As reported by Canada’s CBC, scammers cloned the voice of a Manitoba woman’s son and called her claiming he ‘needed to talk’. ‘It was his voice,’ she said.
Leann Friesen, a mother of three from the small community of Miami, Manitoba, received a strange call from a private number a few weeks ago. What she heard on the other end stopped her in her tracks – it was her son’s voice, sounding distressed.
‘He said, “Hi mom,” and I said hi,’ Friesen recalled. ‘He said, “Mom, can I tell you anything?” and I said yes. He said, “Without judgment?”’
That’s when alarm bells started ringing.
‘I’m getting a little bit confused at that point – like, why are you asking me this?’ she said.
Something about the conversation felt wrong. Friesen decided to cut it short, telling the caller she’d ring back on her son’s mobile – and hung up.
She immediately dialled his number – and woke him up. Her son, who works shifts, had been asleep the whole time. ‘He said, “Mom, I didn’t call you.”’
‘It was definitely my son’s voice that was on the other end of the line.’
The Hong Kong deepfake video and the FBI case
In Hong Kong, a finance worker was tricked into transferring more than US$25 million after joining what appeared to be a video call with the company’s CFO – in reality, a deepfake.
According to the FBI, impersonators in the US have used AI-generated calls posing as government officials to extract sensitive information – in one case, even pretending to be Senator Marco Rubio in calls to foreign diplomats.
So what exactly is OpenAI doing?
Altman insists OpenAI isn’t building impersonation tools. Technically, that’s true – but some of its projects could be used that way.
Sora, OpenAI’s video generator, creates ultra-realistic videos from text prompts. It’s a leap forward in creative AI – but potentially a leap forward for fraud too. Imagine feeding it a script and asking for ‘a video of Joe Bloggs calling his bank to request a password reset’.
Eyeball scanner controversy
Altman also backs Worldcoin’s Orb, a controversial biometric device that scans your eyeball to verify your identity. It’s being marketed as a new kind of proof-of-personhood – but critics argue it’s a dystopian answer to a digital problem.
OpenAI says it doesn’t condone misuse, but Altman admits that others might not play so nicely.
‘Just because we’re not releasing the technology doesn’t mean it doesn’t exist… Some bad actor is going to release it. This is coming very, very soon.’
The tech is outpacing the law
Governments are still scrambling to catch up. The FBI and Europol have issued warnings, but global laws around AI impersonation are patchy at best. The UK’s Online Safety Act doesn’t yet cover all forms of synthetic media, and regulators are still debating how to define AI-generated fraud.
Meanwhile, scammers are exploiting the lag.
What can you do to protect yourself?
Altman may be worried, but there are ways to protect yourself and your accounts. Here’s what you should consider doing today:
- Stop using voice authentication: If your bank uses it, ask for a different method. It’s no longer safe.
- Use strong, unique passwords and two-factor authentication (2FA): Prefer app-based 2FA over SMS wherever possible. It remains your best defence.
- Verify through another channel: If you get a suspicious call or video message – even if it looks real – contact the person separately on another platform or phone number.
- Educate your family members: Some older relatives are especially vulnerable. Help them understand what AI fraud looks and sounds like.
- Be cautious with your voice online: It only takes a few seconds of clear audio to create a convincing fake, so avoid posting long videos or voicemails unless you need to.
Final thoughts: We’re not in Kansas anymore
AI tools that can imitate your voice or face with chilling accuracy are no longer science fiction. They’re out in the wild. Sam Altman’s warning might sound self-serving, but he’s not wrong: this is going to get worse before it gets better.
And while the fraudsters are moving fast, our institutions – from banks to regulators – are moving painfully slowly.
Until the system catches up, the best security you’ve got is your own scepticism.
So next time your ‘boss’ sends a video message asking for a wire transfer at 4 AM? You might want to sleep on it.