As artificial intelligence-powered deepfake scams become more prevalent, security firms warn that the attack method could extend beyond video and audio.
On Sept. 4, software firm Gen Digital reported that malicious actors using AI-powered deepfake scams to defraud crypto holders ramped up their operations in the second quarter of 2024.
The company said that a scammer group called “CryptoCore” had already stolen over $5 million in crypto using AI deepfakes.
While the amount seems low compared to losses from other attack methods in the crypto space, security professionals warn that AI deepfake attacks could expand further, threatening the safety of digital assets.
AI deepfakes threaten wallet security
Web3 security firm CertiK believes that AI-powered deepfake scams will become more sophisticated. A CertiK spokesperson told Cointelegraph that it could also expand beyond videos and audio recordings in the future.
The spokesperson explained that the attack vector could be used to trick wallets that rely on facial recognition into giving hackers access:
“For instance, if a wallet relies on facial recognition to secure critical information, it must evaluate the robustness of its solution against AI-driven threats.”
The spokesperson said it’s important for crypto community members to become increasingly aware of how this attack vector works.
AI deepfakes will continue to threaten crypto
Luis Corrons, a security evangelist for cybersecurity company Norton, believes that AI-powered attacks will continue to target crypto holders. Corrons noted that crypto yields significant financial rewards and lower risks for hackers. He said:
“Cryptocurrency transactions are often high in value and can be conducted anonymously, making them a more attractive target for cybercriminals, as successful attacks yield more significant financial rewards and lower risk of detection.”
Furthermore, Corrons said that crypto still lacks regulation, meaning cybercriminals face fewer legal consequences and have more opportunities to attack.
Related: Warren Buffett compares AI to nukes after seeing deepfake doppelganger
How to detect AI-powered deepfake attacks
While AI-powered attacks may be a big threat to crypto users, security professionals believe that there are ways for users to protect themselves from this type of threat. According to a CertiK spokesperson, education would be a good place to start.
A CertiK engineer explained that knowing both the threats and the tools and services available to combat them is important. The engineer added that users should also be wary of unsolicited requests. They said:
“Being skeptical of unsolicited requests for money or personal information is crucial, and enabling multifactor authentication for sensitive accounts can help add an extra layer of protection against such scams.”
Meanwhile, Corrons believes there are “red flags” that users can look for to avoid AI deepfake scams. These include unnatural eye movements, facial expressions and body movements.
Furthermore, a lack of emotion could also be a big tell. “You also can spot facial morphing or image stitches if someone’s face doesn’t seem to exhibit the emotion that should go along with what they’re supposedly saying,” Corrons explained.
Apart from these, the executive said that awkward body shapes, misalignments and audio inconsistencies can help users determine whether they’re looking at an AI deepfake.
Magazine: Lazarus Group’s favorite exploit revealed — Crypto hacks analysis