We spoke to Joshua McKenty and Khadem Badiyan about the many ways companies have become vulnerable to cybercrime and the skills needed to mitigate further risk.
Deepfakes, that is, digitally manipulated images, video and audio samples, have evolved from an obvious, clunky attempt at mimicry to a sophisticated tool, often used to exploit individuals and organisations at scale.
That is according to Joshua McKenty, the CEO of cybersecurity platform Polyguard, who explained that, as with other forms of aggressive cybercrime, AI has enabled fraud to expand across several axes by essentially democratising the threat.
“More groups and individuals are engaging in fraud because it has become simpler and cheaper. Secondly, it has increased the pool of targets because language is no longer a barrier and, since so much of the work is automated, fraud is now targeting everyone. It used to be a problem limited to ‘high-value targets’; now it is a problem for anyone whose data can be found or purchased on the internet.
“Thirdly, the fraud has become much more sophisticated and effective. Rather than fraud within a single channel, such as a phishing email or a text scam, AI-powered fraud is multi-channel and may include tailored messages across text, email, voice calls and even video chat on WhatsApp or Zoom.”
Post-attack policies
For McKenty’s colleague, CTO Khadem Badiyan, organisations are wholly unprepared to manage the impact of AI-powered technologies on fraud. He noted that many companies rely on a policy of hypervigilance and continue to run outdated training programmes, despite documented evidence that humans are poor at spotting deepfakes.
He further explained that while organisations may have a dedicated team, trained to respond to a systems breach or attack, limited scope can render the response weak, ineffective or redundant.
“Fraud teams are typically made up of professionals trained to catch fraud after the fact, rather than prevent fraud. This leads to tools and procedures that are outdated and organised by the channel of the attack or the timing of the attack rather than holistically.
“Fraud can impact an organisation in four distinct ways: by targeting the organisation itself; targeting its clients through impersonation of the organisation; attacking the brand through social broadcasting of fake content; and blackmailing an organisation’s executives through romance cons or kidnap scams. Usually only the first of these attacks is actually under the purview of the fraud team.”
This view that individuals are vulnerable to attack because of an unrealistic sense of their own abilities, or blind faith in an organisation’s security network, is shared by McKenty, who said “there is a social challenge in the bilateral nature of identity proofs”.
“We’re all used to proving who we are when we talk to our bank, but there’s no mechanism in place for our bank to prove itself to us. This leaves us deeply vulnerable. As with many other facets of human cognition, we’re much worse at spotting fraud than we think. Our belief in our abilities causes us to underestimate the danger in ordinary calls and emails.
“Because deepfakes are used in satire, as well as in fraud and exploitation, most people have seen examples of deepfakes that they’re not impressed with. This is like judging the risk of counterfeiting by the quality of today’s Monopoly money.”
What can we do?
McKenty explained that training employees to treat digital content with a healthy dose of scepticism, along with a robust education in provenance tools and methods of verification, will empower them to better protect themselves and their workplace systems. By understanding that almost anything can be duplicated, including ‘trusted’ numbers, IDs, email addresses and even voices, professionals can make themselves less vulnerable to fraud.
He advises people to use codewords or to prompt specific actions, and not to immediately share their name or other details during a phone conversation.
“Don’t be afraid to be rude and demand that the caller prove their identity. Reduce the social stigma around ‘falling for’ or ‘getting duped’ by scammers. It’s critically important for employees to report scam attempts, successful or otherwise, immediately; any punitive measures will reduce that reporting.
“The right solution for preventing fraud via deepfakes is to use strong remote identity verification and to integrate that with all communication channels, especially voice and video.”
McKenty thinks that society needs to embrace widespread cybersecurity education and make a conscious effort to uphold what we know to be true, while unearthing solutions to the problems we don’t yet have a firm grasp on.
“There are always areas of science and technology where we have accepted that our common sense fails us. This includes nuclear, gravitational and quantum physics, the realms of the very small and the very large; even most of the details of radio transmission are counterintuitive.
“Embracing science requires us, as a society, to trust the expertise of scientists; that trust gave us the benefits of high-speed 5G cell networks and the eradication of polio. Accepting that we can’t trust our own flawed judgements of voice and video is a similarly challenging moment, but one that will pay massive dividends by allowing us to get used to relying on verified identity proofs instead of easily spoofed pseudo-credentials.”