Are deepfake scams the future of cyber crime?

Posted on 28/10/2019


One of the first major cases of machine learning-based “voice phishing” was reported recently, when deepfake technology was used to impersonate a chief executive’s voice and successfully demand the transfer of a large sum of money. Was this a one-off, or is it the start of a new generation of cyber attacks?

When the UK CEO of an unnamed energy provider received a call from his German counterpart requesting the immediate transfer of €220,000 to a Hungarian supplier’s bank account, he authorised the transaction straight away.

It was only after the money had been moved that he noticed the call had come from an Austrian number, and alarm bells were raised. By this time, the funds were already in Mexico and no longer traceable.

The cyber criminals had used artificial intelligence (AI), in the form of what is known as “deepfake” technology, to create a synthetic copy of the German boss’s voice that sounded identical to the real thing.

First appearing on the internet in 2017, deepfakes are video, image or audio content that uses machine learning to mimic real people. Advanced deepfake algorithms can produce a near-carbon copy of a target’s likeness or voice and manipulate it to behave in any way the creator wants.

With computing power and machine learning capabilities continually strengthening, concern is rising now that cyber criminals have begun to leverage deepfake technology for social engineering attacks such as CEO Fraud.

Typically, CEO Fraud involves a fraudster claiming to be a figure of authority within an organisation and requesting that a member of staff transfer funds to an illegitimate bank account. Around £32 million has already been stolen from businesses through this type of scam,* a figure set only to rise as cyber criminals’ techniques grow more sophisticated.

“The use of deepfake technology in cyber crime is a real cause for concern. This scam is just the beginning of what could become a major threat for organisations in the future, regardless of business size. As machine learning escalates, social engineering attacks will also increase in complexity and the damage that businesses could face will be immeasurable.

“Companies must learn quickly to mitigate their potential risks, ensuring they take a proactive approach to develop their cyber risk management strategies and protect against new threats.”

Lee Johnson, Chief Information Security Officer at Air Sec

The threat of AI-based cyber crime isn’t going away, and employee awareness training is the first line of defence. Ensuring that all staff, not just management and finance teams, are educated in today’s threat landscape is critical. Any communication – whether email, telephone or video – that requests the transfer of funds or information should be treated with extreme caution. If unsure, do not respond – report it to your cyber defence team.

*National Fraud Intelligence Bureau

Want to know more?

We strongly recommend that you take action today to protect your systems and users from the threat of cyber attacks.

If you’d like to learn more about deepfake scams, or find out about our cyber security services, please contact us today.

