
The Deepfake Threat: Why It’s Time to Update Your Security Policies

by Ian Heritage

Could this be the year that deepfakes break through into popular culture? One ominous sign of things to come is that social media companies have spent the past few weeks scrambling to develop a coherent set of policies on faked content. Their actions should help raise awareness and limit the impact of malicious audio and video online.

But let’s not forget that deepfakes are already being used by cyber-criminals today, specifically in CEO fraud attacks. This will require CISOs to update their risk management and security strategies, as attacks become more widespread and convincing.

Keeping it real
AI-powered deepfakes are spoofed audio or video clips that are hard to distinguish from genuine recordings. They quite literally put words in the mouth of the subject, whether that’s a famous politician, a celebrity or a CEO. While it sounds like a lot of fun, there’s a serious side. Doctored video clips could be used ahead of elections to discredit candidates, for example. The bad news is that psychologists believe that once we’ve viewed something like this, it tends to have a lasting impact on our perception of a person, even if we subsequently learn the video is a fake.

Social media companies are understandably nervous about the potential for misinformation to spread on a whole new scale via their platforms. Earlier this week Twitter revealed its policy on deepfakes, promising to label any content that has been “significantly and deceptively altered or fabricated” and that has been shared deceptively. It said it would remove any such content that is also deemed capable of causing harm. The firm joins Facebook, which last month said it would ban deepfakes outright from its site, and YouTube, which has banned such content in the run-up to the 2020 US Presidential election.

Firms under pressure
In this context, deepfakes represent a major threat to democratic countries like ours, especially following previous attempts by nation states to interfere in elections and referendums. But there’s another angle more relevant to businesses. Deepfake audio clips are already being used in quasi-BEC (Business Email Compromise) attacks, designed to impersonate CEOs and trick employees into wiring funds to hacker-controlled bank accounts.

A UK energy company lost €220,000 (£187,000) after its CEO was tricked into making a fund transfer by someone he believed to be his German boss. In reality, the ‘person’ on the other end of the phone was a deepfake audio impersonation of the boss’s voice. This is just the beginning. In our 2020 predictions report, we argue that the C-suite will increasingly find themselves targeted by this kind of hi-tech fraud, as their public profiles make it easier for cyber-criminals to record and mimic their voices.

Spotting the fakers
We’re just at the start of a very long road. In time, the technology will get better, making it harder to spot the fakes. We may even reach a point where organisations or individuals are held to ransom with fake clips of a CEO doing something outrageous, which could cause the company’s share price to tank.

CISOs must therefore act now to build this threat into their security strategies, by updating their employee awareness training and tightening company policies on large fund transfers. Fortunately, the majority of CEO fraud today still occurs via email. For these cases, Trend Micro has its own AI-powered solution, Writing Style DNA, which “blueprints” the writing style of senior executives so that it can raise the alarm when hackers try to impersonate them. We recommend using it as part of a layered approach to email security that also covers domain reputation and other elements.
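To make the general idea concrete, here is a minimal, hypothetical sketch in Python of stylometric impersonation detection. It is not Trend Micro’s actual implementation, and every name, feature and threshold in it is illustrative: it simply fingerprints an executive’s known emails with basic style features and flags a new message that deviates sharply from that baseline.

```python
# Illustrative sketch of writing-style fingerprinting (hypothetical;
# not Writing Style DNA). It profiles known-genuine emails with simple
# stylometric features and flags messages that deviate from the profile.
import math
import re
from collections import Counter

FUNCTION_WORDS = ["the", "and", "to", "of", "a", "in", "that", "is",
                  "for", "it", "with", "as", "on", "be", "at"]

def features(text: str) -> list[float]:
    """Return a simple stylometric feature vector for one email body."""
    words = re.findall(r"[a-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    total = max(len(words), 1)
    counts = Counter(words)
    # Relative frequencies of common function words...
    vec = [counts[w] / total for w in FUNCTION_WORDS]
    # ...plus two scaled length features.
    vec.append(len(words) / max(len(sentences), 1) / 40.0)  # avg sentence length
    vec.append(sum(len(w) for w in words) / total / 10.0)   # avg word length
    return vec

def profile(emails: list[str]) -> list[float]:
    """Average feature vectors of known-genuine emails into a 'blueprint'."""
    vecs = [features(e) for e in emails]
    return [sum(col) / len(vecs) for col in zip(*vecs)]

def distance(a: list[float], b: list[float]) -> float:
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def looks_impersonated(blueprint: list[float], email: str,
                       threshold: float = 0.05) -> bool:
    """Flag an email whose style deviates from the blueprint."""
    return distance(blueprint, features(email)) > threshold

# Usage: build a blueprint from genuine mails, then screen new requests.
genuine = [
    "Please review the Q3 figures and send me your feedback by Friday.",
    "Thanks for the update. Let's discuss this at the board meeting.",
]
suspect = "URGENT!! wire 220000 EUR today to the account below, do not call"
blueprint = profile(genuine)
if looks_impersonated(blueprint, suspect):
    print("Style deviates from the executive's blueprint - hold for review")
```

Real products rely on far richer feature sets and trained models, but the principle is the same: known-good writing style becomes a baseline, and a sharp deviation becomes a signal worth holding for review.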

Finally, be reassured that cybersecurity remains an arms race. The deepfakers might appear to have the upper hand at the moment, but realistic fakes are still few and far between, and we’re working all the time on ways to foil them.