Tech corner: Deepfakes

As they move from prank videos on YouTube to the mainstream, deepfakes are proliferating into the world of cyber crime, creating risks for institutions and individuals alike
by Christian Doherty

A deepfake uses synthetic content – defined by the FBI as a “broad spectrum of generated or manipulated digital content, which includes images, video, audio, and text” – to produce media in which the facial features, mannerisms and voice of a person are altered convincingly enough to appear real.

It is created by a 'generative adversarial network' (GAN), a machine learning model that uses immense processing power to learn patterns from existing inputs, such as digital images, and generate new content from them. GANs are so called because they effectively pit two neural networks against each other.

According to an article in MIT Technology Review:

Both networks are trained on the same data set. The first one, known as the generator, is charged with producing artificial outputs, such as photos or handwriting, that are as realistic as possible. The second, known as the discriminator, compares these with genuine images from the original data set and tries to determine which are real and which are fake. On the basis of those results, the generator adjusts its parameters for creating new images. And so it goes, until the discriminator can no longer tell what’s genuine and what’s bogus.

This marks a step beyond existing smart AI applications such as Google Assistant or Alexa, which can recognise and respond to linear questions; by contrast, GANs use multiple models to understand an input and create something new from it. As Forbes magazine put it, "GANs gave neural networks the power not just to perceive, but to create."
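To make the generator/discriminator loop described above concrete, here is a minimal sketch in Python using the PyTorch library. The network sizes, data and training settings are illustrative placeholders, not drawn from the article or from any real deepfake system.

    import torch
    import torch.nn as nn

    latent_dim, image_dim = 64, 28 * 28   # toy sizes; real deepfake models are far larger

    # Generator: turns random noise into a synthetic "image" vector
    generator = nn.Sequential(
        nn.Linear(latent_dim, 256), nn.ReLU(),
        nn.Linear(256, image_dim), nn.Tanh(),
    )

    # Discriminator: scores an input as genuine (1) or fake (0)
    discriminator = nn.Sequential(
        nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
        nn.Linear(256, 1), nn.Sigmoid(),
    )

    opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
    loss_fn = nn.BCELoss()

    for step in range(1000):
        real = torch.rand(32, image_dim)              # stand-in for a batch of real images
        fake = generator(torch.randn(32, latent_dim))

        # Train the discriminator to separate real samples from generated ones
        d_loss = (loss_fn(discriminator(real), torch.ones(32, 1))
                  + loss_fn(discriminator(fake.detach()), torch.zeros(32, 1)))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Train the generator to fool the discriminator into scoring fakes as genuine
        g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()

Each pass through the loop is one round of the contest the MIT Technology Review quote describes: the discriminator improves at spotting fakes, and the generator adjusts its parameters in response.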

Positive and negative uses

The use of deepfake technology is going mainstream. Deepfakes have been used in viral political campaigns on Facebook, mocked-up videos have spread across the internet, advertisers are using them to create convincing representations of fictional scenarios, and some companies are using the technology for training purposes.

But while there are many positive uses for the tech, such as David Beckham appearing to speak in nine languages to launch an anti-malaria petition, the ease with which deepfakes can be generated – there are numerous easy-to-use free apps available, along with more sophisticated tools – has led some to wonder whether they represent the next frontier in cyber crime, specifically ID theft and fraud.

For example, in 2019, a chief executive officer of “a UK-based energy firm” (the company’s insurance firm did not name the victim) was duped into transferring €220,000 to a fake account after receiving a call from someone purporting to be his boss, according to a Wall Street Journal article. It was in fact a deepfake audio call built by scammers using public domain voice records. It was only on the third call that day that the executive grew suspicious.

Existential threat to current models of fraud prevention?

An August 2020 report published in Crime Science journal, titled ‘AI-enabled future crime’, identifies deepfakes as one of the most significant emerging threats in the use of AI in crime.

The report outlines results from an ‘AI & future crime’ workshop organised by the authors and the Dawes Centre for Future Crime at University College London. The workshop team envisaged a diverse range of criminal applications for deepfake technology that exploit people’s implicit trust in media sources such as voice and video, including: “Impersonation of children to elderly parents over video calls to gain access to funds; usage over the phone to request access to secure systems; and fake video of public figures speaking or acting reprehensibly in order to manipulate support.”

This goes beyond simple ID scams and bank fraud – the FBI recently issued its own warning of malicious foreign actors using the technology to spread disinformation across social and mainstream media.  

Not surprisingly, banks and other institutions are worried, and are investing in countermeasures to protect themselves, according to a Financial Times article from September 2020. The increased threat from deepfakes may be the factor that pushes biometrics into mainstream use. A growing number of banks – including HSBC, Chase, ABN Amro, Caixa Bank, Mastercard and Anna Money – are now adopting a range of biometrics-based tools to root out and combat fraud.


The growing awareness of the threat is in part due to the spread of the technology needed to create deepfakes, in particular the ease of access to greater processing power. Whereas previously criminals might have needed access to specialist hardware, since the advent of cloud computing anyone wanting to make a deepfake no longer needs to process anything on their local machine: they can simply upload the source images to Microsoft Azure or Google Cloud servers and draw on their immense processing power. And training a neural network that would previously have taken five or six hours can now be done on Google’s free GPUs in just four minutes.
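The shift is largely a matter of where the computation runs. A minimal sketch, assuming PyTorch in a free hosted notebook such as Google Colab: the code is the same as it would be locally, and the only change is moving the model and data onto the rented accelerator (the model and sizes below are placeholders).

    import torch
    import torch.nn as nn

    # Use the cloud GPU if one has been allocated, otherwise fall back to the CPU
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    model = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784)).to(device)
    batch = torch.randn(32, 64, device=device)   # a batch of inputs created on the GPU

    output = model(batch)                        # the forward pass runs on the accelerator
    print(output.shape, device)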

As the sophistication of synthetic media grows, banks are increasingly having to rely on access controls that require human intervention. In practice that means two-factor authentication, where users log on using a voice or fingerprint check on one device, followed by an email or text message sent to a separate device. Users may then have to take manual action to gain access to particular services. Otherwise, if an institution relies purely on technological access control, it is more than likely that sophisticated scammers will find a way around it.
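As a rough illustration of that two-step check, the sketch below combines a first-factor result from the user’s device with a one-time code delivered out of band, generated here with the pyotp library. The function and variable names (grant_access, biometric_match and so on) are hypothetical, not any bank’s actual API.

    import pyotp

    def grant_access(biometric_match: bool, submitted_code: str, user_secret: str) -> bool:
        """Require both factors: the on-device biometric and the out-of-band code."""
        if not biometric_match:
            return False                       # first factor failed, stop here
        totp = pyotp.TOTP(user_secret)         # secret provisioned for this user
        return totp.verify(submitted_code)     # second factor: code sent to a separate device

    # Example: provision a secret, send the current code by SMS or email, then verify it
    secret = pyotp.random_base32()
    code_sent_to_phone = pyotp.TOTP(secret).now()
    print(grant_access(True, code_sent_to_phone, secret))   # True when both factors pass

The point of splitting the factors across devices is that a scammer who can synthesise a convincing voice still has to compromise the second channel as well.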

Published: 13 Oct 2021
Categories:
  • Risk
  • Fintech
Tags:
  • artificial intelligence
  • technological resilience
