
Deepfakes: The new cybersecurity epidemic?

In the next part of our series on cybersecurity, we look at the phenomenon that takes the term ‘fake news’ to a whole new level and has taken the internet by storm, for better or for worse…

In recent years, deepfakes have gone viral as content creators use advanced editing techniques to poke fun at famous faces by syncing their lips to alternative narration, with varying degrees of realism. Even the UK Government has used the technology to resurrect Albert Einstein’s face for a public advertising campaign (https://www.youtube.com/watch?v=ukQljlpe6A8) about the importance of smart energy meters.

Deepfakes rely on a machine-learning architecture called a Generative Adversarial Network (GAN), which pits a generator, the model that produces the doctored audio and video, against a discriminator, which tries to tell real content from fake. The generator is refined until the discriminator can no longer detect differences between the original and doctored versions. AI assists the editing by learning common facial movements that it can replicate, and ongoing developments mean the videos are increasingly realistic, if rather unsettling.
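The adversarial loop can be sketched in miniature. The toy below is a hypothetical, heavily simplified illustration in plain Python, not a real GAN: a "generator" produces a single number, a "discriminator" scores how close it looks to the real data, and the generator keeps adjusting until the discriminator can no longer tell the difference.

```python
def discriminator(sample, real_mean):
    # Returns a score in (0, 1]: 1.0 means the sample is
    # indistinguishable from the real data, lower means "looks fake".
    return 1.0 / (1.0 + abs(sample - real_mean))

def train_toy_gan(real_mean=5.0, steps=200, lr=0.1):
    # The "generator" is just a single value it learns to emit.
    gen_mean = 0.0  # starts far from the real distribution
    for _ in range(steps):
        fake = gen_mean
        score = discriminator(fake, real_mean)
        # Nudge the generator toward output the discriminator rates
        # as real; the update shrinks as the score approaches 1.0,
        # i.e. as the discriminator stops detecting differences.
        gen_mean += lr * (real_mean - gen_mean) * (1.0 - score)
    return gen_mean
```

In a real GAN both networks are deep neural nets trained jointly on images or audio, but the stopping condition is the same idea: training continues until the discriminator cannot separate original from doctored content.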

The possibilities created are limitless, but this ultimately enables some dangerous activity. One widely reported example involves a British executive who received a phone call from the German CEO of his company requesting that he transfer €220,000 to a Hungarian supplier. Since he recognised his boss’s voice, he did as he was asked, not realising he was speaking to an AI-generated deepfake trained on audio of the CEO collected online by a hacker. The fraud had significant financial implications for the business and is indicative of the issues created by this technological innovation.

All manner of problems stem from the malicious use of deepfakes, but perhaps the most controversial are examples in which senior businesspeople or politicians are portrayed in a damaging light, which could bring disgrace upon their employers (https://www.youtube.com/watch?v=vm_rjs9fyQk) if they go viral. Social media platforms have been tasked with deleting such content, but human moderators often cannot tell the difference between what is and isn’t real, given the ever-improving quality produced by AI.

Instead, cybersecurity tech is needed to combat this threat. However, in many respects, there is an AI arms race between cybercriminals and cybersecurity developers as the technology continues to reach new heights. NWT remains ahead of the curve in monitoring how we can use these tools for good and work with our partners to provide the best protection against deepfakes.

Here are some simple steps that companies can take to protect their workers:

  • Arguably the most important is education. As a relatively new threat, few firms seem to include deepfakes in cyber-awareness courses in the same way they warn staff about phishing and ransomware, for example. People cannot be blamed for following orders that appear to come from their superiors, but they should be encouraged to verify such requests with a colleague. They should also be trained to spot the telltale signs that content may not be genuine.
  • If a video looks or sounds slightly strange, look closely at the person’s facial features and check for unnatural shadows in the image. A common trick, as in many scams, is to create a false sense of urgency. The best advice is not to rush into anything; if in doubt, always check with someone else.
  • To minimise the chances of hackers collecting the data they need to improve the quality of their deepfakes, keep your social media accounts secure by using a different password for each one and turning on two-factor authentication.
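On the two-factor point: the one-time codes produced by authenticator apps are commonly generated with the TOTP algorithm (RFC 6238), which derives a short code from a shared secret and the current time. A minimal sketch using only the Python standard library:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestamp=None, step=30, digits=6):
    """RFC 6238 time-based one-time password (HMAC-SHA1 variant)."""
    if timestamp is None:
        timestamp = time.time()
    # Count how many 30-second steps have elapsed since the Unix epoch.
    counter = int(timestamp) // step
    msg = struct.pack(">Q", counter)  # counter as 8-byte big-endian
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    # Dynamic truncation: the low nibble of the last byte picks an offset.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

Because both the server and the app derive the code from the same secret and clock, a stolen password alone is not enough to log in, which is exactly why enabling two-factor authentication raises the bar for attackers harvesting social media data.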

One useful step is a specification for digital content, published by major media and technology groups including Microsoft and the BBC, designed to ensure their output cannot be tampered with. US lawmakers are beginning to respond to calls for action by passing bills that crack down on deepfakes on social media, but until these changes take effect, we can offer more advice on how to stay alert to deepfakes and other similarly harmful content online.

To find out more about how we keep our clients safe, head to the Cybersecurity section of our website by clicking here.
