Deeply concerned about deep fakes?


Here are two hundred million reasons why deep fakes are dangerous.

Forgeries have been around for a long time, and we’re not just talking about fake signatures on false cheques. Back before everyone was carrying camera-phones around and uploading pictures to the internet, physical photos captured on film and developed in chemical baths were the standard for still images, and even those could sometimes be faked. It was hard and required creative costumes, appropriate lighting, the right makeup, and other production assets, but it wasn’t impossible. However, the time, energy, and money needed to create these early ancestors of modern deep fakes kept image counterfeiting out of the reach of anyone without deep pockets, so false photos were thankfully rare. The same was true when it came to the so-called “moving pictures” of film and television, which were enormously expensive to shoot and all but impossible to convincingly alter or fake.

Of course, with the advent of computers and image-manipulation software, the costs of counterfeiting dropped dramatically as more and more people gained access to new tools and online resources. Fast-forward a few more years, with technology advancing all the while, and we wind up in the modern age, where we’re all contending with a relatively new form of counterfeit image once thought impossible—faked or altered videos so realistic they can easily be mistaken for the real deal, also known as deep fakes.

What are deep fakes?

Unlike their photoshopped still-image counterparts, deep fakes are videos, but incredibly realistic ones that can include seemingly genuine audio tracks. They’re typically used to impersonate celebrities or political leaders, potentially misinforming viewers about the victim’s real position on an issue. However, it’s important to recognize that deep fakes rely on the unauthorized or illegitimate use of a person’s likeness, so not every uncannily realistic video meets the standard.

Take Rob, our AI video newsletter narrator, for example. His voice sounds natural and his facial expressions and gestures look pretty good, but we’re authorized to use his likeness and aren’t misrepresenting anyone, so he’s not considered a deep fake. He’s meant to look lifelike and a casual observer or passerby might even think he’s real, but that’s the extent of the shenanigans. Even if someone did mistake Rob’s avatar for a live person, he’s a nobody and the information presented is legitimate. Deep fakes are instead about making fake versions of important people say and do things the real-world version never would.

So how good are deep fakes?

Forgive the potty mouth, but they’ve gotten pretty damn good. Two hundred million dollars good, to be precise.

Admittedly that’s $200 million Hong Kong dollars, which is just under $35 million Canadian, but it’s still an astonishingly large amount of money to lose to a deep fake scam. In a nutshell, the CFO of a Hong Kong multinational firm was contacted via a video call by what appeared to be a colleague and convinced to transfer $200 million HKD to the scammers. In other words, deep fake video and audio were used to successfully impersonate a real individual. Now, there are a lot of details we don’t know, like how exactly the CFO was contacted, as the attacker may have needed internal access depending on which software the organization uses. It would also be great to know how exactly the video call worked, as deep faking both video and audio in real time seems unlikely given the computing resources required. But however they did it, nothing changes the fact that a specific individual was plausibly faked.

Unfortunately, detecting a deep fake attack currently requires personally knowing the individual being faked, which is obviously not something an organization can rely on. Technology to better identify deep fakes may or may not already be under development, but even if it is, it’s likely still years away from widespread deployment. That doesn’t mean you can’t take measures to protect yourself, though. Implementing policies and procedures that include monitoring and double-checking contacts and their information can help. It doesn’t guarantee protection, but it does mean fakers would need to fool at least two people rather than just one. Never have a single point of failure.
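To make the “fool at least two people” idea concrete, here’s a minimal sketch of what a two-person approval rule for outgoing transfers might look like in code. Everything here is hypothetical for illustration only: the threshold, the names, and the functions are invented, and a real implementation would live inside your payment or workflow system.

```python
# Illustrative sketch of a two-person (dual-control) approval rule.
# All names and thresholds are hypothetical examples, not a real system.

from dataclasses import dataclass, field

# Transfers above this amount require a second, independent approver.
APPROVAL_THRESHOLD = 10_000

@dataclass
class TransferRequest:
    amount: float
    requested_by: str
    approvals: set = field(default_factory=set)

def approve(request: TransferRequest, approver: str) -> None:
    """Record an approval, rejecting self-approval outright."""
    if approver == request.requested_by:
        raise ValueError("requester cannot approve their own transfer")
    request.approvals.add(approver)

def may_execute(request: TransferRequest) -> bool:
    """Small transfers need one approval; large ones need two distinct people."""
    needed = 2 if request.amount > APPROVAL_THRESHOLD else 1
    return len(request.approvals) >= needed
```

The point isn’t the code itself but the policy it encodes: no single person, no matter how convincing the video call they just received, can move a large sum alone. The second approver should verify the request through a separately known channel, like phoning the colleague back on a number from the company directory.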

For more information about deep fakes and how you can protect yourself, contact a TRINUS cybersecurity professional to get some stress-free IT for yourself.

This Shakespeare quote comes from Macbeth: “We fail! But screw your courage to the sticking-place, And we’ll not fail.”

Be kind to one another, courtesy of your friendly neighbourhood cyber-man.
