Deepfake technology is rapidly becoming easier and quicker to use, opening the door to a new form of cybercrime. Although it is still mostly seen as relatively harmless, or even humorous, this craze could take a more sinister turn and sit at the heart of political scandals, cybercrime, or fake videos we can barely imagine today. And it won’t be just public figures who bear the brunt.
A deepfake is a piece of fake content produced with AI-based human-image synthesis, either generated from scratch or built on existing video, and designed to replicate the look and sound of a real human. Such videos can look incredibly real, and many of those currently circulating show celebrities or public figures saying something outrageous or untrue.
New research shows a huge increase in the creation of deepfake videos, with the number online almost doubling in the last nine months alone. Deepfakes are improving in quality at a swift rate, too. The video showing Bill Hader morphing effortlessly between Tom Cruise and Seth Rogen is just one example of how authentic these videos are looking, and sounding. A quick search on YouTube for the term ‘deepfake’ makes it clear that we are only seeing the tip of the iceberg of what is to come.
In fact, we have already seen deepfake technology used for fraud: a deepfaked voice was reportedly used to scam a CEO out of a large sum of money. The CEO of an unnamed UK firm is believed to have thought he was on the phone with his boss, and followed the instruction to immediately transfer €220,000 (roughly US$244,000) to a Hungarian supplier’s bank account. If it is this easy to influence someone simply by asking over the phone, we will surely need better security in place to mitigate this threat.
Fooling the naked eye
We have also seen the DeepNude app, which could turn a photo of any clothed person into a topless image in seconds. Although, luckily, this particular app has since been taken offline, what if it returns in another form with a vengeance, able to create convincingly authentic-looking video?
There is also evidence that the production of these videos is becoming a lucrative business, especially in the pornography industry. The BBC says “96% of these videos are of female celebrities having their likenesses swapped into sexually explicit videos – without their knowledge or consent”.
A recent Californian bill has taken a leap of faith and made it illegal to create a pornographic deepfake of someone without their consent, with a penalty of up to $150,000. But chances are that no legislation will be enough to deter some people from fabricating these videos.
To be sure, an article in The Economist notes that making a convincing deepfake currently requires a serious amount of video footage and/or voice recordings, even for a short clip.
Having said that, in the not-too-distant future it may be entirely possible to create, from just a few short Instagram stories, a deepfake that fools the subject’s followers online, or anyone else who knows them. We may see some unimaginable videos appearing of people closer to home – the boss, our colleagues, our peers, our family. Deepfakes may also be used for bullying in schools, the office or even further afield.
Furthermore, cybercriminals will no doubt use such technology to spearphish victims. Deepfakes keep getting cheaper to create and are becoming near-impossible to detect with the human eye alone. As a result, all that fakery could very easily muddy the water between fact and fiction, which in turn could push us to trust nothing – not even what our own senses tell us to believe.
Heading off the very real threat
So, what can be done to prepare us for this threat? First, we need to better educate people that deepfakes exist, how they work, and the damage they can cause. We will all need to learn to treat even the most realistic videos we see as potentially total fabrications.
Secondly, technology desperately needs better detection of deepfakes. Research is already under way, but it is nowhere near where it needs to be. Although machine learning is at the heart of creating deepfakes in the first place, something needs to act as the antidote: detecting them without relying on human eyes alone.
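In practice, the “antidote” the article describes usually means automated classifiers that score individual video frames for signs of manipulation and then aggregate those scores into a verdict for the whole clip. The sketch below illustrates only that aggregation step, under stated assumptions: the function name, thresholds, and per-frame scores are all hypothetical, and a real detector would produce the scores with a trained model rather than hard-coded numbers.

```python
# Toy illustration of video-level aggregation for deepfake detection.
# In a real pipeline, per-frame scores would come from a trained
# classifier (e.g. a CNN scoring each frame); here they are made up.

def video_verdict(frame_scores, threshold=0.5, min_flagged_ratio=0.3):
    """Flag a video as a suspected deepfake when enough frames look fake.

    frame_scores: per-frame 'probability of being fake' values (0..1).
    threshold: score above which a single frame counts as suspicious.
    min_flagged_ratio: fraction of suspicious frames needed to flag the video.
    """
    if not frame_scores:
        raise ValueError("no frames to score")
    flagged = sum(1 for score in frame_scores if score > threshold)
    return flagged / len(frame_scores) >= min_flagged_ratio


if __name__ == "__main__":
    genuine = [0.05, 0.1, 0.2, 0.08, 0.12]   # mostly low scores
    tampered = [0.9, 0.85, 0.4, 0.95, 0.7]   # mostly high scores
    print(video_verdict(genuine))    # False
    print(video_verdict(tampered))   # True
```

Aggregating across frames, rather than trusting any single frame, matters because deepfake artefacts are often intermittent; the thresholds here are arbitrary placeholders that a real system would tune on labelled data.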
Finally, social media platforms need to recognise the threat posed by deepfakes: mix a shocking video with social media and it tends to spread very rapidly, with potentially detrimental impact on society.