Hacked Reality
~A blog on Deepfakes and their dangers by Kashish Jain
Have you watched Mark Zuckerberg boast about having "absolute control over billions of people's stolen data," Barack Obama call Donald Trump a "complete dipshit," or Jon Snow's emotional apologies for the disappointing Game of Thrones finale? If you say yes, you've witnessed a deepfake. Deepfakes, the modern-day equivalent of Photoshopping, create photographs of fictitious events using a type of artificial intelligence known as deep learning; hence, the term "deepfake."
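At the heart of most face-swap deepfakes is a pair of autoencoders that share one encoder: the encoder learns a compact code for pose and expression, while each decoder learns to redraw one specific person's face. The sketch below is a deliberately tiny, assumption-laden illustration of that data flow — real systems use deep convolutional networks trained on thousands of frames, not single random linear maps.

```python
import random

random.seed(0)

# Toy sketch of the shared-encoder / two-decoder deepfake architecture.
# Every matrix here is random and untrained (an assumption for brevity);
# only the wiring of the "swap" is faithful to the real technique.
D, LATENT = 16, 4

def rand_matrix(rows, cols):
    return [[random.uniform(-0.1, 0.1) for _ in range(cols)] for _ in range(rows)]

def matvec(m, v):
    # Plain matrix-vector product standing in for a neural network layer.
    return [sum(w * x for w, x in zip(row, v)) for row in m]

enc = rand_matrix(LATENT, D)     # shared encoder: face -> latent (pose, expression)
dec_a = rand_matrix(D, LATENT)   # decoder specialised on person A's face
dec_b = rand_matrix(D, LATENT)   # decoder specialised on person B's face

face_a = [random.uniform(-1, 1) for _ in range(D)]  # a frame of person A

# Normal reconstruction: A's latent code through A's decoder.
recon = matvec(dec_a, matvec(enc, face_a))
# The "swap" that makes a deepfake: A's latent code (expression, pose)
# fed through B's decoder yields B's face wearing A's expression.
fake = matvec(dec_b, matvec(enc, face_a))
print(len(recon), len(fake))
```

The key point the sketch shows: because the encoder is shared, either decoder accepts the same latent code, which is exactly what makes swapping faces possible.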
Research from the firm Deeptrace found 7,964 deepfake videos online at the beginning of 2019. Just nine months later, that figure had risen to 14,678. It has surely kept growing since.
While impressive, current deepfake technology still falls short of producing flawlessly realistic video footage; a close look is usually enough to spot a deepfake. However, the technology is advancing rapidly, and specialists predict that in the near future deepfakes will be indistinguishable from authentic footage.
DeepFake Infocalypse
The first use case where deepfake technology has been widely adopted, as often happens with new technologies, is pornography. As of September 2019, 96% of deepfake videos online are pornographic, as reported by Deeptrace.
A number of websites dedicated to deepfake pornography have sprung up, garnering hundreds of millions of views over the past two years. Deepfake pornography is almost always non-consensual, involving the artificial compilation of pornographic videos featuring celebrities or personal contacts.
Pornographic deepfakes can jeopardize, intimidate, and inflict psychological harm on individuals. Some might find them amusing, but for some people they are a lifelong trauma. People watch deepfake videos every day without even realising their social impact, laughing, liking and sharing them with others. From these dark corners of the web, the use of deepfakes has begun to spill over into the political realm, where the potential for chaos is even greater.
It doesn’t take much imagination to grasp the harm that could be done if entire populations watched fabricated videos they believed to be real. Imagine fake footage of a politician engaging in bribery or sexual assault right before an election; of U.S. soldiers committing atrocities against civilians abroad; or of President Trump announcing the launch of nuclear weapons against North Korea. In a world where there is even some uncertainty about the authenticity of such clips, the consequences could be dire.
There is still ample room for mischief-making. Last year, Tesla’s stock crashed when Elon Musk smoked a joint on a live web show. In December, Donald Trump flew home early from a NATO meeting after genuine footage emerged of other world leaders apparently mocking him. Will plausible deepfakes shift stock prices, influence voters and provoke religious tension? It seems a safe bet.
Because of the technology’s widespread and easy accessibility, such footage could be created by anyone: state-sponsored actors, political groups, lone individuals.
What technology do you need?
Creating a good deepfake on a typical computer is challenging. Most are produced on high-end desktop machines with powerful graphics cards, or better still, on cloud computing resources, which cuts processing time from days or weeks down to hours. It also takes skill to edit the finished videos, removing flicker and other visual flaws. Nevertheless, a variety of tools are now available to help create deepfakes. Several businesses will generate them for you, handling all the processing in the cloud. There is even a smartphone app, Zao, that lets users add their faces to a library of TV and movie characters the system has been trained on.
How to spot a Deepfake — A nightmare
Users have long been fooled by phishing attacks, and deepfake-enabled phishing attempts will be even harder for ordinary users to detect. Cybersecurity awareness training is a must in any good security program; make sure it covers how to tell a deepfake from the real thing.
The tech behind these attacks is good, but it isn’t perfect. Experts in this field explain that facial features are very difficult to fake convincingly, especially the eyes. If the eyes look unnatural, or the movement of facial features seems out of place, the image has most likely been altered.
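One concrete version of the "watch the eyes" heuristic used in forensic research is the eye aspect ratio (EAR): real people blink, so the EAR should periodically drop near zero, while early deepfakes often showed eerily constant eyes. Below is a minimal sketch assuming eye landmarks in the common dlib-style six-point ordering; in practice you would obtain these landmarks from a face-tracking library, which is outside this sketch.

```python
import math

def eye_aspect_ratio(landmarks):
    """Eye aspect ratio over six (x, y) eye landmarks, ordered p1..p6
    as in the widely used dlib 68-point layout: p1/p4 are the corners,
    p2/p3 the upper lid, p6/p5 the lower lid. A low EAR means a closed
    eye; an EAR that never dips across frames (no blinking) is one
    heuristic red flag for a deepfake."""
    p1, p2, p3, p4, p5, p6 = landmarks

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

# Illustrative landmark sets (hypothetical coordinates, not real data).
open_eye   = [(0, 0), (1, 2), (3, 2), (4, 0), (3, -2), (1, -2)]
closed_eye = [(0, 0), (1, 0.2), (3, 0.2), (4, 0), (3, -0.2), (1, -0.2)]
print(eye_aspect_ratio(open_eye) > eye_aspect_ratio(closed_eye))  # True
```

A detector would track this ratio frame by frame and flag videos where it never falls below a blink threshold.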
Another way to separate a deepfake from the real thing is to apply cybersecurity best practices and a zero-trust philosophy. Verify whatever you see: check the source of the message, and do a reverse image search to find the original, if possible.
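Reverse image search services rely on perceptual hashing: two images that look alike produce hashes that differ in only a few bits, even after re-encoding. The toy below implements the simplest such scheme, an "average hash", on tiny hand-made grayscale grids (hypothetical pixel values); production systems use far more robust variants on downscaled real images.

```python
def average_hash(pixels):
    """Tiny perceptual 'average hash' of a grayscale image given as a
    list of rows: each pixel becomes 1 if it is brighter than the image
    average, else 0. Near-duplicate images yield near-identical hashes."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    return tuple(1 if p > avg else 0 for p in flat)

def hamming(h1, h2):
    """Number of differing bits between two hashes; small means similar."""
    return sum(a != b for a, b in zip(h1, h2))

original  = [[10, 200], [220, 30]]
tampered  = [[12, 198], [219, 33]]   # slight re-encoding noise
unrelated = [[200, 10], [30, 220]]
print(hamming(average_hash(original), average_hash(tampered)))   # 0
print(hamming(average_hash(original), average_hash(unrelated)))  # 4
```

This is why a reverse image search can surface the original frame even when a deepfake has been recompressed or lightly edited.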
When it comes to the privacy of your own data and images, use a digital fingerprint or watermark that makes it harder for someone to create synthetic content from them.
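To make the watermark idea concrete, here is the simplest possible scheme: hiding identification bits in the least significant bit of each pixel. This is a minimal sketch only — real provenance watermarks are designed to survive compression and cropping, which LSB embedding does not.

```python
def embed_watermark(pixels, bits):
    """Hide one watermark bit in the least significant bit of each pixel
    (pixels are 0-255 grayscale values). Changing only the LSB shifts
    each pixel by at most 1, so the change is invisible to the eye."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract_watermark(pixels, n):
    """Read back the first n hidden bits."""
    return [p & 1 for p in pixels[:n]]

image = [120, 33, 250, 7, 64, 199]   # hypothetical pixel values
mark = [1, 0, 1, 1, 0, 1]            # owner's identification bits
stamped = embed_watermark(image, mark)
print(extract_watermark(stamped, 6) == mark)                     # True
print(max(abs(a - b) for a, b in zip(image, stamped)) <= 1)      # True
```

The trade-off is typical of watermarking: imperceptibility is easy, robustness against an adversary who re-encodes the image is the hard part.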
In general, the defences already in place should work against phishing and social engineering attacks. Deepfakes are still an early-stage attack vector, so cybersecurity teams have the advantage of preparing defences while the tools are still maturing. That alone should relieve some insomnia.
Conclusion:
Will deepfakes wreak havoc?
According to the above research and analysis, we can expect more deepfakes that harass, intimidate, demean, undermine and destabilise. But will deepfakes spark major international incidents? Here the situation is less clear. A deepfake of a world leader pressing the big red button should not cause Armageddon. Nor will deepfake satellite images of troops massing on a border cause much trouble: most nations have their own reliable security imaging systems.
Will they undermine trust?
Deepfakes, along with other synthetic media and fake news, have a more pernicious effect: they foster a zero-trust culture in which people cannot tell fact from fiction, or no longer care. And once trust has been damaged, it becomes easier to cast doubt on genuine events.
Check Out The Author:
Instagram: @malivore_2021