Computers have become remarkably good at simulating reality, to the point where that capability is starting to feel unsettling. While some use this technology purely for entertainment, such as in movies or TV shows, others use it to blackmail people or to create fake explicit videos in which a woman's face is swapped with that of a famous actress, producing fabricated media known as a deepfake. Here's everything you should know about deepfakes and how to spot them:
What are Deepfakes?
The term deepfake is derived from deep learning, a branch of Artificial Intelligence. Using this technology, anyone in the world can be stitched or swapped into a video or photo they were never actually part of. It used to take entire studios full of experts to create such effects, but now, with the help of deepfake software, automatic computer-graphics and machine-learning systems can produce them almost instantly, provided the person knows how to operate the software. A famous, often-cited example is the digital resurrection of the late actor Paul Walker in Fast & Furious 7.
How are these created?
Deepfakes are created using software that combines advanced machine learning with Artificial Intelligence. Some of the software used to create deepfakes includes FaceSwap, the Chinese app Zao, Reface, and others. Even companies like Adobe offer tools for morphing or altering facial features, such as Project Morpheus. However, these apps are intended mainly for entertainment and are mostly harmless.
For a professional to create a deepfake, they first need access to hours of video footage of the person. This footage is used to train a neural network so it learns how the person looks under different lighting conditions and from various angles. The trained network is then combined with computer-generated imagery (CGI) to produce a copy of the person's face and place it in the intended footage. A minimal sketch of the core idea appears below.
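To make that process more concrete, here is a small, illustrative sketch of the shared-encoder/two-decoder face-swap idea popularized by tools like FaceSwap, written in PyTorch. Everything here is an assumption for illustration: the images are random placeholder tensors and the network sizes are arbitrary, whereas a real pipeline would train on thousands of aligned face crops extracted from footage of each person.

```python
# Illustrative sketch of the shared-encoder / two-decoder face-swap idea
# (the approach popularized by tools like FaceSwap). All tensors are random
# placeholders; a real pipeline would use aligned face crops from hours of
# footage of each person.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),                          # shared latent code
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(), # 32 -> 64
        )
    def forward(self, z):
        x = self.fc(z).view(-1, 64, 16, 16)
        return self.net(x)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()   # one decoder per identity
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters()),
    lr=1e-4,
)
loss_fn = nn.L1Loss()

faces_a = torch.rand(8, 3, 64, 64)  # placeholder batch of person A's face crops
faces_b = torch.rand(8, 3, 64, 64)  # placeholder batch of person B's face crops

for step in range(100):  # a real run would use far more data and training steps
    opt.zero_grad()
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) + \
           loss_fn(decoder_b(encoder(faces_b)), faces_b)
    loss.backward()
    opt.step()

# The "swap": encode person A's face, then decode it with person B's decoder.
swapped = decoder_b(encoder(faces_a))
```

The key trick is that both identities share a single encoder, so feeding person A's face through person B's decoder re-renders A's pose, expression, and lighting with B's appearance.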
The most advanced class of machine-learning algorithms, generative adversarial networks (GANs), is expected to become the main engine of deepfake development in the future, because GAN-generated faces are near-impossible to tell apart from real ones. However, GANs require huge amounts of training data and are currently better suited to generating or morphing still images than to swapping faces in video. A simplified GAN training loop is sketched after this paragraph.
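For contrast, the adversarial setup works roughly as follows. The sketch below is a generic toy GAN training loop in PyTorch, not the code of any specific deepfake tool; the "real" images are random placeholders, and the tiny fully connected networks stand in for the large convolutional generators used in practice.

```python
# Toy GAN training loop: a generator learns to fool a discriminator that is
# simultaneously learning to separate real images from generated ones.
import torch
import torch.nn as nn

latent_dim = 64

generator = nn.Sequential(           # noise vector -> fake 32x32 image (flattened)
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, 32 * 32), nn.Tanh(),
)
discriminator = nn.Sequential(       # image -> probability that it is real
    nn.Linear(32 * 32, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

real_images = torch.rand(16, 32 * 32) * 2 - 1   # placeholder "real" data in [-1, 1]
real_labels = torch.ones(16, 1)
fake_labels = torch.zeros(16, 1)

for step in range(200):
    # 1) Train the discriminator to tell real images from generated ones.
    noise = torch.randn(16, latent_dim)
    fake_images = generator(noise).detach()
    d_loss = bce(discriminator(real_images), real_labels) + \
             bce(discriminator(fake_images), fake_labels)
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to fool the discriminator into predicting "real".
    noise = torch.randn(16, latent_dim)
    g_loss = bce(discriminator(generator(noise)), real_labels)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

The arms race between the two networks is what makes the outputs so convincing, and it is also why GANs need so much training data before the generator produces realistic faces.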
Threat from Deepfakes
The threat from deepfakes is especially persistent for women, as famous actresses' faces are being inserted into pornographic content. Even though researchers have made significant progress in detecting realistic-looking deepfakes, many expect that deepfakes could someday become a powerful weapon for hate speech, political misinformation, or spreading lies on social platforms. There are also a growing number of reports of deepfakes being used to create fake revenge-pornography videos.
What is being done about such a threat?
Not many countries, India included, are actively working to address this threat. Several US states have criminalized deepfake pornography, and countries such as China and South Korea have also taken steps to restrict the use of deepfakes. Companies such as Facebook have recruited researchers from Berkeley, Oxford, and other institutions to build a deepfake detector and help enforce its ban on such content.
Twitter has also updated its policies and is exploring ways to label any deepfakes that are not removed outright. YouTube, too, reiterated a while back that it will not allow deepfake videos related to the U.S. election, voting procedures, or the 2020 U.S. census.