by Ahtisab Ul Haq
In his statement on November 3, 2020, India's permanent representative to the United Nations, TS Tirumurti, highlighted, among other issues, the (mis)use of deepfakes to disturb global peace and fuel misinformation, division, and political instability. Concerns about the misuse of deepfake technology have been at centre stage in the new age of computational propaganda.
The elephantine volume of digital photos and videos on various social media platforms may drive the modern man into a sinister digital apocalypse. Synthetic media can create possibilities and opportunities for all people, regardless of who they are, where they are, and how they listen, speak, or communicate. It can give people a voice, a purpose, and the ability to make an impact at scale and with great speed. But every innovative technology can be weaponised to inflict harm.
What Is Deepfake?
Deepfake is a new media technology wherein a person simply takes existing text, picture, video, or audio and then manipulates it – ‘fakes’ it to look like someone else using advanced artificial intelligence (AI) and neural network (NN) technology.
These data-driven machines need a sizeable amount of data, a few hundred photos or a few hours of video of a person, and can then create an entirely new video that seems real. Two decades ago this would have been impossible; now, with data freely available on social media, it is remarkably simple. In India, the first noticeable use of the deepfake technique came during the parliamentary election of 2019. Since then, in Asia alone, the number of deepfake videos has been doubling every six months.
How Is It Created?
The main ingredient in deepfakes is machine learning, which has made it possible to produce deepfakes much faster at a lower cost. To make a deepfake video of someone, a creator would first train a neural network on many hours of real video footage of the person to give it a realistic “understanding” of what he or she looks like from many angles and under different lighting.
Then they would combine the trained network with computer-graphics techniques to superimpose a copy of the person onto a different actor. A few years ago it took a considerable amount of time to synthesize a near-perfect chunk of video, but with today's better technology-driven systems, the synthesis time is shrinking day after day.
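The training-then-swapping process described above is commonly built as one shared encoder paired with a separate decoder per identity: the encoder learns what any face looks like in general, while each decoder learns to reconstruct one specific person. The sketch below illustrates that structure with tiny random-weight networks; all dimensions and names are illustrative stand-ins, not a real trained system.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(in_dim, out_dim):
    # Random weight matrix standing in for a trained layer.
    return rng.standard_normal((in_dim, out_dim)) * 0.1

# One shared encoder compresses any face into a common latent code...
W_enc = layer(64, 16)
# ...and each identity gets its own decoder, trained only on that person's footage.
W_dec_a = layer(16, 64)   # reconstructs person A's face
W_dec_b = layer(16, 64)   # reconstructs person B's face

def encode(face):
    return np.tanh(face @ W_enc)

def decode(code, W_dec):
    return np.tanh(code @ W_dec)

# The swap: encode a frame of person A, but decode it with person B's decoder,
# producing B's likeness driven by A's pose and expression.
frame_of_a = rng.standard_normal(64)       # stand-in for a tiny face crop
swapped = decode(encode(frame_of_a), W_dec_b)
print(swapped.shape)
```

In practice the encoder and decoders are deep convolutional networks trained on the hours of footage the article mentions, but the swap step is exactly this cross-wiring of encoder and decoder.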
What Can It Do?
“We are already at the point where you can’t tell the difference between deepfakes and the real thing. However, the greater threat is the potential for deepfakes to be used in political disinformation campaigns,” Professor Hao Li of the University of Berkeley told the BBC. “Elections are already being manipulated with fake news, so imagine what would happen if you added sophisticated deepfakes to the mix?”
It could be even more dangerous in developing countries, where digital literacy is more limited; there, fabricated content could truly shape how society reacts, and could even spread material that gets people killed. A deepfake can alter the democratic discourse, undermine trust in institutions, and impair diplomacy. False information about institutions, public policy, and politicians, powered by a deepfake, can be exploited to spin a story and manipulate belief.
Videos or images created to influence people can stir personal and emotional feelings of hatred, lawlessness, and terror. Deepfakes can cause short- and long-term social harm and accelerate the already declining trust in news media. Such erosion can contribute to a culture of factual relativism, fraying the increasingly strained fabric of civil society.
The most important factor in any discussion of the impact of deepfakes is revenge porn. Deepfakes can create an entire pornographic video of any person, especially women, and wreak havoc on the social fabric of society. The very first malicious use of a deepfake was seen in pornography, inflicting emotional and reputational harm and, in some cases, violence on the individual. Pornographic deepfakes can threaten, intimidate, and inflict psychological harm, reducing women to sexual objects. Deepfake pornography overwhelmingly targets women.
Detecting deepfakes is a hard problem. Amateurish deepfakes can, of course, be detected by the naked eye. Other signs that machines can spot include a lack of eye blinking or shadows that look wrong. The machine-software combination that generates deepfakes is getting better all the time, and soon we will have to rely on digital forensics to detect deepfakes, if we can, in fact, detect them at all.
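One of the tell-tale signs mentioned above, unnaturally infrequent blinking, can be turned into a crude heuristic. Assuming a face tracker supplies a per-frame "eye openness" score (simulated below with made-up numbers), a clip can be flagged when its blink rate falls far below the human norm of roughly 15 to 20 blinks per minute. The threshold and sample data here are illustrative only; real forensic tools are far more sophisticated.

```python
def count_blinks(openness, threshold=0.2):
    """Count downward crossings of the openness threshold (one per blink)."""
    blinks, closed = 0, False
    for value in openness:
        if value < threshold and not closed:
            blinks += 1
            closed = True
        elif value >= threshold:
            closed = False
    return blinks

def looks_synthetic(openness, fps=30, min_blinks_per_minute=5):
    """Flag a clip whose blink rate is implausibly low for a real person."""
    minutes = len(openness) / fps / 60
    return count_blinks(openness) / minutes < min_blinks_per_minute

# A real 10-second clip might contain two or three blinks; a crude deepfake, none.
real_clip = [1.0] * 100 + [0.1] * 5 + [1.0] * 100 + [0.1] * 5 + [1.0] * 90
fake_clip = [1.0] * 300
print(looks_synthetic(real_clip), looks_synthetic(fake_clip))  # False True
```

Heuristics like this one are fragile: as the article notes, the generators improve constantly, and newer deepfakes do blink, which is exactly why detection keeps shifting toward heavier digital-forensic methods.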
How To Stop?
To defend the truth and secure freedom of expression, we need a multi-stakeholder and multi-modal approach. Collaborative actions and collective techniques across legislative regulations, platform policies, technology intervention, and media literacy can provide effective and ethical countermeasures to mitigate the threat of malicious deepfakes.
Generally, people need to make themselves aware of the various challenges and threats posed by this technology. We should educate ourselves about what it is and how it is going to affect us, and organize events to create awareness among the general public. Digital literacy is going to play a decisive role in this.
The government’s role is most important. Its primary duty is to bring in legislation to check the dangers of deepfakes. In its secondary role, it can help disseminate information to make the public aware on a greater scale, and mobilize the various law enforcement agencies and institutions to take the necessary steps to counter them. It can also undertake research on countermeasures.
Companies like Google and Facebook will have to develop their infrastructure to detect deepfakes at the source and penalize those who try to circulate such material on their platforms. They can build deepfake-busting technology to prevent its unchecked use. They must act as soon as possible, for with deepfakes, “there’s little real recourse after the video or audio is out”.
(The author holds a Masters in Psychology. The opinions expressed in this article are those of the author and do not purport to reflect the opinions or views of Kashmir Life.)