What Is A Deepfake?
Learn what deepfakes are, how they are made, how to recognize them, and how to fight them.
A deepfake is a form of artificial intelligence (AI) that can be used to create convincing hoax images, sounds, and videos. The term "deepfake" combines "deep learning" and "fake."
Deepfake compiles hoaxed images and sounds and stitches them together using machine learning algorithms. As a result, it creates people and events that do not exist or did not actually happen.
Deepfake technology is most notably used for nefarious purposes, such as misleading the public by spreading false information or propaganda. For example, a deepfake video could show a world leader or celebrity saying something they never said, a form of "fake news" that can shift public opinion.
Deepfake technology has both illicit and legitimate applications. What matters is the intent behind a deepfake and how it is used.
Deepfake technology can be used for a wide variety of malicious purposes, including:
Cyber criminals can use deepfake technology to create scams, false claims, and hoaxes that undermine and destabilize organizations.
For example, an attacker could create a false video of a senior executive admitting to criminal activity, such as financial crimes, or making false claims about the organization’s activity. Aside from costing time and money to disprove, this could have a major impact on the business’s brand, public reputation, and share price.
A major threat that deepfakes pose is nonconsensual pornography, which accounts for up to 96% of deepfakes on the internet. Most of this content targets celebrities, but deepfake technology is also used to create revenge porn.
Deepfake videos have been used to spread fake videos of world leaders like Donald Trump and Barack Obama, which raises concerns that it could be used for election manipulation. For example, there were widespread concerns that deepfake videos would affect the 2020 U.S. election campaign.
Deepfake technology has been used in social engineering scams, with audio deepfakes fooling people into believing trusted individuals have said something they did not. For example, the CEO of a U.K. energy firm was tricked into believing he was speaking with the chief executive of the firm's German parent company. The deepfaked voice convinced the CEO to transfer €220,000 to a supposed Hungarian supplier's bank account.
Deepfakes can also be used in automated disinformation attacks, spreading conspiracy theories and false claims about political and social issues. A well-known example is a fake video of Facebook founder Mark Zuckerberg claiming to have "total control of billions of people's data," thanks to Spectre, the fictional organization in the James Bond novels and movies.
Deepfake technology can be used to create new identities and steal the identities of real people. Attackers use the technology to create false documents or fake their victim’s voice, which enables them to create accounts or purchase products by pretending to be that person.
When properly disclosed, deepfakes are generally considered a fair and acceptable use of the technology. The guiding principle is whether the deepfake violates someone's privacy or intellectual property rights, or whether it misleads consumers, the general public, or the individuals depicted without their being properly informed.
Deepfakes have been used for parody or satire in both political speech and entertainment.
Deepfake technology has been used to demonstrate the capabilities of the field in news reports and technical show demonstrations.
Both professional and amateur historians and media enthusiasts have used deepfake technologies to animate old photos and paintings, or to recreate historical events.
Deepfake technology has been used to simulate "retro" or pseudo-historical footage including reimagining popular films, books, posters, and other media as if they had been produced decades before the original art was created.
The term "deepfake" first gained public attention in 2017, when a Reddit user with the username "deepfakes" shared doctored pornographic videos on the site. He did so by using Google's open-source, deep-learning technology to swap celebrities' faces onto the bodies of pornographic performers. Modern deepfakes are descended from the original code used to create these videos.
There are several methods for creating deepfakes. One of the most popular uses a generative adversarial network (GAN), in which two neural networks are trained against each other: a generator produces fake images while a discriminator learns to distinguish real images from fakes. Each network improves by competing with the other until the generated images become convincingly realistic.
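The adversarial objective behind a GAN can be illustrated with a minimal sketch. This is a toy example (NumPy only, with scalar "samples" standing in for images, and all names purely illustrative); it shows how the two competing losses are computed, not how a real deepfake model is built.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# "Real" data: samples from the distribution the generator tries to mimic.
real = rng.normal(loc=4.0, scale=1.0, size=64)

# Toy generator: maps random noise z to fake samples via g(z) = a*z + b.
a, b = 1.0, 0.0
z = rng.normal(size=64)
fake = a * z + b

# Toy discriminator: logistic classifier D(x) = sigmoid(w*x + c)
# that outputs the probability a sample is real.
w, c = 1.0, 0.0
d_real = sigmoid(w * real + c)
d_fake = sigmoid(w * fake + c)

# Discriminator loss: push D(real) toward 1 and D(fake) toward 0.
d_loss = -np.mean(np.log(d_real)) - np.mean(np.log(1.0 - d_fake))

# Generator loss: push D(fake) toward 1, i.e. fool the discriminator.
g_loss = -np.mean(np.log(d_fake))

print(d_loss, g_loss)
```

In a real GAN, both models are deep networks and the two losses are minimized in alternation by gradient descent, so the generator's output distribution gradually approaches the real one.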
Another method uses autoencoders, the AI models behind face-replacement and face-swapping technology. An autoencoder has two parts: an encoder, which compresses an image of a face into a compact latent representation, and a decoder, which reconstructs the face from that representation. Face-swapping applications train a shared encoder with two decoders, one per identity. Encoding one person's face and decoding it with the other person's decoder superimposes one face onto a completely different body, enabling cyber criminals to create entirely new images.
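The shared-encoder, two-decoder swap mechanism can be sketched with plain linear maps. This is an untrained, illustrative skeleton (all matrices are random placeholders for learned weights; real systems use deep convolutional networks trained on many images of each person):

```python
import numpy as np

rng = np.random.default_rng(1)

DIM, LATENT = 16, 4  # toy "image" size and latent bottleneck size

# One shared encoder compresses any face into a small latent code.
encoder = rng.normal(size=(LATENT, DIM))

# One decoder per identity learns to reconstruct that identity's face.
decoder_a = rng.normal(size=(DIM, LATENT))
decoder_b = rng.normal(size=(DIM, LATENT))

face_a = rng.normal(size=DIM)  # stand-in for an image of person A

# Normal reconstruction: encode A's face, decode with A's decoder.
recon_a = decoder_a @ (encoder @ face_a)

# The swap: encode A's expression and pose, but decode with B's
# decoder, producing "person B" wearing person A's expression.
swapped = decoder_b @ (encoder @ face_a)

print(recon_a.shape, swapped.shape)
```

Because the encoder is shared, the latent code captures identity-independent features (pose, expression, lighting), and whichever decoder is applied determines whose face appears in the output.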
Deepfakes can be spotted by recognizing unusual activity or unnatural movement, including:
A lack of eye movement is a telltale sign of a deepfake. Natural eye movement is hard to replicate because people's eyes usually follow and react to the person they are speaking with.
A lack of blinking is another common flaw in deepfaked videos, as replicating the natural, regular human blink is difficult for deepfake technology.
Deepfake technology involves morphing facial images, with faces simply being stitched from one image over another. This typically results in unusual or unnatural facial expressions.
If a person's body does not appear to have a natural shape, the footage is likely fake. Deepfake technology largely focuses on faces rather than entire bodies, which can lead to unnatural body shapes.
Deepfake models often fail to generate realistic individual characteristics, such as frizzy or messed-up hair.
Deepfakes often fail to replicate the natural colors of images and videos, resulting in abnormal skin tones.
Deepfake images will often feature inconsistent or awkward-looking head and body positioning. Examples of this include jerky movements and distorted images when people move or turn their heads.
Similar to the reasons for unnatural skin tones, deepfake images are also prone to discoloration, misplaced shadows, and unusual lighting.
Deepfake videos often feature lip-syncing that does not align with the words being spoken by the people in the video.
People or animals may move in unnatural ways, or body parts may vanish as they walk into each other or objects.
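The blinking cue above is the basis of a common automated heuristic: the eye aspect ratio (EAR), computed from six facial landmarks around each eye, drops toward zero when the eye closes, so a video with an implausibly low blink rate can be flagged. A minimal sketch, assuming the conventional p1–p6 landmark ordering (the landmark coordinates here are made up for illustration; a real detector would obtain them from a facial-landmark model):

```python
import numpy as np

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks around one eye in p1..p6 order
    (p1/p4 are the horizontal corners, p2/p3 the upper lid,
    p6/p5 the lower lid). The ratio falls toward 0 as the eye closes."""
    eye = np.asarray(eye, dtype=float)
    vertical = (np.linalg.norm(eye[1] - eye[5]) +
                np.linalg.norm(eye[2] - eye[4]))
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

# Illustrative landmarks: a wide-open eye vs. a nearly closed one.
open_eye   = [(0, 0), (1, 2), (2, 2), (3, 0), (2, -2), (1, -2)]
closed_eye = [(0, 0), (1, 0.2), (2, 0.2), (3, 0), (2, -0.2), (1, -0.2)]

BLINK_THRESHOLD = 0.2  # tunable; frames below this count as eye-closed
print(eye_aspect_ratio(open_eye) > BLINK_THRESHOLD)    # True
print(eye_aspect_ratio(closed_eye) < BLINK_THRESHOLD)  # True
```

Counting how many frames per minute fall below the threshold and comparing that rate against normal human blinking (roughly 15–20 blinks per minute) is one simple way such detectors flag suspicious footage.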
Shallowfakes are videos that are taken out of context or edited with simpler tools. A well-known example is a video of Nancy Pelosi, the U.S. Speaker of the House, edited to make her voice sound slurred and imply she was drunk.
Steps have already been taken to combat deepfakes and prevent the images and videos from being shared online.
Facebook has hired researchers from universities to help it build a deepfake detector, which enforces its ban on deepfakes. Twitter has policies in place to prevent fake content and is working to tag deepfake images that are not immediately removed. YouTube also vowed to block any deepfake content related to the 2020 U.S. election and census.
Researchers have been working on data science solutions that detect deepfakes. Many of these have quickly become ineffective as the attackers’ technology evolves and creates more convincing results.
Filtering programs are also working to prevent deepfakes. AI firm DeepTrace’s program acts in the same way as an antivirus or spam filter and diverts fake content into a quarantine zone, while Reality Defender, from AI Foundation, aims to tag manipulated content before it can do any damage.
One of the best ways to prevent deepfakes is for employees to understand the signs of fake images and videos. Corporate best practices include advising users on the telltale signs of cyberattacks and fraudulent online activity.
Laws have already been passed in several U.S. states to criminalize deepfake pornography and prevent the technology's use around elections or to harm individuals' reputations. Deepfake legislation was also introduced into the National Defense Authorization Act (NDAA) in December 2019.