Deepfakes are videos, images, or audio files that have been artificially generated with deep learning models, for example by taking existing video footage and replacing the faces in it. They are intended to appear as realistic as possible, even though they were generated by an ML model. Besides their use in private videos, deepfakes can also be used to spread targeted misinformation.
How are deepfakes made?
Currently, two main technical approaches are used to produce deepfake videos.
Autoencoders are machine learning models that consist of an encoder and a decoder. Their original purpose is to learn a compressed yet information-rich representation of unstructured data. For example, the same image can be used as both input and output: the autoencoder then learns a vector representation of the image (the "code" in the diagram) that is as compressed as possible while still capturing all of its important features. The decoder uses this vector to generate the original image again. The better the learned representation, the more realistic the reconstructed image.
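To make this concrete, here is a minimal sketch of such an autoencoder in PyTorch. The layer sizes, the 64-dimensional code vector, and the flattened 64x64 input are illustrative assumptions, not part of any particular deepfake implementation.

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    """Minimal autoencoder: compresses a flattened image into a small
    code vector and reconstructs the image from that code."""

    def __init__(self, image_dim: int = 64 * 64, code_dim: int = 64):
        super().__init__()
        # Encoder: image -> compact code vector
        self.encoder = nn.Sequential(
            nn.Linear(image_dim, 512),
            nn.ReLU(),
            nn.Linear(512, code_dim),
        )
        # Decoder: code vector -> reconstructed image
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 512),
            nn.ReLU(),
            nn.Linear(512, image_dim),
            nn.Sigmoid(),  # pixel values in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        code = self.encoder(x)
        return self.decoder(code)


# Training uses the same image as input and target, so the model is
# forced to learn a compressed representation of its important features.
model = Autoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

images = torch.rand(32, 64 * 64)  # stand-in for a batch of real face images
optimizer.zero_grad()
reconstruction = model(images)
loss = loss_fn(reconstruction, images)
loss.backward()
optimizer.step()
```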
A total of two autoencoders are trained for a deepfake. The first model is fed with images or videos of the person who is to appear in the final product; in most cases these are celebrities, politicians, or athletes, in our example person A. The second model is trained on images of another person (person B), who provides the facial expressions or gestures to be imitated.
Once both models are trained, an image of person B is encoded with the encoder of the second model. The resulting vector is then fed into the decoder of the first model, which creates an image that looks like person A but has taken over the movements and facial expressions of person B.
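Reusing the illustrative Autoencoder class from the sketch above, the swap itself is simply a recombination of the trained parts. This is a conceptual sketch, not a production face-swap pipeline; real systems work on aligned face crops and convolutional networks.

```python
import torch

# In practice, model_a would be trained on images of person A and
# model_b on images of person B, as described in the text.
model_a = Autoencoder()  # its decoder renders person A's face
model_b = Autoencoder()  # its encoder captures person B's pose and expression

frame_of_b = torch.rand(1, 64 * 64)  # stand-in for one video frame of person B

with torch.no_grad():
    code = model_b.encoder(frame_of_b)   # B's expression as a code vector
    fake_frame = model_a.decoder(code)   # rendered with person A's face
```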
The so-called Generative Adversarial Networks (GANs) are the second way to train an ML model to create deepfakes. In short, two neural networks are trained together. The first is trained to produce artificial images that share as many features as possible with the original training images. The second network, in turn, tries to distinguish the artificially created images from the originals. The two networks thus compete against each other, and both get better and better as a result.
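The adversarial game can likewise be sketched in a few lines of PyTorch. The tiny fully connected generator and discriminator, the 100-dimensional noise vector, and the learning rates are assumptions chosen only to keep the example short.

```python
import torch
import torch.nn as nn

# Generator: turns random noise into a fake (flattened) image
generator = nn.Sequential(
    nn.Linear(100, 256), nn.ReLU(),
    nn.Linear(256, 64 * 64), nn.Sigmoid(),
)
# Discriminator: outputs the probability that an image is real
discriminator = nn.Sequential(
    nn.Linear(64 * 64, 256), nn.ReLU(),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

real_images = torch.rand(32, 64 * 64)  # stand-in for real training images
noise = torch.randn(32, 100)

# 1) Train the discriminator to separate real from fake images
fake_images = generator(noise).detach()
loss_d = bce(discriminator(real_images), torch.ones(32, 1)) + \
         bce(discriminator(fake_images), torch.zeros(32, 1))
opt_d.zero_grad()
loss_d.backward()
opt_d.step()

# 2) Train the generator to fool the discriminator
fake_images = generator(noise)
loss_g = bce(discriminator(fake_images), torch.ones(32, 1))
opt_g.zero_grad()
loss_g.backward()
opt_g.step()
```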
What are the types of deepfakes?
Training such models, and thus creating convincing deepfakes, is very time-consuming and computationally intensive. Only thanks to great advances in Graphics Processing Units (GPUs), which have significantly lowered training costs, has this technique become accessible to the masses. Most deepfake files fall into one of the following categories:
- Face Swapping: The face and facial expressions of person A are projected onto the body of person B. In a video or image, even the entire body of person B can be replaced with that of person A.
- Body Puppetry: Movements, gestures, or facial expressions of person A are recorded and then artificially transferred to person B.
- Voice Swapping: An arbitrary written text is rendered as authentically as possible in the voice of a given person. This method can also be combined with body puppetry, for example.
How can you detect deepfakes?
High-quality deepfakes are difficult or even impossible to detect with the naked eye, especially for novices in this field. In general, there are two approaches to unmasking such fake video or audio files.
The first approach is less concerned with the specific file and more with the circumstances. The following questions can be helpful in dealing with deepfakes:
- Would the person shown really do or say something like that? Is it plausible that what is shown really happened?
- Can you find other sources, e.g. videos, newspaper articles, etc., that confirm what is shown?
- Can you find other footage of the same scene from a different angle?
If these questions can be answered with “yes”, the risk of falling victim to a deepfake is significantly lower. Beyond that, however, there are more detailed and technical questions that can help identify a deepfake:
- Are there typical deepfake features, e.g. a perfectly symmetrical face, crooked glasses, two different earrings, or similar?
- Do the lip movements look human? Do they match the spoken text?
- Does the person blink unusually often?
For training purposes, we have linked some videos at the end of the article that are proven deepfakes. Feel free to test whether you would have recognized them right away. There are, however, also software tools and programs that help detect such fakes.
There are several types of fake-detection software. Deepfake detection apps use machine learning and other techniques to analyze videos and images for signs of tampering. Video and audio analytics software can detect deepfakes by analyzing the audio and visual features of a video, such as eye movements, blink patterns, and audio patterns.
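As a rough illustration of what such tools automate, the following sketch counts blinks from per-frame eye landmark data using the well-known eye aspect ratio heuristic. The landmark layout, threshold, and frame rate are assumptions; real detection software combines many such signals.

```python
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: six (x, y) landmarks around one eye, ordered as in common
    facial-landmark schemes. The ratio drops sharply when the eye closes."""
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def count_blinks(ear_per_frame: list[float], threshold: float = 0.2) -> int:
    """Count open-to-closed transitions of the eye aspect ratio (EAR)."""
    blinks, eye_open = 0, True
    for ear in ear_per_frame:
        if eye_open and ear < threshold:
            blinks, eye_open = blinks + 1, False
        elif ear >= threshold:
            eye_open = True
    return blinks

# Stand-in data: EAR values for each frame of a 30 fps clip,
# in practice produced by eye_aspect_ratio() on tracked landmarks.
ear_per_frame = [0.30] * 300 + [0.08] * 4 + [0.30] * 300  # one blink
fps = 30
minutes = len(ear_per_frame) / (fps * 60)
print(f"Estimated blink rate: {count_blinks(ear_per_frame) / minutes:.1f} per minute")
```

An unusually low or perfectly regular blink rate would then be one of several hints that a video may be synthetic.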
Some companies are exploring the use of blockchain-based certification systems to verify the authenticity of digital media and create a digital signature or watermark to confirm that a video or image has not been tampered with. Researchers are developing deep-learning algorithms to detect forgeries by using machine learning to analyze videos and images for signs of tampering.
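The certification idea ultimately boils down to publishing a cryptographic fingerprint of the original file. The following minimal sketch uses only Python's standard library; the secret key and file names are placeholders, and real systems would anchor the signature in a registry or blockchain.

```python
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-real-signing-key"  # placeholder

def fingerprint(path: str) -> str:
    """SHA-256 hash of a media file; any later manipulation changes it."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def sign(fp: str) -> str:
    """Keyed signature over the fingerprint, published by the creator."""
    return hmac.new(SECRET_KEY, fp.encode(), hashlib.sha256).hexdigest()

def verify(path: str, published_signature: str) -> bool:
    """Recompute the signature and compare it with the published one."""
    return hmac.compare_digest(sign(fingerprint(path)), published_signature)

# Hypothetical usage: the publisher signs "press_video.mp4" once and
# publishes the signature; anyone can later check an allegedly identical
# copy with verify("copy_of_video.mp4", published_signature).
```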
What is the danger of deepfakes?
Deepfakes can pose a threat to us in many areas of everyday life.
For example, these artificial files can be used for so-called CEO fraud: the victim receives a call that sounds deceptively like their superior or even top management and aims to trick them into transferring money to fraudsters. When we hear what sound like the real voices of colleagues or superiors, we are far less suspicious than when we receive a phishing email with a malicious link attached.
Beyond that, however, the widespread distribution of high-quality deepfakes poses far more serious dangers. These media files can be used to spread targeted disinformation by creating and distributing offensive videos or audio files. Not only does this put individuals in a bad light, but in the worst case it can even lead to upheaval in society.
How are these counterfeits handled legally?
The advent of deepfakes has raised several legal and ethical considerations that must be taken into account. Here are some of the key issues:
- Misuse of personal information: The creation of deepfakes that use a person’s image, voice, or personal information without their consent raises privacy issues. Unauthorized use of a person’s image can also lead to reputational damage and even financial loss.
- Fraud and misinformation: Deepfakes can be used to create false or misleading information, such as fake news or political propaganda. The spread of such content can lead to social and political unrest, damage the reputation of individuals and organizations, and even harm national security.
- Criminal Use: Deepfakes can be used for criminal purposes such as extortion or racketeering by creating false evidence of a crime or tampering with evidence in a criminal case.
- Legal liability: The question arises as to who is legally responsible for the creation and dissemination of deepfakes. This includes the liability of technology providers, creators, and users.
- First Amendment rights: Deepfakes raise questions about the limits of free speech and the right to create and disseminate misinformation.
- Consent and Copyright: The creation and dissemination of deepfakes using copyrighted material or without the consent of the subject raise legal issues related to intellectual property and privacy.
- Verification and authentication: The proliferation of such files highlights the need for better systems and methods to verify and authenticate digital media.
These are just some of the legal and ethical considerations surrounding deepfakes. As deepfake technology continues to evolve, it is important that policymakers, technology providers, and users work together to develop legal frameworks and ethical guidelines to protect privacy, combat fraud and disinformation, and preserve the integrity of digital media.
This is what you should take with you
- Deepfakes are artificially created media, such as videos, images, or audio files, which have been created using Deep Learning methods.
- They attempt to depict people in contexts or situations of which no original recordings exist.
- Technically, these files are created with the help of so-called autoencoders or generative adversarial networks.
Other Articles on the Topic of Deepfakes
- An interesting video on how a deepfake of David Beckham was created can be found here.