Researchers develop method to detect deepfake images

The progress of so-called deepfake images, which are generated by machine-learning algorithms, is remarkable. These advances have made it increasingly difficult for people to distinguish such images from a real photograph.

In this context, researchers at the Horst Görtz Institute for IT Security at Ruhr-Universität Bochum and the Cluster of Excellence Cyber Security in the Age of Large-Scale Adversaries (CASA) have developed a new method for identifying deepfake images.

Through this initiative, the team analyzed the images in the frequency domain using an established signal-processing technique. The team presented the results of this work on July 15, 2020, at the International Conference on Machine Learning (ICML), considered one of the most important events in the field.
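To illustrate the general idea of frequency-domain analysis, the following minimal sketch transforms an image into its frequency representation with a two-dimensional discrete cosine transform, a standard signal-processing tool. The random "image" and all variable names are illustrative assumptions, not the researchers' actual data or code.

```python
import numpy as np
from scipy.fft import dctn

# Hypothetical grayscale image; the researchers' real data sets are
# not reproduced here.
rng = np.random.default_rng(0)
image = rng.random((64, 64))

# 2D discrete cosine transform (orthonormal), mapping the image from
# the spatial domain into the frequency domain.
spectrum = dctn(image, norm="ortho")

# Log-scaled magnitude spectrum: computer-generated images tend to
# exhibit characteristic artifacts in the frequency components,
# which a detector can pick up on.
log_spectrum = np.log(np.abs(spectrum) + 1e-12)
print(log_spectrum.shape)
```

In such a spectrum, each entry corresponds to one spatial frequency, so systematic grid-like artifacts left by an image generator become visible as anomalous frequency components.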

At the conference, the researchers also made their code freely available online so that other groups can reproduce their results.

Origin and detection of deepfake images

Deepfake images are generated by computer models known as Generative Adversarial Networks (GANs), in which two algorithms work together. The first creates random images based on certain input data, while the second decides whether each image is fake or real. If the image is judged to be fake, that feedback prompts the first algorithm to refine the image until the second no longer recognizes it as fake.
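The interplay of the two algorithms can be sketched as a tiny adversarial training loop. This is a generic illustration of the GAN principle on toy data, not the models from the study; all network sizes, names and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Two competing networks (sizes are illustrative, not from the paper):
# the generator maps random noise to samples, the discriminator
# scores whether a sample looks real.
generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
discriminator = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))

loss_fn = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

real_data = torch.randn(64, 2) + 3.0  # toy stand-in for real images

for step in range(100):
    # 1) The discriminator learns to tell real samples from generated ones.
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = (loss_fn(discriminator(real_data), torch.ones(64, 1))
              + loss_fn(discriminator(fake), torch.zeros(64, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) The generator is updated using the discriminator's feedback,
    #    pushing its output toward samples the discriminator no longer
    #    recognizes as fake.
    fake = generator(torch.randn(64, 8))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

Repeating these two steps is what drives generated images toward an increasingly authentic appearance.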

Thanks to this method, deepfake images have come to look increasingly authentic in recent years.

On the associated website, visitors are put to the test with a pair of images: they must decide which of the two faces is fake and which is real.

Commenting on this, Thorsten Holz, professor at the Chair of Systems Security, noted: "In the age of fake news, it can be a problem if users do not have the ability to distinguish computer-generated images from originals."

The data sets underlying this website also served as the basis for the Bochum researchers' analyses.

The project team includes Joel Frank, Thorsten Eisenhofer and Professor Thorsten Holz from the Chair of Systems Security, who worked together with Professor Asja Fischer from the Chair of Machine Learning, as well as Lea Schönherr and Professor Dorothea Kolossa from the Chair of Digital Signal Processing.