
What are deepfakes & can they be detected?

Updated: May 3, 2024

Nowadays, one often hears the word 'deepfake' in the media, owing to the rise in misleading and fraudulent videos spreading disinformation and non-consensual pornography. But what exactly are deepfakes, what threat do they pose, and can platforms detect them using AI?


What are they?


A deepfake is an image or recording that has been convincingly altered or manipulated to misrepresent someone as doing or saying something that they did not actually do or say. Deepfakes were initially created as a demonstration of the capabilities of AI and machine learning techniques, particularly in the field of computer vision. They emerged from advances in deep learning algorithms, which enabled the generation of highly realistic synthetic media.


How are they made?

Deepfakes are typically created using deep learning algorithms, particularly a type of neural network called a generative adversarial network (GAN), in which a generator network learns to produce synthetic media while a discriminator network learns to tell it apart from real media. The ways people create deepfakes vary: several apps and websites provide tools for this, and these platforms often simplify the process, making it accessible to people without advanced technical knowledge. Some of these tools allow users to create deepfake videos or images by uploading their own source material and selecting a target person's face to superimpose.
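To make the adversarial idea concrete, here is a deliberately tiny sketch, not a face model: a two-parameter generator tries to match a one-dimensional Gaussian while a logistic discriminator tries to tell its samples from real ones. The function names, the toy distribution, and the learning rate are all illustrative assumptions, not part of any real deepfake tool.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gan_step(a, b, w, c, x_real, z, lr):
    """One simultaneous gradient step for the generator g(z) = a*z + b
    and the logistic discriminator D(x) = sigmoid(w*x + c)."""
    x_fake = a * z + b
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    # Discriminator ascends log D(real) + log(1 - D(fake)):
    # it is rewarded for scoring real samples high and fake ones low.
    w_new = w + lr * (np.mean((1 - d_real) * x_real) - np.mean(d_fake * x_fake))
    c_new = c + lr * (np.mean(1 - d_real) - np.mean(d_fake))
    # Generator ascends log D(fake) (the "non-saturating" loss):
    # it is rewarded for making its samples look real to the discriminator.
    a_new = a + lr * np.mean((1 - d_fake) * w * z)
    b_new = b + lr * np.mean((1 - d_fake) * w)
    return a_new, b_new, w_new, c_new

# Adversarial training: real data is N(4, 1); the generator starts near N(0, 1).
a, b, w, c = 1.0, 0.0, 0.1, 0.0
for step in range(3000):
    x_real = rng.normal(4.0, 1.0, 64)   # real samples
    z = rng.normal(0.0, 1.0, 64)        # generator noise
    a, b, w, c = gan_step(a, b, w, c, x_real, z, lr=0.05)

print(f"generator offset b = {b:.2f} (real data mean is 4.0)")
```

Real deepfake GANs follow the same loop, only with deep convolutional networks generating images instead of two scalar parameters generating numbers.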


How are deepfakes used for non-malicious purposes?


Media coverage tends to focus on the negative and troublesome ways deepfakes are used. However, beyond malicious purposes, deepfakes are also used for reasons including (but not limited to):


1. Entertainment and creative reasons. Deepfakes can be used in filmmaking, video games and other forms of entertainment to create convincing visual effects.

2. Research and development, such as improving generative techniques and advancing the field of computer vision.

3. Historical and cultural preservation, for example recreating the voices or likenesses of historical figures for educational exhibits.


What are the malicious ways that deepfakes are used?


Deepfakes have been misused in several harmful and malicious ways. Some of these ways include:


1. The spreading of misinformation and fake news. Deepfakes have been used to create realistic but entirely fabricated videos or images that spread false information. This can lead to public distrust of, and confusion about, media outlets and social platforms.

2. Political manipulation. Deepfakes can be used to manipulate political discourse by creating fake videos or audio recordings of politicians or public figures. These manipulated media can be used to disseminate false statements, incite conflict, or damage the reputations of individuals and organisations.

3. Fraud & scams. Deepfakes can be used in various forms of fraud and scams. For instance, they can be used to create fake videos or audio recordings impersonating people from a particular company in order to trick victims into disclosing personal information.

4. Non-consensual pornography. Deepfakes have been widely used to create non-consensual pornography by superimposing the faces of individuals onto pornographic content.

5. Identity theft and social engineering. Deepfakes can be used to impersonate individuals in online interactions, for example by cloning a person's voice or likeness to deceive their contacts.


In sum, the malicious use of deepfakes poses a significant risk to individuals, organisations, and society as a whole.


How are deepfakes being tackled?


There are rising concerns about deepfakes, not just for social media and adult sites but also for KYC (know-your-customer) companies and more. The proliferation of deepfakes raises ethical, legal, and societal concerns, and many efforts are now underway to address them, including:


1. Legislation and regulation: Governments around the world are considering or implementing legislation specifically targeting deepfake technology. These regulations may include measures to criminalise the creation and dissemination of malicious deepfake content, establish liability for platforms hosting deepfakes, and protect individuals' privacy and reputation rights. Additionally, existing laws related to defamation, fraud, privacy, and intellectual property may be enforced to address deepfake-related offences.

2. Platform policies and content moderation. Social media platforms, online forums, and other websites have implemented policies and guidelines to regulate the creation and dissemination of deepfake content. These policies may include restrictions on the posting of manipulated media, mechanisms for reporting and removing deepfakes, and measures to verify the authenticity of user-generated content. Many platforms are also evaluating deepfake detection solutions to implement.

3. Technological solutions. Many companies are developing tools and techniques to detect and combat deepfakes.


Addressing deepfake concerns will require a comprehensive, collaborative approach that combines legislative measures, technological solutions, and platform policies.


How does deepfake detection work and can it solve the problem?


Currently, there are some deepfake detection solutions on the market; however, this is still very much a developing space. Some of the ways these solutions work include:

  • Forensic analysis to examine various aspects of digital media to detect anomalies that are indicative of manipulation.

  • Feature-based detection methods, which extract specific features from digital media, such as facial landmarks, eye-blinking patterns, or lip movements, and analyse them for inconsistencies or irregularities.

  • Device-based authentication, verifying that content originates from a trusted capture device.

  • Behavioural analysis techniques to examine the behavioural patterns and dynamics of digital media content, such as temporal coherence and semantic consistency, to assess its authenticity.

  • Watermarking-based authentication, embedding or checking digital watermarks to verify provenance.
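As one illustration of the feature-based approach above, the sketch below flags clips whose eye-blink rate is implausibly low, an inconsistency seen in some earlier deepfakes. In practice the per-frame eye-aspect-ratio (EAR) trace would come from a facial-landmark detector; here the trace, the 0.2 closed-eye threshold, and the blink-rate cutoff are illustrative assumptions rather than any vendor's actual method.

```python
import numpy as np

def count_blinks(ear, threshold=0.2, min_frames=2):
    """Count blinks in a per-frame eye-aspect-ratio (EAR) trace.

    A blink is a run of at least `min_frames` consecutive frames whose
    EAR falls below `threshold` (closed eyes flatten the eye landmarks,
    so the ratio drops).
    """
    blinks, run = 0, 0
    for value in ear:
        if value < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:   # a blink still in progress at the end of the clip
        blinks += 1
    return blinks

def blink_rate_suspicious(ear, fps=30, min_blinks_per_minute=5):
    """Flag a clip whose blink rate is far below typical human rates."""
    minutes = len(ear) / fps / 60.0
    return count_blinks(ear) / minutes < min_blinks_per_minute
```

For instance, a one-minute clip at 30 fps whose EAR trace never dips would be flagged, while a trace with roughly fifteen brief dips (a normal blink rate) would not.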


Despite there being some approaches on the market, these have inherent flaws. A key issue in developing such solutions is that deepfakes are rapidly becoming more advanced, rendering existing detectors obsolete. This makes a 'full solution' for deepfake detection look like something of a losing game.


