Deepfakes Know No Borders: How the European Union Artificial Intelligence Act Paves the Way for AI Regulation

https://www.freepik.com/free-vector/digital-world-map_1095019.htm#fromView=search&page=1&position=3&uuid=788603b4-fe52-46ed-bf5e-4a3030eb7df9&query=ai+world+map

Deepfakes generated by Artificial Intelligence (“AI”) are altering global communication and eroding the integrity of our social relations. Deepfakes are AI-generated content that manipulates images, audio, and video.[1] Whether the result is an invasion of privacy or a celebrity superimposed into a TikTok dance, the rise of deepfake technology poses a growing international threat. Deepfakes can be used to spread false information, manipulate public opinion, invade privacy, or damage reputations.[2] Despite these detrimental consequences, most countries lack definitive legal regulations for AI use. The European Union’s (“EU”) Artificial Intelligence Act (“the AI Act”) is one of the first major attempts to regulate AI.[3] As these threats continue to grow, how will other nations keep up? AI-generated deepfakes endanger privacy, public trust, and political stability, and the AI Act sets a precedent for addressing these challenges. Moving forward, however, global collaboration is essential to develop a lasting resolution.

The EU AI Act: Setting the Bar for Regulation

In August 2024, the EU’s AI Act entered into force as the first-ever legal framework on AI, designed to regulate the risks associated with AI-driven misinformation.[4] The AI Act imposes requirements and obligations on AI developers and deployers to ensure that AI use respects fundamental rights, safety, and ethical principles, and it includes penalties for non-compliance.[5] Those penalties include administrative fines of up to €35 million or seven percent of a company’s global annual turnover, whichever is higher.[6] Some critics question the wisdom of governing a complex and rapidly evolving technology through detailed technical rules.[7] Still, some regulation is better than none: it provides a starting point for future guidelines and a baseline for preventing abuse.

The Delay in Global Regulation

While the EU has implemented AI regulations, most countries have fallen behind. Because AI technology is constantly evolving, it is difficult to draft laws that address every potential risk. Deepfakes in particular are hard to trace, since their creators operate anonymously and share content through platforms that escape detection.[8] According to law enforcement officials, the industry still struggles to detect deepfakes.[9] With these complications, and with different nations taking independent approaches, legislation is fragmented.[10] Despite scattered attempts at regulation, this remains a global challenge, especially in international politics. For example, a deepfake of a political leader making false statements about war could spark fear, violence, or even cross-border conflict. Global collaboration is needed to reduce the potential for such deepfakes to be mistaken for reality. Because countries are taking different approaches, it is essential to share resources and expertise to create a unified effort to protect our fundamental rights. The AI Act is a step in the right direction, but only global collaboration can produce a lasting resolution.

The Need for Global Collaboration

International cooperation is one of the best solutions to deepfake risks. Each country has its own strengths and weaknesses in confronting this complex and growing issue, so collaboration is at the forefront of innovation, especially because divergent national approaches can create barriers to innovation and diffusion.[11] “For example, the EU seeks to achieve a competitive advantage in ‘industrial AI’: EU enterprises could exploit that AI without the prospect of having to engage in substantial reengineering to meet requirements of another jurisdiction.”[12] Because the EU is already taking these competitive steps, international collaboration would allow other countries to benefit from the EU’s AI technologies.

There are several reasons why collaboration is important: Scale and Resource Efficiency, Shared Democratic Principles and Trust, Regulatory Alignment to Prevent Barriers, Support for Specialized AI Innovation, Free Flow of Goods and Data, Solving Global Challenges Together, and Protecting Democratic Values and Human Rights.[13]

For now, there are plenty of recommendations for regulators to implement. Nations should prioritize criminalization when deepfakes are produced with the intent to manipulate.[14] One popular deepfake scam uses a person’s voice without consent to exploit others.[15] Such scams violate personal rights, including the right to privacy and the right of publicity, and the law should provide clear criminal sanctions. More countries should implement legislation similar to the United Kingdom’s Online Safety Act 2023, which addresses scams like this by requiring platforms to remove non-consensual content.[16]

Introducing procedural guidelines for potential victims of deepfake manipulation would give them clear instructions on how to report deepfake content.[17] Not all deepfakes are intended to manipulate or cause harm; some serve purely entertainment purposes. A transparency measure requiring all AI-generated content, whether for entertainment or otherwise, to be labeled as such therefore promotes clarity.[18] Lastly, raising awareness of the risks of deepfakes and teaching people how to recognize AI-generated content would educate the public and reduce the spread of misinformation.[19]

Conclusion

The EU AI Act stands as an important first step toward building a foundation for regulation as AI-generated deepfakes continue to threaten privacy and the integrity of information. The Act is a significant measure, but it does not fully address the critical threats deepfakes pose, such as political manipulation, reputational harm, and the erosion of public trust. This is a global problem, and to resist these threats effectively, nations must collaborate to create an international framework. Because countries are taking different approaches, sharing resources and expertise to create a unified effort to protect our fundamental rights is the best path forward.


[1] Mauro Fragale & Valentina Grilli, Deepfake, Deep Trouble: The European AI Act and the Fight Against AI-Generated Misinformation, Colum. J. Eur. L. (2024), https://cjel.law.columbia.edu/preliminary-reference/2024/deepfake-deep-trouble-the-european-ai-act-and-the-fight-against-ai-generated-misinformation/?cnreloaded=1.

[2] Amanda Lawson, A Look at Global Deepfake Regulation Approaches, Responsible AI (Apr. 24, 2023), https://www.responsible.ai/a-look-at-global-deepfake-regulationapproaches.

[3] Fragale & Grilli, supra note 1.

[4] Fragale & Grilli, supra note 1.

[5] European Comm’n, AI Act (2024), https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai.

[6] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 on Artificial Intelligence (AI Act), 2024 O.J. (L 168) 1, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32024R1689.

[7] Fragale & Grilli, supra note 1.

[8] Aled Owen, Deepfake Laws: Is AI Outpacing Legislation?, Onfido (Feb. 2, 2024), https://onfido.com/blog/deepfake-law.

[9] Id.

[10] Id.

[11] Cameron F. Kerry et al., Strengthening International Cooperation on AI, Brookings (Oct. 25, 2021), https://www.brookings.edu/articles/strengthening-international-cooperation-on-ai/.

[12] Id.

[13] Id.

[14] Jana Kazaz, Regulating Deepfakes: Global Approaches to Combatting AI-Driven Manipulation, Globsec (Nov. 12, 2024), https://www.globsec.org/what-we-do/publications/regulating-deepfakes-global-approaches-combatting-ai-driven-manipulation.

[15] Id.

[16] Id.

[17] Id.

[18] Id.

[19] Id.