Social media was originally designed to connect people and facilitate global communication.[1] Over time, it has evolved into a major source of news, shaping public opinion.[2] However, this influence has also been exploited to spread disinformation by both individual users and foreign governments.[3] One prominent example is Russia’s interference in the 2016 United States (“U.S.”) presidential election, where disinformation campaigns were used to manipulate voters through platforms like Facebook and Twitter.[4] The rapid spread of false information on these platforms highlights the global challenge of disinformation, especially within political spheres.
As social media platforms prioritize user engagement through algorithms that emphasize sensational content, users are primarily exposed to information that reinforces their existing beliefs.[5] This creates echo chambers in which opposing views are filtered out, deepening societal polarization.[6] The accessibility and anonymity of social media make it an ideal tool for spreading disinformation: because platforms lack stringent mechanisms to verify information, and because content can be generated rapidly and targeted at specific individuals, false narratives go viral without repercussion.[7] This has fueled growing mistrust in information and has challenged the legitimacy of democratic institutions worldwide.[8]
To address this global threat, international efforts have emerged. The Digital Services Act (DSA), enacted by the European Union (“EU”), holds digital platforms accountable for harmful content and imposes transparency requirements.[9] Similarly, the G7 Media Ministers, a group of government officials from the G7 countries responsible for media, communications, and information policy, met in June 2022 amid growing concerns about disinformation.[10] They stressed the importance of cross-border regulatory alignment and collaboration to effectively combat disinformation.[11] Despite efforts by social media platforms to combat disinformation, the pervasiveness of false narratives, exacerbated by algorithms that make sensational content viral, presents a significant threat to democratic values.[12] This paper argues that a comprehensive global approach to regulating digital transparency and accountability—modeled after the EU’s Digital Services Act—is crucial for effectively combating disinformation and safeguarding public trust on a global scale.
What Companies Have Done to Mitigate Disinformation
Disinformation on social media is clearly a significant and ongoing threat, not only to Americans but to the world.[13] Social media companies are aware of this issue and have taken some steps to combat false content.[14] As outlined in its company policy, X has implemented measures to remove “misleading media” that may endanger public safety or cause serious harm, to label posts with warnings, and to lock the accounts of violators.[15] The platform also introduced a fact-checking system called “Community Notes,” which allows users to add context to, or clarify, any post they deem inaccurate.[16] Despite these steps, however, X remains known as the platform with the most significant problems related to the spread of disinformation and hate speech.[17] Many question how accurate, and how biased, this moderation is.[18]
AI has emerged as a mitigation tool, and social media companies are considering incorporating it into their mitigation strategies to evaluate content more efficiently.[19] For example, Instagram and Facebook use AI to sort through posts and comments that may violate company policy.[20] While this may seem more efficient, the challenge of utilizing AI lies in how it is coded; a biased model may produce inaccurate results.[21] In addition, hacking and cybersecurity pose risks: attackers can obtain data or manipulate the AI software moderating these platforms.[22] And while AI could offer a potential solution, it is important to recognize that it is also often used to generate the very false content the platforms aim to mitigate.[23] AI enables spammers to create posts that evade detection and to distribute them rapidly.[24] Using AI to combat AI-driven disinformation presents a significant challenge: moderation algorithms must continually evolve as hackers and coders exploit their weaknesses, making subtle changes to content that trick AI evaluation models and creating a game of “catch-up” between AI moderators and AI disinformation generators.[25]
International Efforts in Combating Disinformation
The EU has taken significant steps to mitigate disinformation through the DSA and broader initiatives such as the EU Democracy Action Plan.[26] The EU’s focus has shifted towards addressing Foreign Information Manipulation and Interference (FIMI), a concept defined by the European External Action Service as manipulative behavior that, while not always illegal, threatens democratic values and political processes.[27] To counter this, the EU established the East StratCom Task Force and EU vs. Disinfo, an initiative aimed at increasing public awareness and understanding of disinformation, particularly from foreign sources.[28] Since its inception in 2015, EU vs. Disinfo has tracked over 5,500 disinformation cases regarding Ukraine, working to identify and debunk false narratives that threaten European security and democracy.[29]
The EU Democracy Action Plan works together with the DSA, a co-regulation framework that requires platforms to assess the risks their services may present to society.[30] The DSA has two main goals: 1) to create a safer digital space that protects the fundamental rights of all digital services users, and 2) to foster innovation, growth, and competitiveness both within the European Single Market and globally by establishing a level playing field.[31] To achieve these goals, the DSA sets clear, proportionate rules that safeguard consumers and protect fundamental rights online.[32] The roles of users, platforms, and public authorities are rebalanced so that they work together in a form of co-regulation that puts citizens and society at the forefront of consideration.[33] An essential aspect of the DSA is its enforcement framework, which includes several investigative and sanctioning measures for cases in which DSA obligations are breached.[34] For example, if a “Very Large Online Platform” (which the Commission defines as a platform reaching more than ten percent of the EU’s 450 million users) is found to have breached DSA obligations, it may face fines of up to six percent of its worldwide annual turnover.[35]
The G7 Media Ministers meeting likewise reflected a strong effort to foster international cooperation against disinformation, recognizing that digital misinformation often transcends national borders.[36] In June 2022, the G7 emphasized the need for cross-border regulatory alignment and collaboration among governments to counteract coordinated disinformation campaigns, especially those orchestrated by foreign actors.[37] The G7 calls on digital platforms to take greater responsibility for identifying and removing false content, urging them to work more transparently in coordination with governments and international organizations.[38] Additionally, the G7 advocates for increased media literacy as a preventive measure, recommending that governments implement national education programs to help citizens recognize and resist disinformation tactics.[39] The G7 also supports developing international treaties or agreements to ensure consistent regulation of digital platforms, further advancing the global legal response to the disinformation crisis.[40]
Potential Solution
One potential solution to the global disinformation crisis is to establish an international agreement aimed at ensuring digital transparency and accountability, modeled after the DSA and the G7 approach. This agreement would establish uniform standards for transparency and accountability across digital platforms worldwide, ensuring they assess risks, report on moderation practices, and face penalties for violations. It would focus on combating FIMI by encouraging cross-border cooperation. Additionally, it could promote media literacy programs to help citizens worldwide recognize and resist disinformation tactics.
Such an agreement would encourage collaborative efforts between like-minded governments, tech companies, and international organizations, ensuring consistent regulations across borders. By establishing clear standards and enforcement mechanisms, the international community could more effectively combat the impact of disinformation on democracy and public trust.
[1] S. T. Muhammed & S. K. Mathew, The Disaster of Misinformation: A Review of Research in Social Media, 13 Int’l J. Data Sci. & Analytics 271, 271–85 (2022).
[2] Id.
[3] R. H. Pherson, P. Mort Ranta & C. Cannon, Strategies for Combating the Scourge of Digital Disinformation, 34 Int’l J. Intelligence & Counterintelligence 316, 316–41 (2020).
[4] Press Release, Intelligence Committee (2019).
[5] Id. at 26.
[6] Id.
[7] World Economic Forum, Global Risks Report 2024 (Jan. 10, 2024).
[8] Id.
[9] European Commission, Digital Services Act Package, Digital Strategy.
[10] G7 Germany, G7 Media Ministers Meeting (May 19, 2024).
[11] Id.
[12] See Muhammed, supra note 1.
[13] See Muhammed, supra note 1.
[14] See K. Hao, Instagram Is Using AI to Stop People from Posting Abusive Comments, MIT Tech. Rev. (July 9, 2019).
[15] X Help Center, Manipulated Media, https://help.x.com/en/rules-and-policies/manipulated-media.
[16] Id.
[17] Id.
[18] See Mathieu Landriault, Gabrielle LaFortune & Gregory A. Poelzer, Arctic Disinformation on X (Twitter) – An Empirical Investigation, Polar Geography (July 3, 2024).
[19] Hao, supra note 14.
[20] Id.
[21] Id.
[22] Id.
[23] See Katerina Sedova, Christine McNeill, Aurora Johnson, Aditi Joshi & Ido Wulkan, AI and the Future of Disinformation Campaigns, Part 2: A Threat Model, Ctr. for Security & Emerging Tech. (CSET) Policy Brief.
[24] Id.
[25] Id.
[26] European Commission, supra note 9.
[27] Nicolas Hénin, Foreign Information Manipulation and Interference: Understanding the Threat, EU DisinfoLab (2023).
[28] Council of the European Union, The Fight Against Pro-Kremlin Disinformation; see EU vs Disinfo, EU vs Disinformation.
[29] Id.
[30] European Commission, supra note 9.
[31] Id.
[32] Id.
[33] Id.
[34] Id.
[35] Id.
[36] U.S. Dep’t of State, G7 Media Ministers’ Meeting Communiqué (June 22, 2022).
[37] Id.
[38] Id.
[39] Id.
[40] Id.