What Do We Mean by Disinformation?
Deliberately spreading false information about political opponents is not a new phenomenon. What is newer is the amplification and potential mass impact of such information via the internet, which has been at the forefront of concern and debate among state actors, politicians, civil society, the media, and the public. IFES and its partners in the Consortium for Elections and Political Process Strengthening (CEPPS) use the term information disorder to create a conceptual framework for understanding the information ecosystem and its implications for democracy. The information disorder framework “describes how misinformation … disinformation … and malinformation … are all playing roles in contributing to the disorder, which can also be understood as contributing to the corruption of information integrity in political systems and discourse.”
Within the information disorder framework, the question of what might be classed as disinformation is subject to significant debate. There is no internationally accepted legal definition of disinformation, although practitioners and academics globally continue to discuss the benefits and drawbacks of coining a single, all-encompassing definition. Various definitions have been advanced. For example, the European Commission’s High-Level Expert Group on Fake News and Online Disinformation defines disinformation as “all forms of false, inaccurate, or misleading information designed, presented and promoted to intentionally cause public harm or for profit” [emphasis added]. Facebook defines disinformation as “inaccurate or manipulated information content that is spread intentionally. This can include false news, or it can involve more subtle methods such as false flag operations, feeding inaccurate quotes or stories to innocent intermediaries, or knowingly amplifying biased or misleading information.” Academics define disinformation as “intentional falsehoods spread as news stories or simulated documentary formats to advance political goals” and also refer to it as “… systematic disruptions of authoritative information flows due to strategic deceptions.” Furthermore, some speech that is intended to deceive or cause harm (thereby meeting the definitions outlined above) remains legally permissible. Establishing the boundary between harmful but lawful speech and speech that violates the law is a core point of contention in many of the cases considered in this paper.
Like disinformation, misinformation contains false, inaccurate, or incomplete information; unlike disinformation, it is spread mistakenly or unintentionally. Misinformation may also be amplified via the internet and can reach a significant audience. Understanding the intent behind the spread of information, and identifying the harm that information causes, is therefore important when courts are faced with cases that fall within the wide remit of what might be considered disinformation.
The lack of legal definitions means that the way disinformation cases come before courts is not necessarily uniform, as the case law demonstrates. Our case analysis identifies three broad types of disinformation issues that come before the courts:
- Cases that allege harm to electoral processes, contestants, or officials as a result of prohibited speech upon which the court must issue a judgment (whether on grounds of hate speech, defamation, electoral disinformation, etc.). Examples include Dominion Voting Systems, Inc. v. Fox News, Senior Advocate Dinesh Tripathi v. Election Commission of Nepal (#NoNotAgain Campaign), Decision no. 2018-773 DC of France’s Constitutional Council, and 2016Hun-Ma90 (Case on Restricting Online Media from Publishing Columns, etc. Written by Candidates for Public Official Election).
- Unfounded cases alleging irregularity in electoral processes – a disinformation tactic in and of itself. These cases do not deal with disinformation as their subject matter; rather, the litigation itself addresses election processes in a way that is meant to deceive or manipulate public perception of the integrity of those processes. Examples include Presidential Election Petition E005, E001, E002, E003, E004, E007 & E008 of 2022 (Consolidated) (Kenya), Appeal No. CA/PEPC/03/2023; CA/PEPC/04/2023; and CA/PEPC/05/2023 (Nigeria).
- Overt or covert disinformation campaigns directed at the courts to undermine their credibility. This challenge is unrelated to cases that need to be decided by the court; rather, it raises a separate question of how courts can manage their reputations and preserve public trust. Examples include Civil Petition No. 0601958-94.2022.6.00.0000 (Brazil), Matter of Giuliani, King v. Whitmer, and O’Rourke v. Dominion Voting Systems.
In seeking to understand disinformation as a justiciable issue, our case law analysis consistently shows that the concept of “disinformation” always contains an element of intentionality (i.e., that actors or adversaries spread the information knowingly and deliberately) and an intent to cause harm.
Arnaudo, D., Barrowman, B., Brothers, J., Reppell, L., Scott, V., Studdart, A., Wainscott, K., and Zakem, V. Countering Disinformation: The definitive guide to promoting information integrity: Introduction to the Guide. Consortium for Elections and Political Process Strengthening. https://counteringdisinformation.org/introduction. The term “information disorder” is not original to CEPPS but is built on the work of Data and Society, First Draft and the Oxford Internet Institute’s Computational Propaganda Project, as outlined in the cited resource.
High Level Expert Group on Fake News and Online Disinformation (2018, March 12). A multi-dimensional approach to disinformation: Final report. Directorate-General for Communication Networks, Content and Technology, European Commission.
Weedon, J., Nuland, W., and Stamos, A. (2017). Information operations and Facebook. Facebook.