The digital age, like the age of the printing press, has transformed our modes of social interaction — our reading and writing habits and methods of communication and consumption. It is also disrupting long-standing institutions, threatening old hierarchies of knowledge and power.
The benefits of this transformation are manifest. But the near-universal availability of information also brings certain dangers. One is the prevalence of disinformation online, and the attendant challenge of discriminating between reliable and unreliable information.
Both the need for reliable information and the dangers of disinformation have been thrown into sharp relief by the novel coronavirus pandemic.
The prevalence of disinformation online about the virus — its nature, origins, effects and possible treatments — is not merely anecdotal. Ongoing research allows us to identify, quantify and track digital disinformation, necessary first steps toward addressing the problem.
So what, exactly, is digital disinformation? And how extensive is it, especially during this ongoing public health crisis?
When inaccurate information is spread intentionally, typically by malicious actors, it is called disinformation. When inaccurate information is spread unintentionally, often by someone sharing a fake news story without realizing it, it is called misinformation. Since it's often difficult or impossible to determine whether a sender knows the information is inaccurate, it's convenient to refer to all such inaccurate information as "digital disinformation."
During the pandemic, disinformation has overwhelmed the digital environment. Hundreds of distinct disinformation stories are circulating worldwide, significantly more than during past crises. Much of this disinformation is spread by automated accounts known as "bots," which are used in influence campaigns to make the same disinformation appear in many venues at once. According to research from Carnegie Mellon's Center for Informed Democracy & Social-cybersecurity (IDeaS), approximately two-thirds of the accounts spreading fake stories on Twitter are bots. Often the disinformation spreads through a link to a website, a YouTube video or a meme.
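How do researchers decide that an account is likely a bot? Typically by combining behavioral signals such as posting rate, account age and content repetition. The Python sketch below is a deliberately simplified illustration of that idea; the fields, thresholds and weights are hypothetical assumptions chosen for this example and are not drawn from the IDeaS study, whose actual methods are far more sophisticated.

```python
from dataclasses import dataclass

@dataclass
class Account:
    """Minimal account profile; fields mirror metadata commonly
    available from social platform APIs (values here are illustrative)."""
    posts_per_day: float
    account_age_days: int
    followers: int
    following: int
    duplicate_post_ratio: float  # fraction of posts repeating identical text/links

def bot_score(a: Account) -> float:
    """Return a crude 0..1 heuristic score; higher means more bot-like.
    Thresholds and weights are hypothetical, for illustration only."""
    score = 0.0
    if a.posts_per_day > 50:          # sustained superhuman posting rate
        score += 0.3
    if a.account_age_days < 30:       # newly created account
        score += 0.2
    if a.following > 0 and a.followers / a.following < 0.1:
        score += 0.2                  # follows many, followed by few
    if a.duplicate_post_ratio > 0.5:  # mostly repeats the same content
        score += 0.3
    return min(score, 1.0)

if __name__ == "__main__":
    suspect = Account(posts_per_day=120, account_age_days=12,
                      followers=40, following=2000,
                      duplicate_post_ratio=0.8)
    print(f"bot score: {bot_score(suspect):.2f}")  # prints 1.00
```

Real systems score many more features and learn the weights from labeled data, but the basic logic is the same: accumulate evidence from behavior that no ordinary human user exhibits.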
Digital disinformation can be harmless or even humorous: consider the widely shared images of swans and dolphins supposedly returning to the canals of Venice thanks to social distancing. But it can also be lethal. One widely circulated story touts drinking bleach as a cure for coronavirus. Many digital disinformation stories promote such fake preventions and cures, while others give inaccurate accounts of the nature of the virus or of the emergency measures governments have taken in response.
And of course, there are well-known conspiracy theories, such as the one suggesting the virus was manufactured as a bioweapon.
Disinformation campaigns often target minority groups or vulnerable populations, such as senior citizens. But no one is immune. Various members of the public, from celebrities to politicians, often unwittingly consume and share disinformation online. To err in this way is human, of course, but it can have devastating consequences. When it's not deadly, disinformation often provokes fear, anger and polarization.
Governments and private-sector businesses around the world are exploring different strategies to deal with the problem. The European Union employs data scientists to combat disinformation and has insisted that private-sector actors do the same. For their part, social media platforms and internet companies are addressing disinformation by removing posts and videos that are blatantly false or harmful, providing easy access to accurate information or promoting trustworthy sources.
Still, it's not always easy to discern what is accurate or true. Computational social science can help develop methods to identify disinformation and counter it, including techniques to help us read more critically. But these scientific methods must put humans first. Purely automated AI solutions are likely to fail, as the line between what is disinformation and what is not is often fine.
Human judgment has to play a role.
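To make that human-in-the-loop principle concrete, here is a minimal sketch in Python. The Post type, the model_score field and the two thresholds are hypothetical assumptions for illustration; no platform's actual pipeline is this simple. The point is only that clear-cut cases are handled automatically while borderline ones are escalated to people.

```python
from typing import NamedTuple

class Post(NamedTuple):
    text: str
    model_score: float  # probability from some upstream classifier, 0..1

# Illustrative thresholds: auto-handle only the clear-cut cases and
# route everything in the gray zone to a human reviewer.
AUTO_FLAG = 0.95   # nearly certain disinformation
AUTO_PASS = 0.05   # nearly certain legitimate content

def triage(post: Post) -> str:
    """Decide what to do with a post given a classifier score.
    The machine never has the final word on borderline cases."""
    if post.model_score >= AUTO_FLAG:
        return "flag"          # high confidence: label or demote automatically
    if post.model_score <= AUTO_PASS:
        return "pass"          # high confidence: leave alone
    return "human_review"      # fine line between true and false: escalate

if __name__ == "__main__":
    for p in [Post("Bleach cures the virus", 0.98),
              Post("Masks reduce transmission", 0.02),
              Post("Official case counts revised", 0.55)]:
        print(f"{triage(p):>12}: {p.text}")
```

The design choice matters: widening the gray zone sends more content to human reviewers, trading speed and cost for fewer wrongful removals of legitimate speech.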
Ultimately, resilience will depend on an active citizenry. As information consumers, each of us should evaluate information critically: Be skeptical of cures or preventions that sound too good to be true, or of government actions that sound draconian or otherwise implausible. Be careful which stories you share online, especially if you suspect they contain disinformation or cannot verify them. Even seemingly innocuous or funny memes can contribute to the spread of disinformation. In other words, exercise good judgment.
We often think of disinformation as something that affects the "public," usually exempting ourselves or those in positions of power. However, just as we are all the beneficiaries of digital information, we are all potential victims of disinformation.
Digital technology certainly allows for greater access to information and more creative and flexible ways to integrate it into our social and political institutions. That does not diminish, but only intensifies, the need to be able to sift through information and judge the good from the bad.
No one is totally immune to confirmation bias or human error, and that includes both policymakers and those who have cultivated deep expertise. This makes it all the more important that all of us — the politicians, the experts and non-experts alike — cultivate the skills needed to discern genuine information from its counterfeit.