It is now widely recognized that state and nonstate actors in numerous countries are deliberately using digital communications platforms to promulgate information known to be untrue in order to disempower opponents. There are numerous mechanisms by which this can happen, from bogus news stories on Facebook to selectively edited videos circulating on YouTube.
One of the most eye-catching of these new mechanisms is "deepfakes": the use of still-emerging artificial intelligence (AI) technologies to produce videos in which people are portrayed doing and saying things they never did or said. The technology is somewhat immature, and the quality of video and audio is at present mostly unconvincing. But the increasing use of AI techniques is leading to a new generation of fake imagery and audio with increasingly realistic results. There are, understandably, widespread concerns about the impact of deepfake technologies if they improve to the point of being indistinguishable from authentic footage.4
Even if deepfakes do not improve to the point of being truly disruptive, more traditional forms of online misinformation still matter significantly to the success of bona fide citizen engagement. Two key problems emerge from our analysis.
First, the focus of public debates is likely to shift further, and unhealthily, toward disputes over the authenticity of statements and evidence, which will in turn reduce the time and energy left to discuss possible actions or solutions to problems. As trust in empirical evidence is undermined, the quality of public debate will decline, with more discussions of the type "did Person X really say statement Y?" instead of "how are we going to fix a particular policy problem?"
Second, citizens may react to a growing volume of unreliable news or active disinformation by simply tuning out of civic and political discourse altogether.5 This would mean that even when opportunities arise for citizens to have a say, they may simply fail to leverage them.
Unfortunately, there is every reason to believe that efforts to disempower opponents through digital misinformation attacks will only grow. There are still numerous countries in the world with internet penetration below 50 percent. Currently the rewards for delivering online disinformation in these countries may be relatively limited, but those rewards will grow as internet usage rises, particularly in weakened democratic systems where these activities are less likely to be scrutinized and sanctioned.
The phase ahead can be characterized as an arms race: malign actors and those attempting to counter them will invest ever larger sums in ever more sophisticated techniques. Considerable resources will be spent both on more sophisticated technologies built by software developers and on armies of lower-skilled content processors, who will either produce or help remove disinformation. What side effects this arms race will have is still entirely unclear.
4: Part of these concerns stems from a perception that deepfakes hold greater potential to mislead, particularly given some evidence suggesting that audiovisual content is more persuasive and more likely to be watched, shared, and remembered by users than textual content (see Tucker et al. 2018; Chesney and Citron 2018).
5: This potential scenario draws from emerging evidence suggesting a deleterious effect of fabricated information and propaganda on citizen engagement, leading to increased apathy and cynicism (e.g., Balmas 2014; Huang 2018). However, as noted by other authors (e.g., Lazer et al. 2018; Tucker et al. 2018), knowledge is still limited on the medium- and long-term impacts of fake digital content on political behavior and disaffection.