  • Abstract
  • Introduction
  • Prediction 1
  • Prediction 2
  • Prediction 3
  • Prediction 4
  • Prediction 5
  • Prediction 6
  • Prediction 7
  • Prediction 8
  • Prediction 9
  • Prediction 10
  • Prediction 11
  • Implications
  • Measure 1
  • Measure 2
  • Measure 3
  • Measure 4
  • Measure 5
  • Measure 6
  • Conclusion
  • Authors
  • References

Nations will diverge in their regulatory approaches to the use of AI by social media platforms, leading to very different spaces in which citizens and civil society will talk to each other and to government.

The majority of public discourse about political issues now takes place on a small number of very large social media platforms, most significantly Facebook, Twitter, WeChat, and Sina Weibo. There is also an unknowably large amount of political discourse within private chat applications like iMessage and WhatsApp.

The companies that run these platforms are already regulated by different regimes in different countries. For example, Germany has legal prohibitions against certain kinds of hate speech that would be constitutionally protected speech in the United States. Both legal systems need to be respected by platforms like Facebook, which means programming different solutions for different markets, all while keeping the overall systems interoperable.

Regulatory regimes focused on controlling unacceptable kinds of speech date largely to the era before social media and were often put in place to limit what could be said in print publications or on broadcast media. They were also designed primarily to control the decisions and actions of newspaper editors and professional writers.

The digital revolution is now challenging one of the core assumptions underlying all of these pre-digital regulatory models: the assumption that specific humans are making the key editorial choices. Increasingly, software code, rather than editors, determines whether a piece of content is promoted, censored, or shown to some people but hidden from others.

Once it becomes more widely understood that software, rather than people, makes many of the key decisions about what speech can be public and what cannot, debates will begin about what algorithms should or shouldn’t do in a wide variety of circumstances. These debates will produce very different conclusions depending on local cultural and political factors, leading to very different environments in which civic discourse happens. To make this concrete, consider how the experience of citizens with widely differing opinions would change if governments answered the following questions:

  • Should citizens see many posts and videos about civic and political issues, or few?
  • Are activists permitted to “blast” large numbers of potential supporters with messages containing calls to action, or should they be blocked to reduce “spam”?
  • Are citizens deliberately exposed to civic or political ideas that come from “outside their comfort zone,” or are they encouraged to consume only what they find most comfortable?
  • What kinds of speech are defined as “simply unacceptable” and banned?

In some countries, for example, speech will be heavily regulated to protect a local notion of taste and decency, and AI systems will be instructed to heavily upweight content that praises civic, family, and religious virtues. In other countries, there will be no state-enforced attempt to control for taste and no attempt to ensure that “virtuous” content trumps “mere” entertainment. In still other countries, there might be relatively little control over the limits of speech, but strong legal mandates requiring social media platforms to show citizens content designed to bridge political extremes.

As different countries settle on different acceptable boundaries for the conduct of AI algorithms, differing impacts will be felt on citizens’ discourse about civic issues and their interactions with power centers. In some places, it may become very easy to tell everyone in a neighborhood about an important new local issue; in others, it may be close to impossible. In some places, it may be easy for activists to mobilize their own supporters; in others, it may be prevented by “anti-spam” mechanisms built into platforms. Without regulation, it will be the platform companies that decide this balance. With regulation, the decision will belong to regulators, and their regulations are likely to vary as widely as human cultures vary today.

Ultimately, the regulation of AI in social media and online searches will be about more than just how the extremes of political discourse are treated — the issue that dominates this conversation at present. It will be about the extent to which power and change can be mobilized in different countries and the fluidity with which new ideas can appear and evolve. It will be about how easy regular interaction between citizens and decision makers is made, or how hard. It will be about who gets invited to have a voice and express an opinion, and who is simply never shown the opportunity to do so.