Processes, People and Public Accountability: How to Understand and Address Harmful Communication Online


Researchers and reporters documented three forms of harmful online communication during Canada’s 2019 federal election campaign:

  • Abuse of individuals: Minister Catherine McKenna received thousands of negative messages on social media during the campaign, including threats of violence, abuse that culminated in the defacement of her constituency office with misogynistic slurs.i
  • Intolerance and hate toward marginalized groups in public online spaces: Significant volumes of intolerant content, ranging from casual use of dismissive terms to racist slurs and conspiracy theories, were directed toward Muslims and other social groups on Twitter, Facebook, Reddit and YouTube.ii,iii
  • Building support for hate in private online spaces: White supremacist, ethnonationalist and anti-government networks, which included members of Canada’s military, shared ideas and coordinated activities in private Facebook groups and online chatrooms.iv,v

These cases illustrate some of the myriad forms of online communication that may be considered harmful. They include speech that is already illegal in Canada (e.g., uttering threats), communication that is harmful but not illegal (e.g., anti-Muslim posts that do not reach the threshold of criminal hate propaganda) and harmful patterns of communication that contribute to systemic discrimination (e.g., large volumes of dismissive and disrespectful communication directed at women).

In this report we propose a framework to distinguish key dimensions of harmful online communication in Canada. We summarize initial findings from our study of online abuse of political candidates in the 2019 federal election, which emphasizes how patterns of discourse, and the interplay between online and offline experiences, may be harmful.

We then analyze international policy responses by governments and social media companies, and conclude with three principles to guide policy development in Canada: 1) focus on systemic processes rather than individual pieces of content; 2) pay attention to the people who perpetrate, suffer and address harm, not just to online spaces; and 3) promote public accountability – including, but not limited to, transparency – of regulators and platform companies.

Although our report focuses on the negative impacts of online communication, we should not forget its potential benefits, including how social groups and political actors leverage these spaces for democratic purposes. Indeed, addressing harmful communication can promote a more just distribution of the benefits of internet use.
