Report Highlights | Poisoning Democracy: How Canada Can Address Harmful Speech Online
The rise of harmful speech online is a threat to Canada’s democracy. This report outlines how governments and digital platform companies can better address hate and harassment, including the creation of a Moderation Standards Council.
How is harmful and hateful speech threatening Canada’s democracy?
Social media platforms provide unprecedented opportunities for citizens, political candidates, activists and civil society groups to communicate, but they also pose new challenges for democracy. One key problem is the rise of harmful speech online. Harmful speech encompasses a range of problematic forms of communication, including hate speech, threats of violence, defamation and harassment.
BOTTOM LINE: If not properly addressed, harmful online speech may undermine people’s full, free and fair participation in politics and political debates.
But isn’t hate speech already illegal?
Some forms of harmful speech — such as hate propaganda and uttering violent threats — are already illegal in Canada, though enforcement policies need to be improved. Even when harmful speech does not violate laws, it can still undermine democratic participation and discourse, and promote social tension and conflict. In most cases, complaints about harmful speech are addressed (or not) according to the content moderation policies of private companies, rather than through legal systems or other processes with public oversight.
BOTTOM LINE: Current regulatory approaches cannot match the speed, scale and global reach of harmful speech online.
What more should the government and the social media platforms do?
This report explains some of the most problematic forms of harmful speech, how they affect democratic processes, and how they are currently addressed in Canada. The report draws lessons from policy responses that are being developed or implemented in other countries and at the international level. It then sets out three mutually supporting policy recommendations for the Canadian context. In brief, the report proposes that the Canadian government and key stakeholders should:
1. Implement a Multi-track Policy Framework to Address Harmful Speech
An inter-agency task force should be created immediately to clarify how governments in Canada can better apply existing regulatory measures to address harmful speech online, and to examine the growing role of social media platforms in regulating free expression.
The federal government should set clear expectations for social media companies to provide information, to the public and to researchers, regarding harmful speech on their platforms and their policies to address it.
The federal government should launch a multi-stakeholder commission to examine the social and political problems posed by harmful speech online and to identify solutions that fall outside of current regulatory measures. This commission would contribute to a broader Canadian discussion regarding public input and oversight of online content moderation.
2. Develop a Moderation Standards Council
The multi-stakeholder commission should consider the creation of a Moderation Standards Council (MSC), analogous to the Canadian Broadcast Standards Council, but adapted for the specific context of online content.
The MSC would enable social media companies, civil society and other stakeholders to meet public expectations and government requirements on content moderation. It would improve transparency, help internet companies develop and implement codes of conduct for responding to harmful speech, and create an appeals process for complaints about content moderation policies and decisions. The MSC would also address potential jurisdictional conflicts over the regulation and standards of content moderation within Canada, and contribute to international standards-making.
3. Build Civil Society Capacity to Address Harmful Speech Online
Compared to other countries, Canada lacks robust research and civil society programs to address harmful speech. Governments, universities, foundations and private companies should provide significant support for research and monitoring programs, and for efforts to develop, test and deploy measures to respond to harmful speech.
Artificial intelligence will play a major part both in efforts to weaponize harmful speech and in efforts to identify and respond to it. Canada needs a research network or institute focused on the role of AI in harmful speech and other human rights issues.
Social media companies and stakeholders should create an “election contact group” to quickly and effectively share information about threats to electoral integrity, including the use of harmful speech.
These policies can help government, internet companies and civil society work together to create a digital public sphere in Canada that is healthy, inclusive and democratic.
Poisoning Democracy was written by Chris Tenove and Heidi Tworek of the University of British Columbia, and Fenwick McKelvey of Concordia University. The report builds on published research, background discussions with experts in Europe and the United States, and a workshop of international experts held in Ottawa in May 2018, with assistance from the Public Policy Forum. The authors of this report are independent from PPF. Their views and recommendations may not necessarily reflect those of PPF.