This report takes a closer look at the effectiveness of fact-checking and examines the social media reaction to the Trudeau blackface controversy.

Executive Summary

If there is a general theme emerging from these Digital Democracy Project reports, it is that much of the received wisdom about the current state of the media and democracy is, if not false, at least poorly supported by the available evidence. For example, contrary to popular belief, Canadians do not live in media-driven partisan echo chambers, and their trust in mainstream media remains quite high. And while there is some evidence of political polarization, that appears to be driven largely by the parties themselves and not by the media.

The good news, then, is that there appears to be a great deal of room for the media to play a valuable and constructive role in sustaining good-faith democratic engagement. This week’s report builds on these themes in two ways.

First, we have new survey results suggesting that there is broad support among Canadians for fact-checking, and that it can be effective. Yet while fact-checking has long been considered part of the bread and butter of accountability journalism, we find little evidence that fact-checks delivered by journalists are more effective than those from any other source.

Second, a data-driven analysis of the Justin Trudeau blackface controversy shows that the initial flood of online activity was marked by widespread sharing of solid journalism. As interest in the story waned, the remaining activity consisted largely of conservatives talking among themselves, with little evidence that the conversation was influenced by inauthentic actors.

Key Findings: Fact-checking

  1. Canadians are in broad agreement that fact-checking is valuable in politics. This echoes a key finding from Research Memo 1, which found high levels of trust in traditional media to deliver objective and accurate information about the election.

  2. This report further confirms our finding from Research Memo 2 that fact-checking works in correcting misinformed voters. However, we also found that journalists are no more convincing than politicians or unaffiliated Twitter users when it comes to fact-checks.

  3. Fact-checks may help people to feel more confident in their ability to understand and participate in politics, but this is more likely to be the case for those who already understand the issues. We also find no evidence that fact-checking politicians reduces trust in politicians as a source of political information.

Key Findings: Trudeau and Blackface

  1. The general public’s discussion of the blackface story on social media dropped dramatically after three days. Tweets on the topic from journalists and election candidates showed a similar decline.

  2. Discussion on blackface-related hashtags is dominated by Conservative partisans; features links to generally informative pieces of journalism about the controversy; and lacks evidence of disproportionate inauthentic activity.

Fact-checking Findings

Fact-checking as a journalistic practice has grown in prominence in the past few years, especially since the election of an American president who habitually bends or outright ignores the truth. It’s estimated the number of fact-checking outlets around the world has quadrupled since 2014, with a 26% increase in the past year. But as fact-checking gains public profile, so do arguments about whether it is effective in informing the public or changing the minds of people who are inclined to reject information that contradicts their beliefs.

Regardless of the answers to those questions, the Canadians we surveyed were broadly in favour of fact-checking. Fifty-three percent of respondents said they supported the practice, while only 14% expressed any level of opposition. An overwhelming 73% of respondents indicated they wanted more of the practice in journalism, while only 3% indicated they wanted less of it. And there were only minor differences in support between left- and right-leaning partisans, or between those who were more or less exposed to traditional or social media.

Canadians clearly want to see more fact-checking, but how effective is it when it comes to informing the public? The Digital Democracy Project has been using survey research to study how Canadians respond when they are provided with facts related to policy issues. In Research Memo 2, we found that survey respondents who received information about Canada’s progress on its Paris Accord commitments were more likely to answer a related question correctly, regardless of their partisan leanings.

This report builds on those earlier findings by tracking Canadians’ responses to fact-checking scenarios. We provided statements from a fictional MP that included false information about Canada’s greenhouse gas emissions and violent crime rate. Respondents then received a fact-check from either a journalist, another politician or an unaffiliated Twitter user (all fictional), or no fact-check at all. We found that the fact-checks made a difference: For the crime rate question, 55% of respondents who did not receive the fact-check answered incorrectly, compared to 43% who did, a difference of 12 percentage points. For the question about emissions, the share of respondents answering incorrectly was almost 18 points lower among those who received the fact-check than among those who did not.
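For readers who want to see the arithmetic behind those comparisons, the short Python sketch below reproduces the 12-point gap on the crime-rate question. The group sizes are hypothetical placeholders; only the reported 55% and 43% incorrect-answer rates come from the survey.

```python
# A minimal sketch of the difference in incorrect-answer rates behind the
# "12 percentage points" figure for the crime-rate question. The group sizes
# are hypothetical; only the 55% and 43% rates come from the survey.

def pct_point_gap(incorrect_control, n_control, incorrect_treated, n_treated):
    """Gap in incorrect-answer rates (control minus treated), in percentage points."""
    rate_control = incorrect_control / n_control
    rate_treated = incorrect_treated / n_treated
    return (rate_control - rate_treated) * 100

# Hypothetical cell counts that reproduce the reported result:
# 55% incorrect with no fact-check vs. 43% incorrect with one.
print(pct_point_gap(incorrect_control=440, n_control=800,
                    incorrect_treated=344, n_treated=800))  # -> 12.0
```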

But perhaps surprisingly, the source of the fact-check made less of a difference. In Research Memo 1, respondents reported trusting the traditional news media as a source of political information at a rate of 5.6 out of 10, surpassing their trust in politicians (4.8) and social media (3.3). In this survey, however, there was no significant difference in responses to fact-checks from the journalist, the politician or the unaffiliated Twitter user. If journalists want to convince their audiences that they are the most reliable source for verifying information, they may have more work to do.

Finally, we used a second fact-checking scenario to evaluate the effect on respondents’ internal efficacy, or their belief that they can understand and participate in politics. Respondents were again given inaccurate claims on four different topics from a fictional politician, and anywhere from zero to four fact-checks on those claims from a fictional journalist. Respondents who knew the correct information about the politician’s claims at the outset of the experiment saw their efficacy improve as they read more fact-checks. By contrast, those who did not know the information ahead of time saw no significant change in their efficacy. This suggests that fact-checking may actually increase the inequality in internal efficacy between those with high and low levels of policy knowledge.

Social Media Findings

Our social media case study focused on the controversy around the images of a younger Prime Minister Justin Trudeau in blackface and brownface that surfaced one week into the election campaign. When a political story emerges as suddenly as this one did, it allows us to observe how social media behaviour changes in response.

Indeed, there was a dramatic spike in tweets and Facebook posts about blackface and brownface in the 24 hours after the story broke, but the decline was almost as dramatic. At its peak on Sept. 19, there were more than 13,400 tweets an hour on the topic; three days later, hourly volume never exceeded 2,614 tweets. Journalists and politicians, particularly NDP and People’s Party of Canada candidates, continued to tweet about the issue for the rest of the week, albeit in much smaller numbers. Our previous reports have shown that political insiders and journalists are not always having the same conversations as the rest of the public on social media. In this case, however, the dwindling conversations in all three spheres mirrored one another.

We also found that a substantial amount of Twitter activity around related hashtags such as #blackface or #brownface came from Conservative partisans. We identified Twitter users’ likely partisan affiliations by looking at the number of candidates from each party that they followed (see Methodology). About 11% of Conservative partisans tweeted or retweeted blackface-related hashtags, the highest share for any party. By comparison, 10% used right-wing hashtags (e.g. #trudeaumustgo, #scheer4pm) and 16% used general election hashtags (e.g. #cdnpoli, #elxn43) over the same period. Not surprisingly, Liberal partisans continued to use general election hashtags far more frequently than blackface-related hashtags, but so did NDP, Bloc Québécois and Green partisans. PPC partisans continued to promote right-wing hashtags in greater numbers than blackface hashtags.

A similar picture emerged when we looked at the co-occurrence of hashtags: how often certain election-related hashtags appear together in the same tweet. Hashtags allow Twitter users to reach beyond their usual online networks by seeing what anyone else is posting under the same tag. The #blackface hashtag was used by more Conservative partisans than anyone else, and most of the hashtags that co-occurred with #blackface were likewise dominated by Conservative and PPC partisans. This suggests that when it comes to social media conversations about the controversy, right-leaning partisans are mainly talking among themselves.
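To illustrate what a co-occurrence count involves, here is a minimal Python sketch. The sample tweets are invented placeholders; the actual analysis runs over the roughly 3 million collected tweets described in the Methodology section, and the real pipeline also handles retweets and the partisan labels discussed above.

```python
# A minimal sketch of a hashtag co-occurrence count. The tweets below are
# invented placeholders; the real analysis runs over the ~3 million tweets
# described in the Methodology section.
from collections import Counter
from itertools import combinations
import re

def extract_hashtags(text):
    """Return the set of lowercase hashtags in a tweet's text."""
    return {tag.lower() for tag in re.findall(r"#\w+", text)}

def cooccurrence_counts(tweets):
    """Count how often each pair of hashtags appears in the same tweet."""
    pairs = Counter()
    for tweet in tweets:
        tags = sorted(extract_hashtags(tweet))
        pairs.update(combinations(tags, 2))
    return pairs

sample_tweets = [
    "Seen the photos? #blackface #cdnpoli",
    "#TrudeauMustGo #blackface",
    "Debate night #cdnpoli #elxn43",
]
print(cooccurrence_counts(sample_tweets).most_common())
```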

Methodology

Our survey data team conducted an online panel survey of 1,594 Canadian citizens 18 years and older using the online sample provider Qualtrics. The sample was gathered from Sept. 19-24. Data was weighted within each region of Canada by gender and age to ensure it adequately represented the Canadian public. Survey respondents were asked questions related to basic demographics, as well as their partisan, ideological and issue preferences. We present 90% confidence intervals for each of our figures below. Partisan sub-groups are restricted to the Conservatives, Liberals and NDP for sample size considerations.
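To make the weighting and confidence-interval language concrete, the sketch below shows one common way to compute a weighted proportion with a normal-approximation 90% interval. The responses and weights are invented, and the survey's actual weighting procedure (by region, gender and age) is more involved than this toy example.

```python
# A rough sketch of a weighted proportion with a normal-approximation 90%
# confidence interval (z = 1.645). The responses and weights are invented;
# the survey's actual weighting by region, gender and age is more involved.
import math

def weighted_proportion_ci(responses, weights, z=1.645):
    """Weighted share of 1s in `responses`, with a 90% confidence interval."""
    total_w = sum(weights)
    p = sum(w for r, w in zip(responses, weights) if r == 1) / total_w
    # Kish effective sample size: accounts for the variance added by weighting.
    n_eff = total_w ** 2 / sum(w ** 2 for w in weights)
    se = math.sqrt(p * (1 - p) / n_eff)
    return p, (p - z * se, p + z * se)

# Toy data: 1 = supports fact-checking, 0 = does not.
responses = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
weights = [1.2, 0.8, 1.0, 1.1, 0.9, 1.0, 1.3, 0.7, 1.0, 1.0]
print(weighted_proportion_ci(responses, weights))
```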

For our online data research, we continue to track data from across the digital ecosystem. This week we rely on approximately 3 million tweets collected since Sept. 17. We gathered a list of all the Twitter followers of all declared major-party candidates (as of Sept. 28). We identified active partisan consumers in the election, defined as individuals who follow more than five rank-and-file candidates from the major parties; we exclude the top 5% most-followed individuals from each party to account for the very high follower counts of some party leaders, for whom a follow does not meaningfully indicate partisanship. We labelled each of these approximately 55,000 Twitter users as likely partisans of the party that accounts for the most candidates they followed. We also used text and links from candidates, news outlets, and other public groups and pages on Facebook.
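The partisan-labelling rule can be summarized in a few lines of code. The sketch below is illustrative only: the candidate handles and party lookup are invented, and it assumes the top 5% most-followed accounts per party have already been removed from the candidate list.

```python
# A minimal sketch of the partisan-labelling rule: a user who follows more
# than five rank-and-file candidates is labelled with the party whose
# candidates make up the largest share of those follows. Candidate handles
# and parties below are invented; the exclusion of the top 5% most-followed
# accounts per party is assumed to have happened already.
from collections import Counter

def label_partisan(followed_accounts, candidate_party, min_follows=5):
    """Return the inferred party label for a user, or None if below threshold."""
    parties = [candidate_party[acct] for acct in followed_accounts
               if acct in candidate_party]
    if len(parties) <= min_follows:
        return None  # too few candidate follows to infer partisanship
    return Counter(parties).most_common(1)[0][0]

# Hypothetical candidate-to-party lookup and one user's follow list.
candidate_party = {"@cand_a": "CPC", "@cand_b": "CPC", "@cand_c": "LPC",
                   "@cand_d": "CPC", "@cand_e": "NDP", "@cand_f": "CPC",
                   "@cand_g": "CPC"}
follows = ["@cand_a", "@cand_b", "@cand_c", "@cand_d", "@cand_f", "@cand_g"]
print(label_partisan(follows, candidate_party))  # -> "CPC"
```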