To encourage innovation in artificial intelligence while minimizing risks, Canada should adopt an incremental risk-management approach to AI governance, supported by two new advisory institutions.

Summary and recommendations

Artificial intelligence—the ability of machines to perform intelligent tasks such as sorting, analyzing, predicting and learning—promises substantial benefits for Canadians. Businesses that develop and commercialize AI have the potential to grow and create jobs, while organizations that adopt AI technologies can improve operations, enhance productivity and generate health, social and economic benefits for all.

Yet some AI applications pose risks to individuals and communities.

AI policy makers face a tension: they must establish conditions that allow AI to thrive and deliver benefits, while recognizing and responding to the harms that some AI applications can generate or reinforce. Options for addressing this tension range from a laissez-faire approach, which would allow AI to develop and diffuse without limit, to a precautionary approach, which would restrain the development of AI until risks are better understood and the capacity to manage them is in place.

Because AI is a platform technology with many possible applications, each carrying its own risk profile, it should be governed through an incremental risk-management approach that is sensitive to case and context, rather than through a blunt laissez-faire or precautionary approach. A risk-management approach leaves space for AI technologies and applications to develop while monitoring and managing risks as they emerge in specific applications. To institutionalize a risk-management approach to governing AI in Canada, we recommend that the Government of Canada create two new institutions:

  • an AI risk governance council
  • an algorithm impact assessment agency