Panel discussion with:
Chris Barry, President, Microsoft Canada
Owen Larter, Director of Public Policy, Office of Responsible AI, Microsoft
Moderated by Nick Taylor-Vaisey, Journalist, POLITICO Canada

Nick:

Our first speaker has over 20 years of experience at Microsoft. He’s held several leadership roles, including chief operating officer for Microsoft’s Industry Solutions business. He’s also a member of the TECHNATION board of directors and, perhaps most importantly, the president of Microsoft Canada. Welcome to Chris Barry.

Chris:

Thank you. Great to be here.

Nick:

We’re also joined by an eight-year veteran of the company. He started as a U.K. government affairs manager before making the move to the greater Seattle area. He’s now director of public policy in the Office of Responsible AI at Microsoft: Owen Larter. Welcome.

Owen:

Thank you very much.

Nick:

Why does it seem like everybody in my life, and perhaps yours, was caught off guard by, excited about, maybe a little nervous about, but absolutely rapt by the rise of chatbots, whether it’s ChatGPT or Bing or others that are emerging?

Chris:

First, it’s great to be here with this group and great to participate in this event, including the dinner that we hosted last night on many of these topics. With what we have going on right now, we can draw almost a direct parallel to what happened 30 years ago this year: the release of the first Mosaic web browser in January 1993. The internet itself had been under development for years prior, led by work at DARPA and at universities around the world, but for those of us who were of age at that time to begin tinkering with it, the advent of the Mosaic browser is really what brought to the fore the portal through which people could first interact with the internet. And that world of possibility was immediately opened. While it’s probably fair to say that very few people at the time understood what a hypertext transfer protocol was, they did understand very quickly that if they typed http://www.something.com it would transport them magically to a new place, and it was simultaneously exhilarating and maddening and fraught with anxiety and possibility. Many of us will remember that the fastest home connection at that time was 14.4 kilobits per second, which is roughly 14 millionths of the speed we get today at home if we have gigabit service.
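For reference, here is the arithmetic behind that comparison, treating both figures as bit rates (modem speeds were quoted in kilobits per second):

$$
\frac{14.4\ \text{kbit/s}}{1\ \text{Gbit/s}} = \frac{1.44\times 10^{4}\ \text{bit/s}}{10^{9}\ \text{bit/s}} = 1.44\times 10^{-5} \approx \frac{14}{1{,}000{,}000}
$$

In other words, a 14.4k modem delivered roughly 14 millionths, or about one seventy-thousandth, of gigabit speed.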

There’s sort of a direct parallel there. These things have been around, but the usefulness, the immediacy is there now. And while the technology set around things like generative AI and large language models is complex, for sure, and these things take massive compute power, the ability for people in all walks of life to immediately ideate on how they might use them, to simply get into a text window in a browser and query it, ask a question, ask it to generate something, is immediately graspable. That, in addition to its ready accessibility, I think is what’s caused this shift in the last 12 to 16 weeks as we opened up this year.

Owen:

I would also point to the really organic, bottom-up adoption of this technology. You use the phrase people being caught off guard. Some people have been caught off guard. A heck of a lot more people are actually going out and using this technology: ChatGPT was the fastest consumer product ever to a million users, and the fastest ever to 100 million users. And this technology is the product of many decades of incremental improvements. It’s now just reaching a point, as Chris elucidated, where it’s actually increasingly useful to people across society. The other thing that’s really helpful in my world, the public policy conversations: there have been lots of people talking about the risks and challenges of AI for a number of years now and how you address them. I think what’s happened over the last few months is that all of that has been mainstreamed. The use of the technology has been mainstreamed, and the conversations around the opportunities and risks and how to balance them have been mainstreamed as well. That’s a really good thing in terms of making progress.

Chris:

Just to add a small dimension to it: Owen was alluding to ChatGPT, which hit 100 million users in two months’ time. I believe Instagram took nearly five years to reach the same milestone, and it’s an app we think of as ubiquitous, one almost all of us have on our phones. You’re talking about a rate of adoption that is nearly vertical relative to some other technologies, which signifies the interest.

Nick:

Are people justified in being a little nervous, if they are, about the thin line between fascination and, if fear is too strong a word, a little bit of unease about a world that was human and now seems slightly less human because ChatGPT is opening a whole new one? From a public policy perspective, how do you talk about that with people?

Owen:

It’s a great question. Should people be nervous? Should people be fearful? It’s a question we get asked a lot. There is a lot of change happening, and change always brings uncertainty. I’m not sure being fearful alone is that helpful, though. I think people should definitely be aware of the technology, aware of the opportunities, aware of the challenges. And then I think we all need to take steps as a society to put guardrails in place so that we can realize the benefits in a responsible way. This is where I’m quite heartened by the development of the conversation over the last few months. I’ve been on the Responsible AI team at Microsoft for about three years now, and I’ve seen more change and more progress in the public conversation around AI in the last three months than in the rest of my time on the team. That’s really helpful.

There are three main trends I’ve identified, all of which I think are quite heartening. One is around adoption. We talked about ChatGPT; I think that’s really helpful. You can see it with our customers using our Azure OpenAI Service, where we’re providing enterprise-grade access to the OpenAI models. Chris is obviously an expert on this and I’m sure can say more. We’ve seen, month on month, a 10x increase in people using this service. So, there is just a huge amount of excitement, both bottom-up and organic on the ChatGPT and new Bing side, and from enterprise customers as well. The adoption tells you something: people are finding value in this technology. What you’re also seeing is an acceleration of public policy conversations, with countries right around the world now trying to work out what the right regulatory guardrails for this technology are. And I think that’s really, really helpful.
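For readers curious what that enterprise-grade access looks like in practice, here is a minimal sketch of querying a chat model through the Azure OpenAI Service with the official openai Python package (v1+). The endpoint, API key and deployment name are placeholders, not values from this discussion.

```python
# Minimal sketch: querying a chat model via the Azure OpenAI Service.
# The endpoint, key and deployment name below are placeholders.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",  # your Azure resource
    api_key="YOUR-API-KEY",
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="YOUR-DEPLOYMENT-NAME",  # the model deployment created in Azure
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "In two sentences, what is a large language model?"},
    ],
)

print(response.choices[0].message.content)
```

The same request pattern scales from a quick experiment to production use, which is part of why enterprise adoption has been able to move so quickly.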

You’re starting to see an international aspect to that conversation as well, with countries realizing that this is international technology that is developed and used across borders. And so, when you’re building regulation, it makes a lot of sense to make sure that regulation is interoperable across borders. That’s not to say every country should have identical regulation, but core concepts should be interoperable. And I do think Canada is right at the front of this global conversation with the discussion on the Artificial Intelligence and Data Act (AIDA). People around the world are looking at what is going on here, seeing the act as a really good starting point with a really sound philosophy, and paying attention to the conversation in Canada.

The last point I’ll make where I think there’s been really positive progress, really over the last six months, and I think this is accelerating as well, is on standards and frameworks and tools that we can use right now to use AI in a more responsible way. Regulation will take a little while to come into place, but there are ISO standards now around things like risk management and building AI management systems. The NIST AI Risk Management Framework in the U.S. sets out a template for how any company can put together an AI governance framework to address the risks of AI. There’s big progress on tooling on the responsible AI side so that developers can better understand and address challenges in their models. Really good progress across all those different areas is what gives me a lot of hope.

Chris:

In terms of things that might be unsettling during this time, there’s certainly been any amount of press, and conversations we’ve been part of, asking how disruptive this is in terms of things like employment. And it’s a double-edged sword. On the one hand, there are significant productivity enhancements to be gained from this set of technologies. Through GitHub, a company we acquired several years ago, we’ve introduced Copilot, a service built into our development platform that developers can code alongside. And what we’re finding is that, on average, they’re 45 percent more productive, which is a staggering figure when you think about the expense of a developer. At one level you might say, ‘Well, gosh, that’s terrifying.’ Or maybe it’s great if you own a company; you can fire half the developers. But that’s not the point. The point is, in addition, they’re about 75 percent more satisfied with their jobs. Why? Because they’re getting to focus on higher-order problems rather than the drudgery of debugging, of compiling libraries and that type of thing. We will see this play out as this set of technologies is adopted, and there will be moments that are disquieting. But there’s going to be massive opportunity as well.

Nick:

I know we want to talk about regulation and expand on the opportunities. Before we get there: you work for a company that is innovating very quickly on this. When you’re talking about risk and the protocols in place to manage that risk as you innovate, not just with chatbots but across the sector more broadly, what kinds of risks are there and how are you managing them at what feels like lightning speed?

Owen:

It’s a really important part of the conversation. I don’t think we want to give the impression at Microsoft that we don’t appreciate that there are risks here. There is an immense opportunity and there has to be a balance there, but for sure there are risks, and I break them down in a few ways. Firstly, we need to make sure that we’re developing and deploying AI in a way that is responsible and ethical. One thing that comes to mind is the way in which AI is increasingly being used right across society to take material, consequential decisions about individuals. Does someone get access to credit? Does someone get into a university? It simply cannot be the case that a system being used to support a decision like that might be discriminatory or biased in some way. That is a real risk that needs to be addressed.

There is also a geopolitical element to this conversation. We need to make sure that we’re advancing AI in a way that supports the economic competitiveness, sovereignty and national security of democratic countries in particular. That’s something we need to give a lot of attention to. And then I would build on the point that Chris made: this will be a period of real change. There is real opportunity, but we need to be particularly thoughtful about some very sensitive aspects of society, the education space, for example. What will the changes to the world of work mean for those of us who are working at the moment? And how do we help guide those changes in the right direction so that AI is beneficial broadly across society, not just to a small group?

Nick:

There is legislation on the table right now, Bill C-27, which is about more than artificial intelligence, but certainly there’s a big chunk of it about AI. Can you talk about what problem that bill may be solving or should solve and what opportunities it may unlock? And then we can talk about the reality of what the bill does accomplish and how that actual debate is playing out. But what can legislation do? What can it accomplish?

Owen:

It makes total sense. Let me zoom out a little bit. Legislation and regulation are going to play a really important role, but this is a broad societal conversation around how you create new institutional frameworks for a completely transformative technology. Regulation is needed for sure; you need new rules to guide new technology. Government is going to have to play a leading role here, just as it has throughout history in leading the charge and putting the rules in place for transformative new technologies. Government is going to have to build infrastructure, and it’s going to have to make the rules. The AI and Data Act is a really good starting point. Its philosophy is to focus on flexible processes for risk identification and mitigation, processes that can stretch to the breadth of the AI ecosystem and keep pace with the technology as it develops, because it’s going to continue to develop quickly. That’s a core part of the act’s philosophy that we’re very supportive of.

Some of the conversation has been around this: with things moving so quickly, how do you make sure the really sensible intent and philosophy of the bill are properly captured in the bill’s language, so everyone knows what the framework is going to look like and what they’re going to be subject to, and so that we can all debate it in the meantime? Government has a really, really important role to play and to lead. I would call out a couple of other sectors of society that I think are also important. Industry clearly has a really important responsibility to step up; it will need to follow the rules when they’re made and, of course, we’ll do that. I think we also need to take steps, and continue to take steps, to demonstrate that we are trustworthy. So, we’ve put a lot of work into our internal Responsible AI program. We’ve been building that out for six years now, and we are trying to do more in sharing that information externally. We have our Responsible AI Standard, a set of requirements that any team at Microsoft that is developing and deploying AI has to abide by. We’ve published that now, so if you want to go and check it out and learn more about Microsoft’s Responsible AI Standard, it’s a public document. We’ve also published our AI impact assessment template: any team at Microsoft that’s developing and deploying AI has to put together an impact assessment. We do this to show that we’re walking the walk, quite frankly, not just saying nice words around responsible AI, and also in the hope that others can build on the work that we’re doing.

The final point I’ll make in terms of this broader societal framework and shift that we need to be mindful of is civil society and academia have a really, really important role to play here. These are very complicated, often technical, conversations that are going to impact the breadth of society. We need civil society and academia to help chart a way forward, to identify the opportunities and challenges and, quite frankly, to provide scrutiny; to scrutinize the technology, scrutinize the companies developing and using the technology. So, they’re going to play a really important role as well.

Chris:

To be clear, we at Microsoft support the regulation of AI globally. We absolutely do. And we think it’s great that Canada is taking a leadership position on that journey with AIDA. The tension point that I think Owen was getting at is what’s in the bill itself versus what is left to be relied upon in regulation that is subsequently developed, and what the balance is between them: specific enough to move at the pace at which this technology is evolving, and at which this groundswell of demand is growing, without becoming an impediment to innovation and leadership opportunity, particularly in Canada, which is a world leader in AI technology. So, it’s a careful balance. We absolutely support regulation; these are powerful tools. And the bill as drafted (I’m not a total expert) includes criminal penalties, so you’ve got to strike the right balance, making the bill specific enough that players across the ecosystem know how to comport themselves. I know that’s obviously being worked on right now.

Nick:

What you phrased as a question was actually the question I wanted to put to both of you: what is that balance? With so many stakeholders who want to, and deserve to, be part of the conversation, and a government whose legislation leaves many of the details to regulation, how do you not end up with a law that falls hopelessly behind the technology, or has some gap in its enforcement, or something unforeseen? How do you strike that balance?

Owen:

Look, it’s a big problem that we all have to solve, and certainly in the public policy team working on AI this is something we feel we have to make a meaningful contribution to. It’s something we need to get right as a society. I think there are some important first steps to be taken. And I would agree with Chris; we definitely feel there is a need for a regulatory framework here. We feel the conversation in Canada is really positive and productive, and that’s why we want to lean into it. A few more concrete points: I think you need to identify the particular harms in the here and now that you can address. There are AI systems making the kinds of consequential decisions I mentioned that are relatively well understood and are increasingly being used across society. So, how do you make sure that when AI is being used in what you might refer to as a high-risk domain, to take a consequential decision, there is regulation around that? That’s something we understand a bit more.

You also need to make sure that this regulation is durable, as you mentioned, both durable and flexible, because if you think about the breadth of the AI ecosystem, AI is not just one thing; there are a whole load of different types of AI systems being deployed in a whole load of different scenarios across pretty much every sector of society. So, how do you regulate that? It’s challenging, but what you can do, and this is the philosophy of AIDA, I would say, is focus on flexible processes around risk management: how do you go through a process of identifying the risks of your particular use case and mitigating those challenges? Focusing on those processes will be really helpful.

Finally, because of the speed element, because of the need to be nimble and agile and keep up with the development of the technology, relying on some of the other tools in the regulatory toolkit will be really helpful. I’m thinking about things like standards again. On the ISO standards, there’s great work that has been going on on that front in the background for a number of years now, and it’s really helpful to be able to draw on that. There are more standards being developed on risk management tools, and more standards being developed on evaluation and measurement; we’ll have to build those out. And then there are templates. I’ll mention the NIST AI Risk Management Framework again because I think it’s a really important contribution to the conversation: it’s good to go, published a couple of months ago. NIST has also created really helpful frameworks on the privacy front and the cybersecurity front. Companies can use the AI Risk Management Framework now to start building out their own AI governance frameworks.

Nick:

Let’s talk about opportunities. When you think about the impact that AI can have on the workforce and growth in Canada, what comes to mind as you’re looking at the work your company is doing, looking at the sector more broadly?

Chris:

The opportunity here is incredibly vast. We at Microsoft are on a mission to make these tools readily available to people throughout society. At a product level, we’ve announced that we will be infusing our Office products with the copilot capability, so it will be able to author with you and analyze data with you in Excel and things of that nature. And that’s great. So, there’s the broadly applicable piece. There’s also what we’re already seeing happen in the early stages of this unfolding.

Two examples I’d point to: Look at the City of Kelowna in British Columbia. They have a trial going that leverages the capability of ChatGPT and the OpenAI technologies to essentially create a bot that can evaluate building permits for homes, and do it at a level of specificity that covers variances, code variances, all of the minutiae that a city planner would normally spend days or weeks vetting, reduced down to literally minutes, mere minutes. This isn’t about, ‘Let’s get rid of the city planners of Kelowna, who are already strapped for resources,’ but rather: how do we make them more effective, and how do we, in a sense, over time, turn that city office into one that’s open 24/7? That’s a tiny example. But they will scale it out and bring it into production soon. And my understanding is the City of Vancouver is looking at something similar, and who knows where it will go from there. That’s just a very tangible example.
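As an illustration only, not a description of Kelowna’s actual system, here is a minimal sketch of what such a permit-review assistant might look like using the same Azure OpenAI chat API as in the earlier sketch. The resource name, key, deployment name and prompts are all hypothetical.

```python
# Hypothetical sketch of an LLM-based permit-review assistant.
# The endpoint, key, deployment and prompts are illustrative only;
# this is not the City of Kelowna's actual implementation.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",  # placeholder
    api_key="YOUR-API-KEY",                                   # placeholder
    api_version="2024-02-01",
)

def review_permit(application_text: str, code_excerpts: str) -> str:
    """Ask the model to flag potential code variances in a permit application."""
    response = client.chat.completions.create(
        model="YOUR-DEPLOYMENT-NAME",  # placeholder deployment name
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a municipal plan-review assistant. Flag any "
                    "variances from the supplied building-code excerpts, "
                    "citing the relevant clause for each flag."
                ),
            },
            {
                "role": "user",
                "content": (
                    f"Building-code excerpts:\n{code_excerpts}\n\n"
                    f"Permit application:\n{application_text}"
                ),
            },
        ],
    )
    return response.choices[0].message.content
```

In a workflow like this, a human planner would still review the flagged items; the model compresses the initial read-through, which is the time saving described above.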

Another I’d point to: I’m looking around the room for Dr. Amrit Sehdev, who was part of our experts dinner last night. Dr. Sehdev is a physician, and his wife is an optometrist. He was explaining to us that in Canada there is an acute need to increase the capacity to do diagnostics using retinal scans. A retinal scan is very informative for detecting early diabetes, high blood pressure and a number of other maladies. And yet the University of Waterloo is the only school in this country that mints new optometrists every year, somewhere between 20 and 22 per year for the entire nation. We have this massive capacity gap, and using computer vision technologies, amongst others, these scans can be digitized and their analysis automated, so those diagnoses can be made much more quickly. It’s a very tangible example of how you compress the time to early intervention, getting people properly diagnosed using a very noninvasive technology: no blood test, nothing like that. I thought that was a very impactful example of the promise, the potential of this type of technology. The list is endless, but those are two that have been resonating.

Owen:

Just to build on what Chris was saying: the risks of this technology are real; we’ve talked about them. But the opportunity is so vast, and I thought Chris gave a really good overview of it. Just a few of the things that I’m really excited about: the way in which generative AI is already being used to drive some really impressive scientific breakthroughs. You’re already seeing research into how you can use generative AI to generate net new medicines and therapies; this is going on now. New high-performance materials are going to be completely transformative right across society. One of the things I’m quite excited about, and it’s a bit wonkish, is how you can use AI to better measure and understand complicated systems. I think about this in the public policy world: how can you better understand the impact of the policy that you’re putting together? There’s some really interesting research that’s been done using satellite imagery to better understand and predict where economic growth is occurring, just from that data. I think we’ll see much more of this, and we’ll have a much better understanding of how the economy is working and how society is working. We might even be able to predict the weather accurately at some point in the future, you never know. So, I’m excited about that as well.

One of the things I’m actually most excited about sounds really mundane, but I think it will be really helpful for everyone. Chris talked a little bit about Microsoft 365 and the way we’re bringing AI to our Office products. We’ve already been able to play around with this a little. There is going to be a day soon when you can just type into your chatbot, ‘Can you make sure that the background on slide six is the same colour as the background on slide two? Can you format the document so these bullets actually line up,’ stuff like that. It’s really mundane, and I think everyone would agree it’s fairly low risk, but it’s going to be really transformative and really helpful for all of us. So, I’m really excited about these quite small but, I think, very helpful improvements as well.

Nick:

A perfectionist’s dream.

Owen:

Yes, there you go.

Nick:

And this is a room full of perfectionists. We joked on a call before this panel that this would be a fast 30 minutes, and it’s been 29 minutes and 57 seconds. That flew by and we’re out of time. So, thank you, Chris. Thank you, Owen. And thank you, everybody, for listening.