Demis Hassabis on AI at ‘a pivotal moment in human history’
The tech industry can’t move fast and break things on its way to developing artificial intelligence, Dr. Demis Hassabis, co-founder and CEO of Google DeepMind, warned on Tuesday evening at an event hosted in Toronto by the Gairdner Foundation and the Public Policy Forum.
“We should use the scientific method as we’re approaching this very pivotal moment in human history and…not do the typical Silicon Valley thing of moving fast and breaking things and then fixing them afterwards,” Hassabis said. “I think this is too important a technology to work like that.”
Hassabis was joined on stage by Shingai Manjengwa, head of education at ChainML, founder of Fireside Analytics, and a PPF Fellow. Together they discussed the implications of AI programs such as those being developed by DeepMind for health care and research, as well as the more existential questions the technology is suddenly prompting.
The hype surrounding the sector plays a role in Hassabis’s warning about the way AI is being developed and released into the world. In March, Hassabis – who has been named the 2023 Canada Gairdner International Award Laureate – co-signed an open letter from global AI luminaries suggesting a six-month pause on training new AI systems. And, ahead of his talk Tuesday in Toronto, Hassabis told The Guardian that humans should “take the risks of AI as seriously as other major global challenges, like climate change.”
Hassabis called for an international oversight body, akin to the UN’s Intergovernmental Panel on Climate Change, to monitor AI’s development.
But Hassabis’s own work in the AI field has already shown how the technology can create significant benefits for humans, particularly when it comes to health research. After DeepMind’s AlphaGo proved its ability to outmanoeuvre Lee Sedol, the South Korean master of Go (the ancient strategy board game), in 2016, Hassabis’s company moved toward what he said Tuesday had always been his passion – applying AI to scientific discovery, making it the “ultimate tool to help with science.”
At the top of his list of challenges to solve was the “protein folding problem,” which had vexed the scientific community for decades. Made up of strings of amino acids that naturally fold themselves into three-dimensional structures, proteins are vital to all life on Earth. To know a protein’s structure is to know what it is for, or what it can do. And knowing what it can do can, in turn, help researchers understand how and why organisms live and die.
But a protein’s natural 3D complexity meant that, for many years, predicting its structure could itself take years of painstaking research. “There are potentially 10³⁰⁰…possible shapes that an average protein can take,” Hassabis explained. It would be far easier if a protein’s structure could be predicted from its amino acid sequence alone. Since 2020, DeepMind’s AlphaFold has proven remarkably adept at doing just that: it has so far predicted more than 200 million protein structures, all of which are available for free in a database for researchers to reference.
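For readers curious how those predictions are accessed in practice, the sketch below shows one way a researcher might pull a predicted structure from the public AlphaFold Protein Structure Database. It is a minimal illustration, not DeepMind’s own tooling: the endpoint pattern, response field name and the example UniProt accession are assumptions based on the database’s published API and should be checked against its current documentation.

```python
# Minimal sketch: download an AlphaFold-predicted structure for one protein.
# Assumes the public AlphaFold DB REST API (https://alphafold.ebi.ac.uk) and its
# documented response fields; verify both before relying on this in real work.
import requests

def fetch_alphafold_structure(uniprot_accession: str, out_path: str) -> None:
    """Save the predicted PDB file for a UniProt accession, if one exists."""
    api_url = f"https://alphafold.ebi.ac.uk/api/prediction/{uniprot_accession}"
    meta = requests.get(api_url, timeout=30)
    meta.raise_for_status()
    entry = meta.json()[0]        # the API returns a list of prediction records
    pdb_url = entry["pdbUrl"]     # assumed field name per the public API docs
    pdb = requests.get(pdb_url, timeout=30)
    pdb.raise_for_status()
    with open(out_path, "wb") as handle:
        handle.write(pdb.content)

if __name__ == "__main__":
    # Example accession (human hemoglobin subunit beta); any UniProt ID works
    # if the database holds a prediction for it.
    fetch_alphafold_structure("P68871", "AF-P68871.pdb")
```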
“We’ve been kind of amazed by what the scientific community has done with all of this,” Hassabis said, pointing to work ranging from enzymes that break down single-use plastics to vaccine development for neglected diseases.
“One of the early adopters of AlphaFold was the Drugs for Neglected Diseases Initiative… and they work on things like leishmaniasis and Zika virus and viruses that affect poor parts of the world and [that] often don’t have a lot of…pharma companies working on those things.”
And while Hassabis said he hopes that in a decade AlphaFold “won’t just be an isolated success story” – pointing to AI’s potential to revolutionize fields like quantum chemistry, pure mathematics and plasma containment – “we need to pioneer in a responsible way.”
This is especially true for something as potentially transformative as AGI, or artificial general intelligence, a (currently) theoretical form of machine intelligence that is equal to, or greater in capability than, the human mind, he said.
Five key takeaways from Demis Hassabis’s Gairdner Foundation talk:
What AI can and can’t do (yet)
In discussing AlphaGo, Hassabis framed the current state of AI in terms of three layers of creativity, two of which AI can currently exhibit.
Interpolation
If you were to prompt an AI to create a new picture of a cat based on a database of one million cat photos, it could produce an average of all those cats and generate a new cat image. This is creative insofar as the AI-generated cat did not exist among the examples, “but obviously that’s a very simple form of creativity,” Hassabis said.
Extrapolation
According to Hassabis, this is what AlphaGo did: coming up with something genuinely new – something “that was not in the distribution of what you saw before” – based on experience. He pointed to AlphaGo’s famous Move 37 against Lee Sedol, a move no one was known to have ever played in the game’s long history but that was ultimately key to AlphaGo’s win.
Invention
This is “out-of-the-box thinking,” Hassabis said – something AI is not currently capable of. Invention would mean not simply mastering Go, but inventing the game itself.
AI hype is distracting and dangerous
During the event’s Q-and-A, Manjengwa asked Hassabis whether he missed the days when nobody was talking about AI. “I do, actually,” Hassabis responded. “The hype today is a little bit out of control, I would say. It’s phenomenal technology, and of course I’ve believed in it for decades now and it’s fantastic to see it working to the level it is. But it would be better if it was maybe more considered, how we’re advancing the frontier, rather than what seems like, at the moment, more like the wild west.”
Workforce implications
AlphaFold’s dramatic reduction in the time needed to determine protein structures may also mean less need for researchers, both professional and academic. What are PhD students supposed to work on now? Hassabis said that within a few months of AlphaFold’s release, he saw students at UK universities already using it as “a completely standard part of their workflow.” That frees them up for downstream work, like understanding a protein’s function or designing a compound to bind to it.
“It’s not like science is solved. There are plenty and plenty of things to do, and I think that’s what good technology has always done in the past – it’s a really helpful tool that frees you up… to think of the next level up in the problem stack.”
AlphaFold and the future of mRNA vaccines
Will Google DeepMind take a stab at the problem of 3D RNA structures as it did with proteins, given the growing importance of mRNA vaccines? “Yeah, we have,” Hassabis said, suggesting his company is already working on an “AlphaFold 3.”
While AlphaFold solved the “static picture of the protein,” the reality is that proteins interact with other proteins and compounds, “but also RNA and DNA,” he said. As for the results of that work? “Watch this space,” Hassabis said, suggesting something to come “in the next year or so.”
Is AI sentient?
No, Hassabis said – even if the definition of consciousness itself is still a matter of debate. However, as they become more sophisticated over the next few years or decades, AI programs will likely be capable of carrying out more complex cognitive functions – maybe even “the invention aspect of creativity,” Hassabis said.
Yet it is at that point that AI may ultimately help us define consciousness, he explained. By using a highly sophisticated AI as a control, we could explore what’s unique about our own human minds, Hassabis said. “Invention, dreaming, emotions, consciousness would be another one in that mix – all these mysteries of the mind. It would help to know how that all works.”