- The University of Oxford's Future of Humanity Institute is led by Professor Nick Bostrom, who is the author of "Superintelligence."
- Over at the University of Cambridge, just 66 miles away, there is the Center for the Study of Existential Risk and the Leverhulme Center for the Future of Intelligence.
- Researchers at both universities are carefully studying how to ensure artificial intelligence is developed safely.
Oxford and Cambridge, the oldest universities in Britain and two of the oldest in the world, are keeping a watchful eye on the buzzy field of artificial intelligence (AI), which has been hailed as a technology that will bring about a new industrial revolution and change the world as we know it.
Over the last few years, each of the centuries-old institutions has pumped millions of pounds into researching the possible risks associated with machines of the future.
Clever algorithms can already outperform humans at certain tasks. For example, they can beat the best human players in the world at incredibly complex games like chess and Go, and they're able to spot cancerous tumors in a mammogram far quicker than a human clinician can. Machines can also tell the difference between a cat and a dog, or determine a random person's identity just by looking at a photo of their face. They can also translate languages, drive cars, and keep your home at the right temperature. But generally speaking, they're still nowhere near as smart as the average 7-year-old.
The main issue is that AI can't multitask. For example, a game-playing AI can't yet paint a picture. In other words, AI today is very "narrow" in its intelligence. However, computer scientists at the likes of Google and Facebook are aiming to make AI more "general" in the years ahead, and that's got some big thinkers deeply concerned.
Nick Bostrom, a 47-year-old Swedish-born philosopher and polymath, founded the Future of Humanity Institute (FHI) at the University of Oxford in 2005 to assess how dangerous AI and other potential threats might be to the human species.
In the main foyer of the institute, complex equations beyond most people's comprehension are scribbled on whiteboards next to words like "AI safety" and "AI governance." Pensive students from other departments pop in and out as they go about their daily routines.
It's rare to get an interview with Bostrom, a transhumanist who believes that we can and should augment our bodies with technology to help eliminate ageing as a cause of death.
"I'm quite protective about research and thinking time so I'm kind of semi-allergic to scheduling too many meetings," he says.
Tall, skinny and clean-shaven, Bostrom has riled some AI researchers with his openness to entertain the idea that one day in the not-so-distant future, machines will be the top dog on Earth. He doesn't go as far as to say when that day will be, but he thinks that it's potentially close enough for us to be worrying about it.
If and when machines possess human-level artificial general intelligence, Bostrom thinks they could quickly go on to make themselves even smarter and become superintelligent. At this point, it's anyone's guess what happens next.
The optimist says the superintelligent machines will free up humans from work and allow them to live in some sort of utopia where there's an abundance of everything they could ever desire. The pessimist says they'll decide humans are no longer necessary and wipe them all out. Billionaire Elon Musk, who has a complex relationship with AI researchers, recommended Bostrom's book "Superintelligence" on Twitter.
Bostrom's institute has been backed with roughly $20 million since its inception. Around $14 million of that came from the Open Philanthropy Project, a San Francisco-headquartered research and grant-making foundation. The rest of the money has come from the likes of Musk and the European Research Council.
Located in an unassuming building down a winding road off Oxford's main shopping street, the institute is full of mathematicians, computer scientists, physicians, neuroscientists, philosophers, engineers and political scientists.
Eccentric thinkers from all over the world come here to have conversations over cups of tea about what might lie ahead. "A lot of people have some kind of polymath (streak) and they are often interested in more than one field," says Bostrom.
The FHI team has scaled from four people to about 60 people over the years. "In a year, or a year and a half, we will be approaching 100 (people)," says Bostrom. The culture at the institute is a blend of academia, start-up and NGO, according to Bostrom, who says it results in an "interesting creative space of possibilities" where there is "a sense of mission and urgency."
If AI somehow becomes much more powerful, there are three main ways in which it could end up causing harm, according to Bostrom. They are:
- AI could do something bad to humans.
- Humans could do something bad to each other using AI.
- Humans could do bad things to AI (in this scenario, AI would have some sort of moral status).
"Each of these categories is a plausible place where things could go wrong," says Bostrom.
With regards to machines turning against humans, Bostrom says that if AI becomes really powerful then "there's a potential risk from the AI itself that it does something different than anybody intended that could then be detrimental."
In terms of humans doing bad things to other humans with AI, there's already a precedent there as humans have used other technological discoveries for the purpose of war or oppression. Just look at the atomic bombings of Hiroshima and Nagasaki, for example. Figuring out how to reduce the risk of this happening with AI is worthwhile, Bostrom says, adding that it's easier said than done.
Asked if he is more or less worried about the arrival of superintelligent machines than he was when his book was published in 2014, Bostrom says the timelines have contracted.
"I think progress has been faster than expected over the last six years with the whole deep learning revolution and everything," he says.
When Bostrom wrote the book, there weren't many people in the world seriously researching the potential dangers of AI. "Now there is this small, but thriving field of AI safety work with a number of groups," he says.
While there's potential for things to go wrong, Bostrom says it's important to remember that there are exciting upsides to AI and he doesn't want to be viewed as the person predicting the end of the world.
"I think there is now less need to emphasize primarily the downsides of AI," he says, stressing that his views on AI are complex and multifaceted.
Bostrom says the aim of FHI is "to apply careful thinking to big picture questions for humanity." The institute is not just looking at the next year or the next 10 years; it's looking at everything in perpetuity.
"AI has been an interest since the beginning and for me, I mean, all the way back to the 90s," says Bostrom. "It is a big focus, you could say obsession almost."
The rise of technology is one of several plausible forces that could change the "human condition," in Bostrom's view. AI is one of those technologies, but there are also groups at the FHI looking at biosecurity (viruses etc.), molecular nanotechnology, surveillance tech, genetics, and biotech (human enhancement).
When it comes to AI, the FHI has two groups: one does technical work on the AI alignment problem and the other looks at governance issues that will arise as machine intelligence becomes increasingly powerful.
The AI alignment group is developing algorithms and trying to figure out how to ensure complex intelligent systems behave as we intend them to behave. That involves aligning them with "human preferences," says Bostrom.
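"Human preferences" points at a concrete research problem. One widely cited approach in the alignment literature (not necessarily the FHI group's own method) is to learn a "reward model" from pairwise human comparisons and then steer a system's behavior with it. The toy Python sketch below uses invented feature vectors and a made-up preference signal purely to illustrate the idea.

```python
# Minimal sketch, not FHI code: fit a toy linear reward model to synthetic
# pairwise preference comparisons using a Bradley-Terry / logistic loss.
import numpy as np

rng = np.random.default_rng(0)

n_features = 5
true_w = rng.normal(size=n_features)  # hidden "human preference" weighting


def sample_comparison():
    """Return two random outcomes and a label: 1 if outcome a is preferred."""
    a, b = rng.normal(size=(2, n_features))
    prefers_a = float(true_w @ a > true_w @ b)
    return a, b, prefers_a


# Learn reward weights w so that sigmoid(r(a) - r(b)) matches the labels.
w = np.zeros(n_features)
lr = 0.1
for _ in range(2000):
    a, b, label = sample_comparison()
    logit = w @ (a - b)
    p = 1.0 / (1.0 + np.exp(-logit))  # predicted P(a preferred over b)
    grad = (p - label) * (a - b)      # gradient of the logistic loss
    w -= lr * grad

# The learned weights roughly recover the direction of the hidden preferences.
print("learned reward direction:", np.round(w / np.linalg.norm(w), 2))
print("true preference direction:", np.round(true_w / np.linalg.norm(true_w), 2))
```

In real systems the "outcomes" are model behaviors rather than random vectors, and the comparisons come from people, but the underlying trick of turning human judgments into a trainable objective is the same.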
Roughly 66 miles away at the University of Cambridge, academics are also looking at threats to human existence, albeit through a slightly different lens.
Researchers at the Center for the Study of Existential Risk (CSER) are assessing biological weapons, pandemics, and, of course, AI.
"One of the most active areas of activities has been on AI," said CSER co-founder Lord Martin Rees from his sizable quarters at Trinity College in an earlier interview.
Rees, a renowned cosmologist and astrophysicist who was the president of the prestigious Royal Society from 2005 to 2010, is retired so his CSER role is voluntary, but he remains highly involved.
It's important that any algorithm deciding the fate of human beings can be explained to human beings, according to Rees. "If you are put in prison or deprived of your credit by some algorithm then you are entitled to have an explanation so you can understand. Of course, that's the problem at the moment because the remarkable thing about these algorithms like AlphaGo (Google DeepMind's Go-playing algorithm) is that the creators of the program don't understand how it actually operates. This is a genuine dilemma and they're aware of this."
The idea for CSER was conceived in the summer of 2011 during a conversation in the back of a Copenhagen cab between Cambridge academic Huw Price and Skype co-founder Jaan Tallinn, whose donations account for 7-8% of the center's overall funding and equate to hundreds of thousands of pounds.
"I shared a taxi with a man who thought his chance of dying in an artificial intelligence-related accident was as high as that of heart disease or cancer," Price wrote of his taxi ride with Tallinn. "I'd never met anyone who regarded it as such a pressing cause for concern — let alone anyone with their feet so firmly on the ground in the software business."
CSER is studying how AI could be used in warfare, as well as analyzing some of the longer term concerns that people like Bostrom have written about. It is also looking at how AI can turbocharge climate science and agricultural food supply chains.
"We try to look at both the positives and negatives of the technology because our real aim is making the world more secure," says Seán ÓhÉigeartaigh, executive director at CSER and a former colleague of Bostrom's. ÓhÉigeartaigh, who holds a PhD in genomics from Trinity College Dublin, says CSER currently has three joint projects on the go with FHI.
External advisors include Bostrom and Musk, as well as other AI experts like Stuart Russell and DeepMind's Murray Shanahan. The late Stephen Hawking was also an advisor.
The Leverhulme Center for the Future of Intelligence (CFI) was opened at Cambridge in 2016 and today it sits in the same building as CSER, a stone's throw from the punting boats on the River Cam. The building isn't the only thing the centers share — staff overlap too and there's a lot of research that spans both departments.
Backed with over £10 million from the grant-making Leverhulme Trust, the center is designed to support "innovative blue skies thinking," according to ÓhÉigeartaigh, its co-developer.
Was there really a need for another one of these research centers? ÓhÉigeartaigh thinks so. "It was becoming clear that there would be, as well as the technical opportunities and challenges, legal topics to explore, economic topics, social science topics," he says.
"How do we make sure that artificial intelligence benefits everyone in a global society? You look at issues like who's involved in the development process? Who is consulted? How does the governance work? How do we make sure that marginalized communities have a voice?"
The aim of CFI is to get computer scientists and machine-learning experts working hand in hand with people from policy, social science, risk and governance, ethics, culture, critical theory and so on. As a result, the center should be able to take a broad view of the range of opportunities and challenges that AI poses to societies.
"By bringing together people who think about these things from different angles, we're able to figure out what might be properly plausible scenarios that are worth trying to mitigate against," said ÓhÉigeartaigh.