I think it would be desirable if everyone at the forefront of developing next-generation AI systems, especially truly transformative superintelligent systems, had the ability to pause at key milestones. That would be useful for safety.
I would be much more skeptical of proposals that seem to create a risk of this turning into a permanent ban on AI. That seems much less likely than the alternative, but more likely than it did two years ago. Ultimately, it would be a huge tragedy if this were never developed, if we somehow remained confined to being apes in need, in poverty and disease. Like, are we going to do this for a million years?
Coming back for a moment to the existential risk of AI, are you generally satisfied with the efforts made to address it?
Well, the conversation is all over the place. There are also a number of more immediate issues that deserve our attention: discrimination, privacy, intellectual property, et cetera.
Companies that care about the long-term consequences of their activities are investing in AI safety and trying to engage policymakers. I think the bar will kind of have to be raised incrementally as we move forward.
Unlike the so-called AI “doomers,” some advocate worrying less and speeding up more. What do you think of this movement?
People sort themselves into different tribes, which can then fight pitched battles. To me it seems clear that it is just very complex, and very hard to figure out what actually makes things better or worse along particular dimensions.
I’ve spent three decades thinking seriously about these things, and I have views on specific questions, but the overall takeaway is that I still feel quite in the dark. Maybe these other people have found some shortcut to brilliant insights.
Perhaps they are also reacting to what they see as knee-jerk negativity towards technology?
That is also true. If something swings too far in one direction, it naturally provokes a reaction. I hope that even though a lot of individually irrational people take strong, confident positions in opposite directions, it somehow balances out into some overall sanity.
I think a lot of frustration is building up. Maybe as a corrective they’re right, but I think ultimately there needs to be some sort of synthesis.
Since 2005, you have worked at the Future of Humanity Institute, which you founded at the University of Oxford. Last month, it announced its closure after friction with the university bureaucracy. What happened?
It had been building for several years, a sort of running struggle with the local bureaucracy: a hiring freeze, a fundraising freeze, just a pile of impositions. It became impossible to run the place as a dynamic, interdisciplinary research institute. To be honest, we were always a bit of a misfit in the philosophy department.
What’s next for you?
I feel an immense sense of emancipation, having perhaps had my fill of faculty life for a while. I want to spend some time, I think, just looking around and thinking about things without a very well-defined agenda. The idea of being a free man seems very attractive.