Cambridge University will set up a new academic centre, which will focus on studying the negative effects of advances in AI and automation. More from the Daily Mail:
A centre for ‘terminator studies’, where leading academics will study the threat that robots pose to humanity, is set to open at Cambridge University.
Its purpose will be to study the four greatest threats to the human species – artificial intelligence, climate change, nuclear war and rogue biotechnology.
The Centre for the Study of Existential Risk (CSER) will be co-launched by Lord Rees, the astronomer royal and one of the world’s top cosmologists.
Rees’s 2003 book Our Final Century had warned that the destructiveness of humanity meant that the species could wipe itself out by 2100.
The idea that machines might one day take over humanity has featured in many science fiction books and films, including The Terminator, in which Arnold Schwarzenegger stars as a homicidal robot.
I’ll admit my own incredulity, but this appears to be a real thing. From the Cambridge site:
Now a philosopher, a scientist and a software engineer have come together to propose a new centre at Cambridge, the Centre for the Study of Existential Risk (CSER), to address these cases – from developments in bio and nanotechnology to extreme climate change and even artificial intelligence – in which technology might pose “extinction-level” risks to our species.
“At some point, this century or next, we may well be facing one of the major shifts in human history – perhaps even cosmic history – when intelligence escapes the constraints of biology,” says Huw Price, the Bertrand Russell Professor of Philosophy and one of CSER’s three founders, speaking about the possible impact of Good’s ultra-intelligent machine, or artificial general intelligence (AGI) as we call it today.
“Nature didn’t anticipate us, and we in our turn shouldn’t take AGI for granted. We need to take seriously the possibility that there might be a ‘Pandora’s box’ moment with AGI that, if missed, could be disastrous. I don’t mean that we can predict this with certainty, no one is presently in a position to do that, but that’s the point! With so much at stake, we need to do a better job of understanding the risks of potentially catastrophic technologies.”
Happy Cyber Monday, folks! 🙂