It’s hard to think of a company more infatuated with AI than Google. With multibillion-dollar investments in deep-learning startups like DeepMind, and a hand in some of the biggest advances involving neural networks, Google is the greatest cheerleader artificial intelligence could possibly hope for.
But that doesn’t mean there aren’t things about AI that scare the search giant.
In a new paper, entitled “Concrete Problems in AI Safety,” Google researchers — alongside experts from UC Berkeley and Stanford University — lay out some of the possible “negative side effects” that may arise from AI systems over the coming years. Rather than focusing on the distant threat of superintelligence, the 29-page paper examines “unintended and harmful behavior that may emerge from poor design.” Two big themes emerge: a machine purposely misleading its creators in order to complete an objective, and a machine causing injury or damage to achieve “a tiny advantage for [its] task at hand.”
“This is a great paper that achieves a much-needed systematic classification of safety issues relating to autonomous AI systems,” George Zarkadakis, author of the book In Our Own Image: Will Artificial Intelligence Save or Destroy Us?, tells Digital Trends.
As to whether fears about AI are justified, Zarkadakis says that Google’s warnings — while potentially alarming — are a far cry from some of the other AI warnings we’ve heard in recent months from the likes of Stephen Hawking and Elon Musk. “The Google paper is a matter-of-fact engineering approach to identifying the areas for introducing safety in the design of autonomous AI systems, and suggesting design approaches to build in safety mechanisms,” he notes.
Indeed, despite raising these issues, Google’s paper ends by considering the “question of how to think most productively about the safety of forward-looking applications of AI,” complete with handy suggestions. In all, whether you think the pursuit of artificial intelligence will be a net positive or a potentially disastrous negative for humanity, the newly published paper is well worth a read.