Google raises 5 safety concerns for the future of artificial intelligence

While artificial intelligence was once sci-fi subject matter, the field is advancing at such a rate that we’ll likely see it become a part of everyday life before too long. As a result, Google wants to make sure that an AI can be trusted to carry out a task as instructed, and do so without putting humans at risk.

That was the focus of a study carried out by Google in association with Stanford University; University of California, Berkeley; and OpenAI, the research company co-founded by Elon Musk. The project outlined five problems that need to be addressed so that the field can flourish, according to a report from Recode.

These five points are described as “research questions” intended to start a discussion rather than offer a solution. The issues are minor concerns right now, but Google’s blog post suggests they will become increasingly important in the long term.

The first problem asks how we’ll avoid negative side effects, giving the example of a cleaning AI cutting corners and knocking over a vase because that’s the fastest way to complete its janitorial duties. The second refers to “reward hacking,” where a robot might try to take shortcuts to fulfill its objective without actually completing the task at hand.

The third problem is related to oversight, and making sure that robots don’t require too much feedback from human operators. The fourth raises the issue of the robot’s safety while exploring; this is illustrated by a mopping robot experimenting with new techniques, but knowing not to mop an electrical outlet (for obvious reasons).

The final problem looks at the differences between the environment a robot trains in and the one it works in. There are bound to be major discrepancies, and the AI needs to be able to get the job done regardless.
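The “negative side effects” problem above can be made concrete with a toy sketch. The scenario, route names, and numbers below are invented for illustration and are not from Google’s study: a cleaning agent scores two routes, one of which knocks over the vase. A reward that counts only speed picks the destructive route; adding an explicit penalty for disturbed objects flips the choice.

```python
# Hypothetical illustration of the "negative side effects" problem.
# Route names and scores are invented, not taken from the study.
routes = {
    "through_vase": {"time_saved": 5, "objects_broken": 1},
    "around_vase": {"time_saved": 3, "objects_broken": 0},
}

def naive_reward(route):
    # Rewards speed only, so side effects are invisible to the agent.
    return route["time_saved"]

def penalized_reward(route, penalty=10):
    # Adds an explicit cost for disturbing the environment.
    return route["time_saved"] - penalty * route["objects_broken"]

best_naive = max(routes, key=lambda r: naive_reward(routes[r]))
best_safe = max(routes, key=lambda r: penalized_reward(routes[r]))

print(best_naive)  # through_vase: the naive agent breaks the vase
print(best_safe)   # around_vase: the penalized agent avoids it
```

The point of the sketch is that the agent only cares about what its reward function measures; anything left out, like the vase, is treated as free to break.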

It’s really just a matter of time before we see AI being used to carry out menial tasks, but research like this demonstrates the issues that need to be tackled ahead of a wide rollout. User safety and the quality of the service will of course be paramount, so it’s vital that these questions are asked well ahead of time.

Brad Jones
Brad is an English-born writer currently splitting his time between Edinburgh and Pennsylvania. You can find him on Twitter…
‘Godfather of AI’ quits Google to speak more freely on concerns
Artificial intelligence pioneer Geoffrey Hinton surprised many on Monday when he revealed he'd quit his job at Google where he worked for the last decade on AI projects.

Often referred to as “the godfather of AI” for his groundbreaking work that underpins many of today's AI systems, British-born Hinton, now 75, told the New York Times that he has serious concerns about the speed at which the likes of OpenAI, with its ChatGPT tool, and Google, with Bard, are working to develop their products, especially as it could come at the cost of safety.

Google’s ChatGPT rival is an ethical mess, say Google’s own workers
Google launched Bard, its ChatGPT rival, despite internal concerns that it was a “pathological liar” and produced “cringeworthy” results, a new report has claimed. Workers say these worries were apparently ignored in a frantic attempt to catch up with ChatGPT and head off the threat it could pose to Google’s search business.

The revelations come from a Bloomberg report that took a deep dive into Google Bard and the issues raised by employees who have worked on the project. It’s an eye-opening account of the ways the chatbot has apparently gone off the rails and the misgivings these incidents have raised among concerned workers.

Google Bard vs. ChatGPT: which is the better AI chatbot?
Google Bard and ChatGPT are two of the most prominent AI chatbots available in 2023. But which is better? Both offer natural language responses to natural language inputs, using machine learning and millions of data points to craft useful, informative responses, most of the time. These AI tools aren't perfect yet, but they point to an exciting future of AI-assisted search and learning tools that will make information all the more readily available.

As similar as these chatbots are, they also have some distinct differences. Here's how ChatGPT and Google Bard measure up against one another.
