
Computer scientists develop AI that gets curious about its surroundings

Curiosity-Driven Exploration by Self-Supervised Prediction
Artificial intelligence is showing a greater range of abilities and use-cases than ever, but it’s still relatively short on desires and emotions. That could be changing, however, courtesy of research at the University of California, Berkeley, where computer scientists have developed an AI agent that’s naturally (or, well, as naturally as any artificial agent can be) curious.

In tests, the researchers set the AI loose on games such as Super Mario Bros. and VizDoom, a basic 3D shooter, where it displayed a propensity for exploring its environment.


“Recent success in AI, specifically in reinforcement learning (RL), mostly relies on having explicit dense supervision — such as rewards from the environment that can be positive or negative,” Deepak Pathak, a researcher on the project, told Digital Trends. “For example, most RL algorithms need access to the dense score when learning to play computer games. It is easy to construct a dense reward structure in such games, but one cannot assume the availability of an explicit dense reward-based supervision in the real world with similar ease.”

But given that Super Mario is, last time we checked, a game, how does this differ from DeepMind's AI that learned to play Atari games? According to Pathak, the answer lies in the agent's approach: rather than simply trying to complete a game, it sets out to find novel things to do.

“The major contribution of this work is showing that curiosity-driven intrinsic motivation allows the agent to learn even when rewards are absent,” he said.

This, he notes, is similar to the way we show curiosity as humans. “Babies entertain themselves by picking up random objects and playing with toys,” Pathak continued. “In doing so, they are driven by their innate curiosity, and not by external rewards or the desire to achieve a goal. Their intrinsic motivation to explore new, interesting spaces and objects not only helps them learn more about their immediate surroundings, but also learn more generalizable skills. Hence, reducing the dependence on dense supervision from the environment with an intrinsic motivation to drive progress is a fundamental problem.”
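The idea Pathak describes can be sketched in a few lines. In the team's Intrinsic Curiosity Module (ICM), the agent's reward is the error of a learned forward model predicting the next state's features; transitions the agent cannot yet predict are "interesting" and earn a larger reward. The sketch below is a heavily simplified, illustrative toy, not the paper's implementation: the random-projection encoder and linear forward model stand in for networks that, in the real system, are trained (the encoder via an inverse-dynamics loss), and all names are assumptions.

```python
# Toy sketch of a curiosity-style intrinsic reward: the agent is
# rewarded by how badly a forward model predicts the next state's
# features. Illustrative only; the real ICM learns both models.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in feature encoder phi(s): a fixed random projection.
# (In the paper, phi is trained with an inverse-dynamics model so it
# keeps only features the agent's actions can influence.)
W_enc = rng.normal(size=(8, 4))

def phi(state):
    return np.tanh(state @ W_enc)

# Stand-in forward model: predicts phi(s_next) from phi(s) and action.
W_fwd = rng.normal(size=(4 + 2, 4)) * 0.1

def predict_next_features(feat, action_onehot):
    return np.concatenate([feat, action_onehot]) @ W_fwd

def intrinsic_reward(state, action_onehot, next_state):
    pred = predict_next_features(phi(state), action_onehot)
    err = phi(next_state) - pred
    # Squared prediction error is the curiosity signal: poorly
    # predicted (novel) transitions yield a larger reward.
    return 0.5 * float(err @ err)

s = rng.normal(size=8)        # toy current state
a = np.array([1.0, 0.0])      # toy one-hot action
s_next = rng.normal(size=8)   # toy next state
r_int = intrinsic_reward(s, a, s_next)
```

As the forward model trains on visited transitions, its error there shrinks, so the reward naturally pushes the agent toward states it has not yet mastered, with no score from the environment required.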

Although it’s still relatively early in the project, the team now wants to build on its research by applying the ideas to real robots.

“Curiosity signal would help the robots explore their environment efficiently by visiting novel states, and develop skills that could be transferred to different environments,” Pathak said. “For example, the VizDoom agent learns to navigate hallways, and avoid collisions or bumping into walls on its own, only by curiosity, and these skills generalize to different maps and textures.”

Luke Dormehl
I'm a UK-based tech writer covering Cool Tech at Digital Trends. I've also written for Fast Company, Wired, the Guardian…