AI learns how to tackle new situations by studying how humans play games

If artificial intelligence is going to excel at driving cars or performing other complex tasks that we humans take for granted, then it needs to learn how to respond to unknown circumstances. That is the task of machine learning, which needs real-world examples to study.

So far, however, most data used to train machine-learning systems comes from virtual environments. A group of researchers, including a Microsoft Research scientist from the U.K., has set out to change that by using game replay data that can show an AI how humans tackle complex problems.

The researchers used Atari 2600 game replays to provide real-world data to a deep learning system that uses trial and error, or reinforcement learning (RL), to tackle new tasks in a previously unknown environment. The data used in the study represents what the researchers called the “largest and most diverse such data set” that has ever been publicly released.

The data was gathered by making a web-based Atari 2600 emulator, called the Atari Grand Challenge, available using the Javatari tool written in JavaScript. The researchers used a form of gamified crowdsourcing, which leveraged people’s desire to play games, along with a reward mechanism that ranked each player’s performance.

Around 9.7 million frames, or about 45 hours of gameplay, were collected and analyzed. Five games were chosen for their varying levels of difficulty and complexity: Video Pinball, Q*bert, Space Invaders, Ms. Pac-Man, and Montezuma’s Revenge.
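The two figures line up: assuming the Atari 2600’s roughly 60 frames-per-second refresh rate, 9.7 million frames works out to about 45 hours of play.

```python
# Sanity-check the reported dataset size: 9.7 million frames at the
# Atari 2600's ~60 Hz (NTSC) frame rate is roughly 45 hours of gameplay.
FRAMES = 9_700_000
FPS = 60  # assumed NTSC refresh rate

hours = FRAMES / FPS / 3600
print(f"{hours:.1f} hours")  # ≈ 44.9 hours
```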

The results have been promising so far. By feeding information such as the actions players took during the games, the in-game rewards they earned, and their current scores into the system, the researchers were able to demonstrate the value of this kind of data for training machine-learning systems. Going forward, the researchers hope to use professional players to improve the data’s ability to train AI that is even better at responding to unknown situations.
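As a rough illustration of the kind of per-frame record such a replay dataset might contain, here is a minimal sketch; the field names (`frame`, `action`, `reward`, `score`) are hypothetical and the actual Atari Grand Challenge schema may differ.

```python
# Hypothetical sketch of one step of a human game replay: the action
# taken, the in-game reward received, and the running score, per frame.
from dataclasses import dataclass

@dataclass
class ReplayStep:
    frame: int    # frame index within the episode
    action: int   # joystick/button input the human player made
    reward: int   # in-game reward received on this frame
    score: int    # cumulative score after this frame

def episode_return(steps):
    """Total reward accumulated over one human-played episode."""
    return sum(s.reward for s in steps)

# A tiny made-up episode fragment:
demo = [
    ReplayStep(frame=0, action=2, reward=0, score=0),
    ReplayStep(frame=1, action=2, reward=10, score=10),
    ReplayStep(frame=2, action=3, reward=0, score=10),
]
print(episode_return(demo))  # 10
```

A reinforcement-learning system can treat such records as demonstrations, learning to imitate high-scoring human play before refining its behavior by trial and error.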

Mark Coppock
Mark Coppock is a Freelance Writer at Digital Trends covering primarily laptop and other computing technologies. He has…