Microsoft plans to use Minecraft to test our future robotic overlords

Microsoft has announced plans to turn the hugely popular video game Minecraft into a potent tool for AI researchers. With the addition of an open-source software platform called AIX, set to be distributed in July, the game will act as a test bed that an AI agent can be taught to explore.

Something as simple as climbing a hill might seem like a basic AI research project, but creating a robot that can carry out the necessary movements is often prohibitively expensive. By limiting that motion to the virtual world of Minecraft, researchers can perform similar experiments without the associated costs.

Given that Minecraft uses a sandbox structure, it’s well-suited to research projects that teach AIs to make decisions about the world around them. In the game, these choices might relate to avoiding a fiery death in a pool of lava or staying inside at night to hide from nocturnal enemies, but the fundamentals could have real-world applications.

In a report from the BBC, project lead Katja Hofmann said the project “provides a way to take AI from where it is today up to human-level intelligence, which is where we want to be, in several decades’ time.”

When Microsoft acquired development studio Mojang and its biggest release Minecraft for $2.5 billion in 2014, it was immediately clear that the company was looking for more than just rights to the game and its significant potential for merchandising revenue. For comparison, Disney’s purchase of Lucasfilm — which secured both Star Wars and Indiana Jones — was completed for $4 billion.

Given that Microsoft made such a significant financial investment, Minecraft was always destined to be implemented as more than just a video game product. Between a recent push to use the title in education, its constant presence at HoloLens briefings, and this new application in AI research, it seems that the brand is being put to good use.

Brad Jones
Microsoft has a new way to keep ChatGPT ethical, but will it work?
Microsoft caught a lot of flak when it shut down its artificial intelligence (AI) Ethics & Society team in March 2023. It wasn’t a good look given the near-simultaneous scandals engulfing AI, but the company has just laid out how it intends to keep its future efforts responsible and in check.

In a post on Microsoft’s On the Issues blog, Natasha Crampton, the Redmond firm’s Chief Responsible AI Officer, explained that the ethics team was disbanded because “A single team or a single discipline tasked with responsible or ethical AI was not going to meet our objectives.”

Read more
Stop using generative-AI tools such as ChatGPT, Samsung orders staff
Samsung has told staff to stop using generative AI tools such as ChatGPT and Bard over concerns that they pose a security risk, Bloomberg reported on Monday.

The move follows a string of embarrassing slip-ups last month when Samsung employees reportedly fed sensitive semiconductor-related data into ChatGPT on three occasions.

Read more
Even Microsoft thinks ChatGPT needs to be regulated — here’s why
Artificial intelligence (AI) chatbots have been taking the world by storm, with the capabilities of the Microsoft-backed ChatGPT causing wonderment and fear in almost equal measure. But in an intriguing twist, even Microsoft is now calling on governments to take action and regulate AI before things spin dangerously out of control.

The appeal was made by BSA, a trade group representing numerous business software companies, including Microsoft, Adobe, Dropbox, IBM, and Zoom. According to CNBC, the group is advocating for the US government to integrate rules governing the use of AI into national privacy legislation.

Read more