This basic human skill is the next major milestone for A.I.

Remember the amazing, revelatory feeling when you first discovered the existence of cause and effect? That’s a trick question. Kids start learning the principle of causality from as early as eight months old, helping them to make rudimentary inferences about the world around them. But most of us don’t remember much before the age of around three or four, so the important lesson of “why” is something we simply take for granted.

It’s not only a crucial lesson for humans to learn, but also one that today’s artificial intelligence systems are pretty darn bad at. Modern A.I. can beat human players at Go and drive cars on busy streets, but those feats don’t necessarily reflect the kind of intelligence humans use to master such abilities. That’s because humans — even small infants — possess the ability to generalize by applying knowledge from one domain to another. For A.I. to live up to its potential, it needs to be able to do the same.

“For instance, if the robot learned how to build a tower using some blocks, it may want to transfer these skills to building a bridge or even a house-like structure,” Ossama Ahmed, a master’s student at ETH Zurich in Switzerland, told Digital Trends. “One way to achieve this might be learning the causal relationships between the different environment variables. Or imagine that the TriFinger robot used in CausalWorld suddenly loses one finger due to a hardware malfunction. How can it still build the goal shape with only two fingers instead?”

[Video: CausalWorld]

A virtual training world for machines

CausalWorld is what Frederik Träuble, a Ph.D. student at the Max Planck Institute for Intelligent Systems in Germany, refers to as a “manipulation benchmark.” It’s a step toward advancing research so that robotic agents can better generalize across changes in an environment’s properties, such as the mass or shape of objects. For example, if a robot learns to pick up a particular object, we might reasonably expect that it can transfer this ability to heavier objects — so long as it understands the right causal relationship.
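
To make that concrete, here is a rough sketch of what working with the benchmark looks like in Python, based on CausalWorld's open-source, gym-style API. The module paths and the 'pushing' task name follow the project's documentation, but treat them as assumptions that may vary between releases.

    # Minimal sketch of a CausalWorld session: generate a task, wrap it
    # in a gym-style environment, and run a random policy for a few steps.
    # Module paths and the task id are assumptions taken from the docs.
    from causal_world.task_generators.task import generate_task
    from causal_world.envs.causalworld import CausalWorld

    task = generate_task(task_generator_id='pushing')  # block-pushing task
    env = CausalWorld(task=task)                       # gym-style interface

    obs = env.reset()
    for _ in range(200):
        # A random agent, just to exercise the environment loop.
        obs, reward, done, info = env.step(env.action_space.sample())
    env.close()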

The kind of virtual training environment we’re used to hearing about in sci-fi movies is the one in, say, The Matrix: a virtual world in which the rules don’t apply. CausalWorld, which lets researchers systematically train and evaluate their methods in robotic environments, is just the opposite. It’s all about learning the rules — and applying them. Robot agents can be given tasks much like the ones kids take on when they play with blocks: stacking, pushing, and other cause-and-effect play. As an agent learns, the researchers can intervene in the environment to test its ability to generalize. It’s essentially a testing ground for evaluating how well A.I. agents generalize.
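
That intervene-and-evaluate loop can also be sketched in code. Here a random policy stands in for a trained agent, and the 'tool_block' and 'mass' names are assumed labels for one of the environment's causal variables; the do_intervention call is hedged from the project's documentation rather than confirmed against a specific release.

    # Sketch of the protocol above: measure performance, intervene on a
    # single causal variable (here, block mass), then measure again.
    # do_intervention() and the variable names are assumptions.
    from causal_world.task_generators.task import generate_task
    from causal_world.envs.causalworld import CausalWorld

    def average_return(env, episodes=5, steps=200):
        # Average return of a random stand-in policy over a few episodes.
        total = 0.0
        for _ in range(episodes):
            obs = env.reset()
            for _ in range(steps):
                obs, reward, done, info = env.step(env.action_space.sample())
                total += reward
        return total / episodes

    env = CausalWorld(task=generate_task(task_generator_id='stacked_blocks'))
    before = average_return(env)
    env.do_intervention({'tool_block': {'mass': 1.0}})  # heavier blocks
    after = average_return(env)
    print(f'return before/after intervention: {before:.2f} / {after:.2f}')
    env.close()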

“Most of modern A.I. is based on statistical learning, which is all about extracting statistical information — for example, correlations — from data,” Bernhard Schölkopf, director of the Max Planck Institute, told Digital Trends. “This is great because it allows us to predict one quantity from others, but only as long as nothing changes. When you intervene in a system, then all bets are off. To make predictions in such cases, we need to go beyond statistical learning, towards causality. Ultimately, if future A.I. is to be about thinking in the sense of ‘acting in imagined spaces,’ then interventions are key, and thus causality needs to be taken into account.”
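
Schölkopf’s point is easy to demonstrate without any robot at all. The following toy example (plain NumPy, written for this article rather than taken from the researchers’ work) fits a predictor on observational data generated by a hidden common cause, then shows it failing the moment one variable is set by intervention.

    # Toy illustration: a purely statistical predictor works on
    # observational data but breaks as soon as we intervene.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000
    z = rng.normal(size=n)                 # hidden common cause
    x = z + 0.1 * rng.normal(size=n)
    y = z + 0.1 * rng.normal(size=n)       # y is driven by z, not by x

    # Observationally, x predicts y almost perfectly...
    slope = np.cov(x, y)[0, 1] / np.var(x)
    print('observational fit error:', np.mean((y - slope * x) ** 2))

    # ...but under an intervention do(x := 2.0), the correlation
    # vanishes and the same predictor fails badly.
    x_int = np.full(n, 2.0)                # x set by intervention
    y_int = z + 0.1 * rng.normal(size=n)   # y unchanged: x never caused it
    print('post-intervention error:', np.mean((y_int - slope * x_int) ** 2))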
