
This groundbreaking new style of A.I. learns things in a totally different way

With very rare exceptions, every major advance in artificial intelligence this century has been the result of machine learning. As its name implies (and counter to the symbolic A.I. that characterized much of the first half of the field’s history), machine learning involves smart systems that don’t just follow rules but actually, well, learn.


But there’s a problem. Unlike even a small human child, a machine learning system needs to be shown large numbers of training examples before it can successfully recognize a new category of object. There’s no such thing as, say, seeing an object like a “doofer” (you don’t know what it is, but we bet you would remember it if you saw one) and, thereafter, being able to recognize every subsequent doofer you see.

If A.I. is going to live up to its potential, it’s important that it can learn from far fewer examples. While the problem has yet to be solved, a new research paper from the University of Waterloo in Ontario describes a potential breakthrough process called LO-shot (or less-than-one shot) learning. This could enable machines to learn far more rapidly, in the manner of humans. That would be useful for a wide range of reasons, but particularly in scenarios where large amounts of training data do not exist.

The promise of less-than-one shot learning

“Our LO-shot learning paper theoretically explores the smallest possible number of samples that are needed to train machine learning models,” Ilia Sucholutsky, a Ph.D. student working on the project, told Digital Trends. “We found that models can actually learn to recognize more classes than the number of training examples they are given. We initially noticed this result empirically when working on our previous paper on soft-label dataset distillation, a method for generating tiny synthetic datasets that train models to the same performance as if they were trained on the original dataset. We found that we could train neural nets to recognize all 10 digits — zero to nine — after being trained on just five synthetic examples, less than one per digit. … We were really surprised by this, and it’s what led to us working on this LO-shot learning paper to try and theoretically understand what was going on.”
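The intuition behind recognizing more classes than training examples can be sketched with "soft labels," where each training point carries a probability distribution over classes rather than a single label. The toy classifier, prototype positions, and label values below are all invented for illustration (the paper's actual method is a soft-label variant of k-nearest neighbors), but they show how two points can carve out three decision regions:

```python
import numpy as np

# Two prototype points on a number line, each with a SOFT label:
# a distribution over three classes (A, B, C) instead of one class.
prototypes = np.array([0.0, 1.0])
soft_labels = np.array([[0.6, 0.4, 0.0],   # point at 0.0: mostly A, some B
                        [0.0, 0.4, 0.6]])  # point at 1.0: mostly C, some B

def classify(x, eps=1e-9):
    """Distance-weighted soft-label classifier (a simple stand-in
    for the paper's soft-label kNN). Blends the prototypes' label
    distributions by inverse distance, then picks the top class."""
    w = 1.0 / (np.abs(prototypes - x) + eps)  # closer prototype = bigger weight
    w /= w.sum()
    scores = w @ soft_labels                   # weighted mix of distributions
    return int(np.argmax(scores))

# Near 0.0 class A wins, near 1.0 class C wins, and in the middle
# the shared B mass dominates -- three classes from two examples.
print([classify(x) for x in (0.1, 0.5, 0.9)])  # -> [0, 1, 2]
```

Class B never "owns" a training point outright; it emerges where the two distributions overlap, which is exactly the less-than-one-example-per-class effect Sucholutsky describes.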

Sucholutsky stressed that the work is still at an early stage. The new paper shows that LO-shot learning is possible; the researchers must now develop the algorithms required to perform it in practice. In the meantime, he said the team has received interest from researchers in areas as diverse as volcanology, medical imaging, and cybersecurity — all of whom could benefit from this kind of A.I. learning.

“I’m hoping that we’ll be able to start rolling out these new tools really soon, but I encourage other machine learning researchers to also start exploring this direction to speed that process up,” Sucholutsky said.

Luke Dormehl
I'm a UK-based tech writer covering Cool Tech at Digital Trends. I've also written for Fast Company, Wired, the Guardian…