
Kid-mounted cameras help A.I. learn to view the world through the eyes of a child

Talk to any artificial intelligence researcher and they’ll tell you that, while A.I. may be capable of complex feats like driving cars and spotting tiny details on X-ray scans, it still lags far behind the generalized abilities of even a 3-year-old kid. This is sometimes called Moravec’s paradox: the seemingly hard stuff is easy for an A.I., while the seemingly easy stuff is hard.


But what if you could teach an A.I. to learn like a kid? And what kind of training data would you need to feed into a neural network to carry out the experiment? Researchers from New York University recently set out to explore these questions using a dataset of video footage taken from head-mounted cameras worn regularly by kids during their first three years of life.

This SAYCam data was collected by psychologist Jess Sullivan and colleagues, who described it in a paper published earlier this year. The kids recorded their GoPro-style experiences for one to two hours per week as they went about their daily lives. The researchers compiled the footage to create a “large, naturalistic, longitudinal dataset of infant and child-perspective videos” for use by psychologists, linguists, and computer scientists.

Training an A.I. to view the world like a kid

The New York University researchers then took this video data and used it to train a neural network.
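
The idea, as the lead researcher explains below, was to apply generic, label-free learning methods to the raw headcam footage. As a rough illustration only, and not the authors’ exact setup, here is a minimal PyTorch sketch of one such self-supervised objective, temporal classification, in which a network learns to predict which segment of video a given frame came from; the backbone choice, segment count, and hyperparameters are all assumptions made for the example.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_SEGMENTS = 1000  # assumption: the footage is chopped into this many segments

# A standard lightweight image backbone; the authors' exact architecture
# and hyperparameters may differ.
backbone = models.mobilenet_v2(num_classes=NUM_SEGMENTS)
optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def train_step(frames: torch.Tensor, segment_ids: torch.Tensor) -> float:
    """frames: (batch, 3, 224, 224) headcam frames;
    segment_ids: (batch,) index of the video segment each frame came from."""
    logits = backbone(frames)            # predict which segment a frame belongs to
    loss = loss_fn(logits, segment_ids)  # labels come free from the video timeline
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The appeal of an objective like this is that the “labels” fall out of the video’s own timeline, so no human has to annotate anything in the footage.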

“The goal was to address a nature vs. nurture-type question,” Emin Orhan, lead researcher on the project, told Digital Trends in an email. “Given this visual experience that children receive in their early development, can we learn high-level visual categories — such as table, chair, cat, car, etc. — using generic learning algorithms, or does this ability require some kind of innate knowledge in children that cannot be learned by applying generic learning methods to the early visual experience that children receive?”

The A.I. did show some learning, for example, by recognizing a cat that featured frequently in the footage. While the researchers didn’t create anything close to a kid version of artificial general intelligence, the research nonetheless highlights how certain visual features can be learned simply by watching naturalistic data. There’s still more work to be done, though.
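
How do you check what a network trained without labels has actually picked up? A standard evaluation in self-supervised work is a linear probe: freeze the trained network and fit only a simple linear classifier on top of its features for categories like “cat” or “chair.” If that thin readout performs well, the frozen features must already encode the categories. Here is a minimal sketch under that assumption; the checkpoint path and category count are hypothetical, and this is illustrative rather than the authors’ exact evaluation code.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_SEGMENTS = 1000    # must match the self-supervised training sketch above
NUM_CATEGORIES = 26    # hypothetical count of labeled categories ("cat", "chair", ...)

# Rebuild the backbone and load the self-supervised weights
# (the checkpoint path is hypothetical).
backbone = models.mobilenet_v2(num_classes=NUM_SEGMENTS)
backbone.load_state_dict(torch.load("saycam_selfsup.pt"))

# Freeze every learned parameter: the probe must not improve the features.
for p in backbone.parameters():
    p.requires_grad = False

# Swap in a fresh, trainable linear readout over MobileNetV2's 1280-d features.
backbone.classifier = nn.Linear(1280, NUM_CATEGORIES)

probe_opt = torch.optim.Adam(backbone.classifier.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def probe_step(frames: torch.Tensor, labels: torch.Tensor) -> float:
    """frames: (batch, 3, 224, 224) images; labels: (batch,) category indices."""
    logits = backbone(frames)
    loss = loss_fn(logits, labels)
    probe_opt.zero_grad()
    loss.backward()
    probe_opt.step()
    return loss.item()
```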

“We found that, by and large, it is possible to learn pretty sophisticated high-level visual concepts in this way without assuming any innate knowledge,” Orhan explained. “But understanding precisely what these machine learning models trained with the headcam data are capable of doing, and what exactly is still missing in these models compared to the visual abilities of children, is still [a] work in progress.”

A paper describing the research is available to read online.
