Facebook wants to build an army of robot assistants that can wait on us in all kinds of new ways. Well, kind of.
In fact, Facebook A.I. Research, the artificial intelligence research wing of the social networking titan, is hard at work developing what it calls “Embodied A.I.”: agents that go well beyond the abilities of present-day voice interfaces like Siri, Alexa, or Google Assistant by carrying out tasks in a physical environment. While most people think of A.I. agents as disembodied chatbots, Facebook A.I. aims to change this by building systems that can perceive and act in the real world.
“We are still far from these capabilities, but you can imagine scenarios like asking a home robot ‘Can you go check if my laptop is on my desk? If so, bring it to me,’ or the robot hearing a thud coming from somewhere upstairs and going to investigate where it is and what it is,” Kristen Grauman, a professor of computer science at the University of Texas at Austin who also works as a research scientist at Facebook A.I. Research, told Digital Trends.
While Facebook’s end goal may be a way off, it has already made impressive progress. On Friday, Facebook showed off some of its new work, such as SoundSpaces, an audio simulation tool that can produce realistic audio renderings based on room geometry, materials, and more. This could help future A.I. assistants understand how sound behaves in the physical world. Another tool is an indoor mapping system that could help robots navigate unexplored terrain.
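To give a rough flavor of what geometry-aware audio rendering involves (this is a minimal illustrative sketch, not SoundSpaces’ actual API; the function names and room dimensions here are invented for the example), the snippet below builds a toy room impulse response from first-order image-source reflections and convolves it with a dry signal to approximate what a microphone elsewhere in the room would hear:

```python
# Illustrative sketch only -- not Facebook's SoundSpaces API.
# Render audio for a listener in a shoebox room by convolving a dry
# source signal with a simple room impulse response (RIR) built from
# the direct path plus first-order image-source wall reflections.
import numpy as np

SPEED_OF_SOUND = 343.0  # meters per second
SAMPLE_RATE = 16000     # Hz

def impulse_response(room, src, mic, absorption=0.4, length=8000):
    """Toy RIR: direct path plus one mirror-image source per wall."""
    images = [np.array(src, dtype=float)]  # index 0 = the real source
    for axis, size in enumerate(room):
        for wall in (0.0, size):           # mirror across both walls on this axis
            img = np.array(src, dtype=float)
            img[axis] = 2 * wall - img[axis]
            images.append(img)
    rir = np.zeros(length)
    for i, img in enumerate(images):
        dist = np.linalg.norm(img - np.array(mic))
        delay = int(round(dist / SPEED_OF_SOUND * SAMPLE_RATE))
        if delay < length:
            gain = 1.0 / max(dist, 1e-3)   # spherical spreading loss
            if i > 0:
                gain *= 1.0 - absorption   # energy absorbed at the wall
            rir[delay] += gain
    return rir

# A 440 Hz "source" as heard by a microphone across a 6 x 4 x 3 m room.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
dry = np.sin(2 * np.pi * 440 * t)
rir = impulse_response(room=(6.0, 4.0, 3.0), src=(1.0, 1.0, 1.5), mic=(5.0, 3.0, 1.5))
wet = np.convolve(dry, rir)[: len(dry)]    # reverberant signal at the mic
```

SoundSpaces itself goes much further, modeling materials and full scene geometry, but the core idea is the same: the shape of a room determines what an agent hears, and from where.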
The next generation of smart assistants
To be clear, this research isn’t just about building physical robot versions of A.I. assistants. The bots Facebook is working on may also live on smart glasses (imagine a next-next-next-gen version of Microsoft’s Clippy avatar), albeit with far more contextual smarts and understanding than present-generation A.I. assistants.
For example, Facebook engineers want users to be able to ask questions like “Where did I leave my keys?” or “What was that dessert we had at the restaurant on Friday night?” and receive accurate responses. That means researching and developing capabilities for embodied A.I. agents such as creating and storing memories, navigating from one place to another, reasoning about the physical world (gravity included), planning next steps, and decoding dynamic human activities.
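The “memories” piece is easier to picture with a toy example. The sketch below (purely hypothetical, assuming nothing about Facebook’s actual architecture; every name in it is invented) shows the kind of episodic log an embodied assistant might keep so it can answer “Where did I leave my keys?”:

```python
# Purely hypothetical sketch -- not Facebook's system. A minimal
# "episodic memory" that logs object sightings so an embodied
# assistant can answer "where did I leave my ...?" queries.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Sighting:
    obj: str        # what the agent's perception system recognized
    place: str      # where the object was when the agent saw it
    when: datetime  # timestamp of the observation

class EpisodicMemory:
    def __init__(self):
        self._log: list[Sighting] = []

    def observe(self, obj: str, place: str) -> None:
        """Record that `obj` was just seen at `place`."""
        self._log.append(Sighting(obj, place, datetime.now()))

    def last_seen(self, obj: str) -> str:
        """Answer a 'where did I leave my ...?' query from memory."""
        for sighting in reversed(self._log):  # newest observation first
            if sighting.obj == obj:
                return f"I last saw your {obj} {sighting.place} at {sighting.when:%H:%M}."
        return f"I haven't seen your {obj}."

memory = EpisodicMemory()
memory.observe("keys", "on the kitchen counter")
memory.observe("laptop", "on the desk upstairs")
print(memory.last_seen("keys"))  # "I last saw your keys on the kitchen counter at ..."
```

The hard part, of course, isn’t the lookup; it’s everything upstream of it: recognizing the keys in the first place, knowing where “the kitchen counter” is, and getting there.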
Facebook isn’t necessarily the first company you think of when it comes to A.I. assistants; it doesn’t presently have one as ubiquitous as those made by Apple, Amazon, or Google. But the company and its founder have certainly explored this area before. Back in 2016, CEO Mark Zuckerberg announced that he was building an A.I. capable of running his home. It seems those ambitions have rubbed off on the rest of the company.
“Facebook A.I. is a leader in many of the subfields that Embodied A.I. encompasses, spanning computer vision, language understanding, robotics, reinforcement learning, curiosity and self-supervision, and more,” Dhruv Batra, a professor at the Georgia Tech College of Computing and research scientist at Facebook A.I. Research, told Digital Trends.
Facebook’s collaborators on these various projects include the University of Texas at Austin, University of Illinois, Georgia Tech, and Oregon State.