Google’s LaMDA is a smart language A.I. for better understanding conversation

Sundar Pichai stands in front of a Google logo at Google I/O 2021.

Artificial intelligence has made extraordinary advances in understanding words and even translating them into other languages. Google has helped pave the way here with amazing tools like Google Translate and, more recently, with its development of Transformer machine learning models. But language is tricky, and there’s still plenty more work to be done to build A.I. that truly understands us.

Language Model for Dialogue Applications

At Tuesday’s Google I/O, the search giant announced a significant advance in this area with a new language model it calls LaMDA. Short for Language Model for Dialogue Applications, it’s a sophisticated A.I. language tool that Google claims is superior at understanding context in conversation. As Google CEO Sundar Pichai noted, this might mean intelligently parsing an exchange like: “What’s the weather today?” “It’s starting to feel like summer. I might eat lunch outside.” That reply makes perfect sense as human dialogue, but it would befuddle many A.I. systems looking for more literal answers.


LaMDA is able to synthesize concepts it has learned from its training data, giving it a deeper grasp of the topics it discusses. Pichai noted that its responses never follow the same path twice, so conversations feel less scripted and more natural.

While it is still in research and development, Google says it is using LaMDA internally to explore novel interactions. During Google I/O, it demonstrated a couple of recent exchanges whose tone was less formal and more casually conversational than the way we typically interact with a chatbot tool such as Google Assistant. Slightly trippily, these were conversations with a bot pretending to be, variously, the dwarf planet Pluto and a paper airplane, answering questions about themselves. The demo was meant to show how the model can carry on in-depth conversations about any topic.

Eventually, LaMDA should result in Google A.I. tools that are better at following the context of human conversations. Pichai specifically called out Google Assistant and search as places where this will be useful.

Building multimodal models

He also noted that the technology is being used to create multimodal models that can understand images, text, audio, and video. This could be used, for instance, to ask Google Maps to plan a road trip with beautiful mountain views, combining its knowledge of audio, text, and images. It could also power superior video search; one example might be asking to jump to the part of a video in which a lion roars at sunset.

Luke Dormehl
Former Digital Trends Contributor
Smart Canvas supercharges Google Docs, Slides, and Sheets for collaboration

At the Google I/O developer conference, the company announced a new way of working in Google Workspace that puts online collaboration in the spotlight.

At its most basic, it's a host of new features that connect some of the disparate parts of Google Docs, Sheets, and Slides into a unified project management tool.

How the USPS uses Nvidia GPUs and A.I. to track missing mail
A United States Postal Service (USPS) truck driving on a tree-lined street.

The United States Postal Service, or USPS, is relying on artificial intelligence powered by Nvidia's EGX systems to track the more than 100 million pieces of mail a day that go through its network. The world's busiest postal system is using GPU-accelerated A.I. to help solve the challenge of locating lost or missing packages and mail. Essentially, the USPS turned to A.I. to help it locate a "needle in a haystack."

To solve that challenge, USPS engineers created an edge A.I. system of servers that can scan and locate mail. The algorithms behind the system were trained on 13 Nvidia DGX systems located at USPS data centers. For reference, each Nvidia DGX A100 system packs five petaflops of compute power, costs just under $200,000, and is based on the same Ampere architecture found in Nvidia's consumer GeForce RTX 3000-series GPUs.

The BigSleep A.I. is like Google Image Search for pictures that don’t exist yet
Eternity

In case you’re wondering, the picture above is "an intricate drawing of eternity." But it’s not the work of a human artist; it’s the creation of BigSleep, the latest amazing example of generative artificial intelligence (A.I.) in action.

A bit like a visual version of the text-generating A.I. model GPT-3, BigSleep is capable of taking any text prompt and visualizing an image to fit the words. That could be something esoteric like eternity, or it could be a bowl of cherries, or a beautiful house (the latter of which can be seen below). Think of it like a Google Images search, only for pictures that have never previously existed.
How BigSleep works
“At a high level, BigSleep works by combining two neural networks: BigGAN and CLIP,” Ryan Murdock, BigSleep’s 23-year-old creator and a cognitive neuroscience student at the University of Utah, told Digital Trends.
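In other words, BigGAN generates an image from a latent vector, CLIP scores how well that image matches the text prompt, and gradient descent nudges BigGAN's latent inputs to raise that score. A minimal, hypothetical sketch of that loop (not Murdock's actual code) might look like the following, assuming the pytorch_pretrained_biggan and OpenAI clip packages:

```python
# Hypothetical sketch of the BigSleep idea, not Murdock's actual code:
# optimize BigGAN's latent inputs so that CLIP rates the generated image
# as a good match for a text prompt.
import torch
import torch.nn.functional as F
import clip  # OpenAI's CLIP: github.com/openai/CLIP
from pytorch_pretrained_biggan import BigGAN

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device)
clip_model.float()  # avoid fp16/fp32 mismatch with BigGAN on GPU
gan = BigGAN.from_pretrained("biggan-deep-256").to(device).eval()

# Encode the prompt once; this embedding is the fixed target.
tokens = clip.tokenize(["an intricate drawing of eternity"]).to(device)
text_feat = clip_model.encode_text(tokens).detach()

# BigGAN's inputs: a 128-dim noise vector plus logits over its 1,000
# ImageNet classes. These are the only parameters being optimized.
z = torch.randn(1, 128, device=device, requires_grad=True)
cls_logits = torch.zeros(1, 1000, device=device, requires_grad=True)
opt = torch.optim.Adam([z, cls_logits], lr=0.05)

# CLIP's expected input normalization.
mean = torch.tensor([0.48145466, 0.4578275, 0.40821073], device=device).view(1, 3, 1, 1)
std = torch.tensor([0.26862954, 0.26130258, 0.27577711], device=device).view(1, 3, 1, 1)

for step in range(200):
    # BigGAN outputs a 256x256 image in [-1, 1]; rescale and resize for CLIP.
    img = gan(z, torch.softmax(cls_logits, dim=-1), truncation=1.0)
    img = F.interpolate((img + 1) / 2, size=(224, 224), mode="bilinear", align_corners=False)
    img_feat = clip_model.encode_image((img - mean) / std)
    # Maximize cosine similarity between image and text embeddings.
    loss = -F.cosine_similarity(img_feat, text_feat, dim=-1).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
# The real project layers extra tricks on top of this bare loop (image
# augmentations, latent constraints) to get sharper, more coherent results.
```

The appeal of the approach is that neither network needs retraining: BigGAN already knows how to draw, CLIP already knows how to judge image-to-text fit, and the optimization loop simply negotiates between the two.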
