
Google Lens adds unprecedented intelligence to your smartphone camera

Google Lens computer vision technology gives you context about what your camera sees.

Want to know the name of that flower or bird you encounter during your stroll through a park? Soon, Google Assistant will be able to tell you, using the camera and artificial intelligence.

Google kicked off its 2017 I/O conference with a focus on AI and machine learning, and one computer vision technology it highlighted is Google Lens, which lets the camera do more than just capture an image: it gives you greater context about what you're seeing.


Coming to Google Assistant and Google Photos, the Google Lens technology can “understand what you’re looking at and help you take action,” Google CEO Sundar Pichai said during the keynote. For example, if you point the camera at a concert venue marquee, Google Assistant can tell you more about the performer, as well as play music, help you buy tickets to the show, and add it to your calendar, all within a single app.

When the camera is pointed at an unfamiliar object, Google Assistant can use image recognition to tell you what it is. Point it at a shop sign and, using location info, Assistant can give you meaningful information about the business. All of this happens through the "conversational" voice interaction you already have with Assistant.

“You can point your phone at it and we can automatically do the hard work for you,” Pichai said.

With Google Lens, your smartphone camera won’t just see what you see, but will also understand what you see to help you take action. #io17 pic.twitter.com/viOmWFjqk1

— Google (@Google) May 17, 2017

If you use Google's Translate app, you have already seen how the technology works: Point the camera at some text and the app will translate it into a language you understand. In Google Assistant, Google Lens takes this further. In a demonstration, Google showed that Assistant will not only translate foreign text but also display images of what the text describes, to give more information.

Scott Huffman, Google's VP of engineering for Assistant, demonstrated how Google Lens within Assistant can not only translate Japanese text in an image, but also give further context about what the word means.

Image recognition technology isn't new, but Google Lens shows how advanced machine learning is becoming. Pichai said that, as with its work on speech, Google is seeing major improvements in vision. The computer vision technology not only recognizes what something is, but can even help repair or enhance an image. Took a blurry photo of the Eiffel Tower? Because the computer recognizes the object and knows what it's supposed to look like, it can automatically enhance the image based on what it already knows.

“We can understand the attributes behind a photo,” Pichai said. “Our computer vision systems now are even better than humans at image recognition.”

No longer will you need to write down what's in your vacation photos. Anil Sabharwal, Google's VP for Google Photos, showed how Google Lens can recognize objects in a photo and bring up relevant information about them.

To make Lens effective at its job, Google is relying on its Cloud Tensor Processing Unit (TPU) chips to handle the training and inference behind its machine learning. The second-generation TPU can handle 180 trillion floating-point operations per second, and 64 TPU boards in one supercomputer can handle 11.5 petaflops. With this computing power, the new TPU can handle both training and inference simultaneously, which wasn't possible in the past (the previous TPU could handle inference work, but not the more complex training). Machine learning takes time, and this hardware will help accelerate the effort.
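Those figures are consistent with each other: 64 boards at 180 teraflops apiece works out to roughly 11.5 petaflops for the full system.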

Google Lens will also power the next update to Google Photos. Image recognition is already used in Photos to recognize faces, places, and things to help with organization and search. With Google Lens, Google Photos can give you more information about the things in your photos, like the name and description of a building. Tapping on a phone number in a photo will place a call, and Lens can pull up more information about an artwork you saw in a museum, or even enter a Wi-Fi password automatically from a photo you took of the back of a Wi-Fi router.

Hate entering Wi-Fi network passwords? Snap a photo of the wireless settings, and Google Lens, working through Google Photos, can enter the password for you automatically.

Assistant and Photos will be the first apps to use Google Lens, but the technology will roll out to other apps as well. And with the announcement of Assistant support on iOS, iPhone users will be able to use Google Lens, too.
