
Pixel 2 owners get the first glimpse of Google Lens computer vision possibilities

Unveiled earlier in 2017 at Google I/O, the first public version of the artificially intelligent computer vision program Google Lens is now part of the new Google Pixel 2. At its October 4 event in San Francisco on Wednesday, Google shared a preview of Lens that will ship inside the new Pixel 2 smartphone, with integration into both Google Photos and Google Assistant.

Google Lens is the tech giant’s computer vision software that pulls information from a photograph, either to save time by skipping the typing or to teach us something new about the things we see around us. The tool effectively mixes Google search with a camera, and while the Pixel 2 only contains a preview of the feature, the platform already offers a few promising shortcuts.


During the event, Google’s Aparna Chennapragada showed how the new feature turns the smartphone’s camera into a sort of keyboard. When users take a photo of something with text on it, like a flyer, Google Lens lets them highlight and copy details such as email addresses, phone numbers, websites, and street addresses. The shortcut makes it easy to look up a location on Google Maps or call a phone number without typing it out.
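
Google has not said exactly how Lens does this under the hood, but a similar capability is publicly available through Google’s Cloud Vision API. The short Python sketch below is only an illustration of that kind of photo-to-text workflow, not Lens itself; the file name and the regular expressions are hypothetical.

```python
import re
from google.cloud import vision  # pip install google-cloud-vision

# Illustrative sketch: Cloud Vision's text detection is a publicly available
# cousin of the OCR behind Lens-style "camera as keyboard" features.
client = vision.ImageAnnotatorClient()

with open("flyer.jpg", "rb") as f:  # hypothetical photo of a flyer
    photo = vision.Image(content=f.read())

# Run optical character recognition on the photo.
response = client.text_detection(image=photo)
text = response.full_text_annotation.text

# Pull out the details a user would otherwise have to retype.
emails = re.findall(r"[\w.+-]+@[\w-]+\.[\w.]+", text)
phones = re.findall(r"\+?\d[\d\s().-]{7,}\d", text)

print("Emails:", emails)
print("Phones:", phones)
```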

Besides serving as a visual shortcut to typing out long and unusual email addresses, Google Lens is also designed to help users understand the objects they see — starting with art and entertainment. Snapping a photo of a piece of art will pull up who the artist is and what else they painted. See a movie poster? Lens will tell you whether the flick is worth watching. Snapping photos of album covers and book covers also leads to more details on the work.

The preview inside the Pixel 2 is just a start for the computer vision software. When the software was first announced, Google outlined a long list of possibilities, including translating text, getting more details on a business, reading Wi-Fi network settings, and learning the name of that flower you just spotted.

Google’s computer vision also works with existing photos, powering a number of tools inside the native Google Photos app on the Pixel 2. Searching for specific objects, people and even famous landmarks is possible through the program’s auto-tagging feature.

Google Lens is based on machine learning — Google essentially used the millions of photos in its search results to train the computer to recognize what a specific object looks like. With enough photos, the program can learn to recognize the Eiffel Tower on a cloudy day, lit up at night, or even blurred from camera shake, and correctly identify what is in the photo.

Chennapragada said that Google Lens will continue to improve with use. For example, she said, Google’s voice recognition didn’t always recognize speech correctly at first, particularly with factors like accents. Now, after several years of development, Google’s voice recognition has a 95 percent accuracy rate.

Google CEO Sundar Pichai said that the object recognition AI built by Google had a 39 percent accuracy rate. Using what’s called AutoML, which is essentially artificial intelligence building more AI programs, that accuracy rate has improved to 43 percent and is continuing to improve.

“This is why we are excited about the shift from mobile first to AI first; it’s radically rethinking how computers work,” Pichai said during the presentation. “Computers should adapt to how people live their life, rather than people adapting to computers.”

Google Lens will first be available on the Pixel 2 by tapping the Lens icon inside both Google Photos and Google Assistant.

Hillary K. Grigonis