
Pinterest Labs aims to tackle the most challenging problems in AI

What happens when you add machine learning to a database of 100 billion image-rich objects and ideas? Pinterest is already scratching the surface of a potential answer to that question with its AI-powered tools, including visual search and Pinterest Lens, but now it wants to dig deeper.

The company wants to join the ranks of industry giants Facebook and Google in accelerating the growth of artificial intelligence through open research and collaboration. To help it achieve that goal, it is launching a new group — dubbed “Pinterest Labs” — made up of machine learning experts whose investigations could help transform the way users discover ideas.

In the words of Pinterest chief scientist and Stanford associate professor Jure Leskovec: “As much as we’ve done, we still have far to go — most of Pinterest hasn’t been built yet.”

By working with the research community and universities — such as the Berkeley Artificial Intelligence Research Lab, the University of California, San Diego, and Stanford University — Pinterest Labs hopes to build the AI systems behind some of its integral features. These include the “taste graph,” the technique the company uses to map the connections between pins, people, and boards in order to surface relevant ideas for users. The company also hopes machine learning can help it provide personalized recommendations faster.
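Pinterest hasn’t published the details of how the taste graph generates recommendations, but the general idea — surfacing new pins through users who share your tastes — can be sketched with a toy neighbor-overlap heuristic. Everything below (the user names, pin names, and scoring rule) is hypothetical and purely illustrative:

```python
from collections import defaultdict

# Hypothetical toy "taste graph": which pins each user has saved.
# The real graph connects billions of pins, people, and boards.
saves = {
    "ana":  {"succulents", "terrariums", "macrame"},
    "ben":  {"succulents", "terrariums", "bonsai"},
    "cara": {"macrame", "weaving"},
}

def recommend(user, saves, k=2):
    """Score pins the user hasn't saved by how many 'taste neighbors'
    (users sharing at least one saved pin) have saved them."""
    seen = saves[user]
    scores = defaultdict(int)
    for other, pins in saves.items():
        if other == user or not (seen & pins):
            continue  # skip self and users with no overlapping taste
        for pin in pins - seen:
            scores[pin] += 1
    # Highest score first; ties broken alphabetically for determinism
    return sorted(scores, key=lambda p: (-scores[p], p))[:k]

print(recommend("ana", saves))  # → ['bonsai', 'weaving']
```

Here “ana” overlaps with both “ben” (succulents, terrariums) and “cara” (macrame), so their unseen pins surface as recommendations. Production systems replace this raw co-save count with learned graph embeddings and ranking models, but the graph-traversal intuition is the same.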

Those interested in its work can keep up with the research group on its dedicated website, and by attending its public tech talks, the first of which took place on Tuesday at the company’s headquarters in San Francisco. Pinterest Labs will also share its findings with the academic community by publishing research papers and releasing its data to researchers.

Leskovec claims that Pinterest’s systems now rank more than 300 billion objects per day. In the last year, the platform has increased the number of recommendations it serves by 200 percent, while making them 30 percent more engaging.

Pinterest took a big leap into machine learning with the launch of its Pinterest Lens tool at the start of this month. The machine learning system that powers Lens can recognize objects in photos and identify their features, such as color, allowing users to snap images with their smartphone camera to discover and purchase related items on Pinterest.

Saqib Shah
Former Digital Trends Contributor