
Intel is using A.I. to build smell-o-vision chips

While smell-o-vision may be a long way from being ready for your PC, Intel is partnering with Cornell University to bring it closer to reality. Intel’s Loihi neuromorphic research chip acts as a powerful electronic nose with a wide range of applications, recognizing dangerous chemicals in the air.

“In the future, portable electronic nose systems with neuromorphic chips could be used by doctors to diagnose diseases, by airport security to detect weapons and explosives, by police and border control to more easily find and seize narcotics, and even to create more effective at home smoke and carbon monoxide detectors,” Intel said in a press statement.


With machine learning, Loihi can recognize hazardous chemicals “in the presence of significant noise and occlusion,” Intel said, suggesting the chip can be used in the real world, where smells — such as perfumes, food, and other odors — are often found in the same area as a harmful chemical. Using machine learning, Loihi learned to identify each hazardous odor from just a single sample, and learning a new smell didn’t disrupt previously learned scents.

Intel Labs senior research scientist Nabil Imam, who worked on the Loihi development team, based the chip’s scent analysis on the same computational principles that biological brains in humans and animals use. The company worked with Cornell to analyze the brain’s electrical activity as animals smelled odors, and Intel Labs scientists derived a set of algorithms from that activity and configured them on neuromorphic silicon.

There’s still plenty of work to be done on electronic noses. As with image recognition in machine learning, olfactory learning must distinguish between closely related categories. Fruits with similar odors, for example, can be difficult for neuromorphic systems like Loihi to tell apart.

“Imam and team took a dataset consisting of the activity of 72 chemical sensors in response to 10 gaseous substances (odors) circulating within a wind tunnel,” Intel detailed. “The sensor responses to the individual scents were transmitted to Loihi, where silicon circuits mimicked the circuitry of the brain underlying the sense of smell. The chip rapidly learned neural representations of each of the 10 smells, including acetone, ammonia and methane, and identified them even in the presence of strong background interferents.”
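The experiment Intel describes — one stored example per odor, then recognition of noisy readings from 72 sensors — can be illustrated with a toy one-shot classifier. This sketch is purely illustrative and is not Intel’s neuromorphic algorithm: the sensor profiles here are randomly generated stand-ins, and a simple nearest-template cosine match replaces Loihi’s brain-inspired circuitry.

```python
import numpy as np

rng = np.random.default_rng(0)

N_SENSORS = 72  # sensor count from Intel's dataset
N_ODORS = 10    # odor count from Intel's dataset

# Hypothetical "true" response profile of the 72 sensors to each odor.
profiles = rng.uniform(0.0, 1.0, size=(N_ODORS, N_SENSORS))

# One-shot learning: store a single, slightly noisy training sample per odor.
templates = profiles + rng.normal(0.0, 0.05, size=profiles.shape)

def classify(reading: np.ndarray) -> int:
    """Return the index of the stored template most similar to `reading`,
    using cosine similarity."""
    norms = np.linalg.norm(templates, axis=1) * np.linalg.norm(reading)
    sims = templates @ reading / norms
    return int(np.argmax(sims))

# A fresh, noisier reading of odor 3, standing in for background interference.
test_reading = profiles[3] + rng.normal(0.0, 0.2, size=N_SENSORS)
print(classify(test_reading))
```

Even this crude template match tolerates moderate sensor noise; the point of the neuromorphic approach is to do the same kind of one-shot recognition far more robustly and efficiently in silicon.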

Intel claims Loihi can learn 10 different odors right now. In the future, robots equipped with electronic noses might be used to monitor the environment, and doctors could use these computerized olfactory systems for medical diagnosis in instances where diseases emit particular odors.

Chuong Nguyen