Human brains are great at sorting through a barrage of sensory information – like discerning the smell of tomato sauce upon stepping into a busy restaurant – but artificial intelligence systems are challenged by large bursts of unregulated input.
Using the brain as a model, Cornell researchers from the Department of Psychology’s Computational Physiology Lab and the Cornell University AI for Science Institute have developed a strategy for AI systems to process olfactory and other sensory data. Human (and other mammalian) brains efficiently organize unruly input from the outside world into reliable representations that we can understand, remember and use to make long-lasting connections. With these brain mechanisms as a guide, the researchers are designing low-energy, efficient robotic systems inspired by biology and useful for a wide range of potential applications.
“The brain performs amazing feats of cognition in real time and with startlingly low energy consumption. This includes sorting through lots of sensory information – often occluded (partially blocked) or degraded – to identify the information that matters, and interpret it based on contextual cues and prior experience,” said Thomas Cleland, professor of psychology in the College of Arts and Sciences (A&S). “In principle, artificial physical systems should be able to do the same, once we figure out how it works.”
In this study, which was published April 23 in Scientific Reports with Cleland as co-corresponding author, the researchers illuminate key aspects of how brains process sensory information, both to understand neural computation and to help build artificial devices with new capabilities.
A goal of this “neuromorphic design” is to make AI devices that are as efficient and low-power as the brain. Achieving it would be a tremendous advance, said postdoctoral researcher Roy Moyal, first author and co-corresponding author of the study, but much work remains before it can be realized.
“The current state of the art in machine learning is based on very large foundation models that require a significant amount of processing power to train and work with. Instead, imagine being able to deploy lightweight, autonomous AI agents on small, made-for-purpose devices that could, for instance, detect hazardous materials. They would intelligently adapt to what they see in the field locally, quickly, without sending potentially sensitive data over a network,” said Moyal, whose long-term research goal is to develop brain-inspired neural networks that surpass modern AI systems in learning capability, energy efficiency and form factor.
To realize this vision, researchers need a more detailed understanding of how the olfactory system processes input.
The early olfactory system, Cleland said, includes the olfactory epithelium, which is the layer of neurons that sense chemicals in the nasal cavity; the olfactory bulb, a brain area to which these chemosensory neurons directly project; and multiple downstream areas in the brain that receive sensory information from the olfactory bulb and also communicate back with the bulb to shape its operations.
In this study, the Cornell researchers discovered exactly how the outer layers of the biological system – the olfactory epithelium and the outer layer of the olfactory bulb – perform computations that “create a firewall between the world and the brain,” Cleland said.
The barrage of sensory input – like the smells from a restaurant – needs to be organized, constrained and massaged into a form that the deep bulb and downstream areas can deal with without breaking, Cleland said, and without losing much, if any, of the information the sensory input provides.
The study focuses on these initial computations of the olfactory bulb, which itself has two computational layers. “We think of the bulb as the interface between the brain and the world for this sensory modality,” Cleland said. “The deep layer of the bulb is dynamically sophisticated and heavily devoted to learning about odors. But to operate in this way, it needs all its inputs to be well-behaved.”
Artificial systems, too, need the complex sensory input from the world to be packaged and organized in a way that retains all critical information. This is true in setups where artificial systems use chemical sensors to “smell” as well as in systems that detect other kinds of sensory input. The researchers’ work on the olfactory system has also yielded theoretical insights regarding spike-phase coding in the brain – a method by which neurons transmit information by tightly regulating the timing of their communication pulses. This common energy-conservation strategy, it is now clear, can also be leveraged for stable learning and regularization in practical scenarios where data are noisy and scarce.
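To make the idea concrete, here is a minimal sketch of spike-phase coding, assuming a simple one-spike-per-cycle scheme; the function names, the 25 ms cycle length and the normalization are illustrative choices, not details from the study.

```python
import numpy as np

# Illustrative spike-phase coding: each sensor channel fires exactly one
# spike per oscillation cycle, and the *timing* of that spike within the
# cycle carries the signal. Stronger inputs fire earlier. All names and
# parameters here are assumptions for illustration, not from the study.

CYCLE_MS = 25.0  # assumed oscillation period (roughly a 40 Hz gamma cycle)

def encode_phase(intensities: np.ndarray) -> np.ndarray:
    """Map sensor intensities to spike times (ms) within one cycle.

    Inputs are clipped to [0, 1], so arbitrarily large or negative
    sensor readings still yield bounded, well-behaved spike times.
    """
    clipped = np.clip(intensities, 0.0, 1.0)
    return (1.0 - clipped) * CYCLE_MS  # intensity 1.0 fires at t = 0

def decode_phase(spike_times_ms: np.ndarray) -> np.ndarray:
    """Invert the code: recover intensities from spike times."""
    return 1.0 - spike_times_ms / CYCLE_MS

# A noisy, unbounded reading becomes a bounded timing code.
raw = np.array([0.1, 0.9, 1.7, -0.2])
times = encode_phase(raw)      # 22.5, 2.5, 0.0, 25.0 ms
print(decode_phase(times))     # 0.1, 0.9, 1.0, 0.0
```

Because every channel spikes exactly once per cycle, the energy cost stays constant and the output is bounded by construction – one way to keep inputs “well-behaved” for downstream layers regardless of what the raw sensors report.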
“It suggests interesting parallels to recent work on quantization-aware training in machine learning and artificial intelligence that we are currently exploring. We've been hard at work implementing and optimizing a generalized algorithm based on these principles,” Moyal said.
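The parallel can be illustrated with the “fake quantization” step at the heart of quantization-aware training. This toy version – the function name and level count are assumptions for illustration, not the generalized algorithm Moyal describes – rounds activations to a small set of levels, much as phase coding discretizes an analog value into a limited number of timing slots per cycle.

```python
import numpy as np

# Illustrative "fake quantization," the core operation in
# quantization-aware training: activations are rounded to a small set
# of discrete levels during the forward pass (in actual training,
# gradients typically bypass the rounding via a straight-through
# estimator). Names and parameters are illustrative assumptions.

def fake_quantize(x: np.ndarray, num_levels: int = 16) -> np.ndarray:
    """Clip to [0, 1] and round to the nearest of `num_levels` levels."""
    x = np.clip(x, 0.0, 1.0)
    step = 1.0 / (num_levels - 1)
    return np.round(x / step) * step

x = np.array([0.07, 0.52, 0.99])
print(fake_quantize(x))  # approximately [0.067, 0.533, 1.0]
```

One way to read the parallel: both schemes force a continuous signal into a small, fixed alphabet – discrete quantization levels in one case, discrete spike-timing slots in the other – which can act as a regularizer when data are noisy or scarce.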
“You don’t have to know anything about the world for this algorithm to work – it will regularize anything that its sensors can encode,” Cleland said. “But if you do happen to know something about the world, you can improve performance further.”
While this study is about olfactory bulb circuitry, its findings are not limited to olfaction, Cleland said. It’s a generic regularization mechanism for any sort of data that have a similar overall structure, making potential applications in robotics or other AI processing quite broad.
Contributors include Kyrus Mama ’21 and Matthew Einhorn, Computational Physiology Lab (A&S); and Ayon Borthakur, Indian Institute of Technology, Guwahati.
The study received support from the National Science Foundation, Intel, Teledyne, and the Eric and Wendy Schmidt AI in Science Postdoctoral Fellowship.