The phrase “positive reinforcement” is something you hear more often in an article about child rearing than in one about artificial intelligence. But according to Alice Parker, Dean’s Professor of Electrical Engineering in the Ming Hsieh Department of Electrical and Computer Engineering, a little positive reinforcement is just what our AI machines need. For over a decade, Parker has been building electronic circuits that reverse-engineer the human brain, both to better understand how it works and to ultimately build artificial systems that mimic it. Her most recent paper, co-authored with Ph.D. student Kun Yue and colleagues from UC Riverside, was just published in the journal Science Advances and takes an important step toward that goal.
The AI we rely on and read about today is modeled on traditional computers; it sees the world through the lens of binary zeros and ones. This is fine for making complex calculations, but according to Parker and Yue, we are quickly approaching the limits of the size and complexity of problems we can solve on the platforms our AI runs on. “Since the initial deep learning revolution, the goals and progress of deep-learning based AI as we know it has been very slow,” Yue says. To reach its full potential, AI can’t simply think better; it must react to events and learn on its own in real time. And for that to happen, we must fundamentally rethink how we build AI in the first place.
To address this problem, Parker and her colleagues are looking to the most accomplished learning system nature has ever created: the human brain. This is where positive reinforcement comes into play. Brains, unlike computers, are analog learners, and biological memory has persistence. Analog signals can take on a whole range of states (much like humans). While a binary AI, even one built with similar nanotechnologies to achieve long-lasting memory, might only be able to judge something as good or bad, an analog brain can grasp finer shades of meaning: a situation might be “very good,” “just okay,” “bad,” or “very bad.” This field is called neuromorphic computing, and it may just represent the future of artificial intelligence.
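To make that contrast concrete, here is a minimal, purely illustrative sketch in Python. It is not Parker and Yue’s circuitry, and every function name in it is hypothetical; it simply shows how a binary feedback signal collapses every outcome to “good” or “bad,” while a graded, analog-style signal lets “very good” and “just okay” nudge a connection weight by different amounts.

```python
# Illustrative sketch only, not the authors' hardware: a toy
# reinforcement-style weight update driven by binary vs. graded feedback.

def binary_feedback(outcome: float) -> float:
    """Collapse an outcome into two states: good (1.0) or bad (0.0)."""
    return 1.0 if outcome >= 0.5 else 0.0

def analog_feedback(outcome: float) -> float:
    """Keep the full graded value, from 'very bad' (0.0) to 'very good' (1.0)."""
    return max(0.0, min(1.0, outcome))

def reinforce(weight: float, feedback: float, rate: float = 0.1) -> float:
    """Nudge a connection weight up or down in proportion to the feedback."""
    return weight + rate * (feedback - 0.5)

if __name__ == "__main__":
    # Outcomes roughly meaning: very bad, bad, just okay, very good.
    outcomes = [0.05, 0.4, 0.6, 0.95]
    w_binary, w_analog = 0.5, 0.5
    for o in outcomes:
        w_binary = reinforce(w_binary, binary_feedback(o))
        w_analog = reinforce(w_analog, analog_feedback(o))
        print(f"outcome={o:.2f}  binary weight={w_binary:.3f}  analog weight={w_analog:.3f}")
```

In this toy example, the binary learner treats a “just okay” outcome exactly like a “very good” one, while the graded learner adjusts its weight by an amount proportional to how good the outcome actually was, which is the kind of nuance analog, brain-like circuits are meant to capture.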