How AI and Neuroscience Are Coming Together to Benefit Both Disciplines (and Society)
Biomedical engineer Chethan Pandarinath develops prosthetics – but not just any prosthetics. The Emory University and Georgia Tech researcher’s goal is to enable people with paralyzed limbs to control prosthetic arms as if they were their own, using signals from their brain.
Pandarinath hopes to achieve this by analyzing brain activity recordings from paralyzed people to identify the neural patterns corresponding to specific movements. In theory, these patterns could drive artificial intelligence (AI) systems connected to prosthetic limbs, giving the wearer natural movement control over what’s essentially a foreign object attached to the body.
If that sounds complicated, it’s because it is – hugely so, Pandarinath says in Nature. But it’s certainly not science fiction. And that’s mainly due to the growing convergence of AI and neuroscience.
AI and neuroscience: Two sides of the same coin
Both disciplines, after all, share the same goal of understanding the secrets of human cognition. That means learning how the brain works, which, in turn, can inspire the design and development of artificial neural networks while reaffirming the validity of previously developed algorithms. Indeed, the continued improvement of modern AI and deep learning systems (in terms of accuracy, resource-intensiveness, and the ability to learn and adjust more effectively with less data) requires a fundamental understanding of the workings of the human brain.
One of the most potent connections between AI and neuroscience is the concept of reward-based learning, a focus of some computer science researchers since the 1980s. From an AI and computer science perspective, it’s easy to see why: reward-based systems can learn on their own through built-in rewards and punishments. Instead of explicit human instruction, they rely on reward predictions that are continually adjusted based on experience.
These reward-based systems include temporal difference (TD) learning, a milestone approach developed in the late 1980s and early 1990s. Unlike traditional learning methods, TD learning continuously samples its environment and compares the rewards it actually receives against the rewards it expected. At each moment in time, it adjusts its expectations based on that difference (and, by extension, improves the accuracy of its predictions).
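The TD update described above fits in a few lines of Python. The following is a minimal illustrative sketch, not code from the article: the toy chain environment, the step size `alpha`, the discount `gamma`, and all variable names are assumptions made here for demonstration. The agent keeps a reward prediction `V[s]` for each state and nudges it by the TD error, i.e., the gap between the reward it received (plus its prediction for the next state) and what it expected.

```python
import random

# Toy TD(0) demo on a 5-state chain (hypothetical example).
# States 0..4; reaching the terminal state 4 pays a reward of 1.
# Core update: V(s) <- V(s) + alpha * (r + gamma * V(s') - V(s))
random.seed(0)

N_STATES = 5
alpha, gamma = 0.1, 0.9          # learning rate and discount factor
V = [0.0] * N_STATES             # reward predictions, one per state

for episode in range(2000):
    s = 0
    while s != N_STATES - 1:
        # Move forward 1 or 2 states at random
        s_next = min(s + random.choice([1, 2]), N_STATES - 1)
        r = 1.0 if s_next == N_STATES - 1 else 0.0
        # TD error: received (plus discounted future estimate) vs. expected
        td_error = r + gamma * V[s_next] - V[s]
        V[s] += alpha * td_error
        s = s_next

print([round(v, 2) for v in V])
```

After training, predictions rise as states get closer to the reward, with the state just before the goal predicting a value near 1 – the "continually adjusting expectations" the text describes.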
According to DeepMind, an AI research firm, the eureka moment permanently linking AI and neuroscience came just a few years later, in the mid-1990s. It was then that researchers noticed that neurons in the brain seemed to use similar reward predictions – by studying the brains of live animals, they were able to see that certain dopamine neurons fired if the animal received more or less reward than expected. They soon proposed that the human brain also uses a TD learning algorithm, a hypothesis that has since been validated by numerous other experiments.
Since then, most AI researchers have focused on deep reinforcement learning using advanced methodologies such as distributional reinforcement learning, allowing them to tackle ever-more complicated problems. Distributional reinforcement learning improves on traditional TD learning through its ability to predict a much broader spectrum of possible rewards. And a recent DeepMind research paper published in Nature goes even further, suggesting that the human brain also uses distributional reinforcement learning.
“We found that dopamine neurons in the brain were each tuned to different levels of pessimism or optimism,” the authors write. “If they were a choir, they wouldn’t all be singing the same note, but harmonizing – each with a consistent vocal register, like bass and soprano singers. In artificial reinforcement learning systems, this diverse tuning creates a richer training signal that greatly speeds learning in neural networks, and we speculate that the brain might use it for the same reason.”
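The “choir” idea can be illustrated with a small, hedged sketch – this is not DeepMind’s actual method or code, just a toy model built here to show the mechanism. Each predictor stands in for a dopamine neuron with its own optimism level `tau`: it weights positive and negative prediction errors asymmetrically, so all predictors see the same stream of random rewards yet settle on different estimates, from pessimistic (low) to optimistic (high).

```python
import random

# Illustrative sketch of diverse reward predictors (hypothetical code).
# Reward is a coin flip: 1 with probability 0.5, else 0. A predictor with
# optimism tau scales good surprises by tau and bad surprises by (1 - tau),
# so each converges to a different statistic of the same reward, not its mean.
random.seed(1)

taus = [0.1, 0.3, 0.5, 0.7, 0.9]   # pessimistic -> optimistic
values = [0.0] * len(taus)          # each predictor's reward estimate
alpha = 0.01                        # learning rate

for step in range(50_000):
    r = 1.0 if random.random() < 0.5 else 0.0
    for i, tau in enumerate(taus):
        delta = r - values[i]                      # prediction error
        scale = tau if delta > 0 else (1.0 - tau)  # asymmetric weighting
        values[i] += alpha * scale * delta

print([round(v, 2) for v in values])
```

The predictors end up spread out rather than agreeing on the 0.5 average – the “harmonizing” registers of the quote – and together they encode the whole distribution of possible rewards, not just its expected value.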
How AI and neuroscience can make each other even better
There is still much we don’t know about the brain. Additionally, the vast majority of current AI algorithms have several disadvantages compared with natural neural networks that researchers have (so far) been unable to overcome. These challenges include their insatiable hunger for large training datasets and the massive amounts of energy required to match even a toddler’s ability to correct mistakes. Deep learning algorithms are still elementary compared to the brain’s complex circuitry, while even the most robust current AI algorithms easily break when confronted with anything outside their express purpose. “Deep learning algorithms (often) require millions of training examples, whereas humans, especially kids, can pick up a new concept or motor skill with one shot,” illustrates neuroscientist and science writer Shelley Fan.
Despite this, the two disciplines – thanks in part to several partnerships and researchers with combined AI and neuroscience backgrounds – are increasingly inspiring and improving each other through the shared framework of reinforcement learning. “Combining (deep learning) with brain-like innate structures could lead us towards machines that learn as quickly, flexibly, and intuitively as humans,” Fan says. Dr. Shimon Ullman of the Weizmann Institute of Science adds there are several ways neuroscience can improve AI and deep learning even further, including:
- The power of natural neurons. As mentioned earlier, current artificial neural networks are relatively simple compared to the complexities found in the human brain – including natural neurons, the power of which researchers are only just beginning to understand. Harnessing new insights into neurons’ functions and behaviors in the human brain could revolutionize artificial neural networks’ effectiveness.
- Circuit connectivity. Similarly, the connections within artificial neural networks and neuron layers are pretty simple compared to the brain’s intricate wiring. Uncovering deeper knowledge about how neurons interact with each other in the human brain could lead to more complex connections between neurons in artificial networks.
- Innate cognitive structures. By the time a baby is born, it already has several deep-seated concepts embedded within its brain, including the ability to recognize human hands or faces. By studying these innate structures, researchers could develop AI systems to solve complex problems with minimal to no training.
Because the two disciplines have a symbiotic relationship and are essentially solving similar problems from different angles, AI also impacts neuroscience profoundly. One major contribution is AI’s ability to analyze far larger datasets than researchers would have thought possible even a few years ago, but there are others, including using AI algorithms to reassess our ideas on how the brain works when performing complex tasks and movements. “If you can train a neural network to do it,” says Google Brain’s Dr. David Sussillo, “then perhaps you can understand how that network functions, and then use that to understand the biological data.”
This intrinsic relationship between AI and neuroscience is something that Emory University and Georgia Tech’s Pandarinath, the researcher developing AI-infused prosthetics, knows all too well. “The technology is coming full circle and being applied back to understand the brain,” he explains, which could help spur new treatments for various ailments. Maneesh Sahani, of the Gatsby Computational Neuroscience Unit at University College London, agrees. “We’re effectively studying the same thing,” he says.
“In the one case, we’re asking how to solve this learning problem mathematically so it can be implemented efficiently in a machine. In the other case, we’re looking at the sole existing proof that it can be solved — which is the brain.”