Neuroscience and Artificial Intelligence: How They Are Perpetuating Each Other's Progress

Perri Corsello
Published in Predict · 9 min read · Oct 11, 2018

Before I delve into two of the most existential topics known to modern man, let's look at how they relate to us as humans. So, neuroscience: basically, the brain studying itself. Or, as I view it, the universe studying itself. But that is an article in and of itself. Anyway, this brain we all possess is the most sophisticated, intricate and complex interconnected web of electricity and matter known to man. This 3-pound mass of gelatinous goo between our ears crams in somewhere around 100 billion brain cells. Roughly half of those are electrically excitable neurons, which sit at a resting potential of around -70 millivolts. The other half are glial cells, essentially the housekeepers. These cells are not nearly as electrically excitable as neurons; their job is to clean up debris, maintain proper pH and water levels, provide neurons with structural support and carry out a long list of other functions. This incredible complexity provides a blueprint for the development of AI systems that are, in turn, used to quantify and help understand the inner workings of nature's most sophisticated computer.

We may live in the Matrix

Within all our cells is the equivalent of a computer program, running algorithms based on the feedback the cells receive from their surrounding environment. Biochemical cascades, or specific series of chemical events that lead to a particular outcome, fuel this natural algorithmic behavior. One example is the cascade for synthesizing glucose in our cells when we are lacking it, termed gluconeogenesis, which ultimately serves as a source of energy. Another, from mathematics, is Euclid's algorithm, used to find the greatest common divisor of two numbers.
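For concreteness, here is Euclid's algorithm written out as a short Python function (a minimal sketch; the function and variable names are my own):

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace the pair (a, b)
    with (b, a mod b) until the remainder is zero."""
    while b != 0:
        a, b = b, a % b
    return a

# Example: the greatest common divisor of 48 and 36 is 12.
print(gcd(48, 36))  # 12
```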

Notice how naturally occurring processes are algorithms, just like mathematically defined ones. The same basic idea applies regardless: there are inputs, and when certain conditions are met throughout the algorithm, a particular outcome is achieved. Therefore nature, and the neuroscience that studies it, can provide inspiration for the design of artificial algorithms that mimic natural processes.

Artificial neural networks

Convolutional neural networks (CNNs) are essentially a synthetic analogue of the brain's visual system. They have the ability to learn and detect patterns, and they are designed in a manner similar to our brains. Basically, there is an input layer, a series of filter layers, and an output layer. The input layer takes in the raw sensory input and passes this information on to filters that process various patterns such as line orientations, shapes and colors. There can be many filter layers that process increasingly sophisticated attributes of the environment. For example, there may be a filter layer that processes specific facial features such as eyes, ears, cheekbones or hair. These can get as specific as the developer desires, but the result is a system that can learn and adapt from data presented to it, as opposed to a system that runs off predetermined code. This learning process requires very large amounts of data, usually in the form of millions of images. A general sketch of how this system is laid out follows below.
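A minimal illustration in PyTorch (my choice of framework; the layer counts and sizes are arbitrary placeholders, not taken from any particular model):

```python
import torch
from torch import nn

# A tiny convolutional network: raw pixels go in, pass through stacked
# filter layers that detect increasingly complex patterns, and come out
# as scores over possible classes.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # first filters: edges, colors
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # later filters: shapes, parts
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 10),                    # output layer: 10 class scores
)

scores = model(torch.randn(1, 3, 32, 32))  # one 32x32 RGB image
print(scores.shape)  # torch.Size([1, 10])
```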

Our brains work in a similar manner: sensory input is received, communicated to neurons that process and interpret environment-specific information, and then passed on to the conscious mind. In the following sections, I will discuss the various areas of AI research that are modeled on our biological brains and give a basic explanation of how they work. Enjoy!

Deep Learning

Deep learning is a form of machine learning that involves multiple interacting, parallel information-processing layers, each using the output of the previous layer as input for the next; it is the basis of CNNs. This form of information processing is modeled after biological systems and is focused on detecting patterns within the environment.

The brain has the ability to process many things at the same time, termed parallel processing. This has provided inspiration for the development of parallel computing, i.e. computers that carry out many calculations simultaneously as opposed to one at a time (serial computing). Having a multi-layer processing architecture allows complex topics to be conceptualized through a hierarchy of understanding, using broad ideas as a basis for understanding more specific concepts. Each layer has a task, gets more specific as you move up the hierarchy, and may interact with other layers. The revolutionary thing about deep learning is that the AI system can learn on its own and does not need predetermined code to process information. It is a matter of repetition and trial and error; however, it does require a very large amount of data to learn from. For example, 10 million images might be presented to the AI system, some with stop signs in them. After this training, an image of a stop sign would be presented and processed based on its characteristics in each filter layer, such as being octagonal in shape, white around the edges, red in the middle, with the letters "stop" in the center, and the network would generate an educated guess as to what it could be: maybe a 93% chance of being a stop sign, a 5% chance of a yield sign and a 2% chance that it is a cardinal in a tree, based on past exposure to stop sign images. This is similar to how our brains work and illustrates how biological mechanisms are used to design AI systems.
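The final "educated guess" is just a probability distribution over possible labels, typically produced by a softmax over the network's raw output scores. A toy illustration in Python (the raw scores here are invented so the percentages roughly match the example above):

```python
import numpy as np

labels = ["stop sign", "yield sign", "cardinal in a tree"]
logits = np.array([4.0, 1.1, 0.2])  # raw scores from the final layer (made up)

# Softmax turns raw scores into probabilities that sum to 1.
probs = np.exp(logits) / np.sum(np.exp(logits))

for label, p in zip(labels, probs):
    print(f"{label}: {p:.0%}")   # roughly 93%, 5%, 2%
```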

Continual learning

AI systems need a way to preserve past knowledge. This has proven difficult in many AI systems, where old information is simply overwritten by new information. Once again, AI development has looked to neuroscience for guidance. Our brains prevent new memories from replacing old ones by inhibiting plasticity (change) in areas associated with the old memory while allowing greater change to occur in areas associated with the new one. AI systems can be modeled in a similar fashion: the parts of a neural network identified as important for previous tasks are protected from change when learning new tasks, so they remain tied to the previous tasks and are largely unaffected by the new learning. This permits learning multiple tasks without having to increase network capacity, diverting resources instead to tasks with similar structure.
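One concrete technique along these lines is elastic weight consolidation, which adds a penalty that makes weights important to old tasks resistant to change while leaving the rest free to adapt. A simplified sketch of that penalty (the function and parameter names are my own, and in practice the importance values would be estimated from how sensitive the old task's performance is to each weight):

```python
import torch

def consolidation_loss(task_loss, params, old_params, importance, strength=100.0):
    """Quadratic penalty that resists changing weights that mattered
    for previously learned tasks (importance 0 means free to change)."""
    penalty = torch.tensor(0.0)
    for p, p_old, imp in zip(params, old_params, importance):
        penalty = penalty + (imp * (p - p_old) ** 2).sum()
    return task_loss + strength * penalty

# Toy usage: the first weight is "stiff" (importance 5.0), the second is free.
params     = [torch.tensor([1.2, 0.4])]
old_params = [torch.tensor([1.0, 0.0])]
importance = [torch.tensor([5.0, 0.0])]
print(consolidation_loss(torch.tensor(0.3), params, old_params, importance))
```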

Attention

Neuroscience can also confirm the validity of an artificial process. Until recently, AI systems would process images by giving equal priority to each pixel in the image; the brain, however, does not work this way. The brain has distinct areas dedicated to particular processes that interact and learn from each other. Our brains process entire scenes by giving priority to relevant information and tactically shifting attention through the scene in series, incrementally filling in the gaps and making inferences based on past experience while drawing information from other areas as needed. This allows objects in a scene to be identified even in the presence of visual noise. Current AI systems are therefore designed in a similar manner: they process the relevant pieces of an image, update their internal state representation, and then move on to the next sample.

This attentional model has proven surprisingly successful at reconstructing visual scenes from past experiences (memory), with computational resources (attention) diverted to parts of the AI's "mental representation" of a scene it has previously been exposed to.
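A toy sketch of that glimpse-then-update loop, using soft attention weights over image patches (all sizes, names and the random data here are illustrative, not taken from any of the cited models):

```python
import numpy as np

rng = np.random.default_rng(0)
patches = rng.normal(size=(9, 64))   # an image split into 9 flattened patches (toy data)
state = np.zeros(64)                 # the model's internal representation of the scene
W = rng.normal(size=(64, 64)) * 0.1  # a made-up state-update weight matrix

for step in range(4):
    # Score each patch against the current state; softmax gives attention weights.
    scores = patches @ state + rng.normal(size=9) * 0.01  # small noise breaks ties at step 0
    weights = np.exp(scores) / np.exp(scores).sum()
    glimpse = weights @ patches                   # attend mostly to the relevant patch
    state = np.tanh(W @ state + glimpse)          # update the internal state, then move on
    print(f"step {step}: most attended patch = {weights.argmax()}")
```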

Memory

Memory is essential to the functioning of an intelligent being. There are many different types of memory present in humans, and this is no different for AI systems. Reinforcement learning encodes the value and importance of information based on past experience. Episodic memory is the impression an experience leaves after a single exposure: the experience was so meaningful that it was encoded the first time it occurred, just as our brains remember emotionally charged events or novel, highly rewarding experiences with increased accuracy. Obviously, emotional events cannot be encoded into an AI system (yet), but novel, highly rewarding experiences that stand out can be. AI episodic memory systems store specific events, e.g. the actions taken and the rewards or outcomes received when playing a video game, to influence future actions based on these past experiences. So, if an action in a video game was rewarded in the past, it will likely be repeated to acquire that same reward. This is how our brains work, and it is reflected in AI programming.
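A bare-bones sketch of such an episodic store: record what happened after each situation-action pair the first time it occurs, then prefer actions that paid off before (the data structure and names are my own illustration):

```python
from collections import defaultdict

# Episodic memory: a single exposure is enough to record an outcome.
episodic_memory = defaultdict(dict)   # state -> {action: reward}

def remember(state, action, reward):
    episodic_memory[state][action] = reward

def act(state, available_actions):
    """Repeat the best-remembered action for this state, if any."""
    seen = episodic_memory.get(state, {})
    if seen:
        return max(seen, key=seen.get)
    return available_actions[0]  # otherwise fall back to a default/exploratory choice

remember("boss_room", "dodge_left", reward=+10)
remember("boss_room", "attack_head_on", reward=-5)
print(act("boss_room", ["dodge_left", "attack_head_on"]))  # dodge_left
```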

An important aspect of AI memory is the combination of deep learning and reinforcement learning, where experiences are replayed while the system is offline and meaningful memories are recorded based on success or failure. This is the equivalent of sleep in humans, where memories are consolidated and stored in long-term memory based on emotional or reward-based associations.
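That offline replay is commonly implemented as an experience replay buffer: stored transitions are sampled in batches and fed back to the learning algorithm between episodes. A minimal sketch (capacity and batch size are arbitrary choices):

```python
import random
from collections import deque

class ReplayBuffer:
    """Stores past experiences and replays random batches of them,
    loosely analogous to memory consolidation during sleep."""
    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)

    def store(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def replay(self, batch_size=32):
        return random.sample(list(self.buffer), min(batch_size, len(self.buffer)))

buffer = ReplayBuffer()
buffer.store("screen_1", "jump", 1.0, "screen_2")
print(buffer.replay(batch_size=8))
```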

From AI to neuroscience

Up until this point, I have discussed the various ways that neuroscience has influenced AI systems. However, AI is also a very useful tool for deciphering the complex mechanisms that underlie the electrical storm happening within our own neural networks. When it comes down to it, computer scientists and neuroscientists are asking essentially the same questions and attempting to understand similar systems. Both analyze individual components, the calculations they perform, and how these components and calculations fit into the system as a whole. A neuron is to the brain as a transistor is to a computer chip. Therefore, various AI systems inevitably provide a quantitative window into the functioning of our own neurological systems.

For example, Khaligh-Razavi & Kriegeskorte (2014) compared 37 image-processing model architectures, including convolutional networks, to our biological brain's representations of vision. They found that the model that most closely resembled our biological visual system also performed the best. Even though this system was far from performing as well as a human, it provides a glimpse into how our underlying biological mechanisms for vision work. Since neuroscientists and psychologists still have only a vague understanding of how these biological neural systems work, an AI representation of these systems that can provide concrete, quantifiable data, such as the one just mentioned, is of great use.
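The kind of comparison used in that line of work is representational similarity analysis: build a matrix of how differently each pair of images is represented, once from the model's activations and once from brain recordings, then correlate the two matrices. A toy sketch with random stand-in data (the real study used measured brain responses, not random numbers):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_images = 20
model_features = rng.normal(size=(n_images, 512))  # CNN activations per image (stand-in)
brain_patterns = rng.normal(size=(n_images, 100))  # recorded responses per image (stand-in)

# Representational dissimilarity matrices: how differently each pair of
# images is represented, measured as correlation distance between patterns.
model_rdm = pdist(model_features, metric="correlation")
brain_rdm = pdist(brain_patterns, metric="correlation")

# A model whose RDM rank-correlates highly with the brain's RDM
# represents the images in a more brain-like way.
rho, _ = spearmanr(model_rdm, brain_rdm)
print(f"model-brain representational similarity: {rho:.3f}")
```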

But don’t take my word for it, ask Nikolaus Kriegeskorte, Professor of Psychology and Director of Cognitive Imaging at Columbia University in New York City. He says, “Current neural network models can perform this kind of task using only computations that biological neurons can perform. Moreover, these neural network models can predict to some extent how a neuron deep in the brain will respond to any image.” This further illustrates how neuroscience and AI technology are perpetuating each other’s advancement and will only continue to do so for years to come.

Conclusion

The advancement of AI and neuroscience is happening at a staggering rate. Likewise, the rate at which AI systems learn is growing exponentially. If we do not approach this with extreme caution, AI advancement could easily get out of hand and unintended consequences may occur. Nonetheless, it is incredible to see how far both fields have come and how strongly they influence each other.

References

Cognitive Neuroscience Society. (2018, March 25). Dissecting artificial intelligence to better understand the human brain. Retrieved from https://www.sciencedaily.com/releases/2018/03/180325115759.htm

Glowatz, E. (2017, July 27). Where AI Meets Neuroscience: How The Human Brain Will Make Robots Smarter. Retrieved from https://www.starmind.com/blog/2017/7/27/where-ai-meets-neuroscience-how-the-human-brain-will-make-robots-smarter

Gregor, K., Danihelka, I., & Graves, A. (2015). DRAW: A Recurrent Neural Network For Image Generation. Google DeepMind. Retrieved from https://arxiv.org/pdf/1502.04623.pdf.

Hassabis, D., Kumaran, D., & Summerfield, C. (2017). Neuroscience-Inspired Artificial Intelligence. Neuron, 95.

Khaligh-Razavi, S., & Kriegeskorte, N. (2014). Deep Supervised, but Not Unsupervised, Models May Explain IT Cortical Representation. PLoS Computational Biology, 10(11). doi:10.1371/journal.pcbi.1003915

Zhou, B., Lapedriza, A., Khosla, A., & Oliva, A. (2017). Places: A 10 Million Image Database for Scene Recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence. Retrieved from https://ieeexplore.ieee.org/document/79683

Originally published at www.datadriveninvestor.com on October 11, 2018.
