Brain Mysteries Deciphered through AI’s Self-Learning Models

Our brains develop an intuitive understanding of the world, allowing us to make sense of the sensory information we receive. This process resembles self-supervised learning, a machine-learning technique originally developed for computer vision in which models learn from raw data without human-provided labels. To shed light on the parallel, a team of researchers from MIT’s K. Lisa Yang Integrative Computational Neuroscience (ICoN) Center conducted a series of studies.

The researchers set out to explore the parallels between self-supervised computational models and the functioning of the mammalian brain. Their results suggest that such models can learn enough about the structure of the physical world to make remarkably accurate predictions about real-world events. The implications extend beyond artificial intelligence to a deeper understanding of the brain itself.

Aran Nayebi, a postdoc at the ICoN Center and lead author of one of the studies, and his fellow researchers trained models to anticipate the future state of their surroundings using a large dataset of everyday scenarios captured in naturalistic videos. This approach departed from the traditional method of training neural network models on isolated cognitive tasks; instead, it used self-supervised learning to capture a broader spectrum of cognitive functions.
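The general recipe can be sketched in a few lines. The following is a minimal illustration of self-supervised next-frame prediction, not the authors’ architecture: the `NextFramePredictor` network, its layer sizes, and the random tensors standing in for naturalistic video are all placeholders.

```python
# Minimal sketch of self-supervised next-frame prediction (illustrative only,
# not the authors' model). Random tensors stand in for naturalistic video.
import torch
import torch.nn as nn

class NextFramePredictor(nn.Module):
    """Tiny conv net: takes K past grayscale frames, predicts the next one."""
    def __init__(self, k_frames=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(k_frames, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, frames):               # frames: (B, K, H, W)
        return self.net(frames)              # (B, 1, H, W) predicted next frame

model = NextFramePredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):                      # toy training loop
    clip = torch.rand(8, 5, 64, 64)          # 8 clips of 5 frames each
    past, future = clip[:, :4], clip[:, 4:]  # labels come from the video itself
    loss = nn.functional.mse_loss(model(past), future)
    opt.zero_grad(); loss.backward(); opt.step()
```

The key point is that no human labels are needed: the next frame itself serves as the training target.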

Once the model was adequately trained, it was tested on a task known as “Mental-Pong,” reminiscent of the classic video game Pong but with a twist: the ball disappeared just before reaching the paddle, requiring the player to estimate its trajectory. The model predicted the ball’s path with striking accuracy, comparable to the way neurons in the mammalian brain simulate such trajectories, a cognitive phenomenon known as “mental simulation.”
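To make the task concrete, here is a sketch of the Mental-Pong logic under assumed dynamics (constant velocity with elastic wall bounces). It is not the authors’ evaluation code, but it shows what must be simulated internally once the ball is occluded.

```python
# Sketch of the Mental-Pong task logic (assumed dynamics, not the study's code):
# after the ball is occluded, its trajectory must be simulated internally.
import numpy as np

def simulate_occluded_ball(pos, vel, steps, y_bounds=(0.0, 1.0)):
    """Constant-velocity rollout with elastic bounces off the top/bottom walls."""
    pos, vel = np.array(pos, float), np.array(vel, float)
    path = []
    for _ in range(steps):
        pos = pos + vel
        if pos[1] < y_bounds[0] or pos[1] > y_bounds[1]:
            vel[1] = -vel[1]                          # bounce off the wall
            pos[1] = np.clip(pos[1], *y_bounds)
        path.append(pos.copy())
    return np.array(path)

# Where will the ball cross the paddle line (x = 1.0) after it disappears?
path = simulate_occluded_ball(pos=[0.2, 0.5], vel=[0.05, 0.03], steps=20)
crossing = path[np.argmax(path[:, 0] >= 1.0)]
print("estimated interception height:", round(crossing[1], 3))
```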

Furthermore, the neural activation patterns observed within the model bore a striking resemblance to those recorded in the brains of animals performing the same task, specifically within a region known as the dorsomedial frontal cortex. No other computational model had matched the biological data so closely, marking a significant advance.
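Model-brain alignment of this kind is typically quantified with regression-based neural predictivity. The sketch below uses synthetic data and scikit-learn’s ridge regression rather than the study’s exact metric, but it shows the basic idea: predict each neuron’s responses from the model’s features and score the fit on held-out stimuli.

```python
# Regression-based neural predictivity on synthetic data (illustrative only).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
model_feats = rng.normal(size=(500, 128))      # 500 stimuli x 128 model units
weights = rng.normal(size=(128, 40))
neural = model_feats @ weights + rng.normal(scale=0.5, size=(500, 40))  # 40 "neurons"

Xtr, Xte, ytr, yte = train_test_split(model_feats, neural, random_state=0)
pred = Ridge(alpha=1.0).fit(Xtr, ytr).predict(Xte)

# Per-neuron correlation between predicted and held-out responses
r = [np.corrcoef(pred[:, i], yte[:, i])[0, 1] for i in range(yte.shape[1])]
print("median neural predictivity:", round(float(np.median(r)), 3))
```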

In a parallel study, led by MIT graduate student Mikail Khona, former senior research associate Rylan Schaeffer, and Professor Ila Fiete, the researchers turned to specialized neurons known as grid cells. These neurons, located in the entorhinal cortex, play a vital role in animal navigation, working alongside place cells in the hippocampus.

Place cells fire when an animal occupies a specific location, while grid cells fire specifically at the vertices of a triangular lattice. These grid cells form overlapping lattices of different sizes, enabling them to encode a vast number of positions with a relatively small number of cells.
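The lattice structure is easy to picture in code. A standard textbook construction, not the study’s model, builds a grid-cell rate map by summing three plane waves oriented 60 degrees apart; varying the period produces the overlapping modules described above.

```python
# Illustrative grid-cell rate map: three plane waves 60 degrees apart produce
# a triangular lattice (a standard construction, not the study's model).
import numpy as np

def grid_rate_map(size=64, period=10.0, phase=(0.0, 0.0)):
    xs, ys = np.meshgrid(np.arange(size), np.arange(size))
    rate = np.zeros((size, size))
    for theta in (0.0, np.pi / 3, 2 * np.pi / 3):    # three orientations
        k = (4 * np.pi / (np.sqrt(3) * period)) * np.array([np.cos(theta), np.sin(theta)])
        rate += np.cos(k[0] * (xs - phase[0]) + k[1] * (ys - phase[1]))
    return np.maximum(rate, 0)                        # rectify to firing rates

# Overlapping modules with different periods jointly pin down many positions
maps = [grid_rate_map(period=p) for p in (8.0, 12.0, 18.0)]
```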

Previous research had trained supervised neural networks to mimic grid cell function, focusing on path integration, where the animal’s next location is predicted based on its starting point and velocity. However, these models relied on having access to constant, absolute spatial information, which does not align with the reality faced by animals in the wild.
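In code, that earlier supervised setup looks roughly like the sketch below (an assumed form, not any prior paper’s exact model): a recurrent network integrates velocity inputs and is trained against ground-truth absolute positions, precisely the signal animals in the wild would lack.

```python
# Sketch of supervised path integration (assumed form): an RNN is trained to
# map velocity sequences to ground-truth absolute positions.
import torch
import torch.nn as nn

class PathIntegratorRNN(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.rnn = nn.RNN(input_size=2, hidden_size=hidden, batch_first=True)
        self.readout = nn.Linear(hidden, 2)      # decode (x, y) position

    def forward(self, vel):                      # vel: (B, T, 2)
        h, _ = self.rnn(vel)
        return self.readout(h)                   # (B, T, 2) predicted positions

model = PathIntegratorRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

vel = torch.randn(32, 50, 2) * 0.1               # random velocity sequences
pos = torch.cumsum(vel, dim=1)                   # ground-truth integrated path
loss = nn.functional.mse_loss(model(vel), pos)   # supervision on absolute position
opt.zero_grad(); loss.backward(); opt.step()
```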

Inspired by the unique coding properties of the multiperiodic grid-cell code, the MIT team took a novel approach. They trained a contrastive self-supervised model to both perform path integration and represent space efficiently. The training dataset consisted of sequences of velocity inputs, and the model learned to differentiate positions based on their similarity or dissimilarity.
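A minimal version of such a contrastive objective is sketched below. It is not the authors’ loss; the `info_nce` function and batch construction are assumptions. The idea is that embeddings of the same position, for example one reached via two different velocity sequences, are treated as positives and pulled together, while embeddings of other positions in the batch are pushed apart.

```python
# Minimal InfoNCE-style contrastive loss (an assumed sketch, not the study's).
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """z1[i] and z2[i] embed the same position (a positive pair); all other
    pairings in the batch act as negatives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.T / temperature          # (B, B) similarity matrix
    targets = torch.arange(z1.size(0))        # matching index = positive pair
    return F.cross_entropy(logits, targets)

# Toy usage: random stand-ins for two encodings of the same batch of positions
z1, z2 = torch.randn(16, 32), torch.randn(16, 32)
loss = info_nce(z1, z2)
```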

After training, the evaluation revealed striking results: the activation patterns of the model’s units formed lattices with multiple different periods, closely mirroring the patterns observed in grid cells within the brain. This work created a bridge between the mathematical properties of grid-cell codes and the process of path integration, shedding light on why the brain possesses grid cells in the first place.

What excites the researchers most about these findings is the synthesis of mathematical analysis and synthetic modeling. It not only highlights the properties necessary for grid cell functioning but also offers a tantalizing glimpse into the inner workings of the brain, bringing us closer to the development of artificial systems that can truly emulate natural intelligence.

Source: NeuroScienceNews

Author: Neurologica