In a groundbreaking study published in the journal Nature Machine Intelligence, researchers at the University of Cambridge applied physical constraints to an artificial intelligence (AI) system, mimicking the developmental pressures that shape human and animal brains. Built from computational nodes rather than real neurons, the system developed brain-like features once those constraints were imposed.
Led by Jascha Achterberg and Danyal Akarca of the Medical Research Council Cognition and Brain Sciences Unit (MRC CBSU) at the University of Cambridge, the team built a simplified artificial model of the brain, imposed physical constraints on it, and only then set it a task. This approach could lead to more efficient AI systems and to a better understanding of the human brain.
Rather than real neurons, the researchers used computational nodes, which play a similar role: each takes inputs, transforms them, and produces an output. Like neurons, a node can connect to many others and exchange information with them. The physical constraint consisted of assigning each node a specific location in a virtual space, so that the farther apart two nodes sit, the harder it is for them to communicate.
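The article does not spell out how the constraint is enforced, but a common way to implement this kind of spatial restriction in an artificial network is to give each node fixed coordinates and make long-distance connections more expensive than short, local ones. The sketch below is only an illustration of that idea, not the authors' code; the node count, coordinate ranges, and the exact form of the wiring penalty are assumptions.

```python
import numpy as np

# Illustrative sketch only (not the study's implementation): nodes get fixed
# locations in a virtual 3D space, and a wiring penalty makes long-distance
# connections more costly than short, local ones.

rng = np.random.default_rng(0)

n_nodes = 50
positions = rng.uniform(0.0, 1.0, size=(n_nodes, 3))      # each node's location in space

# Pairwise Euclidean distances between node locations
distances = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)

# Recurrent weights connecting the nodes (randomly initialised for the sketch)
weights = rng.normal(0.0, 0.1, size=(n_nodes, n_nodes))

# Distance-weighted wiring cost: if training minimises task error plus this
# term, the network is pushed toward short, local connections, much as real
# neurons pay a higher price for long-range wiring.
wiring_cost = float(np.sum(distances * np.abs(weights)))
print(f"wiring cost: {wiring_cost:.2f}")
```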
The researchers set the system a maze-navigation task, similar to those given to animals such as rats and monkeys in brain studies. The system started out unable to solve the maze, but it received feedback on its attempts and gradually improved with repetition. The spatial constraint mirrored the difficulty real neurons face in forming connections across long distances in the brain.
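The article describes the training only loosely, as feedback over repeated attempts. Purely as a toy illustration of that kind of trial-and-error learning, and not the paper's actual training procedure, the snippet below uses tabular Q-learning on a one-dimensional "maze": the agent starts knowing nothing, is rewarded only at the goal, and gradually learns to head in the right direction.

```python
import numpy as np

# Toy illustration of learning from feedback (not the study's method): a tiny
# corridor "maze" with states 0..5, where the agent is rewarded only on
# reaching the goal and improves over repeated attempts via tabular Q-learning.

rng = np.random.default_rng(1)

n_states, n_actions, goal = 6, 2, 5            # action 0 = step left, 1 = step right
q = np.zeros((n_states, n_actions))            # value estimates, all zero at first
alpha, gamma, epsilon = 0.5, 0.9, 0.1          # learning rate, discount, exploration rate

for attempt in range(200):                     # repeated attempts at the task
    state = 0
    while state != goal:
        # Explore occasionally; otherwise act greedily (ties broken at random)
        if rng.random() < epsilon:
            action = int(rng.integers(n_actions))
        else:
            action = int(rng.choice(np.flatnonzero(q[state] == q[state].max())))
        next_state = max(0, state - 1) if action == 0 else min(goal, state + 1)
        reward = 1.0 if next_state == goal else 0.0      # feedback only at the goal
        # Nudge the estimate toward the observed feedback
        q[state, action] += alpha * (reward + gamma * q[next_state].max() - q[state, action])
        state = next_state

print(q.round(2))   # after training, "step right" scores highest in every state
```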
One implication of the work is the prospect of more efficient AI models. Many existing AI systems, such as OpenAI's Generative Pre-trained Transformer (GPT) models, demand substantial computing power and electricity. The "spatially embedded" system developed in this study is small, but the researchers suggest the same principles could be used to build larger, more efficient AI systems.
The technology may pave the way for brain-inspired sparse models that operate with fewer parameters and connections, offering more energy-efficient solutions. The researchers also expect the approach to yield insights into the workings of the actual human brain: by studying in artificial models phenomena that are hard to investigate in real brains, they hope to sharpen our understanding of the brain's organising principles. The team is now exploring two directions: making the model even more brain-like while keeping it simple, and applying its insights to larger-scale AI systems for more energy-efficient processing.
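As a rough picture of what the "sparse models with fewer parameters and connections" mentioned above can mean in practice, one simple and here purely hypothetical approach is to prune connections whose weights are close to zero, so that far fewer numbers need to be stored and multiplied.

```python
import numpy as np

# Hypothetical illustration (not from the study): prune near-zero connections
# from a dense weight matrix to obtain a sparse one with far fewer parameters.

rng = np.random.default_rng(2)
weights = rng.normal(0.0, 0.1, size=(100, 100))            # dense layer: 10,000 weights

threshold = 0.1
sparse_weights = np.where(np.abs(weights) < threshold, 0.0, weights)

kept = int(np.count_nonzero(sparse_weights))
print(f"connections kept: {kept} / {weights.size} ({100 * kept / weights.size:.0f}%)")
```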