‘Brain-like’ AI uses Chinese chips to run 100 times faster on ultra-long tasks

Arina Makeeva

Researchers at the Chinese Academy of Sciences’ Institute of Automation in Beijing have unveiled what they are touting as the world’s first “brain-like” large language model. The system, named SpikingBrain 1.0, is designed to consume far less energy than conventional models while delivering competitive performance, and it runs entirely without Nvidia chips.

SpikingBrain 1.0 is built to mimic how the human brain processes information. Conventional AI models keep their entire neural network active for every input; SpikingBrain 1.0 instead engages only the neurons a given input actually needs. This selective activation cuts power consumption and speeds up responses, letting the model handle tasks far more efficiently.
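As an illustration, here is a minimal Python sketch of the idea, with hypothetical names and thresholds rather than anything from the SpikingBrain paper: a layer that computes only the neurons whose input drive clears a threshold and leaves the rest silent.

```python
import numpy as np

# Toy sketch of selective activation (hypothetical code, not from the
# SpikingBrain paper): rather than running every neuron on every input,
# the layer computes only the units whose input drive clears a threshold.

rng = np.random.default_rng(0)

def selective_layer(x, W, threshold=1.5):
    """Compute only the neurons whose input drive exceeds `threshold`;
    everything else stays silent (zero) and costs no downstream work."""
    pre = W @ x                        # input drive to each neuron
    active = np.abs(pre) > threshold   # mask of neurons worth engaging
    out = np.zeros_like(pre)
    out[active] = np.maximum(pre[active], 0.0)  # ReLU on the active subset
    return out, active.mean()

x = rng.standard_normal(64)            # one input vector
W = rng.standard_normal((256, 64)) * 0.1

out, frac = selective_layer(x, W)
print(f"Neurons engaged: {frac:.1%} of 256")  # only a small fraction fires
```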

One of the most striking claims is the model’s data efficiency: SpikingBrain 1.0 reportedly learns from less than 2% of the training data that mainstream AI models require. The gains are most pronounced on very long texts, where the model is said to run up to 100 times faster than conventional counterparts, according to a technical paper the team posted on arXiv, an open-access research repository; the paper has not been peer reviewed.

Moreover, SpikingBrain 1.0 operates entirely within China’s domestic AI ecosystem, running on the MetaX chip platform rather than the Nvidia GPUs that dominate the field. That independence carries strategic weight as the United States tightens export controls on advanced AI chips.

Li Guoqi, a lead researcher at the Institute of Automation, said the model opens a new direction for AI development, one optimized specifically for Chinese chips. He pointed to applications that involve very long data sequences, such as legal documents, medical records, and scientific simulations, underscoring the system’s relevance across multiple sectors.

To encourage further exploration and use of the technology, Li’s team has open-sourced a smaller version of the model and made a larger version available online for public testing. On its demo site, the system introduces itself: “Hello! I’m SpikingBrain 1.0, or ‘Shunxi’, a brain-inspired AI model. I combine the way the human brain processes information with a spiking computation method, aiming to deliver powerful, reliable, and energy-efficient AI services entirely built on Chinese technology.”

In contrast to today’s prevailing AI models, which demand immense computing resources, SpikingBrain 1.0 offers an energy-efficient alternative for both training and inference. Companies typically rely on vast data centers filled with high-performance chips, incurring heavy electricity and cooling costs. Even after the initial training phase, these models keep consuming substantial resources, particularly on tasks with long inputs or complicated responses.

SpikingBrain 1.0 diverges from this approach. Drawing on the selective processing of real neurons, it does not attempt to process every piece of information at once. Instead, it responds only to relevant stimuli, consuming less power while still handling complex tasks, much as the human brain does.

At the model’s core is a technique called “spiking computation,” which mirrors the brain’s habit of firing rapid bursts of signals only when stimulated. This event-driven mechanism avoids unnecessary activation, letting SpikingBrain 1.0 stay quiet during idle periods and further reducing its energy use.
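The sketch below shows a classic leaky integrate-and-fire neuron, a textbook model of spiking computation rather than SpikingBrain’s published implementation: the neuron integrates its input quietly and fires only when stimulated past a threshold.

```python
# Minimal leaky integrate-and-fire (LIF) neuron, a textbook building block of
# spiking computation (an illustrative sketch, not SpikingBrain's own code).
# The neuron integrates input silently and emits a spike only when its
# membrane potential crosses a threshold: the event-driven behavior above.

def lif_neuron(currents, threshold=1.0, leak=0.9):
    """Simulate one LIF neuron over a sequence of input currents.
    Returns the spike train: 1 = spike emitted, 0 = silent."""
    v = 0.0                      # membrane potential
    spikes = []
    for i_t in currents:
        v = leak * v + i_t       # leaky integration of the incoming signal
        if v >= threshold:       # stimulated past threshold: fire a spike
            spikes.append(1)
            v = 0.0              # reset after firing
        else:
            spikes.append(0)     # no event, so no downstream computation
    return spikes

# A quiet input stream produces no spikes (and, in an event-driven system,
# no work); a brief burst of stimulation yields a burst of output spikes.
quiet, burst = [0.05] * 10, [0.8] * 5
print(lif_neuron(quiet + burst + quiet))
```

Running the snippet shows the neuron staying silent through the quiet stretches and emitting spikes only during the burst, which is the same stay-quiet-until-stimulated economy the researchers describe.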

To test the approach, the development team built two versions of SpikingBrain 1.0: one with 7 billion parameters and a larger one with 76 billion. The research documentation is still being finalized, but the project marks a notable advance in AI technology with potentially far-reaching implications.
