Google's latest innovation, HOPE, represents a significant leap forward in the pursuit of continual machine learning. Unveiled at NeurIPS 2025, this experimental AI model tackles a major challenge in the field: enabling AI systems to continuously learn and adapt without losing previously acquired knowledge, a failure mode known as "catastrophic forgetting."
HOPE is built upon a novel "nested learning" paradigm developed by Google researchers. Rather than treating training as one monolithic process, this approach views a single AI model as a system of interconnected, multi-level optimization problems that are trained together, with each level updating at its own rate. Google believes this framework offers a robust foundation for bridging the gap between the static, "forgetting" nature of current Large Language Models (LLMs) and the remarkable continual-learning abilities of the human brain.
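To make the idea concrete, here is a minimal, hypothetical sketch of nested learning as two coupled optimization levels that share one stream of data but update at different frequencies. The class names, update rule, and rates are illustrative assumptions, not Google's implementation:

```python
# Toy sketch (assumed, not Google's code): two nested optimization levels.
# The "fast" level adapts every step; the "slow" level updates less often,
# so it accumulates longer-lived knowledge while the fast one tracks the present.
import random

class Level:
    def __init__(self, lr, period):
        self.w = 0.0          # this level's single parameter
        self.lr = lr          # learning rate for this level
        self.period = period  # update once every `period` steps

def run(steps=2000, target=3.0):
    fast = Level(lr=0.1, period=1)    # inner level: updates every step
    slow = Level(lr=0.01, period=10)  # outer level: updates every 10 steps
    for t in range(1, steps + 1):
        x = random.uniform(-1.0, 1.0)
        # the prediction combines both levels' parameters
        y_hat = (fast.w + slow.w) * x
        err = y_hat - target * x
        for level in (fast, slow):
            if t % level.period == 0:
                # each level runs its own gradient step on the shared error
                level.w -= level.lr * err * x
    return fast.w + slow.w

print(run())  # the combined parameter approaches the target coefficient 3.0
```

In this toy version the levels differ only in learning rate and update frequency; the point is simply that one model can contain several optimization problems running on different clocks.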
Current LLMs, while adept at generating text, code, and other creative content, struggle with continual learning. Unlike humans, they cannot seamlessly integrate new information without compromising existing knowledge. That limitation hinders their ability to adapt to changing environments and learn from experience the way people do. HOPE seeks to address this fundamental challenge.
The HOPE architecture is a self-modifying recurrent model built on the principles of nested learning. It incorporates a continuum memory system (CMS): a bank of memory modules that update at different frequencies, mirroring the layered memory systems found in the human brain. This design enables the model to retain and adapt knowledge more effectively over time, even as new data arrives. HOPE is also a variant of the Titans architecture, which prioritizes memories based on how surprising they are. The model can take advantage of unbounded levels of in-context learning, and its CMS blocks allow it to scale to larger context windows.
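That description suggests a simple mental model: several memory stores, each refreshed on its own schedule, with surprising inputs written more aggressively. The toy sketch below illustrates that reading of the design; the module periods, decay factors, and surprise gate are assumptions for illustration, not the actual HOPE or Titans equations:

```python
# Toy continuum-memory sketch, assuming the article's high-level description:
# a bank of memory modules refreshed at different frequencies, plus a
# Titans-style "surprise" gate that writes more strongly on large prediction error.
import numpy as np

class MemoryModule:
    def __init__(self, dim, period, decay):
        self.state = np.zeros(dim)
        self.period = period  # write once every `period` tokens
        self.decay = decay    # how much old content is retained on a write

    def maybe_write(self, step, value, surprise):
        if step % self.period == 0:
            # surprising inputs overwrite more of the existing state
            gate = min(1.0, surprise)
            self.state = self.decay * self.state + gate * value

class ContinuumMemory:
    """Fast modules track recent context; slow ones keep long-range traces."""
    def __init__(self, dim):
        self.modules = [
            MemoryModule(dim, period=1,  decay=0.50),  # fast, short-lived
            MemoryModule(dim, period=8,  decay=0.90),  # medium
            MemoryModule(dim, period=64, decay=0.99),  # slow, long-lived
        ]

    def step(self, step, token_vec, predicted_vec):
        # surprise = how far the input is from what the model expected
        surprise = float(np.linalg.norm(token_vec - predicted_vec))
        for m in self.modules:
            m.maybe_write(step, token_vec, surprise)
        # read-out: concatenate all timescales for downstream use
        return np.concatenate([m.state for m in self.modules])

rng = np.random.default_rng(0)
cms = ContinuumMemory(dim=4)
for t in range(1, 129):
    x = rng.normal(size=4)
    readout = cms.step(t, x, predicted_vec=np.zeros(4))
print(readout.shape)  # (12,): three 4-dimensional timescales
```

The design intuition is that the slow modules change too rarely to be disrupted by any single batch of new data, which is one way a system can keep learning without catastrophically forgetting.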
According to Google's results, HOPE outperforms existing state-of-the-art models at language modeling and manages long-context memory more effectively. Across a diverse set of language modeling and common-sense reasoning tasks, it achieved lower perplexity and higher accuracy than modern LLMs.
The development of HOPE is seen as a crucial step towards Artificial General Intelligence (AGI): AI that can match human cognitive abilities across a broad range of tasks. Andrej Karpathy, a respected AI/ML research scientist and former Google DeepMind employee, noted last month that AGI is still about a decade away, primarily because AI systems lack continual learning abilities. He stated that current AI models "don't have continual learning," and that "you can't just tell them something and they'll remember it." Google aims to close this gap with the Nested Learning framework and the HOPE model.
The researchers emphasize that Nested Learning could serve as the foundation for a new generation of AI systems that learn continuously, remember deeply, and adapt independently. By taking inspiration from the brain's neuroplasticity, HOPE represents a step toward more dynamic, self-improving machines. The findings were published in a paper titled "Nested Learning: The Illusion of Deep Learning Architectures" at NeurIPS 2025.











