Google has announced an experimental AI model named HOPE, marking a significant step toward continual and adaptive learning in machines. HOPE, which stands for "Hierarchical Objective-aware Parameter Evolution," tackles a core challenge in AI: enabling systems to learn continuously without losing previously acquired knowledge, a failure mode known as "catastrophic forgetting."
The HOPE model is a proof of concept for an approach called "Nested Learning." Whereas traditional AI models are trained as one monolithic optimization process, Nested Learning treats a model as a collection of interconnected, multi-level learning problems that are optimized simultaneously. This framework allows the model to retain and build on what it learns rather than discarding earlier knowledge. Google's researchers presented their findings in a paper titled “Nested Learning: The Illusion of Deep Learning Architectures” at the NeurIPS 2025 conference.
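To make the multi-level framing concrete, the minimal sketch below shows two optimization levels updating on different time scales: a fast level that adapts every step and a slow level that consolidates periodically. The parameter names, learning rates, and toy quadratic objective are illustrative assumptions, not details from Google's paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "levels" of parameters: a fast inner level and a slow outer level.
fast_params = rng.normal(size=4)   # updated every step
slow_params = rng.normal(size=4)   # updated every SLOW_PERIOD steps

SLOW_PERIOD = 10                   # the slow level sees a coarser time scale
FAST_LR, SLOW_LR = 0.1, 0.05

def loss_grad(params, x):
    """Gradient of a toy quadratic loss ||params - x||^2 (stand-in objective)."""
    return 2.0 * (params - x)

for step in range(100):
    x = rng.normal(size=4)                           # incoming data point
    # Inner problem: adapt the fast level to the current data.
    fast_params -= FAST_LR * loss_grad(fast_params, x)
    # Outer problem: periodically pull the slow level toward the fast one,
    # consolidating recent adaptation into longer-lived parameters.
    if step % SLOW_PERIOD == 0:
        slow_params -= SLOW_LR * (slow_params - fast_params)
```

Because the slow level only absorbs a fraction of the fast level's state, and only occasionally, earlier knowledge decays gradually instead of being overwritten at every step.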
At its core, Nested Learning embraces a layered architecture. As the AI encounters new experiences, it creates additional "sub-models," linking prior memories to new insights. Rather than discarding old skills to make room for new ones, the model reorganizes its knowledge hierarchically. Each layer reinforces the next, enabling both flexibility and rapid adaptation. This multi-level approach lets the system preserve and reuse experience gathered in complex, changing environments.
According to Google, the Nested Learning framework addresses key limitations of modern large language models (LLMs), especially their inability to learn continually, a capability widely viewed as essential for developing artificial general intelligence (AGI), or human-like intelligence. Nested Learning mirrors, in machine learning models, the brain's uniform and reusable structure as well as its updates across multiple time scales.
HOPE leverages multi-level, modular memory systems to emulate human-like adaptive learning. It employs a continuum memory system (CMS), in which multiple memory modules update at different speeds, mirroring the layered memory systems of the human brain. This design enables it to remember and adapt more effectively over time, even as new data arrives. A key innovation is HOPE's self-referential process, which lets the model adapt and optimize its own memory management. HOPE supports an unlimited number of nested learning levels, allowing for deeper and more resilient knowledge retention, unlike previous recurrent models that were limited to just two levels of memory updates.
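The sketch below illustrates the CMS idea under stated assumptions: a chain of memory modules, each writing on its own schedule, so fast modules capture recent context while slow ones retain longer-term structure. The `MemoryModule` class, its exponential-moving-average update rule, and the chosen periods are hypothetical stand-ins, not HOPE's actual design.

```python
import numpy as np

class MemoryModule:
    """One memory level that updates only on its own time scale."""

    def __init__(self, dim, period, lr):
        self.state = np.zeros(dim)   # the module's memory content
        self.period = period         # write every `period` steps
        self.lr = lr                 # how strongly each write changes memory

    def maybe_update(self, step, signal):
        # Slow modules ignore most steps, so older content persists longer.
        if step % self.period == 0:
            self.state += self.lr * (signal - self.state)
        return self.state

DIM = 8
# A spectrum of update frequencies, from every step to every 64 steps.
modules = [MemoryModule(DIM, period=p, lr=0.5) for p in (1, 4, 16, 64)]

rng = np.random.default_rng(1)
for step in range(256):
    signal = rng.normal(size=DIM)              # stand-in for a new input
    for m in modules:
        signal = m.maybe_update(step, signal)  # each level feeds the next
```

Chaining the modules means each slower level summarizes what the faster levels have already absorbed, a rough analogue of the layered consolidation the article describes.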
Early results from Google's experiments with HOPE suggest that Nested Learning could provide a robust foundation for AI systems that evolve in a more human-like manner, continuously refining their understanding instead of restarting with each new dataset. HOPE has reportedly achieved lower perplexity (a measure of predictive uncertainty) and higher accuracy on a range of standard language and reasoning benchmarks compared with current leading LLMs.
The potential applications of HOPE and Nested Learning span various fields, including robotics, medicine, and finance, where context changes rapidly. By enabling AI to function more like lifelong learners, this technology could revolutionize how machines adapt to complex and dynamic environments.
While HOPE is currently in the experimental stage, Google's research team sees it as a significant step toward AGI, offering a framework that mimics human-like learning and long-term memory retention.