Husniya Salwa et al.

Continual learning is a fundamental challenge in artificial intelligence (AI) that aims to enable models to learn from a continuous stream of data while retaining previously acquired knowledge. Unlike traditional machine learning, which operates in static environments, continual learning requires algorithms to adapt incrementally to new tasks and evolving data distributions without catastrophic forgetting. This capability is crucial for deploying AI systems in dynamic real-world applications, including robotics, healthcare, natural language processing, and cybersecurity. This survey provides a comprehensive overview of continual learning, covering its core principles, learning paradigms, and major approaches. We discuss key strategies for mitigating catastrophic forgetting, including replay-based methods, regularization techniques, and dynamic architectures. Additionally, we explore diverse application domains where continual learning plays a central role, emphasizing its significance in lifelong learning scenarios. Despite recent advancements, several challenges remain, such as scalability constraints, the lack of standardized benchmarks, and the need for biologically inspired learning mechanisms. We outline open research directions, including memory-efficient learning, adaptive self-supervised techniques, and fairness-aware continual learning. By addressing these challenges, the AI community can develop more robust and flexible models capable of learning continuously over time. Through this survey, we aim to provide a structured foundation for researchers and practitioners interested in continual learning, and to highlight its potential to drive the next generation of adaptive AI systems.
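To make the replay-based strategies mentioned above concrete, here is a minimal, illustrative sketch (not a method from this survey): a fixed-size memory filled by reservoir sampling, whose stored examples are mixed into each new training batch so the model keeps rehearsing old data. All names here (`ReplayBuffer`, the toy integer stream standing in for training examples) are assumptions made for the sketch.

```python
import random

class ReplayBuffer:
    """Fixed-size memory of past examples, filled by reservoir sampling
    so every example seen so far has an equal chance of being retained."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.memory = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, example):
        self.seen += 1
        if len(self.memory) < self.capacity:
            self.memory.append(example)
        else:
            # Replace a stored example with probability capacity / seen.
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.memory[j] = example

    def sample(self, k):
        # Draw up to k stored examples to mix into the current batch.
        return self.rng.sample(self.memory, min(k, len(self.memory)))

# Toy stream: integers 0..999 stand in for earlier-task examples.
buffer = ReplayBuffer(capacity=100)
for x in range(1000):
    buffer.add(x)

# Interleave replayed examples with the current task's mini-batch,
# so gradient updates see old and new data together.
batch = list(range(1000, 1032))
mixed = batch + buffer.sample(8)
```

In a real system the buffer would hold (input, label) pairs and the mixed batch would feed an ordinary training step; the reservoir rule is one simple policy among many for deciding what to keep.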

Eveline Jessé et al.

Continual learning, also known as lifelong learning, aims to enable artificial intelligence systems to learn from a continuous stream of data while retaining previously acquired knowledge. Unlike traditional machine learning approaches that assume static datasets and offline training, continual learning faces unique challenges such as catastrophic forgetting, knowledge transfer, and scalability. Over the years, a variety of strategies have been developed, including regularization-based methods, memory replay techniques, and dynamic architectural adaptations. This survey provides a comprehensive review of continual learning, covering fundamental concepts, major learning paradigms, and state-of-the-art methodologies. We discuss the primary challenges that hinder the deployment of continual learning models in real-world applications and examine existing solutions, highlighting their strengths and limitations. Furthermore, we explore diverse application domains, including computer vision, natural language processing, and robotics, where continual learning plays a pivotal role in enabling adaptive intelligence. Despite significant progress, several open problems remain unresolved, necessitating further research. We outline future directions such as neuroscience-inspired learning mechanisms, meta-learning for rapid adaptation, scalable and efficient model architectures, and task-agnostic continual learning. Additionally, we emphasize the importance of ethical considerations, fairness, and security in developing responsible lifelong learning systems. By addressing these challenges, continual learning has the potential to revolutionize artificial intelligence, enabling autonomous systems to learn in an open-ended and evolving manner. This survey serves as a reference for researchers and practitioners interested in advancing the field and building more robust and adaptable AI models.
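The regularization-based methods this abstract refers to can be sketched, in the style of Elastic Weight Consolidation, as a quadratic penalty that discourages moving parameters the old task deemed important. This is an illustrative toy, not an implementation from the survey; the parameter values and Fisher weights below are invented for the example.

```python
def ewc_penalty(params, old_params, fisher, lam=1.0):
    """EWC-style penalty: quadratic cost for moving parameters away from
    their old-task values, weighted by each parameter's importance
    (a diagonal Fisher information estimate)."""
    return 0.5 * lam * sum(
        f * (p - p_old) ** 2
        for p, p_old, f in zip(params, old_params, fisher)
    )

# Toy setting: parameter 0 is important to the old task (Fisher 1.0),
# parameter 1 is not (Fisher 0.01).
penalty = ewc_penalty(
    params=[1.5, 3.0],       # values after training on the new task
    old_params=[1.0, 0.0],   # values saved after the old task
    fisher=[1.0, 0.01],
    lam=1.0,
)
# The important parameter dominates the penalty even though the
# unimportant one moved three times as far.
```

During new-task training this term is simply added to the task loss, so gradient descent trades off new-task accuracy against drift in the parameters that mattered for earlier tasks.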