Researchers at The Ohio State University are investigating a significant hurdle in machine learning known as “catastrophic forgetting,” in which AI systems lose information from previous tasks while learning new ones, preventing the kind of continuous learning that humans do. The team found that AI systems trained on sequences of diverse tasks retain earlier information better than those trained on similar tasks, a result that could help advance lifelong learning in AI.
Continual learning is vital as AI becomes more prevalent in society, but catastrophic forgetting poses a challenge: artificial neural networks overwrite information from previous tasks as they learn new ones. The study finds that training on diverse tasks improves recall in neural networks, a pattern that resembles human memory.
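The phenomenon is easy to reproduce in miniature. The sketch below is a hedged illustration, not the study's actual method or models: a single logistic-regression "network" is trained on a toy Task A, then on an unrelated Task B with no rehearsal of A, and its accuracy on Task A collapses. The task definitions, data sizes, and hyperparameters are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(axis, n=200):
    """Toy binary task: the label is the sign of one input coordinate.
    Task A (axis=0) and Task B (axis=1) depend on different features,
    so they are 'dissimilar' in an illustrative, not rigorous, sense."""
    X = rng.normal(size=(n, 2))
    y = (X[:, axis] > 0).astype(float)
    return X, y

def train(w, X, y, lr=0.5, steps=200):
    """Plain logistic-regression gradient descent, continuing from the
    current weights w (no replay of earlier tasks' data)."""
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))      # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)       # gradient step on log-loss
    return w

def accuracy(w, X, y):
    return (((X @ w) > 0) == (y > 0)).mean()

Xa, ya = make_task(axis=0)   # Task A: label depends on the 1st coordinate
Xb, yb = make_task(axis=1)   # Task B: label depends on the 2nd coordinate

w = np.zeros(2)
w = train(w, Xa, ya)
acc_a_before = accuracy(w, Xa, ya)   # high: the model has learned Task A

w = train(w, Xb, yb)                 # now learn Task B only
acc_a_after = accuracy(w, Xa, ya)    # drops toward chance: A is forgotten
```

Because both tasks share the same weights, fitting Task B drives the weight that encoded Task A toward zero, which is the essence of catastrophic forgetting: nothing deletes the old knowledge deliberately, it is simply overwritten by gradient descent on the new objective.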
The findings have implications for dynamic AI systems that must adapt and learn continuously as their environments change. The results suggest that memory can be optimized by teaching dissimilar tasks early, which expands the network’s capacity to absorb new information later.
Understanding the similarities between machine and human learning advances AI: intelligent machines that learn the way people do could transform AI capabilities across a wide range of applications.
The study received support from the National Science Foundation and the Army Research Office.