Humans quickly grasp new concepts and apply them to related scenarios: a child who learns what it means to "skip" can immediately understand "skip twice" or "skip backwards." But can machines do the same? In the late 1980s, philosophers Jerry Fodor and Zenon Pylyshyn argued that artificial neural networks, the driving force behind AI and machine learning, lack this capability, known as "compositional generalization."
Over the years, scientists have worked to imbue neural networks, including those powering ChatGPT, with this skill, but success has been mixed, sparking ongoing debate.
Now, researchers from New York University and Spain's Pompeu Fabra University have introduced a technique called Meta-learning for Compositionality (MLC), described in a paper published in Nature. MLC outperforms existing approaches and, in some cases, matches or exceeds human performance.
Rather than hoping the skill emerges from standard training or building it in with specialized architectures, MLC trains neural networks, such as those used in speech recognition and natural language processing, to improve their compositional generalization through explicit practice.
Brenden Lake, an NYU assistant professor, says, “For 35 years, researchers have debated whether neural networks can achieve human-like systematic generalization. We have shown, for the first time, that a generic neural network can mimic or exceed human systematic generalization.”
In MLC, the network improves its skills over a series of episodes. In each episode it receives a new word and must use it compositionally, for instance, taking "jump" and producing combinations such as "jump twice" or "jump around right twice." To evaluate MLC's effectiveness, the researchers gave human participants the same tasks as the network. MLC performed as well as, and sometimes better than, the humans. It also outperformed ChatGPT and GPT-4, both of which still find compositional generalization difficult.
“Large language models like ChatGPT still struggle with compositional generalization, though they have improved in recent years,” notes Marco Baroni from Pompeu Fabra University.
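The episodic setup is easier to see in a small sketch. The Python below is a hypothetical, simplified illustration of how such training episodes might be generated, not the authors' actual code: the pseudo-words, color-named actions, and composition rules ("twice", "after") are loosely inspired by the few-shot tasks described in the paper, and the function `make_episode` is our own naming.

```python
import random

# Illustrative MLC-style episode generator (hypothetical; the grammar,
# vocabulary, and model in the Nature paper are more elaborate).
PSEUDO_WORDS = ["dax", "wif", "lug", "zup", "blicket", "fep"]
ACTIONS = ["RED", "BLUE", "GREEN", "YELLOW", "PURPLE", "PINK"]

def make_episode(num_primitives=3, num_queries=4, rng=random):
    """Sample one meta-learning episode.

    'study' pairs ground each novel word in a single action; 'query'
    pairs demand compositional use of those same words.
    """
    words = rng.sample(PSEUDO_WORDS, num_primitives)
    actions = rng.sample(ACTIONS, num_primitives)
    mapping = dict(zip(words, actions))

    # Study examples: literal word -> action demonstrations.
    study = [(w, [mapping[w]]) for w in words]

    # Query examples: novel compositions such as "dax twice" or
    # "dax after wif", never shown during study.
    queries = []
    for _ in range(num_queries):
        if rng.random() < 0.5:
            w = rng.choice(words)
            queries.append((f"{w} twice", [mapping[w]] * 2))
        else:
            w1, w2 = rng.sample(words, 2)
            # "X after Y" means: perform Y first, then X.
            queries.append((f"{w1} after {w2}", [mapping[w2], mapping[w1]]))
    return study, queries

if __name__ == "__main__":
    study, queries = make_episode(rng=random.Random(0))
    print("study:  ", study)
    print("queries:", queries)
```

Because the word-to-action mapping is resampled in every episode, a network trained across many such episodes cannot memorize a fixed vocabulary; it has to infer each episode's mapping from the study examples and extend it compositionally to the queries, which is the kind of practice the MLC approach relies on.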
Source: NeuroScienceNews