The rapid advancement of artificial intelligence (AI) has sparked discussions about the potential consciousness of AI systems. However, it’s crucial to recognize the intricate neurobiological processes that underlie human consciousness.
Modern AI systems like ChatGPT showcase impressive capabilities, often producing human-like responses. Yet when we interact with AI, it is we who consciously perceive the generated text.
The question arises: does the language model itself perceive our prompts, or does it merely function as a clever pattern-matching machine, lacking true consciousness?
Neuroscientists Jaan Aru, Matthew Larkum, and Mac Shine offer insight into this question. They argue that, despite appearances, AI systems like ChatGPT are likely not conscious. Several factors support this view:
- AI lacks the sensory depth and embodiment characteristic of human experience.
- AI architectures lack key features of the human thalamocortical system, which is crucial for consciousness.
- The complex evolution and development that gave rise to conscious living organisms have no parallel in current AI systems.
While it’s tempting to attribute consciousness to AI, doing so underestimates the complexity of the neural mechanisms responsible for human consciousness.
Researchers have yet to reach a consensus on how consciousness arises in the human brain, but it is clear that the mechanisms involved far exceed in complexity anything present in current language models. Biological neurons, for example, differ significantly from their counterparts in artificial neural networks.
Biological neurons are dynamic, growing physical cells, whereas artificial “neurons” are static lines of code. We still have much to learn about the profound intricacies of consciousness.
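To make that contrast concrete, here is a minimal, illustrative Python sketch of what an artificial “neuron” typically amounts to: a fixed arithmetic rule applied to numbers. The function name and values are purely hypothetical, chosen only to show that nothing in such a unit grows, adapts its structure, or maintains any ongoing internal state between calls.

```python
import math

def artificial_neuron(inputs, weights, bias):
    """A single artificial 'neuron': a fixed weighted sum passed through
    a nonlinearity. The weights are frozen numbers; the function has no
    biology, no growth, and no memory between calls."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))  # sigmoid output

# The same inputs always produce the same output, deterministically.
print(artificial_neuron([0.5, -1.2, 3.0], [0.8, 0.1, -0.4], bias=0.2))
```

A biological neuron, by contrast, is a living cell that continually remodels its synapses, membranes, and chemistry; no static function of this kind captures that dynamism.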
Source: NeuroScienceNews