Even when people view AI-based assistants as mere tools, they still assign those assistants a share of responsibility for outcomes, according to a recent study.
Future AI systems might one day control autonomous vehicles on their own, and people already perceive such systems as just as responsible as humans for traffic decisions. Real-world AI assistants, however, are far from having that kind of autonomy: they merely provide information and advice to human users.
In real-life situations, when things go right or wrong, who is accountable? Is it the human user or the AI assistant?
Louis Longin, a philosopher specializing in human-AI interaction, together with Dr. Bahador Bahrami and Prof. Ophelia Deroy, conducted a study with 940 participants. Participants judged a human driver who relied on either a smart AI-powered verbal assistant, a smart AI-powered tactile assistant, or a non-AI navigation instrument. The results were intriguing.
Participants viewed the smart assistants as tools, yet still held them partly responsible for the drivers’ outcomes, and more so for positive outcomes than for negative ones. This split attribution of responsibility did not extend to the non-AI instrument.
Surprisingly, the study found no difference in perception between language-based AI assistants and those using tactile feedback.
In summary, people treat AI assistants as more than mere recommendation tools, yet still hold them to a standard well short of a human one. These findings are expected to shape the design of AI assistants and the social debate around them, prompting organizations to consider their impact on social and moral norms.
Source: Neuroscience News