AI’s Human-Like Features Impact Trust in Conversations


A new study delves into how advanced AI systems affect our trust in the individuals we interact with. The researchers report that a strong design perspective is driving the development of AI with increasingly human-like features.

As artificial intelligence (AI) technology continues to advance, conversations with AI agents are becoming more human-like. However, this can cause confusion and distrust, as it becomes increasingly difficult to distinguish between a human and a machine. In a recent study, Oskar Lindwall and Jonas Ivarsson explore the problem of trust in conversations with AI agents. They found that people often take a long time to realize they are interacting with an AI system, which can have negative consequences, particularly in situations where trust is essential, such as in a romantic relationship or in therapy.

The researchers propose that creating AI with well-functioning and eloquent voices that are still clearly synthetic could increase transparency and reduce confusion. This could help build trust in AI systems, particularly in situations where trust is crucial.

The Impact of Human-Like Voices in AI

The study by Lindwall and Ivarsson also highlights the impact of human-like voices in AI systems. They suggest that the development of AI with increasingly human-like features may be problematic, particularly when it is unclear who you are communicating with. Human-like voices can create a sense of intimacy and lead people to form impressions based on the voice alone, which can be misleading.

The researchers therefore propose developing AI with synthetic voices that are well-functioning and eloquent but still clearly synthetic. This would reduce confusion and increase transparency, making it easier to recognize when you are interacting with a computer.

The Importance of Joint Meaning-Making in Conversations

Finally, the study by Lindwall and Ivarsson emphasizes the importance of joint meaning-making in conversations. Communication involves not only deception but also relationship-building and joint meaning-making. The uncertainty of whether one is talking to a human or a computer can affect this aspect of communication.

While some situations, such as cognitive-behavioral therapy, may not be impacted by the use of AI, other forms of therapy that require more human connection may be negatively affected. This highlights the need for careful consideration of when and how AI is used in different contexts, taking into account the potential impact on joint meaning-making and relationship-building.

The study by Lindwall and Ivarsson highlights the need for greater transparency and clarity in conversations with AI agents. Developing AI with synthetic voices that are well-functioning and eloquent, but still clearly synthetic, could increase trust and reduce confusion. It is also important to consider the impact of AI on joint meaning-making and relationship-building, particularly in contexts where these are essential.

Author: Neurologica
