Study Finds ChatGPT Almost Undetectable in Medical Advice

A recent study by the NYU Tandon School of Engineering and the NYU Grossman School of Medicine finds that ChatGPT’s responses to healthcare-related questions are nearly indistinguishable from those written by humans, suggesting that chatbots could become valuable allies in healthcare providers’ communication with patients.

The research team presented 392 participants aged 18 and above with ten patient questions, each paired with a response. Half of the responses were written by a human healthcare provider, and the other half were generated by ChatGPT. Participants were asked to identify the source of each response and to rate their trust in it on a 5-point scale ranging from completely untrustworthy to completely trustworthy.

The study found that people have only a limited ability to distinguish chatbot-generated responses from human ones. On average, participants correctly identified chatbot responses 65.5% of the time and provider responses 65.1% of the time, with accuracy ranging from 49.0% to 85.7% across questions. Notably, these results held regardless of respondents’ demographic categories.

Overall, participants placed mild trust in chatbot responses, with an average score of 3.4 on the 5-point scale. Trust was lower for more complex health-related tasks.

Logistical questions, such as scheduling appointments and insurance inquiries, received the highest trust ratings, with an average score of 3.94. Preventative care topics like vaccines and cancer screenings averaged 3.52. Diagnostic and treatment advice drew the lowest trust ratings, at 2.90 and 2.89, respectively.

The researchers emphasize that chatbots could play a helpful role in patient-provider communication, particularly for administrative tasks and the management of common chronic diseases. Further research is needed, however, before chatbots take on more clinical roles. Given the limitations and potential biases of AI models, healthcare providers should approach chatbot-generated advice with caution and critical judgment.

Source: NeuroScienceNews

Author: Neurologica
