Implementing AI in a healthcare setting requires careful consideration. The World Health Organization (WHO) emphasizes the importance of exercising caution when incorporating AI-generated large language model (LLM) tools into routine healthcare practices. Key concerns, including data bias and inadequate protection of patient health data, underscore the need for stringent oversight to ensure the safe and ethical use of AI in healthcare.
Premature adoption of untested systems can lead to errors by healthcare professionals, potential harm to patients, and an erosion of trust in AI, thereby undermining the long-term benefits and global applicability of such technologies, as stated by the WHO.
AI has already made significant inroads into the healthcare industry and has the potential to save up to $360 billion annually. Healthcare organizations are increasingly leveraging AI for both administrative and clinical tasks, ranging from optimizing operating room schedules to interpreting medical scans. Some patients at Jefferson Health in Philadelphia are already turning to AI for diagnostic purposes instead of relying solely on physicians.
However, there are challenges that health systems must address if administrators choose to implement AI.
The WHO cautions that the data used to train these AI technologies may be biased, producing misleading or inaccurate information that poses risks to health, equity, and inclusiveness.
If the data used for training fails to represent the diverse population of patients who will use the AI tool, it can introduce bias, as outlined in a 2020 report by the Government Accountability Office (GAO).
The GAO report emphasizes that “bias in data used to develop AI tools can compromise their safety and effectiveness for patients who differ, whether genetically or in terms of socioeconomic status, general health status, or other characteristics, from the population whose data were used to develop the tool.”
For instance, a 2022 University of Chicago study analyzing over 40,000 electronic health records (EHRs) found racial bias in the data: healthcare providers were 2.5 times more likely to include negative descriptors in a Black patient’s EHR than in a white patient’s.
As the use of AI becomes more prevalent in clinical settings, there is a heightened risk to patient privacy, as highlighted in the GAO report. LLMs may be trained using health data that patients have not consented to share, and these technologies may lack adequate safeguards to protect sensitive patient information, according to the WHO.
The WHO says that while it recognizes the potential benefits of leveraging technologies, including LLMs, to support healthcare professionals, patients, researchers, and scientists, the caution customarily applied to any new technology must be exercised consistently with LLMs.
Overall, careful consideration and diligent oversight are imperative when incorporating ChatGPT or similar AI technologies into healthcare settings.