Should artificial intelligence be your doctor? A warning for you. As I’ve mentioned here recently, AI is an amazing animal. As a “chatbot,” AI can respond like a human with seemingly infinite knowledge. But to me AI is something of a murky creature facing an undefined future. Despite its eloquence, AI, in its ChatGPT form, is not infrequently wrong (see here and here). If you would like a more detailed analysis of ChatGPT’s potential errors, a summary can be found here.
Earlier this month (6 July 2023), the Journal of the American Medical Association published online a series of articles on the risks and potential benefits of AI in health care. Below is an excerpt from the opening of one of the JAMA articles (“AI Chatbots, Health Privacy, and Challenges to HIPAA Compliance”). The article is largely free of medspeak, and its first three paragraphs telegraph its major theme.
From JAMA
As health care becomes more expensive and difficult to access, people turn to websites and smartphone apps for medical advice. These resources increasingly feature artificial intelligence (AI)–powered chatbots such as Google’s Bard and OpenAI’s ChatGPT.
Chatbots rely on large language models (LLMs), which are the next generation of internet search products. These tools have rekindled enthusiasm for AI-powered health care. Chatbot answers to health care questions often compare favorably to those of other medical resources. Moreover, chatbots can save time by taking on repetitive tasks that contribute to clinician burnout. However, the technology can cause significant harm. Large language models make frequent mistakes, tend to reflect the biases of their training data, and can manipulate people. In one instance, a user reportedly died by suicide after the software urged him to harm himself.
We are only beginning to understand the risks, including how chatbots threaten privacy. This Viewpoint examines the privacy concerns raised by medical uses of LLMs. We conclude that chatbots cannot comply with the Health Insurance Portability and Accountability Act (HIPAA) in any meaningful way despite industry assurances. Even if they could, it would not matter because HIPAA is outdated and inadequate to address AI-related privacy concerns. Consequently, novel legal and ethical approaches are warranted, and patients and clinicians should use these products cautiously. (My emphasis)
Food for thought
Should artificial intelligence be your doctor? The advice you receive may be erroneous, even dangerous. And your privacy may be compromised. I suggest you weigh the potential consequences carefully.