When people turn to a search engine for information on just about anything these days, it's not uncommon for an AI chatbot to pop up and give them an answer, sometimes a very thorough one with links to the sources it drew from.
That's fine for everyday questions, but what if you find out that your doctor is feeding your symptoms into an AI chatbot to get a potential diagnosis (or several), or to educate you about the possible side effects of a medication they're prescribing?
You might wonder why you should even bother paying for a doctor's appointment when you could get the same information yourself. In fact, that's what more and more people are doing. One poll by the Kaiser Family Foundation (KFF) found that one in six people use these chatbots at least monthly to get health information, and a third say they trust this information.
How do doctors use chatbots?
Just how common is it for physicians to use public generative AI (genAI) tools to help make clinical decisions, even though that's not what those tools are designed for? Multiple surveys have found that a significant percentage of physicians report using this technology for things like:
- Clinical decision-making
- Checking drug interactions
- Diagnosis support
- Treatment planning
- Patient education
- Research
While getting information from a chatbot like ChatGPT is quick, easy and free, it isn't necessarily accurate. Even when the output cites sources, those sources may themselves be outdated or wrong. Further, the old "garbage in, garbage out" adage applies: if a doctor neglects to input an important piece of information that gives the question its proper context, the answer could be dangerously inaccurate.
Another problem with entering questions into a genAI chatbot is that the information entered typically isn't kept private, which could be a Health Insurance Portability and Accountability Act (HIPAA) violation, even without a patient's name or description included. That said, it's worth noting that some health care systems use AI tools that are specially configured to be HIPAA-compliant.
The use of a chatbot or any online search doesn’t relieve doctors of liability. As one medical school professor notes, “Whoever makes the clinical decision is the one who’s responsible. Even if they use ChatGPT or PubMed or Google or whatever, they’re liable for those decisions.” That’s important to remember if you or a loved one has suffered harm due to a doctor’s negligence or error.