Brooklyn, NY—Artificial intelligence (AI) programs on the Internet are not reliable sources of medication information for pharmacists, according to a new study.

A presentation at the American Society of Health-System Pharmacists (ASHP) Midyear Clinical Meeting reported that ChatGPT’s answers to nearly three-quarters of drug-related questions reviewed by pharmacists were incomplete or wrong. In some cases, the inaccurate responses could have endangered patients, according to the Long Island University–led study.

In addition, the researchers pointed out that the AI program generated fake citations to support some responses.

“Healthcare professionals and patients should be cautious about using ChatGPT as an authoritative source for medication-related information,” cautioned lead author Sara Grossman, PharmD, associate professor of pharmacy practice at Long Island University. “Anyone who uses ChatGPT for medication-related information should verify the information using trusted sources.”

To gauge accuracy, Dr. Grossman and her coauthors queried the free version of ChatGPT by OpenAI, using real questions that had been posed to Long Island University’s College of Pharmacy drug information service over 16 months in 2022 and 2023.

The pharmacists involved in the study first researched and answered 45 questions, and each answer was reviewed by a second investigator. Those answers then served as the standard against which the responses generated by ChatGPT were compared. Overall, 39 questions were presented to ChatGPT.

The researchers reported that only 10 of the 39 ChatGPT-provided responses were judged satisfactory based on the criteria established by the investigators. For the other 29 questions, responses generated by ChatGPT:

• Did not directly address the question (11)
• Were inaccurate (10)
• Were incomplete (12).

For each question, ChatGPT was asked to provide references to verify the information. The study team noted that references were provided in just eight responses, and each of those included nonexistent references.

For example, the researchers asked ChatGPT whether a drug interaction exists between the COVID-19 antiviral Paxlovid and verapamil, a blood pressure–lowering medication. ChatGPT indicated that no interactions had been reported for the drug combination.

“In reality, these medications have the potential to interact with one another, and combined use may result in excessive lowering of blood pressure,” Dr. Grossman said in an ASHP press release. “Without knowledge of this interaction, a patient may suffer from an unwanted and preventable side effect.”

In another presentation at the conference, Marylyn D. Ritchie, PhD, director of the Institute for Biomedical Informatics at the University of Pennsylvania, discussed how AI has been used to recommend medications based on genetic profiles—something that is “coming at pharmacists quickly.”

In “Spotlights on Science: AI and Direct-to-Consumer Genetic Testing: The Good, the Bad, and the Ugly,” presented December 6 at the Midyear Clinical Meeting & Exhibition, Dr. Ritchie discussed the risks and benefits of using AI to make healthcare decisions.

“This is something patients are going to start to do, if not already, and it’s something for all providers to think about,” Dr. Ritchie said.

“While ChatGPT might be the most well-known, it is one of dozens of large language models in use today,” Dr. Ritchie said, adding, “There is so much more happening that most of us that aren’t in the AI space … just have no idea about.”

Dr. Ritchie warned of reported “horror stories” of ChatGPT giving outdated, dangerous, and outright false medical advice. Although ChatGPT’s healthcare recommendations are improving because the model is able to learn over time, the idea of a patient following ChatGPT’s advice rather than calling his or her provider “blows my mind,” she said in an ASHP press release.

The content contained in this article is for informational purposes only. The content is not intended to be a substitute for professional advice. Reliance on any information provided in this article is solely at your own risk.
