snitzoid

Do patients prefer a ChatGPT doctor or a real doctor (online)?

So I literally asked ChatGPT about this. The AI bot is a fricken lawyer and honestly a big pussy.

ChatGPT Will See You Now: Doctors Using AI to Answer Patient Questions

Pilot program aims to see if AI will cut time that medical staff spend replying to online inquiries

By Nidhi Subbaraman, WSJ

Updated April 28, 2023 12:16 pm ET

Behind every physician’s medical advice is a wealth of knowledge, but soon, patients across the country might get advice from a different source: artificial intelligence.

In California and Wisconsin, OpenAI’s “GPT” generative artificial intelligence is reading patient messages and drafting responses from their doctors. The operation is part of a pilot program in which three health systems test if the AI will cut the time that medical staff spend replying to patients’ online inquiries.

UC San Diego Health and UW Health began testing the tool in April. Stanford Health Care aims to join the rollout early next week. Altogether, about two dozen healthcare staff are piloting this tool.

Marlene Millen, a primary care physician at UC San Diego Health who is helping lead the AI test, has been testing GPT in her inbox for about a week. Early AI-generated responses needed heavy editing, she said, and her team has been working to improve the replies. They are also adding a kind of bedside manner: If a patient mentioned returning from a trip, the draft could include a line that asked if their travels went well. “It gives the human touch that we would,” Dr. Millen said.

There is preliminary data that suggests AI could add value. ChatGPT scored better than real doctors at responding to patient queries posted online, according to a study published Friday in the journal JAMA Internal Medicine, in which a panel of doctors did blind evaluations of posts.

As many industries test ChatGPT as a business tool, hospital administrators and doctors are hopeful that the AI assist will ease burnout among their staff, a problem that skyrocketed during the pandemic. The crush of messages, along with health-records management and other administrative tasks, is a contributor, according to the American Medical Association.

Epic, the company based in Verona, Wis., that built the “MyChart” tool through which patients can message their healthcare providers, saw logins more than double from 106 million in the first quarter of 2020 to 260 million in the first quarter of 2023. Epic’s software enables hospitals to store patient records electronically.

Earlier this month, Epic and Microsoft announced that health systems would have access to OpenAI’s GPT through Epic’s software and Microsoft’s Azure cloud service. Microsoft has invested in OpenAI and is building artificial intelligence tools into its products. Hospitals are piloting GPT-3, a version of the large language model that is powering ChatGPT.

ChatGPT has mystified computer scientists with its skill in responding to medical queries, including its ability to pass the U.S. Medical Licensing Exam, though it is known to make things up. OpenAI’s language models haven’t been specifically trained on medical data sets, according to Eric Boyd, Microsoft’s corporate vice president of AI Platform, though medical studies and medical information were included in the vast data set that taught it to spot patterns.

“Doctors working with ChatGPT may be the best messenger,” said John Ayers, a computational epidemiologist at the University of California, San Diego, and an author of the JAMA study.

The AI pilot has some healthcare staff buzzing, said Dr. Millen. “Doctors are so burnt out that they are looking for any kind of hope.” Her hospital system saw patient messages jump from 50,000 a month before the pandemic to more than 80,000 a month after, with more than 140,000 messages in some pandemic months, Dr. Millen said.

Doctors and their teams are struggling to accommodate the extra load, she said. “I don’t get time on my schedule. My staff is really busy, too.”

Now when Dr. Millen clicks on a message from a patient, the AI instantly displays a draft reply. In doing so, the AI consults information in the patient’s message as well as an abbreviated version of their electronic medical history, said Seth Hain, senior vice president of research and development at Epic. Medical data is protected in compliance with federal laws requiring patient privacy, he added.

UC San Diego Health began testing an AI tool in April. PHOTO: MIKE BLAKE/REUTERS

There is an option to start with the draft—and edit or send the message as-is, if it is correct—or start with a blank reply. The AI references the patient’s medical record when it proposes a response, for example mentioning their existing medications or the last time they were seen by their physician. “It is helping us get it started,” Dr. Millen said, saving several seconds that would be spent pulling up the patient’s chart.

For now, the San Diego team has stopped the AI from answering any query that seeks medical advice. Similarly in Wisconsin, the 10 doctors at UW Health have enabled AI responses to a limited set of patient questions, including prescription requests and requests for documentation or paperwork, according to Chero Goswami, chief information officer at UW Health.

Administrators and doctors say the tool could be transformative—but only if it works. If the drafts require too much fact-checking or modification or demand too much time, then doctors will lose trust in it, said Patricia Garcia, a gastroenterologist at Stanford Health Care who is part of the team that aims to begin trialing GPT for messages next week. “They are only going to use this as long as it is making their life easier.”

According to one team of doctors, the version of ChatGPT used in the study is significantly better than physicians at answering medical questions posted online. For the new JAMA Internal Medicine study, the authors scoured the Reddit forum r/askDocs, where people post questions about their health and illnesses. Medical providers verified by Reddit moderators post replies.

For the study, the authors pulled 195 questions, and the responses from doctors, that were posted to the forum in October. They then posed the same questions to ChatGPT and logged the AI’s responses.

A team of five medical professionals graded the AI responses for quality and empathy against those written by doctors on Reddit. Without knowing who wrote which, the evaluators gave “good” or “very good” ratings to four times as many ChatGPT responses as physician posts. And only 4.6% of doctors’ posts were graded as “empathetic” or “very empathetic,” compared with 45% of ChatGPT posts—roughly 10 times as many.

Christopher Longhurst, chief digital officer and chief medical officer at UC San Diego Health, who is an author on the study, said the data in the study persuaded him to give the AI pilot a try. “There is now research showing that this is going to help—well, let’s see if we can translate this into practice,” he said.

