Health AI Chatbots: What Delaware Residents Should Know Before Seeking Medical Advice

New AI chatbots designed to answer health questions are gaining popularity, with companies like OpenAI launching specialized medical versions. While these tools can help explain test results and prepare for doctor visits, medical experts warn they're not substitutes for professional care and raise privacy concerns.

DOVER (TV Delmarva) — As millions of Americans increasingly rely on artificial intelligence for guidance, technology companies are now launching specialized chatbots designed to address medical and health concerns.

This past January, OpenAI unveiled ChatGPT Health, a specialized version of its popular chatbot that can review medical records, fitness app data, and information from wearable devices to respond to health-related inquiries. The service currently has a waiting list for access. Meanwhile, Anthropic, another AI developer, provides comparable capabilities through its Claude chatbot for select users.

Both technology firms emphasize that their artificial intelligence systems, called large language models, are not meant to replace medical professionals and should never be used for diagnosing illnesses. The companies position these tools as aids for interpreting complex medical test results, preparing patients for medical appointments, or identifying significant health patterns within medical records and app data.

Medical professionals and researchers who have tested ChatGPT Health and similar technologies see them as a potential improvement over the alternatives patients typically use today.

While AI systems aren’t flawless and may occasionally provide inaccurate information, they typically deliver more tailored and relevant responses than what patients might discover through internet searches.

“The alternative often is nothing, or the patient winging it,” explained Dr. Robert Wachter, a medical technology specialist at University of California, San Francisco. “And so I think that if you use these tools responsibly, I think you can get useful information.”

A key benefit of these newer chatbots is their ability to provide responses based on individual medical histories, including medication lists, patient age, and physician notes.

Wachter and other experts suggest that even without uploading medical records to AI systems, users should provide comprehensive details to receive better responses.

However, Wachter and colleagues emphasize that certain situations require immediate medical care rather than chatbot consultation. Warning signs like difficulty breathing, chest discomfort, or severe headaches may indicate medical emergencies.

Even for non-urgent health concerns, both patients and physicians should maintain “a degree of healthy skepticism” when using AI programs, according to Dr. Lloyd Minor from Stanford University.

“If you’re talking about a major medical decision, or even a smaller decision about your health, you should never be relying just on what you’re getting out of a large language model,” stated Minor, who serves as dean of Stanford’s medical school.

While AI chatbots offer advantages when users share personal medical details, it’s crucial to recognize that information provided to AI companies lacks protection under federal privacy regulations that typically safeguard sensitive medical data.

The Health Insurance Portability and Accountability Act, or HIPAA, imposes penalties including fines and imprisonment for healthcare providers, hospitals, insurance companies, or medical services that inappropriately share medical records. However, this legislation doesn’t cover chatbot developers.

“When someone is uploading their medical chart into a large language model, that is very different than handing it to a new doctor,” Minor noted. “Consumers need to understand that there are completely different privacy standards.”

OpenAI and Anthropic both say they keep health information separate from other data types and apply enhanced privacy safeguards, and that neither company uses health data to train its AI models. Users must actively choose to share information and can withdraw access whenever they wish.

Despite growing enthusiasm for AI technology, independent research on these systems remains limited. Initial studies indicate programs like ChatGPT can perform well on advanced medical examinations but frequently struggle during real-world interactions.

A recent Oxford University study involving 1,300 participants discovered that individuals using AI chatbots to investigate hypothetical medical conditions didn’t make superior decisions compared to those using web searches or personal judgment.

When presented with detailed written medical scenarios, AI chatbots accurately identified underlying conditions 95% of the time.

“That was not the problem,” said lead researcher Adam Mahdi from the Oxford Internet Institute. “The place where things fell apart was during the interaction with the real participants.”

Mahdi’s research team identified multiple communication issues. Users frequently failed to provide chatbots with essential information needed to properly identify health problems. Additionally, AI systems often delivered mixed responses containing both accurate and inaccurate information, leaving users unable to differentiate between reliable and unreliable advice.

The 2024 study didn’t evaluate the most recent chatbot versions, including newer options like ChatGPT Health.

Wachter believes improving chatbots’ ability to ask follow-up questions and gather crucial details from users represents an important area for development.

“I think that’s when this will get really good, when the tools become a little bit more doctor-ish in the way they go back and forth” with patients, Wachter explained.

Currently, one method to increase confidence in AI-generated information involves consulting multiple chatbots, similar to seeking second medical opinions.

“I will sometimes put information into ChatGPT and information into Gemini,” Wachter said, referring to Google’s AI platform. “And when they both agree, I feel a little bit more secure that that’s the right answer.”
