OpenAI has launched ChatGPT Health, a dedicated feature within ChatGPT designed for health and wellness queries — a move that comes as an estimated 40 million people already ask the platform health-related questions every day.
To understand what this means for patients, Northwestern Now spoke with Dr. David Liebovitz, co-director of the Center for Medical Education in Data Science and Digital Health at Northwestern University Feinberg School of Medicine. Liebovitz has spent decades teaching clinical informatics and has served as chief medical information officer at two organizations, where he implemented AI in clinical medicine.
Liebovitz frames the central question not as whether patients will use AI for health information — they already do — but whether they can be helped to do so more safely and effectively.
He points to the 21st Century Cures Act, which now requires health systems to give patients complete access to their medical records through standardized APIs that electronic health record vendors like Epic must provide. Tools like ChatGPT Health, he argues, can help patients actually make sense of that data.
"For essentially zero incremental cost, a patient can get help understanding lab results, preparing questions for appointments and identifying gaps in their care that might otherwise be missed," he said.
Liebovitz also situates ChatGPT Health within a longer arc of patient safety failures.
More than 25 years after the landmark Institute of Medicine report To Err is Human documented tens of thousands of preventable deaths from medical errors, the problem remains unsolved. AI tools that can review a patient's full medical history and flag potential concerns, he says, represent a meaningful improvement over patients arriving at appointments armed with decontextualized Google searches.
The key distinction, he notes, is that these tools synthesize information in context rather than generating alarm from isolated symptoms.
Liebovitz is clear-eyed about the risks, and says patients need to understand one critical fact: health data shared with ChatGPT is not protected by HIPAA.
Unlike conversations with a doctor or therapist, there is no legal privilege: the data could be subpoenaed in litigation or accessed through other legal processes. For sensitive matters — particularly reproductive or mental health concerns — he says that is a real and serious consideration.
Liebovitz points to a privacy-preserving alternative that sidesteps these concerns entirely — running AI models locally on a patient's own device, with no data ever sent to a cloud server.
Modern smartphones, he notes, already have sufficient processing power to run capable language models, and Apple's work on Apple Intelligence demonstrates that sophisticated AI can operate entirely on-device. Open-source models optimized for mobile hardware are improving rapidly.
"Within a year or two, a patient could have a highly capable health assistant running entirely on their phone, analyzing their downloaded medical records with complete privacy," he said — no subscription fees, no corporate servers, and no subpoena risk.