11 January, 2026 | 12:09 PM

ChatGPT Health: How much can it be relied upon?


The feature was developed over two years in collaboration with more than 260 physicians from 60 countries to prioritize safety, clarity, and appropriate referrals to professionals. OpenAI repeatedly stresses that ChatGPT Health is not intended for diagnosis or treatment and is designed to support, rather than replace, consultations with doctors.

In a significant step into the healthcare space, OpenAI has officially launched ChatGPT Health, a dedicated feature within its popular AI chatbot that allows users to securely connect their medical records and wellness apps. Announced on January 7, 2026, the new experience enables ChatGPT to provide more contextualized responses to health-related queries, such as interpreting recent test results, preparing for doctor appointments, suggesting diet and workout adjustments, or analyzing insurance options based on personal health patterns.

The company reports that over 230 million people worldwide already turn to ChatGPT weekly for health and wellness advice, often using it as a first stop to decode lab reports, plan diets, or compare health options. ChatGPT Health formalizes this widespread habit by creating a separate, more secure space for such conversations, complete with enhanced privacy protections, including purpose-built encryption and isolation. OpenAI emphasizes that these health-specific interactions are not used to train its core AI models.

Users can integrate data from sources like Apple Health, MyFitnessPal, Function, and even electronic medical records (initially available in the U.S.), allowing the AI to ground its responses in an individual's actual health information. The feature was developed over two years in collaboration with more than 260 physicians from 60 countries to prioritize safety, clarity, and appropriate referrals to professionals. OpenAI repeatedly stresses that ChatGPT Health is not intended for diagnosis or treatment and is designed to support, rather than replace, consultations with doctors. It aims to empower users to better understand their health, navigate complex systems, and play a more active role in wellness management.
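To make the idea of "grounding" concrete: a minimal, purely illustrative Python sketch of what such grounding could look like conceptually. This is not OpenAI's implementation or API; the `LabResult` type, the reference ranges, and the `flag_results` helper are all hypothetical, invented here only to show how connected records might be turned into context for a health query.

```python
from dataclasses import dataclass

@dataclass
class LabResult:
    """One lab value from a connected record (hypothetical structure)."""
    test: str
    value: float
    unit: str
    reference_range: tuple  # (low, high), inclusive

def flag_results(results):
    """Return results outside their reference range -- the kind of
    user-specific context an assistant could surface before answering,
    rather than replying generically."""
    flagged = []
    for r in results:
        low, high = r.reference_range
        if not (low <= r.value <= high):
            flagged.append(r)
    return flagged

# Example connected records (illustrative values only)
records = [
    LabResult("HbA1c", 6.9, "%", (4.0, 5.6)),
    LabResult("LDL cholesterol", 95.0, "mg/dL", (0.0, 100.0)),
]

print([r.test for r in flag_results(records)])  # -> ['HbA1c']
```

The point of the sketch is the shift the article describes: instead of interpreting "my test results" in the abstract, a grounded assistant can reference the user's actual values, while still deferring diagnosis to a clinician.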

A senior consultant in critical care acknowledged AI's usefulness in narrowing possibilities and guiding users toward professional help, but warned against relying on it for final decisions or treatment. He shared a recent case in which a patient risked complications by stopping medication based on an incomplete understanding, underscoring that specialist judgment remains essential. Another health expert emphasized the difference between medical care and true healthcare, arguing that personalized healing requires understanding an individual's constitution, personality, lifestyle, and environment — elements AI cannot fully replicate through statistics alone. He highlighted the irreplaceable value of face-to-face interactions for emotional resonance and intuitive solutions.

A cybersecurity specialist expressed deep scepticism about data privacy, pointing to past breaches at OpenAI and the sensitivity of health information. He raised concerns over potential misuse by insurers, pharma companies, or through breaches, as well as AI issues like hallucination (generating plausible but incorrect information) and biases that may not suit diverse populations, such as those in India versus Western contexts. A clinical psychologist focused on mental health risks, noting that AI lacks an ethical duty of care, legal accountability, and the ability to read nonverbal cues like body language or tone — all critical in therapy. She cautioned that confident-sounding AI responses can be taken at face value, potentially exacerbating isolation or providing inadequate support.

From a business viewpoint, an investment expert suggested that while large models like ChatGPT set a new baseline, smaller health tech firms can still differentiate through India-specific contextualization (e.g., women's health, chronic diseases, or regional factors). However, they must address persistent challenges like hallucinations and work around the dominance of major players. The experts from various areas converged on a key consensus: AI can inform, educate, and bridge knowledge gaps — especially in resource-constrained settings like India — but it must remain a tool, not a substitute, for human clinical judgment, empathy, and accountability.

As ChatGPT Health rolls out initially to a limited group (with wider web and iOS access planned soon), the development marks a pivotal moment in AI's intersection with personal health. While it promises greater accessibility and empowerment, it also amplifies calls for robust safeguards, regulation, and clear boundaries to protect users from risks in one of life's most sensitive domains.