Before I give my view, here’s what’s actually been announced so far:
- What it is: a health-focused version of ChatGPT designed to answer medical and wellbeing questions, with extra safeguards and clearer boundaries around what it can and can’t do
- What it does: provides health information, symptom guidance and general advice, but not diagnoses or treatment decisions, syncing with your personal health record to give it more background on your history
- Where you can use it: launching in the US first
- How to access it: via ChatGPT, with a waiting list for early access and wider rollout expected over time
This is a big deal, of course, but it feels less like a big reveal and more like OpenAI acknowledging reality. People have been using ChatGPT for health questions for ages anyway, from low-level symptom checking to far more emotional, personal stuff. To some extent, this just deepens and formalises behaviour that was already happening.
What’s more interesting is the depth and the privacy angle. This integrates with your care record, and that's information I'm not sure I'd even let my mum see. And yet a lot of people seem surprisingly relaxed about sharing it with a nameless, faceless AI engine run by a US tech company.
Which makes the timing extra interesting when you look at what’s coming next. The NHS App is expected to introduce AI-powered features this spring. On paper, that should feel safer. NHS data governance and oversight are far tighter than anything a consumer tech platform operates under.
And yet, behaviourally, it may feel harder. Talking to 'the NHS' feels official, record-based, permanent. Talking to ChatGPT feels informal, even disposable, regardless of whether that’s actually true.
So there’s a strange tension emerging. People are already happy to hand over their thoughts and feelings to an AI with fuzzy boundaries. Whether they’ll feel as comfortable doing the same inside a formal health system is a very different question.
Find out more about ChatGPT Health.
