Artificial intelligence is rapidly reshaping how people access healthcare – but public confidence in the technology remains deeply divided, according to a major new study from King’s Health Partners, Responsible AI UK and the Policy Institute at King’s College London.
The research reveals that 15% of the public have already used AI chatbots for health advice instead of contacting a GP or another NHS service, while one in ten (10%) say they have used AI for mental health therapy or wellbeing support rather than seeing a trained professional.
The findings come at a pivotal moment, as the government accelerates the rollout of AI across the NHS and the National Commission into the Regulation of AI in Healthcare considers how oversight needs to evolve.
While AI tools are increasingly being used as a first port of call, the study raises serious concerns about safety and unintended consequences. Among those who sought health advice from AI:
- One in five (20%) say the technology did not encourage them to seek a professional opinion
- 21% say they decided against seeking professional healthcare advice because of something an AI chatbot said
These findings sit alongside recent evidence that AI chatbots misdiagnose up to 80% of cases in early testing, heightening fears about reliance on unregulated tools.
Public opinion is sharply split on whether AI should be used in NHS clinical decision‑making, with 37% of people supporting its use and 38% opposing it.
However, opposition is more strongly felt: 15% strongly oppose AI being used in clinical decision‑making, compared with just 8% who strongly support it.
Opposition is highest among 18‑to‑24‑year‑olds, with nearly half (49%) against clinical AI use, compared with 36% of people aged 65 and over. Women are also significantly more opposed than men (46% vs 30%).
The study finds a significant gap between perception and reality. On average, the public believe 39% of GPs already use AI in clinical decision‑making, when the true figure is just 8%.
This mismatch risks fuelling mistrust and confusion as AI tools become more visible in healthcare settings.
Across the study, the public consistently call for stronger safeguards:
- 76% say AI tools used in patient care should be officially approved and regulated, even if this slows adoption
- Just 17% believe doctors should be free to choose AI tools without formal approval
This demand comes as critics – including the Nuffield Trust and Royal College of Physicians – have warned that the absence of a single regulatory framework has created a “wild west” of AI adoption in healthcare.
The dominant emotion associated with the NHS using AI for clinical tasks is anxiety about safety and accuracy, cited by 39% of respondents.
Overall, the public are more than twice as likely to report negative emotions (63%) as positive ones (28%), with women significantly more likely than men to feel anxious (46% vs 31%).
Across all clinical scenarios tested, the public place more trust in doctors than in AI. Trust is highest for psychological therapy, where 46% say they trust a doctor much more than AI, compared with just 1% who trust AI more.
However, trust in doctors falls sharply when they are described as being at the end of a long and busy shift – while willingness to trust AI rises in every scenario.
If an NHS‑approved AI system disagreed with a doctor’s diagnosis, 55% want a second doctor to review the case, whilst just 7% would follow the AI’s advice alone.
If AI misses a health problem in a scan or X‑ray, the public are most likely to blame the treating doctor or healthcare professional (34%), or the NHS trust that deployed the tool (24%). Only 6% would primarily blame the company that developed the AI.
A consistent gender divide runs through attitudes to AI in healthcare. Women are:
- More likely to feel anxious about AI use
- More likely to oppose its use in clinical decision‑making
- More likely to want to be informed and given the option to opt out
Professor Graham Lord, Executive Director, King’s Health Partners, said:
“This research underlines the scale and pace at which AI is already shaping how people access healthcare. While the opportunities are significant, it also highlights concerns about safety and accountability.
“When something goes wrong with AI, responsibility is often placed on clinicians, even where they have limited control over how AI tools are introduced. To realise AI’s potential, we need greater transparency about what works, what is safe, how decisions are made, and how issues are handled – so staff and patients can feel confident in its use.”

The findings suggest public trust in AI is fragile, conditional and unevenly distributed. As adoption accelerates, the study concludes that clear regulation, transparency and patient choice will be essential if AI is to enhance – rather than undermine – confidence in NHS care.
