AI in primary care: experts warn of safety risks as tech outpaces regulation

New research shows AI tools like ChatGPT and digital scribes are being used in GP clinics without proper safety checks.

17 December 2025


From digital scribes to ChatGPT, artificial intelligence (AI) is rapidly entering GP clinics. New University of Sydney research warns that technology is racing ahead of safety checks, putting patients and health systems at risk.  

The study, published in The Lancet Primary Care, synthesised global evidence on how AI is being used in primary care, drawing on data from the United States, United Kingdom, Australia, several African nations, Latin America, Ireland and other regions. It found that AI tools such as ChatGPT, AI scribes and patient-facing apps are increasingly used for clinical queries, documentation and patient advice, yet most are being deployed without thorough evaluation or regulatory oversight.

“Primary care is the backbone of health systems, providing accessible and continuous care,” said study lead Associate Professor Liliana Laranjo, Horizon Fellow at the Westmead Applied Research Centre. “AI can ease pressure on overstretched services, but without safeguards, we risk unintended consequences for patient safety and quality of care.”

GPs and patients turning to AI but evidence lags behind

Primary care is under strain worldwide, from workforce shortages to clinician burnout and rising healthcare complexity, all worsened by the COVID-19 pandemic. AI has been touted as a solution, with tools that save time by summarising consultations, automating administration and supporting decision-making.

In the UK, one in five GPs reported using generative AI in clinical practice in 2024. But the review found that most studies of AI in primary care are based on simulations rather than real-world trials, leaving critical gaps in evidence on effectiveness, safety and equity.

The proportion of GPs using generative AI in Australia is not reliably known, though it is estimated at around 40 percent.

“AI is already in our clinics, but without Australian data on how many GPs are using it or proper oversight, we’re flying blind on safety,” Associate Professor Laranjo said. 

While AI scribes and ambient listening technologies can reduce cognitive load and improve job satisfaction for GPs, they also carry risks like automation bias and loss of important social or biographical details in medical records. 

“Our study found that many GPs who use AI scribes don’t want to go back to typing. They say it speeds up consultations and lets them focus on patients, but these tools can miss vital personal details, and can introduce bias,” said Associate Professor Laranjo. 

For patients, symptom checkers and health apps promise convenience and personalised care, but their accuracy varies widely, and many have never undergone independent evaluation.

“Generative models like ChatGPT can sound convincing but be factually wrong,” said Associate Professor Laranjo. “They often agree with users even when they’re mistaken, which is dangerous for patients and challenging for clinicians.”


Equity and environmental risks of AI

Experts warn that while AI promises faster diagnoses and personalised care, it can also deepen health gaps if bias creeps in. Dermatology tools, for example, often misdiagnose conditions on darker skin tones, which are typically underrepresented in training datasets.

Conversely, the researchers say that when designed well, AI can address inequities: in one arthritis study, an algorithm trained on a diverse dataset predicted patient-reported knee pain better than standard x-ray interpretation by doctors, doubling the number of Black patients found eligible for knee replacement.

“Ignoring socioeconomic factors and universal design could turn AI in primary care from a breakthrough into a setback,” said Associate Professor Laranjo. 

Environmental costs are also substantial. Training GPT-3, the large language model released in 2020 that preceded ChatGPT, emitted carbon dioxide equivalent to 188 flights between New York and San Francisco. Data centres now consume around 1 percent of global electricity, and in Ireland they account for more than 20 percent of national electricity use.

“AI’s environmental footprint is a challenge,” Associate Professor Laranjo said. “We need sustainable approaches that balance innovation with equity and planetary health.”

The researchers urge governments, clinicians and tech developers to prioritise:

  • robust evaluation and real-world monitoring of AI tools
  • regulatory frameworks that keep pace with innovation
  • education for clinicians and the public to improve AI literacy
  • bias mitigation strategies to ensure equity in healthcare
  • sustainable practices to reduce AI’s environmental impact.

“AI offers a chance to reimagine primary care, but innovation must not come at the expense of safety or equity,” Associate Professor Laranjo said. “We need partnerships across sectors to make sure AI benefits everyone – not just the tech-savvy or well-resourced.”

Research

Laranjo, Liliana, et al., ‘Artificial intelligence in primary care: innovation at a crossroads’ (The Lancet Primary Care, 2025)

DOI: 10.1016/j.lanprc.2025.100078

Declaration

The research was funded by the NHMRC, the Sydney Horizon Fellowship, the Oxford–Reuben Clarendon Scholarship, the National Institute for Health and Care Research Applied Research Collaboration Northwest London and the NIHR North West London Patient Safety Research Collaboration, with infrastructure support from the NIHR Imperial Biomedical Research Centre.


Contact

Associate Professor Liliana Laranjo, Horizon Fellow at the Westmead Applied Research Centre

Media contact

Emily Fraser, Assistant Media and PR Adviser