GPs Cautioned: AI Notes Risk Dangerous Errors

  • maskobus
  • Aug 10, 2025

The Rise of AI in General Practice

General practitioners (GPs) are increasingly turning to artificial intelligence (AI) tools to help manage their administrative tasks, particularly in the form of AI scribes that automatically generate medical notes from patient consultations. While these tools promise efficiency and improved documentation, they also come with significant risks that healthcare professionals must be aware of.

The Royal College of GPs has issued a warning about the potential for AI to misinterpret the nuances of conversations between doctors and patients. This can lead to inaccuracies or even fabricated information being recorded in medical notes, which could have serious consequences for patient care. The Medicines and Healthcare products Regulatory Agency (MHRA) has also highlighted the risk of “hallucinations” — when AI systems generate false or misleading data — and urges users to be vigilant.

To address these concerns, the MHRA is encouraging GPs to report any issues with AI scribes through its Yellow Card Scheme, a system typically used for reporting adverse reactions to medicines. This includes suspected inaccuracies in the generated notes, as reported by GP Online. The British Medical Association’s GP Committee has noted that the use of passive scribes in general practice is growing rapidly, with many practices adopting standalone systems or integrating them with other software tools.

Dr Phil Whitaker, a UK GP who recently moved to Canada, shared his experience of using an AI tool that failed to accurately capture the details of his conversations with patients. He found that the tool misinterpreted discussions about his move, recording incorrect information about patients relocating to Canada. Additionally, he discovered that the AI had documented examination findings and advice that he had not actually provided. While the company behind the tool advises users to review the output carefully, Dr Whitaker found that the time spent correcting errors outweighed any productivity benefits.

A recent case highlighted by Fortune illustrates the dangers of AI-generated medical records. A patient in London was mistakenly invited to a diabetic screening after an AI system falsely claimed he had diabetes and suspected heart disease. Despite such incidents, the MHRA has stated that its database currently contains no reports of adverse incidents directly linked to the use of AI scribes.

The Future of AI in the NHS

The UK government’s 10-Year Health Plan aims to accelerate the adoption of AI technologies, including AI scribes, by streamlining regulations. A new national procurement platform will be launched next year to support GP practices and NHS trusts in adopting new technology safely. Professor Kamila Hawthorne, chair of the RCGP, acknowledges the transformative potential of AI but emphasizes the need for careful regulation to ensure patient safety and data security.

However, public confidence in AI-driven healthcare services remains low. A poll conducted by Savanta for the Liberal Democrats revealed that fewer than one in three Britons are comfortable with AI features in the NHS App diagnosing their health issues, and discomfort is particularly pronounced among pensioners, 60% of whom said they would be uncomfortable. Helen Morgan, the Liberal Democrats' health spokesperson, warned that digitized services must not leave vulnerable groups behind and called for support for those less familiar with digital tools.

Health Secretary Wes Streeting has announced plans to revamp the NHS App as part of Labour’s 10-Year Health Plan, aiming to provide every patient with a “doctor in their pocket.” The app will use AI and patient medical records to offer instant answers and direct users to the most appropriate care. However, critics argue that reliance on AI could exclude older adults and those with limited digital literacy.

Dennis Reed of Silver Voices, an organization advocating for elderly Britons, expressed concern that the push for AI-driven healthcare might leave some individuals without access to timely care. He warned that for some, the “doctor in their pocket” could be “padlocked.”

Ensuring Safety and Accuracy

Despite the growing use of AI in general practice, concerns persist about data security, accuracy, and the potential for errors. The MHRA recommends that GPs use only AI tools that are registered medical devices and that meet strict performance and safety standards. Recent guidance from the agency clarifies when AI technologies qualify as medical devices; although it was written with digital mental health tools in mind, its principles extend to other digital health applications.

NHS England also encourages the use of registered medical devices in clinical settings. The MHRA has updated its Yellow Card Scheme to include a dedicated page for software and AI as medical devices, making it easier for GPs to report suspected issues. Earlier this year, the BMA advised practices to pause the use of AI scribes until they had completed data protection and safety checks and ensured that the tools met NHS standards.

As AI continues to play a larger role in healthcare, the balance between innovation and patient safety remains critical. While these tools can reduce administrative burdens and improve efficiency, their implementation must be closely regulated to prevent harm and build trust among both healthcare professionals and the public.
