Growing Use of AI in General Practice: Benefits and Risks
General practitioners (GPs) are increasingly turning to artificial intelligence (AI) tools to assist with their daily tasks, particularly in the form of AI scribes. These tools listen to patient consultations and automatically generate summaries that are added to medical records. While this technology promises efficiency and improved documentation, it also raises concerns about accuracy and potential risks.
The Royal College of GPs (RCGP) has issued warnings about the use of AI in medical notes, emphasizing that these systems can misinterpret the nuances of conversations between doctors and patients, with potentially serious consequences if incorrect information is recorded. The Medicines and Healthcare products Regulatory Agency (MHRA) has also highlighted the risk of "hallucination" in AI-generated content, where the system produces information that is not based on actual events. It urges users to be aware of this risk and manufacturers to take steps to minimize it.
To address these concerns, the MHRA encourages GPs to report any issues with AI scribes through its Yellow Card Scheme. This reporting mechanism, typically used for adverse reactions to medicines, now includes a dedicated section for software and AI as medical devices. GPs are advised to report suspected inaccuracies or errors in AI-generated notes to ensure transparency and improve safety standards.
Despite the growing adoption of AI scribes, some professionals remain cautious. Dr. Phil Whitaker, a UK GP who recently moved to Canada, shared his experience of using an AI tool that misinterpreted conversations. He noted that the system incorrectly recorded details about patients moving to Canada and even documented findings from examinations he had not performed. According to him, the time spent reviewing and correcting the AI’s output often outweighs any productivity gains.
A recent case reported by Fortune illustrates the potential dangers of AI in healthcare. A patient in London was mistakenly invited to a diabetic screening after an AI-generated medical record falsely indicated he had diabetes and suspected heart disease. While such incidents are rare, they highlight the importance of careful oversight and validation of AI-generated content.
Although the MHRA has not yet received reports of adverse incidents related to AI scribes, the government’s 10-Year Health Plan aims to accelerate the adoption of AI technologies, including AI scribes. A new national procurement platform will be established next year to help GP practices and NHS trusts adopt these tools safely. The plan emphasizes streamlining regulations to support the integration of AI into healthcare.
Public Concerns About AI in Healthcare
While the government promotes the use of AI in healthcare, public confidence remains low. A recent poll found that fewer than one in three Britons (31%) are comfortable using AI features in the NHS App to diagnose their health issues; among pensioners, the proportion who are uncomfortable with the idea rises to 60%.
Health Secretary Wes Streeting announced plans to revamp the NHS App as part of Labour's 10-Year Health Plan, aiming to give every patient a "doctor in their pocket." However, concerns about trust and digital literacy persist. Helen Morgan, the Liberal Democrats' health spokesperson, emphasized the need to support those who may struggle with digitized services, ensuring that the changes do not leave vulnerable groups behind.
Dennis Reed of Silver Voices, which advocates for elderly Britons, expressed skepticism about the effectiveness of the AI initiative, warning that greater reliance on the app could exclude older people from accessing timely care and that the promised "doctor in their pocket" would remain "padlocked" for some.
Balancing Innovation and Safety
Professor Kamila Hawthorne, chair of the RCGP, acknowledged the potential of AI to transform healthcare but stressed the need for careful regulation. She emphasized that while AI scribes can reduce administrative burdens and improve efficiency, they must be closely monitored to ensure accuracy and data security.
The MHRA recommends that GPs only use AI tools that are registered as medical devices and meet required safety standards. Recent guidance clarifies how AI technologies qualify as medical devices, with principles applicable across digital health applications. NHS England also encourages the use of registered medical devices in clinical settings.
As AI continues to shape the future of healthcare, striking a balance between innovation and patient safety remains crucial. GPs and other healthcare professionals must remain vigilant, verifying that AI-generated documentation is accurate before adding it to patient records. Ongoing dialogue between regulators, practitioners, and the public will play a key role in shaping the responsible use of AI in medicine.