
You Could Be Breaking the Law: AI Chatbots in UK Care Work Must Comply with Clinical Safety Standards

Generative AI chatbots and other large language model tools are increasingly being used to support care workers, for example to draft care plans, answer questions or provide conversational support. While these tools promise to reduce administrative burden and improve efficiency, poorly supervised use can expose organisations and individual carers to significant legal liability.

Under UK law, digital tools used in health and social care must meet specific clinical risk and data protection standards. AI chatbots that go beyond simple transcription or administrative support may be classed as medical devices and must be registered with the Medicines and Healthcare products Regulatory Agency (MHRA).

This article summarises the regulatory framework, with particular focus on the DCB0160 clinical risk management standard and section 250 of the Health and Social Care Act 2012, and explains why using a chatbot without proper compliance may be unlawful.


The Legal Framework

Section 250 of the Health and Social Care Act 2012

The Health and Social Care Act 2012 empowers NHS England to publish information standards that apply to organisations providing health or adult social care services.

Section 250 states that any person or organisation to whom an information standard applies must comply with it. These standards may relate to the processing of information and may be issued in respect of any public body or provider of health or adult social care.

This provision gives NHS England the authority to mandate clinical safety standards such as DCB0129 and DCB0160.


DCB0160

Clinical Risk Management: its Application in the Deployment and Use of Health IT Systems

DCB0160 is a national standard prepared by NHS Digital’s clinical safety team and published under section 250 of the Act. It applies to organisations that deploy or use digital systems in health or social care.

The standard requires organisations to manage clinical risks, document safety evidence and appoint a suitably qualified Clinical Safety Officer.


Key requirements

  • Clinical risk management plan and hazard log: Organisations must create a clinical risk management plan, maintain a hazard log to capture potential harms, and produce a structured clinical safety case demonstrating that risks have been mitigated.

  • Clinical Safety Officer (CSO): A senior clinician, registered with a professional body and trained in digital clinical safety, must lead risk management activities and oversee the hazard log, safety case and ongoing monitoring.

  • Deployment planning: Before adopting an AI tool, providers must consider how it will be used, confirm the developer’s compliance with DCB0129, conduct a clinical risk assessment and retain evidence of effective risk management.

  • Ongoing monitoring and human oversight: AI tools must support, not replace, human decision making. Providers must review outputs, audit performance and maintain incident reporting processes.

DCB0160 applies to adopters of technology. DCB0129 applies to manufacturers. Both standards are legally mandated under the Health and Social Care Act 2012.
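To make the hazard log requirement more concrete, the sketch below shows how a single hazard might be recorded in a simple internal tracking tool. It is purely illustrative: the field names and the severity-times-likelihood score are assumptions based on common clinical risk practice, not the official DCB0160 templates or risk matrix, which should be used for the real thing.

```python
# Illustrative sketch only. Field names and the severity x likelihood score are
# assumptions based on common clinical risk practice, not the official DCB0160
# templates or risk matrix.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class HazardLogEntry:
    hazard_id: str                  # e.g. "HAZ-001"
    description: str                # what could go wrong
    cause: str                      # why it could happen
    effect: str                     # impact on care if it does
    existing_controls: list[str] = field(default_factory=list)
    severity: int = 1               # 1 (negligible) to 5 (catastrophic), assumed scale
    likelihood: int = 1             # 1 (very low) to 5 (very high), assumed scale
    owner: str = "Clinical Safety Officer"
    status: str = "open"            # open / mitigated / closed
    last_reviewed: date = field(default_factory=date.today)

    def risk_score(self) -> int:
        """Simplified severity x likelihood score, not the formal DCB0160 risk matrix."""
        return self.severity * self.likelihood

# Example: a hallucination hazard for a care-plan drafting chatbot.
entry = HazardLogEntry(
    hazard_id="HAZ-001",
    description="Chatbot inserts fabricated medication details into a draft care plan",
    cause="Generative model hallucination",
    effect="Incorrect care delivered if the draft is not reviewed",
    existing_controls=["Mandatory human review before sign-off",
                       "Regular audit of AI-drafted care plans"],
    severity=4,
    likelihood=2,
)
print(entry.hazard_id, entry.risk_score())  # prints: HAZ-001 8
```

In practice, most providers will keep the hazard log in the published template format rather than in code; the point is simply to show the kind of information each entry needs to capture and review.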


NHS England Guidance on AI-Enabled Ambient Scribes (April 2025)

In April 2025, NHS England and the MHRA published detailed guidance on ambient scribing products, including AI-enabled tools that convert speech into structured clinical notes.

The guidance includes a rapid implementation framework that closely mirrors DCB0160 requirements.


Core expectations

  • Appointment of a Clinical Safety Officer and identification of risks: Products using generative AI introduce new hazards and must be clinically assessed.

  • Completion of DCB0160 documentation and a Data Protection Impact Assessment: Organisations must produce a safety case, hazard log and monitoring framework.

  • Regulatory compliance: Products that generate summaries or recommend actions are likely to be medical devices. These must be registered with the MHRA, carry a UKCA or CE mark, and meet medical device regulations.

  • Data security and governance: Providers must comply with the NHS Data Security and Protection Toolkit and the UK GDPR.


June 2025 National Priority Notification

In June 2025, the National Chief Clinical Information Officer issued a priority notification instructing organisations to immediately stop using any ambient voice or AI technology that:

  • Lacked at least MHRA Class I registration

  • Had not completed DCB0160 and DPIA assessments

  • Failed to meet assurance standards such as DTAC or Cyber Essentials Plus

The notification warned that non-compliant tools could expose both organisations and clinicians to legal liability.


Why Using AI Chatbots Without Compliance May Be Unlawful


AI Chatbots May Be Regulated as Medical Devices

AI chatbots used in care settings range from basic administrative tools to systems that generate care plans or respond to clinical queries.

NHS England’s 2025 guidance distinguishes between:

  • Transcription-only tools, which are not medical devices

  • Summarisation or decision support tools, which are considered higher-functionality products and must be registered with the MHRA

NHS England South East guidance confirms that summarisation tools must be registered as Class I medical devices, with Class IIa required where tools influence diagnosis or treatment decisions. A DCB0160 assessment and DPIA are mandatory before use.

Failure to complete these steps breaches both the Health and Social Care Act 2012 and UK medical device regulations.
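As a purely illustrative aid, the sketch below expresses that triage as a short screening function a governance team might use when logging new tools. The categories and indicative outcomes are assumptions drawn from the distinctions above; whether a particular product is a medical device, and in which class, must be confirmed with the supplier, the Clinical Safety Officer and, where necessary, the MHRA.

```python
# Illustrative screening only. The categories and indicative outcomes below are
# assumptions based on the distinctions summarised in this article; the actual
# regulatory determination must be made with the supplier, the Clinical Safety
# Officer and, where necessary, the MHRA.

def indicative_regulatory_path(transcribes_only: bool,
                               summarises_or_suggests: bool,
                               influences_diagnosis_or_treatment: bool) -> str:
    """Return a rough, non-authoritative indication of the likely regulatory path."""
    if influences_diagnosis_or_treatment:
        return ("Likely a Class IIa medical device: MHRA registration, UKCA/CE marking, "
                "DCB0160 assessment and DPIA required before use")
    if summarises_or_suggests:
        return ("Likely a Class I medical device: MHRA registration, UKCA/CE marking, "
                "DCB0160 assessment and DPIA required before use")
    if transcribes_only:
        return ("Likely not a medical device, but clinical safety, DPIA and DSPT "
                "obligations still apply")
    return "Unclear: seek clinical safety and regulatory advice before deployment"

# Example: a chatbot that drafts care plan summaries from visit notes.
print(indicative_regulatory_path(transcribes_only=False,
                                 summarises_or_suggests=True,
                                 influences_diagnosis_or_treatment=False))
```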


Mandatory Clinical Risk Assessment and CSO Appointment

The Care Quality Commission states that deploying digital tools without meeting DCB0160 obligations is a regulatory breach.

CQC Mythbuster 109 confirms that providers must:

  • Assess how the technology will be deployed

  • Verify the developer’s DCB0129 compliance

  • Conduct a clinical risk assessment

  • Provide evidence of effective risk management

  • Appoint a trained Clinical Safety Officer


Data Protection and UK GDPR Risks

AI chatbots frequently process special category personal data.

The Digital Care Hub warns that entering personal information into free or unmanaged AI tools may result in data being stored outside organisational control, leading to UK GDPR breaches.

NHS guidance therefore requires:

  • Completion of a Data Protection Impact Assessment

  • Compliance with the NHS Data Security and Protection Toolkit

Failure to do so may result in enforcement action by the Information Commissioner’s Office and adverse findings by the CQC.


Liability and Professional Duty of Care

AI does not remove a provider’s duty of care.

NHS England guidance requires staff to review all AI outputs before acting on them. AI systems may generate errors, biases or hallucinated information.

Media reporting has highlighted care providers using generative AI to draft care plans, with researchers warning of confidentiality breaches and potential harm if inaccurate information is relied upon.

Where harm occurs, providers may be liable for negligence. The June 2025 notification explicitly states that non-compliant AI use may expose both organisations and clinicians to legal action.


Ongoing Regulatory Review

In September 2024, the Department of Health and Social Care announced a formal review of standards DCB0129 and DCB0160 to ensure they remain effective in the context of AI and digital transformation.

This confirms that the regulatory landscape is evolving and that organisations must actively monitor compliance obligations.


Ethical and Responsible Use of AI

AI systems are non-deterministic and may reflect biases present in training data. Ethical deployment requires:

  • Transparency and consent: Patients must be informed when AI is used and how their data is processed.

  • Human oversight: AI must support, not replace, professional judgement.

  • Bias monitoring: Systems should be assessed for performance across different accents, dialects and patient groups.

  • Documentation and learning: Incident reporting, hazard logs and safety case updates are essential to prevent harm.


Practical Steps for Care Providers

  1. Identify the chatbot’s functionality: Determine whether it only supports administration or whether it provides summaries, recommendations or triage. If it influences diagnosis or treatment, treat it as a medical device.

  2. Verify supplier compliance: Obtain evidence of DCB0129 compliance, MHRA registration, DTAC approval and Cyber Essentials certification.

  3. Appoint a Clinical Safety Officer and complete DCB0160: Maintain a clinical risk management plan, hazard log and safety case throughout the system lifecycle.

  4. Complete a DPIA and ensure DSPT compliance: Assess data flows, risks and mitigation measures.

  5. Train staff and maintain oversight: Emphasise review of AI outputs and establish incident reporting and audit processes.

  6. Strengthen governance and procurement controls: Use accredited frameworks, involve information governance early and clearly define liability in supplier contracts.
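As a closing illustration, the sketch below pulls these steps into a simple internal go or no-go gate. The evidence items are assumptions drawn from the steps and notifications discussed in this article rather than an official checklist; the aim is only to show how a provider might track outstanding assurance work before switching a tool on.

```python
# Illustrative go/no-go gate only. The evidence item names are assumptions for
# internal record keeping based on the steps above, not an official checklist.

REQUIRED_EVIDENCE = [
    "Supplier DCB0129 clinical safety case",
    "MHRA registration (where the tool is a medical device)",
    "DTAC assessment",
    "Cyber Essentials Plus certificate",
    "Clinical Safety Officer appointed",
    "DCB0160 risk management plan, hazard log and safety case",
    "Data Protection Impact Assessment",
    "DSPT compliance confirmed",
    "Staff training and incident reporting process in place",
]

def deployment_gate(evidence_on_file: set[str]) -> tuple[bool, list[str]]:
    """Return (approved, missing items). 'Approved' here is purely indicative."""
    missing = [item for item in REQUIRED_EVIDENCE if item not in evidence_on_file]
    return (len(missing) == 0, missing)

# Example: a provider part-way through its assurance work.
approved, missing = deployment_gate({
    "Supplier DCB0129 clinical safety case",
    "Clinical Safety Officer appointed",
    "Data Protection Impact Assessment",
})
if not approved:
    print("Do not deploy yet. Outstanding items:")
    for item in missing:
        print(" -", item)
```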


Conclusion

AI chatbots offer real opportunities to reduce administrative burden and improve efficiency in care settings. However, they also introduce clinical, legal and ethical risks that must be actively managed.

Under section 250 of the Health and Social Care Act 2012 and the DCB0160 standard, organisations deploying AI tools in health or social care must complete clinical safety assessments, appoint a Clinical Safety Officer and ensure developer compliance with DCB0129.

NHS England’s 2025 guidance and priority notifications make clear that non-compliant AI tools must not be used. Failure to comply may expose organisations and clinicians to regulatory enforcement, civil liability and professional risk.

Responsible providers will embed clinical safety, data protection and human oversight at every stage of AI adoption. In care settings, cutting corners with AI does not just risk poor outcomes. It risks breaking the law.

 
 
 
