18/02/2025

In a world first, DERM from Skin Analytics has been awarded Class III CE marking as a medical device in the EU.[1] The autonomous skin cancer detection system uses AI without human oversight and is now approved for making clinical decisions in Europe. It is said to have achieved a 99.8% accuracy rate in ruling out cancer, outperforming human dermatologists, who typically achieve around 98.9%. What this means in practice is that DERM can now independently determine that a patient does not have skin cancer, without requiring a doctor to review that conclusion.

While this may come as welcome news for worried patients, at a time when a chronic shortage of Consultant Dermatologists is contending with an 82% increase in referrals to dermatology waiting lists between April 2021 and March 2024,[2] the development raises a number of interesting issues.

The standard of care: damned if you do, damned if you don’t

For some time now, I have been talking about the tipping point at which AI is clearly demonstrated to have outstripped human performance, and what that will mean for the standard of care in the context of clinical negligence and medical malpractice claims. Of course, there have already been numerous examples where AI has been shown to outperform us mere mortals – in detecting breast cancer,[3] in performing medical interviews,[4] and in predicting age-related macular degeneration[5] – but so far this has been in the context of research studies, rather than under the intense scrutiny of accuracy, safety and effectiveness that is applied in the regulatory approval process for a medical device.

This quantum leap forward raises the following question: could a provider of health services (whether an NHS Trust or a private provider) be criticised, or even found liable, for a failure to employ AI? It is no longer just a matter of whether it is reasonable to use or be assisted by AI in arriving at clinical decisions, but also whether it is reasonable and logical not to utilise AI where it is outperforming humans. As the DERM case study vividly illustrates, as this technology becomes better, quicker and more accurate, able to get through more images and scans and to prioritise what clinicians should be looking at first, one can well foresee allegations being formulated that it was mandatory to have an AI system in place, which would have prioritised a patient sooner or picked up an anomaly earlier – and that the failure to do so is a failure to operate a safe system and/or a breach of duty.

But of course, the standard of care is not set in a vacuum.[6] Faced with such an unenviable dilemma, a court in setting the standard of care is likely to have to take into account a multiplicity of factors, including:

  • How widespread is the use of the particular piece of AI: how many hospitals/Trusts are using it, and how many are not?
  • What is the guidance from organisations such as the Royal Colleges and the GMC?
  • How likely is it or was it that harm would occur by either using or not using AI? And what is the magnitude of that potential harm?
  • How much does it cost to implement a particular piece of AI kit?
  • What is the context in which the AI is being deployed?

These are just some of the issues that the courts will have to grapple with in establishing rules and standards around the use of AI in healthcare.

Who is responsible when things go wrong? 

Until now, there has been a reasonably held view within the world of digital health and medtech that, if AI clinical decision making tools do end up causing adverse outcomes and patient harm, this would probably not fundamentally alter the consultant led model of care, which rests ultimate responsibility for treatment outcomes on the responsible consultant, or vicariously on the Trust or private provider. But surely this now needs a rethink where fully autonomous AI clinical decision making requires no human input or oversight. Justice, fairness and equity surely cannot attribute responsibility to individual clinicians when clinicians have been taken out of the decision making loop. That is not to say there might not remain arguments around primary liability and providers' duties to operate safe systems within their hospitals. But that is a quite different question. The point here is that the DERM case study has probably illustrated where the real battle lines will be drawn in an age of potential algorithmic harm: not between patients and doctors, but between patients on one side and systems, providers and AI developers on the other.

Dan Morris, Partner and Digital Health Co-Lead
