17/04/2020

Technology is radically changing the way healthcare is delivered. Whether it’s patients consulting GPs through online primary care services, the use of clinically applicable artificial intelligence and deep learning in diagnosis and referral pathways, or robot-assisted surgery carried out across continents, rapid technological development is transforming the healthcare landscape. And it follows that the law needs to keep pace with these changes, particularly when things go wrong and questions of liability arise.

But is it doing so? The reality is that liability in the digital age is an emerging area with many complex facets. It’s a challenge for legal regimes to move at sufficient pace, but ultimately legislators and courts will wish to ensure an equitable distribution of loss, coherence in the law, and effective access to justice – complexity and blurred lines of responsibility should not result in victims of harm being left uncompensated for their losses.

Existing liability regimes

Broadly speaking, when a patient suffers harm while receiving healthcare the currently available avenues of redress are via tort, contract (both fault-based systems) or defective product laws such as the Consumer Protection Act 1987 (where strict liability can apply). In some situations, claimants might pursue an action under a combination of these regimes – for example, where a surgeon implants a prosthetic hip, questions might arise about:

(i) The proper surgical technique (which would be dealt with in tort or contract, depending on whether the operation was performed in the public or private sector) and

(ii) The safety of the hip implant itself (which would be dealt with under the CPA, as was the case in the Pinnacle metal-on-metal hip group litigation(1)).

Are the existing regimes fit for purpose in the digital age?

The English law of liability has of course developed over centuries and has had to contend with every conceivable technological development along the way, whether that be railways, air travel, organ transplantation, or the latest endovascular stent grafts. It is often said that the great beauty and elegance of the common law, in particular, is its fluidity and ability to adapt to the mores and problems of each age. So in principle the law of liability should also be able to cope with emerging digital health technologies(2).

But such technologies undoubtedly represent a step change. Because of their complexity, opacity, ongoing self-learning, intelligent adaptability and autonomy, it can sometimes seem almost impossible to determine why a harm has occurred and who should be held responsible for it. There are also questions about scale and replicability of harm. If an individual doctor interacting with an individual patient makes a negligent diagnosis or recommends the incorrect treatment, the harm (albeit sometimes catastrophic) will be limited to one individual. But the same cannot be said about an algorithm. The widely reported Twitter spat(3) between Babylon and @DrMurphy11 exemplifies just this point. Dr Murphy(4) raised concerns about the appropriateness of advice given by Babylon’s chatbot to a 67-year-old obese, diabetic patient who smoked and was presenting with central chest pain. The point is that if the triage algorithm is flawed, as Dr Murphy claims (and I make no judgment about that here), then the same advice could be widely replicated and cause harm to a large number of individuals. It is little wonder then that some have questioned the ability of existing legal frameworks to get to grips with these issues.

A modest proposal

Various proposals suggest modifications to the law of liability that might mitigate some of the difficulties posed by emerging digital health technologies. For example, the European Commission(5) has made recommendations including that:

• Strict liability (where a person is responsible for damage without proof of fault) is an appropriate response to the risks posed by emerging digital technologies, if, for example, they are operated in non-private environments and may typically cause significant harm.

• Strict liability should lie with the person who is in control of the risk connected with the operation of emerging digital technologies and who benefits from their operation.

• The producer of a new health technology should be strictly liable for defects in emerging digital technologies even if those defects appear after the product was put into circulation, as long as the producer was still in control of updates to, or upgrades on, the technology.

• If it is proven that an emerging digital technology has caused harm, the burden of proving defect should be reversed if there are disproportionate difficulties or costs in establishing the relevant level of safety or proving that this level of safety has not been met.

• If harm is caused by autonomous technology used in a way functionally equivalent to people performing the task, the operator’s liability for making use of the technology should be the same as where people cause the harm.

• There should be a duty on producers to equip technology with means of recording information about the operation of the technology (logging by design), if such information is typically essential for establishing whether a risk of the technology materialised.

• If it is proven that an emerging digital technology caused harm, and liability is conditional upon a person’s negligence, the burden of proving fault should be reversed if it would be disproportionately difficult or costly to establish the relevant standard of care or to prove that it had been violated. This means the person who caused the harm will need to prove that he or she was not at fault.

But are such recommendations premature? My own view is that until questions of liability arising from emerging digital health technologies are litigated through the courts in significant numbers, we risk reforming too hastily. As I have said, courts have had to adapt to every major technological advance and the harms associated with these advances, from antibiotics to x-rays. Undoubtedly emerging technologies pose significant new challenges, but courts are frequently asked to apportion liability in situations where it seems almost impossible to delineate true responsibility for the harm. Each case, each product, each app, each arrangement will have to be assessed on its own merits and subjected to the same robust forensic analysis that is applied where any question of liability has to be determined. A modest proposal would be to wait and see how things develop.

 

1. Gee & Others v DePuy International Limited [2018] EWHC 1208 (QB)

2. As recognised by the European Commission in its report on Liability for Artificial Intelligence and Other Emerging Technologies (EU 2019)

3. See, e.g., digitalhealth.net/2020/02/babylon-health-twitter-critic-troll/

4. Who recently revealed his true identity to be David Watkins, an NHS Consultant Oncologist.

5. Supra note 2

 

First appeared in Health Investor April 2020 issue

 
