Advanced AI assistants (AAAs) have moved quickly from experimental tools to embedded parts of everyday life. They draft messages, manage calendars and tasks, offer wellness support, and increasingly influence – or even make – decisions for their users. As these systems become more personalised and capable, the stakes of their mistakes rise too.
In a new report for the Ada Lovelace Institute, AWO analyses whether current UK law offers meaningful protection against the harms AAAs could cause, examining four realistic scenarios: a wellness chatbot, a high-autonomy personal financial assistant, a legal advice tool deployed in a frontline advice setting, and an AI companion app. These scenarios reflect ways people already use AI systems today. Our client asked us: if something goes wrong, is the law equipped to prevent harm – or enable redress afterwards?
The picture is mixed, and in some areas rather stark. Some harms fall clearly outside existing frameworks. Others are theoretically covered by law, but in practice almost impossible for an individual to evidence or litigate. AAAs introduce distinctive challenges for transparency, causation, mitigation, and accountability. Their complexity, the opacity of model development, and the ways users interact with them all make it harder to understand how and why harm has occurred – a prerequisite for both regulation and redress.
Across the scenarios, the analysis shows that protections tend to hinge on threshold conditions that are often not met: for example, whether an AAA is marketed in a way that triggers consumer protection law; whether a regulated professional relationship is formed; or whether the tool’s outputs qualify as “automated decisions” under the UK GDPR. Where these thresholds are not reached, large categories of harm may fall through the gaps. Even when they are reached, the realistic prospect of users obtaining redress is often low, given the practical barriers to civil claims and the evidential complexity involved.
The report also identifies harms that current legal frameworks simply do not recognise: emotional dependency, subtle shifts in autonomy or political views, or diffuse public-sphere effects. These harms are real, but the law as it stands does not treat them as actionable.
The analysis suggests that as things stand, relying on users spotting and pursuing individual claims is unlikely to meaningfully manage AAA-related risks. Whilst developments in the common law and improved access to justice might ameliorate this, the role of regulators – and the information available to them – is likely to be significant.
AAAs are rapidly becoming general-purpose tools used across domains historically governed by different legal regimes. This creates friction at the boundaries of existing frameworks, and exposes where traditional legal concepts do not map neatly onto model-driven, adaptive systems. Our report does not argue for specific reforms, but it does show where the assumptions underpinning current protections do not hold in an AAA-shaped world.
You can read the full analysis here. Please contact alex.lawrencearcher@awo.legal if you would like to discuss it.
The analysis was authored by Lucie Audibert and Alex Lawrence-Archer of AWO, and Radha Bhatt of Matrix Chambers.