AWO analysis shows gaps in effective protection from AI harms
The Ada Lovelace Institute commissioned AWO to analyse the UK’s current legal regime and assess how effectively it protects individuals from AI harms.
In March 2023 the UK Government published a White Paper setting out an approach to the regulation of artificial intelligence, and invited feedback through a consultation. In contrast to other approaches (such as the EU’s AI Act), the UK proposes no additional specific AI regulation, instead arguing that “AI is currently regulated through existing legal frameworks”.
AWO’s report ‘Effective protection against AI harms’ raises questions about the Government’s position: gaps exist that prevent individuals from being effectively protected against harms flowing from the use of AI tools.
Background
To help formulate its response to the consultation, the Ada Lovelace Institute asked AWO’s legal team to review three hypothetical scenarios, in financial services, employment and the public sector, showing how the increasing use of algorithmic tools could harm individuals.
Examining AI harm scenarios against existing regulation
We assessed the level of effective protection against those harms by looking at:
- The existence of regulatory requirements to not use technology in harmful ways, and to consider and address harmful impacts in advance;
- The presence, powers and resources of regulators to enforce those requirements;
- Legal rights to redress where harms occur; and
- The likelihood that individuals will be able to evidence harms and enforce their rights in accessible forums.
Scenario 1 concerned a productivity and availability-scoring algorithm that sets warehouse workers’ shifts and pay. We found that effective protection against the harms of such a tool is weak. Whilst there is some sector-specific regulation, it makes unlawful only certain practices relating to individuals with employment status and protected characteristics, and so fails to address the totality of the harm envisaged by the Scenario. The absence of a sector-specific regulator (and the consequent reliance on the ICO and EHRC, which have limited resources, information and enforcement capacity) compounds this, as does a lack of effective transparency about precisely how the tool works and may discriminate.
Scenario 2 concerned a biometric scoring algorithm used by a lender to determine creditworthiness based on speech patterns. We found that the UK GDPR and the Equality Act offer strong protections where unfair treatment arising from the biometric score relates to particular characteristics such as race, health and disability. Other arbitrary and unfair discrimination, such as on the basis of regional or class-linked accents, would be less well protected. The key layer of protection comes from sector-specific rules, a sector-focused regulator in the Financial Conduct Authority, and an accessible sector-specific mechanism for redress in the Financial Ombudsman Service. Even here, however, the stumbling block that prevents effective protection from being complete is the lack of legally mandated, in-context transparency for individuals, alerting them to the potential for unfair treatment.
Scenario 3 concerned a chatbot used by a government agency to provide benefits advice that is sometimes incorrect. Our analysis of this scenario underlines the limitations of cross-cutting regulation where Equality Act protected characteristics are not directly engaged and GDPR rights are only enforceable through the civil courts. Whilst important protection is provided by the existence of a voluntary government maladministration scheme, transparency remains a barrier to effective protection, particularly since all public sector guidance on providing transparency when using AI tools is voluntary.
Our analysis of the scenarios showed the central importance of cross-cutting laws that govern the use of data and decision-making generally: the UK GDPR and the Equality Act 2010. Equally, it revealed common gaps in protection, including:
- The lack of legally mandated, meaningful, and in-context transparency that would alert individuals to the possible harm they face and allow them to evidence it;
- Gaps in regulation due to the ICO’s and the Equality and Human Rights Commission’s lack of resources and access to information, combined with a lack of enforcement powers or a low use (relative to peer regulators) of the powers that do exist; and
- Individuals being required to enforce GDPR (and in some cases, Equality Act) rights through the civil courts, a process that is lengthy, expensive, risky and off-putting for most ordinary people.
We conclude that, taken together, the Scenarios demonstrate significant gaps in the effective protection from AI harms offered by the current regulatory regime, and that the Government’s proposed data protection reforms are set to make this worse.
If you’re interested in this topic or have any questions or comments on the analysis, please contact alex.lawrencearcher@awo.legal