UK approach to AI regulation

In March 2021, the UK’s Digital Secretary, Oliver Dowden, announced the UK’s forthcoming National Artificial Intelligence (AI) Strategy. The Strategy, which is due to be published later this year, will seek to establish the UK as a global centre for the development, commercialisation and adoption of responsible AI. As part of the UK’s focus on its 10 tech priorities, the new AI Strategy will focus on: growth of the economy through widespread use of AI technologies; ethical, safe and trustworthy development of responsible AI; and resilience in the face of change through an emphasis on skills, talent and research and development (R&D).

Read on for an interview with AI Sustainability Center senior advisor Jason Smith about the future of AI regulation in the UK.

What contributed to the development of the UK AI Strategy? What’s the background?
The UK’s AI Strategy stems from work originally done in the House of Lords. In April 2018, the House of Lords published its first report on AI in the UK, entitled ‘Ready, Willing & Able’. In December 2020, it published a follow-up report, which called on the Government to create a comprehensive AI Strategy. One of the recommendations was that the Centre for Data Ethics and Innovation (CDEI) create and publish national standards for the ethical development and deployment of AI. It will be interesting to see what role, if any, the CDEI will play in the new AI Strategy.

The announcement and development of the AI Strategy also take up a recommendation set out by the UK AI Council (an expert committee advising the UK government on the AI ecosystem) in its AI Roadmap of January 2021, which called for the development of just such a strategy.

The aims of the AI Roadmap are twofold. First, the AI Roadmap states that it is necessary to “double-down” on recent investments that the UK has made in AI, in a call for continued funding of the area. The second principle underpinning the AI Roadmap advocates that support for AI should reflect the rapidity with which the science and technology in AI are developing, in order to be adaptable to disruption. The approach is one that seeks to ensure that the UK is at the forefront of integrating approaches to ethics, security and social impacts in the development of AI in coming decades. This is seen as a necessary step to foster “full confidence in AI across society.”

What is most likely to be the UK approach?
It’s fair to say that it is now generally accepted that AI needs to be regulated: it can make decisions which affect human lives, and there needs to be confidence that those decisions are made safely, ethically and free from bias and discrimination. The debate now seems to focus on what the regulation of AI should look like and how it should work. Should it be restrictive, even to the extent of prohibiting certain uses of AI from the outset, with the aim of protecting consumers? Or should it be light touch to enable innovation, but give consumers as much information as possible so they can understand how the AI works and what data it uses, and can then object to a decision made by the AI?

The EU has clearly gone for the first option in its draft AI regulations. It defines what ‘high-risk’ AI is and sets out a system for registering stand-alone high-risk AI applications in a public EU-wide database. AI providers must offer ‘meaningful information’ about their systems and prepare conformity assessments.

It isn’t yet apparent which approach the UK will follow. The protection of consumers and the desire to prevent or correct any bias in an AI system will undoubtedly be important objectives for any government or regulator seeking to limit the potential harms of an AI system. The UK may well follow the approach suggested in the House of Lords December 2020 report which suggested different regulators would each address issues specific to their sector in coordination with each other, rather than adopt the EU’s cross-cutting approach.

Interestingly, the US seems to be taking a different approach again. In its April 2021 blog post, the FTC calls on those building AI systems to build in ‘truth, fairness and equity’ from the start: almost an ‘ethical by design’ approach, without prohibiting particular systems.

What does all this mean?
Whichever approach is taken, all are likely to have one thing in common: a requirement for AI system providers to be transparent about how the AI works, how it makes decisions, and on what basis.

Organisations can prepare by adopting the following measures:

ensuring Boards and senior management are fully briefed on the use of AI and the data it is using;
thinking through how they publicly explain the AI used in their business, products and services; and
ensuring their risk profile, systems and governance address the risks that AI brings, including what they will do if their AI falls within the EU’s ‘high-risk’ categorisation.
The conclusion from all of this is that, although regulatory frameworks are yet to be finalised (in the EU) or even formally defined (outside the EU), there can be no doubt that the regulation of the use and design of AI is heading our way.
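To make the checklist above a little more concrete, below is a minimal Python sketch of what an internal AI inventory record might look like, covering the purpose of a system, the data it uses, a plain-language explanation for consumers and a risk categorisation. The field names and risk categories are illustrative assumptions only; they are not drawn from the EU draft regulation or any UK guidance.

```python
# A minimal, hypothetical sketch of an internal AI transparency record.
# Field names and risk categories are illustrative assumptions, not taken
# from the EU draft regulation or any UK guidance.
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class RiskCategory(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"            # would likely trigger registration and conformity work
    PROHIBITED = "prohibited"


@dataclass
class AISystemRecord:
    name: str
    business_purpose: str              # what decision the system supports
    data_sources: List[str]            # categories of data the model uses
    decision_logic_summary: str        # plain-language explanation for consumers
    risk_category: RiskCategory
    board_briefed: bool = False        # governance: has senior management been briefed?
    mitigations: List[str] = field(default_factory=list)

    def needs_conformity_assessment(self) -> bool:
        """Flag systems that would likely fall within a 'high-risk' regime."""
        return self.risk_category is RiskCategory.HIGH


# Example usage: recording a credit-scoring model in the internal inventory.
record = AISystemRecord(
    name="credit-scoring-v2",
    business_purpose="Support consumer loan approval decisions",
    data_sources=["application form data", "credit bureau data"],
    decision_logic_summary="Scores applications against repayment history and income stability.",
    risk_category=RiskCategory.HIGH,
    board_briefed=True,
    mitigations=["annual bias audit", "human review of declined applications"],
)
print(record.needs_conformity_assessment())  # True
```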

Anna Felländer

It was in 2016 that Anna realised artificial intelligence (AI) was becoming the new general-purpose technology: a technology that would drastically impact the economy, businesses, people and society at large. At the same time, she noticed that AI was also causing a negative externality, a new type of digital pollution. Consumers have opted in to receive the benefits of digitalisation, but are simultaneously facing a dark cloud of bias, discrimination and lost autonomy for which businesses needed to be held accountable. In the traditional environmental sustainability model, organisations are held accountable for physical negative externalities, such as air or water pollution, by outraged consumers and sanctions handed down by regulators. Yet no one was holding technology companies accountable for the negative externalities, the digital pollution, of their AI technology. Regulators have had difficulty interpreting AI in order to regulate it appropriately, and customers did not understand how their data was being used in the black box of AI algorithms.

Anna’s multidisciplinary research group at the Royal Institute of Technology was the origin of anch.AI. Anna founded anch.AI in 2018 to investigate the ethical, legal and societal ramifications of AI. The anch.AI platform is an insight engine with a unique methodology for screening, assessing, mitigating, auditing and reporting exposure to ethical risk in AI solutions. anch.AI believes that all organisations must conform to their ethical values and comply with existing and upcoming regulation in their AI solutions, creating innovations that humans can trust. It acts as a form of ethical insurance for companies and organisations.