For the first time, the EU Commission is proposing a targeted harmonization of national liability rules for AI, making it easier for victims of AI-related damage to obtain recourse and compensation. This is in line with, and will complement, the directly effective AI Act proposal that is currently being hammered out by the EU regulatory machinery.
The scope covers, for example, breaches of privacy or damage caused by safety issues. The new rules will, for instance, make it easier to obtain compensation if a job seeker has been discriminated against in a recruitment process involving AI technology.
The Directive simplifies the legal process for victims of AI-related damage by introducing two important features. First, there is a ‘presumption of causality’ in circumstances where a relevant fault has been established and a causal link to the AI system’s performance seems reasonably likely. Second, victims are granted a right of access to evidence from companies and suppliers where high-risk AI systems are involved.
This means broader protection for victims, who will have greater access to information and will be relieved of part of the burden of proof in relation to damage caused by AI systems.
The aim is to strike a balance between protecting consumers and fostering innovation: removing barriers that keep victims from accessing compensation, while providing guarantees for the AI sector through the right to contest a liability claim based on a presumption of causality.
For providers of AI systems, this means an increased need to ensure that a robust responsible-AI compliance function is in place.
It was in 2016 that Anna realised artificial intelligence (AI) was becoming the new general-purpose technology: a technology that would drastically impact the economy, businesses, people and society at large. At the same time, she noticed that AI was causing a negative externality — a new type of digital pollution. Consumers had opted in to receive the benefits of digitalization, but were simultaneously facing a dark cloud of bias, discrimination and lost autonomy for which businesses needed to be held accountable. In the traditional environmental sustainability model, organisations are held accountable for physical negative externalities, such as air or water pollution, by outraged consumers and by sanctions handed down by regulators. Yet no one was holding technology companies accountable for the negative externalities — the digital pollution — of their AI technology. Regulators struggled to interpret AI well enough to regulate it appropriately, and customers did not understand how their data was being used inside the black box of AI algorithms.
Anna’s multidisciplinary research group at the Royal Institute of Technology was the origin of anch.AI. Anna founded anch.AI in 2018 to investigate the ethical, legal and societal ramifications of AI. The anch.AI platform is an insight engine with a unique methodology for screening, assessing, mitigating, auditing and reporting exposure to ethical risk in AI solutions. anch.AI believes that all organisations must conform to their ethical values and comply with existing and upcoming regulation in their AI solutions, creating innovations that humans can trust. It serves as a form of ethical insurance for companies and organisations.