What is the EU AI Act?

Navigating the requirements of the EU Artificial Intelligence Act will be a massive task for organisations with high-risk systems. Although awareness of the Act has been growing rapidly, at anch.AI we still notice a general lack of urgency around it. We believe the watershed moment will come when the EU AI Act is actively enforced. Unfortunately, we currently see a lag in real action, much as GDPR was not fully embraced proactively.

To foster more interest in the subject, we have decided to provide the following guide for our community. We hope to raise further awareness and assist those new to the Act and its requirements.

Why should I care about the EU AI Act?
The need for Responsible AI across society is greater than ever. AI systems such as ChatGPT are showing massive benefits and disruptive potential, but also the capacity for harm: their output can threaten safety and encourage misinformation and discrimination.

The EU Artificial Intelligence Act will be a watershed moment in the culture change needed for AI governance. Gone will be the days of AI development confined to tech teams. With the implementation of the Act, cross-functional consideration of the legal, business and societal effects of building AI will be ever more present. Companies and institutions that engage in AI projects without AI governance are taking big risks. The leadership of businesses and institutions should urgently see to it that their stakeholder teams have the means for such a long-term exercise.

What happens if I am not compliant with the EU AI Act?
You will lose out on robust, well-trained AI solutions and on AI governance within your organisation, harming society and exposing your business to a massive amount of commercial and reputational risk.

Financially speaking, non-compliance penalties can hit your company's pockets hard. Fines have been proposed of up to €30 million or 6% of global annual turnover, whichever is higher. Submitting false or misleading documentation to regulators can also lead to fines.
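To make the scale of these penalties concrete, here is a minimal sketch of the proposed ceiling calculation (the €30 million and 6% figures come from the draft text; the function itself is purely illustrative, and the final Act may set different amounts):

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Proposed ceiling for the most serious infringements in the
    draft EU AI Act: EUR 30 million or 6% of worldwide annual
    turnover, whichever is higher."""
    return max(30_000_000, 0.06 * global_annual_turnover_eur)

# A company with EUR 2 billion in turnover faces a ceiling of
# EUR 120 million, well above the EUR 30 million floor.
print(max_fine_eur(2_000_000_000))  # 120000000.0
```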

How do I become compliant with the EU AI Act?
Requirements and obligations, including the need for conformity assessments, will be entirely dependent on the risk category of the AI solution under examination.

Thus, the first step is to understand the risk category of a given solution. The core risk categories range from prohibited (unacceptable) risk, to high risk, to limited risk, to minimal risk.

Attaining high-risk status will result in your organisation needing to take on the extensive requirements and obligations detailed below. Luckily, most AI projects fall into the minimal-risk category and are not subject to these requirements and obligations. However, providers of non-high-risk AI systems will be able to apply the requirements for high-risk systems voluntarily.
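To make the tiering concrete, here is a minimal sketch of how an organisation might encode the four tiers when triaging its AI use cases. The tier names follow the draft text; the `UseCase` attributes and the screening logic are illustrative assumptions of ours, not criteria taken from the Act:

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "unacceptable risk"   # banned outright
    HIGH = "high risk"                 # conformity assessment required
    LIMITED = "limited risk"           # transparency obligations
    MINIMAL = "minimal risk"           # no mandatory obligations

@dataclass
class UseCase:
    name: str
    uses_subliminal_manipulation: bool = False
    used_for_social_scoring: bool = False
    in_high_risk_area: bool = False      # e.g. employment, law enforcement
    interacts_with_humans: bool = False  # e.g. chatbots, deepfakes

def triage(uc: UseCase) -> RiskTier:
    """Illustrative first-pass screening; a real assessment requires
    legal review against the full criteria in the Act's annexes."""
    if uc.uses_subliminal_manipulation or uc.used_for_social_scoring:
        return RiskTier.PROHIBITED
    if uc.in_high_risk_area:
        return RiskTier.HIGH
    if uc.interacts_with_humans:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(triage(UseCase("CV screening", in_high_risk_area=True)))  # RiskTier.HIGH
```

A sketch like this only helps structure an initial inventory; the actual category of a use case must be established against the legal text.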

Some solutions can never become compliant because they fall under prohibited risk. Outright prohibited solutions are those that pose a clear threat to people's safety, livelihoods and rights because of the 'unacceptable risk' they create. Accordingly, it will be prohibited to place them on the market, put them into service or use them in the EU.

These include (with potential special exemptions):

  • AI systems that deploy harmful manipulative ‘subliminal techniques’;
  • AI systems that exploit specific vulnerable groups (physical or mental disability);
  • AI systems used by public authorities, or on their behalf, for social scoring purposes;
  • Real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes, except in a limited number of cases.

What is a Limited-Risk AI According to the EU AI Act?
Limited-risk solutions highlighted include systems that interact with humans (e.g. chatbots), emotion recognition systems, biometric categorization systems, and AI systems that generate or manipulate image, audio or video content (e.g. deepfakes).

What is a High-Risk AI According to the EU AI Act?
High-risk AI systems are those that have a negative impact on people’s safety or their fundamental rights. The draft text distinguishes between two categories of high-risk AI systems.

The first category covers AI systems used as a safety component of a product, or as a product themselves, subject to Union legislation on the harmonisation of health and safety (e.g. toys, aviation, cars, medical devices, lifts).

The second category covers AI systems used in eight specific areas highlighted by the EU (the list can be updated):

  • Biometric identification and categorization of natural persons;
  • Management and operation of critical infrastructure;
  • Education and vocational training;
  • Employment, management of workers and access to self-employment;
  • Access to and enjoyment of essential private services and public services and benefits;
  • Law enforcement;
  • Migration, asylum and border control management;
  • Justice and democratic processes.


What is the process to become compliant with the EU AI Act?
The process will differ depending on your risk level. Limited-risk systems are subject to a limited set of transparency obligations.

For high-risk solutions, the list of demands is extensive and can currently be broken down into broad categories of requirements and obligations which must be fulfilled under a conformity assessment to ensure that solutions do not contravene Union values. Many requirements should be embedded in the solution and enacted as early as possible to reduce risk and cost.

Monitoring and reporting obligations for providers of AI within this conformity assessment can be broken down into the following categories (see the sketch after this list for one way to track them):

  • Requirements on the quality of the data sets used to train, validate and test the AI systems: the data sets must be relevant, representative, free of errors and complete, while having ‘the appropriate statistical properties as regards the persons or groups of persons on which the high-risk AI system is intended to be used’;
  • Requirements for technical documentation;
  • Requirements for record-keeping in the form of automatic recording of events;
  • Requirements for transparency and the provision of information to users;
  • Requirements for human oversight;
  • Requirements for robustness, accuracy and cybersecurity.
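As a minimal sketch of how these categories could be tracked internally, assuming nothing beyond the list above (the checklist structure and category names are our own illustrative choices, not a format prescribed by the Act):

```python
from dataclasses import dataclass, field

REQUIREMENT_CATEGORIES = [
    "data and data governance",
    "technical documentation",
    "record-keeping (automatic event logging)",
    "transparency and provision of information to users",
    "human oversight",
    "robustness, accuracy and cybersecurity",
]

@dataclass
class ConformityChecklist:
    """Flat per-system view of which requirement categories are covered."""
    system_name: str
    status: dict = field(
        default_factory=lambda: {c: False for c in REQUIREMENT_CATEGORIES}
    )

    def mark_done(self, category: str) -> None:
        if category not in self.status:
            raise KeyError(f"Unknown requirement category: {category}")
        self.status[category] = True

    def outstanding(self) -> list:
        return [c for c, done in self.status.items() if not done]

checklist = ConformityChecklist("credit-scoring model")
checklist.mark_done("technical documentation")
print(checklist.outstanding())  # the five categories still open
```

In practice each category expands into many concrete controls, but even a flat checklist like this makes gaps visible early in development, when they are cheapest to fix.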


Is There an Easier Way to Understand My AI Solution’s Current Status around the EU AI Act?
Yes! Our anch.AI ethical AI governance platform provides a quick and easy assessment to gauge the risk level of any AI use case at any stage of development with our EU AI Act HealthCheck feature. All screened use cases also get a general overview and a score showing where each use case stands in terms of risk category and progress towards achieving the EU AI Act requirements. Furthermore, we offer our EU AI Act Deep-Dive: a detailed assessment of how far the specific requirements set in the draft regulation are being met. Recommendations are provided for the requirements that are not yet met.

When are the deadlines around the EU AI Act?

  • March 2023 – expected plenary vote in Parliament.
  • Spring 2023 – trilogue to start, led by the Swedish presidency.
  • 2023–24 – adoption; directly effective 20 days after publication.
  • 2024–26 – grace period for requirements.

How does the EU AI Act differ from GDPR?
Spending on GDPR compliance in the EU exceeded $16 billion, and the AI Act is far more extensive. The AI Act means that organisations need to consider ethical AI across all functions within the organisation. Furthermore, robust and generalised AI governance practices must be operationalised in ways that do not slow down existing practices.

What kind of companies need to worry about the EU AI Act?
All companies using or intending to use AI will need to take note, at the very least to act early and identify which use cases are low versus high risk. However, several sectors have been noted as central to the high-risk category and should therefore be paying extensive attention to the Act and its forthcoming requirements and obligations.

What type of platform can help me become compliant?
Article 9 of the Act specifies the need for standardised risk management provisions around AI. Most providers of high-risk AI will need support with these provisions, for example through AI governance and risk management platforms. Such platforms support coordinated activities to direct and control an organisation with regard to risks in the areas of health, safety and the protection of fundamental rights. This will need to be done as soon as 24 months after the AI Act enters into force.

What is special about the anch.AI platform in helping me become compliant?
We are an end-to-end AI Governance SaaS platform. The platform is a manifestation of the state-of-the-art anch.AI 5-step methodology. Our multidisciplinary research on governing AI ethical risks, published in Digital Society, started in 2016. Almost 200 AI use cases have been screened for ethical risk exposure using our risk management methodology. Through workflow and orchestration, the platform connects people, data and processes.

Our platform supports organisations at any stage of maturity with AI governance and risk management for ethical AI, and prepares them for a suite of requirements across standards and regulations. The platform is the first to make cross-functional governance of AI feasible by linking business, legal and tech functions. Other AI platforms with Responsible AI and AI Governance features do not offer a cross-functional quantitative and qualitative risk management system. For example, AI governance must go above and beyond auditing machine learning pipelines to meet the EU AI Act requirements for high-risk AI solutions.

We quantitatively assess models and data through MLOps to provide bias and fairness risk assessments. Our qualitative approach consists of extensive cross-functional self-assessment screenings. We use mixed-method approaches to analysis and reporting through our mitigation modules to contextually identify and measure appropriate metrics. We measure and score the responsibility of AI against our 8 defined risks, 4 defined fundamentals, and 4 defined pitfalls. We provide data-driven benchmarking through our insights engine to assist organisations in deciding what level of risk is acceptable in their context.

We perform mitigation based on prioritised and tailored recommendations generated by the assessment phase. These actions are sent to management platforms and to those responsible for them. The status of these actions can be tracked, and completing them aids in risk mitigation. Third-party risks are managed through the vendor auditing features provided by the platform. We perform this process at any point of the AI life cycle, with clear documentation and constant monitoring reported across business, legal and IT functions.

What’s Our Take-home Message?
Organisations need Responsible AI governance not just to comply with EU regulations; they need to stay true to their values and ensure these are not violated in this new data-driven AI era. We believe the EU AI Act will be a vital push towards the culture change needed for organisations to reach fully responsible AI governance. The anch.AI platform will help in every step along the way.

Anna Felländer

It was in 2016 that Anna realised artificial intelligence (AI) was becoming the new general-purpose technology: a technology that would drastically impact the economy, businesses, people and society at large. At the same time, she noticed that AI was also causing a negative externality, a new type of digital pollution. Consumers have opted in to receive the benefits of digitalisation, but are simultaneously facing a dark cloud of bias, discrimination and lost autonomy for which businesses need to be held accountable. In the traditional environmental sustainability model, organisations are held accountable for physical negative externalities, such as air or water pollution, by outraged consumers and by sanctions handed down by regulators. Yet no one was holding technology companies accountable for the negative externalities, the digital pollution, of their AI technology. Regulators have had difficulty interpreting AI in order to regulate it appropriately, and customers did not understand how their data was being used in the black box of AI algorithms.

Anna’s multidisciplinary research group at the Royal Institute of Technology was the origin of anch.AI. Anna founded anch.AI in 2018 to investigate the ethical, legal and societal ramifications of AI. The anch.AI platform is an insight engine with a unique methodology for screening, assessing, mitigating, auditing and reporting exposure to ethical risk in AI solutions. anch.AI believes that all organisations must conform to their ethical values and comply with existing and upcoming regulation in their AI solutions, creating innovations that humans can trust. It is ethical insurance for companies and organisations.