Navigating the requirements of the EU Artificial Intelligence Act will be a massive task for organisations operating high-risk systems. Although awareness of the Act has grown considerably, at anch.AI we still observe a general lack of urgency around it. We believe the watershed moment will come when the EU AI Act is actively enforced. Unfortunately, we currently see a lag in real action, much as GDPR was not embraced proactively before its enforcement date.
Sweden took on the presidency of the EU Council for the third time on January 1, 2023. The presidency is tasked with progressing the Council’s work on EU legislation, ensuring the continuity of the EU’s agenda, and ensuring that legislative processes take place in an orderly manner and that Member States cooperate. To do so, the presidency must act as what is often described as an honest broker.
As the AI Act progresses, one part of it (Article 9) specifies risk management provisions. Providers of high-risk AI systems will need to implement these provisions on their own or seek support through services such as the anch.AI platform. This will involve coordinated activities to direct and control an organisation with regard to risks to health, safety and the protection of fundamental rights, and it will need to be in place as soon as 24 months after the AI Act enters into force.
Focus has increased on moving the proposed EU regulation on responsible AI, the AI Act, forward. The landmark proposal to regulate artificial intelligence in the EU through a risk-based approach is under discussion, with the aim of moving it through the plenary vote in the European Parliament and then into trilogue discussions with the Commission and the Council. The scope of the broader definition of AI systems, and which systems should be categorised as high-risk, are no doubt key parts of the regulation and in focus as the work progresses.