As the AI Act moves steadily through the EU legislative process, one part of it (Article 9) sets out requirements for a risk management system. Providers of high-risk AI systems will need to implement these provisions themselves or outsource support through services such as the anch.AI platform. Risk management here means coordinated activities to direct and control an organisation with regard to risks to health, safety and the protection of fundamental rights. These obligations will apply as soon as 24 months after the AI Act enters into force.
Exact details on how to operationalise Article 9 are still lacking. However, researcher Jonas Schuett from the Centre for the Governance of AI has provided a comprehensive overview and practical suggestions for risk management within the scope of the Act.
Some key points from his article are summarised below:
1- Although compliance is not yet mandatory, organisations should find out now what awaits them.
2- Given the regulatory demands and the uncertainty of the risk landscape, many AI providers will realistically need to outsource parts of the risk management and testing process. This is acceptable as long as the provider remains responsible for meeting the requirements.
3- In practice, providers should perform a first iteration of risk assessment and mitigation as early in the development process as possible and, based on the findings of that iteration, decide how to proceed (see the sketch after this list).
4- Providers of low-risk AI systems should also operationalise risk management voluntarily, both because a system's risk category should not simply be assumed at project onset and to avoid litigation and reputational risk.
5- Harmonised standards and common specifications on AI risk management are still needed.
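To make the iterative approach in point 3 more concrete, here is a minimal, purely illustrative sketch in Python of how a provider might record one assessment iteration and decide whether to proceed. The RiskItem structure, the 1-5 severity/likelihood scale and the acceptance threshold are hypothetical assumptions for illustration only; they are not prescribed by Article 9, by Schuett's article or by any particular platform.

```python
# Hypothetical illustration of recording one risk assessment iteration.
# The data model and thresholds below are assumptions, not regulatory requirements.
from dataclasses import dataclass, field


@dataclass
class RiskItem:
    description: str           # e.g. "biased outcomes for under-represented groups"
    severity: int              # 1 (negligible) .. 5 (critical)
    likelihood: int            # 1 (rare) .. 5 (almost certain)
    mitigations: list[str] = field(default_factory=list)

    @property
    def score(self) -> int:
        # Simple severity x likelihood scoring, chosen only for illustration.
        return self.severity * self.likelihood


def assess_iteration(risks: list[RiskItem], acceptance_threshold: int = 6) -> dict:
    """Summarise one iteration and flag risks that still need mitigation."""
    needs_mitigation = [r for r in risks if r.score > acceptance_threshold]
    return {
        "total_risks": len(risks),
        "above_threshold": [r.description for r in needs_mitigation],
        "proceed": len(needs_mitigation) == 0,
    }


if __name__ == "__main__":
    register = [
        RiskItem("Biased outcomes for under-represented groups",
                 severity=4, likelihood=3,
                 mitigations=["re-balance training data", "fairness testing"]),
        RiskItem("Unclear explanations of automated decisions",
                 severity=3, likelihood=2),
    ]
    print(assess_iteration(register))
```

However the record is kept, the purpose of that first iteration is the same as in point 3: to surface unacceptable risks early enough that the provider can mitigate them, or reconsider the design, before deciding how to proceed.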
It was in 2016 that Anna realised artificial intelligence (AI) was becoming the new general-purpose technology: a technology that would drastically impact the economy, businesses, people and society at large. At the same time, she noticed that AI was also causing a negative externality, a new type of digital pollution. Consumers had opted in to receive the benefits of digitalisation, but were simultaneously facing a dark cloud of bias, discrimination and lost autonomy for which businesses needed to be held accountable. In the traditional environmental sustainability model, organisations are held accountable for physical negative externalities, such as air or water pollution, by outraged consumers and by sanctions handed down by regulators. Yet no one was holding technology companies accountable for the negative externalities, the digital pollution, of their AI technology. Regulators struggled to interpret AI well enough to regulate it appropriately, and customers did not understand how their data was being used inside the black box of AI algorithms.
Anna’s multidisciplinary research group at the Royal Institute of Technology was the origin of anch.AI. Anna founded anch.AI in 2018 to investigate the ethical, legal and societal ramifications of AI. The anch.AI platform is an insight engine with a unique methodology for screening, assessing, mitigating, auditing and reporting exposure to ethical risk in AI solutions. anch.AI believes that all organisations must conform to their ethical values and comply with existing and upcoming regulation in their AI solutions, creating innovations that humans can trust. The platform acts as a form of ethical insurance for companies and organisations.