With the AI Act progressing steadily, one part of it (Article 9) sets out risk management requirements. Providers of high-risk AI systems will need to implement these provisions themselves or obtain outside support through services such as the anch.AI platform. Risk management here means coordinated activities to direct and control an organisation with regard to risks to health, safety, and fundamental rights. Compliance will be required as soon as 24 months after the AI Act enters into force.
Exact details on how to operationalise Article 9 are still lacking. However, researcher Jonas Schuett of the Centre for the Governance of AI has provided a comprehensive overview and practical suggestions for risk management within the scope of the Act.
Some key points of Schuett's article are summarised below:
1- Although compliance is not yet mandatory, organisations should find out now what awaits them.
2- Given the regulatory demands and the uncertainty of the risk landscape, many AI providers will realistically need to outsource parts of the risk management process, including testing. This is acceptable as long as the provider itself remains responsible for meeting the requirements.
3- In practice, providers should perform a first iteration of risk assessment and mitigation as early in the development process as possible and, based on the findings of that iteration, decide how to proceed.
4- Providers of low-risk AI systems should also operationalise risk management voluntarily: a system's risk category should not be assumed at a project's onset, and doing so helps avoid litigation and reputational risks.
5- Harmonised standards and common specifications for AI risk management are still needed.