Sandboxes in Focus but Far from a Kids' Game as AI Regulation Progresses

22.10.05|Anna Felländer

There is increased focus on moving forward with the proposed EU regulation on responsible AI – the AI Act.

The landmark proposal to regulate artificial intelligence in the EU following a risk-based approach is under discussion, with the aim of moving it through the plenary vote in the European Parliament and thereafter the trilogue discussions with the Commission and the Council. The scope of the broad definition of AI systems, and the category of what should constitute high-risk systems within it, are no doubt key parts of the regulation and in focus as the work progresses.

On the Council side, the Czech presidency continues to present compromise texts attempting to, inter alia, narrow the scope of what should be considered high-risk AI systems. Under this approach, a system would only qualify as high-risk if it has a major impact on decision-making. Meanwhile, the Committee on Legal Affairs (JURI) at the European Parliament adopted its opinion on the AI Act, recommending that the AI Board be a powerful EU body with its own legal personality and a strong role. The European Parliament's co-rapporteurs, on their end, continue to find common ground, particularly in the area of sandboxes and AI test environments. In their latest compromise text, they stipulate that each member state must establish at least one AI regulatory sandbox, which should be operational when the regulation enters into force. The text includes the possibility of setting up sandboxes at the regional or local level, or jointly with other countries. The Commission would also be able to set up sandboxes in collaboration with the European Data Protection Supervisor or the member states.

Overall, the discussions have so far managed to progress on several less sensitive articles, but the debate might heat up in the weeks to come. Of particular interest is the inclusion of open-source general-purpose AI (GPAI) systems. Proponents of such inclusion argue that it is needed in order to steer innovation away from exploitative, harmful, and unsustainable practices. An opposing viewpoint is that such inclusion would create legal liability for open-source GPAI models, undermining their development and thereby further concentrating power over the future of AI in large technology companies.

Needless to say, there are thorny areas to address ahead. But rest assured, the EU regulatory machinery is in steady motion, and there is considerable prestige attached to setting the standards for algorithms and AI.