Sandboxes in Focus but Far from a Kids' Game as the AI Regulation Progresses

Momentum is building to move the proposed EU regulation on responsible AI – the AI Act – forward. The landmark proposal, which takes a risk-based approach to regulating artificial intelligence in the EU, is under discussion with the aim of carrying it through the plenary vote in the European Parliament and, thereafter, the trilogue negotiations with the Commission and the Council. The definition of AI systems and the question of which systems should count as high-risk are, unsurprisingly, key parts of the regulation and remain in focus as the work progresses.

On the Council side, the Czech presidency continues to present compromise texts attempting, inter alia, to narrow the scope of what should be considered high-risk AI systems. Under this approach, a system would qualify as high-risk only if it has a major impact on decision-making. Meanwhile, the Committee on Legal Affairs (JURI) at the European Parliament adopted its opinion on the AI Act, recommending that the AI Board become a powerful EU body with its own legal personality and strong involvement. The European Parliament's co-rapporteurs, for their part, continue to find common ground, particularly on sandboxes and AI test environments. Their latest compromise text stipulates that each member state must establish at least one AI regulatory sandbox, to be operational by the time the regulation enters into force. The text allows sandboxes to be set up at the regional or local level, or jointly with other countries, and the Commission would also be able to set up sandboxes in collaboration with the European Data Protection Supervisor or the member states.

Overall, the discussions have so far managed to progress on several less sensitive articles, but the debate may heat up in the weeks to come. Of particular interest is the inclusion of open-source general-purpose AI (GPAI) systems. Proponents of such inclusion argue that it is needed to steer innovation away from exploitative, harmful, and unsustainable practices. The opposing view is that inclusion would create legal liability for open-source GPAI models, undermining their development and thereby further concentrating power over the future of AI in large technology companies.

Needless to say, there are thorny areas to address ahead. But rest assured, the EU regulatory machinery is in steady motion, and the prestige at stake in setting the standards for algorithms and AI is high.

Anna Felländer

It was in 2016 that Anna realised artificial intelligence (AI) was becoming the new general-purpose technology: a technology that would drastically impact the economy, businesses, people and society at large. At the same time, she noticed that AI was also causing a negative externality: a new type of digital pollution. Consumers had opted in to receive the benefits of digitalisation, but were simultaneously facing a dark cloud of bias, discrimination and lost autonomy for which businesses needed to be held accountable. In the traditional environmental sustainability model, organisations are held accountable for physical negative externalities, such as air or water pollution, by outraged consumers and by sanctions handed down by regulators. Yet no one was holding technology companies accountable for the negative externality of their AI technology: its digital pollution. Regulators had difficulty interpreting AI in order to regulate it appropriately, and customers did not understand how their data was being used inside the black box of AI algorithms.

Anna's multidisciplinary research group at the Royal Institute of Technology was the origin of anch.AI. Anna founded anch.AI in 2018 to investigate the ethical, legal and societal ramifications of AI. The anch.AI platform is an insight engine with a unique methodology for screening, assessing, mitigating, auditing and reporting exposure to ethical risk in AI solutions. anch.AI believes that all organisations must conform to their ethical values and comply with existing and upcoming regulation in their AI solutions, creating innovations that humans can trust. The platform serves as a form of ethical insurance for companies and organisations.