AI Sustainability Center affiliated researcher Stefan Larsson recently published a chapter in “The European Union and the Technology Shift.” Read below for more about Stefan’s research and his perspective on AI ethics guidelines as a governance tool.
This book, and your chapter in particular, is especially timely now that the EU has announced its proposed AI legislation. In the EU proposal, AI is defined rather broadly. Can you explain the importance of defining AI?
It’s at the core of how broad the regulation will be and what activities it includes, with all the possible implications that follow from that. The proposal looks pretty ambitious in its scoping in this regard. And, to be fair, there is an inevitable conceptual challenge, with real-world consequences, in framing the definition, largely relating to the leap from using “artificial intelligence” to describe a dynamic and flexible research discipline to using “artificial intelligence” as a regulatory concept linked to bans, requirements and fines. This change of purpose is hard: where research can be stimulated by fuzziness, law generally seeks clarity.
You write about “society-in-the-loop.” Can you explain this concept and how industry can incorporate “society-in-the-loop”?
“Society-in-the-loop” relates to how you frame a problem and which competences you should include. Somewhat linked to the definition of AI, and to who has the competence to understand its applied challenges, I use Iyad Rahwan’s playful development of the “Human-in-the-Loop” concept to stress the need for a multidisciplinary approach to AI, that is, to keep society “in the loop”. I do this in two main ways:
Ultimately, it’s a way to stress the need for multidisciplinary understanding of how AI interacts with human structures and social norms.
Over the last few years we have seen an increase in AI ethics guidelines. How can ethical guidelines steer the development of AI, and how can they function as a key component of AI governance?
As I argue in the chapter, ethics guidelines are already steering development in the sense that they guide strategies, legal proposals and sectoral awareness. For example, in a study we did last year on national AI strategies (see link below), it was clear that many member states’ strategic notions of AI were influenced by the EU policy level’s focus on human-centricity and its emphasis on trustworthiness, including ideas on transparency and fairness.
Given that the guidelines tend to be high-level, I’d encourage more sectoral and industrial initiatives on both standardisation and codes of ethics. Even if some sectors don’t necessarily deal with high-risk AI in the sense of the EU proposal, the consequences for a brand, or possibly an entire market, may be severe if they offer a service or product that is perceived as discriminatory or otherwise unfair.
But, of course, ethics guidelines are strong as value-based pointers yet weak on procedure, as they tend to lack mechanisms for enforcing their own normative claims. Enforcement, on the other hand, is one of the clearest strengths of law. So, part of the development includes calls for AI-focused regulation, of which the EU proposal mentioned above is the earliest general example. And that proposal is very much influenced by the ethics guidelines prepared by the High-Level Expert Group on AI for the European Commission.
Ethics guidelines show glimpses of where law is heading, and they can help different sectors tackle the challenges they may face. So it makes sense that some key aspects will be strengthened through law, while other aspects will depend on awareness within industries like retail, which can develop codes of ethics in order to jointly figure out what levels of transparency, for example, are feasible and needed.