The socio-legal relevance of artificial intelligence

This report draws on socio-legal theory in relation to growing concerns over the fairness, accountability and transparency of societally applied artificial intelligence (AI) and machine learning.

What is a socio-legal dilemma?
Basically, a socio-legal dilemma arises when you need a deeper understanding of a phenomenon than simply classifying it as legal or illegal, which is particularly important in times of technological change. For example, a decade ago, technological developments completely disrupted how copyrighted content could be distributed online, creating a socio-legal dilemma in that an entire generation felt that the law was wrong. Before any legitimate market solutions were developed, there was a battle between social and legal norms, in which the underpinnings and historical notions of copyright were questioned and challenged by new possibilities.

Socio-legal research both theorizes law's role in society and studies its effects empirically. This makes it different from much legal research, which focuses on the normative content of existing law. A socio-legal dilemma of particular interest is when law itself is challenged, for example when new technologies create new behavior or market conditions.

My main interest as a socio-legally informed researcher is to understand the relationship between law and new technologies, and the consequences of their interplay for society as a whole. Many of the methods and technologies we now call artificial intelligence present particularly interesting challenges in relation to law and society that are of utmost importance to address and mitigate from a legal, societal and market perspective, specifically relating to transparency, understanding automated decision-making, and ensuring a trustworthy development and accountable market practices.

What are the main takeaways from this report?
In brief: I stress the need to include diverse competences when assessing applied AI, and that the methods need further development, particularly in relation to law and notions of fairness. And while I stress the importance of more transparency for so-called black-box applications, I also argue for an awareness of the multifaceted balancing of interests it brings.

I emphasize the need for a multidisciplinary take on how to understand the implications of AI as applied in society and on markets. The consequences of deploying potentially self-learning, highly efficient and increasingly autonomous technologies are particularly charged from a fairness perspective, for example when addressing the challenge of how to ensure an accountable development.

I particularly focus on a core question in the debate on ethical and responsible AI: the multifaceted concept of transparency. While one of the core challenges with applied AI is dealing with the explainability and opaqueness of black-box applications, AI transparency also opens up a complex set of interests to be balanced. The benefits of each kind of application need to be weighed at a societal level to determine the most appropriate degree of transparency. The importance of transparency and explainability needs to be assessed in relation to the stakes and needs posed in each context, which may mean that translations into ethical and legal requirements will be required.

Why is transparency in AI and data-driven technologies important for consumers?
A quick answer is trust. The trustworthiness of AI as applied in consumer-facing settings is of core importance, as consumers increasingly interact with AI on a daily basis. Trust is one of the key components required for general adoption of applied AI, effectively becoming a threshold for that adoption.

What do you think will happen in the regulatory space regarding transparency?
It is a bit early to say how the EU Commission will follow up on statements from the new president to put forward legislation on the human and ethical implications of AI. It is, however, easy to see that there has been a strong global push to develop ethical principles for how to use and develop AI, and the most central of these – for example the one from the EU Commission's High-Level Expert Group on AI – emphasize similar aspects: transparency, accountability, human-centricity, and continuous assessment.

At the same time, several key aspects are already regulated – such as anti-discrimination, data protection/privacy and aspects of transparency – and the question is rather how these regulations are practically implemented. In general, I think the next step will have to focus on process in relation to principle, more than on principles alone; ethics in general is strong on values but weaker on implementation. There is still much room for improved supervisory methods, and much will depend on how existing law is interpreted in relation to applied versions of AI.

In terms of transparency, we already see movement in the standardization sector, and I think it is fair to expect more demands for risk-based transparency approaches – that is, more calls for pre-assessments, the development of standards and perhaps labeling regimes.

What would you like to see happening in the regulatory space regarding transparency?
Seeing the transparency issues of applied AI as a wide field, I would like more focus on the fact that some data-driven and highly automated markets are particularly opaque. These pose a risk to consumer welfare that I would like to see better addressed from both a regulatory and an implementation perspective. People in general do not understand how their data is collected, where it travels or how to deal with it, and numerous studies conclude that this worries consumers. Given that contemporary AI development is highly data-dependent, this is a core challenge, not least from a trust perspective.

Given the asymmetric power relations between consumers and some market players, I think we need supervisory authorities with better tools to assess market behavior. That is, market supervision able to study and assess automated technologies that are active at a large scale but with individual distribution, as in individualized and targeted products. If we are to reach more of a balanced approach in those types of markets, I think it is the existing supervisory authorities that need better methods. Not necessarily to strike with the hammer, but to detect biased or discriminatory outcomes and mitigate them as early as possible; to counter "hyper-nudging" or individualized tools that do not take sustainable and sound consumer welfare into account; and ultimately, to ensure trustworthiness in data-driven markets.

Our lives will increasingly be enabled and affected by different kinds of artificial intelligence and machine learning in the years to come, since these methods and technologies have already proven to have great potential. This makes it all the more important to strengthen fairness and trust in applied AI through well-advised notions of accountability and transparency in multidisciplinary research of socio-legal relevance.

How do you work with the AI Sustainability Center to help guide organizations in an uncertain regulatory landscape?
Much of the current debate around guidelines on AI is pitched at a general level that cannot always be simply translated to the specific contexts where companies and authorities seek to deploy new AI systems or technologies. Some of the challenges are contextual – they can only be worked out in relation to the specific field or use – which is why the assessment often needs to be done at a granular, company-specific or agency-specific level. Some of the legal boundaries will not be easily drawn, but will require a reassessment of established law in light of new methods or technologies. Needless to say, I am particularly interested in the transparency and explainability challenges and their relation to trust. One risk that we try to assess is also of a normative kind, rather than one of optimization: the risk of autonomous technologies reproducing not only the beneficial and desired, but also the biased, skewed and discriminatory.

About the report
The report draws on socio-legal theory in relation to growing concerns over the fairness, accountability and transparency of societally applied artificial intelligence (AI) and machine learning. Its purpose is to contribute to a broad socio-legal orientation by describing legal and normative challenges posed by applied AI. To do so, it first analyses a set of problematic cases, e.g. image recognition based on gender-biased databases. It then presents seven aspects of transparency that may complement notions of explainable AI within computer-science AI research. The report finally discusses the normative mirroring effect of using human values and societal structures as training data for learning technologies, and concludes by arguing for the need for a multidisciplinary approach in AI research, development and governance.

"This is a report of great importance, which should be read by anyone involved from a technical, commercial, legal or regulatory perspective in AI. Stefan Larsson highlights the importance of looking at AI from both a legal and social perspective simultaneously and not least the importance of asking the right questions. As is often the case, coming to a shared view on the problem is often, at least in early stages, more important than the solutions. Larsson's report is a key contribution to coming to such shared view." – David Frydlinger, Managing Partner, Cirio Law Firm

Read the complete report here.