The key challenge many companies and institutions face is how to work practically with AI projects so that the proper stakeholder teams are involved and collaborate at every stage of the process, from development through training to implementation. In other words, how to move from strategy and tactics to logistics and hands-on project management. The risks are significant, and with the EU regulation around the corner, any company or institution that wants to harness the benefits of AI tools will need to build up the practical skills to ensure compliance and thoughtful control. Key issues with AI relate to a lack of transparency and to difficulties in assessing and assigning responsibility. In many ways, compliance is a muscle you need to keep training and improving, and it will be crucial to have more stakeholders included and engaged.
I believe that in-house lawyers and legal departments are in for a big change. We need to get closer to the other internal stakeholders and work together, instead of hiding in the old silo-style support role we are used to. AI is clearly a challenge for a legally oriented mindset in which logic and causality are key components. AI tools are complex, and you need to accept that there are black-box elements and that different roles and perspectives must collaborate in the assessments. As a lawyer, you will need to accept that you have less control and overview, at least initially.
I see a need to act fast and get ready. The EU regulation is, as you say, still in the works, but as with the GDPR, the AI Act will have big consequences, with far-reaching requirements and expectations from day one of its enactment. Companies and institutions that engage in AI projects without a compliance strategy and methodology are taking big risks. Heading down the wrong path will have grave consequences: heavy fines and irreparable harm to trust and brands. I believe it is important to treat compliance as a joint project with ownership shared among several stakeholders: legal/compliance, IT and the business. The analogy of compliance as a muscle is valuable in this respect. The leadership of businesses and institutions should urgently see to it that their stakeholder teams have the means for such long-term exercise. In the end, you want a robust and well-trained responsible-AI muscle within the internal organisation.
It was in 2016 that Anna realised artificial intelligence (AI) was becoming the new general-purpose technology: a technology that would drastically impact the economy, businesses, people and society at large. At the same time, she noticed that AI was causing a negative externality, a new type of digital pollution. Consumers had opted in to receive the benefits of digitalisation, but were simultaneously facing a dark cloud of bias, discrimination and lost autonomy for which businesses needed to be held accountable. In the traditional environmental sustainability model, organisations are held accountable for physical negative externalities, such as air or water pollution, by outraged consumers and by sanctions handed down by regulators. Yet no one was holding technology companies accountable for the negative externalities, the digital pollution, of their AI technology. Regulators struggled to interpret AI well enough to regulate it appropriately, and customers did not understand how their data was being used inside the black box of AI algorithms.
Anna’s multidisciplinary research group at the Royal Institute of Technology was the origin of anch.AI. Anna founded anch.AI in 2018 to investigate the ethical, legal and societal ramifications of AI. The anch.AI platform is an insight engine with a unique methodology for screening, assessing, mitigating, auditing and reporting exposure to ethical risk in AI solutions. anch.AI believes that all organisations must conform to their ethical values and comply with existing and upcoming regulation in their AI solutions, creating innovations that humans can trust. In effect, it is ethical insurance for companies and organisations.