The importance of AI ethics and opportunities for organizations as a result of AI legislation

We are proud to welcome Ingrid Stenmark to the AI Sustainability Center. With an extensive career in the telecommunications industry, Ingrid brings expertise in legal, risk, and compliance to ethical AI. Read on for more about Ingrid, her perspective on AI ethics, and how the new AI legislation can bring significant opportunities for organizations.

You were part of the team that established AI ethical principles at Telia Company. Why do you think AI ethics is important?

AI will increasingly impact people’s lives, in many positive ways. There is, however, also the other side of the coin, whereby the use of AI can bring negative consequences. Discovering the pitfalls and establishing ethical guidance is crucial in order to fully capture the value of AI. As a person, I am also very purpose- and values-driven, so ethics is always top of mind.

What is your view on the EU proposal on AI legislation?

I welcome that the EU has taken a clear stance on trustworthy AI by setting boundaries from an ethical perspective. The risk-based approach is sensible: legal obligations primarily target use cases that pose unacceptable or high risk, while self-regulation is encouraged for the remainder. At the same time, it’s also vital that the proposal is reviewed from a practical point of view to minimize bureaucracy and ensure it will not be too burdensome, especially for start-ups and other small and medium-sized enterprises.

What is the opportunity for organizations?

I see an opportunity for companies as well as the public sector to take advantage of the business benefits that come with using AI responsibly and in a way that people trust. Get to know your AI risks by kick-starting an internal risk review and initiating work to define ethical principles. This will also prepare you for when the EU regulation comes into force.

Anna Felländer

It was in 2016 that Anna realised artificial intelligence (AI) was becoming the new general-purpose technology: a technology that would drastically impact the economy, businesses, people and society at large. At the same time, she noticed that AI was also causing a negative externality — a new type of digital pollution. Consumers had opted in to receive the benefits of digitalization, but were simultaneously facing a dark cloud of bias, discrimination and lost autonomy for which businesses needed to be held accountable. In the traditional environmental sustainability model, organisations are held accountable for physical negative externalities, such as air or water pollution, by outraged consumers and by sanctions handed down by regulators. Yet no one was holding technology companies accountable for the negative externalities — the digital pollution — of their AI technology. Regulators had difficulty interpreting AI well enough to regulate it appropriately, and customers didn’t understand how their data was being used in the black box of AI algorithms.

Anna’s multidisciplinary research group at the Royal Institute of Technology was the origin of anch.AI. Anna founded anch.AI in 2018 to investigate the ethical, legal and societal ramifications of AI. The anch.AI platform is an insight engine with a unique methodology for screening, assessing, mitigating, auditing and reporting exposure to ethical risk in AI solutions. anch.AI believes that all organisations must conform to their ethical values and comply with existing and upcoming regulation in their AI solutions, creating innovations that humans can trust. It serves as ethical insurance for companies and organisations.