The public and industry dialog around Artificial Intelligence (AI) has shifted considerably in the last couple of years as more companies embrace its benefits and applications become more visible in daily life. At the same time, multiple examples of negative unintended consequences have emerged, together with a deeper understanding of the various factors that have contributed to those negative effects.
Early attention has focused on the tech companies developing AI, but as the technologies mature and become more widely available, it is equally incumbent on the companies that implement AI products to take a proactive approach to identifying and managing the risks entailed.
But why is AI so special?
AI is still evolving, and while there are many uses we cannot even imagine today, it is important to recognize where we are in its overall development. You can generally distinguish between “narrow AI” use cases, i.e. machines that learn, classify, sense and act to enhance human work, where the AI is trained for a specific task, and so-called “general AI,” in which machines will act and interact independently with a level of deep learning and reasoning that could mimic or exceed that of humans, and which lies further in the future. Regardless of how our lives as citizens will look when we eventually share them with general artificial intelligence, narrow AI is already here.
Even with narrow AI, its role in decision making differentiates it from other technologies, and as such puts new demands on companies and individuals. At Telia, we are applying AI in various parts of our operations today, from customer service to understanding network performance, and in our Crowd Insights offerings that interpret mobility data for smart city applications.
To make sure we take on the benefits and challenges of the technology in an ethical and responsible way across our entire organization, we recently published nine guiding principles on what we call “Trusted AI.” The nine principles address and specify the following:
- Responsible and value-centric
- Rights respecting
- Safe and secure
- Transparent and explainable
- Fair and equal
- Continuous review and dialogue
As put by Mikael Karlsson, our VP and Chief Ethics and Compliance Officer, “Our customers can benefit from new and more agile services, raising their whole experience. We are also convinced that AI will help us in our commitment to actively work towards reaching the UN’s Sustainable Development Goals. But to get there, trust is vital. So, while we are embracing AI, we aspire to integrate ethical business practices into all parts of our business and strategy.”
I want to highlight three important pillars of the nine principles: value creation, rights-respecting, and continuous review.
Our first principle establishes that we are to be value-centric. This is the business approach, including that we want to contribute to a positive impact on society. How, for example, could our data contribute to smart cities?
Our third principle lays out our responsibility to respect human rights. In parallel with ethical considerations based on values, there is the international framework of human rights. The UN Guiding Principles on Business and Human Rights provide tools and terminology for assessing human rights risks and how to mitigate them, and this is integrated into our way of working with AI.
Our ninth principle encompasses the reality that we do not have all the answers, and that our principles need to be continuously discussed, tested and reviewed. Ethical standpoints and deliberations need to be raised in dialogue with different stakeholders.
As a founding partner of the AI Sustainability Center, we will have an optimal setting for our experts to exchange ideas, concerns, and challenges, in order to find fair solutions together with peers from other companies, academia, and the public sector.
To deepen understanding of these principles, a next step is to define timely actions; an early priority is the review and further development of our product development processes. Our aim is to keep refining the principles, based on industry best practice and dialogue with other stakeholders, e.g. within the AI Sustainability Center.
So, the guiding principles are the starting point, not the endpoint, of a longer journey in which multisector collaboration will continue to be vital to steer a responsible course. Important. Complex. Doable. With the right guidance and alliances now in place.
“We must free ourselves of the hope that the sea will ever rest. We must learn to sail in high winds.” -Aristotle Onassis