I believe in active decision making. In this data-driven AI era, that belief matters more than ever. Why? Because without an integrated technical, legal and societal approach, data-driven technologies and AI might scale in a way that contradicts our values. For example, they could lead to unintended privacy intrusion, discrimination, unethical nudging and social exclusion.
“Trust takes years to build, seconds to break, and forever to repair.”
All organisations need to address ethical considerations in their data-driven solutions and AI applications in order to survive in the long run. Historically, data-driven technologies and AI have been considered assets owned by the Tech department, but this is evolving into a cross-functional imperative. A number of potential and real threats highlight why an integrated technical, legal, ethical and societal approach to data-driven technologies and AI is a must.
So, here are the top five reasons you should be worried about data-driven ethical risks:
1. Maintain trust
We have all had privacy concerns when sharing personal information in exchange for services, and we expect the organisations we share our data with to keep it safe and not misuse it. A breach of this data trust harms the organisation in both the short and the long term. An example is the Cambridge Analytica scandal, in which data from Facebook was harvested and used to profile and target US voters. Facebook found out about this but failed to alert its users, which hurt its credibility enormously. If users’ mistrust of organisations grows, there is a risk that the adoption of AI will slow down.
2. Have human control of data and AI
I do not believe that we should fear the rise of Skynet, at least not in the next 100 years.
But I do believe that human values need to be at the core when deploying, adopting and governing all algorithms and the data they use. Robust, human-centred systems, spanning almost all functions of an organisation, are therefore essential. This highlights the importance of having a “human in the loop”: the organisation needs to be able to recognise when a reset of an AI model is necessary, when a data set is biased, and when and where a human decision needs to be included.
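To make this concrete, here is a minimal sketch in Python of one common human-in-the-loop pattern. Everything in it (the 0.90 confidence threshold, the `decide` routine and the example calls) is an illustrative assumption, not a description of any particular product: the model is allowed to decide on its own only when it is sufficiently confident, and uncertain cases are escalated to a person.

```python
# Minimal human-in-the-loop sketch (illustrative; the threshold and
# routine are invented for this example, not taken from a real system).
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.90  # below this, a person must make the call

@dataclass
class Decision:
    outcome: str       # "approve", "reject" or "needs_human_review"
    confidence: float  # the model's confidence in its own prediction
    decided_by: str    # "model" or "human"

def decide(prediction: str, confidence: float) -> Decision:
    """Accept the model's prediction only when it is confident enough."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return Decision(prediction, confidence, decided_by="model")
    # Uncertain cases are escalated to a human reviewer, not auto-decided.
    return Decision("needs_human_review", confidence, decided_by="human")

# Example: a model that is only 62% sure is not allowed to decide alone.
print(decide("approve", 0.97))  # decided by the model
print(decide("reject", 0.62))   # escalated to a human
```

The point of this design is that the escalation rule lives outside the model: governance can tighten or loosen the threshold without retraining anything.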
3. Detect bias
Bias is normally considered a human trait, but when we let AI make the decisions, those decisions should be free of bias, right? No, not really. There are two kinds of AI bias to take into consideration: bias in the data, and bias introduced by the creators, or coders. By selecting the wrong features (input parameters) you can, for example, discriminate against certain people or populations. A classic example is when residential postal codes become the main factor in deciding whether a bank customer should get a loan, leading to income and racial biases. In another example, Amazon recently announced a one-year pause on allowing the police to use its facial recognition tool, a major sign of the growing concern that the technology may lead to unfair treatment of African-Americans.
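A first screen for the kind of postal-code problem described above can be very simple. The sketch below, with made-up loan decisions and the widely used “four-fifths” rule of thumb as a threshold (both illustrative assumptions), compares approval rates across groups and flags a possible disparate impact.

```python
# Minimal bias-screening sketch: compare approval rates across groups.
# The decisions data and the 0.8 cut-off (the "four-fifths" rule of
# thumb) are illustrative assumptions, not real figures.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

decisions = [
    ("postcode_A", True), ("postcode_A", True),
    ("postcode_A", True), ("postcode_A", False),
    ("postcode_B", True), ("postcode_B", False),
    ("postcode_B", False), ("postcode_B", False),
]

rates = approval_rates(decisions)
ratio = min(rates.values()) / max(rates.values())
print(rates)  # {'postcode_A': 0.75, 'postcode_B': 0.25}
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Warning: possible disparate impact; review features and data.")
```

A check like this does not prove discrimination, but a low ratio is a clear signal that the features and training data deserve a closer look.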
4. Have deeper knowledge about the societal effects of data-driven technologies and AI
With AI becoming increasingly integrated into today’s society, we need to make sure that AI ethics training and knowledge keep pace. Without such training it is hard to know where ethical and societal AI risks already exist or might arise. Training in AI ethics is a must for everyone working with AI. Do you have that kind of training in your company?
“The only true wisdom is in knowing that you know nothing.” ― Socrates
5. Governance matters
AI should no longer be considered something only Tech departments own and implement to increase income and reduce costs, but rather an organisation-wide responsibility. Top executives need to be involved in setting the goals and governance around AI, and the CEO is ultimately responsible for it, as pointed out, for example, in a recent paper by the Sondergaard group. As the use of AI increases, the need for AI governance grows as well, and it should be in line with existing governance principles and values.
How to get started with understanding AI ethical risks
As argued above, these questions cannot wait, and I highly recommend that you try the free-of-charge Mini Risk Scanning Survey, a quick health check on your AI and ethics, available here. It will give you an indication of whether or not you are on track towards building an AI-sustainable organisation. The mini survey also includes a few tips on what to do to take your company to the next level towards AI Sustainability.
Depending on your result, you may also want to sign up for a full AI Ethical Risk profile for your AI application(s).
If you have any questions or would like to discuss the AI Sustainability Center Ethical Risk Profiler, feel free to contact me at jakob@aisustainability.org or via LinkedIn.
It was in 2016 that Anna realised that artificial intelligence (AI) was becoming the new general-purpose technology: a technology that would drastically impact the economy, businesses, people and society at large. At the same time, she noticed that AI was also causing a negative externality, a new type of digital pollution. Consumers had opted in to receive the benefits of digitalisation, but were simultaneously facing a dark cloud of bias, discrimination and lost autonomy that businesses needed to be held accountable for. In the traditional environmental sustainability model, organisations are held accountable for physical negative externalities, such as air or water pollution, by outraged consumers and by sanctions handed down by regulators. Yet no one was holding technology companies accountable for the negative externalities, the digital pollution, of their AI technology. Regulators had difficulty interpreting AI in order to regulate it appropriately, and customers did not understand how their data was being used in the black box of AI algorithms.
Anna’s multidisciplinary research group at the Royal Institute of Technology was the origin of anch.AI. Anna founded anch.AI in 2018 to investigate the ethical, legal and societal ramifications of AI. The anch.AI platform is an insight engine with a unique methodology for screening, assessing, mitigating, auditing and reporting exposure to ethical risk in AI solutions. anch.AI believes that all organisations must conform to their ethical values and comply with existing and upcoming regulation in their AI solutions, creating innovations that humans can trust. It is, in effect, ethical insurance for companies and organisations.