Debugging the AI gender bias bug

A friend of mine, Robin Hauser, told me that when she did research for her award-winning film, CODE: Debugging the Gender Gap, she asked everyday people a lot of questions and found that most of them didn’t know what unconscious bias was. Perhaps more disturbing, 99% said that they themselves were not biased, but that they believed others were. So what happens when we add AI to that equation? AI, or artificial intelligence, is advancing at a rapid pace, and many of us already interact with it on a daily basis – often without even realizing it. AI is used for everything from evaluating credit risk scores to deciding what you see in your Instagram feed, or which candidates will be hired in recruitment.

I met Robin several years ago, when she was working on her film about unconscious bias. I have had the good fortune to keep in contact with her over the years and follow her work, most recently on unconscious bias and the grip it holds on our social and professional lives.

A few months ago I was invited to speak at the launch of CHAIR, the Chalmers AI Research Center in Gothenburg. For some reason I googled Robin, and saw that she had started to speak about the magnified impact of AI and gender bias. Her recent TED Talk resonated with me, and earlier this week we invited Robin to give the keynote at our General Insights seminar at the AI Sustainability Center. Her talk was followed by a panel discussion with the following speakers: Lena Ag, Director General, Swedish Gender Equality Agency; Tonima Afroze, software engineer at Klarna; Anna Berggen, CIO at TRR; and Sara Övreby, Public Policy and Government Relations Manager at Google. A few takeaways from the event:

  • AI will improve our lives. But, as the technology relies on self-learning algorithms working on historical data sets, there is a risk that it may unintentionally enhance existing and historical gender biases.
  • There are no easy fixes, due to the lack of diverse data sets.
  • There is definitely a need for better governance, codes of ethics, and testing, before rushing to launch a product.
  • Bias, discrimination, and other pitfalls of AI can’t be delegated to an “ethics board,” all parts of an organization need to be involved.
  • There is a need for more gender diversity – not just in the tech industry, but in tech education.
  • Companies and public sector organizations need to actively work to address these issues and to ensure that AI is not accelerating gender biases on the labor market.

Also this week, a new report by AI Now indicated that there is a diversity crisis in the AI sector across gender and race. Recent studies found that only 18% of authors at leading AI conferences are women (i), and more than 80% of AI professors are men (ii); the disparity is just as extreme in the industry itself.

However, having spent the last 20 years in the tech sector, I can say that this disparity is really nothing new. The teams that designed the telecom systems of the past 30 years faced the same challenges, and many companies have long struggled to improve their diversity statistics.

What is new, however, is the self-learning, self-propagating, and self-scaling nature of AI, which means there is a real risk that bias and discrimination of all sorts can and will be amplified in ways we have not even begun to understand.

It’s clear that data is increasingly revealing an unfair world, and we need to keep working on it, actively and proactively. It is crucial to find systems and processes to ensure algorithms are programmed, reviewed, monitored, and audited regularly – to ensure they aren’t biased, and don’t become biased over time. The effects of how AI is scaling, in a broader societal perspective, need much greater attention. That is in fact our mission at the AI Sustainability Center, and why we think it is so important to provide organizations with an operational framework for assessing the pitfalls and risks, and finding strategies to address them.
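To make the idea of regular auditing a little more concrete: one common starting point is to compare a model’s selection rates across groups, sometimes called the demographic parity difference. The sketch below is purely illustrative – the data is hypothetical, the 0.1 threshold is an invented policy choice, and a real audit would use many metrics, not just this one.

```python
# Minimal sketch of one routine bias check: demographic parity difference.
# All data here is hypothetical; a real audit would run on production
# predictions and combine several fairness metrics.

def selection_rate(decisions):
    """Fraction of positive (e.g. 'hire') decisions in a group."""
    return sum(decisions) / len(decisions)

def demographic_parity_difference(group_a, group_b):
    """Absolute gap in selection rates between two groups.
    0.0 means the model selects both groups at the same rate."""
    return abs(selection_rate(group_a) - selection_rate(group_b))

# Hypothetical model outputs: 1 = selected, 0 = rejected.
women = [1, 0, 0, 0, 1, 0, 0, 0]   # selection rate 0.25
men   = [1, 1, 0, 1, 0, 1, 0, 0]   # selection rate 0.50

gap = demographic_parity_difference(women, men)
print(f"demographic parity difference: {gap:.2f}")  # prints 0.25
if gap > 0.1:  # the threshold is a policy decision, not a universal rule
    print("flag for review")
```

The point is not this particular metric but the practice: a check like this has to be rerun as the model and its data change, because a system that passes today can drift into bias tomorrow.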

And with that said, and with gender bias in focus, happy Mother’s Day.

References

(i) Element AI. (2019). Global AI Talent Report 2019. Retrieved from https://jfgagne.ai/talent-2019/.
(ii) AI Index 2018. (2018). Artificial Intelligence Index 2018. Retrieved from http://cdn.aiindex.org/2018/AI%20Index%202018%20Annual%20Report.pdf.

Anna Felländer

It was in 2016 that Anna realised artificial intelligence (AI) was becoming the new general-purpose technology: a technology that would drastically impact the economy, businesses, people and society at large. At the same time, she noticed that AI was also causing a negative externality — a new type of digital pollution. Consumers have opted in to receive the benefits of digitalization, but are simultaneously facing a dark cloud of bias, discrimination and lost autonomy for which businesses need to be held accountable. In the traditional environmental sustainability model, organisations are held accountable for physical negative externalities, such as air or water pollution, by outraged consumers and sanctions handed down by regulators. Yet no one was holding technology companies accountable for the negative externalities — the digital pollution — of their AI technology. Regulators have had difficulty interpreting AI in order to regulate it appropriately, and customers didn’t understand how their data was being used in the black box of AI algorithms.

Anna’s multidisciplinary research group at the Royal Institute of Technology was the origin of anch.AI. Anna founded anch.AI in 2018 to investigate the ethical, legal and societal ramifications of AI. The anch.AI platform is an insight engine with a unique methodology for screening, assessing, mitigating, auditing and reporting exposure to ethical risk in AI solutions. anch.AI believes that all organisations must conform to their ethical values and comply with existing and upcoming regulation in their AI solutions, creating innovations that humans can trust. It acts as a form of ethical insurance for companies and organisations.