Gender equality in AI applications paves the way to a Swedish welfare model 3.0

In 1919, Swedish women won the right to vote. Since then, Sweden has been a leading voice internationally in advocating for women’s rights. In the AI era, this progress toward gender equality is threatened by the possibility of bias and discrimination in AI models. Algorithms trained on historical data risk unintentionally making decisions based on historically dominant male norms. But once we awaken to coding gender equality into these systems, we can pave the way to transparency, trust, and inclusion in this data-driven AI era – a Swedish welfare model 3.0.

Artificial intelligence (AI) will have a tremendous impact on whether we achieve gender equality. Today, AI is pioneering solutions to problems that have so far gone unsolved. For example, AI can be used in the entertainment industry to help identify gender bias in films. The technology can also be leveraged to increase women’s access to health care, such as Grace Health’s AI-powered chatbot, which answers questions about women’s health. The benefits from an equality perspective are many. However, women are grossly underrepresented in the field of artificial intelligence. A study conducted in 2018 found that only 22% of AI professionals globally are female, and women make up less than 14% of AI researchers internationally. This gap can, and most likely will, have real impacts on what types of AI-based products and services are developed, and for whom they are designed.

Consider the example of cars, which were originally designed for a male driver. It was later discovered that women were at higher risk of injury and death in an accident than men, because most dummies used in automotive crash tests were designed to represent an average male body. How could this occur when half of the population is female? There is probably an array of root causes, but one of them is very likely the lack of representation in the pool of car designers. Similarly, when the team developing an AI solution lacks diversity, the likelihood of subjectivity creeping into the development process increases. Bias from AI creators can result in discriminatory outcomes and affect gender equality, for example if a team inadvertently designs an AI solution with an average man in mind as the user. If the team developing AI thinks homogeneously, it is less likely to discover and correct these kinds of biases.

Bias can also creep into algorithms through the data they are trained on. Algorithms are most often trained on historical data. Humans are inherently biased, and those biases and social inequalities are reflected in data about the past. An AI system trained to, for example, match job seekers with vacancies risks learning from past hiring data that men should be favored for senior leadership positions, since historical data shows that traditionally male attributes are most common among senior leaders. Even if sensitive variables such as gender, race, or age are removed, AI systems can still infer them from correlated proxy variables, as the sketch below illustrates.
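To see why simply deleting a sensitive column is not enough, here is a minimal sketch in Python using NumPy and scikit-learn. The data is entirely synthetic, and the features (parental leave, a network score) are hypothetical stand-ins for real-world proxies; the point is only that a model given no gender column at all can still recover gender from correlated features.

```python
# Minimal illustrative sketch with hypothetical, synthetic data: even after
# dropping the "gender" column, a model can recover it from proxy features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Synthetic "historical" data: gender correlates with invented proxies such
# as length of parental leave and membership in certain legacy networks.
gender = rng.integers(0, 2, n)                       # 0 = man, 1 = woman
parental_leave = rng.normal(2.0 * gender, 0.8)       # proxy correlated with gender
network_score = rng.normal(1.5 * (1 - gender), 1.0)  # proxy skewed the other way
noise = rng.normal(0.0, 1.0, n)                      # an unrelated feature

# Note: the feature matrix contains no gender column at all.
X = np.column_stack([parental_leave, network_score, noise])
X_train, X_test, y_train, y_test = train_test_split(X, gender, random_state=0)

clf = LogisticRegression().fit(X_train, y_train)
print(f"Gender recovered from proxies alone: {clf.score(X_test, y_test):.0%} accuracy")
```

On data like this, the classifier recovers gender far above the 50% chance level, which is why fairness assessments must examine correlations across the whole feature set rather than merely check for the presence of a sensitive column.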

There is also the global issue of the gender data gap. The vast majority of data ever collected has been collected on men (read Caroline Criado-Perez’s book “Invisible Women: Exposing Data Bias in a World Designed for Men” for more on this). From economic data to health data, a world built and designed on information with a gender gap is a world that ignores the needs of half its population.

All of this creates an immediate need for methods and tools that help correct for these pitfalls of data-driven technologies like AI. Awareness of the negative impact that AI and data (or the lack thereof) can have on gender equality is growing among researchers and industry professionals. Many are realizing that the key to unleashing the full potential of AI is preserving the trust of the stakeholders affected by its solutions. Yet organizations are struggling to work in a solution-oriented way. It is because of these challenges that the AI Sustainability Center has partnered with the Swedish Gender Equality Agency. Together, we will develop a tool that organizations can apply as a gender fairness assessment of their AI solutions. The result will be an immediate indication of whether they could contribute to gender inequality when using or introducing AI solutions. A tool like this is fundamental to subsequently developing innovations that target, for example, specific bias problems in training data. At the end of the day, organizations will stand accountable for how their use of data and AI impacts people and society. And if we are to achieve full gender equality, we need to start acting now.

Sweden is best positioned to take the lead in enhancing gender equality in AI applications and creating trustworthy, transparent organizations. Creating Welfare 3.0 – a contemporary approach to democracy, inclusion, equality, fairness and privacy in this new data-driven AI era – is key to embracing the true value of AI.

Anna Felländer

It was in 2016 that Anna realised artificial intelligence (AI) was becoming the new general-purpose technology: a technology that would drastically impact the economy, businesses, people and society at large. At the same time, she noticed that AI was also causing a negative externality – a new type of digital pollution. Consumers have opted in to receive the benefits of digitalization but simultaneously face a dark cloud of bias, discrimination and lost autonomy for which businesses need to be held accountable. In the traditional environmental sustainability model, organisations are held accountable for physical negative externalities, such as air or water pollution, by outraged consumers and by sanctions handed down by regulators. Yet no one was holding technology companies accountable for the negative externalities – the digital pollution – of their AI technology. Regulators have had difficulty interpreting AI well enough to regulate it appropriately, and customers did not understand how their data was being used inside the black box of AI algorithms.

Anna’s multidisciplinary research group at the Royal Institute of Technology was the origin of anch.AI. Anna founded anch.AI in 2018 to investigate the ethical, legal and societal ramifications of AI. The anch.AI platform is an insight engine with a unique methodology for screening, assessing, mitigating, auditing and reporting exposure to ethical risk in AI solutions. anch.AI believes that all organisations must conform to their ethical values and comply with existing and upcoming regulation in their AI solutions, creating innovations that humans can trust. It is a form of ethical insurance for companies and organisations.