Two years ago, the European Commission released its white paper, “White Paper on Artificial Intelligence: A European Approach to Excellence and Trust.” In it, the Commission called for standard requirements for the data sets that train AI systems in order to avoid bias, including gender bias: “Requirements to take reasonable measures aimed at ensuring that [the] use of AI systems does not lead to outcomes entailing prohibited discrimination. These requirements could entail in particular obligations to use data sets that are sufficiently representative, especially to ensure that all relevant dimensions of gender, ethnicity and other possible grounds of prohibited discrimination are appropriately reflected in those data sets.”
Bias can creep into algorithms through the historical data sets they are trained on. Humans are inherently biased, and consequently our personal biases and societal gender inequalities are often reflected in data about the past. When this happens, the outcome can be negative or, even worse, deadly.
How gender bias creeps into AI models
Gender often plays a role in the development and application of AI. We know from research that models trained on data skewed toward one gender are less accurate for the population as a whole. In a 2021 study, “Gender Bias in Artificial Intelligence: Severity Prediction at an Early Stage of COVID-19,” the researchers examined the bias that can arise when an AI model that predicts patient severity in the early stage of coronavirus disease (COVID-19) is trained on data from only one gender rather than a more diverse data set. They found that the gender-dependent AI model was less accurate than the unbiased, mixed-gender model.
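To make that mechanism concrete, here is a minimal sketch of such a comparison. It uses entirely synthetic data and scikit-learn; it is not the study's actual data, models or code, and the feature construction (a predictor whose effect flips sign by gender) is an assumption made purely for illustration.

```python
# Hypothetical sketch: a model trained on one gender vs. a mixed-gender sample.
# Synthetic data only; this is not the COVID-19 study's code or data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_patients(n, gender):
    """Generate synthetic severity labels whose link to one feature differs by gender."""
    x = rng.normal(size=(n, 3))
    sign = 1.0 if gender == "male" else -1.0  # assumed gender-dependent effect
    y = (x[:, 0] + sign * x[:, 2] + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return x, y

# Training cohorts.
x_male, y_male = make_patients(500, "male")
x_female, y_female = make_patients(500, "female")
x_mixed = np.vstack([x_male, x_female])
y_mixed = np.concatenate([y_male, y_female])

# Held-out mixed-gender test population.
x_test_m, y_test_m = make_patients(300, "male")
x_test_f, y_test_f = make_patients(300, "female")
x_test = np.vstack([x_test_m, x_test_f])
y_test = np.concatenate([y_test_m, y_test_f])

male_only_model = LogisticRegression().fit(x_male, y_male)  # gender-dependent
mixed_model = LogisticRegression().fit(x_mixed, y_mixed)    # mixed-gender

for name, model in [("male-only", male_only_model), ("mixed", mixed_model)]:
    print(f"{name} accuracy on mixed population: "
          f"{accuracy_score(y_test, model.predict(x_test)):.2f}")
```

Because the male-only model has never seen the female pattern, its accuracy drops on the mixed test population, while the mixed model degrades gracefully for both groups.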
Biased algorithms also hurt women's careers. A study from UNESCO, the OECD and the Inter-American Development Bank found that because many resume-scanning systems are built on historical job-performance data in which men, specifically white men, performed the highest, the tools are inherently biased against women.
Data sets used to train AI models need to represent the populations they serve. A data set that leans more male than female will train the algorithm to detect male-specific outcomes more readily. For example, in a study assessing digital biomarkers for Parkinson's disease, only 18.6% of the people in the data set were women. An algorithm trained on this data set will detect the symptoms that appear more often in men more accurately, and female-specific symptoms less accurately. This bias in the data can lead to less accurate detection of Parkinson's symptoms in women and, in turn, worse patient outcomes for them. A simple safeguard is to evaluate a model's accuracy per gender rather than only in aggregate, as the sketch below illustrates.
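Again, this is a hypothetical sketch, not the Parkinson's study's code: the training set below mirrors the roughly 18.6% female share mentioned in the text, the features and labels are synthetic, and the point is simply that a single aggregate accuracy number can hide a large gap between groups.

```python
# Hypothetical sketch: report per-gender accuracy instead of one aggregate number.
# Synthetic data; only the ~18.6% female share mirrors the text.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)

def make_cohort(n_male, n_female):
    """Synthetic cohort where symptoms manifest in different features per gender."""
    n = n_male + n_female
    gender = np.array(["male"] * n_male + ["female"] * n_female)
    x = rng.normal(size=(n, 4))
    signal = np.where(gender == "male", x[:, 0], x[:, 1])  # assumed difference
    y = (signal + rng.normal(scale=0.7, size=n) > 0).astype(int)
    return x, y, gender

# Training set skewed like the study's: roughly 18.6% women.
x_tr, y_tr, _ = make_cohort(n_male=814, n_female=186)
x_te, y_te, g_te = make_cohort(n_male=500, n_female=500)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(x_tr, y_tr)
pred = model.predict(x_te)

print("overall accuracy:", accuracy_score(y_te, pred))
for g in ("male", "female"):
    mask = g_te == g
    print(g, "accuracy:", accuracy_score(y_te[mask], pred[mask]))
```

Surfacing the per-gender numbers makes the disparity visible before deployment; rebalancing or reweighting the training data is then the obvious next step.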
Ethical AI
The negative implications of gender-biased AI for broader society cannot be overstated. At anch.AI, we are focused on AI governance. We believe it is essential that organizations not only identify these biases in their algorithms but also take steps to mitigate them. That is why we created our Ethical AI Governance Platform, an AI prediction and recommendation tool that assesses an organization's exposure to ethical AI risk and presents next steps for mitigating it. With anch.AI, organizations can quickly and efficiently adopt responsible AI solutions and gain control over their ethical AI risks, all while upholding regulatory compliance and conformity to ethical principles. The result is stronger companies, better technology and better outcomes for everyone.
It was in 2016 that Anna realised that artificial intelligence (AI) was becoming the new general-purpose technology: a technology that would drastically impact the economy, businesses, people and society at large. At the same time, she noticed that AI was causing a negative externality, a new type of digital pollution. Consumers had opted in to receive the benefits of digitalization but were simultaneously facing a dark cloud of bias, discrimination and lost autonomy for which businesses needed to be held accountable. In the traditional environmental sustainability model, organisations are held accountable for physical negative externalities, such as air or water pollution, by outraged consumers and by sanctions handed down by regulators. Yet no one was holding technology companies accountable for the negative externalities, the digital pollution, of their AI technology. Regulators struggled to interpret AI well enough to regulate it appropriately, and customers didn't understand how their data was being used inside the black box of AI algorithms.
Anna’s multidisciplinary research group at the Royal Institute of Technology was the origin of anch.AI, which she founded in 2018 to investigate the ethical, legal and societal ramifications of AI. The anch.AI platform is an insight engine with a unique methodology for screening, assessing, mitigating, auditing and reporting exposure to ethical risk in AI solutions. anch.AI believes that all organisations must conform to their ethical values and comply with existing and upcoming regulation in their AI solutions, creating innovations that humans can trust. The platform serves as ethical insurance for companies and organisations.