De-mystify tech silos for responsible AI in workforce management

The Swedish Government has assigned the Discrimination Ombudsman (DO) to start mapping the discrimination that the use of AI can entail in the Swedish labor market. It is about time. Discrimination, social exclusion, and injustice can no longer be neglected because of organizations’ inability to govern AI cross-functionally. The anch.AI platform can help the DO map these unintended ethical AI risks.

In workplace and workforce management, the deployment of AI systems is considered a pioneering arena, as it fuels major changes and efficiency gains. Yet left ungoverned, these AI solutions open the door to ethical and legal breaches, leading to costly reputational harm and loss of trust.

In the world of work, the historical data on which AI systems are trained usually reflects a non-diverse slice of society. The data is therefore inherently biased, perpetuating the same old, sad realities of the job market. It is even more troubling that the efforts deployed in the non-digital world to mitigate bias and strengthen diversity and inclusion find no echo in a digital world left skewed and distorted by biased data and by the non-diverse representation of the people behind the AI systems.
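To make the mechanism concrete, here is a minimal, hypothetical sketch in Python. The dataset, group labels, and bias penalty are invented for illustration and are not drawn from any real hiring system: two groups of candidates are equally skilled, but the historical hiring decisions the model learns from penalised one group, and the trained model then reproduces that disparity as if it were ground truth.

```python
# Hypothetical illustration: a model trained on biased historical hiring
# decisions reproduces the bias. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two groups of candidates with identical skill distributions.
group = rng.integers(0, 2, size=n)            # 0 = group A, 1 = group B
skill = rng.normal(0.0, 1.0, size=n)

# Historical labels: past recruiters hired on skill but penalised group B
# (the -0.8 term stands in for historical discrimination).
hired = (skill - 0.8 * group + rng.normal(0.0, 0.5, size=n)) > 0

# Train on the historical decisions as if they were objective ground truth.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Equally skilled groups, yet the model recommends group B far less often.
pred = model.predict(X)
for g, name in [(0, "group A"), (1, "group B")]:
    print(f"{name}: predicted hire rate = {pred[group == g].mean():.2f}")
```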

However, all these warnings are nothing new. What is concerning is when AI and automated decision-making in work life are assumed to get rid of bias and unfairness. We have all heard the assumption that automated decision-making will simply be the solution to combat discrimination and injustice in working life. An example is HireVue, a leading provider of hiring software based on algorithmic assessment, which, while hammering home on its website its ability to “build a faster, fairer, friendlier hiring process”, was nevertheless compelled last year to “kill off a controversial feature of its software: analyzing a person’s facial expressions in a video to discern certain characteristics,” according to a report by the AI Incident Database.

All this adds weight to the view that AI risks must be tackled if we want to combat discrimination and injustice. AI and automated decision-making used in working life cannot be left to tech silos; they must be governed cross-functionally, involving the legal and business areas.

In Sweden, the Government has now said in a press release that AI risks in the labor market require mapping and combating, as they can lead to discrimination. As a result, the Swedish Government has formally commissioned the DO to “map the risks of discrimination that the use of AI and other automated decision-making can entail and to what extent and in which contexts employers can use such technical solutions”, says Deputy Minister of Labor Johan Danielsson.

That is a good start. The anch.AI platform is a manifestation of the “state of the art” anch.AI methodology. Our multidisciplinary research on governing ethical AI risks started in 2016. Almost 200 AI use cases have been screened for ethical risk exposure using our methodology, many of them within the space of AI in employment and recruitment. I welcome the work of the DO and am happy to share our insights to support the investigation.

Anna Felländer

It was in 2016 that Anna realised artificial intelligence (AI) was becoming the new general-purpose technology: a technology that would drastically impact the economy, businesses, people and society at large. At the same time, she noticed that AI was also causing a negative externality — a new type of digital pollution. Consumers had opted in to receive the benefits of digitalization, but were simultaneously facing a dark cloud of bias, discrimination and lost autonomy that businesses needed to be held accountable for. In the traditional environmental sustainability model, organisations are held accountable for physical negative externalities, such as air or water pollution, by outraged consumers and by sanctions handed down by regulators. Yet no one was holding technology companies accountable for the negative externalities — the digital pollution — of their AI technology. Regulators have had difficulty interpreting AI in order to regulate it appropriately, and customers didn’t understand how their data was being used in the black box of AI algorithms.

Anna’s multidisciplinary research group at the Royal Institute of Technology was the origin of anch.AI. Anna founded anch.AI in 2018 to investigate the ethical, legal and societal ramifications of AI. The anch.AI platform is an insight engine with a unique methodology for screening, assessing, mitigating, auditing and reporting exposure to ethical risk in AI solutions. anch.AI believes that all organisations must conform to their ethical values and comply with existing and upcoming regulation in their AI solutions, creating innovations that humans can trust. It is ethical insurance for companies and organisations.