Why AI Governance must go above and beyond auditing machine learning pipelines

The rapid pace of artificial intelligence adoption in business could be heading toward some significant obstacles. AI governance, and how organizations will monitor and control the use of data in their AI platforms, is emerging as a critical hurdle. AI governance is a relatively new concept, as AI itself is still in the early stages of development, yet complexities are already emerging that must be identified and governed with urgency.

Within current tech silos, a typical AI pipeline involves the following development and governance steps, sketched in code after the list: 

Model development and training  
Model assessment with built-in fairness modules 
ML model deployment 
Model monitoring 

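To make these four stages concrete, here is a minimal sketch of such a pipeline using scikit-learn and synthetic data. The group attribute, the fairness check, and the deployment/monitoring comments are simplified placeholders for illustration, not production components.

```python
# A minimal sketch of the four pipeline stages above, using scikit-learn
# and synthetic data. The group attribute and fairness check are
# illustrative placeholders, not production components.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# 1. Model development and training
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# 2. Model assessment: accuracy plus a toy group-fairness check on a
#    hypothetical binary group attribute
group = np.random.RandomState(0).randint(0, 2, size=len(X_test))
preds = model.predict(X_test)
print("accuracy:", accuracy_score(y_test, preds))
for g in (0, 1):
    print(f"positive prediction rate, group {g}:", preds[group == g].mean())

# 3. Deployment would typically serialize the model for a serving system
# 4. Monitoring would track live prediction drift against these baselines
```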

For AI teams to create impactful models, a deeper analysis of the pipeline is required. The metrics used by AI developers do not show the full picture and only capture short-term trends. Traditional AI metrics include accuracy, loss, the confusion matrix, AUC (area under the ROC curve), mean absolute error (MAE), root mean square error (RMSE), and R squared. Looking at these metrics alone, AI practitioners might deploy the model to production, oblivious to whether the model will generalize well in the real world. What companies miss is the conceptual correctness of their AI model. An AI that uses racial background and appearance to predict the likelihood of a person committing a crime is bizarre, unacceptable, and reputationally disastrous.  
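
The traditional metrics named above can all be computed in a few lines; the sketch below does so on toy predictions, assuming scikit-learn. The point is that none of these numbers says anything about what the model actually learned or whether its reasoning is conceptually acceptable.

```python
# The traditional metrics named above, computed with scikit-learn on toy
# predictions. High scores here reveal nothing about the conceptual
# correctness of the model.
import numpy as np
from sklearn.metrics import (accuracy_score, confusion_matrix, roc_auc_score,
                             mean_absolute_error, mean_squared_error, r2_score)

# Classification metrics
y_true = np.array([0, 1, 1, 0, 1])
y_score = np.array([0.2, 0.8, 0.6, 0.4, 0.9])   # predicted probabilities
y_pred = (y_score >= 0.5).astype(int)
print("accuracy:", accuracy_score(y_true, y_pred))
print("confusion matrix:\n", confusion_matrix(y_true, y_pred))
print("AUC:", roc_auc_score(y_true, y_score))

# Regression metrics
r_true = np.array([3.0, 5.0, 2.5, 7.0])
r_pred = np.array([2.8, 5.4, 2.9, 6.5])
print("MAE:", mean_absolute_error(r_true, r_pred))
print("RMSE:", np.sqrt(mean_squared_error(r_true, r_pred)))
print("R^2:", r2_score(r_true, r_pred))
```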

A study published in the journal Science in 2019 found that AI insights from Optum, a digital healthcare company, were prompting medical professionals to pay more attention to white patients than to Black patients when identifying high-risk patients. This is where the need arises to bring in human experts to review, validate, audit, and build confidence in your AI use-cases. What needs to be highlighted is that for many organizations, this validation expertise needs to be outsourced to an independent AI governance organization. An independent organization brings a “one standard for all” approach along with dense, data-driven insights. Outsourced expertise also demystifies AI beyond the tech developers and brings forward ownership from business and compliance perspectives. Explainability and fairness can be effective and attainable if proper context and knowledge are available. However, the expertise that must be acquired makes it impossible for small or ill-informed companies to audit their own models.  
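
As one illustration of the kind of check an independent auditor might run, the sketch below computes a disparate impact ratio: the positive-outcome rate of the unprivileged group divided by that of the privileged group. The group labels and decisions are synthetic, and the 0.8 threshold is the commonly cited “four-fifths rule”, used here purely as an example.

```python
# A minimal sketch of one fairness audit check: the disparate impact
# ratio, flagged against the "four-fifths rule" threshold of 0.8.
# Decisions and group labels below are synthetic, for illustration only.
import numpy as np

preds = np.array([1, 0, 1, 1, 0, 1, 0, 0, 0, 1])   # model decisions
group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])   # 0 = privileged, 1 = unprivileged

rate_priv = preds[group == 0].mean()
rate_unpriv = preds[group == 1].mean()
di_ratio = rate_unpriv / rate_priv
print(f"disparate impact ratio: {di_ratio:.2f}")
if di_ratio < 0.8:
    print("potential adverse impact: flag for human review")
```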

This is what we do at anch.AI: we provide a one-stop solution to fix your AI use-case and make it market ready. We provide personalized recommendations and steps to accelerate the acceptance of your use-case in the market. The tech innovation side has already matured to the point that organizations are overwhelmed and confused by the jungle and tangled web of AI governance tools. The anch.AI user journey and workflow is designed to guide the user to the appropriate actions, consolidating risk management and AI governance requirements.

Sources: https://www.brandeis.edu/teaching//chatgpt-ai/ethical-concerns.html, https://www.wired.com/story/bias-statistics-artificial-intelligence-healthcare/ 

Anna Felländer

It was in 2016 that Anna realised artificial intelligence (AI) was becoming the new general-purpose technology: a technology that would drastically impact the economy, businesses, people and society at large. At the same time, she noticed that AI was also causing a negative externality — a new type of digital pollution. Consumers have opted in to receive the benefits of digitalisation, but are simultaneously facing a dark cloud of bias, discrimination and lost autonomy that businesses need to be held accountable for. In the traditional environmental sustainability model, organisations are held accountable for physical negative externalities, such as air or water pollution, by outraged consumers and sanctions handed down by regulators. Yet no one was holding technology companies accountable for the negative externalities — the digital pollution — of their AI technology. Regulators had difficulty interpreting AI well enough to regulate it appropriately, and customers didn’t understand how their data was being used in the black box of AI algorithms.

Anna’s multidisciplinary research group at the Royal Institute of Technology was the origin of anch.AI. Anna founded anch.AI in 2018 to investigate the ethical, legal and societal ramifications of AI. The anch.AI platform is an insight engine with a unique methodology for screening, assessing, mitigating, auditing and reporting exposure to ethical risk in AI solutions. anch.AI believes that all organisations must conform to their ethical values and comply with existing and upcoming regulation in their AI solutions, creating innovations that humans can trust. It is ethical insurance for companies and organisations.