Introducing the anch.AI Ethical AI Governance Platform

Empowering organizations to manage regulatory and ethical risk of their AI

Over the last decade, artificial intelligence (AI) technology has transformed many sectors, from healthcare to mobile applications and beyond, and it creates enormous, undisputed value. But ungoverned AI opens the door to costly ethical risks and legal breaches, such as unintended discrimination, lost autonomy through unethical nudging, disinformation, privacy intrusion and social exclusion. That is why I believe every organization must have an ethical filter on its AI solutions.

We live in a data-driven world where consumer convenience can be prioritized over consumer safety, privacy and human rights. Businesses are not always transparent about how their algorithms leverage consumer data, and consumers often do not know whether their data is being shared or whether an algorithm is biased against them. These AI solutions are often developed in an organizational silo, without integrated technical, legal and business oversight, which opens the door to costly and damaging risk for the business.

Ethical AI is not just about mitigating legal and reputational risk; it is also the right thing to do to gain trust among stakeholders and clients. Aligning business-critical decisions and visualizing ethical trade-offs requires organizational collaboration: an orchestration between the technology teams and the business and legal departments:

  • Legal needs to understand the AI solution, the business context and the organizational values: where and how the solution will be used, at what scale and in what context. What are the legal requirements, and how are they translated into code?
  • Technology teams need the business context and the organizational values to understand the legal restrictions and translate them into code.
  • Business leaders need to communicate the context of the solution and the organizational values to the technology and legal teams.

We at anch.AI believe in human values in a data-driven world. Our approach is based on the Nordic values of inclusiveness, diversity, gender equality, openness, transparency and accountability. That is why we are proud to launch our Ethical AI Governance Platform, an all-in-one risk assessment platform that empowers organizations to manage the regulatory and ethical risk of their AI.

Our platform gives companies what they need for joint responsibility across tech, legal and business teams, and it visualizes and aligns ethical considerations and trade-offs, keeping them true to their own organizational values and to regulation.

The Ethical AI Governance Platform screens AI solutions for ethical pitfalls, and entire organizations for ethical AI maturity, through extensive research-backed self-assessment questions that evaluate risk through various lenses. Based on the results of that assessment, we provide detailed dashboards with recommendations that help businesses avoid costly and damaging risks. With the platform, organizations can:

  • Assess the ethical vulnerabilities the organization and/or an AI use case is exposed to, and whether they might lead to reputational or legal breaches.
  • Leverage mitigation tools based on specific risk exposure.
  • Audit ethical AI performance on a continuous basis and receive maturity benchmarks.
  • Report on ethical AI performance to internal and external stakeholders.

As a true SaaS product, the Ethical AI Governance Platform will be continually updated to meet new compliance and regulatory measures and standards. For example, additional functionality such as gender screening, human rights assessments and an assessment for the upcoming EU regulation on AI is coming later this spring.

The Ethical AI Governance Platform has a clear purpose: to serve as an independent validation that helps you accelerate ethical and responsible AI across your organization. We want to ensure that the future world of AI is also a world with human values at its core.

To get started using the platform, visit anch.ai.

Anna Felländer

It was in 2016 that Anna realised artificial intelligence (AI) was becoming the new general-purpose technology: a technology that would drastically impact the economy, businesses, people and society at large. At the same time, she noticed that AI was also causing a negative externality, a new type of digital pollution. Consumers have opted in to receive the benefits of digitalization, but simultaneously face a dark cloud of bias, discrimination and lost autonomy that businesses need to be held accountable for. In the traditional environmental sustainability model, organisations are held accountable for physical negative externalities, such as air or water pollution, by outraged consumers and by sanctions handed down by regulators. Yet no one was holding technology companies accountable for the negative externalities, the digital pollution, of their AI technology. Regulators have had difficulty interpreting AI in order to regulate it appropriately, and customers did not understand how their data was being used in the black box of AI algorithms.

Anna’s multidisciplinary research group at the Royal Institute of Technology was the origin of anch.AI. Anna founded anch.AI in 2018 to investigate the ethical, legal and societal ramifications of AI. The anch.AI platform is an insight engine with a unique methodology for screening, assessing, mitigating, auditing and reporting exposure to ethical risk in AI solutions. anch.AI believes that all organisations must conform to their ethical values and comply with existing and upcoming regulation in their AI solutions, creating innovations that humans can trust. It is ethical insurance for companies and organisations.