Agent for AI Governance - a quick fix?

It is not a question of if but rather when AI agents for AI Governance will appear on the market. Agentic AI seems to be the new black, but the pitfalls are many when it comes to Responsible AI Governance agents. The main reason is that Responsible AI Governance demands a 360-degree view of risks and pitfalls: an integrated technical, legal and business perspective. Moreover, an AI agent cannot simply be trained on existing AI Governance tools and legal documents; the true value lies in how these perspectives are interlinked and understood from different stakeholders' perspectives.


The anch.AI AI Agent will build on anch.AI's unique IP, backed by almost 9 years of learning and experience, creating competitor lockout. Agentic AI Governance is not a quick fix.

I have been engaged with Responsible AI since 2016, from academic, organizational and policy perspectives. In 2018 I founded anch.AI. We started as a research-based consultancy, and in 2022 we launched our AI governance SaaS platform. We are seen as a pioneer in the market, having screened more than 250 AI use cases for ethical and legal risks.
The explosion of GenAI, together with the EU AI Act, is fueling the need for organizations to have a 360-degree perspective on the risks associated with AI. A broad perspective on the risks and opportunities is critical to successful AI Governance.

anch.AI leads and differentiates in several ways particularly important in Agentic AI Governance:

1.    The methodology and multidisciplinary approach combine technical, legal, social, and business perspectives throughout the lifecycle of
a.    Assessment
b.    Audits
c.    Recommendations
d.    Reporting
Key examples include translating the technical team's recommendations into regulatory considerations, reporting risk indicators to non-technical decision-makers, and helping the board and regulatory bodies align the company's ethical values with the technical implementation. This approach has been supported by several years of government funding and a broad range of cross-disciplinary use cases.
2.    A broad vetting and testing of the methodology through Sandbox trials incorporating over 50 organizations, both public and private. AI startups, academia, policymakers, standardization agencies, and audit firms have all contributed their perspectives and specific use cases.
3.    The methodology is linked to profiles across legal, technical, and management oversight, and the data architecture captures the various perspectives and challenges.
This user journey can capture 500 million unique insights across the various key profiles. This tested methodology, together with its rulesets and risk analyses, forms the basis of anch.AI's unique IP and SaaS platform.
4.    Agentic AI and AI assistants should be based on Responsible AI and AI Act compliance, where anch.AI offers unique capabilities.
A self-learning AI agent will be tomorrow's AI Governance tool, able to deliver real-time AI assistance. But Responsible AI Governance cannot be fully automated: for example, human-in-the-loop oversight is a requirement for high-risk AI use cases under the AI Act. The anch.AI IP, with years of testing, the AI Act in our DNA, broad use cases, and government and regulatory support, will accelerate compliance and de-risk our Agentic AI implementation.

anch.AI is uniquely positioned to lead the market for Agentic AI Governance.

Anna Felländer

It was in 2016 that Anna realised that artificial intelligence (AI) was becoming the new general-purpose technology: a technology that would drastically impact the economy, businesses, people and society at large. At the same time, she noticed that AI was causing a negative externality, a new type of digital pollution. Consumers have opted in to receive the benefits of digitalization, but are simultaneously facing a dark cloud of bias, discrimination and lost autonomy for which businesses need to be held accountable. In the traditional environmental sustainability model, organisations are held accountable for physical negative externalities, such as air or water pollution, by outraged consumers and by sanctions handed down by regulators. Yet no one was holding technology companies accountable for the negative externalities, the digital pollution, of their AI technology. Regulators have had difficulty interpreting AI in order to regulate it appropriately, and customers didn't understand how their data was being used in the black box of AI algorithms.

Anna's multidisciplinary research group at the Royal Institute of Technology was the origin of anch.AI. Anna founded anch.AI in 2018 to investigate the ethical, legal and societal ramifications of AI. The anch.AI platform is an insight engine with a unique methodology for screening, assessing, mitigating, auditing and reporting exposure to ethical risk in AI solutions. anch.AI believes that all organisations must conform to their ethical values and comply with existing and upcoming regulation in their AI solutions, creating innovations that humans can trust. It serves as ethical insurance for companies and organisations.