How anch.AI’s platform and methodology align with the NIST AI Risk Management Framework

The U.S. Department of Commerce’s National Institute of Standards and Technology (“NIST”) has released its AI Risk Management Framework 1.0 (“AI RMF”). At anch.AI we are thrilled to welcome this innovation.

The AI RMF represents a major paradigm shift in US AI governance, comparable to the EU AI Act but linked to standards and guidelines that the domain sorely needs. Its approach is softer, yet at anch.AI we consider meeting NIST requirements as important as meeting the AI Act’s, to avoid the costly, harmful consequences of ungoverned AI. As with the AI Act, a key challenge in operationalising the AI RMF will be translating and aligning reports and assessments across tech, compliance, and business teams. Whether your organisation follows the NIST framework, the AI Act, or any other standard on responsible AI, anch.AI will help you accelerate responsible AI, for innovations that humans can trust.

Below we highlight the AI RMF processes we view as equivalent to how anch.AI operates today. To show how anch.AI and the NIST framework complement each other, we have performed a first scoping of how the anch.AI platform and our published risk management methodology put the requirements set out in the AI RMF into action.

 

AI RMF STEP: GOVERN 

NIST Playbook description: 

“GOVERN is a continual and intrinsic requirement for effective AI risk management over an AI system’s lifespan and the organization’s hierarchy and enables the other four AI RMF functions. 

Govern function outcomes foster a culture of risk management within organizations designing, developing, deploying, or acquiring AI systems. 

Categories in this function interact with each other and with other functions but do not necessarily build on prior actions.” 

 

anch.AI Equivalent: 

Cross-functional governance spans all operations on the anch.AI platform: legal, business, and IT perspectives. anch.AI ensures functional requirements are documented, understood, assigned clear accountability, and executed. Diversity, inclusion, and equity are accounted for throughout risk management across the entire AI life cycle. Using the anch.AI platform at an organisational level is educational and formative in the culture change needed to operationalise responsible AI.

 

AI RMF STEP: MAP 

NIST Playbook description: 

“The MAP function establishes the context to frame risks related to an AI system. Without contextual knowledge, and awareness of risks within the identified contexts, risk management is difficult to perform. MAP is intended to enhance an organization’s ability to identify risks and broader contributing factors.” 

anch.AI Equivalent:  

We perform organisational and use-case set-ups as our contextualising step: this allows identification of the purpose, relevant regulation, expectations, AI use, system and deployment settings, user types, and those affected, ranging from individuals to society at large. We assist in setting benchmark risk tolerances based on our database, which has recorded numerous previous screening and mitigation processes.
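To make the contextualising step more concrete, the sketch below shows what a use-case context record might capture. The field names and structure are illustrative assumptions on our part, not the anch.AI platform’s actual schema.

```python
from dataclasses import dataclass

@dataclass
class UseCaseContext:
    """Illustrative record of the context captured when onboarding an AI use case."""
    purpose: str                    # why the AI system exists
    relevant_regulation: list[str]  # e.g. EU AI Act, GDPR
    deployment_setting: str         # where and how the system runs
    user_types: list[str]           # who interacts with the system
    affected_parties: list[str]     # from individuals to society at large
    risk_tolerance: float = 0.5     # hypothetical benchmark tolerance

# Hypothetical example of a contextualised use case
context = UseCaseContext(
    purpose="Credit scoring for consumer loans",
    relevant_regulation=["EU AI Act", "GDPR"],
    deployment_setting="Internal decision support",
    user_types=["loan officers"],
    affected_parties=["applicants", "society at large"],
)
```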
 

AI RMF STEP: MEASURE 

NIST Playbook description:  

“The MEASURE function employs quantitative, qualitative, or mixed-method tools, techniques, and methodologies to analyze, assess, benchmark, and monitor AI risk and related impacts.” 

anch.AI Equivalent:  

We quantitatively assess models and data through MLOps to provide bias and fairness assessments. Our qualitative approach consists of extensive cross-functional self-assessment screenings. We use mixed-method analysis and reporting through our mitigation modules to contextually identify and measure appropriate metrics. We measure and score the responsibility of AI against our 8 defined risks, 4 defined fundamentals, and 4 defined pitfalls. We provide data-driven benchmarking through our insight engine to help organisations decide what level of risk is acceptable in their context.
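As one concrete example of the quantitative side, the sketch below computes the demographic parity difference, a common bias metric: the largest gap in positive-prediction rates between demographic groups. It is a minimal illustration of the kind of metric such an assessment can include, not anch.AI’s actual implementation.

```python
from collections import defaultdict

def demographic_parity_difference(predictions, groups):
    """Largest gap in positive-prediction rates between any two groups.

    predictions: iterable of 0/1 model outputs
    groups: group labels (e.g. a protected attribute), aligned with predictions
    """
    positives = defaultdict(int)
    totals = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Toy example: group "b" receives positive predictions more often than "a".
preds  = [1, 0, 0, 1, 1, 1, 0, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, groups))  # 0.25
```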
 

AI RMF STEP: MANAGE 

NIST Playbook description: 

“The MANAGE function utilizes systematic documentation practices established in GOVERN, contextual information from MAP, and empirical information from MEASURE to treat identified risks and decrease the likelihood of system failures and negative impacts.”
 

anch.AI Equivalent: 

We perform mitigation based on prioritised and tailored recommendations generated in the assessment phase. These actions are sent to management platforms and assigned to the appropriate owners. The status of each action can be tracked, and completing the actions reduces risk. Third-party risks are managed through the vendor-auditing features provided by the platform. We perform this process at any point in the AI life cycle, with clear documentation and constant monitoring reported across business, legal, and IT functions.
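To illustrate what tracking such actions implies, here is a minimal sketch of a mitigation action with an assigned owner and a trackable status. The structure is an illustrative assumption on our part, not the platform’s actual data model.

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    OPEN = "open"
    IN_PROGRESS = "in_progress"
    DONE = "done"

@dataclass
class MitigationAction:
    """Illustrative mitigation action generated by an assessment."""
    description: str
    owner: str                   # accountable function: business, legal, or IT
    status: Status = Status.OPEN

    def complete(self):
        self.status = Status.DONE

# Hypothetical action produced by an assessment and tracked to completion
action = MitigationAction(
    description="Re-balance training data for underrepresented groups",
    owner="IT",
)
action.complete()
print(action.status)  # Status.DONE
```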

 

To sum up, organisations need responsible AI governance not just to comply with regulations or standards; they also need to stay true to their values and verify that these are not violated in this new data-driven AI era. We believe the AI RMF will be a vital part of ensuring organisations reach fully responsible AI governance.

Anna Felländer

It was in 2016 that Anna realised artificial intelligence (AI) was becoming the new general-purpose technology: a technology that would drastically impact the economy, businesses, people, and society at large. At the same time, she noticed that AI was causing a negative externality, a new type of digital pollution. Consumers have opted in to receive the benefits of digitalisation, but simultaneously face a dark cloud of bias, discrimination, and lost autonomy that businesses need to be held accountable for. In the traditional environmental sustainability model, organisations are held accountable for physical negative externalities, such as air or water pollution, by outraged consumers and sanctions handed down by regulators. Yet no one was holding technology companies accountable for the negative externalities, the digital pollution, of their AI technology. Regulators have had difficulty interpreting AI well enough to regulate it appropriately, and customers didn’t understand how their data was being used in the black box of AI algorithms.

Anna’s multidisciplinary research group at the Royal Institute of Technology was the origin of anch.AI. Anna founded anch.AI in 2018 to investigate the ethical, legal, and societal ramifications of AI. The anch.AI platform is an insight engine with a unique methodology for screening, assessing, mitigating, auditing, and reporting exposure to ethical risk in AI solutions. anch.AI believes that all organisations must conform to their ethical values and comply with existing and upcoming regulation in their AI solutions, creating innovations that humans can trust. The platform acts as ethical insurance for companies and organisations.