How anch.AI’s platform and methodology align with the NIST AI Risk Management Framework

23.02.20 | Anna Felländer

The U.S. Department of Commerce’s National Institute of Standards and Technology (“NIST”) has released its AI Risk Management Framework 1.0 (“AI RMF”). At anch.AI we are thrilled to welcome this innovation.

The NIST AI RMF represents a major U.S. paradigm shift in AI governance, comparable to the EU AI Act but linked to the standards and guidelines that the domain sorely needs. It takes a softer approach, yet at anch.AI we consider meeting NIST requirements as important as meeting those of the AI Act in order to avoid the costly, harmful consequences of ungoverned AI. As with the AI Act, a key challenge in operationalising NIST will be translating and aligning reports and assessments across tech, compliance, and business teams. Whether your organisation follows the NIST framework, the AI Act, or another standard on responsible AI, anch.AI will help you accelerate responsible AI and deliver innovations that humans can trust.

Here we wish to highlight the processes we view as equivalent to how anch.AI currently operates. To show how anch.AI and the NIST framework complement each other, we have performed our first scoping of how the anch.AI platform and our published risk management methodology bring into action the requirements set out in the AI RMF.



NIST Playbook description: 

“GOVERN is a continual and intrinsic requirement for effective AI risk management over an AI system’s lifespan and the organization’s hierarchy and enables the other four AI RMF functions. 

Govern function outcomes foster a culture of risk management within organizations designing, developing, deploying, or acquiring AI systems. 

Categories in this function interact with each other and with other functions but do not necessarily build on prior actions.” 


anch.AI Equivalent: 

Cross-functional governance spans all operations on the anch.AI platform, covering legal, business, and IT perspectives. anch.AI ensures functional requirements are documented, understood, assigned accountability, and executed. Diversity, inclusion, and equity are accounted for and included throughout risk management across the entire AI life cycle. Using the anch.AI platform at an organisational level is educational and formative in the culture change needed to operationalise responsible AI.



NIST Playbook description: 

“The MAP function establishes the context to frame risks related to an AI system. Without contextual knowledge, and awareness of risks within the identified contexts, risk management is difficult to perform. MAP is intended to enhance an organization’s ability to identify risks and broader contributing factors.” 

anch.AI Equivalent:  

We perform organisational and use-case set-ups as our contextualising step. This allows identification of the purpose, relevant regulation, expectations, AI use, system and deployment settings, user types, and those affected, ranging from individuals to society at large. We assist in setting benchmark risk tolerances based on our database, which records the outcomes of numerous prior screening and mitigation processes.
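As an illustrative sketch only (the field names below are assumptions for illustration, not anch.AI’s actual data model), the contextual information gathered in a use-case set-up could be captured in a simple record like this:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a use-case context record; the fields are
# illustrative assumptions, not anch.AI's actual schema.
@dataclass
class UseCaseContext:
    purpose: str                                          # why the AI system exists
    regulations: list = field(default_factory=list)       # e.g. "EU AI Act", "NIST AI RMF"
    deployment_setting: str = ""                          # where the system runs
    user_types: list = field(default_factory=list)        # who operates the system
    affected_parties: list = field(default_factory=list)  # individuals through society at large

    def missing_fields(self):
        """Return the names of contextual fields still left empty."""
        return [name for name, value in vars(self).items() if not value]

# A partially completed set-up flags the context still to be captured.
ctx = UseCaseContext(purpose="credit scoring", regulations=["EU AI Act"])
print(ctx.missing_fields())  # ['deployment_setting', 'user_types', 'affected_parties']
```

Listing the empty fields mirrors the idea in the MAP function that risk management is difficult to perform until the full context has been identified.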


NIST Playbook description:  

“The MEASURE function employs quantitative, qualitative, or mixed-method tools, techniques, and methodologies to analyze, assess, benchmark, and monitor AI risk and related impacts.” 

anch.AI Equivalent:  

We quantitatively assess models and data through MLOps to provide bias and fairness assessments. Our qualitative approach consists of extensive cross-functional self-assessment screenings. Through our mitigation modules, we apply mixed-method analysis and reporting to identify and measure metrics appropriate to the context. We measure and score the responsibility of AI against our 8 defined risks, 4 defined fundamentals, and 4 defined pitfalls, and we provide data-driven benchmarking through our insights engine to help organisations decide what level of risk is acceptable in their context.
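By way of illustration, a common quantitative bias metric of the kind used in fairness assessments is the demographic parity difference. This is a generic sketch of that standard metric, not a description of anch.AI’s proprietary scoring:

```python
def demographic_parity_difference(predictions, groups):
    """Absolute gap in positive-prediction rates between two groups.

    A generic fairness metric: 0.0 means both groups receive positive
    predictions at the same rate; larger values indicate disparity.
    Assumes exactly two distinct group labels.
    """
    rates = {}
    for g in set(groups):
        members = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    rate_a, rate_b = rates.values()
    return abs(rate_a - rate_b)

# Toy example: group "A" gets 3/4 positive predictions, group "B" gets 1/4.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.5
```

In practice such metrics feed into the kind of monitoring and benchmarking described above, where the acceptable threshold depends on the use-case context.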


NIST Playbook description:

“The MANAGE function utilizes systematic documentation practices established in GOVERN, contextual information from MAP, and empirical information from MEASURE to treat identified risks and decrease the likelihood of system failures and negative impacts.”

anch.AI Equivalent: 

We perform mitigation based on prioritised, tailored recommendations generated in the assessment phase. These actions are routed to management platforms and assigned to the appropriate owners; their status can be tracked, and their completion directly contributes to risk mitigation. Third-party risks are managed through the vendor-auditing features provided by the platform. This process can be applied at any point in the AI life cycle, with clear documentation and constant monitoring reported across business, legal, and IT functions.
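To make the tracking idea concrete, here is a minimal, hypothetical sketch of assigning mitigation actions to owners and monitoring their completion. The class and method names are illustrative assumptions, not anch.AI’s actual API:

```python
from enum import Enum

class Status(Enum):
    OPEN = "open"
    IN_PROGRESS = "in_progress"
    DONE = "done"

# Hypothetical sketch of mitigation-action tracking, not anch.AI's actual API.
class MitigationTracker:
    def __init__(self):
        self.actions = {}  # action id -> (owner, status)

    def assign(self, action_id, owner):
        """Register a recommended action and hand it to an owner."""
        self.actions[action_id] = (owner, Status.OPEN)

    def update(self, action_id, status):
        """Record progress reported back from the owner."""
        owner, _ = self.actions[action_id]
        self.actions[action_id] = (owner, status)

    def completion_rate(self):
        """Fraction of assigned actions marked done."""
        if not self.actions:
            return 0.0
        done = sum(1 for _, s in self.actions.values() if s is Status.DONE)
        return done / len(self.actions)

tracker = MitigationTracker()
tracker.assign("retrain-model", "ml-team")
tracker.assign("update-impact-assessment", "legal")
tracker.update("retrain-model", Status.DONE)
print(tracker.completion_rate())  # 0.5
```

A completion rate like this is one simple signal that could be reported across business, legal, and IT functions during ongoing monitoring.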


To sum up, organisations need responsible AI governance not just to comply with regulations or standards, but to stay true to their values and ensure those values are not violated in this new data-driven AI era. We believe the AI RMF will be a vital part of helping organisations reach fully responsible AI governance.