The results of your screening and assessment provide a risk profile of your activated risks. Based on the degree of risk, you will be assigned mitigation modules as soon as they are released.
The included modules can be used by both technical and non-technical users of the platform. Technical users can upload data and predictions to receive assessments and corrections of their AI models and data, based on our standards backed by extensive research. Non-technical users receive a list of required mitigation actions for their context, which they can share with their technical team.
Come back soon to use our data bias tool:
The data bias tool serves to educate you about, highlight, and remove sources of bias in your data that are activating the risks identified by your use-case scanning.
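One common source of data bias the tool can surface is representation skew, where some groups are over- or under-represented in a dataset. As a minimal sketch (the function name and equal-split baseline are illustrative assumptions, not the tool's actual method):

```python
from collections import Counter

def representation_skew(groups):
    """Deviation of each group's share of the data from an equal
    split -- a simple proxy for representation bias (illustrative)."""
    counts = Counter(groups)
    total = sum(counts.values())
    expected = 1 / len(counts)  # assumes an equal split as the baseline
    return {g: (n / total) - expected for g, n in counts.items()}

skew = representation_skew(["a", "a", "a", "b"])
# positive values mean over-representation, negative under-representation
```

In practice the baseline would come from your context (e.g. population statistics), not an equal split.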
Come back soon to use our fairness tool:
The fairness tool provides you with fair AI metrics based on your context to establish anti-discrimination policies and thresholds. Those new to fair AI are introduced to the domain while answering questions that narrow down which metrics apply in their AI domain.
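To illustrate what such a metric and threshold can look like, here is a minimal sketch of one widely used fairness metric, the demographic parity ratio, often checked against a 0.8 threshold (the "four-fifths rule"). The function and the example data are illustrative, not the tool's actual metric set:

```python
def demographic_parity_ratio(y_pred, groups):
    """Ratio of the lowest to the highest per-group selection rate.
    A ratio below 0.8 is a common red flag (four-fifths rule)."""
    rates = {}
    for g in set(groups):
        preds = [p for p, gg in zip(y_pred, groups) if gg == g]
        rates[g] = sum(preds) / len(preds)
    return min(rates.values()) / max(rates.values())

# group "a" is selected twice as often as group "b"
ratio = demographic_parity_ratio([1, 1, 0, 1, 0, 0],
                                 ["a", "a", "a", "b", "b", "b"])
```

Which metric is appropriate depends heavily on the domain, which is exactly what the tool's guided questions are meant to narrow down.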
Come back soon to use our explainability tool:
The explainability tool serves to educate you about, and let you preview, the best means of explaining model predictions to your users based on your use-case context.
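One model-agnostic explanation technique such a tool might preview is permutation importance: shuffle one feature and measure how much a quality metric drops. A minimal sketch, with all names and the toy model chosen for illustration only:

```python
import random

def permutation_importance(model, X, y, feature, metric, seed=0):
    """Drop in the metric after shuffling one feature column --
    a model-agnostic gauge of that feature's contribution."""
    base = metric(model(X), y)
    rng = random.Random(seed)
    shuffled = [row[:] for row in X]       # copy rows before mutating
    col = [row[feature] for row in shuffled]
    rng.shuffle(col)
    for row, v in zip(shuffled, col):
        row[feature] = v
    return base - metric(model(shuffled), y)

# toy model that only looks at feature 0
model = lambda X: [1 if row[0] > 0 else 0 for row in X]
accuracy = lambda preds, y: sum(p == t for p, t in zip(preds, y)) / len(y)
X = [[1, 5], [-1, 3], [2, 7], [-2, 1]]
y = [1, 0, 1, 0]
unused = permutation_importance(model, X, y, 1, accuracy)  # feature 1 is ignored
```

Which explanation technique is appropriate (global importance, per-prediction attributions, counterfactuals, and so on) depends on the audience and use case, which is what the tool's preview is for.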