View your ethical AI performance, get recommendations, and unleash the real power of AI. This is how it works!

01. Along the way, dashboards illustrate your ethical AI performance from different perspectives. You can learn more about these dimensions below, and you can always come back to these explanations later.

02. The charts show responses to the completed questionnaire(s). The percentage share of responses is categorized by color according to the following:

‘Yes’ (green) implies that the specific requirement that the question asks about is implemented.

‘In progress’ (yellow) means that there is documented progress of implementing the requirement, although not fully completed.

‘Not sure’ (orange) implies uncertainty.

‘No’ (red) is the most negative response and indicates potential risk exposure.
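The category shares described above amount to a simple tally over questionnaire responses. As a minimal sketch (the category identifiers and function below are illustrative, not the platform's actual API), the percentage share per color could be computed like this:

```python
from collections import Counter

# Hypothetical response categories mapped to their dashboard colors
# (names are illustrative, not the platform's real identifiers).
CATEGORIES = {
    "yes": "green",          # requirement is implemented
    "in_progress": "yellow", # documented progress, not fully completed
    "not_sure": "orange",    # uncertainty
    "no": "red",             # potential risk exposure
}

def percentage_shares(responses):
    """Return the percentage share of each category across all responses."""
    counts = Counter(responses)
    total = len(responses) or 1  # guard against an empty questionnaire
    return {cat: round(100 * counts.get(cat, 0) / total, 1)
            for cat in CATEGORIES}

shares = percentage_shares(["yes", "yes", "no", "in_progress"])
# {"yes": 50.0, "in_progress": 25.0, "not_sure": 0.0, "no": 25.0}
```

Each chart segment then corresponds to one category's share, colored per the mapping.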

03. The charts of your ethical AI performance come with recommended actions, opportunities, and monitoring suggestions. We recommend re-screening after the recommended actions are completed. In this way, you can follow your journey towards responsible AI adoption across your organization and report progress to stakeholders.

*The performance results and recommendations viewed on this platform are based on the responses to the questionnaires. anch.AI is not responsible for the accuracy of these responses and therefore cannot guarantee that the recommended actions will mitigate your exposure to ethical AI risks.

part one

Root causes of ethical AI risks

Below are the four root causes of ethical AI risks in data-driven technology and AI solutions.


Overuse of data

The AI application or solution could be overly intrusive (drawing on data that is too broad or too deep), or it could be used for unintended purposes by others.


Immature data/AI

Insufficient training of algorithms on datasets, as well as lack of representative data, could lead to incorrect and unethical recommendations.


Data bias

The data available is not an accurate reflection of reality or the preferred reality and may lead to incorrect and unethical recommendations.


Bias of the creator

Values and bias are intentionally or unintentionally programmed by the creator who may also lack knowledge/skills of how the solution could scale in a broader context.

part two

Requirements for ethical AI

Below are four requirements that all organizations working with data-driven technology and AI should have in place.



Establishment of policies, principles and/or protocols, and continuous monitoring of their proper implementation. Creating scalable control systems.



Being able to stand accountable and justify one's decisions and actions to partners, users and others with whom the system interacts. Taking responsibility for the full solution.



Being able to discover, trace and detect how and why a system made a particular decision or acted in a certain way, and, if a system causes harm, to discover the root cause. Being transparent towards stakeholders.



Ensuring that algorithmic decisions, as well as any data driving those decisions, can be explained to end users and other stakeholders in non-technical terms. Ensuring the appropriate level of explainability for the relevant stakeholders.

part three

Ethical AI risks

Eight categories of risk, developed from research perspectives, that comply with Nordic values and are backed by global human rights legislation.

01. Privacy intrusion

AI and data-driven solutions interfering with personal or sensitive data without regard to: the consent of the individuals or groups whose data is collected, how data is shared or stored, compliance with the law, or other legitimate needs to protect the best interests of an individual or group. (Right to privacy) For example, a government health agency does not inform the public that their health data will be given to third-party AI solutions to help manage medical equipment during the corona crisis. As a result, individuals feel violated. You would risk a loss of reputation and of trust among the related stakeholders.

02. Amplified discrimination

AI and data-driven solutions which cause, facilitate, maintain, or increase prejudicial decisions, treatment, and/or biases towards race, sex, or any other protected group entitled to equal treatment. (Right to fair treatment) For example, an AI legal decision support system is built which is unfairly more likely to give harsher sentences to Black defendants than to white defendants who commit the same crimes and have the same or worse criminal history. Certain individuals receive unfairly harsh punishments due to their protected status.

03. Violation of autonomy and independent decision making

AI and data-driven solutions which intentionally or unintentionally, and without consent, facilitate behavioural changes that manipulate independent decision making and social well-being. (Right to autonomy) For example, a gaming company implements AI which learns how to maximize playtime and payments from young children through dopamine rewards; children become addicted to video games and develop antisocial behaviour, with some potentially affected for life by reduced social capabilities and acquired dopamine-seeking behaviour. Lawsuits could be brought against organization stakeholders for intentional manipulation.

04. Social exclusion and segregation

AI and data-driven solutions contributing to or maintaining an unfair denial of resources, rights, goods, and the ability to participate in normal relationships and activities, whether in economic, social, cultural, organizational or political arenas. (Right to inclusion) For example, larger shareholders of a company get exclusive access to a financial AI system, giving them better knowledge of when to buy or sell shares; the rich get richer and greater distrust of financial systems results.

05. Harm to Safety

AI and data-driven solutions facilitating unwanted physical harm to an individual or organization, stemming from underdeveloped AI and attributed to negligence from an organization. (Right to physical safety) For example, a self-driving car crashes and kills the driver due to an insufficiently trained AI driving aid. Individuals face serious direct physical harm, and organization stakeholders face massive reputation losses and loss of capital through e.g. insurance claims.

06. Harm to Security of Information

AI and data-driven solutions facilitating potential damage from unauthorized access to private data, due to faulty data protection and processing, or criminal activity. (Right to security of information) For example, an organization did not keep its data in a secured database, resulting in a security breach. Individuals face harm by having this data used against them, e.g. for identity fraud, and organization stakeholders lose capital and reputation.

07. Misinformation and Disinformation

AI and data-driven solutions which intentionally or unintentionally distribute information that has universally been declared false and harmful to society. (Right to be informed) For example, a consumer health device is released to predict heart health, including heart defects. The user is not informed that the device has a high error rate for failing to detect certain defects, and falsely believes they do not have a particular heart defect when they indeed do.

08. Prevention of access to public service

AI and data-driven solutions contributing to or maintaining a denial of public social assistance and services. (Right to public service access) For example, a solution is deployed without consideration for usability and clinical workflows. The result is that the solution is not used in practice because users do not understand its output or lack sufficient time to use it. This denies access to a potentially vital service.

part four

Perspectives & Roles

AI ethical risks are shared across organizational functions. Joint accountability creates a frictionless highway for trustworthy AI. The key is visualizing critical business decisions, ethical considerations and trade-offs for an activated cross-organizational core. Our organizational orchestration helps you define that core and thereby conform to organizational values and comply with existing and upcoming regulations.

The three perspectives
The screening of a Use Case comes with different perspectives. Here, a perspective means a viewpoint from one or more people with expert knowledge in a specific field or domain. A role is defined as the person(s) within a perspective who are responsible for the execution of a specific function. You can filter your results on these three perspectives:

Business includes the roles: product, outreach, responsibility and people.

Legal & Compliance area includes one role: legal.

IT includes two roles: data and technology.
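The perspective-to-role grouping above can be sketched as a simple mapping used to filter screening results. This is a minimal, hypothetical illustration; the function and data shapes below are assumptions, not the platform's actual interface:

```python
# Hypothetical mapping of the three perspectives to their roles,
# mirroring the list above (identifiers are illustrative).
PERSPECTIVES = {
    "Business": ["product", "outreach", "responsibility", "people"],
    "Legal & Compliance": ["legal"],
    "IT": ["data", "technology"],
}

def filter_by_perspective(results, perspective):
    """Keep only results answered by roles within the given perspective."""
    roles = set(PERSPECTIVES[perspective])
    return [r for r in results if r["role"] in roles]

results = [
    {"role": "legal", "response": "yes"},
    {"role": "data", "response": "no"},
]
it_results = filter_by_perspective(results, "IT")
# [{"role": "data", "response": "no"}]
```

Filtering on a perspective thus narrows the dashboard to the responses given by that perspective's roles.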