
Interpretable deep-learning models to help achieve the Sustainable Development Goals


Artificial intelligence (AI) algorithms have the potential to support the Sustainable Development Goals (SDGs) of the United Nations (UN). However, one of the main limitations of current AI technology is the lack of interpretability of these models, which gives rise to concerns about trust and ethical use. In particular, decisions made by black-box AI models may not be accepted by governments or the general public, an issue addressed by the European Commission's ethical guidelines for 'trustworthy AI'.

In this article, Vinuesa and Sirmacek argue for the need for interpretable deep-learning models when using AI to achieve the SDGs.

Authors: Ricardo Vinuesa and Beril Sirmacek
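
The article is a commentary and does not include code; purely as an illustrative sketch, the snippet below shows one common post-hoc interpretability technique, Integrated Gradients (via the Captum library), attributing a toy network's prediction to its input features. The model, features, and target class here are hypothetical stand-ins and are not taken from the article.

```python
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

# A toy feed-forward network standing in for a black-box model
# (hypothetical architecture, not from the article).
model = nn.Sequential(
    nn.Linear(4, 16),
    nn.ReLU(),
    nn.Linear(16, 2),
)
model.eval()

# A single input with four hypothetical features.
x = torch.rand(1, 4, requires_grad=True)

# Integrated Gradients attributes the prediction for class 0
# back to the individual input features.
ig = IntegratedGradients(model)
attributions, delta = ig.attribute(x, target=0, return_convergence_delta=True)

print("Feature attributions:", attributions.detach().numpy())
print("Convergence delta:", delta.item())
```

Attribution methods like this are one way to make a deep model's individual decisions inspectable, which is the kind of transparency the authors argue is needed before such models are used in SDG-related decision-making.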
