Interpretable deep-learning models to help achieve the Sustainable Development Goals
Artificial intelligence (AI) algorithms have the potential to support the Sustainable Development Goals (SDGs) of the United Nations (UN). However, one of the main limitations of current AI technology is the lack of interpretability of these models, which raises concerns about trust and ethical use. In particular, decisions made by black-box AI models may not be accepted by governments or the general public, an issue addressed by the European Commission's ethical guidelines for 'trustworthy AI'.
In this article, Ricardo Vinuesa and Beril Sirmacek argue for the need for interpretable deep-learning models when using AI to achieve the SDGs.
Authors: Ricardo Vinuesa and Beril Sirmacek