Would you invest in a medical chatbot that advised a patient to kill themselves?

The AI world is on the cusp of a paradigm shift. Over the past two years, researchers have built ever bigger, more powerful AI systems (e.g. GPT-3, DALL-E and BERT). The sheer scale of these systems has opened the door to more generalised artificial intelligence: systems that can turn their hand to many different tasks and whose full capabilities can no longer be predicted with absolute certainty. This scale and adaptability could turn them into ‘foundation models’, with new start-ups building on them to create specific applications in health, law, finance and beyond.

That’s exactly what the French start-up Nabla tried to do in an experiment using GPT-3 to build a medical chatbot. The results were less than ideal: the chatbot suggested suicide to a mock patient and recommended recycling as a way to relieve stress. Of course, we’re not suggesting that Nabla would have released the GPT-3 chatbot, but the experiment shows that defects in these foundation models can be inherited by all the adapted models downstream, carrying significant risks with them.

Implementing technologies without regard for their impact is risky and increasingly unacceptable to wider society. The business leaders of the future who purchase, build and deploy such systems are likely to face costly repercussions, both financial and reputational. It goes without saying that business leaders hoping to shift their posture from hindsight to foresight need to better understand the types of risks they are taking on, their interdependencies and their underlying causes. But what about investors? Shouldn’t investors also be analysing the ethical AI risk of their portfolio companies? If companies need to understand their exposure in both business-to-business and business-to-consumer contexts, then so should the investors who have backed them.

Why should this be a concern for tech investors? According to the latest Edelman Trust Barometer 2021, trust in technology has declined. Being a trusted business is crucial: trust can determine not only whether your product or service takes off but also its credibility, something every investor wishes for. The decline in trust underlines how important it is for companies to own their narrative and be transparent about the benefits, as well as the possible downsides, of their technologies. The data proves the point: 54 percent of respondents to the Edelman Trust Barometer said that communicating the downsides of emerging technology would ultimately increase their trust. AI start-ups must take the time to think about the potential biases and harms that an algorithm might create, and proactively work to avoid these at every stage: collecting the data, creating the algorithm and deploying the algorithm. A further important step companies can take is to create a code of ethics governing their application of technology.
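To make those stage-by-stage checks concrete, here is a minimal sketch of one output-side bias check a start-up could run on its data before training and on its predictions after deployment: the demographic parity gap, i.e. the spread in positive-prediction rates across groups. The column names, the toy data and the 0.2 tolerance are illustrative assumptions, not a reference to any specific tool or recommended threshold.

```python
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, pred_col: str) -> float:
    """Spread between the highest and lowest positive-prediction rates
    across groups; 0.0 means every group is flagged at the same rate."""
    rates = df.groupby(group_col)[pred_col].mean()
    return float(rates.max() - rates.min())

# Toy audit data: model predictions alongside a protected attribute.
audit_df = pd.DataFrame({
    "gender":     ["f", "f", "f", "m", "m", "m"],
    "prediction": [1,    0,   0,   1,   1,   0],
})

gap = demographic_parity_gap(audit_df, "gender", "prediction")
if gap > 0.2:  # tolerance chosen purely for illustration
    print(f"Warning: demographic parity gap of {gap:.2f} exceeds tolerance")
```

Run on training labels at collection time and again on live predictions after deployment, the same check helps catch bias that creeps in at either end of the pipeline.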

Many of Koa’s clients, from employers to hospital systems, specifically remark on the importance of Koa’s ethics strategy to them, with many referring to the external audits as an important sign of its authenticity.

This positive reaction extends to Koa’s investors. Given the prominence of ethics in Koa’s strategy, all investors were quickly able to understand its relevance to success, particularly in relation to using AI to drive personalisation. When coupled with the positive feedback from clients, Koa’s investors continue to be highly supportive of its ethics strategy.

          – Oliver Smith, Strategy Director and Head of Ethics, Koa Health

Read more on how Koa built AI ethics into its product and strategy from inception

With growing tensions between governments and businesses over data privacy and AI regulation, and an erosion of trust between consumers and businesses, we can expect the long-term value of companies to be affected by AI regulation and by societal expectations of responsible tech behaviour. Investors are well placed to engage on this topic through the lens of stewardship and long-term sustainability. AI governance is an emerging yet critical ESG consideration in responsible investment.

Read more on how we need ESG criteria for AI investment

There are some important questions investors should ask when evaluating investment in AI start-ups (a toy screening sketch follows the list):

  • Does the start-up have a code of ethics or ethical AI framework?
  • Does the start-up have a set of best practices that includes consistent and open communication as well as immediate feedback about an algorithm’s output?
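As a toy illustration of how these questions can become a repeatable screen rather than a one-off conversation, the sketch below encodes them as a due-diligence record. The company name and the criteria fields are hypothetical, chosen only to mirror the questions above.

```python
from dataclasses import dataclass, field

@dataclass
class EthicalAIDueDiligence:
    """Illustrative due-diligence record for one AI start-up (hypothetical fields)."""
    company: str
    has_code_of_ethics: bool = False          # code of ethics / ethical AI framework?
    communicates_openly: bool = False         # consistent, open communication?
    monitors_algorithm_output: bool = False   # immediate feedback on algorithm output?
    notes: list[str] = field(default_factory=list)

    def red_flags(self) -> list[str]:
        """Return the unmet criteria as human-readable red flags."""
        checks = {
            "No code of ethics or ethical AI framework": self.has_code_of_ethics,
            "No consistent, open communication practice": self.communicates_openly,
            "No feedback loop on algorithm output": self.monitors_algorithm_output,
        }
        return [flag for flag, met in checks.items() if not met]

# Hypothetical portfolio candidate.
candidate = EthicalAIDueDiligence("ExampleHealth AB", has_code_of_ethics=True)
print(candidate.red_flags())
# ['No consistent, open communication practice', 'No feedback loop on algorithm output']
```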

Investors should also consider investing in tools that can evaluate the ethical use of AI and show how an algorithm makes its decisions.
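As one simple, model-agnostic example of what “showing how an algorithm makes decisions” can look like, the sketch below uses scikit-learn’s permutation importance to rank which inputs a model actually relies on. The model and data are synthetic stand-ins, not any particular vendor’s evaluation tool.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a start-up's decision model and data.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much does held-out accuracy drop when each
# feature is shuffled? Larger drops mean the model leans on that input more.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, drop in enumerate(result.importances_mean):
    print(f"feature_{i}: mean accuracy drop {drop:.3f}")
```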

The forthcoming EU AI regulation presents an opportunity for EU investors to develop a sustainable investment theme around AI. One of the recommendations of the High-Level Expert Group on AI was to establish a European Coalition of AI Investors that would create investment criteria taking ethical AI guidelines into account. Such a coalition should be encouraged to take the lead on EU-critical investments in high-risk areas such as health and machinery, thereby ensuring these start-ups embed ethical and sustainable AI that matches the EU’s regulatory approach to AI.

In mid-July 2021, the Chinese technology investor Tencent became co-owner of the Swedish medical app Doktor.se. This has caused nervousness among Swedish politicians and fear that conflicting approaches to integrity and democracy could affect Swedish welfare services. Human rights need to be ensured in all EU investments, for the responsible and sustainable scaling of tech start-ups in general and AI start-ups in particular.

Apart from these concerns, how will Doktor.se customers react to the fact that taxpayers’ money ends up in the pockets of a Chinese technology investor? It might turn out that, due to ethical considerations, potential clients turn their backs on Doktor.se, and what seemed to be a good investment does not give the expected return. EU investors need to be guided by ethical risk frameworks, including tools to screen and assess the exposure to ethical risk in their existing portfolio companies and future investments.

In mid-November, we are co-hosting a roundtable discussion with EU parliamentarian Jörgen Warborn, the Swedish representative on the European Parliament’s Committee on Artificial Intelligence, about these issues and how to scale AI responsibly and sustainably.

Anna Felländer

It was in 2016 that Anna realised that artificial intelligence (AI) was becoming the new general-purpose technology: one that would drastically impact the economy, businesses, people and society at large. At the same time, she noticed that AI was causing a negative externality, a new type of digital pollution. Consumers have opted in to receive the benefits of digitalisation, but simultaneously face a dark cloud of bias, discrimination and lost autonomy that businesses need to be held accountable for. In the traditional environmental sustainability model, organisations are held accountable for physical negative externalities, such as air or water pollution, by outraged consumers and by sanctions handed down by regulators. Yet no one was holding technology companies accountable for the negative externalities, the digital pollution, of their AI technology. Regulators have had difficulty interpreting AI in order to regulate it appropriately, and customers didn’t understand how their data was being used in the black box of AI algorithms.

Anna’s multidisciplinary research group at the Royal Institute of Technology was the origin of anch.AI. Anna founded anch.AI in 2018 to investigate the ethical, legal and societal ramifications of AI. The anch.AI platform is an insight engine with a unique methodology for screening, assessing, mitigating, auditing and reporting exposure to ethical risk in AI solutions. anch.AI believes that all organisations must conform to their ethical values and comply with existing and upcoming regulation in their AI solutions, creating innovations that humans can trust. It is a form of ethical insurance for companies and organisations.