Why “do no harm” must always guide the use of AI in healthcare

Advanced technologies such as artificial intelligence (AI) create unprecedented opportunities and great expectations in healthcare. New interactions between humans and machines arise. However, education and training at all levels of healthcare lag behind.

Today, new knowledge in the field of healthcare and medicine emerges so rapidly that by the time newly graduated doctors have completed their specialist training, much of what they learned has already become obsolete. In order to ensure quality and patient safety and to benefit from the positive opportunities that AI-based technology creates, new insights, investments and focus must be at center stage. A new report from the Centre for Business and Policy Studies in Sweden (SNS) identifies a number of measures to improve the use of AI in the future of healthcare and medicine.

When new technologies replace old ones, physician training must ensure that they are used in a safe, effective and evidence-based manner. AI is often described as a gray area in medicine and thus requires special attention in order to safeguard patients’ interests in both the short and long term.

The availability of, and trust in, AI-based medical decision support are increasing, even though education and training are lagging behind. The healthcare sector’s initially restrictive attitude is giving way to rapid expansion. Large amounts of data are considered the new “gold”. There is therefore a need for education, as well as critical scrutiny and discussion of the ethical implications of data management and AI.

When designing algorithms for decision support, measurements and reference values must be carefully defined in collaboration with the medical profession. Systematic monitoring of the algorithms, in close collaboration with experts, is necessary. Bias and confounders must be managed, since the origin of the data on which the algorithms are trained can lead to erroneous interpretations and potential harm.

From an ethical perspective, understanding, insight and transparency regarding potentially self-learning and automated decision support tools are necessary before they can be commercialized and scaled up. It must also be defined what level of explainability and transparency the responsible physician requires in order to trust increasingly autonomous decision support tools. Transparency is necessary for the regulation of commercial AI products, both so that they can receive appropriate learning feedback and to meet society’s need for accountability when new products lead to unwanted or unexpected outcomes. The matter of responsibility must be clarified: there must be no doubt about who is to be held accountable when AI becomes a third party alongside the physician and the patient in healthcare.

A couple of examples highlight the challenges of applied AI in the healthcare sector: apps for contraception and digital doctors for “triage” aimed at the general public. In the contraception case, the app lets the user register her body temperature on a regular basis and assesses the likelihood of pregnancy based on temperature changes related to the fertile periods of the menstrual cycle. According to recent media reports, the app received hundreds of complaints in connection with unwanted pregnancies, prompting the Swedish Medical Products Agency (MPA) to scrutinize and review it. The MPA gave the app a green light and concluded the review once the user instructions had been revised to state the risk of pregnancy. A clear ethical dilemma may arise if the user is unable to understand or interpret the user instructions and product description. Digital doctors, in turn, offer virtual, around-the-clock health check-ups via a chatbot. Even though this kind of platformisation of healthcare holds great potential for efficiency and scalability, there is a risk of bias and confounders in the data. The risks of under-diagnosis, misdiagnosis and over-diagnosis are potentially serious matters in terms of accountability and ethics.

The large amounts of data required to develop AI create value conflicts between the regulations and laws that protect integrity and patient information – an important part of medical ethics – and the access to large amounts of patient data that the development of AI tools requires. We must also be able to ask, and discuss, how much consideration should be given to patients when their information is released to train AI. A clash may evolve between the value of innovation on the one hand and individual integrity on the other. Ethics has always been a central part of both medical science and healthcare practice. A new chapter addressing new technology and AI is undoubtedly needed. Pure enthusiasm, bounty hunting and the notion of value creation must not override the interests of patient integrity. The principle primum non nocere, i.e. first of all do no harm, must always be a guiding principle.

Anna Felländer

It was in 2016 that Anna realised artificial intelligence (AI) was becoming the new general-purpose technology: a technology that would drastically impact the economy, businesses, people and society at large. At the same time, she noticed that AI was also causing a negative externality — a new type of digital pollution. Consumers had opted in to receive the benefits of digitalization, but were simultaneously facing a dark cloud of bias, discrimination and lost autonomy for which businesses needed to be held accountable. In the traditional environmental sustainability model, organisations are held accountable for physical negative externalities, such as air or water pollution, by outraged consumers and sanctions handed down by regulators. Yet no one was holding technology companies accountable for the negative externalities — the digital pollution — of their AI technology. Regulators had difficulty interpreting AI well enough to regulate it appropriately, and customers did not understand how their data was being used in the black box of AI algorithms.

Anna’s multidisciplinary research group at the Royal Institute of Technology was the origin of anch.AI, which she founded in 2018 to investigate the ethical, legal and societal ramifications of AI. The anch.AI platform is an insight engine with a unique methodology for screening, assessing, mitigating, auditing and reporting exposure to ethical risk in AI solutions. anch.AI believes that all organisations must conform to their ethical values and comply with existing and upcoming regulation in their AI solutions, creating innovations that humans can trust. It is a form of ethical insurance for companies and organisations.