In the rush toward tech tools to manage COVID-19, societal and ethical risks must be center stage

We are in a defining moment. The coronavirus pandemic has now affected more than three million people worldwide, and the world is desperately seeking ways to manage its toll on society. The speed and depth of the pandemic are forcing us to adopt drastic crisis-management strategies. Data-driven technologies, artificial intelligence (AI), and health tech applications are incredibly promising, especially when they are combined. But the low maturity of these technologies, and insufficient understanding of their ethical and societal impacts, pose risks to democracy and the right to privacy. We need to better understand the dangers of rushing toward these tech solutions without fully considering their societal and ethical implications.

Many are scrambling to find solutions and adequate responses that can save lives and ease suffering, track the spread of the virus, and find a way forward. While it is tempting to rush toward quick tech solutions, we need to think about the long-term threats and implications of the choices we make. We lack the tools to detect, measure, and govern how these tech solutions for COVID-19 scale in broader societal and ethical contexts. And we can’t lose sight of potential threats to democracy and the right to privacy in deploying AI surveillance tools to fight the pandemic. Citizens need transparency in how their personal data is collected and used, and assurance that tech solutions that take a more privacy-intrusive, surveillance-based approach to tracking the disease are not normalized in post-crisis times.

Even before the emergence of the novel coronavirus that causes COVID-19, digital health was a highly fragmented ecosystem. Multiple technologies demonstrate incredible promise in the field of health. Smartphones can provide information via apps that help you learn about or track your own health data. Mobile location data can provide valuable information about how a disease spreads, and location information and social media can be used for contact tracing. AI can help identify drugs that can treat a disease, predict its course, improve the accuracy of diagnosis, and analyze genetic data at scale. Telemedicine enables doctor-patient consultations anywhere in the world. Blockchain (a growing list of records, called blocks, that are linked using cryptography) can help us keep track of medical records, supply chains, and payments. Along with these technologies’ promise, however, comes the allure of data as the new gold that everyone wants to monetize. In digital health, for example, insurance companies are using data-driven technologies and AI without sufficiently considering and understanding the ethical consequences. Furthermore, the tech giants are set up to maximize their profits, and governments are poised to act boldly and fast.

The incentives to pursue these solutions clash with public skepticism and concerns about privacy protections. Four out of five Americans are worried that the pandemic will encourage government surveillance, according to a just-released survey from CyberNews. The survey also revealed 79 percent of Americans were either “worried” or “very worried” that any intrusive tracking measures enacted by the government would extend long after the coronavirus is defeated. Only 27 percent of those surveyed would give an app permission to track their location, and 65 percent said they would disapprove of the government collecting their data or using facial recognition to track their whereabouts.

Lack of governance and transparency will surely lead to an erosion of trust. Companies’ rush to develop technologies to track coronavirus infections is outpacing citizens’ willingness to use them. About half of Americans with smartphones say they are probably or definitely unwilling to download the apps being developed by Google and Apple, which would alert users who have come into contact with someone who is infected, according to a Washington Post-University of Maryland poll. That’s primarily because they don’t trust the tech companies to keep their data secure and private.
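Those apps are less intrusive than they may sound, because the underlying design avoids location data altogether. As a rough illustration in Python, here is the general idea of decentralized exposure notification: phones exchange short-lived random identifiers over Bluetooth, and matching against infected users’ identifiers happens on the device. This is a conceptual sketch only, not the actual Google/Apple Exposure Notification protocol, which specifies a full cryptographic key schedule.

```python
# Conceptual sketch of decentralized exposure notification.
# Illustrative only: the real Google/Apple system uses a specified
# cryptographic key schedule and Bluetooth LE, not this toy code.
import secrets

def new_rolling_id() -> bytes:
    """Each phone periodically broadcasts a fresh random ID,
    unlinkable to its owner's identity or location."""
    return secrets.token_bytes(16)

# Phone A broadcasts rotating IDs; phone B stores every ID it hears nearby.
a_ids = [new_rolling_id() for _ in range(3)]  # A rotates its ID over time
heard_by_b = set(a_ids)

# If A later tests positive, A publishes its recent IDs (never its location).
published_positive_ids = set(a_ids)

# B checks locally whether it heard any published ID. The match happens on
# the device, so no central server learns who met whom, or where.
exposed = bool(heard_by_b & published_positive_ids)
print("Possible exposure:", exposed)  # True
```

The relevant design choice is that only random identifiers ever leave the phone – no names, no GPS coordinates – which is precisely the kind of property companies should be communicating clearly to skeptical users.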

We need to find ways to balance smart solutions against the risks of a surveillance economy. We must consider, through an ethical and societal lens, who is benefiting – it may not always be the patient, the nurse, or the doctor. Being thoughtful about the potential ramifications is especially urgent when there is little to no supporting policy or regulatory framework. We need to be careful not to act impulsively and regret it later.

There are ways to approach this ethical dilemma responsibly. For example, researchers at Lund University in Sweden have launched an app (originally developed by doctors in the UK) to help map the spread of infection and increase knowledge of the coronavirus. Called the COVID Symptom Tracker, it lets the public report symptoms and thereby provides insights into the national health status. The free app is voluntary, does not collect personal data, and bases the user’s location on only the first two digits of the postal code to protect the user’s identity. No GPS data is collected, and the app does not in any way attempt to trace the user’s movements. Further, it is used for research, not commercial purposes.
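To make that data-minimization approach concrete, here is a minimal sketch in Python of how a symptom report can be stripped down before it ever leaves the device. The field names and structure are hypothetical illustrations, not the COVID Symptom Tracker’s actual schema.

```python
# Hypothetical sketch of device-side data minimization for a symptom report.
# Field names and structure are illustrative, not the actual app's schema.

def minimize_report(raw_report: dict) -> dict:
    """Keep only coarse, non-identifying fields before upload."""
    return {
        # Truncate the postal code to its first two digits: enough for
        # regional mapping, too coarse to point to an individual address.
        "postal_area": raw_report["postal_code"][:2],
        "symptoms": raw_report["symptoms"],  # e.g. ["fever", "cough"]
        "report_date": raw_report["date"],   # day-level, no timestamps
        # Deliberately dropped: name, full address, GPS coordinates,
        # device identifiers, movement history.
    }

raw = {
    "postal_code": "22362",
    "symptoms": ["fever", "loss_of_smell"],
    "date": "2020-04-28",
    "gps": (55.7047, 13.1910),  # known to the device, never uploaded here
    "name": "Jane Doe",         # never uploaded
}

print(minimize_report(raw))
# {'postal_area': '22', 'symptoms': ['fever', 'loss_of_smell'],
#  'report_date': '2020-04-28'}
```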

Another example is the Swedish telecom operator Telia Company, which provides mobility and data insights to cities, with anonymization features designed to protect citizen privacy. The solution can track where the disease is moving, but it is not privacy-intrusive: the data is anonymized and aggregated, and it does not identify individuals.
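The key safeguard in this kind of mobility analytics is aggregation with a minimum group size, so that no reported flow ever describes a single person. Here is a minimal sketch in Python, assuming a simple trip log and an illustrative suppression threshold; Telia’s actual pipeline is naturally far more sophisticated.

```python
# Minimal sketch of aggregated, threshold-suppressed mobility flows.
# The threshold value and data layout are illustrative assumptions.
from collections import Counter

MIN_GROUP_SIZE = 5  # suppress any flow observed for fewer than 5 people

def aggregate_flows(trips):
    """trips: iterable of (origin_region, destination_region) pairs,
    one per person. Returns only flows large enough to be non-identifying."""
    counts = Counter(trips)
    return {
        flow: n for flow, n in counts.items()
        if n >= MIN_GROUP_SIZE  # small groups are dropped, not reported
    }

trips = ([("Stockholm", "Uppsala")] * 12
         + [("Lund", "Malmö")] * 7
         + [("Kiruna", "Luleå")] * 2)  # too few people: will be suppressed

print(aggregate_flows(trips))
# {('Stockholm', 'Uppsala'): 12, ('Lund', 'Malmö'): 7}
```

Suppressing small groups rather than publishing them keeps the insights useful for epidemiology while making re-identification of any one commuter impractical.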

So, what is the best way to use tech to fight COVID-19? There is no panacea, but these recommendations can be helpful in addressing this dilemma going forward.

  • Given the obvious risks – privacy intrusion, bias, and discrimination – companies and other developers should take active measures to protect and preserve privacy, and should use and manage these tools wisely.
  • Companies should be transparent and publicly state how they are – and aren’t – using the data they collect as part of their pandemic response. A higher level of transparency is a growing expectation among employees and consumers alike: a recent digital advertising trends survey by Choozle found that 89 percent of consumers wish companies would take additional steps to protect their data.
  • Governments should act swiftly to make these technologies available, but should ensure appropriate frameworks and compliance tools are in place to prevent misuse or overuse of data.
Anna Felländer

It was in 2016 that Anna realised artificial intelligence (AI) was becoming the new general-purpose technology: a technology that would drastically impact the economy, businesses, people and society at large. At the same time, she noticed that AI was also causing a negative externality – a new type of digital pollution. Consumers have opted in to receive the benefits of digitalization, but are simultaneously facing a dark cloud of bias, discrimination and lost autonomy for which businesses need to be held accountable. In the traditional environmental sustainability model, organisations are held accountable for physical negative externalities, such as air or water pollution, by outraged consumers and by sanctions handed down by regulators. Yet no one was holding technology companies accountable for the negative externalities – the digital pollution – of their AI technology. Regulators have had difficulty interpreting AI in order to regulate it appropriately, and customers didn’t understand how their data was being used in the black box of AI algorithms.

Anna’s multidisciplinary research group at the Royal Institute of Technology was the origin of anch.AI. Anna founded anch.AI in 2018 to investigate the ethical, legal and societal ramifications of AI. The anch.AI platform is an insight engine with a unique methodology for screening, assessing, mitigating, auditing and reporting exposure to ethical risk in AI solutions. anch.AI believes that all organisations must conform to their ethical values and comply with existing and upcoming regulation in their AI solutions, creating innovations that humans can trust. It is a form of ethical insurance for companies and organisations.