ChatGPT’s rapid entry – Urgent for the government to appoint an authority for responsible AI

The much-discussed AI solution ChatGPT challenges us in a completely new way. Alongside the opportunities that generative AI systems create, we face a historically important crossroads: should AI control humans, or the other way around? Swedish business and Swedish authorities need an ethical approach to AI, and above all support in the form of a risk assessment framework to guide the development of AI solutions in accordance with our values. In addition, the EU's legislation on responsible AI is around the corner. It is urgent that the government appoint a responsible authority for ethical AI.

ChatGPT and other generative AI systems (1) create high expectations of a long-awaited democratization of AI. More people can take part and create new content produced by algorithms trained on historical data. The threshold for organizations to realize the value of the technology is lowered, and innovative new business models arise, focused on strengthening customer and citizen relations.

At the same time, individuals and society are exposed every day to unintended ethical and legal breaches as a result of unregulated AI solutions. One reason is that AI models learn from historical data that may contain unwanted bias, which can lead to social exclusion and discrimination. Violations also occur, for example, when individuals approve the use of their data in a certain context without awareness of how algorithms can combine it with other data to create insights that violate privacy. This is costly for companies and authorities, for example through withdrawn investments and reputational disasters. The resignation of the Dutch government in 2021, for instance, followed revelations that an AI solution had falsely accused parents receiving childcare benefits of fraud. In another case, a medical chatbot built on OpenAI's GPT-3 was tested and ended up recommending that a simulated patient kill themselves. (2)
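To make the bias mechanism concrete, the minimal Python sketch below (with entirely hypothetical loan-approval data) computes one simple fairness metric, the demographic parity gap: the difference in positive-decision rates between groups. A large gap is one signal that a model may have absorbed unwanted bias from its historical training data.

```python
# Minimal sketch, hypothetical data: measuring one simple fairness metric,
# the demographic parity gap, on a model's decisions.

def demographic_parity_gap(decisions, groups):
    """Difference in positive-decision rates between the groups."""
    rates = {}
    for g in set(groups):
        group_decisions = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(group_decisions) / len(group_decisions)
    values = sorted(rates.values())
    return values[-1] - values[0]

# Toy loan-approval decisions (1 = approved) for applicants in groups A and B.
decisions = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_gap(decisions, groups)
print(f"Demographic parity gap: {gap:.2f}")  # 0.60: group A approved far more often
```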

Generative AI systems have a higher degree of ethical risk exposure than other AI solutions. One reason is that generative AI systems are typically used through APIs, which gives users little transparency into the solution and little control over its development. At the same time, ethical and legal breaches occur subtly and suddenly, each too small to detect but with the risk of exponential damage. One example is deepfakes, in which a face or a voice in a video or photo is manipulated. API-based generative AI models are not inherently risky, but API access lowers the barrier to using the technology, which can increase the risk of abuse; in the end, how the models are used and the data they are trained on determine the risk.
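As an illustration of the API pattern described above, the sketch below consumes a generative model through a hosted endpoint. The endpoint and field names follow OpenAI's public chat completions API at the time of writing and may differ for other providers; the point is that the caller sees only the request and response, while the model's internals, training data, and updates remain opaque.

```python
# Minimal sketch: consuming a generative model through a hosted API.
# The caller controls only this request/response surface; the model itself
# is a black box operated by the provider.
import os
import requests

response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "user", "content": "Summarize the EU AI Act in one sentence."}
        ],
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```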

The EU's AI legislation, the AI Act, comes into force in 2024. The goal is for citizens and consumers to feel trust and for the EU's values, including human rights, to be upheld. In the AI Act, the EU Commission proposes that member states must appoint or establish at least one supervisory authority responsible for ensuring that the "necessary procedures are followed." Given the enormous potential of generative AI systems, not least in healthcare (for example, to correctly diagnose diseases and optimize their treatment), it is crucial that risk assessment frameworks are available to work from and that a responsible authority is appointed. AI ethical frameworks strengthen the climate for innovation in all sectors. It is therefore urgent to appoint this responsible authority and to make risk assessment frameworks available (3), so that Swedish business and Swedish authorities do not meet the AI Act with the lack of vigilance that characterized their response to the GDPR.

1 Generative AI refers to artificial intelligence that can generate novel content, rather than simply analyzing or acting on existing data. Generative AI models produce text and images: blog posts, program code, poetry, and artwork. The software uses complex machine learning models to predict the next word based on previous word sequences, or the next image based on words describing previous images. In the near term, generative AI is being used to create marketing content, generate code, and power conversational applications such as chatbots.
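As a toy illustration of the "predict the next word" principle in this footnote, the sketch below builds a simple bigram counter over a tiny corpus. Real generative models use large neural networks conditioned on long contexts, but the basic idea, choosing a likely continuation given what came before, is the same.

```python
# Toy sketch of next-word prediction: count, for each word in a tiny corpus,
# which words follow it, then predict the most frequent continuation.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the sofa".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation seen after `word`."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # 'cat' (seen twice, vs. 'mat' and 'sofa' once each)
```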

2 https://www.artificialintelligence-news.com/2020/10/28/medical-chatbot-openai-gpt3-patient-kill-themselves/

3 E.g.: https://anch.ai/publications/achieving-a-data-driven-risk-assessment-methodology-for-ethical-ai-2/

Anna Felländer

It was in 2016 that Anna realised that artificial intelligence (AI) was becoming the new general-purpose technology: a technology that would drastically impact the economy, businesses, people and society at large. At the same time, she noticed that AI was also causing a negative externality, a new type of digital pollution. Consumers had opted in to receive the benefits of digitalization, but were simultaneously facing a dark cloud of bias, discrimination and lost autonomy for which businesses needed to be held accountable. In the traditional environmental sustainability model, organisations are held accountable for physical negative externalities, such as air or water pollution, by outraged consumers and by sanctions handed down by regulators. Yet no one was holding technology companies accountable for the negative externalities, the digital pollution, of their AI technology. Regulators have had difficulty interpreting AI in order to regulate it appropriately, and customers did not understand how their data was being used in the black box of AI algorithms.

Anna's multidisciplinary research group at the Royal Institute of Technology was the origin of anch.AI. Anna founded anch.AI in 2018 to investigate the ethical, legal and societal ramifications of AI. The anch.AI platform is an insight engine with a unique methodology for screening, assessing, mitigating, auditing and reporting exposure to ethical risk in AI solutions. anch.AI believes that all organisations must conform to their ethical values and comply with existing and upcoming regulation in their AI solutions, creating innovations that humans can trust. The platform serves as ethical insurance for companies and organisations.