We at anch.AI, like many others, have taken note of the massive attention and paradigm shifts brought on by the success of ChatGPT and other state-of-the-art Generative AI models. The need for Responsible AI across society is greater than ever. AI systems such as ChatGPT show massive benefits and disruptive potential, but also potential for harm through output that could endanger safety and encourage misinformation and discrimination. To help understand and navigate the opportunities and potential pitfalls around ChatGPT, we have decided to share our knowledge in the form of this guide for our Responsible AI community.
ChatGPT is part of the field of AI known as Generative AI. Generative AI is a set of algorithms capable of generating seemingly new, realistic content, such as text, images, or audio, from training data. It creates new data based on its training, instead of simply categorizing or identifying data as other AI does.
ChatGPT is a special type of Generative AI broadly known as a large language model (LLM), fine-tuned using supervised and reinforcement learning techniques. ChatGPT flexes its power by answering a massive array of text-based tasks posed by its users, from writing poetry to solving coding problems. It does this in a dialogue format, which makes it possible for it to answer follow-up questions, admit its mistakes, and challenge incorrect premises. Various versions of ChatGPT exist, with the latest being GPT-4, which offers a variety of novel capabilities such as image comprehension. The earlier version, GPT-3, was trained on around 45 terabytes of text data, roughly one million feet of bookshelf space, at an estimated cost of several million dollars.
Depending on the data domain you work with, you may be interested in automatically generating content using other types of generative models.
ChatGPT offers us a window into the next step of AI: artificial general intelligence (AGI). AGI has long been promised as existing or 'coming soon' in the AI community, and would be exhibited by an AI that possesses the intelligence to understand or learn novel intellectual tasks the way a human being can.
The most widely used and known AI solutions are 'narrow AI': not designed to pursue a wide variety of goals, but trained to succeed at single use-case applications. However, recent versions of ChatGPT are far from narrow, exhibiting many properties of advanced human intelligence, such as reasoning, problem solving, and abstract thinking across a wide variety of domains, accurately answering questions they have never been asked before.
Utilizing Generative AI language models like ChatGPT within an organization can provide a massive variety of lucrative business use cases.
There are unique inherent challenges in governing risks around generative AI applications that do not disclose their training data or underlying AI model.
We highlight some of the following below:
The EU Artificial Intelligence Act will be a watershed moment in the culture change needed for AI governance.
How ChatGPT will align with the EU AI Act is not yet fully known, and we explore this continuously at anch.AI through tools such as our AI Act Governance Sandbox (https://anch.ai/anch-ais-eu-ai-act-sandbox/). The EU's AI legislation, the AI Act, will likely enter into force next year, but it is already urgent that an investigation be launched, both into whether OpenAI violates GDPR and into how our human rights are handled in generative AI systems such as ChatGPT.
Requirements and obligations, including the need for conformity assessments, will depend entirely on the risk category of the AI solution under examination.
Thus, the first step is to understand the risk category of a given solution. These core risk categories range from prohibited risk, to high risk, to limited risk, to minimal risk.
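To make the tiering concrete, here is a minimal, hypothetical sketch of a lookup over the four risk categories. The example use cases are commonly cited illustrations from discussions of the draft Act, not a legal classification, and the code is our own illustration rather than the logic of any assessment platform:

```python
# Hypothetical illustration of the EU AI Act's four risk tiers.
# The tier names follow the draft Act; the example use cases are
# commonly cited illustrations, not a legal determination.
RISK_TIERS = {
    "prohibited": [
        "social scoring by public authorities",
        "subliminal manipulation causing harm",
    ],
    "high": [
        "CV screening for recruitment",
        "credit scoring",
        "remote biometric identification",
    ],
    "limited": [
        "customer-service chatbot",  # transparency obligations apply
        "deepfake generation",
    ],
    "minimal": [
        "spam filter",
        "AI in video games",
    ],
}

def risk_tier(use_case: str) -> str:
    """Return the risk tier for a known example use case, else 'unknown'."""
    for tier, examples in RISK_TIERS.items():
        if use_case in examples:
            return tier
    return "unknown"
```

In practice, classifying a real solution requires a case-by-case assessment against the Act's annexes rather than a fixed list like this; the sketch only shows how requirements hinge on which tier a use case falls into.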
Our anch.AI Ethical AI Governance Platform provides a quick and easy assessment to gauge the risk level of any AI use case at any stage of development with our EU AI Act Health Check feature.
Read our full guide to navigating your EU AI Act risk level and requirements here: https://anch.ai/blog/what-is-the-european-union-ai-act/
We are an end-to-end AI Governance SaaS platform. The platform is a manifestation of the state-of-the-art anch.AI five-step methodology. Our multidisciplinary research on governing AI ethical risks, published in Digital Society, started in 2016. Almost 250 AI use cases have been screened for ethical risk exposure using our risk management methodology. Through workflow and orchestration, the platform connects people, data and processes.
We are proud to announce that on April 18, 2023 we launched the first version of our Generative AI Assessment Module, which permits us to deep-dive into dimensions of risk pertinent to Generative AI use cases, such as harm to safety and misinformation.
To get a taste of how we explore special dimensions of ChatGPT, here is a sample list of questions we analyse and challenge all users to answer as part of the module:
How do you seek feedback from users and the wider community to improve ChatGPT’s performance, safety, and usefulness?
What measures are in place to detect and mitigate potential adversarial attacks or malicious uses of ChatGPT, such as impersonation or misinformation campaigns?
How do you plan to improve the inclusiveness and accessibility of ChatGPT, considering different languages, dialects, cultures, and user needs?
These questions, and many more, complemented by quantitative AI analysis, help us fully analyse the risk landscape in this domain. For more about this module please contact: email@example.com
We at anch.AI are both thrilled and cautious about the recent amazing innovations produced in the Generative AI domain. We ask our community to stay engaged: contribute to the debate on the topic while learning how to risk-manage your generative AI use cases on our platform within the context of the EU's AI legislation, the AI Act, which will likely enter into force next year. If you are interested but not yet participating, please join our free EU AI Act sandbox (https://anch.ai/anch-ais-eu-ai-act-sandbox/) and get a full assessment of your Generative AI or any other AI you may have in your inventory.
It was in 2016 that Anna realised artificial intelligence (AI) was becoming the new general-purpose technology: a technology that would drastically impact the economy, businesses, people and society at large. At the same time, she noticed that AI was also causing a negative externality, a new type of digital pollution. Consumers had opted in to receive the benefits of digitalisation, but were simultaneously facing a dark cloud of bias, discrimination and lost autonomy that businesses needed to be held accountable for. In the traditional environmental sustainability model, organisations are held accountable for physical negative externalities, such as air or water pollution, by outraged consumers and sanctions handed down by regulators. Yet no one was holding technology companies accountable for the negative externalities, the digital pollution, of their AI technology. Regulators had difficulty interpreting AI in order to regulate it appropriately, and customers didn't understand how their data was being used in the black box of AI algorithms.
Anna's multidisciplinary research group at the Royal Institute of Technology was the origin of anch.AI. Anna founded anch.AI in 2018 to investigate the ethical, legal and societal ramifications of AI. The anch.AI platform is an insight engine with a unique methodology for screening, assessing, mitigating, auditing and reporting exposure to ethical risk in AI solutions. anch.AI believes that all organisations must conform to their ethical values and comply with existing and upcoming regulation in their AI solutions, creating innovations that humans can trust. It is ethical insurance for companies and organisations.