A policy approach to handle ChatGPT in your organization

ChatGPT has been making headlines for its breakthrough ability to understand and generate text across a multitude of topics: it can hold conversations, answer questions, provide explanations, and more. Other generative AI applications, such as Synthesia, Copy.ai and GitHub Copilot, have since entered the market. Today these tools are mostly used by individuals in organizations on their own initiative. At the same time, ChatGPT’s API integrations allow organizations to create custom versions of ChatGPT and connect them to internal interfaces. Some organizations have banned the use of ChatGPT altogether, while others have left their employees without any guardrails. An internal policy on how and when to use generative AI applications and integrations is the best approach to maximize ROI while keeping risk at an appropriate level.
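As a concrete illustration, such an integration can be as simple as calling the vendor's chat API with an organization-wide system prompt that encodes the policy. The sketch below is a minimal example assuming the OpenAI Python SDK (v1); the model name, the "ExampleCorp" wording and the prompts are hypothetical placeholders, and the exact setup would depend on the organization's contract and use case.

```python
from openai import OpenAI

client = OpenAI()  # expects the OPENAI_API_KEY environment variable to be set

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; use whichever model your agreement covers
    messages=[
        {"role": "system",
         "content": "You are an internal assistant for ExampleCorp. Answer only questions "
                    "about internal policies and refuse requests involving personal data."},
        {"role": "user", "content": "Summarise our travel expense policy."},
    ],
    temperature=0.2,
)

print(response.choices[0].message.content)
```

The policy lives in the system prompt here, which keeps the guardrail centralised rather than relying on each employee to phrase their requests carefully.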

Because generative AI applications are accessible to everyone, governing the risks around them is challenging. There are privacy and security risks: the model learns from user input, which creates a risk of personal data being collected. There is also a risk of bias and discrimination, since the model is trained on historical data and can inadvertently exhibit biased behaviour, including discriminatory responses or actions. ChatGPT also lacks contextual understanding and does not fully grasp the nuances of natural language. Beyond these risks, all such models raise concerns about transparency and accountability. Furthermore, ChatGPT can generate faulty results, content in a regulatory grey zone, or output prone to both ethical and legal breaches.

Employees at any organization are best served by guidance on the contexts in which they may use ChatGPT for internal processes, when they need to approach its output critically, and when its use is not appropriate at all. Restrictions and protocols are needed to mitigate the risks.
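One way to operationalize such restrictions is a lightweight pre-submission guardrail that blocks prompts falling outside approved contexts or containing obvious personal data. The sketch below is illustrative only: the pattern list, the approved contexts and the function name prompt_allowed are hypothetical placeholders, and a production setup would rely on a vetted data-loss-prevention service rather than a handful of regular expressions.

```python
import re

# Illustrative patterns only; a real policy would use a vetted PII/DLP service.
BLOCKED_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),     # US social security number format
    re.compile(r"\b\d{16}\b"),                # bare 16-digit card number
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),   # email address
]

# Hypothetical set of use cases the organization has approved for generative AI.
APPROVED_CONTEXTS = {"drafting", "summarisation", "code_review", "internal_faq"}

def prompt_allowed(prompt: str, context: str) -> bool:
    """Allow a prompt only if its context is approved and no blocked pattern matches."""
    if context not in APPROVED_CONTEXTS:
        return False
    return not any(p.search(prompt) for p in BLOCKED_PATTERNS)

print(prompt_allowed("Summarise this meeting note", "summarisation"))                    # True
print(prompt_allowed("Customer jane.doe@example.com complained", "summarisation"))       # False
```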

One solution is for organizations to use custom versions of ChatGPT with organization-specific knowledge and explicit restrictions. Such a customization can assist employees with their tasks and lessen the impact of certain risks - but only if the model is well trained and the restrictions are set appropriately. A customized and restricted generative AI model built on internal data can be used for the following purposes:

- Task automation: an AI chatbot can automate repetitive tasks, such as scheduling appointments, setting reminders, and sending notifications.
- Customer service: an AI chatbot can handle customer inquiries, provide product information, help with order tracking, and assist with returns and refunds.
- Information retrieval: an AI chatbot can retrieve information from databases, knowledge bases, and other sources within the organization.
- Training and onboarding: an AI chatbot can support employee training and onboarding by providing training materials, quizzes, and assessments.
- Collaboration and communication: an AI chatbot can support internal communication and collaboration by providing updates, notifications, and reminders to employees.
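To make the information-retrieval use case concrete, the sketch below shows one possible shape of a restricted internal assistant: it answers only from retrieved internal snippets and is instructed to refuse otherwise. It assumes the OpenAI Python SDK (v1); INTERNAL_DOCS, the model name and the system prompt are hypothetical stand-ins for an organization's own document store and policy wording.

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical internal knowledge snippets; in practice these would come from a
# document store or search index inside the organization.
INTERNAL_DOCS = {
    "onboarding": "New hires complete security training within the first week.",
    "returns": "Customers may return products within 30 days with a receipt.",
}

def answer_from_internal_docs(question: str, topic: str) -> str:
    """Answer a question using only the retrieved internal snippet as context."""
    context = INTERNAL_DOCS.get(topic, "")
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[
            {"role": "system",
             "content": "Answer strictly from the provided context. "
                        "If the context does not contain the answer, say so."},
            {"role": "user", "content": f"Context: {context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

print(answer_from_internal_docs("How soon must new hires finish security training?", "onboarding"))
```

Constraining the assistant to retrieved internal content is one way to set the "specified restrictions" described above, since the model is never asked to answer from its general training data alone.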

To implement generative AI securely within an organization, it is crucial to define the use-case context and the desired value, and to quantify the risks. The training data must be sufficient, diverse, and legally compliant. After deployment, a risk management and monitoring approach is needed to continuously evaluate the model's performance and user satisfaction.
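Monitoring can start with something as simple as an audit log of every prompt, response and user rating, reviewed regularly against the organization's risk criteria. The sketch below is a minimal, assumed example: LOG_PATH, the rating field and the keyword-based flagging rule are placeholders for whatever the organization's risk framework actually prescribes.

```python
import json
import time

LOG_PATH = "genai_usage_log.jsonl"  # hypothetical audit log location

def log_interaction(prompt: str, response: str, user_rating: int | None = None) -> None:
    """Append one prompt/response pair to an audit log for later risk review."""
    record = {
        "timestamp": time.time(),
        "prompt": prompt,
        "response": response,
        "user_rating": user_rating,  # e.g. a 1-5 satisfaction score, if collected
        # Placeholder rule marking responses that should be reviewed by a human.
        "flagged": any(k in response.lower() for k in ("legal", "contract", "personal data")),
    }
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_interaction("Summarise the returns policy",
                "Customers may return products within 30 days.",
                user_rating=5)
```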

Anna Felländer

It was in 2016 that Anna realised that artificial intelligence (AI) was becoming the new general-purpose technology: a technology that would drastically impact the economy, businesses, people and society at large. At the same time, she noticed that AI was also causing a negative externality, a new type of digital pollution. Consumers had opted in to receive the benefits of digitalization, but were simultaneously facing a dark cloud of bias, discrimination and lost autonomy for which businesses needed to be held accountable. In the traditional environmental sustainability model, organisations are held accountable for physical negative externalities, such as air or water pollution, by outraged consumers and by sanctions handed down by regulators. Yet no one was holding technology companies accountable for the negative externalities, the digital pollution, of their AI technology. Regulators had difficulty interpreting AI in order to regulate it appropriately, and customers did not understand how their data was being used inside the black box of AI algorithms.

Anna’s multidisciplinary research group at the Royal Institute of Technology was the origin of anch.AI. Anna founded anch.AI in 2018 to investigate the ethical, legal and societal ramifications of AI. The anch.AI platform is an insight engine with a unique methodology for screening, assessing, mitigating, auditing and reporting exposure to ethical risk in AI solutions. anch.AI believes that all organisations must conform to their ethical values and comply with existing and upcoming regulation in their AI solutions, creating innovations that humans can trust. It acts as ethical insurance for companies and organisations.