On 30 November 2022, artificial intelligence (AI) research laboratory OpenAI released ChatGPT, marking a significant development in AI. Underpinning ChatGPT is OpenAI’s large language model: a model trained on a vast quantity of text so that it can carry out a wide range of general language tasks, rather than being trained to carry out just one specific task. In ChatGPT’s case, that generality shows in its ability to chat on almost any subject. Other similar chatbots have since been released, including Google’s Bard, Anthropic’s Claude and Microsoft’s Bing AI.
The significance of AI was recently summed up by Microsoft co-founder Bill Gates in his 21 March 2023 GatesNotes blog post: “The development of AI is as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone. It will change the way people work, learn, travel, get health care, and communicate with each other. Entire industries will reorient around it. Businesses will distinguish themselves by how well they use it.”
The flurry of activity that ChatGPT and others have created prompted an open letter on 22 March 2023 calling for a six-month pause on developing AI systems that can compete with humans. The letter received a reported 30,000 signatures, including those of Elon Musk and Apple co-founder Steve Wozniak. AI was also discussed at the G7 summit in Hiroshima, with the leaders’ communiqué of 20 May noting the importance of “trustworthy AI”, encouraging “the development and adoption of international technical standards” and calling for discussions by year end on “governance, safeguard of intellectual property rights including copyrights, promotion of transparency, response to foreign information manipulation, including disinformation, and responsible utilization of these technologies”.
While regulation in this space is still in catch-up mode, there have been some recent guidance and discussion papers that can help businesses contemplating the use of ChatGPT and other AI.
UK guidance for boards on using AI
The first of these is guidance for boards on the ethical use of AI, published in March 2023 by the United Kingdom’s Institute of Directors (UK IOD) Science, Innovation and Technology Expert Advisory Group. In releasing the paper, the UK IOD notes the importance of AI being on the board agenda and “considered seriously as part of the G in ESG (Environmental, Social and Governance) and the CSR (Corporate Social Responsibility) requirements”.
The paper sets out 12 principles for boards to consider. While it is written for the UK context (and so references some of the recent UK and European regulation in this area), the principles are equally relevant in New Zealand. In discussing each principle, the paper poses a number of questions that boards should ask themselves when contemplating AI. The key elements can be summarised as follows:
- monitor the regulatory environment;
- consider the business’ risk appetite for using AI;
- ensure the goals in using AI align with the business’ vision and values;
- follow a privacy- and security-by-design approach to AI systems;
- have a way of detecting and addressing bias; and
- develop and use AI within an ethical framework.
The guidance makes a number of other points too, and we recommend that boards read the full document; it is a short paper and easy to read. You can access it here.
We also note a degree of commonality between the principles and other comments in the paper and New Zealand’s (world-first) Algorithm Charter, released in 2020, which sets out principles to be followed when using AI in government. However, the UK IOD guidance is board-focused. While the NZ IOD has not produced similar guidance, it has commented on the topic in relation to a recent Court decision, which we discuss below.
New Zealand regulation and policy developments in AI
A key recent development is the Privacy Commissioner’s exploration of a Privacy Code of Practice for biometrics (read our article on this here). In addition, on 25 May guidance was released setting out the Privacy Commissioner’s expectations of organisations using generative AI, such as ChatGPT.
The guidance outlines key risks, similar to the points discussed in this article, and sets the following expectations for the use of such AI:
- obtain explicit approval from senior leadership after considering risks and mitigations;
- consider whether an alternative (non-AI) tool could be used instead;
- conduct a privacy impact assessment;
- be transparent about the AI tool’s use, including how privacy risks are being addressed;
- develop procedures for compliance with the Privacy Act 2020’s requirements for using accurate information and dealing with access and correction requests;
- have a human review outputs to mitigate the risk of inaccurate or biased information; and
- ensure that personal or confidential information is not retained or disclosed by the AI tool provider.
AI has also been before the Courts, in a case that illustrates governance implications boards should consider in the context of intellectual property. In Thaler v Commissioner of Patents [2023] NZHC 554, the High Court considered whether an AI called DABUS could be recognised as an inventor in a patent application. The invention in question was a new type of food container created by DABUS. In its decision, released in March, the Court ruled that the definition of an inventor did not include AI, taking the view that this was a matter for Parliament to deal with.
Following this decision, the NZ IOD commented on its governance impact for boards. In particular, the NZ IOD noted the need to monitor legislative and regulatory developments, potentially re-evaluate IP strategies and assess ethical and social implications. The commentary also noted that boards should consider promoting “a culture of innovation that embraces AI technology while still recognising and rewarding the contributions of human inventors”.
Also in March, the Artificial Intelligence Researchers Association put out a discussion paper entitled “ChatGPT & Large Language Models – What are the implications for policy makers”. This paper, accessible here, is useful: it briefly explains what large language models are and how they work, discusses various policy areas that will be affected, and makes 12 recommendations on AI, particularly in the context of its use in education. One comment the authors make that will be welcomed by many is that, while AI may cause some dislocation, they also expect a positive productivity shock.
The paper picks up some of the themes mentioned above, including the risk of bias in AIs such as ChatGPT. Two other themes are worth comment. The first is strategic autonomy. The paper notes that powerful tools become indispensable and that, given the offshore location of AI service providers, ChatGPT (or similar AIs) is likely to be yet another technology that NZ Inc relies upon but which is not based in NZ. This creates risks for businesses: of being locked into a particular provider, and to a business’ ability to operate if access is disrupted.
The second is what the paper terms “eloquently argued nonsense”: a large language model-based AI may simply make up a plausible-sounding but wrong answer to a question. This is a key concern, and one that can create a variety of risks for businesses, from making decisions based on incorrect information to releasing misleading statements.
Businesses will need to factor such matters into their risk assessments around the use of AI and consider how to mitigate those risks, for example through rules and processes that guard against inappropriate or risky use of AI.
Where to from here?
For now, we recommend that boards put AI on the agenda, develop their governance thinking on the use of AI, and ensure that any use of AI complies with existing laws, in particular the Privacy Act 2020. A privacy impact assessment can be a useful way to ensure privacy and data governance are addressed at the outset when adopting any new technology; it is much easier to get this right at the start than after the technology has been deployed. In terms of the specific regulation of AI, it remains a case of keeping a careful eye on developments, both in New Zealand and abroad.
If you want to know more or would like us to assist with a privacy impact assessment or a more general legal compliance assessment for an AI technology, please reach out to a member of our corporate team.