On 22 April 2021, the European Commission proposed that the European Parliament and the Council of the European Union adopt a Regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (EU AI Proposal). The EU AI Proposal, if adopted, would become a new piece of legislation aimed at regulating high-risk uses of artificial intelligence (AI) in the European Union (EU).
While the EU AI Proposal still has a long way to go to progress through the Council of the European Union and the European Parliament, it is proposed legislation that anyone involved with, or contemplating the use of, AI should know about. It also has wide application, meaning that New Zealand companies developing AI technologies could be subject to it.
In this article we discuss what the EU AI Proposal means for New Zealand companies, look at some recent guidance from the United Kingdom on navigating AI risks, and consider the latest report on legal and policy issues for AI in New Zealand.
Key provisions of the EU AI Proposal
Broadly, the EU AI Proposal aims ‘to turn Europe into the global hub for trustworthy Artificial Intelligence’.
To do that, the EU AI Proposal has wide application. If adopted, it will regulate all AI across the EU. In addition, like the GDPR (the EU’s data protection legislation), it has extra-territorial effect. A New Zealand company would be subject to the EU AI Proposal if it makes an AI system available in the EU, if it has an operation in the EU that uses the AI system, or if it provides an AI system whose output is used in the EU.
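Purely by way of illustration, the three limbs of that scope test can be expressed as a simple check. This is our own sketch of how the limbs combine, not terminology or logic taken from the proposal itself, and the function and parameter names are our own:

```python
# Illustrative only: a rough sketch of the three limbs of the EU AI
# Proposal's extra-territorial scope test for a non-EU company.
# The names here are ours, not terms defined in the proposal.

def subject_to_eu_ai_proposal(
    makes_ai_system_available_in_eu: bool,  # limb 1: system made available in the EU
    uses_ai_system_in_eu: bool,             # limb 2: an EU operation uses the system
    ai_output_used_in_eu: bool,             # limb 3: the system's output is used in the EU
) -> bool:
    # Satisfying any one limb is enough to bring the company within scope.
    return (
        makes_ai_system_available_in_eu
        or uses_ai_system_in_eu
        or ai_output_used_in_eu
    )

# Example: a NZ developer whose system's output is consumed in the EU.
print(subject_to_eu_ai_proposal(False, False, True))  # True
```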
So, what does the EU AI Proposal contemplate? It would impose a framework of harmonised rules for AI according to the risk a given form of AI poses. There are three regulated categories:
- ‘Unacceptable risk’, where the European Commission considers the risk to pose a clear threat to EU citizens;
- ‘High-risk’, which essentially refers to AI systems that may create an adverse impact on the safety, livelihood or fundamental human rights of people (a category that is expected to expand as AI technology continues to develop); and
- ‘Limited risk’, covering three types of AI, being AI that interacts with humans (such as chatbots), AI involving an emotion recognition system or a biometric categorisation system, and AI that creates deep fakes.
The EU AI Proposal sets out which types of AI fall into each category. For instance, unacceptable risk AI includes AI that uses “practices that have a significant potential to manipulate persons through subliminal techniques beyond their consciousness”. An example of high-risk AI is “AI systems intended to be used for the ‘real-time’ and ‘post’ remote biometric identification of natural persons”.
Each of these categories then attracts particular restrictions or obligations:
- Unacceptable risk AI is prohibited.
- High-risk AI is subject to strict requirements, including requirements for establishing a risk management system, using prescribed data governance and management practices, maintaining technical documentation and meticulous records, ensuring there is an acceptable level of transparency and appropriate human oversight, and being designed to an appropriate level of accuracy, robustness and cybersecurity.
- Limited risk AI is subject to transparency obligations, which require that a person interacting with the AI be informed that they are interacting with a machine.
AI that does not fall into these three categories (ie minimal risk AI) would not be subject to any requirements under the EU AI Proposal.
To ensure compliance, there are stiff penalties. Penalties for breach run to a maximum of €20,000,000 or 4% of a company’s total worldwide annual turnover (whichever is higher), increasing in some cases (eg use of unacceptable risk AI) to €30,000,000 or 6% of worldwide turnover.
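As a minimal worked example, assuming the ‘whichever is higher’ reading of the caps described above, the maximum exposure for a given turnover could be sketched as follows (the function name and the example figures are ours, purely for illustration):

```python
# Illustrative only: how the penalty caps described above interact with
# a company's worldwide annual turnover. The cap figures come from the
# proposal; this function and its example inputs are our own sketch.

def max_penalty_eur(worldwide_turnover_eur: float, unacceptable_risk: bool) -> float:
    # Higher fixed cap and percentage apply to the most serious breaches,
    # such as use of unacceptable risk AI.
    fixed_cap, pct_cap = (30_000_000, 0.06) if unacceptable_risk else (20_000_000, 0.04)
    # The maximum penalty is the higher of the fixed amount and the
    # turnover-based amount.
    return max(fixed_cap, pct_cap * worldwide_turnover_eur)

# Example: for €1bn turnover, an ordinary breach is capped at €40m
# (4% of turnover), because that exceeds the €20m fixed cap.
print(max_penalty_eur(1_000_000_000, unacceptable_risk=False))  # 40000000.0
```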
However, there is still a way to go before the EU AI Proposal could become law. We will continue to monitor developments, but regardless of the final outcome, we anticipate it will have a significant effect not only on the development of AI, but also on how individuals interact with it.
Developments in the UK
Usefully, some practical guidance on navigating AI risks is also available. On 20 July 2021 the Information Commissioner’s Office in the United Kingdom released a beta version of an AI and data protection toolkit. The toolkit is designed to help organisations in the UK navigate data protection issues when developing AI products. It outlines the various risks related to AI and sets out practical steps to help address those risks. Many of the risks are ones that have been highlighted in the work leading up to the EU AI Proposal.
While the toolkit is focused on data protection compliance in the UK, many of the practical steps are also useful for addressing AI risks generally. The toolkit, and further information about it, can be accessed here. Just remember that the data protection references are to UK legislation, and you may need to translate them to the applicable provisions of the New Zealand Privacy Act.
New Zealand’s own response?
The definitive and wide-reaching application of the EU AI Proposal could set the scene for how AI is treated in other countries, including New Zealand. At this stage, however, New Zealand remains largely in policy-consideration mode.
As we commented last year, the New Zealand Government adopted (as a world first) an algorithm charter setting out principles to be followed when using AI in government. For more information on the algorithm charter, please see our earlier article available here.
The New Zealand Law Foundation has also just completed a three-year project evaluating the legal and policy implications of AI for New Zealand. Its first report, ‘Government Use of Artificial Intelligence in New Zealand’, was released in May 2019 and warned against the use of unregulated AI algorithms by government. Its second report, ‘The Impact of Artificial Intelligence on Jobs and Work in New Zealand’ (Report), undertaken by researchers at the University of Otago, was released in May this year and considers the impacts of AI on work and employment. The Report can be accessed here.
While the Report focuses largely on the workforce, rather than AI applications generally, some of its key recommendations head in the same direction as the EU AI Proposal. In particular, the Report recognises the high level of risk associated with integrating AI into processes: where biases are not accounted for, the near-inevitable outcome is the entrenchment of longstanding discrimination.
The Report highlights various concerns surrounding the growing use of AI and makes 25 recommendations for addressing them. These include considering measures that would set equivalent transparency standards for ad targeting platforms; ensuring hiring tools include functionality for bias auditing; considering measures to protect against automation bias and decisional atrophy; introducing codes of practice governing workplace surveillance and workplace robots; and requiring bot disclosure (mirroring the transparency obligations imposed on limited risk AI under the EU AI Proposal).
Which recommendations will be taken up remains to be seen, but given the issues being raised, we think it likely that some form of regulation will follow in time.
For now, we recommend that any contemplated use of AI be carefully considered and, in the absence of clear and specific guidance, that a conservative approach be taken.
How we can help
Lane Neave has a specialist technology law team. From start-ups and SMEs to large corporates, we are keen to work with you on your technology projects. If you want to understand how we can help your business, or you would like to discuss this article in the context of your business, please get in touch.
Click here for other Corporate Law articles.