The New Zealand Government has released its Algorithm Charter for Aotearoa New Zealand (Charter). The Charter is designed to act as a guideline for government agencies on the use of algorithms.
The Charter is the first of its kind, and aims to improve data transparency and accountability, particularly where algorithms are used to process and interpret large amounts of data. The Charter turns the 10 draft principles released in December 2019 into commitments across six focus areas: transparency; partnership; people; data; privacy, ethics and human rights; and human oversight.
The term ‘algorithm’ covers a wide range of analytical tools, from simpler techniques that primarily streamline business processes through to more complex systems, including those that use machine learning to make advanced calculations and predictions.
Algorithms are used to assist government decision making and service delivery, and can mitigate the risk that human biases will enter into the administration of government services. However, using algorithms to analyse data and inform decisions can also have the reverse effect, for example human bias could be perpetuated, or even amplified, by algorithms that are not designed and operated in thoughtful ways.
The Charter is not intended to apply to every use of an algorithm by government. To help agencies determine whether the Charter applies to a particular algorithm, a risk matrix approach is used. If the risk matrix indicates that use of the algorithm significantly impacts people’s wellbeing, or “there is a high likelihood many people will suffer an unintended adverse impact”, then the Charter comes into play. By following the principles of the Charter’s six focus areas, the risk of adverse outcomes should be reduced.
While the Charter applies only to government, any business or organisation using algorithms should consider the Charter to help reduce the risk that its use of algorithms produces adverse outcomes. Businesses and organisations should also consider the Charter in conjunction with a new checklist on using artificial intelligence from Europe.
Artificial intelligence – a checklist for developers
On 17 July 2020 the European Commission’s high level expert group released their Assessment List for Trustworthy Artificial Intelligence (Assessment List).
The Assessment List is based on the Ethics Guidelines released by the Commission’s expert group last year. The Ethics Guidelines introduced the concept of trustworthy AI, based on seven key principles:
- Human agency and oversight;
- Technical robustness and safety;
- Privacy and data governance;
- Transparency;
- Diversity, non-discrimination and fairness;
- Environmental and societal well-being; and
- Accountability.
The Assessment List is provided via a web-based tool, which serves as an ethics checklist that businesses and organisations can use to self-assess their adherence to the principles, and which provides guidance for improvement.
While the European Union does not have any specific legislative instrument regulating the use and development of AI, the Assessment List can serve as a useful tool in the development and deployment of AI. It does refer to European laws in some parts of the checklist, so New Zealand equivalents need to be considered (e.g. the Privacy Act in place of references to the GDPR, the EU’s data protection law), but the principles have general application.
How we can help
Lane Neave has a specialist technology and privacy law team. From start-ups and SMEs to large corporates, we are keen to work with you and your technology projects. If you want to understand how we can help your business, or you would like to discuss this article in the context of your business, please get in touch.