If data breaches and retention periods were the overarching privacy themes of 2022, they have been joined in the headlines this year by biometrics, convergence and - unsurprisingly - AI. Buddle Findlay privacy specialists Allan Yeoman and Alex Chapman recently attended the IAPP (International Association of Privacy Professionals) ANZ Summit in Sydney, and report back on their key takeaways from two days of discussions with regulators, advisers, academics and other privacy professionals, including the New Zealand Privacy Commissioner and Deputy Privacy Commissioner.
Biometrics is a hot topic for the Privacy Commissioners in both New Zealand and Australia
Ongoing investigations by the Office of the Australian Information Commissioner into the use of facial recognition technology by Bunnings and Kmart are top of mind, and there is a clear interest in holding organisations that adopt this technology (particularly in a retail context) to account. In New Zealand, the Privacy Commissioner has recently announced that his office will be consulting on a draft Code of Practice for biometrics. The draft code will focus on proportionality, transparency and purpose, and will have three parts:
- A proportionality assessment to require agencies to carefully consider whether their reasons for using biometric technologies to analyse biometric information outweigh the privacy intrusion or risks.
- Transparency and notification requirements to place clear obligations on agencies to be open and transparent with individuals and the public about their collection and use of biometric information.
- Purpose limitations to restrict certain collection and use cases of biometric information.
These rules would apply when agencies collect biometrics to use in automated processes, like facial recognition technologies. It is expected that the code’s exposure draft will be available for review and comment in early 2024.
The New Zealand Privacy Commissioner wants privacy reform
The Commissioner signalled four key areas of change he considers necessary to ensure that New Zealand's privacy laws remain consistent with international practice:
- A right to erasure or right to be forgotten. A right to erasure is the status quo in the European Union, United Kingdom and the State of California and has now been accepted in principle as part of the Australian Privacy Act reform process.
- The introduction of civil penalties. The Commissioner noted that privacy hits a "mahogany ceiling" in organisations and that a substantive penalty regime may be what's needed to get the C-suite engaged in privacy compliance.
- Documentation and accountability requirements. These would require agencies to demonstrate and document why they are collecting personal information and what they are doing with it. While this has always been considered good practice (and is reinforced in Office of the Privacy Commissioner guidance), the Commissioner's view is that stronger regulatory incentives are needed to ensure it is more widely implemented.
- Automated decision-making. The Commissioner wants to address the risks of bias and of further disadvantaging marginalised groups that come with automated decision-making, which would again close the gap between New Zealand's Privacy Act and privacy regulations in other jurisdictions.
What these changes might look like in practice or how they might be incorporated into the new Government's legislative agenda (or the existing Privacy Amendment Bill) is unclear, but we recommend organisations keep a "watching brief" on potential change.
Privacy works best in a team
A theme throughout the Summit was the idea of "convergence" between different teams within an organisation to manage privacy, data and AI related issues. This was particularly evident in discussions about implementing a consumer or customer data right and how organisations could manage legislative and operational change efficiently and effectively - in practice, this requires involvement from the privacy, data, IT and cyber security, financial regulation and payments teams. Ultimately, as organisations collect more and more data and as the regulatory regimes in which those organisations operate become more complex, a privacy function on its own is unlikely to be enough to manage risk.
If you haven’t already, it is time to get started on AI governance planning and policies
These were seen as crucial for managing a rapidly evolving technology and a changing regulatory landscape. Further, even if organisations are not planning on rolling out AI tools, research indicates that a significant number of employees are already using AI tools in their roles anyway (with or without having told their employer). While we expect that some form of legislative regime will ultimately be introduced to manage AI-related risks in New Zealand, for the time being organisations should be thinking about what the risks to their data are in the context of AI tools and how they want to manage those risks from a legal, data/IT security and ethical perspective.
Know your data
While knowing what data you have, why you have it and where it is might be "data 101", a number of speakers reflected that the ability to engage with new technologies and regulatory regimes (eg AI and consumer data rights) will depend on good data mapping and data auditing practices. Of course, this all requires an internal appetite to invest time and effort - and that can be difficult to sell when the legislative regimes are unsettled.