Getting started with AI governance
AI is rapidly changing global business. The latest research suggests that nearly half of New Zealand companies are already using AI, and while the other half might not have adopted AI tools officially, it's likely that some staff are using public tools in their daily work without full visibility from their employer.
In addition, targeted regulation of AI is coming thick and fast, with the EU being first off the blocks to introduce substantial regulation with the AI Act. Elsewhere, Australia, Canada and the United States are considering regulation, and China has already enacted legislation focused on generative AI.
In this transformative period, directors and senior leaders have an obligation to understand how their business is using AI and ensure it is appropriate. Similarly, organisations that adopt a coordinated and intentional approach to AI are best placed to unlock the benefits of this transformative technology, while successfully navigating the risks.
This article explains what AI governance is and gives some practical thoughts on how to get started.
What is AI governance?
Adopting 'AI governance' means having a set of frameworks and policies governing an organisation's use of AI. This:
- Makes sure there is awareness - at an organisational level, rather than just in specific business units or functions - of what AI is being used and for what purposes
- Ensures investment in, and use of, AI is responsible, effective, aligns with the organisation's strategy and values, and complies with any legal obligations (eg privacy obligations)
- Maximises trust, which may be particularly valuable in New Zealand, as recent research by multinational market research and consulting firm IPSOS found that New Zealanders tend to be more nervous about AI than the rest of the world
- Enables an organisation to pivot quickly as technology changes and keep abreast of rapidly developing laws.
How do you establish AI governance?
AI governance is important for organisations of all shapes and sizes and, while there is not a one-size-fits-all approach, the following steps are a good starting point.
- Set the scene: Consider the markets your business operates in, the functions your organisation performs, and the state of the regulatory frameworks that apply. Build a map of what tools are already being used by the business and how. It will be more complex to retrofit a governance framework where AI is already quite embedded, but the sooner you get started the easier it will be.
- Establish a policy framework: Most organisations will start by building a policy framework that sets out, at the level of principles, the approach the organisation wants to take to AI and its commitment to responsible use of AI. This could include:
- where and how AI is used within the organisation, and any hard limits on use of AI. For example, an organisation might decide to prohibit the use of tools that share data or are trained on certain datasets. These rules should be designed to respond to regulatory change - for example, the EU's AI Act outright bans certain AI practices it deems to pose unacceptable risk. The way in which that legislation classifies different types or uses of AI by risk might also provide a useful framework for some organisations (especially those looking at opportunities in the EU).
- a process for assessing AI tools before adoption. In New Zealand, the Privacy Commissioner recommends conducting a privacy impact assessment where personal information may be involved. A separate risk assessment or impact assessment covering wider issues like IP, accuracy and transparency is also a good idea.
- how the organisation will keep track of (and periodically review) what tools are in use.
- appropriate links to other relevant internal policies, like HR, IT/data security and privacy policies.
- a process for regular review and update.
- Lead from the front: We think it's important to have monitoring and oversight from senior leadership and the board. This develops trust, keeps things coordinated, and ensures that directors have the information needed to effectively discharge their duties. It also ensures that understanding of AI use within the company isn't limited to certain business functions (eg technology, sales or customer management) - it's important that there is cross-functional involvement in decision-making and governance across the business, regardless of how and where the AI is being used.
- Communication and transparency: It's important to communicate what you are doing internally (to ensure adoption and compliance) and where appropriate, externally (to build trust and transparency with other stakeholders). Consider whether there is any training that could be given to staff about your policy framework, eg when and where they can and should be using AI.
Final thoughts
There is a plethora of different models and templates available that can help establish an AI governance framework, but whatever approach you adopt, it's key to get buy-in from around the organisation.
AI has the potential to be genuinely transformative because it can impact all areas of your business, from HR to IT, product, marketing, security and management. But silos can create difficulties. For example, if departments are unaware of an organisation-wide AI governance framework, or don't buy into it, compliance may suffer. Good consultation, training and internal communication channels can help build an open culture around AI.
Finally, be prepared to adapt! The technology, and the regulators, are just getting started in this space. So set up your AI governance framework for regular review and update, and consider starting simple. In this area, starting simple certainly beats not starting at all.
If you need help getting started with AI governance, please contact one of our technology, media and telecommunications team members.