For several years now, there have been innumerable articles about AI (some no doubt generated by AI). Such articles (including our own recent articles: Getting started with AI governance, The future of AI in the health sector, Regulatory update: Cabinet paper on strategic approach to work on artificial intelligence, Substantial AI regulation - EU first off the blocks, and Privacy Commissioner guidance on use of AI) have tended to focus on the benefits and risks of AI (including legal risks), the challenges of AI governance, and updates on the regulation of AI overseas and what those regulations might mean for New Zealand.
However, at a practical level, many of the New Zealand organisations lawyers work for are of course already using or experimenting with AI. This raises an important practical question: to what extent are our existing contract templates fit for purpose when we are purchasing IT services and products that utilise AI?
Over the last few months, I've been asked by several clients for a 'standard AI' clause (or series of clauses) they can include in their contracts. In my view, at this stage there isn't a one-size-fits-all standard clause that can easily be added to our template contracts and used for every AI tool.
However, that's not to say that there aren't some common issues that any contract drafter should be considering and addressing in contracts that will involve the use of AI. While many of these issues are the same ones that arise with the adoption of any new technology, they may have AI-specific answers and require AI-specific drafting.
At this stage, there are two main pitfalls that any contract drafter should seek to avoid:
- Over-reliance on generic requirements: Many ICT template contracts will already oblige suppliers to provide services and deliverables in accordance with applicable laws and good industry practice. It can be tempting to rely on these provisions, but that approach can be problematic when it comes to AI. There is no New Zealand-specific regulation of AI and, in many cases, there won't be any statutory requirements that can readily be pointed to that address the risks AI may involve. A 'compliance with law' clause is therefore unlikely to give much AI-specific protection – particularly as such clauses are often drafted to cover only laws that apply to the supplier, and so won't guarantee that the customer can use the AI tool in compliance with all laws that might apply to that use. In addition, in such a fast-moving, evolving space, what constitutes good industry practice may be very difficult to establish, much less enforce.
- Over-reliance on very specific obligations and model clauses: At the other end of the spectrum, it can be tempting to rely on AI-specific contractual model clauses. A number of these are available and they can be a useful starting point (AI could also draft you one). For example, the European Commission has published clauses for the procurement of AI drafted with the requirements of the EU AI Act in mind (there are two sets – one for high-risk and one for non-high-risk systems) - EU model contractual AI clauses to pilot in procurements of AI. The UK's Society for Computers and Law (SCL) has also published sample clauses that can be reviewed in their article Society for Computers and Law AI Group launches Artificial Intelligence Contractual Clauses. However, over-reliance on model clauses that are too prescriptive can make negotiations unnecessarily difficult or protracted. For example, an obligation to provide detailed information about how the AI tool works, in a form that can be understood by anyone, might be appropriate for a public-facing tool, but may be extremely challenging for a tool that uses complex algorithms or where the supplier understandably wants to keep some of the AI's 'secret sauce' secret to protect its intellectual property. These offshore clauses will also be drafted to comply with the relevant local AI laws, which New Zealand does not have, so they won't necessarily translate well into a New Zealand context.
Put simply, not all AI systems are created equal. They can entail quite different uses and risks and so a nuanced, tailored approach to drafting is often required. I've been involved in a client's procurement of an AI tool that initially seemed very risky but which (on closer analysis) involved technological protections that significantly mitigated a number of perceived risks (eg no customer data was leaving the customer's environment and no customer data was being used to train the tool for the benefit of other customers).
So what should a good set of AI clauses address? We suggest taking a step back and first considering the following issues before putting pen to paper:
- Consent to use of AI: Use of AI often arises when suppliers wish to incorporate (or at least have the right to incorporate) AI into their existing products. This raises the question of whether a supplier should be required to obtain a customer's consent before incorporating AI into an existing IT service or product. In some cases this may be appropriate, but if the service or product is a SaaS product, it may not be practical for a supplier to have to get every customer's consent to change the product. If that's the case, a customer may instead need to impose an obligation on the supplier to notify the customer of its use of AI and provide a right to terminate its subscription without paying an early termination fee if that use isn't acceptable (and with disengagement assistance where required).
- Transparency: To what degree does the supplier have to provide information about how the AI tool is designed to operate? To what extent should the supplier be responsible for producing material that explains how the tool works, and who is the intended audience for that material? Many customers will want some oversight and understanding of how the AI tool will operate.
- Monitoring: Is the supplier required to have some human oversight to monitor or test the AI tool? What should this involve and how should the supplier give comfort to its customers that the monitoring is effective?
- Record keeping and auditability: What records must the supplier keep regarding how the AI tool has operated in relation to the particular customer and its data? Does the customer have rights to audit this (and, if so, who would do this and how)?
- Use of data to train the model: What customer data is included in the AI tool? Could the customer's data be provided to another customer of the supplier, and in what circumstances? How is the AI trained? Can the customer's data be removed from the AI tool in the future? These questions are particularly acute when the data in question includes personal or commercially sensitive information. As a starting point, make sure that the definitions of data and confidential information used in the contract are broad enough to cover data that is generated not by the customer or the supplier but by the AI tool itself.
- Intellectual property (IP): Will new IP be created, and who should own it or have rights to use it? Will the supplier be on the hook if the AI produces data or an output that infringes a third party's rights? A number of large multinationals have updated their terms to include indemnities from the supplier to the customer for paid services (see, for example, Microsoft's recent announcement on its new Copilot Copyright Commitment for customers), but those indemnities are often limited to use of the AI tool within narrow parameters. Furthermore, it's often the customer's responsibility to ensure that it has the requisite rights to use any IP it may input into the AI tool (eg by uploading a document), as using an AI tool may involve making a copy of a copyright work. It's also important to consider what rights the customer should have to any outputs generated by AI. Are those rights perpetual, or will they last only for so long as the customer uses the tool? Some jurisdictions will not recognise copyright in work that is generated purely by AI, while others may (typically vesting ownership in the 'author' who set the parameters for the AI tool, unless modified by contract). Because the copyright position varies between jurisdictions, it will be worthwhile carefully describing how each party can use the tool and its inputs and outputs, using language that applies whether or not protectable IP exists.
- Accuracy and security: Service levels may be very important for the use of AI. A customer may wish to have some measurement of (and remedies relating to) how accurate the AI tool or its outputs are, so the customer can be assured that the AI tool is as reliable as any human-operated process. In particular, a number of AI tools have been found (like humans) to have significant biases. A customer may want comfort that the supplier is regularly testing the AI tool for biases and taking steps to correct them where it can (or at least informing its customers). Customers will often also want to understand how secure a tool is and what security protections are in place. Accuracy and security are areas where we expect to see the development of industry standards and processes specific to generative AI over time.
- A kill switch: Many of the global tech giants have made non-binding commitments to incorporate kill switches into their AI tools in response to global concerns that AI may go rogue (as imagined in numerous sci-fi films, books and series). Individual customers who are worried about the experimental nature of AI functions employed in existing tools they use may wish to require a customer kill switch – namely a contractual right (and perhaps even a technical ability) to turn off any AI functionality embedded in a service or product provided to that customer (see the illustrative sketch after this list). Whether that is possible (and what it would mean for the continuation of the service or product without the benefit of AI) may require careful thought and negotiation.
- Change: In a rapidly shifting landscape, new risks and issues will no doubt emerge, and those risks and issues may prompt additional regulatory or governmental responses. In this context, it may be important that long-term contracts for the procurement of AI technology can respond appropriately to those changes. Customers may sensibly require easy termination rights they would not have had in other contexts. On the other hand, suppliers may need to know that they are free to develop and change their products and businesses to address future needs and challenges (as is already the case in respect of many multi-tenanted, cloud-based tools).
- Liability: Contracting parties can of course spend a lot of time worrying about what could possibly go wrong and who should bear that risk. While a careful analysis of risks is appropriate (and we would recommend that organisations undertake a proper risk assessment before using new AI tools), organisations will necessarily need to take a practical view of those risks – or risk being left behind. As with any IT tool, appropriate contractual allocation of risks involves an assessment of what could go wrong, how likely it is that the risks will materialise, what losses would likely be caused, who is best able to manage the risks, and whether the fees paid by a customer are sufficient to compensate the supplier for the level of risk the customer may wish the supplier to bear. There are no easy answers and, whatever the position, both suppliers and customers will also need to be mindful of what insurance either or both parties may have in place and whether and how that insurance might respond to a loss caused by using AI.
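For readers who want to picture what a technical (as opposed to purely contractual) kill switch might look like, here is a minimal sketch of one common pattern: gating AI features behind a per-customer flag. It is purely illustrative – all of the names and the fallback behaviour are hypothetical and are not drawn from any real product or from the commitments mentioned above.

```python
# Purely illustrative sketch: a per-customer AI "kill switch" implemented
# as a feature flag that the product checks before any AI functionality runs.
# All names here are hypothetical and not taken from any real product.

from dataclasses import dataclass


@dataclass
class CustomerSettings:
    """Per-customer configuration, including whether AI features are enabled."""
    customer_id: str
    ai_features_enabled: bool = True  # the contractual default may differ


class AIFeatureGate:
    """Central gate the product consults before invoking AI functionality."""

    def __init__(self) -> None:
        self._settings: dict[str, CustomerSettings] = {}

    def register(self, settings: CustomerSettings) -> None:
        self._settings[settings.customer_id] = settings

    def disable_ai(self, customer_id: str) -> None:
        """The 'kill switch': turns off AI features for one customer only."""
        self._settings[customer_id].ai_features_enabled = False

    def summarise_document(self, customer_id: str, text: str) -> str:
        """Example AI entry point: checks the flag and falls back if disabled."""
        if not self._settings[customer_id].ai_features_enabled:
            # Non-AI fallback behaviour (here, a naive truncation).
            return text[:200]
        return self._call_ai_summariser(text)

    @staticmethod
    def _call_ai_summariser(text: str) -> str:
        """Stand-in for a (hypothetical) call to an AI summarisation service."""
        return f"[AI summary of {len(text)} characters]"


# Usage: a customer exercises its kill-switch right.
gate = AIFeatureGate()
gate.register(CustomerSettings(customer_id="acme"))
gate.disable_ai("acme")  # from here on, "acme" gets only non-AI behaviour
print(gate.summarise_document("acme", "A very long contract... " * 50))
```

Notice that the contractual questions flagged above map directly onto the fallback branch: what (if any) non-AI behaviour the supplier must still deliver once the switch is thrown is precisely what the parties need to think through and negotiate.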
If you're grappling with these issues and would like to chat, please contact one of our technology, media and telecommunications team members.