Early adopters of emerging technologies with universal application often gain an advantage over their competitors. It is no surprise, then, that the sudden availability of AI has left companies scrambling to work out how best to apply this technology to their work.
However, before joining the rush, it would be wise to take a step back and examine some of the pitfalls associated with the use of AI, especially when used in a commercial environment. In a recent survey, 99% of respondents ranked governance as a critical AI challenge.
Nearly half of business leaders want to unlock the value of AI in line with their organisational values, yet there are concerns over the performance of third-party AI suppliers. The main concerns are regulatory compliance and ethical use of AI.
Complying when using generative AI
- What are the types of AI?
- How does AI affect intellectual property rights?
- What does the law say about copyright?
- Who owns AI-generated work?
- What are the terms of service of generative AI?
- Is there possible copyright infringement?
- How is inputted data used in generative AI?
- What are things to consider before using AI services?
What are the types of AI?
AI can be seen as a spectrum of capability, ranging from a basic reactive form, through a limited memory model, and on into the realms of science fiction, where it would have the capacity to apply intuition and emotion to problems and, finally, achieve a state of self-awareness.
The most basic form of AI uses reactive functions. This means that it uses stored data to respond to input. Reactive machines have no memory and are task-specific. Any repetition of input will always deliver the same output.
One example of this form of AI is the use of customer data (such as purchase history) to deliver recommendations to that customer. This AI technology does not have the ability to predict outcomes unless it is led towards a prediction through the input of specific data.
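As a hypothetical illustration of this reactive behaviour, the sketch below maps stored purchase data to recommendations with no memory between calls, so repeating the same input always yields the same output. The product data and function names are invented for this example; no real recommendation engine is implied.

```python
# Minimal sketch of a "reactive" recommender: it maps stored purchase
# history to recommendations and keeps no state between calls, so the
# same input always produces the same output.

# Hypothetical stored data: product -> products often bought with it.
CO_PURCHASES = {
    "tent": ["sleeping bag", "camping stove"],
    "laptop": ["laptop bag", "mouse"],
}

def recommend(purchase_history):
    """Return recommendations derived purely from the input and stored data."""
    seen = set(purchase_history)
    recs = []
    for item in purchase_history:
        for suggestion in CO_PURCHASES.get(item, []):
            if suggestion not in seen and suggestion not in recs:
                recs.append(suggestion)
    return recs

print(recommend(["tent"]))  # deterministic: repeating the call gives the same list
```

Because the function consults only its input and fixed stored data, it cannot "predict" anything it has not been explicitly led towards, which is the limitation described above.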
The second tier of AI, the limited memory model, provides the main focus of this article. This type of AI, which includes generative AI, is capable of storing input data and using it, together with its own databases, to respond.
A limited memory model will generally aim to predict outcomes using adaptive algorithms and, often, fuzzy logic.
Common examples of this type of AI are predictive modelling and analytics, chatbots, virtual personal assistants, image and speech recognition tools, and smart search and recommendation engines. For example, we might use this form of AI to:
- rewrite a speech so that it is likely to appeal to a certain type of audience;
- develop a formula to better organise data sets;
- limit access to a building by applying facial and behavioural recognition;
- predict whether a particular type of advertising will result in more sales of a new product.
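To illustrate the contrast with a reactive machine, the toy sketch below retains inputs within a session, so identical prompts can produce different responses. This is a hypothetical example of the "limited memory" idea, not a depiction of how any real chatbot is built.

```python
# Toy sketch of a "limited memory" system: unlike a reactive machine,
# it retains earlier inputs within a session and uses them when responding.

class Session:
    def __init__(self):
        self.history = []  # stored inputs: the "limited memory"

    def respond(self, prompt):
        self.history.append(prompt)
        # The response depends on everything said so far, not just the
        # latest input, so repeating a prompt can yield a different answer.
        return f"Reply to {prompt!r} (context: {len(self.history)} messages)"

s = Session()
first = s.respond("hello")
second = s.respond("hello")  # same input, different output: the memory changed
```

This stored-input behaviour is also why the later section on how inputted data is used matters: whatever goes into the session may persist beyond it.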
How does AI affect intellectual property rights?
From a corporate perspective, the main purpose of compliance is to limit the risk of sanction and loss. In order to achieve this goal, we need to know and apply policies, procedures, standards and the law while also securing our assets from harm, including our intellectual property rights (IPRs).
Awareness of IPRs is essential for several reasons, including:
- preventing our ideas from being stolen;
- safeguarding the assets and profitability of the company;
- giving the company a commercial advantage over its competition;
- minimising the chance of an inadvertent breach; and
- limiting our company's exposure to damages for the breach of IPRs.
The use of AI for commercial purposes has the potential both to affect our own IPRs and to result in a breach of the IPRs of others.
Some AI systems are able to autonomously generate work, leading to the question of who might hold the intellectual property in AI-generated work. Furthermore, AI systems learn from data, and this data may be protected by IPRs.
For the purposes of simplicity, from this point on, we will focus on copyright, but it is important to remember that some of these issues could also affect other IPRs, especially designs and possibly patents.
What does the law say about copyright?
Under the 1886 Berne Convention for the Protection of Literary and Artistic Works, copyright protection is granted automatically to any original work once 'fixed' (formally recorded on some physical medium).
The author of the work is also entitled to copyright in any derivative work unless the right is disclaimed or it expires. For the majority of works, including digital content, copyright lasts for the life of the author plus between 50 and 70 years, depending on the jurisdiction. A holder of copyright may:
- license, sell or transfer the copyright to someone else;
- benefit financially from the work (request royalties for its use); or
- assert moral rights over the work - for example, the right to be named as the creator of the piece or object to changes that may damage their reputation.
Copyright protects the holder from direct infringements of unauthorised copying, use, or reproduction of the work and also from secondary infringements of unauthorised possession, supply and import of or dealings with the work.
The holder of the copyright may also bring an action where apparatus or premises are provided or permitted to be used for a direct infringement.
There are limited exceptions to the protection offered by copyright, all bound by the principle of 'fair dealing' and acknowledging ownership and authorship. For example:
- copying limited extracts of works for non-commercial research or private study;
- criticism, parody, review or quotation;
- recording a broadcast to watch or listen to it at a later (more convenient) time;
- improving access to material by disabled people or for education purposes.
The main three questions that arise when considering AI technologies and copyright are:
- Who owns the work?
- Does generative AI automatically breach copyright?
- Would the use of AI create a risk of breach of copyright?
Who owns AI-generated work?
This is a two-part question. First, is it possible to own AI-generated work at all? Second, with whom does the right rest - with the service provider, the person who inputs content, or the AI technology itself?
In March 2021, the UK Intellectual Property Office (IPO) published the outcome of a consultation on AI technologies and IPRs. In that publication, the IPO acknowledged there is some uncertainty about how copyright law applies to certain aspects of AI-generated material.
The IPO recognised that in the UK and other common law jurisdictions, copyright is primarily seen as an economic tool used to incentivise and reward creativity. This is reflected in section 9(3) of the Copyright, Designs and Patents Act 1988 (the 'UK legislation'), which provides:
‘in the case of a literary, dramatic, musical or artistic work which is computer-generated, the author shall be taken to be the person by whom the arrangements necessary for the creation of the work are undertaken.’
In contrast, European civil law focuses on the protection of an individual author's rights, placing the emphasis on the individual who creates the work, not on the economic reward to which the right is linked.
Therefore, it appears that in the UK, it may be possible to establish a right over AI-generated work so long as the work can be classified as 'original', but this is less likely to be the case in European civil law jurisdictions.
This leads to the conclusion that companies working across jurisdictions should take this possible discrepancy into account when developing their policies on the use of AI in their business.
The question of who might own or use the copyright in the output is also not the easiest question to answer. One thing is certain - the actual AI cannot own the copyright. This rests with the author, and the author must be a person. So, would the owner be the user of the AI or the service provider?
What are the terms of service of generative AI?
The terms of service for OpenAI assign to the user all of OpenAI's rights, title and interest in and to 'content', where 'content' includes all inputs and outputs.
Google is less clear in its assignment of copyright, but it does provide that the user's content includes anything the user creates, uploads, submits, stores, sends, receives or shares using Google's services.
The question of whether the user is creating an output or whether Google Bard is 'creating' an output is left open. The IPO's report on its consultation implies that in the UK, the creator of the output would be seen to be the AI, and so, following Google's Terms and Conditions, ownership would rest with Google.
To conclude this discussion, let's look at it from a practical perspective. Given the nature of the offerings from Google and OpenAI and their business models, it is very unlikely that either provider would challenge the use of content generated by its AI technology (unless the user is in violation of the relevant Terms of Service). In practice, therefore, it matters little who the owner is.
Irrespective of who is understood to be the owner of the copyright, the dilemma persists with regard to the requirement for 'originality', especially where the creator of the content (the AI) and the recognised author (either the user or the service provider) are different.
Interestingly, the US Copyright Office has issued a policy statement providing guidance on whether work created with AI is eligible for copyright protection.
The Copyright Office stated that the determining factor for originality would lie in whether the AI contribution is a result of 'mechanical reproduction' or a user's 'original mental conception'.
Is there possible copyright infringement?
Currently, in the USA, a number of legal actions are pending against AI service providers over claims of copyright breach. Defendants in these actions generally argue that their use of the material is covered by a 'fair use' (fair dealing) exemption.
Both the UK legislation and the EU Copyright Directive (Directive (EU) 2019/790) allow for text and data mining (TDM), but only for non-commercial purposes. In particular, the EU Directive expressly limits the TDM exception to scientific research, innovation, teaching and preservation of cultural heritage purposes.
The majority of the information used by AI chatbot technology comes from large data sets, much of which has been 'scraped' (taken) from websites, articles, journals and books and 'mined' (subjected to analysis). However, as noted by the IPO, it is unclear whether these activities of AI systems can be described as 'data mining' in the legal sense.
This makes it hard to find a valid exception. Furthermore, the outputs rarely include an acknowledgement of the source and even when sources are requested, outputs often end in dead links.
These factors mean that the use of AI, especially for commercial purposes, poses real risks of copyright breach that could result in legal claims and reputational damage.
How is inputted data used in generative AI?
Another consideration when using AI services is what happens to your input.
It is important to note that both OpenAI and Google retain the right (by way of licence) to use 'content' for purposes such as providing, operating, improving and maintaining services, promoting services, developing new technologies and services, complying with the law, and enforcing usage policies.
This means that, in accordance with relevant privacy and data protection principles, your input may be stored and used by the providers, and it may even be used to generate outputs for other users.
Users must be very careful about what information they input into these services. Never include personal, potentially confidential or commercially valuable information.
What are things to consider before using AI services?
One thing is certain: the recent move towards opening up AI services might not represent the proverbial goose that lays golden eggs. To take the metaphor further, a company that uses AI services for commercial purposes may find that the 'golden eggs' are made of fool's gold and rotten inside.
It may be wise to heed the advice of both Google Bard and Chat GPT:
“Use discretion before relying on, publishing, or otherwise using content provided by [AI] services.”
In short, here's a list of things to consider before using AI services at work:
1. Is the input commercially sensitive – would we be happy to share it publicly?
2. Do we need to retain copyright over the output?
3. Is the output going to be used for non-commercial research purposes?
4. If the answer to 3 is no, do we have another exception to copyright on which we could rely?
5. Have we acknowledged the original author of any work used to generate the output?
If any of these questions raises a concern, it may be worth reconsidering whether the use of AI is appropriate in the circumstances.
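The checklist above can be folded into a simple pre-use gate. The sketch below is purely illustrative - the function name and answer labels are invented for this example, and it is no substitute for legal review:

```python
# Hypothetical sketch: encode the pre-use checklist as a single gate.
# Each answer is True when the corresponding question raises no concern.

def needs_review(answers):
    """Return True if any checklist answer raises a concern."""
    return not all(answers.values())

proposed_use = {
    "input is safe to share publicly": True,
    "we do not need to retain copyright over the output": False,
    "output is for non-commercial research, or another exception applies": True,
    "original authors of works used to generate the output are acknowledged": False,
}

print(needs_review(proposed_use))  # True: two answers raise concerns
```

The point of the gate is that a single failed check is enough to pause and escalate, mirroring the advice above.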
Looking for more compliance insights?
We have 100+ free compliance training aids, including assessments, best practice guides, checklists, desk aids, eBooks, games, posters, training presentations and even e-learning modules!
Finally, the SkillcastConnect community provides a unique opportunity to network with other compliance professionals in a vendor-free environment, priority access to our free online learning portal and other exclusive benefits.