AI seems to be everywhere these days, whether in a substantive fashion or as a thinly veiled attempt to latch on to the latest trend. Time and again, we’ve seen how difficult it can be to regulate fast-developing technologies, and this is certainly no less the case with AI.
In the US, home of Silicon Valley and big tech, there is currently a lack of overarching AI-specific regulation at the federal level. However, many US states are beginning to look into the matter more seriously, and last October, President Biden issued an Executive Order on AI, establishing new standards for AI safety and security, protecting individuals’ privacy, advancing equity and civil rights, supporting consumers and workers, promoting innovation and competition, and more.
In the European Union, the Artificial Intelligence Act, first proposed by the European Commission in 2021, was approved by MEPs on 13 March this year. Hailed as a world-first for comprehensive AI legislation, the AI Act is designed to impose a set of binding requirements to mitigate the risks arising from AI. Where an AI application poses a “clear risk to fundamental rights”, it will be banned. For “high-risk” applications, strict requirements will apply; for low-risk applications, only light-touch regulation will apply. Since “high-risk” applications are generally those relating to critical infrastructure, healthcare, education, law enforcement, and similar contexts, it is likely that most AI developed by or for use in small businesses will be at the low-risk end of the scale. Moreover, as the European Parliament explains, “Regulatory sandboxes and real-world testing will have to be established at the national level, and made available to SMEs and start-ups, to develop and train innovative AI before its placement on the market.”
An AI Office has also been established in the EU and will offer support to organisations working with AI, helping them prepare to comply with the new rules before they come into force.
Furthermore, on the international stage, on 5 September, the UK, EU, and US became the first signatories of the Council of Europe’s Framework Convention on AI and Human Rights, Democracy, and the Rule of Law, a convention aligned with the EU AI Act. It is important to note, however, that the Convention will typically not apply to the use of AI in business: it will apply primarily to public authorities or to private interests acting on behalf of public authorities.
Meanwhile, in the UK, the approach thus far has been more hands-off, favouring greater self-regulation and perhaps seeking to fashion a competitive advantage in a post-Brexit world. Indeed, with the news earlier this year that Microsoft is opening a new AI R&D office in London, proponents of this approach may have cause for optimism. This is not to say that the UK is ignoring the risks posed by AI, however. Moreover, following the General Election earlier this year and the resulting shift from a Conservative government to a Labour one, it is quite possible that we will see a change in the UK’s legal stance on AI in the coming months and years.
The Future of AI Regulation in the UK
In 2023, the UK hosted the world’s first AI Safety Summit at Bletchley Park, bringing together academics, civil society groups, AI companies, and representatives from 28 countries and the EU. The resulting Bletchley Declaration was “a landmark commitment to share responsibility on mitigating the risks of frontier AI, collaborate on safety and research, and to promote its potential as a force for good in this world”.
More recently, Lord Clement-Jones introduced a private members’ bill, the Public Authority Algorithmic and Automated Decision-Making Systems Bill, aimed at regulating the use of automation and algorithms in public sector decision-making.
In terms of promoting the growth of AI in the UK, the House of Lords Communications and Digital Committee recently launched an inquiry examining the challenges faced by startups when scaling up in AI and creative technologies. The Committee has noted that, despite initiatives seeking to improve matters in recent years, many businesses still face barriers and end up selling to overseas investors or moving out of the UK. More details of the inquiry are available here, and the call for evidence is open until 16 October.
The Government is also developing an AI Opportunities Action Plan, aimed at identifying how AI “can drive economic growth and deliver better outcomes for people across the country”. The plan will consider how the UK can create a globally competitive AI sector, enhance people’s interaction with government, and strengthen key areas that support AI.
Taken broadly, the regulatory forecast for AI seems to be split into two. Larger organisations and public bodies should expect to face more regulation designed to govern their use of AI (indeed, the Labour manifesto specifically pointed to binding regulations for the “handful of companies developing the most powerful AI models”). Smaller organisations, on the other hand, appear to be the target of policy changes aimed at making the UK a more competitive place for AI development in the future.
As will be seen below, however, there are other areas of law – particularly relevant to businesses both developing and using AI – which need attention, not least data protection and intellectual property.
Using AI in Business
The use of AI in business, and particularly generative AI, has expanded rapidly in the past few years. AI can be used for creating content ranging from text to video, for proofreading content, for making (or assisting in the making of) decisions, for analysing data, and for many more applications besides. Despite the rapid uptake, however, and the temptation to integrate AI into business-critical operations, there are a number of risks to consider, and all staff using AI tools in their work should be made aware of them.
An important element in the development of AI models is training. Vast amounts of data are used to train AI models and, as one would expect, that data has to come from somewhere. In many cases, such data is “scraped” from the internet, and this can pose problems further down the line when the AI model is used.
Intellectual Property Risks
Taking the example of generative AI, whether it is used for producing text, images, or other media, the training data used will comprise vast quantities of example content scraped from the internet. A generative AI model designed to produce images, therefore, will have been trained using billions of images that already exist. Just as the images themselves exist, so do the intellectual property rights subsisting in them. In theory, the AI model uses those images to learn, rather than simply storing them in a repository to reproduce in response to the relevant prompt. It is, however, possible for elements of training data to appear in the output produced by generative AI, potentially leaving the user open to IP infringement proceedings. There may also be questions as to who actually owns the output.
As already noted, the owner of content used to train the AI may have a claim. It may also be possible that the developer of the AI model owns the output. Another option is that IP ownership lies with the end user. Finally, albeit not currently possible under UK law, there may be scenarios in which the AI itself could own the output. This last option is not something that users should be concerned about at present, but as the law evolves, it is important to be aware of the possibility. The key issue at present is that much commentary on the ownership of IP produced by AI makes liberal use of the word “might”, making unfettered use of AI to produce content for business purposes a potentially risky activity.
Data Protection Risks
Just as AI is trained on a range of “content”-type data, so too will it often be trained on data that may contain personal and/or confidential information. Similarly, AI models may use user inputs for further training once they are in active use, meaning that any personal data or confidential information entered into the AI may be stored to further train or refine the model. This gives rise to the risk that such data may be unintentionally exposed to the public or other unauthorised recipients when using the AI.
Discrimination
Training data can also be a source of bias in AI models. Biases may be inherent in the training data itself, or the data used may simply lack a suitable balance. If the training data does not suitably reflect the environment in which the AI model is being used, the outputs may lead to biased decision-making or, worse still, incorporate misinformation or disinformation.
Reducing Risks
Given that AI, and indeed the risks and regulation surrounding its use, are still very much in development, it can be difficult to establish hard-and-fast rules governing its use in business. One way to help address the risks in business use is to put in place a Generative AI Usage Policy, setting out which AI tools are approved for use, with specific rules and guidelines to address key areas of concern, including intellectual property, data protection, confidentiality, bias, and so on.
It is also important to maintain a keen awareness of developments in AI to ensure that its use within your business remains lawful, safe, and compliant as regulation emerges.