The UK, the US, Israel and the EU signed the first international treaty on artificial intelligence (AI) in September. Under the legally binding agreement, signatory states must implement safeguards against threats posed by AI to human rights, democracy and the rule of law.
While this is a welcome step towards establishing international standards for AI governance, there is still a glaring gap between the rapid pace of AI innovation and governments’ ability to regulate the technology. This poses significant risks to democracies and individuals, and at the core of these risks lies data.
The nations that lead the way in safe AI adoption will be the ones that reap the biggest benefits of the technology. To realise its ambition to become a global AI superpower, the UK needs to strike the right balance between regulatory oversight and encouraging innovation. That will require clear AI policies that safeguard data and protect the rights of content owners.
Leaders and laggards in the AI race
So far, Europe has a head start on AI because there is greater regulatory clarity, which is vital for the technology's future development. The EU AI Act, which came into force in August this year, is the world's first comprehensive AI law. It requires businesses to comply with security, transparency and quality requirements that scale with the risk level of their AI applications.
Other significant regulations are also taking shape beyond the EU AI Act, with the UK and US currently developing their own regulatory frameworks for AI. As the first comprehensive AI regulation, the EU AI Act is expected to become a blueprint for those that follow. According to IDC, 60% of governments worldwide will adopt a risk management approach to framing their AI policies by 2028.
The UK has a great opportunity to seize the moment and lead the way in AI regulation alongside other major powers such as the EU. However, this will require bold action and effective policies that encourage competition and defend the rights of citizens and content creators.
Getting AI regulation right
To get AI policies right, the government needs to address the biggest issue in AI regulation: data. Currently, large language models (LLMs) can use privately owned data provided it is anonymized. This places too much power in the hands of a few big tech giants and does not adequately protect intellectual property. We need stronger data rights that safeguard data owners and content creators.
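To make the anonymization requirement concrete, here is a minimal, illustrative sketch of the kind of pattern-based redaction a data pipeline might apply before private text reaches an LLM. The patterns and the `redact` helper are hypothetical simplifications, not a production-grade anonymizer:

```python
import re

# Hypothetical, deliberately simplified PII patterns; real anonymization
# pipelines rely on far more robust detection (NER models, human review).
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "UK_NI_NUMBER": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),
}

def redact(text: str) -> str:
    """Replace detected PII with typed placeholders before model use."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact Jane at jane.doe@example.com or +44 20 7946 0958."))
# -> "Contact Jane at [EMAIL] or [PHONE]."
```

Note that the name "Jane" survives redaction: simple rules catch structured identifiers but miss much else, which is one reason anonymization alone is a weak basis for regulation.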
To protect data privacy and intellectual property, governments must introduce regulatory provisions such as mandatory disclosure of the data sources used to train LLMs, along with consent and compensation requirements for the use of copyrighted material and private data. Protecting data privacy and intellectual property is the number one thing we should be focusing on in the age of AI, yet it is the last thing anyone seems to be talking about.
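As an illustration of what mandatory disclosure could look like in practice, the sketch below shows a hypothetical training-data provenance record a model provider might publish alongside an LLM. The `DataSource` structure and its field names are assumptions for the example, not an existing standard:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class DataSource:
    """One entry in a hypothetical training-data disclosure record."""
    name: str
    licence: str                  # e.g. "public-domain", "commercial-licence"
    contains_personal_data: bool
    consent_obtained: bool        # consent from data subjects / rights holders
    compensation_agreed: bool     # compensation negotiated with rights holders

# Illustrative entries only; a real disclosure would enumerate every
# source used to train the model.
sources = [
    DataSource("Public-domain books corpus", "public-domain", False, True, False),
    DataSource("Licensed news archive", "commercial-licence", False, True, True),
    DataSource("Anonymized customer support logs", "proprietary", True, True, False),
]

print(json.dumps([asdict(s) for s in sources], indent=2))
```

A machine-readable record along these lines would let regulators, rights holders and auditors check whether consent and compensation obligations were actually met.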
We also need to think more holistically about who should have a say in the future of AI. Confining discussions about AI regulation to the usual suspects from Big Tech is limiting and potentially dangerous. This technology affects everyone in our society, and we must ensure that a wider range of viewpoints is heard. We should be careful not to create unhealthy market dynamics by concentrating excessive power in the hands of a few big players with a disproportionate influence on how AI is developed and regulated.
A safer model for enterprise AI adoption
One of the biggest challenges in regulating AI is that the most widely used open-source generative AI models offer limited control over who can access the data fed into them and how it is used. Making these models accessible to many developers and users increases both the risk of misuse by malicious actors and the number of actors in scope for any regulatory approach.
To minimize the risk of AI misuse, governments should encourage the adoption of private AI models, particularly in sectors where data protection is of critical importance, such as financial services, healthcare, insurance and the public sector. With private AI, organizations can purpose-build a model to deliver the results they need and train it on their own data, while ensuring that data never leaves their control. This keeps innovations and customer, patient or citizen data safe while reducing the risk of data misuse or leakage.
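As a minimal sketch of this private-model pattern, the snippet below loads open model weights from a local directory with the Hugging Face `transformers` library and runs inference entirely on the organization's own hardware. The `MODEL_DIR` path and the prompt are placeholders, and a real deployment would add access controls, logging and network isolation:

```python
# On-premises ("private AI") inference: the model weights and the prompt
# both stay on infrastructure the organization controls, so no data is
# sent to a third-party API.
# Requires: pip install transformers torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_DIR = "/srv/models/internal-llm"  # placeholder: locally stored open weights

tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR)
model = AutoModelForCausalLM.from_pretrained(MODEL_DIR)

prompt = "Summarize this claim note for the underwriting team: ..."
inputs = tokenizer(prompt, return_tensors="pt")

# Generation happens locally; sensitive customer text never leaves the host.
output_ids = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

The design choice here is the key point: because inference runs inside the organization's own perimeter, data governance reduces to controlling one host rather than auditing a third-party service.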
The way forward
Data safety is the number one issue regulators must address today. This means not only safeguarding the data that feeds AI algorithms but also securing the rights of content creators and consumers.
What we need in these transformative times is an enabling environment for wide-ranging innovation, governed by clear regulatory frameworks that provide fair opportunities for everyone. The implications of AI touch every corner of society, and we must ensure everyone has a say in how it will shape our lives.
The UK can play a leading role on the global AI stage, but to do so it needs to carefully balance openness to innovation with regulatory oversight, while taking swift action to address the most pressing AI risks and protect the rights of citizens and content owners.
This article was produced as part of TechRadarPro's Expert Insights channel, where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing, find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro