UK presents proposals for new AI regulations to unleash innovation and boost public trust in technology
Comes as the Data Protection and Digital Information Bill is introduced in Parliament, including measures to use AI responsibly while reducing compliance burdens for businesses to boost the economy
Regulators like Ofcom and the Competition and Markets Authority (CMA) will apply six principles to oversee AI in various contexts
New plans for regulating the use of artificial intelligence (AI) will be released today to help develop consistent rules to promote innovation in this game-changing technology and protect the public.
It comes as the Data Protection and Digital Information Bill is introduced in Parliament, which will transform UK data laws to drive innovation in technologies such as AI. The bill will take advantage of Brexit to maintain high standards of privacy and personal data protection while saving businesses around £1 billion.
Artificial intelligence refers to machines that learn from data how to perform tasks normally done by humans. For example, AI helps identify patterns in financial transactions that could indicate fraud, and helps clinicians diagnose illnesses from chest images.
The new AI paper released today outlines the government’s approach to regulating the technology in the UK, with proposed rules addressing future risks and opportunities so businesses have clarity on how they can develop and use AI systems and consumers are confident those systems are safe and robust.
The approach is based on six core principles that regulators should apply, with the flexibility to implement them in a way that best suits the use of AI in their sectors.
The proposals aim to support growth and avoid unnecessary barriers being imposed on businesses. This could see companies sharing information on how they test the reliability of their AI as well as following guidelines set out by UK regulators to ensure AI is safe and avoids unfair bias.
Digital Minister Damian Collins said:
We want to make sure the UK has the right rules to hold businesses accountable and protect people, as AI and the use of data keep changing the way we live and work.
It is essential that our rules provide clarity for businesses, confidence for investors and build public trust. Our flexible approach will help us shape the future of AI and solidify our global position as a science and technology superpower.
The UK is already home to a thriving AI sector, leading in Europe and third in the world for levels of private investment after domestic companies attracted $4.65 billion last year. AI technologies have unlocked benefits across the economy and the country – from tracking tumours in Glasgow and improving animal welfare on dairy farms in Belfast to accelerating the purchase of properties in England. Research published this year predicted that more than 1.3 million UK businesses will use artificial intelligence and invest more than £200 billion in the technology by 2040.
The extent to which existing laws apply to AI can be difficult for organizations and small businesses to understand. Overlaps, inconsistencies and gaps in current regulators’ approaches can also blur the rules, making it harder for organizations and the public to know where AI is being used.
If UK AI rules fail to keep up with rapidly changing technology, innovation could be stifled and it will become harder for regulators to protect the public.
Instead of giving responsibility for AI governance to a central regulator, as the EU is doing through its AI Act, the government’s proposals will allow different regulators to take a tailored approach to the use of AI in a range of settings. This better reflects the growing use of AI across a variety of industries.
This approach will create proportionate and adaptable regulation so that AI continues to be rapidly adopted in the UK to drive productivity and growth. The Core Principles require developers and users to:
- Ensure AI is used safely
- Ensure AI is technically secure and works as intended
- Make sure the AI is sufficiently transparent and explainable
- Consider fairness
- Identify a legal entity responsible for AI
- Clarify routes to redress or contestability
Regulators – such as Ofcom, the Competition and Markets Authority, the Information Commissioner’s Office, the Financial Conduct Authority and the Medicines and Healthcare Products Regulatory Agency – will be asked to interpret and implement the principles.
They will be encouraged to consider lighter-touch options, which could include voluntary guidance and measures or the creation of sandboxes – such as a trial environment where companies can verify the safety and reliability of AI technology before introducing it to the market.
Industry experts, academics, and civil society organizations that focus on this technology can share their insights on putting this approach into practice through a call for evidence launched today.
Responses will be considered alongside further development of the framework in the upcoming AI white paper which will explore how to put the principles into practice.
The government will consider ways to encourage coordination between regulators and review their capabilities to ensure they are equipped to provide a global regulatory framework for AI.
Professor Dame Wendy Hall, Acting Chair of the AI Council, said:
We welcome these important first steps in establishing a clear and consistent approach to AI regulation. This is key to driving responsible innovation and supporting our AI ecosystem to thrive. The AI Council looks forward to working with the government on the next steps in developing the White Paper.
The government is also releasing the first AI action plan today to show how it is meeting the national AI strategy and to identify new priorities for the year ahead.
The government has invested over £2.3 billion in AI since 2014. Since the release of the National AI Strategy last year, the government has announced new investments in the long-term needs of the sector, including funding for up to 2,000 new scholarships in AI and data science, and has opened up new visa routes to ensure the industry has the skills and talent it needs to continue to thrive.
As part of the strategy, the AI Standards Hub was unveiled earlier this year. The hub will provide users from industry, academia and regulators with practical tools and educational materials to effectively use and shape AI technical standards. The interactive hub platform, led by the Alan Turing Institute with support from the British Standards Institution and the National Physical Laboratory, will launch in autumn 2022.
Notes to Editors:
The guidance document “Establishing a pro-innovation approach to regulating AI,” which includes a link to the call for evidence, is here.
The ten-week call for evidence will run until 26 September. Organisations and individuals working in AI are encouraged to provide feedback to inform the government’s work in this area.
The full AI action plan is available here
The Alan Turing Institute today publishes an independent report which found that stronger coordination among regulators is needed to meet the challenge of regulating the use of AI.
The Data Protection and Digital Information Bill is tabled in Parliament today. The bill will bolster the UK’s high data protection standards, introduce tougher fines for nuisance calls and cut unnecessary paperwork to free up businesses. The reforms will also modernise the Information Commissioner’s Office so it can better help businesses comply with the law.