Artificial intelligence will be subject to tighter regulation in the UK in future, the government has signalled.
The Department for Business, Energy and Industrial Strategy this week published a paper setting out ministers’ thinking on balancing the need to embrace the technology and use it for competitive advantage against the perils of unfairness, bias and disadvantage attached to AI.
Business secretary Kwasi Kwarteng said: “It is essential that we maximise the full opportunities which AI can bring to the UK, including by meeting our target of total R&D investment in the UK reaching 2.4% of GDP by 2027. We must achieve this while ensuring that we can build consumer, citizen and investor confidence in our regulatory framework for the ethical and responsible use of AI in our society and economy.”
The paper, Establishing a pro-innovation approach to regulating AI, proposes to “regulate AI based on its use and the impact it has on individuals, groups and businesses within a particular context”. Regulators will have responsibility for designing and implementing “proportionate” responses. The approach is designed to be targeted and support innovation.
Regulators will be asked to focus on material risks and missed opportunities “rather than hypothetical or low risks associated with AI”.
Innovation and the avoidance of unnecessary barriers will be encouraged, with regulators asked to interpret, prioritise and implement these principles within their sectors and domains. They will work closely with the Digital Regulation Cooperation Forum (DRCF) and with other regulators and stakeholders.
The cross-sectoral principles will be set out on a non-statutory basis, the paper said, although this will be reviewed. Lighter touch options, such as guidance or voluntary measures, will be preferred over introducing new laws and rules.
The paper highlighted that while the UK has a distinguished and effective regulatory system, highly regarded for its “rule of law and support for innovation”, it must keep pace with the ever-evolving challenges and opportunities that AI presents if the country is to remain globally competitive.
Sridhar Iyengar, managing director of AI and machine learning tools provider Zoho Europe, welcomed the proposals. He said: “On a global scale, we have barely scratched the surface when it comes to AI and its potential use cases. Early predictions indicate that AI and machine learning could soon lead to widespread development of autonomous transportation, manufacturing and even education and healthcare roles. From a business perspective, the technology is already making huge strides in revolutionising customer service, data analysis and business intelligence tools.”
He added: “The only area where artificial intelligence does need to be regulated more thoroughly is in how, and whether, it is being deployed ethically. Private, and even public sector, organisations could easily use the technology for illicit activity such as surveillance or profiling, as has already been seen in the past. It is important that while innovation in AI is encouraged, so too is the transparency around how it is being deployed. Even without being forced to by law, organisations should take an ethics-led approach to any deployment, and regularly review and refine it as needed.”
Last week, the UK’s information commissioner announced plans for an inquiry into AI systems that screen job candidates, including a look at employers’ evaluation techniques and the AI software they use. Over recent years, concerns have mounted that AI in many cases discriminates against minorities and others because of the speech or writing patterns they use.