Pressure is growing on ministers to set up safeguards to protect employees from the misuse of artificial intelligence technology in the workplace.
The TUC is holding a meeting today (18 April) aimed at building a consensus on the need for greater regulation of employers' use of AI.
The forum will be attended by the likes of MP David Davis, a member of the All-Party Parliamentary Group on the future of work; Robert Bancroft, AI and digital services lead at the Equality and Human Rights Commission; Neil Ross, associate director at TechUK; and Anna Thomas, director and co-founder of the Institute for the Future of Work.
The TUC said AI was now making “high-risk, life-changing” decisions about workers’ lives, including line-managing, hiring and firing staff. It was also being used to analyse facial expressions, tone of voice and accents to assess candidates’ suitability for roles.
Without more safeguards, warned the union body, AI could lead to greater discrimination at work across the economy, particularly as “workers are being kept in the dark about how AI is being used to make decisions that directly affect them”.
It cited polls that showed most workers harboured significant reservations about the rise of unfettered AI.
Light touch regulation
The government has signalled that it intends to have a light-touch regulatory presence in the use of AI. In July 2022 it stated it would “regulate AI based on its use and the impact it has on individuals, groups and businesses within a particular context”.
Regulators would have responsibility for designing and implementing “proportionate” responses, the government said. According to a paper issued by the Department for Digital, Culture, Media and Sport, the approach would be to promote innovation, with regulation kept “targeted”.
Last month, the UK Data Protection and Digital Information Bill (No 2) was introduced. It proposed that the restrictions on automated decision-making under Article 22 of the UK GDPR should apply only to decisions resulting from automated processing without “meaningful human involvement”. It suggested that profiling would be a relevant factor in assessing whether there has been meaningful human involvement in a decision, but critics have said the meaning of this remains vague and ambiguous.
In last month’s Budget, chancellor Jeremy Hunt announced initiatives designed to encourage research and investment, including an “AI sandbox” where innovators can trial tools, models and systems, helping AI businesses get “cutting-edge” products to market more efficiently.
Dismal failure
According to the TUC, the Bill was a “dismal failure”, with the government providing only vague guidance to regulators on how to ensure AI is used ethically at work, and no additional capacity or resources to cope with rising demand.
The TUC warned that the Bill was already setting a “worrying direction of travel” and would dilute important rights – currently guaranteed under GDPR – that provide workers with protections against automated decision making and give workers and unions a say over the introduction of new technologies through an impact assessment process.
Among the speakers at the TUC forum is AI and employment rights lawyer Robin Allen KC, who said that without more money, more expertise, more cross-regulatory working, more urgent interventions, and more control of AI, “it will not be just a race to the bottom for workers’ rights, the whole idea of any rights at work will become illusory”.
Fair rules
TUC assistant general secretary Kate Bell added that without fair rules, AI could lead to widespread discrimination and unfair treatment at work. She added that it was “essential that employment law keeps pace with the AI revolution. Last month’s dismal AI white paper spectacularly failed to do that.”
Jonathan Exten-Wright, a partner at law firm DLA Piper, warned that employers needed to tread very carefully with AI to avoid “significant liability”. He said: “AI’s future impact on the workforce can’t be exaggerated. Existing employment legislation will need applying in a different context. And it remains to be seen whether the government’s proposed regulatory approach will meet concerns. Employers must navigate the pitfalls carefully – managing what is effectively systemic risk at every stage with appropriate controls, or else face significant liability and employee mistrust.”
Meanwhile, in Italy, ChatGPT was banned on 31 March over data privacy fears. Italy’s data protection authority, the Garante, has given OpenAI, the creator of ChatGPT, until 30 April to address concerns over privacy and security. The ban has prompted a rise in the use of virtual private networks in the country as curious users look to access the chatbot.