The TUC has sounded the alarm over what it describes as "huge gaps" in UK employment law governing the use of artificial intelligence at work.
New legal protections were needed because workers could be "hired and fired by algorithm", the trade union body said in a report published today (25 March). The study was carried out by employment rights lawyers Robin Allen QC and Dee Masters of the AI Law Consultancy.
TUC general secretary Frances O’Grady said AI, the use of which had accelerated rapidly since the start of the pandemic, “could be used to improve productivity and working lives. But it is already being used to make life-changing decisions about people at work – like who gets hired and fired”.
She warned that without fair rules, the use of AI at work “could lead to widespread discrimination and unfair treatment – especially for those in insecure work and the gig economy.”
Earlier in March, Uber was strongly criticised for using AI that wrongly denied gig workers access to its app, a failure with indirectly racist effects because the facial identification software the system relied on was less accurate on darker-skinned faces.
Many companies use AI systems with no human oversight in the early stages of their hiring processes, to filter out “weaker” applications.
And as AI becomes more sophisticated, warned the TUC, firms are likely to entrust it with more high-risk decisions, such as analysing performance metrics to establish who should be promoted or let go.
Among the 15 statutory changes the TUC is calling for is a legal right to have any such “high-risk” decision reviewed by a human.
“A human might undertake some formal task, such as handling a document, but the human agency in the decision is minimal,” the report stated.
“Sometimes the human decision-making is largely illusory, for instance where a human is ultimately involved only in some formal way in the decision what to do with the output from the machine.”
Alongside that right to human review, the TUC is calling for changes to UK law to protect workers against discrimination by algorithm.
Allen and Masters, of Cloisters law firm and the AI Law Consultancy, said that while AI could be beneficial, "used in the wrong way it can be exceptionally dangerous".
“Already important decisions are being made by machines,” they said. “Accountability, transparency and accuracy need to be guaranteed by the legal system through the carefully crafted legal reforms we propose. There are clear red lines, which must not be crossed if work is not to become dehumanised.”
David Lorimer, director and employment lawyer at Fieldfisher, said, however, that it was "worth recognising there are some legal protections in place.
“For instance, decisions that are tainted by discrimination (which can be the case where algorithms are trained on biased sets of data) are challengeable. It’s also the case that employers can only make decisions without human intervention in limited circumstances, and they must be transparent about this, under the UK GDPR.”
Lorimer added: “Certainly there is more to do when it comes to baking ethical considerations into AI tools deployed in the workplace. My experience is that employers are increasingly live to this, and are taking steps to carefully consider these issues.”
The report's authors also urged the establishment of a comprehensive set of ethical guidelines, which "would sit in parallel to, and enhance, the existing (and hopefully improved) legal framework, creating flexible, practical, and dynamic guidance to employers, trade unions, employees, and workers. Again, the trade union movement could use its unique access and perspective on the challenges faced by workers and employees to construct ethical guidelines that would be particularly meaningful and considered."