MPs and peers are calling for new laws to curb the use of algorithms to monitor workers and set performance targets, warning that employees’ mental health is being damaged by the use of such technology.
The all-party parliamentary group (APPG) on the future of work, chaired by David Davis MP, stated that there had been a “marked increase in the use of AI technologies in the workplace”. It said that use of algorithmic surveillance, management and monitoring technologies … had significantly increased during the pandemic.
The MPs’ report, The New Frontier: Artificial Intelligence at Work, found that the use of artificial intelligence was behind “significant negative impacts on the conditions and quality of work across the country”.
Giving evidence to the committee, Dr David Leslie of the Alan Turing Institute said: “The dignity of workers is under assault in our emerging algorithmically driven environment.”
The report argued that AI had the potential to improve the quality of work and to provide more employment opportunities, if these were the clear objectives behind its use. Prior to the Covid pandemic, it stated, the technology’s main impact was thought to be the substitution of human labour by machine, “but the rise of remote working has increased public concern about the impact of remote monitoring and management”.
In particular, “pervasive monitoring and target setting technologies” were clearly causing “negative impacts on mental and physical wellbeing as workers experience the extreme pressure of constant, real-time micro-management and automated assessment.”
It said people did not understand how personal information could be used to inform decisions about the work they did, and that there was an absence of ways to challenge those decisions and seek redress.
The MPs found very low levels of “confidence in the ability to hold the designer, developers, and users of algorithmic systems meaningfully accountable for their responsible governance”.
Accountability Act
Current laws had been outpaced by the development of the technology, said the APPG. “We find that the challenges we have identified lie between data protection, labour and equality laws,” it said. Without legislative change, AI’s adverse impacts were set to be projected into the future, the report added, “shaping the future in the image of the past”.
However, an Accountability for Algorithms Act would establish a new corporate and public sector duty to undertake, disclose and act on pre-emptive Algorithmic Impact Assessments (AIAs). This would mean, for example, that businesses would have to evaluate and publish the use of AI in performance-driven regimes such as deliveries-per-hour rules for couriers and queue monitoring in supermarkets.
The act would give workers the right to be involved in the use of algorithm-driven systems to determine the allocation of shifts and pay, or how employees gained their role.
Enforcement could fall within the remit of an expanded Digital Regulation Cooperation Forum (DRCF), argued the report, with new powers to create certification schemes, suspend use or impose terms and issue statutory guidance, to supplement the work of individual regulators and sector-specific standards. The forum should be equipped and funded to “rigorously enforce existing and new obligations”.
A fresh approach was needed, Tabitha Goldstaub, chair of the UK Government’s AI Council, told the APPG. “The thinking in the UK is that companies should be required and encouraged to consider and remedy any adverse impacts as soon as possible in the innovation cycle, not post event. The opportunity is for the UK to do better, and lead globally,” she said.
Diversity dimension
There was also a gender dimension to the use of AI, found the committee. Dr Abigail Gilbert, head of research at the Institute for the Future of Work, said: “Women are having to cancel their care responsibilities to respond to shift scheduling at the last minute, workers are no longer talking to each other to keep optimum pace, disabled workers are quitting or being let go due to an inability to perform against these standards … the impact of these systems can be seen beyond any given workforce. It is impacting communities and changing what ‘work’ is.”
David Davis MP, the Conservative chair of the APPG on the future of work, said: “Our inquiry reveals how AI technologies have spread beyond the gig economy to control what, who and how work is done. It is clear that, if not properly regulated, algorithmic systems can have harmful effects on health and prosperity.”
Clive Lewis, a Labour member of the APPG, added: “Our report shows why and how government must bring forward robust proposals for AI regulation. There are marked gaps in regulation at an individual and corporate level that are damaging people and communities right across the country.”
The APPG called for a new focus on the creation of Good Work, and said that tackling the workplace challenges with AI that it had identified would ensure innovation in, and governance of, the best and most human-centred AI, working for people and the public interest.