Unions and employment lawyers have called for legislation to protect candidates and employees from ‘life-changing’ decisions about their careers made by artificial intelligence. We gauge the extent of the issue and ask whether more rules are the answer.
The TUC recently sounded the alarm over what it describes as ‘huge gaps’ in UK employment law governing the use of artificial intelligence at work.
Employment rights lawyers Robin Allen QC and Dee Masters of the AI Law Consultancy authored a report commissioned by the TUC, which concluded that new legal protections were needed because workers could be “hired and fired by algorithm”.
Launching the study, TUC general secretary Frances O’Grady said AI, the use of which had accelerated rapidly since the start of the pandemic, “could be used to improve productivity and working lives. But it is already being used to make life-changing decisions about people at work – like who gets hired and fired”.
Discrimination by algorithm
She warned that without fair rules, the use of AI at work “could lead to widespread discrimination and unfair treatment – especially for those in insecure work and the gig economy.”
Although her comments were directed in particular at employers in the gig and casual economies, the technology could be a tempting resource for large professional services firms hoping to sift high volumes of candidates.
Tim Skipper, managing director of cross-functional legal sector talent specialist Totum, says he does not believe law firms are currently making significant use of AI technology in recruitment.
However, he says: “For big international law firms – given they get thousands of applications in areas such as graduate recruitment – it would be very easy to employ AI to sift out people based on certain criteria (eg degree qualification or the type of university attended) but this would clearly not be aligned with the approach of a modern firm which has a well-defined diversity and inclusion strategy running through its approach to resourcing all roles.”
He adds that quality specialist recruitment relies on assessing a person’s suitability for a role against a competency framework aligned with how that role is performed successfully. “That’s how we shortlist people,” he says, “and it’s an approach in which face-to-face contact is necessary, though the pandemic has forced a greater reliance on Zoom and other virtual platforms which has worked quite well”.
The report’s authors say the use of AI systems with no human oversight in the early stages of hiring, to filter out “weaker” applications, is troubling.
And as AI becomes more sophisticated, warns the TUC, firms are likely to entrust it with more high-risk decisions, such as analysing performance metrics to establish who should be promoted or let go.
Among the 15 statutory changes the TUC is calling for is a legal right to have any such “high-risk” decision reviewed by a human.
“A human might undertake some formal task, such as handling a document, but the human agency in the decision is minimal,” the report states.
“Sometimes the human decision-making is largely illusory, for instance where a human is ultimately involved only in some formal way in the decision of what to do with the output from the machine.”
Alongside the right to human review, the TUC wants changes to UK law to protect against discrimination by algorithm.
Transparency is key
Allen and Masters, of Cloisters and the AI Law Consultancy, say: “Already, important decisions are being made by machines. Accountability, transparency and accuracy need to be guaranteed by the legal system through the carefully crafted legal reforms we propose. There are clear red lines, which must not be crossed if work is not to become dehumanised.”
David Lorimer, director and employment lawyer at Fieldfisher, says, however, that it is “worth recognising there are some legal protections in place.
“For instance, decisions that are tainted by discrimination (which can be the case where algorithms are trained on biased sets of data) are challengeable. It’s also the case that employers can only make decisions without human intervention in limited circumstances, and they must be transparent about this, under the UK GDPR.”
Lorimer adds: “Certainly there is more to do when it comes to baking ethical considerations into AI tools deployed in the workplace. My experience is that employers are increasingly live to this, and are taking steps to carefully consider these issues.”
Recruitment specialist Applied has long been cautious around AI, but takes the view that the technology is here to stay and that legislation as envisaged by the TUC may be hard to formulate and enforce.
Its CEO, Khyati Sundaram, says the way AI is used at present does not sufficiently harness the technology’s benefits: “All AI is relatively nascent and great care needs to be taken when it is used in people decisions. We know that algorithmic bias and biased training data sets can lead to perverse or outright prejudiced decisions. The vast majority of AI in recruitment tech is currently focused purely on speeding up the process to the benefit of the hiring organisation, and we think that this needs to change to genuinely find what is best for both candidate and organisation before the full benefits of AI can be realised in hiring.”
She says that, to avoid bias, Applied does not train on historic data sets such as past resumé decisions, instead building entirely new datasets. The company recognises that if biases can be built into AI, counter-biases can be built in too.
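Neither Applied nor the TUC report publishes code, but the kind of bias check such systems invite can be sketched in a few lines. The example below, with wholly invented data and group labels, applies the well-known ‘four-fifths’ adverse-impact test: comparing shortlisting rates between candidate groups after an automated sift.

```python
# Illustrative sketch only: the data and group labels are invented, and this is
# not how any particular vendor's tool works. It applies the conventional
# 'four-fifths' rule: if one group's selection rate falls below 80% of
# another's, the sift may have an adverse impact worth investigating.

from collections import Counter

# Hypothetical sift outcomes: (candidate group, shortlisted?).
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals = Counter(group for group, _ in outcomes)
shortlisted = Counter(group for group, ok in outcomes if ok)

rates = {g: shortlisted[g] / totals[g] for g in totals}
ratio = min(rates.values()) / max(rates.values())

print(rates)                                 # {'group_a': 0.75, 'group_b': 0.25}
print(f"adverse-impact ratio: {ratio:.2f}")  # 0.33
if ratio < 0.8:
    print("Warning: selection rates suggest possible adverse impact.")
```

A check of this sort only flags a disparity; deciding whether it reflects unlawful discrimination remains a human, and legal, judgement.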
Sundaram says: “The reason for the alarm against AI is fairly understandable as most people don’t understand the sources of the issues and therefore blame AI for all the troubles. There are also veritable issues of algorithmic aversion – humans don’t trust what comes in the form of a black box – but this can be rectified by making AI models more transparent, and is one area where regulatory frameworks could be made more stringent.”
She agrees with the report’s authors that transparency from AI practitioners is key, but adds: “Creating AI regulation is tricky – for the most part it is interdisciplinary, needing input from law, philosophy, computer science, programming, diversity specialists and so on. This makes me think regulators will always be playing catch-up with the advancing tech. So the best way to combat that is to have some ethics frameworks and explainability requirements rather than stringent laws that could be very difficult to enforce.”
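What ‘explainability’ might look like in practice can also be sketched, if only loosely. The fragment below, again using synthetic data and hypothetical feature names, fits a deliberately transparent model (a logistic regression) whose learned weights can be read back as the reasons behind a shortlisting score, in contrast to the black box Sundaram describes.

```python
# Illustrative sketch only: synthetic data, hypothetical feature names, and a
# deliberately simple model chosen because its weights are human-readable.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["skills_test", "structured_interview", "years_experience"]

X = rng.normal(size=(200, 3))
# Synthetic 'ground truth': shortlisting driven mainly by the skills test.
y = (1.5 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200)) > 0

model = LogisticRegression().fit(X, y)

# The learned weights are an explanation a candidate could be given:
# here, skills_test should dominate and years_experience carry little weight.
for name, coef in zip(features, model.coef_[0]):
    print(f"{name}: weight {coef:+.2f}")
```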
On transparency and ethical guidance, the report’s authors are aligned with Sundaram: they urge the establishment of a “comprehensive set of ethical guidelines that would sit in parallel to, and enhance, the existing legal framework, creating flexible, practical, and dynamic guidance to employers, trade unions, employees, and workers”.