With the use of AI tools such as ChatGPT, professionals can streamline processes and achieve greater efficiency. However, businesses must take responsibility for ensuring that work completed with AI is done securely, effectively and without bias. Here, Áine Fanning, managing director of Cpl’s Talent Evolution Group, investigates eight key questions facing recruitment with the advent of ChatGPT-style technology.
Q: Will ChatGPT-like AI change the way people apply for jobs?
A: The promise of ChatGPT is to automate tasks that we used to think were the exclusive domain of humans. ChatGPT touches anything that involves language: CVs, covering letters, emails – perhaps even scripts for telephone and in-person interviews. But the big question is competence. Can ChatGPT truly match human efforts? For now, at least, the answer appears to be “No”. The outputs ChatGPT generates, though often passable, tend to be generic. It lacks the unique spark that distinguishes exceptional candidates. There are also big issues with its depth of understanding. Ask ChatGPT to share genuine expertise on a niche topic and it will often get its facts wrong. Neither of these weaknesses bodes well for a job application. With that said, a little human help goes a long way. A human editing ChatGPT’s output could produce a better result than either the human or AI acting alone. As the technology improves, it’s even possible that the human element could fade away entirely.
Q: How about recruiters? Will ChatGPT free up their time to focus more on strategy and less on admin?
A: For us, the role of ChatGPT starts and ends with automation. At no point should outputs from ChatGPT be instrumental in making decisions. Potential short-term applications of ChatGPT will likely be low impact. For example, creating a CV template to pass along to a candidate. Even in these instances, though, the role of human oversight – of sense-checking outputs to ensure they are of a high quality – will remain absolutely crucial.
Q: There is a concern that there will be a bias in documents produced by ChatGPT. What are your thoughts on this?
A: It’s certainly a problem. Language models like ChatGPT work by aggregation, meaning they take in huge volumes of data and average it out to answer specific prompts. Inherently, that means they are going to reproduce biases that exist in society at large. For recruitment professionals, that’s a huge problem – because our aim should always be to strive for a better society, not merely perpetuate existing power structures. If ChatGPT usage becomes widespread, it’s imperative we stay wise to the prospect of bias. It’s never going to be a valid excuse to simply blame ChatGPT for an instance of bias or discrimination. Recruitment organisations must take ownership of their attitudes and perspectives – that’s true now and will remain so in the future, regardless of what technological advances await us.
Q: There have been experiments whereby ChatGPT has sat and passed exams. Will this affect recruiters’ ability to effectively filter candidates?
A: Yes and no. On the one hand, it’s undeniable that ChatGPT has access to a vast amount of knowledge, which it can harness and reproduce to serve a variety of purposes. A candidate could quite plausibly get ChatGPT to write a document they would have been unable to write themselves, then edit the output to put their own stamp on it. That said, there are certain hard limits on ChatGPT’s usefulness. An obvious one is the brute facts of a candidate’s employment history – recruiters will still be able to check up on references and establish the truth of those claims. ChatGPT also won’t be much help in an in-person interview, particularly one where the candidate must complete a task assigned to them then and there. We expect to see recruiters adjusting their methods in response to ChatGPT. Ultimately, recruiters want to find out the ways in which candidates are distinct from one another – how their skill sets differ. Even if we’re accepting ChatGPT as a valid way to assist with CV writing and other tasks, that’s still only one skill – there’s much more to a candidate than that. Accordingly, if the use of AI becomes widespread, recruiters will simply find new ways to surface those unique traits that make candidates special.
Q: What consequences might candidates face if they use ChatGPT inappropriately when applying for jobs?
A: All applicants should be conscious that several tools exist for checking whether a given piece of text was generated by ChatGPT. Though none are 100% accurate, they could nevertheless corroborate a hiring manager’s suspicions that a CV or cover letter displays ChatGPT’s hallmarks. Enforceability is an issue, however. Savvy candidates may edit ChatGPT’s responses into their own words, or even use tailored prompts to encourage ChatGPT to depart from its default tone of voice.
Q: Do you predict there will be restrictions placed on the use of ChatGPT in recruitment, for example, for security reasons?
A: We’re conscious of a few news stories where big companies have banned the use of ChatGPT. A notable case was the financial services company JPMorgan Chase. But they didn’t specify a reason for banning ChatGPT beyond their existing guidelines about the use of third-party software. So, on the surface at least, their reasons for the ban don’t seem related to any security threat from AI technology. It all comes back to the same question of enforcement. If ChatGPT can do a job effectively, employees who use it are going to benefit. And when the tool is public, can you really stop people using it? A more likely future is one where ChatGPT usage becomes accepted as a part of normal working life. Recruitment professionals – just like workers in all other domains – will come to understand its benefits and limitations. They’ll use it when it helps them do their job more effectively and stick to more traditional methods where ChatGPT is of no help.
Q: To what extent do you feel there is a risk of great candidates being left out of a recruitment process due to a reliance on AI technology? For example, if AI is used to scan CVs for specific words.
A: This already happens. Some companies use software to scan for keywords; others instruct their recruiters to manually look for those keywords. It’s easy to see how this process could be counter-productive when implemented in an unsophisticated way. What about the candidates who express their qualifications in a more creative – but equally valid – fashion? We don’t necessarily want to punish candidates who dare to break the mould. With ChatGPT, the logic is the same. We look at the outcomes and ask whether they are favourable. Perhaps ChatGPT goes beyond the capabilities of legacy CV-scanning software. If so, great – but how can we use that capability to actually improve our filtering processes? Any errors that follow from that can be remedied by keeping humans in the loop to review where necessary. We shouldn’t be afraid of automation – but we need to make sure we preserve the highest possible levels of rigour.
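The pitfall of unsophisticated keyword scanning can be illustrated with a minimal sketch. The keywords, CV snippets and function name below are invented for illustration; real applicant-tracking systems are considerably more sophisticated, but the underlying risk is the same.

```python
# A minimal sketch of naive keyword-based CV screening, showing how a
# rigid exact-match filter can reject a candidate who describes the same
# skills in different words. All keywords and CV texts are hypothetical.

REQUIRED_KEYWORDS = {"project management", "stakeholder engagement"}

def passes_keyword_filter(cv_text: str) -> bool:
    """Return True only if every required keyword appears verbatim."""
    text = cv_text.lower()
    return all(keyword in text for keyword in REQUIRED_KEYWORDS)

cv_a = "Five years of project management and stakeholder engagement."
cv_b = "Led cross-functional projects and managed relationships with key stakeholders."

print(passes_keyword_filter(cv_a))  # True: exact phrases present
print(passes_keyword_filter(cv_b))  # False: same skills, different wording
```

The second CV describes equivalent experience but fails the filter, which is precisely the “creative but equally valid” phrasing problem described above.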