Proxima

30 May 2023

The world of recruitment tech is moving ever faster with new developments in AI, and plenty of vendors claim their tech is unbiased. But how do you know whether a new AI-enabled tool is truly unbiased, or whether you’re hearing snake-oil claims from slick salespeople? Getting it wrong could mean that your company is liable for biased recruitment decisions. Tread carefully: you’ll need to plan time in your procurement process to test and contract accordingly. Without a plan, you could face regulatory fines, reputational damage, and poor outcomes for candidates and your company.

It’s hard to describe the recent impact of Artificial Intelligence (AI) on the recruitment industry without using dramatic words like “explosion,” “exponential,” and “revolutionary.” AI is being used in every element of recruitment – from scanning resumes and CVs for keywords, to chatbots, video interviewing software, candidate assessment, selection, and more. Not a week goes by without news of more use cases to consider in the HR space.

And yet, if these otherwise innovative solutions are not built correctly, bias could be embedded in them. Bias could prioritise some candidate applications over others; eliminate candidates from the process altogether; even evaluate candidates for cultural fit based on the language they use in an interview – or judge their facial expressions (insert sceptical eyebrow-raised emoji here).

Could your organisation be at risk of allowing bias in via AI-enabled recruitment tech? In the US, the Equal Employment Opportunity Commission (EEOC) has recently released guidance[1] on this, meant to remind employers and staffing agencies of their obligations under Title VII of the Civil Rights Act of 1964 (a federal anti-discrimination law). Title VII, together with the other federal laws the EEOC enforces, prohibits discrimination based on race, colour, national origin, religion, sex (including pregnancy, sexual orientation, and gender identity), disability, age (40 or older), and genetic information. And some local US jurisdictions are taking an even stronger approach, prohibiting discrimination based on military status or marital status, or, under a recent New York City law[2], height and weight.

Your organisation could be held liable for discrimination by the recruitment tech you use, just as it would be for a discriminatory offline process. That liability cannot be pushed off to the software provider. The EEOC guidance states:

“if an employer administers a selection procedure, it may be responsible under Title VII if the procedure discriminates on a basis prohibited by Title VII, even if the test was developed by an outside vendor. In addition, employers may be held responsible for the actions of their agents, which may include entities such as software vendors, if the employer has given them authority to act on the employer’s behalf. This may include situations where an employer relies on the results of a selection procedure that an agent administers on its behalf.”

A faulty process could therefore expose you to fines and reputational damage.

As for other countries? Their anti-discrimination laws haven’t changed either; what remains important is ensuring that you’ve tested both your online and offline processes end-to-end and taken steps to eliminate bias.

So what must procurement do when evaluating AI-enabled recruitment tech?

  • Testing. Allow time in your procurement process to test the software for bias. You may have to enlist the help of experts – you’ll need data analysis, impact analysis, fairness metrics, and user feedback (a minimal sketch of one such check follows this list). [It would be unsurprising if, in future, accreditation schemes popped up that could aid in the bias testing of supplier software.]
  • Indemnity clauses. If a regulatory body fines you, can you contractually require your recruitment tech provider to cover those fines? Insert indemnity clauses into your contract that take precedence over the End User Licence Agreement (EULA). They won’t save you from reputational damage, but they may make the tech provider more proactive in managing risk.
  • Ongoing audit. If a regulatory body asked you to provide evidence that your process is free from bias, and you hadn’t tested it (or hadn’t tested it recently or regularly), what would you do? Consider the evidence you’d need (initial testing, then perhaps annually?) and how you would document that it was completed.
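
To make the testing step concrete: one common starting point, discussed in the EEOC guidance, is the “four-fifths rule” – if the selection rate for any group falls below 80% of the rate for the most-selected group, the tool may be producing adverse impact and warrants closer scrutiny. Below is a minimal Python sketch of that calculation; the group names and numbers are hypothetical, and a real assessment would add statistical significance testing and expert review.

```python
# Minimal adverse-impact check using the "four-fifths rule".
# All group names and counts below are illustrative, not real data.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants that the tool advanced."""
    return selected / applicants

def impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Each group's selection rate divided by the highest group's rate."""
    rates = {g: selection_rate(sel, apps) for g, (sel, apps) in outcomes.items()}
    highest = max(rates.values())
    return {g: rate / highest for g, rate in rates.items()}

if __name__ == "__main__":
    # (selected, total applicants) per group -- hypothetical numbers
    outcomes = {"group_a": (48, 100), "group_b": (30, 100)}
    for group, ratio in impact_ratios(outcomes).items():
        flag = "REVIEW" if ratio < 0.8 else "ok"
        print(f"{group}: impact ratio {ratio:.2f} [{flag}]")
```

In this hypothetical example, group_b’s selection rate is only 62.5% of group_a’s, so the check flags it for review. A single ratio is evidence, not a verdict: sample sizes, job-relatedness, and the other fairness metrics mentioned above all matter before drawing conclusions.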

The EEOC does not report on the number of complaints it receives from people who believe they may have been discriminated against. And it’s quite likely that cases are underreported – after all, if you as a candidate were discriminated against, whether by an AI or by a person, would you even know? Organisations that take a cynical approach to managing this risk could be sleepwalking into peril.

“Life moves pretty fast. If you don’t stop and look around once in a while, you could miss it.” – Ferris Bueller, in Ferris Bueller’s Day Off. Yes, AI is moving fast; but don’t be blind to the risks of bias in your recruitment tech. Take your time to evaluate, contract, and manage properly to protect your organisation and make the process better and fairer for all involved. And, as always, if you need help with your contracts and third-party suppliers, contact the team at Proxima.


[1] Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964, U.S. Equal Employment Opportunity Commission (eeoc.gov)

[2] The New York City Council, File #: Int 0209-2022 (nyc.gov)
