As executive recruiters, we have always prided ourselves on our natural and honed skill in judging people for a living. That is what clients pay for, after all. Clients expect us to have vetted candidates not only for their technical competencies but for all the intangible “fit” qualities they seek before hiring someone. Are values aligned? Can a trusted relationship be formed? Are they really who they say they are?
With the introduction of AI, we’ve generally been dismissive of the threat of it replacing our unfailing and steadfast interviewing expertise. A robot isn’t going to sit in front of someone and notice when they’re not telling the whole story. Nor will it build the rapport that puts a candidate at ease so that they eventually spill the beans on something they were hiding. At a recent Association of Executive Search Consultants partners’ forum in Chicago, where the topic of discussion was technology use in the search industry, the threat of AI replacing our services was still being dismissed around the table: “AI will never replace us.” After all, we’ve been trusted with vetting candidates for the highest-profile C-level and board placements. The individuals we screen are smart, well-spoken, confident and present well. It’s hard to imagine they have something to hide, but it’s a good thing we have well-developed techniques and expert analysis to verify professed qualifications and uncover inconsistencies, omissions and red flags that might otherwise stay hidden.
Despite our reliance on human interaction and expertise, I’ve been reading things lately that make me very uncomfortable. It started with Malcolm Gladwell’s newest book, Talking to Strangers, where he provides example after example of specialists in the field, be they criminal judges, CIA officers, or Pentagon security officials, none of whom can detect lying better than a computer-generated random selection. Gladwell explains that most of us have a predisposition to doubt out-of-the-ordinary occurrences, which is why it was the parents themselves who ultimately covered for Larry Nassar’s prolific abuse of the US women’s gymnastics team. The baffling “stranger problem” Gladwell writes of leads to the question: how do we get people we don’t know so wrong? So, if criminal judges and Pentagon officials can’t spot when someone’s not being truthful, what are the odds of executive recruiters getting it right?
One would think that screening candidates in person would surely help us get it right. We can read their expressions, sense any nervousness, read their body language, pick up on their tone, and pretty much judge them as a pass or not. According to a Farnam Street blog post, The Illusion of Transparency: Your Poker Face is Better Than You Think, in reality you can’t assume that facial expressions and body language will help you detect what you’re looking for. Unless someone is being obvious, there exists an illusion of transparency: a disconnect between what we believe we are conveying and what others actually pick up on. This is pretty much what Gladwell points out in his book when he says that advanced facial and voice recognition systems are far more effective than expert humans at identifying those who are not being honest.
Let’s just remember AI stands for Artificial Intelligence. Artificial. Not “Intuition,” which is a human executive recruiter’s special skill. AI can match a job to qualifications, but a recruiter can see the potential in a candidate and the value they can bring to an employer, value that even the employer hasn’t thought of. Nor will AI provide the empathy that creates a positive candidate experience. Our clients care about their brand; they care about cultural fit, candidate perceptions of them, and relationships that go beyond job matching or whether someone can form a bond with a chatbot. What AI still hasn’t figured out is the soft skills. There are clearly things that require an expert human touch to manage, which, I have to say, is something AI developers are catching on to and working on.
The next advance: machines that appear more human. Enter the lifelike android Erica, developed in Japan at Osaka University’s Intelligent Robotics Laboratory in collaboration with Kyoto University and the Advanced Telecommunications Research Institute International. Erica can express connection through her gaze and emotion through her sophisticated synthesized voice, subtle facial expressions and physical behaviours. All because the business case shows that as customers gravitate to digital interfaces, brands are using emotion-enabled digital agents to create emotional connections with customers, which drives long-term engagement and, ultimately, loyalty.
For now, executive recruitment is still very much about the human element and relationship building; however, we may need to work with the next generation of AI as it evolves. Something to keep in mind when you read about our newest team member at Pekarsky & Co.… Erica.
Regards,
Ranju.