AI Job Interviews - another good example of bias in ML 🤦♂️
Two journalists tested AI tools for assessing job candidates. Even when the candidate read a Wikipedia article in German instead of answering the questions in English, the AI systems gave her good scores 🤷♂️
Let's unpack 👇
The Setup 🔬
The journalists created a fake job posting on two AI interview platforms. They specified the traits of the ideal candidate and provided the questions to be answered during the interview.
Then they started experimenting... 👇
The Positive Test ✅
One of them did a mock interview giving all the right answers and predictably got a very high score - 8.5 out of 9 👍
Then she tried something different... 👇
The Negative Test ❌
In a second interview, instead of answering the questions in English, she just read the article on psychometrics from the German Wikipedia 😁
One system gave her a score of 6 out of 9, while the other determined she was a 73% match for the job.
Oops... 👇
What happened? 🔍
Interestingly, one of the systems generated a transcript of her answers, and it was obviously meaningless.
This suggests that the machine learning model behind the tool likely picked up on nuances of the speaker's intonation rather than the meaning of the actual words.
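To make that hunch concrete, here is a minimal toy sketch in Python. Everything in it is assumed (the features, the weights, the function names) - it is not the vendors' actual pipeline. It only shows how a scorer that looks at prosody-like signal statistics, and never at the words, would rate confident-sounding gibberish about the same as a real answer.

```python
import numpy as np

def prosody_features(audio: np.ndarray, sr: int = 16_000) -> np.ndarray:
    """Crude prosody proxies: loudness, loudness variability, 'pause' ratio."""
    frame = sr // 50                               # 20 ms frames
    frames = audio[: len(audio) // frame * frame].reshape(-1, frame)
    energy = np.sqrt((frames ** 2).mean(axis=1))   # per-frame RMS energy
    return np.array([
        energy.mean(),                             # overall loudness
        energy.std(),                              # intonation / energy variability
        (energy < 0.1 * energy.max()).mean(),      # fraction of near-silent frames
    ])

def personality_score(audio: np.ndarray) -> float:
    """Map prosody features to a 0-9 'score' with made-up weights."""
    w = np.array([4.0, 6.0, -3.0])                 # hypothetical weights
    return float(np.clip(5 + w @ prosody_features(audio), 0, 9))

# Two fake recordings: fluent English answers vs. reading German Wikipedia aloud.
# Both are just speech-like noise with a similar energy envelope.
rng = np.random.default_rng(0)
t = np.linspace(0, 40, 16_000 * 5)
english_answers = rng.normal(0, 0.3, t.size) * np.abs(np.sin(t))
german_wikipedia = rng.normal(0, 0.3, t.size) * np.abs(np.sin(t * 0.95))

print(personality_score(english_answers))   # the words never enter the model,
print(personality_score(german_wikipedia))  # so both recordings score similarly
```

If the real systems weight audio features like these heavily, a fluent reading of any text - in any language - can look like a strong answer.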
👇