Bias is bad for business. Psychologists have identified more than 180 kinds of human bias, and every one of them can affect a business's decision-making process in one way or another. According to a Gallup poll, employees who perceive bias are more likely to be actively disengaged from their work, and this kind of disengagement costs US companies around $500 billion each year. As if the lost revenue weren't bad enough, perceived bias has also been shown to hurt employee retention.
While there are many ways to combat human bias in business processes, one of the more popular approaches is to use AI and machine learning tools to remove the human element from the equation.
However, a litany of incidents has shown that AI is not the panacea it was promised to be. Amazon's automated recruiting tool repeatedly favored candidates whose resumes contained words like "executed" and "captured". Because these terms were more likely to be used by men, women were less likely to be selected.
Similarly, natural language processing models trained on news articles have been shown to pick up the gender stereotypes present in the text.
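One common way researchers detect this kind of stereotype is to measure cosine similarity between word vectors. The sketch below is purely illustrative: the tiny 3-dimensional vectors are invented for demonstration, whereas real embeddings trained on news text have hundreds of dimensions, but the measurement technique is the same.

```python
import math

# Hypothetical toy vectors standing in for a trained word embedding.
# A stereotyped embedding places "engineer" closer to "he" and
# "nurse" closer to "she" in vector space.
vectors = {
    "he":       [0.9, 0.1, 0.2],
    "she":      [-0.9, 0.1, 0.2],
    "engineer": [0.7, 0.5, 0.1],
    "nurse":    [-0.6, 0.6, 0.1],
}

def cosine(u, v):
    """Cosine similarity: dot product over the product of norms."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(a * a for a in v))
    return dot / norm

for word in ("engineer", "nurse"):
    # Positive bias score = closer to "he"; negative = closer to "she".
    bias = cosine(vectors[word], vectors["he"]) - cosine(vectors[word], vectors["she"])
    print(word, round(bias, 3))
```

If a model trained on news articles shows a consistent gap like this across many occupation words, the embedding has absorbed a gender stereotype from its training data.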
This brings us to an important inflection point. Companies adopt AI and ML processes to remove bias, but if bias is baked into the algorithm from the start, the result can be a cascade of failures with an even greater negative impact on the business than never implementing AI at all.
The key question when it comes to dealing with bias is: how should fairness be codified in the first place? According to Prof. Arvind Narayanan, there are at least 21 different definitions of fairness, and most experts believe even that figure is a lowball. Much of the conversation around defining fairness revolves around individual fairness, group fairness, and treating individuals with similar attributes or qualifications similarly.
However, there are trade-offs that can't be ignored. Researchers have shown that even the best AI models cannot satisfy more than a few fairness metrics simultaneously. As a result, a company can make a legitimate claim that its AI system is unbiased under one set of definitions, while critics can just as legitimately call the same system biased because it fails a different set.
A great example of this conundrum in action is Equivant, the company that developed the COMPAS recidivism scores. Equivant claimed its system was unbiased because it met the fairness objective of "predictive parity", while ProPublica considered it biased because it failed to balance false positive rates across groups.
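The two sides were measuring different things, and both measurements can be computed from the same confusion matrix. The sketch below uses made-up counts (not the actual COMPAS data) to show how a system can satisfy predictive parity while still having very different false positive rates across groups:

```python
# Illustrative confusion-matrix counts for two demographic groups.
# tp/fp/fn/tn = true positive, false positive, false negative, true negative.
groups = {
    "A": {"tp": 6,  "fp": 4, "fn": 10, "tn": 16},
    "B": {"tp": 12, "fp": 8, "fn": 4,  "tn": 8},
}

def ppv(c):
    """Positive predictive value: of those flagged, how many reoffended.
    Predictive parity asks this to be equal across groups."""
    return c["tp"] / (c["tp"] + c["fp"])

def fpr(c):
    """False positive rate: of those who did NOT reoffend, how many
    were flagged anyway. This is the metric ProPublica compared."""
    return c["fp"] / (c["fp"] + c["tn"])

for name, c in groups.items():
    print(f"group {name}: PPV={ppv(c):.2f}, FPR={fpr(c):.2f}")
```

With these numbers, both groups have a PPV of 0.60, so the vendor's predictive-parity claim holds, yet group B's false positive rate (0.50) is more than double group A's (0.20). Neither side is lying; they have simply codified fairness differently.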
So where do we go from here? Adoption of AI and ML models is clearly gaining steam, but reducing bias, both human and data-related, remains an ongoing process. Experts have identified some practices that can help improve people's trust in AI systems.
The customer service sector has also moved toward adopting AI systems to improve agent training, reduce customer churn, and strengthen customer experience and loyalty. However, these companies need to be cognizant that not all AI models are created equal. Each vendor may mitigate bias differently, so two models can produce very different results, with knock-on effects on customer experience, on identifying and promoting the right agents, and most importantly, on the company's bottom line.

That said, AI-based systems have officially entered the market, are constantly evolving, and provide insights and solutions that weren't previously available, such as scaling QA processes without increasing headcount, delivering real-time insights on customer sentiment, and even predicting CSAT scores with a high degree of accuracy. These benefits mean AI-powered solutions will be adopted across most customer contact centers, and the centers that hold out will find themselves falling behind their peers.