Bias is a funny thing, in that it’s generally only the people subject to it who notice it. Those wearing rose-coloured glasses carry on thinking everything is working perfectly. They see the benefits and write off claims of bias with dismissive comments like: “you can’t get everything right all of the time.”
Except that artificial intelligence (AI) is supposed to learn so that it can be more accurate. If it’s being developed to help us become more efficient and effective, then those benefits need to be inclusive.
It’s not enough for AI to be fast; if it’s delivering results that aren’t correct, the benefits are diluted. In some applications, incorrect results can even be dangerous. Think of traffic or public security systems designed to identify threats, but which consistently overlook potentially dangerous behavior due to bias. Or worse, flag innocent people as potential risks because of that same bias in the algorithms.
There’s another social element to this as well. Increasingly, algorithms are being used by banks, colleges and businesses to determine eligibility for loans, access to programs or other benefits. If there’s bias in these systems, then people who should qualify are more likely to be excluded. There’s also a cost to businesses: excluding certain customer profiles due to bias shrinks the target audience and can reduce business opportunities.
The Flipside of AI
There are two key advantages of AI: it operates rapidly and at scale. In terms of bias, this is equally a disadvantage. If there is bias in the algorithms, that bias will spread rapidly and at scale until it is so widespread that it becomes harder to identify. The way information is presented becomes accepted as the norm, because nobody flags the blind spots.
There’s an interesting example that highlights this point. MIT graduate Joy Buolamwini first encountered bias in algorithms while working with social robots as an undergraduate at Georgia Tech. The robot relied on facial recognition, and whenever Joy tried to interact with it, it failed to recognize her face because of her dark skin tone. It worked for everyone else, and it even worked when she wore a white mask, but it didn’t recognize her face as a human face. That’s bias.
What this example highlights is that when algorithms are developed, they need to be trained on data that is representative of the whole population, not just one or two samples. Bias is often a reflection of who is developing the algorithms, and even when it’s unconscious or unintentional, bias can creep in. This is part of the problem: unless people and organizations actively work to identify bias in algorithms, it will continue to go undetected.
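As a toy illustration of what “representative data” means in practice, a first sanity check is simply measuring how each group is represented in the training set before a model ever sees it. This is a minimal sketch, not any particular vendor’s process; the group labels, counts and the 10% threshold are all hypothetical.

```python
from collections import Counter

def representation_report(samples, group_key, min_share=0.10):
    """Print each group's share of the dataset and flag under-represented groups.

    `samples` is a list of dicts; `group_key` names the demographic attribute.
    The 10% threshold is an arbitrary illustration, not a recognized standard.
    """
    counts = Counter(sample[group_key] for sample in samples)
    total = sum(counts.values())
    for group, count in sorted(counts.items()):
        share = count / total
        flag = "  <-- under-represented" if share < min_share else ""
        print(f"{group}: {count} samples ({share:.1%}){flag}")

# Hypothetical training set for a face-recognition model.
training_set = [{"skin_tone": "light"}] * 920 + [{"skin_tone": "dark"}] * 80
representation_report(training_set, "skin_tone")
```

A check this simple won’t catch every form of bias, but it makes one common failure mode, training mostly on one group, visible before the model is built.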
This means auditing current systems as well as taking a more inclusive view when developing new algorithms. We all operate within our own world view, and that is fine for individuals. But when we’re developing AI systems to think and make decisions on our behalf, that world view needs to broaden beyond ourselves. If we’re more conscious of potential bias in AI, we’ll also be better equipped to resolve it and build better AI systems.
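Auditing an existing system can start just as simply: compare the model’s accuracy across groups rather than only in aggregate. The sketch below assumes you already have a prediction, a ground-truth label and a group attribute for each record; all names and numbers are illustrative.

```python
def audit_by_group(records):
    """Compute overall vs. per-group accuracy from labelled predictions.

    Each record is a dict with `group`, `label`, and `prediction` keys.
    A large gap between groups is a signal to investigate, not proof of bias.
    """
    groups = {}
    for r in records:
        groups.setdefault(r["group"], []).append(r["label"] == r["prediction"])

    all_hits = [hit for hits in groups.values() for hit in hits]
    print(f"overall accuracy: {sum(all_hits) / len(all_hits):.1%}")
    for group, hits in sorted(groups.items()):
        print(f"  {group}: {sum(hits) / len(hits):.1%} ({len(hits)} records)")

# Hypothetical audit data: the aggregate number hides a wide per-group gap.
records = (
    [{"group": "light", "label": 1, "prediction": 1}] * 90
    + [{"group": "light", "label": 1, "prediction": 0}] * 10
    + [{"group": "dark", "label": 1, "prediction": 1}] * 65
    + [{"group": "dark", "label": 1, "prediction": 0}] * 35
)
audit_by_group(records)
```

In this made-up example the system looks fine overall (77.5% accurate) while performing far worse for one group (65% versus 90%), which is exactly the kind of blind spot that goes unnoticed unless someone deliberately breaks the numbers down.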
If you’ve experienced AI bias and want to contribute to the conversation, please share your experiences with us.