Biased AI can harm those who belong to low-income segments

Object recognition algorithms sold by tech companies, including Google, Microsoft, and Amazon, fail to recognize everyday household items from less affluent nations. Even Facebook’s ad-serving algorithm plans campaigns along similar lines of race, gender, religion, and income.

A study of five popular object recognition services – Microsoft Azure, Clarifai, Google Cloud Vision, Amazon Rekognition, and IBM Watson – clearly shows that they fail to recognize the simpler household items used in low-income countries.
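
For context, each of these services exposes a simple label-detection API that returns the objects it believes appear in a photo. The sketch below is a minimal illustration of such a call, assuming the google-cloud-vision Python client (v2 or later), valid application credentials, and a hypothetical local image file; the other services work similarly through their own client libraries.

```python
# Minimal label-detection call against Google Cloud Vision (illustrative only).
# Assumes `pip install google-cloud-vision` and GOOGLE_APPLICATION_CREDENTIALS set.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

# "soap_bar.jpg" is a hypothetical photo of a household item.
with open("soap_bar.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.label_detection(image=image)

# Print the labels the model assigns, with its confidence in each.
for label in response.label_annotations:
    print(f"{label.description}: {label.score:.2f}")
```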

The algorithms made around 10% more errors when identifying objects from a household with a $40 monthly income than from a household earning more than $3,200 a month. The absolute difference in accuracy was even greater across countries: the algorithms were 15% to 20% better at identifying items from the US than items from the Philippines or Nigeria.

On average, accuracy was about 85% for items in homes with a monthly income of $10,097, compared to about 71% for homes with a monthly income of just $55.
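
Measuring such a gap is straightforward once each prediction is paired with a ground-truth label and an income group. The sketch below uses made-up records purely to illustrate the computation; it is not the study’s data or code.

```python
# Illustrative audit: compare recognition accuracy across income groups.
# The records are invented for demonstration; a real audit would use a
# geographically and economically diverse labeled photo set.

def per_group_accuracy(records):
    """records: iterable of (income_group, predicted_label, true_label)."""
    correct, total = {}, {}
    for group, predicted, actual in records:
        total[group] = total.get(group, 0) + 1
        if predicted == actual:
            correct[group] = correct.get(group, 0) + 1
    return {group: correct.get(group, 0) / total[group] for group in total}

records = [
    ("high_income", "soap dispenser", "soap dispenser"),
    ("high_income", "toothbrush", "toothbrush"),
    ("high_income", "refrigerator", "refrigerator"),
    ("low_income", "food", "soap"),        # bar of soap misidentified
    ("low_income", "toothbrush", "toothbrush"),
    ("low_income", "bucket", "stove"),     # improvised stove misidentified
]

accuracy = per_group_accuracy(records)
print(accuracy)  # e.g. {'high_income': 1.0, 'low_income': 0.333...}
print("gap:", accuracy["high_income"] - accuracy["low_income"])
```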

This is because these commercially available models are simply not familiar with the objects found in poorer households. They can readily identify an expensive hand wash fashionably packaged in a pump bottle, yet struggle with a traditional bar of soap.

Clearly, the training data used to create these algorithms reflects the lifestyles of their creators, who are often wealthy men from high-income countries. This disparity is resulting in business losses of up to 40% across the rest of the globe, which consists largely of lower-income and non-Western countries.

If intelligence is defined solely by purchasing power, the majority of the global population will be left further behind. Social bias compounds the problem: Facebook’s algorithm, which automatically decides whom to show a particular ad, carries out similar demographic discrimination.

Postings for preschool teachers were shown mostly to women, while postings for taxi drivers were shown to a higher proportion of minorities. LinkedIn’s search engine showed a preference for male names, and a machine learning neural network trained on datasets consisting predominantly of white male faces failed to recognize non-white female faces.

These examples clearly show that machine learning algorithms marketed as “AI” are incapable of making impartial decisions when they lack customization and diverse training inputs. AI models can become as prejudiced as humans, with grave implications for society and significant economic losses for business.