
Why a lack of diversity creates bias in artificial intelligence and machine learning

I recently attended The Linux Foundation’s Open Source Summit Europe at the Edinburgh International Conference Centre. This amazing venue was a fantastic backdrop to a busy, exciting and revealing summit.

There are always so many opportunities to learn at tech conferences. However, sometimes the more I learn, the less I know. My main areas of interest are diversity and new and emerging technologies such as blockchain, artificial intelligence and machine learning. Learning about these technologies often raises more questions than answers, and can sometimes lead to scary revelations and realizations.

My interest in gender diversity in tech began last year when I attended The Linux Foundation’s OSSummit in Prague. There I learned about ‘bro culture’ and how it impacts women in the tech industry. This year, my knowledge of diversity was broadened at the Diversity Empowerment Summit, which was part of the main event.

Diversity is an umbrella term that encompasses minority groups underrepresented in tech, such as:

  • LGBTQ+
  • African-American/Black
  • Latino/Hispanic
  • Native American/Native Hawaiians
  • Other ethnic minorities

If these groups are underrepresented and the dominant group in tech is white men, then the question of conscious and unconscious bias in the development of Artificial Intelligence (AI) and Machine Learning (ML) models needs to be raised.

Laura Gaetano, manager of the Travis Foundation, gave some examples of biases embedded in AI and ML applications that shocked me and brought home the urgent need for more diversity in tech.

Amazon.com

Amazon.com’s recruiting tool used AI and ML to rate and select applicants for jobs in the tech industry. Reportedly trained on a decade of résumés submitted to the company, most of them from men, the tool taught itself that male applicants were more suitable for these positions than female applicants.

It also downgraded applications that included the word ‘women’ and those from graduates of two all-women’s colleges. It has recently been reported that Amazon has scrapped this AI recruiting tool. However, the damage has been done, and the consequences for the women it passed over are unknown.

Flickr

Flickr’s auto-tagging image recognition algorithm tagged a photo of a black man with the words ‘ape’ and ‘animal’, and it reportedly applied the same tags to photos of white women.

Racist Soap Dispenser

Chukwuemeka Afigbo, a Nigerian who works in the tech industry, put his hand under an automatic soap dispenser, but no soap came out: the dispenser works by detecting light reflected off whatever is beneath it, and its sensor had evidently not been calibrated for darker skin. When he put a white napkin under it, soap was dispensed quickly and without any problem.

Eric Berlow, co-founder and chief science officer of Vibrant Data Inc, spoke about ‘the rise of the racist robots’ and how AI is learning all our worst impulses. He confirmed that Amazon had ditched its AI recruiting tool, and noted that research has found facial recognition software to be biased in favor of white men.

He added: “It’s not about the algorithms. It’s about the data ecosystem. Healthy AI depends on a healthy data ecosystem. Biased data in leads to biased outcomes.”
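
To see how biased data in becomes biased outcomes out, consider the following minimal sketch in Python with scikit-learn. It is purely illustrative and has nothing to do with Amazon’s actual system: a text classifier is trained on synthetic hiring decisions that historically favored men, and it learns entirely on its own to penalize the word ‘women’.

    # Toy illustration: a classifier trained on biased historical
    # hiring decisions learns the bias as if it were a rule.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    # Synthetic 'resumes' with biased past labels: 1 = hired, 0 = rejected.
    # The only systematic difference is the word 'women'.
    resumes = [
        "captain of chess club, python developer",
        "captain of women chess club, python developer",
        "led robotics team, java developer",
        "led women robotics team, java developer",
        "hackathon winner, experienced developer",
        "women hackathon winner, experienced developer",
    ]
    labels = [1, 0, 1, 0, 1, 0]  # the past decisions encode the bias

    vectorizer = CountVectorizer()
    X = vectorizer.fit_transform(resumes)
    model = LogisticRegression().fit(X, labels)

    # The learned weight for the token 'women' comes out negative:
    # the historical pattern has become an automated rule.
    idx = vectorizer.vocabulary_["women"]
    print("weight for 'women':", model.coef_[0][idx])

No one told the model to discriminate; it simply reproduced the pattern in its training data, which is exactly the ‘data ecosystem’ problem Berlow described.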

Patrick Ball, Director of Research, Human Rights Data Analysis Group warned: “Many, many machine learning applications are terribly detrimental to human rights and society.”

AI and ML applications are becoming part of our lives, so we must be hyper-cognizant of the potential biases and flawed data that are fed into them.

It’s difficult enough for us to recognize and understand our own conscious and unconscious biases without feeding them into these technologies, where they become deeply embedded, relearned and reinforced.

Let’s hope that we don’t reach a situation in the future where the creators of an algorithm face discrimination cases for not hiring a certain type of person based on their gender, race or place of origin. Or one where predictive analytics violates human rights through inbuilt biases, stereotypes, prejudice or misleading statistical analysis.

Some solutions that I believe could stop these situations from coming to pass are:

  • ensuring diversity, inclusion and accessibility so that everyone can participate and innovate in the tech industry
  • setting up watchdog institutions and standards to check that products and services built on deep learning algorithms are thoroughly vetted for bias (a sketch of what such a check might look like follows this list)
  • ensuring that predictive analytics make sense and are aligned with and backed up by human empathy and logic
  • holding the creators, not the technology, accountable when something goes wrong; blaming the technology is unacceptable
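
As an illustration of what such an automated bias check could look like, here is a short Python sketch based on the ‘four-fifths rule’ from US employment guidelines, under which every group’s selection rate should be at least 80% of the most-selected group’s. The data and function names are hypothetical assumptions for the sketch, not an established tool.

    # A minimal sketch of an automated bias audit a watchdog might
    # require. Data, names and threshold here are illustrative only.
    from collections import defaultdict

    def selection_rates(predictions, groups):
        # Positive-outcome rate per demographic group.
        totals, positives = defaultdict(int), defaultdict(int)
        for pred, group in zip(predictions, groups):
            totals[group] += 1
            positives[group] += pred
        return {g: positives[g] / totals[g] for g in totals}

    def passes_four_fifths_rule(predictions, groups):
        # Four-fifths rule: each group's selection rate must be at
        # least 80% of the highest group's selection rate.
        rates = selection_rates(predictions, groups)
        best = max(rates.values())
        return all(rate >= 0.8 * best for rate in rates.values())

    # Hypothetical model output: 1 = selected, 0 = rejected
    preds = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
    groups = ["men"] * 5 + ["women"] * 5
    print(selection_rates(preds, groups))          # {'men': 0.8, 'women': 0.2}
    print(passes_four_fifths_rule(preds, groups))  # False -> flag for review

A failed check like this would not prove discrimination on its own, but it would flag the system for the kind of human review the points above call for.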

Used in the right way, AI and ML bring many new and exciting advances to our everyday lives. However, they also have a dark side. In the hands of a small, privileged group, statistical outcomes and predictions have the ability to discriminate against and exclude large parts of the population.

More than ever before, diversity and inclusion in the tech industry are imperative to ensure a more inclusive, democratic, holistic and fair future for all when AI and ML are used to create products and services.
