The biases in our algorithm

by Hija Kamran
September 22, 2020


Social media platforms have repeatedly been criticised for employing algorithms that favour certain kinds of content, or certain groups of people, over others, a phenomenon known as algorithmic bias. Vox's Recode explains that "these systems can be biased based on who builds them, how they're developed, and how they're ultimately used." The problem is not restricted to social media, as machine learning and Artificial Intelligence (AI) are now part of most technologies and digital platforms that people navigate. The bias in this technology is widespread and, in many cases, has serious repercussions.

Over the past weekend, various Twitter users posted threads documenting what they claim is the platform's algorithmic bias. This led to a global discussion on how social media platforms fail to ensure that their technology, and by extension their services, is free from prejudice. The threads highlighted how the preview of photos posted on Twitter prioritised the faces of white people over the faces of people of colour.

The discussion was started by a PhD student who was helping a faculty member figure out why Zoom kept removing his head whenever he used virtual backgrounds on the video calling app. He soon realised that Zoom's face-detection algorithm was failing to detect, and therefore erasing, the faces of Black people.

A faculty member has been asking how to stop Zoom from removing his head when he uses a virtual background. We suggested the usual plain background, good lighting etc, but it didn't work. I was in a meeting with him today when I realized why it was happening.

— Colin Madland 🇺🇦 (@colinmadland) September 19, 2020
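
Virtual backgrounds generally work by segmenting the person from each video frame and replacing everything else with the chosen image, so if the face or person detector fails on a subject, the entire frame, head included, is treated as background. The following is a minimal sketch of that failure mode, not Zoom's actual pipeline; the function and the empty detection mask are assumed purely for illustration.

```python
import numpy as np

def apply_virtual_background(frame, person_mask, background):
    """Keep pixels where the segmentation model found a person,
    and replace everything else with the chosen background image."""
    return np.where(person_mask[..., None], frame, background)

# If the detector fails on a subject (as described above), the mask comes
# back empty and the person is replaced along with the rest of the room.
frame = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
background = np.zeros_like(frame)

empty_mask = np.zeros((480, 640), dtype=bool)   # detector found "no person"
out = apply_virtual_background(frame, empty_mask, background)
print(np.array_equal(out, background))          # True: the subject vanishes
```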

However, in the process he also realised that this bias was not restricted to one platform: Twitter was doing something similar, prioritising the white student's face in the tweet's photo preview despite his attempts to change the orientation of the photos.

Geez…any guesses why @Twitter defaulted to show only the right side of the picture on mobile? pic.twitter.com/UYL7N3XG9k

— Colin Madland 🇺🇦 (@colinmadland) September 19, 2020

This led to a series of threads in which people tested the apparent bias with variations in the photos, their orientation, their format, and their subjects. Some also tried it with fictional characters to check whether the algorithm still preferred lighter-skinned faces over darker skin tones and Black people. Unsurprisingly, most of the time the tweet previews showed the images of white people, a fair indication that the bias does in fact exist.

Trying a horrible experiment…

Which will the Twitter algorithm pick: Mitch McConnell or Barack Obama? pic.twitter.com/bR1GRyCkia

— Tony "Abolish ICE" Arcieri 🦀🌹 (@bascule) September 19, 2020
https://twitter.com/sina_rawayama/status/1307506452786016257
https://twitter.com/rasmusmidjord/status/1307578929990033408

Holy shit, this thread is a horror show. Made me wonder who Twitter would pick as the star of Black Panther… https://t.co/f68y6nGd9N pic.twitter.com/sIgaK6c4Xm

— Jack Couvela (@JackCouvela) September 20, 2020
https://twitter.com/_jsimonovski/status/1307542747197239296

Alternate Explanation:

However, some users tried to find explanations other than algorithmic bias. Twitter user @IDoTheThinking suggests that the behaviour might have more to do with contrast in the photo than with skin colour itself. They propose that Black people mostly have dark hair, and that when a subject wears dark clothing against a dark background, the machine learning model overlooks the blending colours and focuses on the parts of the image with contrasting colours.

Twitter Algo Test pic.twitter.com/zKP4hh37MB

— Darrell Owens (@IDoTheThinking) September 19, 2020

They also test whether solid colours are treated the same way, and find that the results are indeed similar.

Ok final test: Using 4 solid contrast color vs 4 blending colors.

The twitter algo should show red, blue, yellow, green because they contrast most. pic.twitter.com/scwO85q6JE

— Darrell Owens (@IDoTheThinking) September 19, 2020
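
To make the contrast hypothesis concrete, the sketch below scores candidate crop windows by a crude contrast measure and keeps the highest-scoring one. It is a simplified illustration of the idea raised in these threads, not Twitter's actual cropping algorithm; the scoring function and the toy image are assumptions made for the example.

```python
import numpy as np

def contrast_score(window: np.ndarray) -> float:
    """Crude saliency proxy: the standard deviation of pixel intensities."""
    return float(window.std())

def pick_crop(image: np.ndarray, crop_h: int, crop_w: int, stride: int = 8):
    """Slide a crop window over a grayscale image (2-D array of 0-255 values)
    and return the top-left corner of the highest-contrast window."""
    best, best_score = None, -1.0
    h, w = image.shape
    for y in range(0, h - crop_h + 1, stride):
        for x in range(0, w - crop_w + 1, stride):
            score = contrast_score(image[y:y + crop_h, x:x + crop_w])
            if score > best_score:
                best, best_score = (y, x), score
    return best, best_score

# Toy frame: two "faces" on a dark background -- a low-contrast dark patch
# on the left and a high-contrast bright patch on the right.
img = np.full((100, 200), 40.0)   # dark background
img[30:70, 20:60] = 70.0          # dark subject, blends with the background
img[30:70, 140:180] = 220.0       # bright subject, contrasts sharply

print(pick_crop(img, 80, 80))     # picks a window on the right-hand side
```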

Responding to one of the threads, Dantley Davis, Twitter's Chief Design Officer, acknowledged that the problem was the company's fault and said the next step was to fix it.

It’s 100% our fault. No one should say otherwise. Now the next step is fixing it.

— Dantley Davis (@dantley) September 19, 2020

As old as tech:

The problem, however, is not new and has in fact been highlighted by people of colour, specifically Black people, in the past. In her 2017 TED talk, MIT graduate Joy Buolamwini shared that facial analysis software she was working with did not detect her face. It was then that she realised the people who developed the software had not trained it on a broad range of skin tones and facial features.

Although technology makes work, and the world, easier and quicker for everyone, it is imperative to acknowledge that it is developed and coded by humans, whose biases are prone to be replicated in what they produce. Algorithmic and machine learning bias comes from the people who write the code and train the models, so these systems are susceptible to the biases of their coders and developers.
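
As a rough illustration of how this plays out, the toy simulation below trains a simple classifier on data in which one group is heavily under-represented; the model ends up noticeably less accurate on that group even though the code never mentions the group explicitly. The data, groups, and thresholds are invented for the example and are not drawn from any of the systems discussed here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Two-feature toy data; each group follows a slightly different rule."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 2 * shift).astype(int)
    return X, y

# Group A dominates the training set; group B is barely represented.
Xa, ya = make_group(5000, shift=0.0)
Xb, yb = make_group(100,  shift=2.0)
X_train = np.vstack([Xa, Xb])
y_train = np.concatenate([ya, yb])

model = LogisticRegression().fit(X_train, y_train)

# Evaluate on fresh, equally sized samples from each group: accuracy on the
# under-represented group is markedly lower.
for name, shift in [("group A", 0.0), ("group B", 2.0)]:
    X_test, y_test = make_group(2000, shift)
    print(name, "accuracy:", round(model.score(X_test, y_test), 3))
```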

These are not isolated incidents and are certainly not the only ones that have been flagged in the past couple of years. These biases have been witnessed in a vast array of technology employing algorithms of various kinds. 

In 2017, Gizmodo reported on footage shared by a Facebook employee in Nigeria showing a Black man trying, and failing, to get soap from an automatic soap dispenser while his white colleague gets soap instantly. The Black man then holds a white napkin under the dispenser, and this time it dispenses soap for him as well.

https://twitter.com/nke_ise/status/897756900753891328

Similar failures have been detected in wearables such as fitness trackers and heart-rate monitors, in health care treatments, and in beauty pageants, among many other reported examples. Research also concludes that face detection algorithms not only prioritise light skin but also misgender Black faces. For instance, Buolamwini's Gender Shades research found that, of the faces Microsoft's gender classification software misgendered, 93.6 percent belonged to darker-skinned subjects.
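
Audits of this kind typically work by disaggregating a model's error rate across demographic groups instead of reporting a single overall number. The snippet below shows that basic bookkeeping with entirely made-up records; it follows the spirit of such audits rather than reproducing the Gender Shades methodology.

```python
from collections import defaultdict

# Hypothetical audit records: (group, true_label, predicted_label)
records = [
    ("lighter-skinned", "female", "female"),
    ("lighter-skinned", "male",   "male"),
    ("darker-skinned",  "female", "male"),    # misgendered
    ("darker-skinned",  "female", "female"),
    ("darker-skinned",  "male",   "female"),  # misgendered
    ("lighter-skinned", "male",   "male"),
]

totals, errors = defaultdict(int), defaultdict(int)
for group, truth, predicted in records:
    totals[group] += 1
    if predicted != truth:
        errors[group] += 1

# Report the error rate per group rather than one overall figure.
for group in totals:
    rate = errors[group] / totals[group]
    print(f"{group}: {errors[group]}/{totals[group]} misclassified ({rate:.0%})")
```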

Algorithmic bias often has serious implications, with technology unable to serve people who look a certain way, have certain facial features, or have more melanin in their skin. Many incidents, studies, and analyses point to the fact that this technology is inherently biased towards lighter skin, resulting in discrimination and unfair practices against those who are already marginalised. With racial justice and other civil liberty movements ongoing across the world, the increased use of face detection and machine learning technology is having a drastic impact on people protesting for their rights on the streets. Wrongful arrests and charges, economic loss, and social stigmatisation are just some of the consequences that those falling victim to this bias have to deal with.

It is imperative that, while analyses are conducted to highlight the shortfalls of this technology, algorithm and machine learning developers train AI systems in ways that keep their own biases from influencing the process, and subsequently the deployment of the technology. Transparency in how AI is trained, followed by inclusive testing of the resulting systems, could be a first step towards ensuring that the technology does not unfairly ostracise those who are already vulnerable.

Featured Image by Daryan Shamkhali on Unsplash

Tags: Algorithm Bias, Artificial Intelligence, Machine Learning, Twitter
