Recent discussions on racism have moved us away from ideas around tolerance and being “not racist” and towards an aim of being “anti-racist”. That’s why it’s absolutely crucial we now tackle the more insidious – and dangerous – racism inherent in the products and technology we use every day.
Last month I found myself reading about pulse oximeters. They are the little gadgets you can put on your finger to get a reading on your “sats” – your oxygen saturation. This is a good indicator of how unwell you are, especially when you have a respiratory illness. When I had my pulmonary embolism in 2018, I invested in one and still keep it around. Over the last couple of years they have become popular consumer medical devices, thanks to Covid and people wanting to keep an eye on how they are faring as they do battle with the virus.
“First, do no harm”: the baked-in bias within medical devices
However, pulse oximeters have come under fire recently because of the way they work – or don’t – with dark skin. This has become even more salient given that we know Covid is killing black and brown people at a higher rate than white people.
The way a pulse oximeter works is to blame for this misreading. It relies on two lights, one red and one infrared, to detect the colour of haemoglobin; in white skin this works well to determine whether the haemoglobin is a dusky purple-red (poorly oxygenated) or a bright crimson (healthily rich in oxygen). Pulse oximetry readings are currently used to help determine the oxygen therapy delivered to patients struggling with coronavirus.
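To make the mechanism – and where bias can creep in – concrete, here is a minimal sketch of the classic “ratio of ratios” calculation that devices of this kind perform. The linear calibration curve and the example numbers are textbook approximations used for illustration, not any manufacturer’s actual algorithm.

```python
def estimate_spo2(ac_red, dc_red, ac_ir, dc_ir):
    """Estimate oxygen saturation from the classic 'ratio of ratios'.

    ac_* are the pulsatile components of the red and infrared signals,
    dc_* the steady baselines. The linear mapping (110 - 25 * R) is a
    commonly cited textbook approximation of the empirical calibration
    curve, not any manufacturer's actual algorithm.
    """
    r = (ac_red / dc_red) / (ac_ir / dc_ir)
    return 110.0 - 25.0 * r

# The calibration constants are fitted against blood-gas readings taken
# from volunteer cohorts. If those cohorts are overwhelmingly light-skinned,
# the extra light absorption of melanin is never corrected for, and the
# bias ends up hard-coded in numbers like these.
print(estimate_spo2(ac_red=0.02, dc_red=1.0, ac_ir=0.04, dc_ir=1.0))  # 97.5
```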
In December 2020, a research letter, entitled “Racial Bias in Pulse Oximetry Measurement” was published in the New England Journal of Medicine.
This detailed the results of a large study assessing the accuracy of pulse oximetry on black skin. The results were damning:
“[I]n two large cohorts, Black patients had nearly three times the frequency of occult hypoxemia that was not detected by pulse oximetry as White patients. Given the widespread use of pulse oximetry for medical decision making, these findings have some major implications, especially during the current coronavirus disease 2019 (Covid-19) pandemic. Our results suggest that reliance on pulse oximetry to triage patients and adjust supplemental oxygen levels may place Black patients at increased risk for hypoxemia.”
“Racial Bias in Pulse Oximetry Measurement”, published in the New England Journal of Medicine.
After a “rapid review” into the evidence and concerns surrounding the use of pulse oximeters for people with black or brown skin, the NHS recently published revised guidance on the use of the medical devices.

The NHS advice for anyone with brown or black skin, on the “How to look after yourself at home if you have coronavirus (COVID-19)” webpage.



Like many of the products and technologies we use today, the pulse oximeter was developed using white skin as the default. Colour-sensing technology is infamous for perpetuating racial bias – indeed, even things such as sensor-based soap dispensers can frustrate those with black or brown skin, as this video set in Facebook’s HQ shows.
Colour-sensing is, quite literally, colour-blind
Colour-sensing technology has long been known to discriminate and distort reality when it comes to black skin. Photography was originally calibrated only for white skin, which gave peculiar results when photographs were taken of darker-skinned people. This was recalibrated in the 1970s, but artefacts and racial biases still remain within the technology, revealing a distinct lack of diversity in the design of products.
Kodak’s photo laboratories originally calibrated skin tones, light and shadows using special cards featuring white models. These were known as “Shirley” cards, after one of the original models, Shirley Page, and every photograph developed in a lab was calibrated to match Shirley. The card was tagged with the word “normal” and, since the people buying cameras and snapping their families and friends in the 1950s were overwhelmingly white, everything seemed fine.
It wasn’t until 1978 that controversy around the way Kodak film had been calibrated hit the headlines, when the director Jean-Luc Godard declared he would refuse to use the brand, calling it racist. He was filming a short movie in Mozambique and rejected what he saw as a faulty and prejudiced product.
Race “correction” in medical settings
However, it’s not just light and colour-sensing technology that leads to biased product design – if we go back to medicine, there are other devices with this baked-in prejudice. The spirometer – again related to respiratory function – is used in the diagnosis of conditions associated with lung function, such as asthma and chronic obstructive pulmonary disease (COPD). According to MedCity News: “The devices are programmed with an automatic “correction” for lung capacity based on a patient’s race, adjusting for as much as a 15% lower lung capacity for Black patients, and as much as 6% lower for Asian patients.”
Most doctors don’t realise that any adjustments are built into the machines, and researchers are concerned this will influence the outcome of rehabilitation for many Covid patients, affecting entry to trials or the medication they are prescribed.
Race “correction” is not based on any scientific differences between the lungs of people of different ethnicities. According to Lundy Braun, a Brown University professor who lectures on Africana studies, the calculations date back to the days of the plantations. Braun published an article in the Canadian Journal of Respiratory Therapy detailing her findings: the slaveholding doctor Samuel Cartwright was likely the first to try to quantify disparities in lung function between the races. The spirometer outputs results assuming “a 10–15% smaller lung capacity for Black patients and 4–6% smaller lung capacity for Asian patients compared with their White counterparts”.
These differences, however, were more likely to stem from structural inequalities than anything truly biologically dissimilar between races.
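To see how such a “correction” skews the numbers in practice, here is a minimal, hypothetical sketch. The baseline equation is a simplified stand-in for a real spirometry reference equation, and the 0.85 and 0.94 multipliers simply mirror the 15% and 6% adjustments described above.

```python
# Hypothetical illustration only: the 0.85 and 0.94 multipliers mirror the
# 15% and 6% "corrections" described above, and the baseline equation is a
# simplified stand-in for a real spirometry reference equation.
RACE_CORRECTION = {"white": 1.00, "black": 0.85, "asian": 0.94}

def predicted_fvc_litres(height_cm: float, age_years: float, race: str) -> float:
    baseline = 0.0576 * height_cm - 0.026 * age_years - 4.34  # stand-in formula
    return baseline * RACE_CORRECTION.get(race, 1.00)

def percent_predicted(measured_fvc: float, height_cm: float, age_years: float, race: str) -> float:
    """Spirometry results are reported as a percentage of 'predicted', so the
    same measured breath looks closer to normal when less is expected of it."""
    return 100.0 * measured_fvc / predicted_fvc_litres(height_cm, age_years, race)

# Same patient, same measured 3.5 litres: different verdicts depending on the race entered.
print(round(percent_predicted(3.5, 175, 40, "white")))  # ~74% of predicted
print(round(percent_predicted(3.5, 175, 40, "black")))  # ~88% of predicted, masking possible impairment
```

Because thresholds for referrals, trials and treatment are often set as a percentage of predicted, that difference alone can decide who qualifies for care.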
Medical literature and lexicon
New technology under development that uses blink rates to help diagnose Parkinson’s apparently struggles with the eyes of Asian people.
This links back to photography again: Nikon cameras struggle with their ‘blink recognition’ setting, repeatedly asking photographers of Asian subjects, “Did someone blink?”
Medical literature and lexicon have also come under fire when it comes to talking about skin and its related afflictions. To “pink up” is a common phrase in assessing the health of newborn babies, and the idea of skin going “blue” when there is a problem with oxygenation leads us right back to the pulse oximeter issue. When it comes to skin rashes and other concerning health conditions, redness isn’t always the right indicator, and jaundice is often missed in darker-skinned babies. Many medical textbooks omit black skin from their illustrations, which leaves healthcare professionals ill-equipped to diagnose issues in patients who aren’t white.
As more and more medicine comes to rely on AI and technology, researchers are finding ever more issues that lead to discrimination. Optum, which calls itself a “health services innovation” company and says on its website that “health care should be equally available to everyone, when and how they want it”, fell foul of its own supposedly “race-blind” algorithm, which underestimated the costs associated with the most unwell black patients. In a paper in the journal Science, the authors say that “Black patients assigned the same level of risk by the algorithm are sicker than White patients” and that
“this racial bias reduces the number of Black patients identified for extra care by more than half. Bias occurs because the algorithm uses health costs as a proxy for health needs. Less money is spent on Black patients who have the same level of need, and the algorithm thus falsely concludes that Black patients are healthier than equally sick White patients. Reformulating the algorithm so that it no longer uses costs as a proxy for needs eliminates the racial bias in predicting who needs extra care.”
“Dissecting racial bias in an algorithm used to manage the health of populations” https://science.sciencemag.org/content/366/6464/447
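The mechanism the authors describe is easy to reproduce with a toy simulation. Everything below is synthetic and assumed purely for illustration: a made-up “true need” score, plus the assumption that the same need translates into lower recorded spending for Black patients because of unequal access, which is the pattern the paper reports.

```python
import random

random.seed(0)

# Entirely synthetic data: "need" is the ground truth we actually care about,
# "cost" is the proxy the algorithm was trained to predict. The 0.7 factor
# encodes the paper's finding that less is spent on Black patients with the
# same level of need.
patients = []
for _ in range(10_000):
    race = random.choice(["Black", "White"])
    need = random.uniform(0, 1)
    spend_factor = 0.7 if race == "Black" else 1.0
    cost = need * spend_factor + random.gauss(0, 0.05)
    patients.append((race, need, cost))

# Flag the top 1,000 patients for extra care, once by the cost proxy and
# once by true need, and compare who gets selected.
top_by_cost = sorted(patients, key=lambda p: p[2], reverse=True)[:1000]
top_by_need = sorted(patients, key=lambda p: p[1], reverse=True)[:1000]

def black_share(group):
    return sum(p[0] == "Black" for p in group) / len(group)

print(f"Black share of patients flagged by cost (the proxy): {black_share(top_by_cost):.0%}")
print(f"Black share of patients flagged by true need:        {black_share(top_by_need):.0%}")
```

Ranking by the cost proxy flags far fewer Black patients than ranking by true need, even though the selection rule itself never looks at race.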
Artificial intelligence that lacks smarts
We also have voice recognition algorithms that struggle with anyone who has a detectable accent, motion recognition systems that can’t find black people, and facial recognition technology that can’t reliably distinguish between people of the same race, aiding racial profiling and perpetuating discrimination within law enforcement. When technology is designed by, and trained on, primarily white men, it will only be effective for that cohort of the population. The National Institute of Standards and Technology has reported that Native Americans are the hardest for facial recognition to identify, with the “highest false-positive rate” of identification, whereas “Asian and African American people were up to 100 times more likely to be misidentified than white men”.
But it’s not just race that AI fails to account for: older people and women were also misidentified. Once again, “the system” favours young, white men: coincidentally, the very people who design and programme these systems.
One such case that hit the headlines saw Robert Julian-Borchak Williams spend 30 hours in a Michigan jail cell because of a false identification. Another wholly shocking revelation came in 2015, when a young black man opened the photography app on his Google phone only to find pictures of himself and a friend labelled as “gorillas”. Google said it had fixed the problem, but in 2018 it was revealed that it had done so simply by removing gorillas from the image-labelling algorithm. Wired magazine tested the system by uploading images of various primates and found that gorillas and chimpanzees were not recognised, whereas baboons, gibbons and marmosets were still easily categorised.
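For context on what that “fix” amounted to in engineering terms, the stop-gap is essentially a post-processing filter rather than a better model. The function and label list below are illustrative guesses, not Google’s actual code:

```python
# Illustrative stop-gap: suppress the offending labels in post-processing
# rather than fixing the underlying model. The names here are guesses for
# illustration, not Google's actual code or label list.
BLOCKED_LABELS = {"gorilla", "chimpanzee", "chimp", "monkey"}

def visible_labels(predictions):
    """predictions: list of (label, confidence) pairs from an image classifier."""
    return [(label, score) for label, score in predictions
            if label.lower() not in BLOCKED_LABELS]

print(visible_labels([("gorilla", 0.92), ("baboon", 0.88), ("person", 0.40)]))
# -> [('baboon', 0.88), ('person', 0.4)]
```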
Tackling oppressive algorithms
Yet AI issues are not just a social and cultural concern – the dearth of diversity in conceiving, designing and training artificial intelligence could become a matter of life and death when it comes to training things like the cars of the future. The narrow perspective of a young, white male who has never been confronted with his own privilege could eventually mean that a technology-aided vehicle fails to identify a black-skinned individual as something to avoid. It’s simply unknown how many products, technologies and devices have this baked-in prejudice, favouring the white population or actively discriminating against black and brown individuals.
The only way to solve this is to concentrate on programmes that diversify product research and development teams, and to properly source diverse product-testing cohorts that represent the population, so that products work properly for everyone, regardless of skin colour. It’s impossible to know whether there is any tendency towards exclusion unless you bake inclusion IN right from the inception. We need to strive for diverse workplaces and diverse customer bases to get us closer to ensuring products serve us all equally.