Racial Biases in Algorithms: A Shallow Dive for the Junior Software Dev

Lucius VanSlyke
4 min read · Apr 19, 2021
A dark-skinned person stares directly at the screen with facial recognition mapping points symmetrically placed around their face.

As a new software developer in a rapidly changing tech world, I tend to be both burdened and excited by the thought of what my impact on technology will be. A major point of interest for me recently has been how Computer Science impacts societal choices, and what it means to be a part of the community shaping that relationship. Over the years, many major industries have replaced human decision-makers with algorithmic thinking for a variety of reasons. Not only does it streamline company workflow, but workers like healthcare agents, hiring managers, and judges are able to shift their responsibilities onto algorithms that promise to be more efficient, more accurate, and fairer in their decision-making. Of course, this isn't always the case.

Many badly designed algorithms reflect the prejudices and biases of both the programmer and the data the program is given to learn from before it starts making decisions. In my own learning process, I've found that algorithms are some of the most powerful, yet dangerous, tools in an engineer's toolbox. Hopefully this post informs you about some of the ways computer systems can take on the judgements of their creators, and gives you a firmer sense of awareness to take with you when developing tech.

What even is an algorithm?

Simply put, an algorithm is a set of rules or instructions used to solve a problem or perform a task. Many algorithms are created with the promise that they will make life easier by answering questions and making predictions. These processes then use statistical patterns to make decisions that impact everyday life.


Algorithms can be used for things like:

  • Determining Housing & Credit Eligibility
  • Predicting Crime Risk/Identifying Offenders
  • Sorting Resumes for Employers
  • Determining Best Hospital Treatment Plans and Access to Insurance
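
To make this concrete, here is a minimal, hypothetical sketch of a rule-based algorithm in the spirit of the resume-sorting example above. The field names and cutoffs are invented purely for illustration and aren't taken from any real hiring system:

```python
# A hypothetical, rule-based resume screener: the fields and cutoffs are
# invented for illustration only. Notice that the "rules" already encode
# the designer's judgement about what counts as a qualified candidate.
def screen_resume(candidate: dict) -> bool:
    has_degree = candidate.get("has_degree", False)
    years_experience = candidate.get("years_experience", 0)
    employment_gap_years = candidate.get("employment_gap_years", 0)

    # Each rule looks neutral, but penalizing employment gaps, for example,
    # can disproportionately filter out caregivers or formerly ill applicants.
    return has_degree and years_experience >= 3 and employment_gap_years < 1

candidates = [
    {"name": "A", "has_degree": True, "years_experience": 5, "employment_gap_years": 0},
    {"name": "B", "has_degree": True, "years_experience": 7, "employment_gap_years": 2},
]
print([c["name"] for c in candidates if screen_resume(c)])  # ['A']
```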

How do algorithms work?

Algorithms work in a wide variety of ways, though decision-making algorithms generally work by taking the characteristics of an individual (for example, the age and income of a loan applicant) and reporting back a prediction of that person's outcome according to a set of rules (in our case, the likelihood that the person will default on the loan). That prediction is then used to make a decision (whether to approve or deny the loan). Algorithms often discover their decision-making process by first analyzing patterns in training data to determine the relationships between variables. The patterns found in that data become the overarching rulebook for the way the algorithm operates. If the data is biased or shows patterns of discrimination, the algorithm will pick up on that in its decisions.
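
As a rough sketch of that flow, here is a tiny loan-risk model. The feature values and labels below are made up, and scikit-learn's logistic regression is just one convenient stand-in for whatever model a lender might actually use:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: each row is (age, annual_income_in_thousands),
# each label is 1 if that past applicant defaulted, 0 otherwise.
# The values are invented purely for illustration.
X_train = np.array([
    [22, 25], [25, 30], [30, 45], [35, 60],
    [40, 80], [45, 95], [50, 120], [55, 150],
])
y_train = np.array([1, 1, 1, 0, 0, 0, 0, 0])

# "Learning" step: find the statistical pattern linking features to defaults.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Prediction step: estimate how likely a new applicant is to default,
# then turn that probability into an approve/deny decision.
applicant = np.array([[28, 35]])
default_probability = model.predict_proba(applicant)[0, 1]
decision = "deny" if default_probability > 0.5 else "approve"
print(f"Estimated default risk: {default_probability:.2f} -> {decision}")
```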

As an example, a bank's past lending data could show that it routinely and unfairly gave higher interest rates to residents of predominantly Black neighborhoods. A banking algorithm trained on that biased data could pick up on that pattern of discrimination and learn to charge residents of those zip codes more for their loans, even though it never sees the race of the applicant.
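
Here is a hedged sketch of that failure mode. The training data below never mentions race, only a zip-code group and a credit score, yet because the historical rates carried a neighborhood penalty, the model quotes different rates to two applicants with identical credit scores. All numbers are fabricated for illustration:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Fabricated historical lending data. zip_group 1 stands in for
# neighborhoods that were historically charged more; race itself
# never appears as a feature anywhere in the dataset.
rng = np.random.default_rng(0)
n = 1000
zip_group = rng.integers(0, 2, size=n)        # 0 or 1
credit_score = rng.normal(680, 50, size=n)

# Historical rates: driven by credit score, plus an unfair +1.5 point
# penalty applied to zip_group 1 regardless of creditworthiness.
past_rate = 12.0 - 0.01 * (credit_score - 680) + 1.5 * zip_group

X = np.column_stack([credit_score, zip_group])
model = LinearRegression().fit(X, past_rate)

# Two applicants with identical credit scores, different neighborhoods:
same_score = 700
quotes = model.predict(np.array([[same_score, 0], [same_score, 1]]))
print(f"Quoted rate, zip_group 0: {quotes[0]:.2f}%")
print(f"Quoted rate, zip_group 1: {quotes[1]:.2f}%")  # ~1.5 points higher
```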

What is Algorithmic Bias? Why does it matter?

Algorithmic bias occurs when an algorithmic decision creates unfair outcomes that unjustifiably and arbitrarily privilege certain groups over others. It's a scary thought that algorithms act as gatekeepers to economic opportunity, especially when we as developers are the potential architects of that process. Societal institutions use algorithms to determine things like job eligibility, education, government resources, healthcare, and more. Addressing these biases is critical to closing the variety of socioeconomic gaps that exist.

Real Example — Predictive Policing & PredPol

Consider PredPol, a predictive policing algorithm intended to forecast the locations of future crimes, introduced to the Bay Area sometime between 2012 and 2016. When researchers applied the algorithm to Oakland, they found that it targeted Black neighborhoods at twice the rate of White ones. This is because the algorithm relied on Oakland's racially biased arrest data to make its predictions.

Researchers found that Black neighborhoods in Oakland had 200 times more drug arrests than others, even though drug use rates were generally the same across racial lines. A policing algorithm trained only on arrest data would then infer that Black neighborhoods use more drugs, even though the arrest data says more about the over-policing of Black communities than about actual rates of criminality and drug use.
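
A toy simulation (with entirely made-up numbers) shows how this happens: two neighborhoods with the same underlying drug-use rate produce very different arrest counts once policing intensity differs, and any model trained only on those counts inherits the skew:

```python
import numpy as np

rng = np.random.default_rng(42)

# Fabricated toy model of two neighborhoods with the SAME underlying
# drug-use rate but very different policing intensity.
population = 10_000
true_use_rate = 0.10                          # identical in both neighborhoods
policing_intensity = {"A": 0.02, "B": 0.20}   # fraction of users who get arrested

arrests = {}
for hood, intensity in policing_intensity.items():
    users = rng.binomial(population, true_use_rate)
    arrests[hood] = rng.binomial(users, intensity)

print(arrests)
# An algorithm trained only on arrest counts "learns" that neighborhood B
# has roughly 10x the drug activity of A, even though actual use is equal.
ratio = arrests["B"] / max(arrests["A"], 1)
print(f"Apparent (but misleading) activity ratio B/A: {ratio:.1f}")
```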

When the researchers compared PredPol's targeting data to police data, they found that the neighborhoods PredPol identified as crime hotspots were the same neighborhoods already disproportionately targeted by the Oakland police for drug arrests. This shows how algorithms can reinforce existing patterns of over-policing, and it underscores the need to work collaboratively on separating data from bias and to be conscious about the data we choose to feed into our programs.

A data chart showcasing the disproportionate amount of PredPol targets based on race compared to the amount of estimated drug users based on race.
Graphic via The Greenlining Institute

As members of the greater tech community, it's well within our power, and our better judgement, to develop the ability to analyze the potential biases that could exist in our applications. The easiest way to ensure that the future of technology is diverse and free of socioeconomic disparities is to first ensure that the people creating the tech want that themselves!
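
One concrete habit is to audit a model's decisions before shipping by comparing outcome rates across groups. The sketch below uses the informal "four-fifths rule" as a red-flag threshold; it's a rule of thumb rather than a complete or legal definition of fairness, and the group labels and decisions are fabricated:

```python
from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group_label, approved_bool) pairs."""
    counts = defaultdict(lambda: [0, 0])   # group -> [approved, total]
    for group, approved in decisions:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: approved / total for g, (approved, total) in counts.items()}

def disparate_impact_check(decisions, threshold=0.8):
    """Flag any group whose approval rate falls below `threshold` times the
    highest group's rate (the informal "four-fifths rule")."""
    rates = approval_rates(decisions)
    best = max(rates.values())
    return {g: (rate, rate / best >= threshold) for g, rate in rates.items()}

# Fabricated decisions from some model under review:
decisions = ([("group_1", True)] * 80 + [("group_1", False)] * 20
             + [("group_2", True)] * 50 + [("group_2", False)] * 50)
print(disparate_impact_check(decisions))
# group_2's 50% rate is well under 80% of group_1's 80% rate -> flagged
```

Checks like this won't catch every bias, but building them into your workflow is a small, practical step toward the kind of awareness this post is about.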



