Algorithms

The Algorithmic Accountability Act Is Designed to Remove Bias From Big Data

How does your Netflix account always seem to know just what cheesy rom-com or gory slasher flick you’re in the mood for? How is an iPhone X able to recognize your face as easily as a person does? How can Google Maps calculate routes that circumvent traffic jams and get you where you need to be in as little time as possible? These and many other technological innovations are made possible by algorithms.

In the world of computer science, algorithms can be defined as lists of instructions that tell computers what to do. In this increasingly digital age, algorithms are a part of almost everything we do. Amazon uses algorithms to suggest items that you might want to browse. Online dating sites like eHarmony and OkCupid use algorithms to match up potential couples. Financial analysts and traders train algorithms to predict and react to fluctuations in the stock market at speeds no human being could ever match. Modern society has grown reliant on algorithms for many of the tasks we take for granted, and that reliance shows no signs of slowing. Although these increasingly complex algorithms present new and exciting opportunities to harness the power of computers to better modern life, they still have flaws.
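
To make "a list of instructions" concrete, here is a deliberately tiny sketch of the kind of recommendation logic described above. It is not how Netflix or Amazon actually works; the viewers, titles, and overlap-counting rule are invented purely to show that, underneath, an algorithm is just explicit steps a computer follows.

```python
# A toy "what should you watch next?" algorithm -- a bare-bones stand-in for the
# recommendation systems mentioned above. The viewers and titles are invented.
from collections import Counter

# Each person's (made-up) watch history.
watch_history = {
    "you":   {"The Notebook", "Halloween"},
    "sam":   {"The Notebook", "Love Actually", "Scream"},
    "priya": {"Halloween", "Scream", "It Follows"},
    "lee":   {"Love Actually", "Clueless"},
}

def recommend(user, history, top_n=3):
    """Suggest titles watched by people whose tastes overlap with `user`'s."""
    seen = history[user]
    votes = Counter()
    for other, titles in history.items():
        if other == user:
            continue
        overlap = len(seen & titles)       # how many titles we have in common
        for title in titles - seen:        # only count titles the user hasn't seen
            votes[title] += overlap
    return [title for title, count in votes.most_common(top_n) if count > 0]

print(recommend("you", watch_history))
# ['Scream', 'Love Actually', 'It Follows'] -- "Scream" ranks first because two
# viewers with overlapping tastes watched it. Just a series of explicit steps.
```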

The Algorithmic Accountability Act, sponsored by Senator Cory Booker of New Jersey (a 2020 presidential candidate) and Senator Ron Wyden of Oregon, seeks to address some of these flaws. Algorithms are essentially just lines and lines of code, and as such, they cannot think for themselves. Even so, they often end up reflecting the unconscious biases of the people who build them and of the data used to train them. For example, Rekognition, a facial recognition system developed by Amazon, was recently accused of racial bias. In a study conducted at MIT, Rekognition reliably identified the gender of lighter-skinned individuals but misidentified the gender of darker-skinned individuals 20 to 30 percent of the time. This isn’t because the algorithm itself harbors prejudice against people of color, but most likely because the images it was trained on skewed heavily toward lighter-skinned faces.

That kind of oversight can have dangerous implications in the real world. A recent Georgia Tech study concluded that the object-detection algorithms used in self-driving vehicles were consistently less accurate at identifying darker-skinned people as pedestrians. In practice, that would make darker-skinned people more likely to be hit by self-driving cars than lighter-skinned people. Driverless cars aren’t the only technology under scrutiny: Amazon, Facebook, and Google have all faced accusations of algorithmic bias in the past five years.
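
To see how a skewed training set produces that kind of gap, consider a deliberately tiny simulation. It is not Rekognition or any real product: the two groups, the single numeric "feature," and the threshold model below are all made up. But the pattern they produce, near-perfect accuracy for the well-represented group and a much higher error rate for the underrepresented one, is the same pattern the MIT study describes.

```python
# A toy simulation (not any real system) of how imbalanced training data
# produces unequal error rates. All numbers below are invented for illustration.
import random

random.seed(42)

def make_examples(n, group):
    """Generate (feature, label, group) triples for a made-up binary task.

    Group "A" examples cluster around 0.0 (label 0) and 1.0 (label 1);
    group "B" examples are shifted, clustering around 0.6 and 1.6.
    """
    shift = 0.0 if group == "A" else 0.6
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        feature = label + shift + random.gauss(0, 0.2)
        data.append((feature, label, group))
    return data

def error_rate(threshold, data):
    """Fraction of examples the rule 'predict 1 if feature > threshold' gets wrong."""
    return sum((f > threshold) != bool(y) for f, y, _ in data) / len(data)

# Training data skewed 90/10 toward group A -- the kind of oversight described above.
train = make_examples(900, "A") + make_examples(100, "B")

# "Train" the simplest possible model: pick the threshold with the lowest
# overall training error. The majority group dominates that choice.
candidates = [i / 100 for i in range(-50, 250)]
threshold = min(candidates, key=lambda t: error_rate(t, train))

# Evaluate on fresh, equally sized test sets for each group.
for group in ("A", "B"):
    test = make_examples(1000, group)
    print(f"group {group}: error rate {error_rate(threshold, test):.1%}")
# Typical result: group A's error stays in the low single digits, while
# group B's lands far higher -- same model, very different accuracy.
```

The remedy in this toy case is the same one the real-world examples point to: train and test the system on data that actually represents everyone it will be used on.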

The Algorithmic Accountability Act would be a huge stride forward for ethics in tech. Under the act, large tech companies and data brokers would be held to a higher standard: they would have to evaluate any algorithm involving behavior prediction, sensitive data, or the monitoring of publicly accessible spaces for discrimination and potential privacy breaches. If those evaluations raised any concerns, the companies would need to address them in a timely manner. This way, we can reap all the technological benefits that algorithms have to offer without suffering the discriminatory, hurtful, and even physically harmful consequences of race-, gender-, or class-based algorithmic bias.
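
The bill leaves the details of those evaluations to regulators, so the sketch below is only one hypothetical way such a check might look in code: compare a model's error rate across groups and flag any group that fares much worse than the best-served one. The groups, predictions, and 1.25 flagging threshold are assumptions made for this example, not anything the act specifies.

```python
# A hypothetical bias check of the sort an algorithmic evaluation might include.
# Nothing here comes from the bill's text; the metric and threshold are assumptions.

def audit_error_rates(records, max_ratio=1.25):
    """Compare each group's error rate and flag groups that fare much worse.

    `records` is a list of (group, predicted_label, true_label) tuples.
    """
    errors, totals = {}, {}
    for group, predicted, actual in records:
        totals[group] = totals.get(group, 0) + 1
        errors[group] = errors.get(group, 0) + (predicted != actual)

    rates = {g: errors[g] / totals[g] for g in totals}
    best = min(rates.values())
    flagged = [g for g, r in rates.items() if r > max_ratio * best]
    return rates, flagged

# Invented evaluation data: (group, model's prediction, ground truth).
results = [
    ("lighter-skinned", "woman", "woman"), ("lighter-skinned", "man", "man"),
    ("lighter-skinned", "woman", "woman"), ("lighter-skinned", "man", "man"),
    ("darker-skinned",  "man",   "woman"), ("darker-skinned",  "man", "man"),
    ("darker-skinned",  "woman", "woman"), ("darker-skinned",  "man", "woman"),
]

rates, flagged = audit_error_rates(results)
print(rates)    # {'lighter-skinned': 0.0, 'darker-skinned': 0.5}
print(flagged)  # ['darker-skinned'] -- a gap the company would have to investigate
```

Even a check this simple makes the bill's central idea concrete: companies would have to look for these gaps and fix them, rather than wait for outside researchers to find them.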