Are Algorithms Impartial? Keeping Big Data Fair for Everyone

Submitted by Bernhardt Wealth Management on October 21st, 2019
For most of us these days, hardly an hour goes by in which we don't interact in some way with Google, Facebook, or one of the other “Big Data” companies. Their search engines place a world of information at our fingertips, even if we must exercise careful judgment about its accuracy. It would be just about impossible for any of us to get through our usual daily tasks without the assistance of the internet and the algorithms that drive it.
We also know that every time we use one of these services, whether to read, “like,” or share posts in our Facebook feed, to search for the best price on a new gadget, or to review the latest news articles, we leave digital footprints that those same algorithms use to build a profile of us for other purposes. Haven’t we all noticed, right after searching for a product online, that our Facebook feed is suddenly populated with ads for similar items? How many times have we “liked” a news story, only to be offered similar stories from the same source soon afterward?
Data about our preferences—for shopping, politics, and many other categories—is extremely valuable to the companies and other organizations that want our business, our support, or our interest. Big Data knows this and has spent billions of dollars developing and closely guarding the algorithms that do the best job of mining, securing, and analyzing the trillions of bits of data that internet users around the world generate daily.
But are these algorithms fair? We may think of them as dispassionate mathematical functions that disregard everything but the raw information. However, evidence is mounting that some algorithms, and the increasingly complex artificial intelligence (AI) systems that drive them, reflect biases built into the system by history, economics, and even the programmers themselves, both human and AI.
For example, in 2015, Facebook was embarrassed to learn that the AI-managed algorithms governing new accounts had “decided” that some Native Americans could not create Facebook profiles, because it determined that the individuals’ names—like “Lone Hill” and “Brown Eyes”—were fake. In some cases, it even required these persons to change their profile names to something “normal.” Similarly, Amazon had to scrap an AI-driven hiring tool after learning that the algorithm had “taught” itself to screen out women for top positions. Algorithms used by banks have been found to include bias against persons of color, sometimes recommending higher interest rates on loans than those offered to Caucasian borrowers.
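To see how a system can "teach" itself bias without anyone intending it, consider a minimal sketch. This is not Amazon's actual tool, whose details were never published; it is an invented toy example in Python that scores resumes by the historical hire rate of their keywords. Because the (fabricated) training history contains few hires whose resumes mention a women's organization, the learned score penalizes that keyword even though it says nothing about job performance:

```python
# Hypothetical illustration of learned bias: all data below is invented.
# A naive screener scores a resume by the average historical hire rate
# of its keywords, so any keyword correlated with past rejections
# becomes a penalty, regardless of relevance to the job.
from collections import defaultdict

# Invented historical records: (resume keywords, was_hired)
history = [
    ({"python", "leadership"}, True),
    ({"python", "womens_club"}, False),
    ({"java", "leadership"}, True),
    ({"java", "womens_club"}, False),
    ({"python", "chess"}, True),
    ({"chess", "womens_club"}, False),
]

# "Train" by tallying, per keyword, how often its resumes led to a hire
totals, hires = defaultdict(int), defaultdict(int)
for keywords, hired in history:
    for kw in keywords:
        totals[kw] += 1
        hires[kw] += hired

def score(keywords):
    """Average historical hire rate of the resume's known keywords."""
    rates = [hires[kw] / totals[kw] for kw in keywords if totals[kw]]
    return sum(rates) / len(rates) if rates else 0.0

# Two resumes identical except for one job-irrelevant keyword:
print(score({"python", "leadership"}))   # scores well
print(score({"python", "womens_club"}))  # penalized by the biased history
```

The point of the sketch is that no line of this code mentions gender; the discrimination enters entirely through the historical data the system learns from, which is why auditing training data matters as much as auditing the code itself.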
This “programmed-in bias” is the topic of a recent book by Safiya Noble, Algorithms of Oppression: How Search Engines Reinforce Racism. It is also the topic of pending legislation, as Congress considers the Algorithmic Accountability Act of 2019, designed to ensure that the tech companies developing algorithms test them for bias. Similar measures are under consideration in the UK and the European Union. While governmental regulation may have a less-than-stellar record of actually preventing injustice, it is important that in our headlong rush into the technological future, we don’t import the same biases and prejudices that have been so harmful to society in the past.