The AI market is expected to grow to well over $100 billion by 2025. We’re just a stone’s throw away from a voice-activated, facially recognized, algorithm-driven life. But, for a rapidly growing segment of the population, AI can be more triggering than innovative. Much of the data being used to train machine learning algorithms, which power the AI movement, doesn’t take ethnicity or race into consideration. To a layperson or someone disconnected from many of the day-to-day plights of people of color, this may seem inconsequential, or even race-baiting. After all, algorithms don’t need to understand a user’s ethnicity to make accurate recommendations and assumptions. That’s the beauty of technology, right? However, the more intertwined our lives become with AI, the more biases can bloom, some with life-or-death consequences. Before AI exacerbates inequities throughout society, we must include and protect minority data today.

WIRED OPINION

Angela Benton (@ABenton) is the founder and CEO of Streamlytics, which uses data science to measure what people are watching and listening to across streaming platforms. She previously founded NewME, the first accelerator for minority founders.

Errors from incomplete AI training data already affect people of color. For one, facial recognition software has a history of misidentifying black citizens. (Disclosure: I am an investor in a facial recognition company that has championed not selling its data to authorities.) Last year the ACLU ran a test with Amazon’s Rekognition software, in which congressional headshots were matched against a database of mugshots. Forty percent of those misidentified were people of color, even though they comprise only 20 percent of Congress. Rekognition remains in use within some police departments. Amazon has also partnered with 400 police forces across the country, which will use Amazon’s camera-doorbell product, Ring (whose facial recognition software is still in development), to form a newfangled type of “neighborhood watch.” Also within the American criminal justice system, as a 2016 ProPublica investigation discovered, software used to identify future violent criminal threats ran on an algorithm that was correct only 20 percent of the time. Black defendants in particular were pegged as being at a 77 percent higher risk of committing future crimes than reality proved.

Healthcare, which increasingly uses algorithms to determine diagnoses and treatments, is also problematic. Nearly 40 percent of Americans identify as being non-white, but 80–90 percent of participants in most clinical trials are white. This can be a huge issue for illnesses that disproportionately plague minority communities, like diabetes, heart disease, and respiratory disease. In 2015 only 1.9 percent of respiratory disease studies included any minorities. While I was going through breast cancer treatments, many of the procedures and therapies my doctors recommended were derived from studies composed predominantly of white female patients. It was also extremely hard for me to get a referral for a mammogram when I was diagnosed at 34. Even though black women are typically diagnosed with breast cancer younger than white women are, the recommended age to even get a mammogram is 40, again from data that disproportionately included white women. Dr. Joy Buolamwini, an MIT computer scientist and advocate for ethical and inclusive technology, says this “coded gaze” is a “reflection of the priorities, the preferences, and also sometimes the prejudices of those who have the power to shape technology.”

Meanwhile, a black and brown diaspora of data is quickly multiplying. In the U.S., people of color are projected to outnumber non-Hispanic white citizens by 2045. Around 50 percent of the world’s population growth between now and 2050 is expected to come from Africa. According to the Pew Research Center, a greater proportion of black and Hispanic adults use Instagram, Twitter, WhatsApp, Snapchat, and YouTube than white adults. Facebook owns three of the six social media platforms most used by people of color. That’s an incredible amount of power. Pair that with an estimated 37 percent annual growth rate of AI penetration into business, and racial bias will become an even more daunting challenge at the hands of our machines.
