Facebook made headlines again last week for allowing advertisers to target its users by “ethnic affinity” — a data-driven guess about their racial and cultural interests.
Facebook doesn’t ask its users about race, but the company does assess their “likes” and other behaviors. “Let’s say you are a fan of BET and have shown an interest in #BlackLivesMatter — well, then, you might be categorized as part of an African-American ethnic affinity,” explained Annalee Newitz on Ars Technica.
These categories didn’t receive much critical attention until March, when Universal Pictures revealed that it used them to show different trailers for the movie Straight Outta Compton to different ethnic groups.
Then, last week, ProPublica showed that a racist advertiser could, in principle, use those same categories to prevent specified ethnic affinity groups from seeing an ad related to housing. (The Fair Housing Act prohibits housing advertisers — including those advertising on Facebook — from discriminating based on race and other protected classes.)
Advocates and lawmakers were quick to respond. The ACLU called on Facebook to do more to protect civil rights, and the Congressional Black Caucus sent a letter asking the company to “remedy this matter swiftly and responsibly.”
Facebook can and should do more to protect its users from discrimination — especially in civil rights areas like housing, credit, and employment. And the company has some effective and feasible ways available to do just that. (I’ll get to these in a moment.)
First, it’s important to understand the broader context for this discussion:
It’s not just Facebook. Racial ad targeting has been a fixture of Internet marketing for a long time. Just last month, the New York Times profiled a company that claims to “identify [for marketers] 158 distinct ethnicities, with further segmentation for Hispanics and African-Americans.” And, in 2014, the FTC spotlighted racial marketing products as an area of special concern in its report on the data broker industry.
“Ethnic affinity” may be the targeting strategy that most obviously implicates race, but it’s not the only option. Advertisers have ways to target based on race even in the absence of clearly labeled segments. Professional marketers regularly rely on zip codes, census data, language, and other features to try to reach particular racial groups. And many companies, including Facebook, provide marketers with so-called “lookalike audiences” that help them reach users who are “similar” to the consumers they already know.
In the right hands and for the right reasons, race-related targeting can be a useful and inclusive tool. For example, health officials, political groups, community organizers, and merchants of specialized consumer products all have defensible reasons for wanting to reach particular racial groups.
None of this is reason for inaction. On the contrary, there is a real problem here: Facebook’s platform is currently open to being used in ways that are harmful to civil rights and difficult for outside observers to document. The time is ripe for intervention.
Some initial thoughts on what Facebook could do:
Facebook could sharpen its rules. The company already requires advertisers to comply with the law, and prohibits targeting that “discriminate[s] against” its users. But there’s room for greater specificity, particularly about advertisers’ obligations under civil rights laws that apply to housing, employment, and credit.
Facebook could start to identify ads in protected areas and create opportunities for enforcement. Facebook sells a mind-boggling number of ads each day, so widespread human review isn’t feasible. However, there are scalable strategies that deserve deeper exploration. For example, Facebook could scan for keywords and other cues to automatically classify ads, or require advertisers to self-certify when they are advertising in a protected area (like housing, credit, or employment). Once an ad is classified, Facebook could alert the advertiser to their legal obligations and limit the types of targeting permitted.
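To make the keyword-scanning idea concrete, here is a minimal sketch of how automatic classification of ads into protected areas might work. The categories, keyword lists, and function names are purely hypothetical illustrations, not Facebook's actual systems; a production classifier would need far richer signals than keyword matching.

```python
import re

# Hypothetical keyword cues for each protected area. A real system would
# use many more signals (landing pages, advertiser category, ML models).
PROTECTED_AREA_KEYWORDS = {
    "housing": ["apartment", "rent", "mortgage", "lease", "home for sale"],
    "credit": ["loan", "credit card", "financing", "refinance"],
    "employment": ["hiring", "job opening", "apply now", "careers"],
}

def classify_ad(ad_text):
    """Return the set of protected areas whose keywords appear in the ad."""
    text = ad_text.lower()
    matched = set()
    for area, keywords in PROTECTED_AREA_KEYWORDS.items():
        # Whole-word match so "rent" doesn't fire on "current", etc.
        if any(re.search(r"\b" + re.escape(kw) + r"\b", text) for kw in keywords):
            matched.add(area)
    return matched

print(classify_ad("Spacious apartment for rent in downtown Oakland"))
# → {'housing'}
```

Once an ad is flagged this way (or self-certified by the advertiser), the platform could attach the relevant legal notice and disable racial targeting options for that ad.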
Facebook could improve user transparency and participation. Already, Facebook offers a “Why am I seeing this?” dialog alongside ads. Sometimes these explanations are helpful, but often, they’re hard to make sense of. Facebook could commit to showing users whenever ethnic affinity is used by an advertiser and whether the ad was self-certified by the advertiser as being in a protected area. Users would then be in a better position to help flag ads that appear to violate Facebook’s policies.
Facebook says it’s listening. “We’ve heard from groups and policy makers who are concerned about some of the ways our targeting tools could be used by advertisers. We are listening and working to better understand these concerns,” said the company in a statement.
There is no silver bullet here, especially given that “ethnic affinity” is far from the only way to target based on race. More constructive discussion is needed. But Facebook should seize this opportunity to lead the industry in a more inclusive direction.
Edit (2016-11-11): Facebook announced that it will “disable the use of ethnic affinity marketing for ads that we identify as offering housing, employment, or credit,” and will further clarify its policies. This is a really strong start.