We submitted comments to the Federal Trade Commission's Advance Notice of Proposed Rulemaking on Commercial Surveillance and Data Security published on August 22, 2022. In the following comments, we urged the FTC to use rulemaking to address commercial practices that cause discrimination. It can do so by prescribing a rule that applies its unfairness authority directly to discriminatory practices, which often easily satisfy the three-factor unfairness test. The FTC is well justified in pursuing such a rule, and existing civil rights laws and practices should inform its approach.
Read our full comments below, or download our submission here.
Structural discrimination remains a prevalent cause of harm for many Americans, particularly Black and brown people, women, LGBTQ+ people, people with disabilities, and other historically disadvantaged communities. When companies discriminate, whether intentionally or not, consumers can be unfairly hampered in their pursuit of basic services and economic opportunities, such as stable housing, quality jobs, and financial security.
The harms of structural discrimination have been amplified by algorithmic and other data-driven technologies. Civil rights laws that once offered stronger protection against discrimination have not kept pace with changing technology, so new legal and regulatory approaches are needed to protect consumers and to fill gaps.
We believe the Federal Trade Commission (FTC) must use rulemaking to address commercial practices that cause discrimination. It can do so by prescribing a rule that applies its unfairness authority directly to discriminatory practices, which often easily satisfy the three-factor unfairness test. The FTC is well justified in pursuing such a rule, and existing civil rights laws and practices should inform its approach.
I. The FTC must address commercial practices that cause discrimination, whether or not algorithms are involved.
A. Structural discrimination remains prevalent, even as commercial practices, technologies, and industries have changed.
Discrimination and the effects of discrimination still define significant portions of American life. Despite decades of varied attempts to root out and redress discrimination, discrimination still defines how Black and brown people, women, LGBTQ+ people, people with disabilities, and other historically disadvantaged people can access basic goods and services, seek economic opportunities, and pursue safe and healthy lives.
For example, since the 1970s, the median income for Black and Hispanic households has significantly trailed that of white households. In 2020, Black and Hispanic median household income was roughly $46,000 and $55,000, respectively, while white households made $75,000.
Across race and ethnicity, women earn less than men. In 2019, the median white family had $184,000 in family wealth, whereas the median Black and Hispanic family had $23,000 and $38,000, respectively.
In addition, Black renters have evictions filed against them at twice the rate of white renters — and Black women are more likely to be subject to illegitimate eviction filings, and most likely to be further denied future housing due to those filings. In 2020, Black borrowers had double the mortgage application denial rate of their white counterparts.
Rates of discrimination in hiring have also persisted over time, particularly racial discrimination. Women are more likely to occupy low-wage occupations, making up two-thirds of the low-wage workforce. Only 19.1% of people with a disability were employed in 2021 compared to 63.7% of those without a disability. In large part due to occupational segregation, people with disabilities make 66 cents for every dollar that people without disabilities earn. LGBTQ+ people also experience a wage gap, making 89 cents for every dollar earned by non-LGBTQ+ workers. This wage gap is worse for LGBTQ+ women, people of color, and transgender people.
Furthermore, people with disabilities typically have less access to healthcare. Similarly, members of the LGBTQ+ community have less access to healthcare and are more likely to have worse health outcomes than their heterosexual, cisgender counterparts.
Historically, a range of explicitly discriminatory federal, state, and local government policies ensured that Black and brown people, women, LGBTQ+ people, and people with disabilities were categorically denied equal protection under the law. The practice of redlining deliberately excluded predominantly Black communities from economic opportunities and perpetuated residential segregation. Residential segregation has served as the basis for community disinvestment, which has resulted in disparities in wealth, health, education, and employment. Furthermore, prior to the Equal Pay Act, the Americans with Disabilities Act, and the Civil Rights Act, very few legal protections existed for women, LGBTQ+ people, and people with disabilities. These policies created disparities in important life opportunities that persist despite today's greater legal protections.
Today, a range of government policies, corporate practices, and other forces continue to “perpetuate systemic barriers to opportunities and benefits for people of color and other underserved groups.”
B. Algorithmic systems expand and exacerbate structural discrimination.
Powerful institutions now use a variety of automated, data-driven technologies to shape key decisions about people’s lives. These technologies can both expand and exacerbate historical racial and economic disparities in housing, employment, public benefits, education, the criminal legal system, healthcare, and other areas of opportunity and wellbeing. Across these areas, technologies are often used to make decisions that substantially affect people’s material conditions, especially in the absence of government attention and regulation.
In housing, algorithmic systems drive, exacerbate, and obscure decisions about rentals, appraisals, mortgages, and online advertising audiences. For example, the algorithms that banks use to approve or deny mortgage loans have been shown to disproportionately reject applicants who are people of color. Relative to similarly positioned white applicants, Latinx applicants are 40% more likely to be rejected and Black applicants are 80% more likely to be rejected. Such disparities keep people of color from becoming homeowners. But even when a mortgage is approved, homeowners of color face further discriminatory hurdles. For example, some financial technology companies use algorithms in their underwriting process and charge Black and Latinx borrowers 5.4 to 7.7 basis points more for mortgage loans than similarly situated white borrowers. As a result, Black and Latinx borrowers annually pay $450 million more in interest for home loans.
Algorithmic systems also carry forward the legacy of historic policies and practices that segregated, devalued, and disinvested from communities of color. For example, cities such as Detroit and Indianapolis use market value assessment algorithms to determine the “market strength” of a neighborhood and inform investment strategies such as subsidies, tax breaks, transit upgrades, and code enforcement. Consequently, already disadvantaged neighborhoods with lower homeownership rates, average home prices, and higher foreclosure rates are marked for disinvestment by such algorithms. Similarly, automated valuation models used by real estate agents, brokers, and mortgage lenders to supplement or supplant in-person appraisals have been shown to produce larger errors in majority Black neighborhoods than in white neighborhoods. As part of the Interagency Task Force on Property Appraisal and Valuation Equity, financial regulators committed to “address potential bias by including a nondiscrimination quality control standard in the proposed [automated valuation model] rule.”
Beyond homeownership, algorithmic systems used for rental decisions continue to harm marginalized communities and block access to housing. Algorithmic systems mediate what housing opportunities renters are aware of in the first place. For example, until recently, large ad platforms like Meta allowed advertisers to exclude protected classes from their target audience. More importantly, Meta's ad delivery algorithm has been empirically shown to lead to significant demographic skews on the basis of protected factors, even when an advertiser chooses to broadly target their ad. Critically, Meta itself has acknowledged the potential for discriminatory effects arising from its ad delivery decisions. Furthermore, algorithms used in the tenant screening process have been shown to perpetuate discrimination in part due to their reliance on criminal, credit, and eviction records. In an ongoing case, a woman was denied tenancy because a tenant screening report included a dismissed shoplifting charge for her son. Because arrest, criminal, and eviction records are already racially biased, algorithms that use such information to make housing decisions further harm marginalized communities and lock people out of housing.
Similar to housing, algorithmic systems used in credit tend to replicate and exacerbate historically racist practices. For example, FICO, the predominant credit scoring algorithm used as the basis for over 90% of lending decisions, positively weighs factors like mortgage payments while excluding rental payment history. This systematically disadvantages Black, Latinx, and Native American consumers who have historically had less access to homeownership and traditional credit than white consumers. In addition, credit determinations for minority and low-income borrowers tend to be less accurate than those for white borrowers. Because marginalized communities have historically had less access to credit, algorithms that predict credit risk are less accurate for minority borrowers because there is less data to inform the risk prediction. These inaccuracies perpetuate racial biases within lending practices.
Financial technology companies that rely on newer algorithmic systems — as well as new or alternative data — to make lending decisions are not immune from replicating these longstanding problems. For example, one lender’s platform relies on machine learning models and non-traditional applicant data, including data related to borrowers’ higher education, to underwrite and price consumer loans. Their machine learning models have been shown to penalize loan applicants based on the average SAT and ACT scores of the colleges that they attended, which research shows are not correlated with academic merit or success but are instead correlated with race and socioeconomic status. A monitorship assessment of this model found adverse approval and denial disparities at the final stage of the loan process for Black applicants.
Algorithmic systems also impact people's ability to navigate the job hiring process on equal footing. Bias is apparent in every step of the hiring process, including who learns of a job in the first place. For example, the same problems with Meta's ad delivery algorithms described above persist for employers affirmatively trying to reach a broad target audience. Even when a job posting is seen by a diverse audience, resume screening algorithms can lead to further discriminatory outcomes. A now-defunct recruiting algorithm developed by Amazon screened resumes with a bias against women. This occurred because the training data was a function of resumes submitted to Amazon over a 10-year period, which were predominantly submitted by men. As a result, the algorithm learned to downgrade resumes that mentioned the word "women's" or the names of women's colleges. Had the algorithm been deployed, it would have perpetuated existing gender disparities at Amazon and excluded qualified women from jobs. Similarly, a separate algorithm created by a resume-screening company gave disproportionate weight to resumes that contained the name Jared and mentioned playing high school lacrosse as predictors for job performance. Had that algorithm been implemented without being audited first, it would have disproportionately screened out women and poor people of color.
Other algorithmic systems used in the hiring process also display bias against marginalized communities. For example, major employers such as CVS, Amazon, and Walmart use personality tests to determine the future success of applicants. Personality tests tend to produce results based on a "norm" that is informed by the ethnic majority and able-bodied people. As such, automated hiring systems are more likely to screen out applicants with disabilities. Applicants who are able to avoid being screened out based on resumes or personality tests still face bias in the interview process. In a product no longer offered by HireVue, employers could use facial analysis technology to conduct automated interviews. The interview AI evaluated applicants based on gestures, mannerisms, tone of voice, and cadence, which made up 29% of their "employability score." The use of this type of AI in the interview process would disproportionately harm people with disabilities who may have atypical speech patterns, movements, and facial actions.
Beyond the traditional civil rights areas of credit, employment, and housing, algorithmic systems routinely shape healthcare decisions and outcomes. Many algorithmic systems have been developed to help determine when and how much care should be allocated. Frequently, use of these systems leads to disparities in healthcare quality, delivery, and outcomes. One healthcare algorithm (that is representative of a family of risk prediction tools) that affects nearly 200 million people annually was shown to exhibit significant racial bias. Instead of using measures of illness, the algorithm relied on the cost of each patient's past medical care to predict future medical needs, and recommended early interventions for the patients deemed most at risk. Because Black patients historically have had less access to medical care, and as a result have generated lower costs than white patients with similar illness and need, the algorithm wrongfully recommended that white patients receive more care than Black patients. In order to be identified for the same care, Black patients effectively had to be sicker than their white counterparts. Similarly, an algorithm measuring kidney function, used to determine a patient's placement on the waiting list for a kidney transplant, led to transplant inequities for Black patients. The inclusion of race in the algorithm was intended to correct a previous error that led to overdiagnosing Black patients but ultimately resulted in underdiagnosing them. As a consequence, Black patients were less likely to receive the appropriate care, including life-saving kidney transplants.
Beyond healthcare algorithms that direct the type or level of care patients receive, algorithmic systems used as diagnostics have also been shown to lead to discriminatory outcomes. For example, an algorithm called CheXNet used to diagnose pneumonia and other lung diseases was predominantly trained on data that consisted of male chest x-rays. Consequently, the algorithm failed to reliably diagnose women, which would have led to significant disparities in lung treatment had the algorithm been implemented.
These are just a few ways that algorithmic systems have created, exacerbated, or obscured discrimination. The White House’s Blueprint for an AI Bill of Rights documents a number of other instances. And of course, these are just publicly known examples of ways by which algorithmic systems contribute to discrimination: many more instances of discrimination exist but have not been investigated, audited, or tested by government agencies, researchers, advocates, and journalists. Without focused attention, technology will reinforce racial, economic, and social injustices found everywhere in our society.