November 21, 2022
Across the Field

Comments on the FTC's proposed rulemaking on commercial surveillance and data security

Commercial Surveillance ANPR, R111004

Logan Koepke, Harlan Yu, Mariah de Leon, and Natasha Duarte


We submitted comments to the Federal Trade Commission's Advance Notice of Proposed Rulemaking on Commercial Surveillance and Data Security published on August 22, 2022. In the following comments, we urged the FTC to use rulemaking to address commercial practices that cause discrimination. It can do so by prescribing a rule that applies its unfairness authority directly to discriminatory practices, which often easily satisfy the three-factor unfairness test. The FTC is well justified in pursuing such a rule, and existing civil rights laws and practices should inform its approach.

Read our full comments below, or download our submission here.


Executive Summary

Structural discrimination remains a prevalent cause of harm for many Americans, particularly Black and brown people, women, LGBTQ+ people, people with disabilities, and other historically disadvantaged communities. When companies discriminate, whether intentionally or not, consumers can be unfairly hampered in their pursuit of basic services and economic opportunities, such as stable housing, quality jobs, and financial security.

The harms of structural discrimination have been amplified by algorithmic and other data-driven technologies. Civil rights laws that once offered stronger protection against discrimination have not kept pace with changing technology, so new legal and regulatory approaches are needed to protect consumers and to fill gaps.

We believe the Federal Trade Commission (FTC) must use rulemaking to address commercial practices that cause discrimination. It can do so by prescribing a rule that applies its unfairness authority directly to discriminatory practices, which often easily satisfy the three-factor unfairness test. The FTC is well justified in pursuing such a rule, and existing civil rights laws and practices should inform its approach.

I. The FTC must address commercial practices that cause discrimination, whether or not algorithms are involved.

A. Structural discrimination remains prevalent, even as commercial practices, technologies, and industries have changed.

Discrimination and its effects still define significant portions of American life. Despite decades of varied attempts to root it out and redress it, discrimination still defines how Black and brown people, women, LGBTQ+ people, people with disabilities, and other historically disadvantaged people can access basic goods and services, seek economic opportunities, and pursue safe and healthy lives.

For example, since the 1970s, the median household income for Black and Hispanic households has significantly trailed that of white households. In 2020, Black and Hispanic median household incomes were roughly $46,000 and $55,000, respectively, while white median household income was $75,000. Across race and ethnicity, women earn less than men. And in 2019, the median white family had $184,000 in wealth, whereas the median Black and Hispanic family had $23,000 and $38,000, respectively.

In addition, Black renters have evictions filed against them at twice the rate of white renters, and Black women are more likely to face illegitimate eviction filings and to be denied future housing because of those filings. In 2020, Black borrowers had double the mortgage application denial rate of their white counterparts.

Discrimination in hiring, particularly racial discrimination, has also persisted over time. Women are more likely to occupy low-wage occupations, making up two-thirds of the low-wage workforce. Only 19.1% of people with a disability were employed in 2021, compared to 63.7% of those without a disability. In large part due to occupational segregation, people with disabilities make 66 cents for every dollar that people without disabilities earn. LGBTQ+ people also experience a wage gap, making 89 cents for every dollar earned by non-LGBTQ+ workers. This wage gap is worse for LGBTQ+ women, people of color, and transgender people.

Furthermore, people with disabilities typically have less access to healthcare. Similarly, members of the LGBTQ+ community have less access to healthcare and are more likely to have worse health outcomes than their heterosexual, cisgender counterparts.

Historically, a range of explicitly discriminatory federal, state, and local government policies ensured that Black and brown people, women, LGBTQ+ people, and people with disabilities were categorically denied equal protection under the law. The practice of redlining deliberately excluded predominantly Black communities from economic opportunities and perpetuated residential segregation. Residential segregation, in turn, has served as the basis for community disinvestment, which has resulted in disparities in wealth, health, education, and employment. Furthermore, prior to the Equal Pay Act, the Americans with Disabilities Act, and the Civil Rights Act, very few legal protections existed for women, LGBTQ+ people, and people with disabilities. These policies and omissions created disparities in important life opportunities that still persist despite greater legal protections today.

Today, a range of government policies, corporate practices, and other forces continue to “perpetuate systemic barriers to opportunities and benefits for people of color and other underserved groups.”

B. Algorithmic systems expand and exacerbate structural discrimination.

Powerful institutions now use a variety of automated, data-driven technologies to shape key decisions about people’s lives. These technologies can both expand and exacerbate historical racial and economic disparities in housing, employment, public benefits, education, the criminal legal system, healthcare, and other areas of opportunity and wellbeing. Across these areas, technologies are often used to make decisions that substantially affect people’s material conditions, especially in the absence of government attention and regulation.

In housing, algorithmic systems drive, exacerbate, and obscure decisions about rentals, appraisals, mortgages, and online advertising audiences. For example, the algorithms that banks use to approve or deny mortgage loans have been shown to disproportionately reject applicants who are people of color. Relative to similarly positioned white applicants, Latinx applicants are 40% more likely to be rejected and Black applicants are 80% more likely to be rejected. Such disparities keep people of color from becoming homeowners. But even when a mortgage is approved, homeowners of color face further discriminatory hurdles. For example, some financial technology companies use algorithms in their underwriting process and charge Black and Latinx borrowers 5.4 to 7.7 basis points more for mortgage loans than similarly situated white borrowers. As a result, Black and Latinx borrowers pay $450 million more in interest for home loans each year.

Algorithmic systems also carry forward the legacy of historic policies and practices that segregated, devalued, and disinvested from communities of color. For example, cities such as Detroit and Indianapolis use market value assessment algorithms to determine the “market strength” of a neighborhood and inform investment strategies such as subsidies, tax breaks, transit upgrades, and code enforcement. Consequently, already disadvantaged neighborhoods with lower homeownership rates, lower average home prices, and higher foreclosure rates are marked for disinvestment by such algorithms. Similarly, automated valuation models used by real estate agents, brokers, and mortgage lenders to supplement or supplant in-person appraisals have been shown to produce larger errors in majority Black neighborhoods than in white neighborhoods. As part of the Interagency Task Force on Property Appraisal and Valuation Equity, financial regulators committed to “address potential bias by including a nondiscrimination quality control standard in the proposed [automated valuation model] rule.”

Beyond homeownership, algorithmic systems used for rental decisions continue to harm marginalized communities and block access to housing. Algorithmic systems mediate which housing opportunities renters learn about in the first place. For example, until recently, large ad platforms like Meta allowed advertisers to exclude protected classes from their target audiences. More importantly, Meta’s ad delivery algorithm has been empirically shown to produce significant demographic skews along protected class lines, even when an advertiser chooses to target its ad broadly. Critically, Meta itself has acknowledged the potential for discriminatory effects arising from its ad delivery decisions. Furthermore, algorithms used in the tenant screening process have been shown to perpetuate discrimination, in part because of their reliance on criminal, credit, and eviction records. In an ongoing case, a woman was denied tenancy because a tenant screening report included a dismissed shoplifting charge against her son. Because arrest, criminal, and eviction records are already racially biased, algorithms that use such records to make housing decisions further harm marginalized communities and lock people out of housing.

Similar to housing, algorithmic systems used in credit tend to replicate and exacerbate historically racist practices. For example, FICO, the predominant credit scoring algorithm and the basis for over 90% of lending decisions, positively weighs factors like mortgage payments while excluding rental payment history. This systematically disadvantages Black, Latinx, and Native American consumers, who have historically had less access to homeownership and traditional credit than white consumers. In addition, credit determinations for minority and low-income borrowers tend to be less accurate than those for white borrowers: because marginalized communities have historically had less access to credit, there is less data to inform risk predictions for these borrowers. These inaccuracies perpetuate racial biases within lending practices.

Financial technology companies that rely on newer algorithmic systems — as well as new or alternative data — to make lending decisions are not immune from replicating these longstanding problems. For example, one lender’s platform relies on machine learning models and non-traditional applicant data, including data related to borrowers’ higher education, to underwrite and price consumer loans. Its machine learning models have been shown to penalize loan applicants based on the average SAT and ACT scores of the colleges they attended, scores that research shows are not correlated with academic merit or success but are instead correlated with race and socioeconomic status. A monitorship assessment of this model found approval and denial disparities adverse to Black applicants at the final stage of the loan process.

Algorithmic systems also affect people’s ability to navigate the job hiring process on equal footing. Bias is apparent at every step of the hiring process, including who learns of a job in the first place. For example, the same problems with Meta’s ad delivery algorithms described above persist for employers affirmatively trying to reach a broad target audience. Even when a job posting is seen by a diverse audience, resume screening algorithms can lead to further discriminatory outcomes. A now-defunct recruiting algorithm developed by Amazon screened resumes with a bias against women. This occurred because the training data consisted of resumes submitted to Amazon over a 10-year period, which were predominantly submitted by men. As a result, the algorithm learned to downgrade resumes that mentioned the word “women’s” or the names of women’s colleges. Had the algorithm been deployed, it would have perpetuated existing gender disparities at Amazon and excluded qualified women from jobs. Similarly, a separate algorithm created by a resume-screening company identified having the name Jared and playing high school lacrosse as predictors of job performance. Had that algorithm been implemented without first being audited, it would have disproportionately screened out women and poor people of color.

Other algorithmic systems used in the hiring process also display bias against marginalized communities. For example, major employers such as CVS, Amazon, and Walmart use personality tests to predict the future success of applicants. Personality tests tend to produce results based on a “norm” informed by the ethnic majority and able-bodied people. As such, automated hiring systems are more likely to screen out applicants with disabilities. Applicants who avoid being screened out by resumes or personality tests still face bias in the interview process. In a product no longer offered by HireVue, employers could use facial analysis technology to conduct automated interviews. The interview AI evaluated applicants based on gestures, mannerisms, tone of voice, and cadence, which together made up 29% of their “employability score.” The use of this type of AI in the interview process would disproportionately harm people with disabilities, who may have atypical speech patterns, movements, and facial expressions.

Beyond the traditional civil rights areas of credit, employment, and housing, algorithmic systems routinely shape healthcare decisions and outcomes. Many algorithmic systems have been developed to help determine when and how much care should be allocated. Frequently, use of these systems leads to disparities in healthcare quality, delivery, and outcomes. One healthcare algorithm (representative of a family of risk prediction tools) that affects nearly 200 million people annually was shown to exhibit significant racial bias. Instead of using illness, the algorithm relied on the cost of each patient’s past medical care to predict future medical needs, and recommended early interventions for the patients deemed most at risk. Because Black patients have historically had less access to medical care, and as a result have generated lower costs than white patients with similar illness and need, the algorithm wrongly recommended that white patients receive more care than Black patients. To be identified for the same care, Black patients effectively had to be sicker than their white counterparts. Similarly, an algorithm that measures kidney function, and that is used to determine a patient’s placement on the kidney transplant waiting list, led to transplant inequities for Black patients. The inclusion of race in the algorithm was intended to correct a previous error that led to overdiagnosing Black patients, but it ultimately resulted in underdiagnosing them. As a consequence, Black patients were less likely to receive appropriate care, including life-saving kidney transplants.

Beyond healthcare algorithms that direct the type or level of care patients receive, algorithmic systems used as diagnostics have also been shown to lead to discriminatory outcomes. For example, an algorithm called CheXNet, used to diagnose pneumonia and other lung diseases, was trained predominantly on male chest x-rays. Consequently, the algorithm failed to reliably diagnose women, which would have led to significant disparities in lung disease treatment had the algorithm been implemented.

These are just a few ways that algorithmic systems have created, exacerbated, or obscured discrimination. The White House’s Blueprint for an AI Bill of Rights documents a number of other instances. And of course, these are just publicly known examples of ways by which algorithmic systems contribute to discrimination: many more instances of discrimination exist but have not been investigated, audited, or tested by government agencies, researchers, advocates, and journalists. Without focused attention, technology will reinforce racial, economic, and social injustices found everywhere in our society.



1. U.S. Census Bureau, Current Population Survey, 1968 to 2021, Annual Social and Economic Supplements, Real Median Household Income by Race and Hispanic Origin: 1967 to 2020, available at https://www.census.gov/content/dam/Census/library/visualizations/2021/demo/p60-273/figure2.pdf.

2. U.S. Department of Labor, Median annual earnings by sex, race and Hispanic ethnicity, available at https://www.dol.gov/agencies/wb/data/earnings/median-annual-sex-race-hispanic-ethnicity.

3. Ana Hernández Kent, Lowell R. Ricketts, “Has Wealth Inequality in America Changed over Time? Here Are Key Statistics,” Federal Reserve Bank of St. Louis, Dec. 02, 2020, available at https://www.stlouisfed.org/open-vault/2020/december/has-wealth-inequality-changed-over-time-key-statistics.

4. Sophie Beiers, Sandra Park, Linda Morris, “Clearing the Record: How Eviction Sealing Laws Can Advance Housing Access for Women of Color,” ACLU, Jan. 10, 2020, available at https://www.aclu.org/news/racial-justice/clearing-the-record-how-eviction-sealing-laws-can-advance-housing-access-for-women-of-color.

5. Jung Hyun Choi, Peter J. Mattingly, “What Different Denial Rates Can Tell Us About Racial Disparities in the Mortgage Market,” Urban Institute, Jan. 13, 2022, available at https://www.urban.org/urban-wire/what-different-denial-rates-can-tell-us-about-racial-disparities-mortgage-market (“According to the most recent Home Mortgage Disclosure Act (HMDA) data, 16.1 percent of all mortgage applications in 2020 were denied. Of those denials, Black borrowers had the highest denial rate (27.1 percent), whereas white borrowers had the lowest (13.6 percent).”).

6. Lincoln Quillian, Devah Pager, Ole Hexel, Arnfinn H. Midtbøen, Meta-analysis of Field Experiments Shows No Change in Racial Discrimination in Hiring over Time, Proceedings of the National Academy of Sciences 114, no. 41 (2017): 10870-10875, available at https://www.pnas.org/doi/10.1073/pnas.1706255114.

7. Joan Entmacher, Lauren Frohlich, Katherine Gallagher Robbins, Emily Martin, Liz Watson, Underpaid & Overloaded: Women in Low-Wage Jobs, National Women’s Law Center, 2014, available at https://nwlc.org/wp-content/uploads/2015/08/final_nwlc_lowwagereport2014.pdf.

8. U.S. Bureau of Labor Statistics, “19.1 percent of people with a disability were employed in 2021,” The Economics Daily, Mar. 01, 2022, available at https://www.bls.gov/opub/ted/2022/19-1-percent-of-people-with-a-disability-were-employed-in-2021.htm.

9. Jennifer Cheeseman Day, Danielle Taylor, “Do People With Disabilities Earn Equal Pay? In Most Occupations, Workers With or Without Disabilities Earn About the Same,” U.S. Census Bureau, Mar. 21, 2019, available at https://www.census.gov/library/stories/2019/03/do-people-with-disabilities-earn-equal-pay.html.

10. Human Rights Campaign, “The Wage Gap Among LGBTQ+ Workers in the United States,” available at https://www.hrc.org/resources/the-wage-gap-among-lgbtq-workers-in-the-united-states.

11. Human Rights Watch, “‘You Don’t Want Second Best’: Anti-LGBT Discrimination in US Health Care,” July 2018, available at https://www.hrw.org/report/2018/07/23/you-dont-want-second-best/anti-lgbt-discrimination-us-health-care.

12. Exec. Order No. 13985, 86 Fed. Reg. 7009 (Jan. 20, 2021).

13. Emmanuel Martinez, Lauren Kirchner, “The Secret Bias Hidden in Mortgage-Approval Algorithms,” The Markup, Aug. 25, 2021, available at https://themarkup.org/denied/2021/08/25/the-secret-bias-hidden-in-mortgage-approval-algorithms.

14. Id.

15. Robert Bartlett, Adair Morse, Richard Stanton, Nancy Wallace, Consumer-Lending Discrimination in the FinTech Era, 143 Jour. of Fin. Econ. 1, 30-56 (2022), available at https://www.sciencedirect.com/science/article/abs/pii/S0304405X21002403.

16. Id.

17. Sara Safransky, “Geographies of Algorithmic Violence: Redlining the Smart City,” International Journal of Urban and Regional Research, Nov. 24, 2019, 9, available at https://onlinelibrary.wiley.com/doi/full/10.1111/1468-2427.12833.

18. Id. at 10.

19. Automated valuation models are defined in 12 U.S.C. § 3354(d) as “any computerized model used by mortgage originators and secondary market issuers to determine the collateral worth of a mortgage secured by a consumer’s principal dwelling.”

20. Michael Neal, Sarah Strochak, Linna Zhu, Caitlin Young, How Automated Valuation Models Can Disproportionately Affect Majority Black Neighborhoods, Urban Institute, Dec. 2020, available at https://www.urban.org/sites/default/files/publication/103429/how-automated-valuation-models-can-disproportionately-affect-majority-black-neighborhoods_1.pdf.

21. Interagency Task Force on Property Appraisal and Valuation Equity, Action Plan to Advance Property Appraisal and Valuation Equity: Closing the Racial Wealth Gap by Addressing Mis-valuations for Families and Communities of Color, Mar. 2022, 27, available at https://pave.hud.gov/sites/pave.hud.gov/files/documents/PAVEActionPlan.pdf.

22. Julia Angwin, Ariana Tobin, Madeleine Varner, “Facebook (Still) Letting Housing Advertisers Exclude Users by Race,” ProPublica, Nov. 21, 2017, available at https://www.propublica.org/article/facebook-advertising-discrimination-housing-race-sex-national-origin.

23. Id.

24. Settlement Agreement and Release, Exhibit A – Programmatic Relief, National Fair Housing Alliance, et al. v. Facebook, Inc., No. 18-cv-02689-JGK (S.D.N.Y. Mar. 8, 2019), Doc. 67-2, https://nationalfairhousing.org/wp-content/uploads/2019/03/FINAL-Exhibit-A-3-18.pdf. See also DOJ Settlement (“Meta will develop a system to reduce variances in Ad Impressions between Eligible Audiences and Actual Audiences, which the United States alleges are introduced by Meta’s ad delivery system, for sex and estimated race/ethnicity”).

25. Conn. Fair Hous. Ctr. v. Corelogic Rental Prop. Solutions, LLC, 369 F. Supp. 3d 362 (D. Conn. 2019).

26. Emmanuel Martinez, Lauren Kirchner, “The Secret Bias Hidden in Mortgage-Approval Algorithms,” The Markup, Aug. 25, 2021, available at https://themarkup.org/denied/2021/08/25/the-secret-bias-hidden-in-mortgage-approval-algorithms.

27. Id.