We submitted the following testimony to DC Council's Committee on Government Operations and Facilities regarding B24-0558, the Stop Discrimination by Algorithms Act of 2021.
Chair White and members of the Committee on Government Operations and Facilities,
Thank you for the opportunity to testify on the Stop Discrimination by Algorithms Act (SDAA). This bill represents a positive step toward acknowledging and addressing technology’s role in determining DC residents’ access to basic economic needs and opportunities. Our testimony provides some concrete examples of discrimination we believe the Council must address — through the SDAA and other legislation.
Upturn is a DC-based research and advocacy organization whose mission is to advance justice in the design, governance, and use of technology. We study and challenge the systems that mediate people’s access to essential opportunities, like housing, jobs, and health care. Our team includes computer and data scientists, lawyers, researchers, and policy experts. We often work in partnership with community-based organizations.
Our work aims to uncover and fight the types of discriminatory harm that the SDAA seeks to address. For example, our research has exposed how Facebook’s ad delivery algorithm showed users different job ads based on their race and gender; how job applicants in DC are screened using ableist assessments when they apply for hourly positions at companies like Walmart, CVS, and Starbucks; and how an algorithm proposed in Missouri would cut or take away in-home care for many people in the state. We recently launched the Benefits Tech Advocacy Hub, a toolkit and community of practice for challenging the systems used to determine people’s access to public benefits programs. We write to share with the Council what we’ve learned from doing this work over the years. These lessons should inform the Council’s approach to the SDAA and other important legislation for combatting discrimination.
1. Technology’s role in discrimination is an important and timely issue for DC Council to address because it is already affecting people in DC.
The SDAA seeks to address discrimination that is driven, exacerbated, or obscured by automated decision systems. The discrimination that many DC residents experience when they apply for housing, jobs, loans, or public benefits is not new. However, discriminatory outcomes can scale quickly and evade detection when these systems are standardized, automated, and outsourced to third-party vendors.
As researchers analyzing these systems, we and others in our community of practice have uncovered several examples of this problem. More examples can be found in the new Blueprint for an AI Bill of Rights released by the White House Office of Science and Technology Policy this week.
A. Screening job applicants in DC
For a research study last year, we completed and documented the online application process for 15 hourly, entry-level jobs in DC at large employers like Walmart, CVS, and Starbucks. These employers used standardized applicant tracking systems, which allow them to integrate assessments — such as multiple-choice tests and resume screeners — from different vendors into one application process.
We found that large employers are using ableist personality tests at scale to screen job applicants in DC. People who can perform the essential functions of a job — such as ringing up customers, counting change, or stocking shelves — but don’t fit a particular personality model could find themselves repeatedly knocked out of applicant pools. The scoring of personality tests is often calibrated based on a disproportionately white and middle-class population. These personality tests are not new — they’ve been around since people applied for jobs using a pencil and paper. But online job application systems allow these tests to scale more easily, so that someone applying to multiple cashier positions may see the same personality questions over and over.
Some hiring assessments have a history of being used or developed to weed out job applicants who may be more likely to organize, including Black workers. Today’s personality tests still include questions that may be part of a union-avoidance strategy. For example, we saw questions that asked whether we questioned authority or prioritized our well-being over our performance on the job. One question asked if we preferred a job where “there are high performance expectations” or where we are “highly compensated for [our] work.” These questions were not clearly related to performing the essential functions of the jobs we were applying for.
Job application systems ask candidates to provide their availability and pay preferences, without telling candidates what shifts the employer seeks to fill, what salary they offer for the job, or how the information will be used to assess the applicant. As applicants, we could not see how this information was being used to score or disqualify candidates. These practices may pressure workers to overstate their availability and can disadvantage people with caretaking or other responsibilities, like school or a second job.
B. Background checks and digital records as a barrier to housing and employment
You may not think of housing or employment background checks when you think of automated decision systems. However, background checks are one of the most widespread and racially discriminatory applications of data and algorithms that impede DC residents’ access to basic needs like jobs and housing every day.
In our job application research, we found that the applicant tracking systems employers used made it easy for them to integrate background checks from third-party vendors into the job application process. Many applications required us to agree to background checks, but did not disclose what information would be checked or how it would be used in hiring decisions.
Similarly, almost everyone who searches for housing in DC must undergo a tenant screening process for each unit they apply for. Landlords usually purchase reports from tenant screening companies (the DC Housing Authority contracts with RentGrow to screen tenants for public housing). These companies use algorithms to match housing applicants with eviction, credit, criminal, or other records, which may be used to create a numerical score, a risk assessment, and/or a recommendation about whether to accept the tenant, reject them, or charge them a higher security deposit.
Background checks and the records that populate them — especially criminal, credit, and eviction histories — have become overwhelming barriers to housing and employment that especially harm Black, brown, low-income, and disabled people in DC. For example, evictions in DC are disproportionately concentrated in Wards 7 and 8, because landlords in those neighborhoods serially file evictions against their tenants as a first resort to collect rent and avoid making repairs. Tenant screening companies collect those eviction records and translate them into high-risk ratings or lower scores on tenant screening reports. In turn, landlords tend to reject or charge higher security deposits to any tenant who has an eviction history. The result of this system — which generates great profits for data brokers and tenant screening companies — is that Black residents are disproportionately locked out of access to housing.
C. Race and gender discrimination in online advertising
In 2019 and 2020, Upturn partnered with academic researchers to conduct several studies on how Facebook ads were targeted and delivered to users. Previous studies had shown that advertisers who actively wanted to use Facebook ads to discriminate could do so. But we also found that even when advertisers tried to avoid targeting their ads to a particular type of audience, Facebook’s algorithm still (at the time) delivered ads to audiences with significant race and gender skews. For example, even when researchers directed ads to all US users, ads for jobs in the lumber industry were more likely to be delivered to white men, while ads for janitor jobs were more likely to be delivered to Black women. These studies, along with research by others in the field, helped support litigation that ultimately led to Meta making several significant changes to its targeting and delivery of housing, credit, and employment ads.
D. Alternative data and “educational redlining” in lending decisions
Over the last two years, Upturn, along with the NAACP’s Legal Defense Fund and the Student Borrower Protection Center (SBPC), has been part of an effort to assess the fair lending outcomes of the machine learning models used by lending platform Upstart. The investigation into Upstart’s model initiated by SBPC led Upstart to make changes to its lending model, which had been penalizing loan applicants based on the average SAT and ACT scores of the colleges they attended. Research shows that standardized test scores are not correlated with academic merit or success, but are correlated with race and socioeconomic status. Yet it was only after significant advocacy and an inquiry from several US Senators that SBPC was able to uncover this discriminatory use of educational background. Upstart disputed the results of SBPC’s investigation, claiming they were invalid because Upstart had changed its model during the course of the investigation. This is a common response by companies when external researchers uncover discrimination, and it’s hard to verify because we don’t have visibility into these model changes.
The ongoing investigation of algorithmic discrimination in Upstart’s lending model shows both the benefits of independent research and the significant information asymmetry between the developers of algorithms and those trying to identify and address algorithmic discrimination. The SDAA is a positive step toward making it easier to do these investigations. Making the audits provided to the OAG publicly available would further enable the type of independent research that has played a significant role in identifying algorithmic discrimination in the past.
E. Using algorithms to cut home care hours for people with disabilities
Many states and DC use or are considering using an algorithm to assess whether people are eligible to receive care in their homes and how many hours of care people receive. In many cases, the hours of care people receive have been cut, sometimes dramatically, after these algorithmic assessments are deployed. The assessments may operationalize policy changes to the maximum amount of care available, and may also substantially change which conditions are considered when allocating care to people. The people impacted by these cuts, along with advocates like legal aid attorneys, have sought to challenge these systems by demonstrating that they don’t account for many people’s needs, perpetuate austerity policies, and push people into institutions instead of home-based care. People are sometimes forced to litigate and/or file public records requests to try to find out the factors and formulas the assessments use to calculate care hours or determine eligibility. Many of these lawsuits have revealed that unconscionably restrictive and arbitrary algorithmic assessments have affected people with disabilities across the country. Upturn is currently engaged in research to try to learn more about the factors used to screen and assess people for home care eligibility and hours in DC.
In 2018, the Missouri Department of Health and Senior Services proposed and published a new home care assessment algorithm for public comment (other states have not had this type of public process). Legal aid organizations, home and community based service providers, and Upturn tested the algorithm and showed that it could disqualify as many as 66% of currently eligible people. It contained basic errors and fundamentally failed to assess people’s needs. For example, the algorithm considered people’s mobility issues with getting in and out of bed, but not with getting up and down stairs.
Public scrutiny of Missouri’s home care algorithm has helped to at least slow its implementation and ensure that Missouri residents who wouldn’t qualify under the new assessment system are not currently cut off from their benefits.
2. These discriminatory harms are not new, but they often go unaddressed because of gaps in civil rights laws and enforcement.
Technology’s role in discrimination has not been adequately addressed by existing civil rights enforcement and litigation for several reasons, including:
A. Existing civil and human rights laws don’t always explicitly cover technology vendors that create, sell, and/or administer the systems that determine people’s access to essential economic needs and opportunities.
When our federal and DC civil and human rights laws were drafted, they did not contemplate that so many decisions about access to housing, jobs, credit, and other economic opportunities would be mediated by systems created by technology vendors. While these laws clearly regulate first-party decision-makers, such as employers, landlords, and banks, the laws are often much less clear about the liability of third parties like tenant screening companies, hiring assessment vendors, and online advertising platforms.
For example, Title VII, which protects against employment discrimination, covers employers and employment agencies, but there is no guidance as to whether platforms like ZipRecruiter, LinkedIn, or Meta qualify as employment agencies. Vendors routinely disclaim civil rights liability by stating that they do not make decisions about who ultimately gets a job, a loan, or an apartment — even though their products are designed and marketed to help make and standardize those decisions at scale.
B. A lack of information hinders enforcement.
Much of civil rights enforcement relies on impacted people or advocates to file complaints. But the automated or standardized processes used to help make life-altering decisions about people are often obscured or invisible. For example, as applicants to entry-level retail jobs in DC, we were aware that we were taking standardized hiring assessments like personality tests, but we couldn’t see whether employers were using the scores on those assessments to rank candidates, or whether they were rejecting all candidates below a certain score. We could see that we were asked to provide our availability and pay preferences, but we couldn’t see whether we were disqualified based on our stated salary preference.
As another example, DC has struggled to enforce its tenant protections due to obscurity in how tenants are screened. DC law prohibits landlords from doing a criminal background check on potential tenants until they’ve extended a conditional offer of housing, and then it limits the types of criminal records landlords can use to deny applicants. But some tenant screening tools may not even reveal to landlords, let alone tenants, the specific criminal records they use to produce scores and recommendations. The US District Court for the District of Connecticut is currently hearing a fair housing case brought by Carmen Arroyo, whose disabled son was denied permission to move into her apartment based on a tenant screening report that simply stated a “disqualifying record” was found, without revealing any of the underlying information about the record to the property manager.
Even when people are able to find out that they were subject to an automated decision system, it’s usually after the decision has been made, and too late to recover the benefit or opportunity they were denied. By the time people are able to gather enough information to file a complaint about a hiring assessment or tenant screening process, the job or apartment has already gone to someone else. Few people in that position have the time and resources to research and challenge the decision-making process that left them without income or shelter.
In some cases, litigation fails because courts expect plaintiffs’ prima facie cases to include statistical evidence of discrimination that plaintiffs have no good way of obtaining. For example, the EEOC has said that national statistics support a finding that excluding job candidates based on criminal records will have a racially disparate impact. However, claims by plaintiffs relying on national statistics to challenge employment background checks have been dismissed for failure to show disparate impact statistics for the specific applicant pool for the job.
The audit reports and adverse action notices required under the SDAA would help impacted people and DC agencies find out about and enforce against civil and human rights violations. Requiring entities to assess and disclose information about their systems before deploying them could help prevent more people from unfairly losing opportunities and benefits in the first place, when effective remedies are still possible.
3. Independent research to investigate the role of technology in discrimination can make a difference.
External research into automated decision systems has been a catalyst for the following positive developments:
External research into Facebook’s ad targeting and delivery system fed directly into several fair housing lawsuits, which ultimately led to Meta announcing that it would discontinue discriminatory ad targeting and delivery tools for housing, credit, and job ads.
SBPC’s study and report on Upstart’s lending model brought about a Congressional inquiry, prompted Upstart to change its model, and eventually led to a monitorship of Upstart designed to test for disparate impacts.
After gaining access to and testing public benefits eligibility and care allocation algorithms, advocates and beneficiaries have been able to slow, alter, or in some cases stop the use of these systems to cut people’s benefits.
Research into payday lenders’ harmful advertising and lead generation practices prompted Google to ban payday loan ads.
In 2015, Quirtina Crittenden documented that Airbnb hosts repeatedly denied her booking requests until she shortened her name to Tina and changed her profile picture so hosts couldn’t tell she was Black. Crittenden’s advocacy — she launched the #Airbnbwhileblack hashtag which inspired many similar accounts — eventually led Airbnb to undergo a civil rights audit, make changes to its system to hide profile pictures from hosts until after booking, and launch a new research program to test its products for discrimination.
These are just a few of the many examples where external research has catalyzed important changes to technologies and systems that impact people’s daily lives and opportunities. This research has complemented, supported, and often prompted regulatory enforcement and litigation.
4. Public access to information about how these systems work is critical for enforcing the law.
The SDAA has the potential to enable external researchers and advocates like Upturn, as well as impacted people in DC, to scrutinize automated decision systems and identify discrimination against DC residents. While we applaud OAG for its attention to these problems, we know that one public agency cannot investigate and litigate every case of discrimination. External research will continue to be critical for discovering, focusing attention on, and challenging the harms the SDAA is designed to address. Complaints and litigation from impacted people are also a critical enforcement mechanism, not only for the SDAA but also for existing DC human rights laws. To achieve this potential, however, the SDAA must facilitate some public disclosure of information about the technologies that impact DC residents.
Currently, the SDAA provides for some disclosure of information to the public and to impacted people. It requires covered entities to disclose whether and how they use personal information in covered automated decision systems, and requires them to provide adverse action notices. These are both positive steps, but more disclosure may be needed to effectively enforce the law. A disclosure on a company’s website can help people who already know where to look for information, but it might not help someone who is preparing to apply for public benefits and wants to know ahead of time what system(s) will be used to screen them. Moreover, the SDAA as written would require the results, methods, and other documentation of audits to be disclosed only to OAG. The Council should consider making some subset or version of this information available to the public so that, for example, external researchers can help scrutinize the legitimacy and soundness of the audit reports and DC residents can better identify systems that may have adversely impacted them.
5. The Council must address the technologies that people encounter every day even if they’re not novel.
As the examples in this testimony demonstrate, the problems the SDAA describes are not limited to big tech companies or complex algorithms that use machine learning or other sophisticated techniques. In our work, we often see simple standardized tools and checklists used to make decisions that deny people access to resources at scale. It’s often small companies building software for specific purposes using simple logic and data matching. For example, tenant screening software companies purchase eviction and criminal records from data brokers and rely on basic name matching to link these records to housing applicants, often erroneously.
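To make the failure mode concrete, here is a minimal, hypothetical sketch of what exact-name record matching looks like. This is not any vendor’s actual code, and all names and records below are invented for illustration:

```python
# Hypothetical sketch of naive name-based record matching, the kind of
# simple logic described above. Not any vendor's actual code; all data
# below is invented for illustration.
def naive_match(applicant_name, records):
    """Link court records to an applicant by exact (case-insensitive) name match."""
    key = applicant_name.strip().lower()
    return [r for r in records if r["name"].strip().lower() == key]

court_records = [
    {"name": "James Smith", "type": "eviction filing", "year": 2019},
    {"name": "James Smith", "type": "arrest, no conviction", "year": 2015},
]

# A different person who happens to share the name "matches" both records,
# even though neither record is theirs.
matches = naive_match("James Smith", court_records)
```

Because the match key is nothing more than a name, every person named James Smith inherits the same records, so a report built this way can flag an applicant for someone else’s eviction or arrest.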
It’s also important to note that DC residents experience material harms, in the form of denials of housing, jobs, healthcare, and other essential needs, even when they never interact with an entity online and even when the data being used to discriminate doesn’t come from their online presence or past activities. For example, a person could apply for housing with a paper application or a simple online form, and the landlord could still use tenant screening software to pull court records about that person and make a discriminatory decision not to offer them housing.
6. The Council shouldn’t overlook other policies that are needed to address these problems.
As the SDAA acknowledges, technology is deeply embedded in all systems that mediate access to basic needs and impact civil rights. However, while algorithms add a new vector for discrimination, they are not the root cause of discrimination. While the SDAA is an important step, Upturn is also advocating for other interventions that are complementary to the SDAA and critical for addressing technology’s role in discrimination.
For example, one source of discrimination the Council must address is the use of data — such as court records and information held by credit reporting agencies — to lock DC residents out of jobs, housing, and other resources. In May, Council passed the Eviction Record Sealing Authority and Fairness in Renting Amendment Act of 2022, which implemented automatic sealing of eviction records that did not result in a judgment after 30 days. This is a significant step in limiting the use of eviction records to deny housing to DC residents. However, as the DC Council Office of Racial Equity (CORE) has acknowledged, eviction records must be sealed immediately upon filing in order to improve the status quo of racial inequity. Data brokers scrape court websites and gather eviction filings as soon as they are posted, and those filings can remain in circulation long after they’re sealed. Moreover, all eviction records in DC are products of racial injustice, and using them to produce tenant screening scores or make housing decisions only deepens that injustice. Automatically sealing all eviction records at the point of filing, and limiting the types of information tenant screening companies can report, are important steps the Council can take toward addressing algorithmic discrimination.
For similar reasons, Council should also move to limit access to criminal records by passing the RESTORE Amendment Act. Criminal records are one way discriminatory policing practices are codified into data, and they can be used by algorithmic decision-making tools both to replicate the surveillance and suspicion of marginalized communities and to limit people’s housing and job opportunities. As noted earlier, background checks are easily integrated into employment and tenant screening processes. DC has some of the weakest criminal record sealing laws in the country. Non-conviction arrests can appear on a background check for seven years, and convictions of any kind can appear on a background check indefinitely. This means arrests or convictions that have nothing to do with a person’s ability to perform a job or uphold their lease can still keep them from getting a job or securing housing for years or even decades after an encounter with the criminal legal system. Of the criminal record sealing reform bills currently introduced, the RESTORE Amendment Act is the strongest. The Act would automatically seal non-conviction records from public view and provide a streamlined process and shorter waiting periods for sealing certain convictions. Passing the RESTORE Amendment Act would offer significant relief to DC residents and keep criminal records from being used in algorithmic decision-making systems.
7. Many of the algorithmic decisions that impact DC residents’ wellbeing are decisions made by government agencies.
When a District resident applies for home-based care through Medicaid, they are subject to an eligibility decision based on a scoring algorithm that does not consider the impact of cognitive issues on the amount of care they need. In this case, the technology is relatively simple: a nurse conducts a 286-question assessment in person, and the responses are scored and entered into a software system. The software then uses a small subset of these questions to calculate an eligibility score and, if the person is found eligible, to decide how many hours of care should be allocated to them. This algorithmic decision has serious implications: it can dictate whether someone is able to stay in their community and get the care they need, or is forced into an institution or left to go without care. The failings of this system also mean that people must appeal decisions, get legal support, and spend the time to make their case in a hearing in order to have a chance at getting the care they need. Whether in the SDAA or other legislation, the Council must address DC agencies’ discriminatory use of technology, as well as the underlying policies that limit residents’ access to the care they need.
8. Demographic testing is essential to civil rights enforcement. However, collecting and inferring demographic data for antidiscrimination testing requires careful planning and safeguards.
In several civil rights domains, demographic testing has been a historically important (and in some cases legally mandated) means of rooting out discrimination. Fair housing testers investigate whether landlords treat potential tenants differently based on their race or source of income. Mortgage lenders are required to collect demographic data from borrowers and analyze their lending practices for disparities. Many employers are required to ask job applicants and employees to answer voluntary demographic questions and to submit reports to government agencies on the aggregate demographic makeup of their workforce, broken down by race and gender categories.
In recent years, in response to pressure from civil rights groups, some technology firms have begun to acknowledge the need to collect or infer demographic data to test products and algorithms for discriminatory impacts. As Upturn wrote in a paper on demographic testing, “Organizations cannot address demographic disparities that they cannot see.” Thus, it’s important that the audits under the SDAA include testing for discrimination.
Testing algorithms for discrimination will require covered entities to collect or infer demographic data. In many cases, covered entities may not already have access to such data. Choosing a methodology for collecting or inferring this data and for conducting discrimination testing is a sensitive and context-specific process. There is no one-size-fits-all approach. For example, when Airbnb tests for discrimination by its hosts against potential guests, it wants to measure guests’ “perceived” race (i.e. how hosts perceive guests). However, when analyzing whether homes in neighborhoods of color are systematically undervalued in appraisals, it may be reasonable to infer neighborhood demographics using census data. While we should expect covered entities to self-test their systems, we cannot assume that they already possess the data or expertise needed to do it responsibly.
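As a concrete illustration of one such methodology, geography-based inference proxies individual demographics using the published composition of a census tract. The tract shares and outcomes below are invented; a real analysis would draw on actual census tables (e.g., ACS data) and handle uncertainty with far more care:

```python
# Hypothetical sketch of geography-based demographic inference, one of the
# context-specific methods discussed above. Tract compositions and outcome
# data are invented for illustration only.
TRACT_SHARES = {
    "tract_A": {"Black": 0.70, "white": 0.20, "other": 0.10},
    "tract_B": {"Black": 0.10, "white": 0.80, "other": 0.10},
}

def inferred_demographics(tract):
    """Proxy an individual's demographics by their tract's composition."""
    return TRACT_SHARES[tract]

def group_disparity(outcomes):
    """Aggregate an outcome (e.g., an undervaluation indicator) by inferred
    group, weighting each observation by its tract's group shares."""
    totals, weights = {}, {}
    for tract, value in outcomes:
        for group, share in inferred_demographics(tract).items():
            totals[group] = totals.get(group, 0.0) + share * value
            weights[group] = weights.get(group, 0.0) + share
    return {g: totals[g] / weights[g] for g in totals}

# Invented example: 1 = appraisal undervalued, 0 = not.
rates = group_disparity([("tract_A", 1), ("tract_A", 1), ("tract_B", 0)])
```

This kind of proxy is a reasonable fit for neighborhood-level questions like appraisal disparities, but, as noted above, it would be the wrong tool where perceived race of an individual is what matters; methodology must follow context.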
Finally, the process of collecting and using demographic data for anti-discrimination purposes must be subject to safeguards. Of course, demographic data about an individual, like race, can be very sensitive and potentially harmful if it’s shared or used in the wrong way, particularly in the process of obtaining employment or housing. Covered entities should be required to store such data separately from other data, and should only access and use this data for antidiscrimination purposes. The Council should consider including these safeguards in the SDAA.
We would be happy to meet with you and your offices to share more about the harms of algorithmic decision-making that we mentioned, as well as how, through both the SDAA and other legislation, the Council can tackle these issues.
Emily Paul, Project Director (firstname.lastname@example.org)
Natasha Duarte, Project Director (email@example.com)
Urmila Janardan, Policy Analyst (firstname.lastname@example.org)