Office of Science and Technology Policy
Submitted via email to BiometricRFI@ostp.eop.gov
RE: Request for Information on Public and Private Sector Uses of Biometric Technologies (FR Doc. 2021-21975)
Thank you for the opportunity to respond to this Request for Information on Public and Private Uses of Biometric Technologies. Upturn is a research and advocacy group that works to advance equity and justice in the design, governance, and use of technology.
We write in support of the Office of Science and Technology Policy’s efforts to protect people’s fundamental rights and opportunities as powerful institutions continue to use data-driven technologies to shape key decisions about people’s lives. These technologies, which include biometric technologies, often mirror and exacerbate historical racial and economic disparities in housing, employment, public benefits, the criminal legal system, and other areas of opportunity and wellbeing.
Across these areas, technologies are often used to make political decisions that can substantially affect people's material conditions, especially in the absence of careful attention and government regulation. Over the past few decades, these technologies have eroded existing legal protections, including longstanding civil rights laws that have not kept pace with technological change.
1. Biometrics are just one type of technology that is shaping people's rights and opportunities and deepening existing racial, economic, and other social disparities.
Biometric technologies are among the latest in a long line of technologies that purport to measure people’s attributes and predict future behavior, often with serious consequences. For decades, both governments and the private sector have used digital technologies to help determine people’s access to social resources, such as housing and government benefits; economic opportunities, including jobs, credit, and education; and basic autonomy and wellbeing, including healthcare and public safety.
Today's terminology places all of these technologies in the frame of "AI," which suggests more complexity and novelty than the issues often warrant. The most consequential technologies that are affecting people's rights and shaping their opportunities today are often not new — and the problems that these technologies exacerbate, such as racial, gender, disability, and other forms of discrimination and inequity, are longstanding. For instance, statistical risk assessment tools that states are adopting today for pretrial release decisions date back to at least the 1990s. Consumer credit scoring algorithms, like FICO, emerged in the 1980s.
The same concerns that animate today’s call for a new Bill of Rights for an “AI-powered world” were raised during the Obama administration, under the frame of “big data,” which was the fashionable term at the time. While biometric technologies, particularly recent applications of face recognition, may be an attractive starting point, this administration must consider the impact of a broader scope of technologies and data practices, most of which are not biometrics or AI.
Consider a job applicant who is applying online for an hourly job. Many large employers in the U.S. now use multipurpose “applicant tracking systems” to manage their hiring processes, which often include background checks and a variety of online skills and personality screening tests. Some personality tests used in this context purport to assess people’s trustworthiness and other traits, but in ways that reflect racist and ableist assumptions and anti-union motivations. While these aren’t complex technologies, they are among the ones that regulators like the Equal Employment Opportunity Commission should center in any examination of hiring discrimination and technology. To be sure, some vendors, like HireVue, have sought to introduce face or voice analysis technologies into employers’ interviewing processes, but the practical impact of these applications today remains quite limited.
Similarly, in other areas, many well-entrenched technology and data practices continue to have adverse impacts on Americans' everyday lives: the use of eviction and criminal records in tenant screening tools, increased digital tracking of families in the child welfare system and of workers in home care, law enforcement searches of people's cellphones, and so on. Practices and systems like these have harmed people for decades — scrutiny cannot be limited only to emerging tools like biometrics or AI.
2. It’s inadequate to address the harms of technology by examining technology in isolation. It’s vital to consider the broader social, political, and historical context in which technology is used.
Technology tends to amplify structural power — and technology’s impact depends not only on its design, but also on the broader social, political, and historical context in which it is used. While work to assess the statistical validity of a technology may provide important technical guideposts, additional perspectives are needed to more fully evaluate the potential effects of technology in various social contexts.
As a case in point, researchers and government agencies have worked to assess racial and gender disparities in popular face recognition programs. These studies have been indispensable to understanding these programs’ flaws. But even a technically “perfect” face recognition system would still perpetuate many social harms, including the harms of increased surveillance. This is why, last year, over 40 civil society organizations called for an end to law enforcement’s use of face recognition. Due to the long history of racial discrimination and abuse by law enforcement in the United States, which continues to this day, the organizations concluded that “in the context of policing, face recognition is always dangerous—no matter its accuracy.”
In other cases, the use of a technology may benefit some people while at the same time harming others. For example, the use of face recognition to verify the identities of people applying for unemployment benefits may speed up the process for those who have easy access to smartphones and for whom the software works, while creating barriers for others. Technology can also shift and widen power imbalances, such as when landlords install face recognition to control access to their buildings. The problems here are not only about the technology’s accuracy or validity, but are largely tied to existing social inequities and harms that technology further amplifies.
For these reasons, policy debates about the merits of certain technologies need to be rooted in particular social contexts, not in a vacuum. To that end, Upturn, ACLU, the Leadership Conference on Civil and Human Rights, and a coalition of other organizations recently urged relevant federal agencies to step up their regulatory and enforcement activities to specifically address technology’s role in discrimination in housing, hiring, and financial services.
3. Legal barriers such as trade secrets and non-disclosure agreements often hamper efforts to independently scrutinize the use of technologies. Even so, meaningful transparency is only the first step to addressing harms.
Too often, it’s difficult or impossible for researchers, advocates, investigative journalists, and communities to interrogate and challenge the use of technologies. While transparency alone will not mitigate the harms, it is an important baseline upon which people can begin to ask questions about how technologies are used and the potential ways they create or exacerbate inequities.
One way that technology shifts power is through opacity. While opacity is often attributed to the complex nature of new technologies, such as machine learning models, opacity is often also created or furthered through legal and policy choices that put corporate interests above people’s fundamental rights.
For instance, claims of trade secrecy have prevented criminal defendants from scrutinizing evidence created by potentially flawed probabilistic DNA analysis software used by law enforcement. At least two courts have ordered disclosure of the software’s source code to uphold the constitutional rights of criminal defendants to confront the evidence against them. Such trade secrets claims have been made not only by private vendors like TrueAllele, but also by government agencies seeking to shield their decision-making tools from independent scrutiny. In a similar vein, private vendors and government agencies have used non-disclosure agreements to hide the mere fact that certain technologies are in use.
At the state level, one step forward has been Illinois’s Biometric Information Privacy Act (BIPA), which requires companies to provide disclosure and obtain individual consent before collecting and using biometric information, and prohibits companies from selling or further sharing biometric data without consent. While notice-and-consent can place undue burdens on individuals and may be insufficient to address systemic harms, BIPA gave rise to a number of high-profile class action lawsuits and settlements seeking to control how biometrics are used.
4. Regulators and enforcement agencies must actively measure, audit, and address systemic discrimination where technologies are used, and consider non-technological alternatives.
Inferential and other predictive technologies make probabilistic guesses, and they inevitably make mistakes. They also often fail for more prosaic reasons, such as inequities in access to or familiarity with smartphones and other technological requirements. When these technologies are used to mediate high-stakes decisions, such as determining access to crucial government services and benefits, these failures are not only frustrating and time-consuming but in some cases life-threatening. Even when these technologies work, they can introduce friction and rigidity into processes, ultimately hindering people's access to vital resources and opportunities. These barriers disproportionately harm people of color, poor people, disabled people, and others.
During the pandemic, as millions of workers sought unemployment benefits, many states began to adopt face recognition tools to verify people's identities. But this created significant burdens for many who either did not have access to smartphones, or whose identities the software failed to match. Many were then required to wait on hold for hours to resolve issues, and some — without alternate options or timely redress — abandoned the process altogether in frustration, giving up on benefits that they deserved to receive. Importantly, because of existing disparities across race, class, and geography in access to smartphones and broadband internet, these burdens too often fell on those who were most vulnerable and most in need of benefits.
In another context, the growing popularity of e-proctoring software — from K-12 classrooms to bar examinations — creates systems that often fail to verify the identities of Black students and other students of color, arbitrarily and unfairly flag some students for cheating, and set up rigid behavioral rules that punish students for getting up to use the bathroom or looking around the room. While these are problems for any student, such software can impose much worse effects on disabled students, “which can also exacerbate underlying anxiety and trauma.” Black students and other students of color, who are already more likely to face punishment in school, are especially vulnerable to long-lasting negative effects of increased monitoring.
Rarely are there alternatives that allow students or unemployed people to opt out of these mainstream processes and avoid the coercive effects of technology. These are systemic harms that require systemic interventions, but it's often difficult for individuals who encounter harms to show broader discriminatory patterns. Regulators and enforcement agencies must play a stronger and more active role in assessing whether technologies are exacerbating existing inequities in key areas of justice and opportunity. One way to do this is by using demographic data to measure and audit systems for disparate impact. These are long-standing civil rights enforcement measures that can also be used to assess the impact of new technologies.
These are urgent issues that the Biden administration must address. In July 2021, Upturn wrote a letter to the Office of Science and Technology Policy (OSTP), together with 26 other groups, urging OSTP to work across the federal government to “identify how technology can drive racial inequities, and help agencies devise new policies, regulations, enforcement activities, and guidance that address these barriers.” Attached to the letter were three memos sent to federal agencies outlining concrete recommendations to address technology’s role in housing, hiring, and financial services discrimination. While some progress has been made at the agency level, much more remains to be done. OSTP must work to support the administration in developing a proactive and coordinated policy agenda to tackle these challenges.
Thank you for considering these comments. We welcome further conversations on these important issues. If you have any questions, please contact Emily Paul (Project Director, firstname.lastname@example.org) and Harlan Yu (Executive Director, email@example.com).