August 7, 2023
The Honorable Joseph R. Biden
President of the United States
The White House
The Honorable Kamala D. Harris
Vice President of the United States
The White House
RE: Advancing Anti-Discrimination Testing in an Artificial Intelligence Executive Order
In announcing voluntary commitments from several artificial intelligence companies, the Biden-Harris administration noted it is currently working on developing an Executive Order “to help America lead the way in responsible innovation” in artificial intelligence. As the administration considers the contents of an Executive Order on artificial intelligence, we, the undersigned civil rights, technology, policy, and research organizations, call on the administration to continue centering civil rights protections. The administration has played a key role in consistently elevating civil rights protections for artificial intelligence and related technologies. The forthcoming Executive Order offers the administration an opportunity to build upon the Blueprint for an AI Bill of Rights, Executive Order 14091 (“Further Advancing Racial Equity and Support for Underserved Communities Through the Federal Government”), a stream of agency actions, NIST’s AI Risk Management Framework, and the recently secured voluntary corporate commitments.
Among other actions, the forthcoming Executive Order offers the administration an opportunity to launch a new framework of testing, evaluation, and ongoing monitoring of algorithmic systems in civil rights areas. Given the foreseeability and pervasiveness of algorithmic harms, the administration should consider actions that shift the burden onto companies that develop and use AI tools, requiring them to take measures to detect and address algorithmic discrimination — particularly if they operate in key civil rights areas. The Executive Order also offers the administration an opportunity to lead by example, by setting policy for the federal government’s development, procurement, use, and funding of artificial intelligence that is rooted in the AI Bill of Rights.
Within the forthcoming Executive Order on artificial intelligence, the administration should:
Direct agencies to consider opportunities that would encourage or require companies to perform regular anti-discrimination testing of their systems used in sensitive civil rights contexts. To support efforts that would require algorithmic systems used in sensitive civil rights domains to be evaluated for discriminatory effects on an ongoing basis, the Executive Order should direct agencies to consider rulemaking, guidance, policies, and all other available opportunities that would encourage or require companies that design or deploy algorithmic systems used in sensitive civil rights contexts to collect, infer, and protect sensitive demographic information for anti-discrimination testing purposes and to routinely evaluate their algorithmic systems for disparate effects on a prohibited basis.
Direct agencies to consider opportunities that would shift the burden to companies to regularly search for less discriminatory alternative models. Many policy proposals seek transparency and audits of AI tools for discriminatory outcomes; these are beneficial steps, but they often stop short of prescribing what should happen once discrimination is found. The Executive Order should push one step further, by directing agencies to explore how companies operating in covered civil rights areas can affirmatively search for and adopt less discriminatory models, both before and after deployment. Additionally, the Executive Order should create an Interagency Working Group that studies techniques to discover less discriminatory alternative models and provides recommendations to the Assistant to the President for Domestic Policy on potential reasonable and appropriate measures companies can take to search for and implement less discriminatory alternative algorithms.
Establish a dedicated office inside the Civil Rights Division of the Department of Justice to solidify and expand the federal government’s own anti-discrimination testing capabilities to uncover algorithmic discrimination. The federal government has a long history of using undercover testing to uncover evidence of discrimination by landlords, lenders, and others. Just as the federal government stood up anti-discrimination testing efforts to detect discrimination in the physical world, it must reinvent its capabilities to detect discrimination in digital systems. This requires a sustained and directed effort, as well as new staff capacity, resources, and expertise. An Office of Technology inside the Civil Rights Division of the Department of Justice should be charged with implementing and expanding anti-discrimination testing capabilities, providing assistance on related cases, and undertaking other efforts to combat algorithmic discrimination, as well as coordinating with the relevant technology offices at agencies tasked with enforcing relevant civil rights laws. The Office should develop best practices and procedures for conducting anti-discrimination testing of algorithmic systems, including the development of new methods to uncover discrimination and best practices on the use of inference methodologies to infer protected class status.
Direct the Office of Management and Budget to require anti-discrimination testing of algorithmic systems, as well as searches for less discriminatory alternative algorithms, in its forthcoming guidance on federal agency use of artificial intelligence. Many civil rights groups have previously called on the administration to make the Blueprint for an AI Bill of Rights binding administration policy, and to implement it in part through the forthcoming OMB guidance. The administration has previously noted that this guidance would offer “specific policies for federal departments and agencies to follow in order to ensure their development, procurement, and use of AI systems centers on safeguarding the American people’s rights” and would “serve as a model for state and local governments, businesses and others to follow in their own procurement and use of AI.” When the federal government develops, procures, or uses algorithmic systems in covered civil rights areas, it must ensure that those systems are regularly tested for disparate effects on a prohibited basis, as called for by the administration's AI Bill of Rights. Similarly, it must ensure that developers maintain reasonable measures to search for less discriminatory alternative models on an ongoing basis.
In order to ensure public accountability of these measures, the AI Use Case Inventories required by Executive Order 13960 should be expanded to include summaries of any demographic information, associated outcomes, and descriptions of any disparity assessments and mitigations undertaken. The National AI Initiative Office should be charged with creating an annual report assessing agencies on these AI use cases based on their adherence to the AI Bill of Rights.
Thank you for your continued attention to these matters. For any questions or further discussion, please contact Logan Koepke (Project Director, email@example.com) and Harlan Yu (Executive Director, firstname.lastname@example.org).
Algorithmic Justice League
Data & Society Research Institute
Electronic Privacy Information Center
Fight for the Future
CC:
Jeff Zients, Assistant to the President and Chief of Staff
Bruce Reed, Assistant to the President and Deputy Chief of Staff
Neera Tanden, Assistant to the President and Domestic Policy Advisor, Domestic Policy Council
Arati Prabhakar, Assistant to the President for Science and Technology, Director of the White House Office of Science and Technology Policy
Shalanda Young, Director, Office of Management and Budget