
Comments to OMB on Responsible AI Procurement
April 29, 2024

On behalf of the Center for Data Innovation (datainnovation.org), I am pleased to submit this response to the Office of Management and Budget’s (OMB) request for information on the responsible procurement of artificial intelligence (AI) by federal agencies.[1]

The Center for Data Innovation studies the intersection of data, technology, and public policy, and formulates and promotes pragmatic public policies designed to maximize the benefits of data-driven innovation in the public and private sectors. It educates policymakers and the public about the opportunities and challenges associated with data, as well as technology trends such as open data, artificial intelligence, and the Internet of Things. The Center is part of the Information Technology and Innovation Foundation (ITIF), a nonprofit, nonpartisan think tank.

Summary

OMB’s memorandum “Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence” (the AI M-memo), published on March 28, 2024, requires agencies to follow a set of minimum practices when using safety-impacting and rights-impacting AI, or else stop using that AI in their operations.[2] To ensure that federal contracts for the acquisition of an AI system or service align with this guidance, OMB should support the development of voluntary standard terms for AI contracts to make procurement more efficient, and it should expand access to federal contracts to as large and diverse a pool of vendors as possible so federal agencies can access the best systems.

Comments

5. What access to documentation, data, code, models, software, and other technical components might vendors provide to agencies to demonstrate compliance with the requirements established in the AI M-memo? What contract language would best effectuate this access, and is this best envisioned as a standard clause, or requirements-specific elements in a statement of work?

The AI M-memo lays out several requirements that federal agencies must meet to use AI systems, including completing an impact assessment covering the AI system’s purpose, risks, and data quality; testing the AI in real-world conditions; independently evaluating its performance; conducting ongoing monitoring and risk evaluations; mitigating emerging risks; training workers to understand the AI system’s output; and providing public notice and clear documentation about the AI system’s use.

To meet these requirements, federal agencies will likely need information on technical components and documentation, especially around data, from any vendors they contract with, because vendors and agencies typically collaborate throughout the development of AI solutions, from transforming raw data to deploying trained models. The problem is that there are no widely accepted definitions of key terms in licensing agreements, as a 2022 report from the Global Partnership on AI (GPAI) explains.[3] The report recommends establishing common definitions of original input data, processed data, untrained models, and trained models to make it easier for contracting parties to reach agreements.

Beyond standard terms for contract nomenclature, it would be useful for agencies to have a standard set of provisions they could include in contracts with vendors to make contracting more efficient. The European Union has already done work on this that should be instructive to the United States. Legal expert Jeroen Naves has collaborated with the Directorate-General for Internal Market, Industry, Entrepreneurship and SMEs (DG GROW) and the Directorate-General for Communications Networks, Content and Technology (DG CNECT) to create standard AI procurement clauses that public organizations may use when contracting with AI vendors and that align with the EU’s AI Act.[4] For instance, one proposed clause public organizations could use is:

“The Supplier ensures that the Data Sets used in the development of the AI System are relevant, representative, free of errors and complete. The Supplier ensures that Data Sets have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons on which the AI System is intended to be used. These characteristics of the Data Sets may be met at the level of individual data sets or a combination thereof.”[5]

This clause is meant to help organizations ensure that the systems they procure from vendors align with Article 10 of the approved EU AI Act, which requires that a high-risk AI system’s training, validation, and testing data sets be relevant, representative, free of errors, and complete.[6] But the United States has not passed similar legislation with these requirements, and in many cases, providing error-free or complete data is not feasible, practical, or necessary.

OMB should support the creation of standard clauses that align with the requirements in the AI M-memo, just as the EU is doing for the requirements of its AI Act. The EU is not creating its standard clauses from scratch but is instead building on the standard clauses for the procurement of algorithmic systems that the City of Amsterdam developed in 2018. The authors note that they are “drafting the European version of the Amsterdam Clauses,” and the United States should likewise build on this work to create an American variant of these common clauses that aligns with the AI M-memo.[7]

Developing effective standard terms requires a collaborative, multistakeholder process, as the OECD AI policy lab points out.[8] The EU is supporting peer review of the draft clauses by external experts through public workshops and requests for comments, and OMB should follow suit for any standard clauses it creates.[9]

Finally, it is important to note that federal agencies will likely engage vendors in a wide range of arrangements and use cases for AI. The goal should not be to create one-size-fits-all standard terms, but rather a menu of provisions and agreements that federal agencies can use to streamline AI procurement within the bounds laid out in the AI M-memo.

6. Which elements of testing, evaluation, and impact assessments are best conducted by the vendor, and which responsibilities should remain with the agencies?

When considering which elements of testing, evaluation, and impact assessments should be handled by the vendor versus the agencies, it is crucial to understand their respective roles within the broader AI deployment process.

The vendor, responsible for developing and supplying the AI system, is generally best positioned to conduct certain aspects of testing and evaluation. This includes performance testing to ensure the system meets specified criteria, such as accuracy, reliability, and efficiency. Additionally, the vendor is in a prime position to provide insights into the system's design, functionality, and limitations, as well as the provenance of the datasets used to train and validate the system.

On the other hand, agencies tasked with deploying and utilizing the AI system within their operational context have a unique perspective and responsibilities. They are generally best suited to conduct impact assessments that evaluate the system's potential implications and risks within their specific domains. This includes assessing the system's compatibility with existing workflows, its potential impact on stakeholders, and any ethical or regulatory considerations that may arise. Agencies are also responsible for evaluating the system's performance in real-world scenarios and determining its suitability for their operational needs.

10. How might OMB ensure that agencies procure AI systems or services in a way that advances equitable outcomes and mitigates risks to privacy, civil rights, and civil liberties?

Many of the requirements outlined in the AI M-memo, such as impact assessments, real-world testing, and ongoing risk monitoring, will allow agencies to assess the performance of AI systems to detect and respond to potential threats to privacy, civil rights, and civil liberties.

In addition, the larger and more diverse the pool of vendors that federal agencies can contract with, the better access they will have to the best systems, including those that mitigate risks to privacy, civil rights, and civil liberties.

Most federal contracts for AI services are awarded to companies concentrated on the East Coast, close to where federal agencies are located, even though many top U.S. AI firms are based on the West Coast. According to a 2021 report from the General Services Administration (GSA), approximately 87 percent of federal contracts awarded for robotic process automation went to companies in Virginia and New York. This is because companies located close to federal agencies are more likely to be those already familiar with the intricacies of contracting with the federal government. That does not mean they necessarily have the best systems, just that they better understand how to navigate the notoriously complex process.

OMB should expand access to new contractors as much as possible by encouraging the AI Center of Excellence (COE) within GSA to develop an all-encompassing procurement website for federal AI contracts. One-stop e-procurement websites and e-quoting allow private sector firms to easily locate and apply for government contracts. OMB should also encourage the AI COE to collate and make available more recent data on the number of AI contracts awarded and where the winning companies are located.

Endnotes

[1] “Request for Information: Responsible Procurement of Artificial Intelligence in Government,” Federal Register, March 29, 2024, https://www.federalregister.gov/documents/2024/03/29/2024-06547/request-for-information-responsible-procurement-of-artificial-intelligence-in-government.

[2] “Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence,” Memorandum for the Heads of Executive Departments and Agencies, the White House, March 28, 2024, https://www.whitehouse.gov/wp-content/uploads/2024/03/M-24-10-Advancing-Governance-Innovation-and-Risk-Management-for-Agency-Use-of-Artificial-Intelligence.pdf.

[3] Lee Tiedrich et al., “Preliminary Report on Data and AI Licensing,” (The Global Partnership on Artificial Intelligence, November 2022), https://gpai.ai/projects/innovation-and-commercialization/intellectual-property-expert-preliminary-report-on-data-and-AI-model-licensing.pdf.

[5] “Roundtables on Procurement Clauses of AI,” Living in EU, accessed April 25, 2024, https://living-in.eu/events/roundtables-procurement-clauses-ai.

[6] “Article 10: Data and Data Governance,” EU Artificial Intelligence Act, accessed April 25, 2024, https://artificialintelligenceact.com/title-iii/chapter-2/article-10/.

[7] Georges Lobo, “Interview of Lydia Prinsen from the City of Amsterdam and Jeroen Naves from Pels Rijcken,” Public Sector Tech Watch, European Commission, September 30, 2022, https://joinup.ec.europa.eu/collection/public-sector-tech-watch/news/interview-ai-procurement-clauses.

[8] Lee Tiedrich, “When AI generates work, standard contractual terms can help generate value and clarity,” OECD.AI Policy Observatory, June 6, 2023, https://oecd.ai/en/wonk/contractual-terms.

[9] “Procurement of AI,” European Commission.
