The US Office of Management and Budget's Request for Comments on the Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence Draft Memorandum

The Office of Management and Budget (OMB) is seeking public comment on a draft memorandum titled Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence (AI). As proposed, the memorandum would establish new agency requirements in the areas of AI governance, innovation, and risk management, and would direct agencies to adopt specific minimum risk management practices for uses of AI that impact the rights and safety of the public. The full text of the draft memorandum is available for review at https://www.ai.gov/input and https://www.regulations.gov.

Through this Request for Comment, OMB hopes to gather information on the questions posed below. However, this list is not intended to limit the scope of topics that may be addressed. Commenters are invited to provide feedback on any topic believed to have implications for the content or implementation of the proposed memorandum.

OMB is requesting feedback related to the following:

1. The composition of Federal agencies varies significantly in ways that will shape how they approach governance. An overarching Federal policy must account for differences in an agency's size, organization, budget, mission, organic AI talent, and more. Are the roles, responsibilities, seniority, position, and reporting structures outlined for Chief AI Officers sufficiently flexible and achievable for the breadth of covered agencies?

2. What types of coordination mechanisms, in either the public or private sector, would be particularly effective for agencies to model in their establishment of an AI Governance Body? What are the benefits or drawbacks of agencies establishing a new body to perform AI governance versus updating the scope of an existing group (for example, agency bodies focused on privacy, IT, or data)?

3. How can OMB best advance responsible AI innovation?

4. With adequate safeguards in place, how should agencies take advantage of generative AI to improve agency missions or business operations?

5. Are there use cases for presumed safety-impacting and rights-impacting AI (Section 5(b)) that should be included, removed, or revised? If so, why?

6. Do the minimum practices identified for safety-impacting and rights-impacting AI set an appropriate baseline that is applicable across all agencies and all such uses of AI? How can the minimum practices be improved, recognizing that agencies will need to apply context-specific risk mitigations in addition to what is listed?

7. What types of materials or resources would be most valuable to help agencies, as appropriate, incorporate the requirements and recommendations of this memorandum into relevant contracts?

8. What kind of information should be made public about agencies' use of AI in their annual use case inventory?

Written comments must be received on or before December 5, 2023.