
AI Accountability and Governance in Focus: Complaint Filed in DC District Court for AI Transparency in Federal Government, and Senate NDAA’s Strategic Vision for AI — AI: The Washington Report

  • On October 1, the Democracy Forward Foundation, a nonprofit advocating for government transparency, filed a complaint in the US District Court for the District of Columbia against the Office of Personnel Management (OPM), General Services Administration (GSA), Department of Housing and Urban Development (HUD), and Office of Management and Budget (OMB).
  • The complaint alleges that these federal agencies failed to respond to Freedom of Information Act (FOIA) requests submitted in June and July, which sought records on the use of artificial intelligence in federal rulemaking and regulatory processes.
  • The lawsuit highlights both the opportunity and the risk that AI presents in transforming federal governance. The complaint specifically points to the Trump administration’s “deregulation agenda supported by the use of artificial intelligence,” and how these federal agencies have been utilizing AI in processes such as rulemaking, regulatory streamlining, and internal agency functions.
  • On October 9, the Senate passed the National Defense Authorization Act (NDAA) for FY2026, featuring notable AI-related provisions and amendments. These reflect a growing recognition of AI as a foundational technology for national defense, particularly in cybersecurity, logistics, operational readiness, and strategic deterrence.
  • Despite pushback from David Sacks, the White House’s lead advisor on AI and crypto, who advocated for the removal of the Guaranteeing Access and Innovation for National Artificial Intelligence Act of 2026 (GAIN Act) from the NDAA, the amendment was ultimately retained, signaling strong congressional support for prioritizing domestic access to critical AI hardware. The GAIN Act would require chipmakers to prioritize domestic customers before selling advanced semiconductors abroad. 
     

Democracy Forward Files Complaint for AI Transparency in Federal Government

On October 1, the Democracy Forward Foundation, a nonprofit advocating for government transparency, filed a complaint in the US District Court for the District of Columbia against the Office of Personnel Management (OPM), General Services Administration (GSA), Department of Housing and Urban Development (HUD), and Office of Management and Budget (OMB). The complaint alleges that these agencies failed to respond to Freedom of Information Act (FOIA) requests submitted in June and July, which sought records on the use of artificial intelligence in federal rulemaking and regulatory processes. Despite acknowledging receipt of the requests, the agencies have allegedly not provided the requested information within the legally mandated timeframe. This legal action highlights a growing tension in public administration: the promise of AI to modernize governance versus the risks of deploying such tools without adequate transparency or oversight.

The lawsuit highlights both the opportunity and the risk that AI presents in transforming federal governance. The complaint specifically points to the Trump administration’s “deregulation agenda supported by the use of artificial intelligence,” and how these federal agencies have been utilizing AI in processes such as rulemaking, regulatory streamlining, and internal agency functions. This aligns with the Trump administration’s AI Action Plan, which explicitly calls for the removal of “onerous regulation” and encourages rapid AI adoption across government to accelerate innovation and efficiency, as we’ve previously covered. However, the lawsuit also exposes a critical tension: while the administration’s AI Action Plan promotes AI as a tool for national competitiveness and administrative modernization, it does so with limited emphasis on transparency or public accountability. This raises concerns that AI is being used not only to optimize governance but also to quietly reshape regulatory frameworks in ways that may sideline public input and weaken institutional checks, as the Democracy Forward complaint points out.

The case also underscores significant risks. Chief among them is the lack of transparency: AI systems often operate as “black boxes” with limited public insight into their algorithms, training data, or decision logic. Democracy Forward points out that this is especially troubling in contexts such as OPM’s use of AI to summarize public comments on the proposed “Improving Performance, Accountability and Responsiveness in the Civil Service” rule, which would reclassify certain policy-influencing roles into a new “Schedule Policy/Career” category, effectively making them at-will positions. The nonprofit also cites examples where GSA has been “rolling out an artificial intelligence evaluation suite that enables federal agencies to experiment with and adopt artificial intelligence at scale.” Additionally, the complaint describes an OMB AI program, the SweetREX Regulation AI Plan Builder, “developed by affiliates of the Department of Government Efficiency (“DOGE”) working out of HUD, which is meant to expedite the process for reviewing and updating regulations.”

The agencies’ alleged failure to respond to FOIA requests only deepens concerns about accountability. Legal and constitutional questions also arise, particularly about whether delegating aspects of rulemaking to AI violates the Administrative Procedure Act (APA), which mandates public participation and reasoned decision-making.

The implications of this case extend to other stakeholders in the AI industry as well. For regulated industries, the opaque use of AI introduces regulatory uncertainty and the need to closely monitor evolving compliance obligations. Legal and policy professionals face a new frontier in administrative litigation, with opportunities to shape emerging standards for algorithmic transparency and accountability. Meanwhile, technology vendors may see increased demand for AI tools in federal procurement but may also face reputational and legal risks if those tools are misused or lack safeguards. As Skye Perryman, president and CEO of Democracy Forward, stated in the organization’s press release: “The public has a right to know the extent to which the administration has used unreliable and unproven AI tools to expand its agenda of undermining regulations that protect people.” The complaint serves as a critical inflection point in the evolving relationship between AI, governance, and democratic accountability.

The four agencies must respond to the lawsuit within 30 days of service of the summons.

Senate Advances NDAA FY2026 with AI-Related Provisions

On October 9, the Senate passed the National Defense Authorization Act (NDAA) for FY2026, featuring notable AI-related provisions and amendments. These reflect a growing recognition of AI as a foundational technology for national defense, particularly in cybersecurity, logistics, operational readiness, and strategic deterrence. The provisions aim to accelerate AI adoption, secure supply chains, and institutionalize oversight mechanisms.

The AI-related provisions fall into three main categories: AI Governance, Strategy, and Policy; AI for Specific Operations and Warfare; and AI Infrastructure, Resources, and Innovation.

As part of the Senate NDAA’s first pillar — AI Governance, Strategy, and Policy — Section 1623 introduces a provision establishing a cross-functional team tasked with developing a standardized framework for assessing AI models. This framework will guide the evaluation of all major Department of Defense (DoD) AI systems, which must be assessed by January 1, 2028. Additionally, the bill proposes the creation of an Artificial General Intelligence (AGI) Steering Committee to examine the military applications and strategic implications of AGI. The committee is required to deliver an adoption strategy for the DoD no later than April 1, 2026. These provisions reflect a shift from ad hoc experimentation toward institutionalized oversight and long-term planning. By setting clear deadlines and formalizing governance structures, the Senate signals its intent to treat AI not just as a tactical tool, but as a strategic capability requiring rigorous evaluation and policy alignment.

The Senate NDAA also emphasizes the integration of AI into specific defense operations and warfare scenarios. One provision mandates the use of commercial AI tools for logistics tracking, planning, and analytics in at least two DoD exercises. Another calls for accelerating the development and deployment of AI-driven software to support mission planning and assess the military implications of AGI. These measures reflect a broader strategic intent to embed AI into core operational functions, moving beyond experimentation toward practical, mission-oriented applications.

This approach aligns with the Trump administration’s AI agenda, as outlined in the White House AI Action Plan, which prioritizes global leadership in AI capabilities. The Senate NDAA reinforces this vision by identifying AI as a key area in the strategic partnership between the United States and Taiwan, a move that not only underscores AI’s geopolitical significance but also signals a commitment to strengthening alliances through technological cooperation.

To support these ambitions, the bill includes provisions aimed at building the necessary infrastructure for AI innovation and deployment. It proposes the establishment of an Army program to advance robotic automation in munitions manufacturing, enhancing production efficiency and resilience. Additionally, the creation of an AI Sandbox Environment Task Force is intended to streamline experimentation within the DoD by consolidating requirements, cataloging existing solutions, and simplifying approval processes. These initiatives suggest a recognition that operationalizing AI at scale requires not only policy and strategy but also robust technical and organizational foundations.

Despite pushback from David Sacks, the White House’s lead advisor on AI and crypto, who advocated for the removal of the Guaranteeing Access and Innovation for National Artificial Intelligence Act of 2026 (GAIN Act) from the NDAA, the amendment was ultimately retained, signaling strong congressional support for prioritizing domestic access to critical AI hardware. The GAIN Act would require chipmakers to prioritize domestic customers before selling advanced semiconductors abroad. Its inclusion underscores a growing legislative consensus that supply chain resilience and strategic control over advanced AI technologies are essential to maintaining US leadership in AI, even amid internal policy disagreements within the executive branch.

At the same time, the Trump administration’s reversal of its initial export ban and repeal of the Biden-era AI Diffusion rule suggest a pivot toward a more industry-friendly posture. This shift appears aimed at addressing stakeholder concerns that overly restrictive export controls could hinder domestic innovation and global competitiveness. In this context, the inclusion of the GAIN Act in the Senate’s NDAA is particularly notable. While the amendment aligns with the administration’s stated goal of securing US dominance in AI by strengthening domestic capabilities, it also introduces a more protectionist mechanism: a “first right of refusal” provision that restricts exports until domestic demand is met. This provision marks a departure from the administration’s recent deregulatory moves and reflects a deeper tension between promoting open-market innovation and safeguarding strategic technologies. The inclusion of the GAIN Act in the NDAA signals continued legislative resistance to policies that could enable strategic competitors to gain access to cutting-edge AI hardware.

The NDAA passed the Senate with strong bipartisan support in a 77-20 vote and now heads to a joint conference committee, where lawmakers will reconcile differences with the House version. Key areas of divergence include overall budget levels, acquisition reform proposals, and provisions related to personnel policy. Notably, the House NDAA does not contain a counterpart to the Senate’s GAIN Act, which prioritizes domestic access to AI chips over exports. This omission could become a focal point in negotiations, especially given the growing bipartisan concern over China’s access to advanced AI hardware.

We will continue to monitor, analyze, and issue reports on these developments. Please feel free to contact us if you have questions about current practices or how to proceed.

 


Content Publishers

Bruce D. Sokler is a Mintz antitrust attorney. His antitrust experience includes litigation, class actions, government merger reviews and investigations, and cartel-related issues. Bruce focuses on the health care, communications, and retail industries, from start-ups to Fortune 100 companies.

Alexander Hecht

Executive Vice President & Director of Operations

Alex Hecht is a trusted attorney and policy strategist with over 20 years of experience advising clients across a broad range of industries on how to navigate complex policy environments. His strategic insight and hands-on experience in both legislative and regulatory arenas empower clients to advance their priorities with clarity and confidence in an evolving policy landscape.

Christian Tamotsu Fjeld

Senior Vice President

Christian Tamotsu Fjeld draws on two decades of Capitol Hill experience to support clients in building relationships, shaping policy, and engaging effectively with the federal government. His experience working with Congress and his insights help clients anticipate federal developments and advance their priorities with clarity and confidence.

Nicole Y. Teo

Nicole Y. Teo is a Mintz Project Analyst based in Washington, DC.