The National AI Commission Act — AI: The Washington Report

Welcome to this week’s issue of AI: The Washington Report, a joint undertaking of Mintz and its government affairs affiliate, ML Strategies (MLS). The accelerating advances in artificial intelligence (“AI”), and the practical, legal, and policy issues AI creates, have understandably intensified the federal government’s interest in AI and its implications. In our weekly reports, we hope to keep our clients and friends abreast of Washington-focused legislative, executive, and regulatory activity on AI. Other Mintz and ML Strategies subject matter experts will continue to discuss and analyze other aspects of what could be characterized as the “AI Revolution.”

Today’s report focuses on the National AI Commission Act, which was introduced on June 20, 2023, and is principally co-sponsored by Representatives Ted Lieu (D-CA-36) and Ken Buck (R-CO-4). Our key takeaways are:

  1. The National AI Commission Act would establish a bipartisan commission of 20 experts to draft a proposal for a comprehensive regulatory framework on AI.
  2. This National AI Commission would review existing and proposed regulatory efforts in the United States and abroad, selecting aspects to be incorporated into a single framework.
  3. The future of the National AI Commission Act is uncertain in light of Senate Majority Leader Chuck Schumer’s recent announcement of his SAFE Innovation in the AI Age legislative strategy and the lack of any indication of support for the legislation from House leadership.

Artificial Intelligence and Regulation Update – The National AI Commission Act

On June 20, 2023, a bipartisan and bicameral group of lawmakers introduced the National AI Commission Act. The bill would establish a commission of 20 experts drawn from a diverse array of fields and direct that commission to formulate a legislative framework on AI. Bill sponsor Representative Ted Lieu (D-CA-36) and co-sponsor Representative Ken Buck (R-CO-4) spoke at a June 22, 2023 forum on AI regulation at Georgetown Law School. Though the panel clarified some aspects of the National AI Commission Act, questions remain about the future of this bill in light of Senator Chuck Schumer’s (D-NY) announcement of the SAFE Innovation in the AI Age legislative strategy (“SAFE Framework”), as well as the lack of any indication of support for the legislation from House leadership.

Analysis of the National AI Commission Act

The National AI Commission Act is not itself a regulatory framework for AI. The bill seeks to establish a “National AI Commission” (“Commission”), an independent body located in the legislative branch tasked with formulating a proposal for comprehensive regulation of AI. As articulated by Representative Lieu during the Georgetown Law panel, the complexity of AI technology and the pace of innovation require that Congress undertake a period of deliberation and expert consultation before implementing AI regulation. “There’s so much innovation that’s happening so quickly that if we were to say right now, ‘Let’s regulate AI,’ I’m not sure we could even define it. We wouldn’t even know what we’re regulating.”

The bill directs the Commission to “ensure, through its review and recommendations...that through regulation the United States” is achieving three major aims related to AI.

  1. Mitigating the risks and potential harms associated with AI. As articulated by Congressman Buck, the private sector will “figure out how to innovate. [Lawmakers] just need to make sure that what they innovate doesn’t…infringe on our rights.”
  2. Protecting US leadership in AI R&D. With regard to AI innovation, Buck asserted that the US is “ahead of the rest of the world right now, and we’ve got to stay ahead of the rest of the world.”
  3. Establishing guardrails to ensure that AI systems align with American values. “A lot of what we’re going to use our AI to do is to counter the evil that will come from other places, and frankly from within our country also,” said Buck. 

Commission Responsibilities

To achieve these goals, the bill assigns the Commission three primary responsibilities.

First, the Commission would conduct a review of the “Federal Government’s current approach to artificial intelligence oversight and regulation.” Although the US does not yet have a comprehensive regulatory framework on AI, Congress has delegated AI oversight responsibility to a number of bodies, including the National Artificial Intelligence Initiative Office. Furthermore, as we have indicated in previous editions of this series, executive branch efforts on AI R&D planning have been underway since the Obama administration. The first responsibility of the Commission would be to survey these existing efforts and select aspects of these initiatives to be included in a comprehensive regulatory framework.

As an example of an existing initiative that the Commission could incorporate into its framework, Lieu singled out the National Institute of Standards and Technology’s (“NIST”) Artificial Intelligence Risk Management Framework. “NIST has a pretty good AI risk framework, and if experts...think that’s a pretty good framework we don’t really have to recreate the wheel. Maybe we try to make parts of that [framework] more mandatory…instead of voluntary.”

Second, the Commission would recommend “any governmental structures that may be needed to oversee and regulate [AI] systems.” The bill is ambiguous as to the possible forms of regulation this Commission may recommend. Some regulators have called for the creation of a new agency to oversee AI, while others advocate delegating AI oversight authority to existing agencies. Representatives Lieu and Buck indicated during their Georgetown Law panel that the National AI Commission Act is purposefully unspecific on this matter. “We’re creating this commission to…make recommendations, because there are a variety of different ways that” an AI bill could be enforced, Lieu stated.

Third, the Commission would develop a “binding risk-based approach to regulate and oversee artificial intelligence applications through identifying applications with unacceptable risks, high or limited risks, and minimal risks.” This “risk-based approach” directly mirrors the one taken by the European Union’s (“EU”) Artificial Intelligence Act, which uses the same phrase to describe its strategy of regulating AI use cases on the basis of their “risk to the health and safety or fundamental rights of natural persons.”

While the Commission would have the example of the EU’s approach in this regard, Rep. Lieu suggested that it may only selectively incorporate aspects of the EU’s Artificial Intelligence Act. According to Rep. Lieu, a benefit of establishing the Commission is that the body will be able to “look at what other places have done” with regard to AI regulation. With the Commission, the United States will have “time to assess” the performance of the EU’s Artificial Intelligence Act and, “if it’s really great then we might try to copy it,” incorporating aspects of the regulation that seem effective and rejecting those that are not.

Commission Reports

The Commission would formalize its findings through a series of reports to be submitted to Congress and the president. Lieu touted the report structure as a more transparent and effective alternative to closed-door meetings between individual members of Congress and AI experts. “I think this is a transparent way for the American public and members of Congress to…see the best ideas and have them vetted. Then Congress can look at those recommendations. We can adopt them, we can reject them, we can modify them.” The bill mandates that the Commission draft three reports, each to be submitted at six-month intervals.

  1. Interim report. Not more than six months after the appointment of all commissioners, the Commission would release an interim report containing its findings and “proposals for any urgent regulatory or enforcement actions.”
  2. Final report. Not more than six months after the submission of the interim report, the Commission would release its final report, which would “constitute the Commission’s findings and recommendations for a comprehensive, binding regulatory framework.”
  3. Follow-up report. Not more than one year after the submission of the final report, the Commission will release a follow-up report containing “any new findings and revised recommendations.”

Within 30 days of the submission of the follow-up report, the Commission would terminate.

Commission Structure

Much of the bill details the complex structure by which the president and senior leadership in the House and Senate would appoint the members of the Commission. The bill appears to take great pains to ensure that the Commission members possess a diverse range of expertise and that the executive branch and Congressional leadership of both parties are invested in the success of the Commission.

In broad terms, the Commission would consist of 20 members, with each party appointing 10 members “to ensure bipartisanship.” The president, along with the senior-most Democratic and Republican leaders in both the House and the Senate, would have roles in appointing commissioners. Appointees to the Commission would have to possess a “demonstrated background” in at least one of the following four fields:[1]

  1. Computer science or a technical background in AI
  2. Civil society, including matters relating to the Constitution, civil liberties, ethics, and the creative community
  3. Industry and workforce
  4. Government, including national security

According to Lieu, the qualifications for serving on the Commission have been left intentionally broad so as to allow for a greater degree of diversity among commissioners. For Lieu, the bill gives “wide discretion to the president and our legislative leaders to come up with the right balance [in selecting commissioners]. There’s a level of trust there: we hope that they would do that.”

Conclusion: The Viability of the National AI Commission Act

In considering the future of the National AI Commission Act, factors both internal and external to the bill itself are germane.

Recognizing political polarization and jurisdictional disputes between regulating bodies as major impediments to the success of legislative efforts, the drafters of the National AI Commission Act appear to have tried to produce legislation that includes as many relevant stakeholders as possible. First, the bill attempts to appeal to executive agencies that have already been doing work on artificial intelligence, such as NIST, by explicitly endorsing the incorporation of their efforts into a comprehensive regulatory framework on AI. Second, the bill attempts to appeal to the White House by providing the president and relevant cabinet members with eight of the twenty appointments to the Commission.[2] Third, the bill seeks to include the senior leaders of both parties in both houses of Congress insofar as these lawmakers also receive appointment slots to the Commission.

One outstanding question relevant to the success of the National AI Commission Act is the position of Republican leadership on this bill and other bipartisan AI initiatives. At the time of writing, it is not clear whether Congressional Republican leadership will come to support this and other bipartisan AI bills or champion Republican-led efforts. But absent clarification on this matter, and given that Congress is split and margins in both houses are narrow, producing a bill that has appeal beyond the confines of a single party currently appears to be a prerequisite for success.

Although Rep. Lieu and Rep. Buck’s bill appears to take great pains to achieve acceptance by their Republican colleagues, the National AI Commission Act may be frustrated by the efforts of a senior member of Lieu’s own party. On June 21, 2023, just one day after the release of the National AI Commission Act, Senator Schumer announced his SAFE Framework.[3] The SAFE Framework, like the National AI Commission Act, would leverage expert consultation to produce a comprehensive and bipartisan regulatory framework on AI within a relatively short period of time. But rather than establish a commission to achieve this goal, the SAFE Framework will convene a series of AI Insight Forums beginning in late 2023.

Prompted by an audience question to comment on the impact of the SAFE Framework’s announcement on the viability of the National AI Commission Act, Rep. Lieu appeared to downplay Leader Schumer’s efforts. “He didn’t actually introduce any legislation,” Lieu commented in reference to Schumer’s announcement. Furthermore, Lieu asserted that Schumer’s intention to convene experts in order to facilitate the creation of AI legislation “sounds like a commission to me.” How existing legislative efforts on artificial intelligence, including the National AI Commission Act, will figure into Senator Schumer’s SAFE Framework is an important yet unresolved question.

The National AI Commission Act and Senator Schumer’s SAFE Framework both represent a legislative approach privileging a period of deliberation and study prior to the implementation of any substantial regulation on AI. Other lawmakers, including Representative Ritchie Torres (D-NY-15), have pressed ahead, advocating for the implementation of concrete AI regulation at this time. While seemingly at odds, these approaches may come to co-exist, and indeed, to complement each other.

We will continue to monitor, analyze, and issue reports on these developments.



[1] The bill does not allow any one field to be represented by a majority of the commissioners.
[2] While the president would be allocated eight appointment slots, the president must choose four of these eight appointees from two lists of five candidates. These lists would be prepared by the senior-most leaders of the House and Senate belonging to the party “opposite the Administration.” This mechanism has presumably been put in place to maintain an even bipartisan split on the Commission.
[3] For a full analysis of the SAFE Framework, please reference our previous newsletter.


Content Publishers

Bruce Sokler

Bruce D. Sokler is a Mintz antitrust attorney. His antitrust experience includes litigation, class actions, government merger reviews and investigations, and cartel-related issues. Bruce focuses on the health care, communications, and retail industries, from start-ups to Fortune 100 companies.

Alexander Hecht

Executive Vice President & Director of Operations

Alexander Hecht is Executive Vice President & Director of Operations of ML Strategies, Washington, DC. He's an attorney with over a decade of senior-level experience in Congress and trade associations. Alex helps clients with regulatory and legislative issues, including health care and technology.

Christian Tamotsu Fjeld

Senior Vice President

Christian Tamotsu Fjeld is a Senior Vice President of ML Strategies in the firm’s Washington, DC office. He assists a variety of clients in their interactions with the federal government.

Raj Gambhir

Raj Gambhir is a Project Analyst in Washington, DC.