Meta Platforms Inc.’s AI Policies Under Investigation and States Continue to Pursue AI Regulation - AI: The Washington Report
- Following a leaked 200-page document, Senator Josh Hawley (R-MO) is leading a Senate Judiciary subcommittee investigation into whether Meta’s generative AI products have enabled exploitation or harm to children, and if the company misled regulators about its safety measures.
- The investigation was triggered by reports of Meta’s chatbots engaging in romantic conversations with minors and disseminating false or biased information. Although Meta says these policies have been removed and were inconsistent with its current AI standards, lawmakers are demanding a full record of internal policy changes and related decision-making.
- Congress’s aborted attempt earlier this summer to stay or limit state AI regulation has not slowed continued state-level activity. At last count, states have introduced 260 bills affecting AI across 40 states this year, with 22 enacted so far, addressing issues like deepfakes, bias, surveillance, and transparency. Key laws include Utah’s AI oversight office, Colorado’s high-risk AI audits, and Tennessee’s protections against AI voice impersonation. Meanwhile, future federal action remains murky: the administration’s AI plan makes no mention of preempting state regulation, and there has been no concrete indication thus far that proponents of state preemption will try again to move legislation. Narrower efforts are pending in Congress, with bipartisan bills focusing on data privacy, deepfakes, and national security.
Senator Josh Hawley Launches Investigation into Meta’s AI Policy and Interactions with Children
Meta Platforms Inc.’s internal AI policies have sparked an investigation led by Senator Josh Hawley (R-MO), the top Republican on the Senate antitrust panel, after a 200-page document revealed guidelines that permitted chatbots deployed by Meta to have inappropriate exchanges with children. His letter to Meta indicated that the Senate’s Judiciary subcommittee on crime and counterterrorism would investigate “whether Meta's generative-AI products enable exploitation, deception, or other criminal harms to children, and whether Meta misled the public or regulators about its safeguards."
The Hawley letter acknowledged that Meta’s policies prohibited sexually explicit conversations and content involving children, and a Meta spokesperson confirmed that the problematic guidelines had been removed, stating that they “were and are erroneous and inconsistent” with the company’s AI policies. Lawmakers, however, see flaws in the oversight of AI tools, as reports have found chatbots engaging in romantic conversations with minors, perpetuating racial bias and discrimination, and providing false medical claims and advice.
"Your company has acknowledged the veracity of these reports and made retractions only after this alarming content came to light," Hawley wrote, saying it is “unacceptable that these policies were advanced in the first place."
Senator Hawley has asked Meta to provide documentation and correspondence covering all versions of its internal generative AI policies, its public statements about the safety of minors on its platforms, and the medical limitations it has placed on its chatbots. The request also covers a list of all Meta products and models, who authorized the relevant standards and for how long they were in effect, and "documents sufficient to establish the decision trail for removing or revising any portions of the standards."
There is bipartisan support for regulating AI systems and their safety, with lawmakers arguing that chatbots could subject children, as well as other users seeking health information, to harmful exchanges. The company maintains that its AI development is centered on user protection and that it is not responsible for harmful content generated by chatbots.
A cybersecurity and data privacy partner stated that questions about the need to regulate AI tools will keep rising “before more lives and communities are put at risk." The letter set a September 19 deadline for Meta to provide responsive materials to the subcommittee.
State AI Regulation Efforts Continue Despite Federal Opposition
Despite Congress’s aborted consideration of a 10-year moratorium on state-level AI regulation, efforts among the states to assert jurisdiction over aspects of AI have continued: 260 AI-related bills were introduced across 40 states in the first half of 2025. These bills address areas like non-consensual deepfakes, election misinformation, and transparency, filling gaps left by federal inaction or opposition. Of the 260, 22 have been enacted so far, while dozens remain pending.
Enacted state laws include:
- Utah’s Artificial Intelligence Policy Act, which mandates generative AI disclosures and creates a new AI oversight office.
- Colorado’s Artificial Intelligence Act regulates high-risk AI, demanding fairness audits and annual assessments.
- New York City’s Local Law 144 requires employers that use automated employment hiring tools to go through bias audits.
- Tennessee’s Ensuring Likeness, Voice, and Image Security (ELVIS) Act protects performers from AI voice impersonation.
- Other important efforts include Montana’s HB 178, restricting AI use in surveillance and behavioral manipulation, and Virginia’s pretrial AI restrictions, which bar AI-based decision-making in criminal justice contexts.
There are also efforts to create sector-specific legislation. Illinois’ HB 3773 targets AI-based employment discrimination, particularly banning zip code targeting. California’s SB 942 and AB 2013, effective in 2026, require generative AI developers to offer free detection tools and publish summaries of training data. Florida has also enacted laws limiting AI involvement in health insurance claim denials and begun testing AI content provenance tools.
At the federal level, bipartisan efforts are shaping the contours of AI oversight, particularly in the areas of privacy, national security, and deepfake content. The AI Accountability and Personal Data Protection Act, introduced by Senators Josh Hawley (R-MO) and Richard Blumenthal (D-CT), aims to establish federal protections and allows individuals to sue tech firms that train AI using personal or copyrighted data without consent. It mandates transparency around data use and provides legal remedies.
Additionally, the No Adversarial AI Act, a bipartisan bill, was introduced by Representative Raja Krishnamoorthi (D-IL), ranking member of the House Select Committee on the Strategic Competition Between the United States and the Chinese Communist Party, and committee Chairman John Moolenaar (R-MI). It seeks to bar US executive agencies from using AI models from adversarial nations like China and Russia, unless specifically exempted, through a digital firewall enforced by the Federal Acquisition Security Council.
The TAKE IT DOWN Act, signed into law in May 2025 by President Trump, compels platforms to remove non-consensual intimate imagery or deepfakes upon request, building on similar laws like the SHIELD and DEFIANCE Acts. In contrast to these protections, President Trump’s Executive Order 14179, issued in January 2025, repealed Biden-era AI safety measures and directed federal agencies to remove barriers to AI development.
As Congress returns in September, we will see which issues will be given priority and what bills might gain momentum toward passage.
We will continue to monitor, analyze, and issue reports on these developments. Please feel free to contact us if you have questions about current practices or how to proceed.
Content Publishers
Alexander Hecht
Executive Vice President & Director of Operations
Christian Tamotsu Fjeld
Senior Vice President
