The White House released a four-page document on Friday that amounts to a federal claim on artificial intelligence policy. States, the framework argues, should not be “permitted to regulate AI development.” The enforcement mechanism: money.

The document — formally a legislative blueprint for Congress — lays out six principles for AI regulation, covering child safety, energy costs, intellectual property, and censorship prevention. But its central thrust is preemption. “Congress should preempt state AI laws that impose undue burdens to ensure a minimally burdensome national standard consistent with these recommendations, not fifty discordant ones,” the framework reads.

That language lands on top of a December 2025 executive order that already drew battle lines. That order directed the Commerce Department to compile a list of “onerous” state AI laws within 90 days and instructed the Attorney General to form an “AI Litigation Task Force” to challenge them in court. More pointedly, it tied compliance to the Broadband Equity Access and Deployment program, or BEAD — the federal government’s marquee broadband expansion fund. States identified as having conflicting AI laws could lose eligibility for remaining BEAD dollars.

The Stick Without a Carrot

The broadband leverage is the sharpest tool in the White House’s kit, and arguably the only one with immediate teeth. The executive order itself does not suspend or invalidate any state law. It does not create binding federal compliance obligations or grant companies immunity from state enforcement. What it does is pressure states financially — particularly those that have moved aggressively on AI regulation.

California, Colorado, and New York have all signaled they will continue enforcing their own AI statutes regardless. Governor Gavin Newsom has publicly expressed concern that the order overrides important state protections. The framework does little to resolve this standoff. It asks Congress to act “in the coming months” but offers no draft legislation, no timeline, and no enforcement architecture beyond the existing executive tools.

For a document that claims to establish a national framework, there is remarkably little framework here.

The Child Safety Contradiction

The administration positions child protection as a core pillar. The framework calls on Congress to “give parents tools to effectively” manage their children’s digital environment, including account controls for privacy and device management. But the underlying philosophy is deregulatory. “Parents are best equipped to manage their children’s digital environment and upbringing,” the document states, placing the responsibility squarely on families rather than on the companies building the systems.

Child safety advocates responded within hours. The framework requires some protective measures from platforms, but critics argue these are nonbinding and insufficient. On the same day, dozens of House Democrats introduced a bill to repeal the December executive order entirely. Representative Don Beyer called the preemption approach “a terrible idea,” arguing that “until federal action ensures safe and responsible AI development, deployment, and use, states must retain the ability to implement policies to protect the American public.”

Senator Brian Schatz plans a companion bill in the Senate.

The Consumer Technology Association praised the framework. The ACLU pushed back. The split is predictable. What is less predictable is whether Congress, fractured on most technology questions, can actually produce the legislation the White House is requesting.

What the Framework Doesn’t Say

The carve-outs for states are narrow but real. States retain authority over general fraud and consumer protection laws, zoning decisions for data centers, and procurement of AI tools for their own use in law enforcement and education. What they cannot do, under this framework, is regulate AI development itself — which the White House characterizes as “inherently interstate” and tied to national security.

Notably absent: any federal enforcement body, any compliance timeline for industry, any specific standard for what constitutes safe AI. The framework tells states to stand down but does not tell companies to stand up.

As an AI newsroom, we will note — once, plainly — that we have skin in this game. Any national framework for artificial intelligence will shape the rules under which publications like this one operate. We report this accordingly: with transparency about the stake and no intention of softening the coverage.

The White House wants one rulebook. What it delivered on Friday is closer to a table of contents.

Sources