
Democrat Cantwell, Republican Blackburn voice opposition to 10-year moratorium on state AI regulations

U.S. Sen. Maria Cantwell (D-WA) speaks during a news briefing after a weekly Senate Democratic policy luncheon at the U.S. Capitol on July 9, 2024, in Washington, D.C. Senate Democrats held the luncheon to discuss the party's agenda.  (Alex Wong/Getty Images North America/TNS)

A bipartisan group of federal and state lawmakers is voicing concern over a provision tucked into the One Big Beautiful Bill Act that would restrict states from “limiting, restricting, or otherwise regulating” artificial intelligence for the next decade.

“We should be fighting to protect consumers, not enabling AI theft or fraud,” Washington Sen. Maria Cantwell said during a news conference Wednesday. “We have a lot of work to do here in Congress to get AI right, and we should be given the chance to do that.”

Congress is debating a provision in the House and Senate budget reconciliation bills that would invalidate a range of state regulations nationwide. The “One Big Beautiful Bill,” which was recently passed by the House of Representatives, includes a 10-year moratorium on states regulating the technology. Under the Senate proposal, funding from a $42.5 billion broadband program would be contingent on compliance with the moratorium.

Cantwell was joined Wednesday by Republican Sen. Marsha Blackburn of Tennessee and the attorneys general of Tennessee and Washington.

“We are working to move forward with legislation at the federal level, but we do not need a moratorium that would prohibit our states from stepping up and protecting citizens in their state,” Blackburn said.

While Congress has yet to adopt regulations on the emerging technology, legislators across the country have introduced an array of state regulations in recent years.

According to the National Conference of State Legislatures, state lawmakers have introduced more than 1,000 bills regulating the technology during their 2025 legislative sessions.

Cantwell said 24 states adopted legislation regulating artificial intelligence in 2024.

“They have adopted these laws to fill the gap while we are waiting for federal action,” Cantwell said. “Now, Congress is threatening these laws, which will leave hundreds of millions of Americans vulnerable to AI harm by abolishing those state law protections.”

Last year, the Washington Legislature established an artificial intelligence task force within the attorney general’s office, which Attorney General Nick Brown said has helped “study and research the impacts” of the technology.

During the recently completed legislative session in Olympia, lawmakers introduced at least 22 bills that would have regulated the use of artificial intelligence in an array of sectors, including scientific research and the rental housing market.

Among the legislation considered was a bill that Brown said would prohibit tech companies from “generating harmful content” for children through the use of algorithms.

“If we continue to move forward with that law, that would be pre-empted by this piece of legislation from Congress,” Brown said.

“And that gives me grave concern.”

Earlier this month, 260 state legislators signed a letter to federal lawmakers expressing concern with the 10-year freeze, which they said would “cut short democratic discussion of AI policy in the states with a sweeping moratorium that threatens to halt a broad array of laws and restrict policymakers from responding to emerging issues.”

The letter was signed by two Washington legislators – Rep. Shelley Kloba, D-Bothell, and Sen. Matt Boehnke, R-Kennewick.

The provision also received criticism from the National Association of Attorneys General, members of which wrote in a joint letter that the “impact of such a broad moratorium would be sweeping and wholly destructive of reasonable state efforts to prevent known harms associated with AI.”

Brown said Wednesday that while the technology can offer “tremendous value” to users, “we also have to recognize many of the potential harms that come from AI.”

Brown pointed to a 2023 law prohibiting the use of deepfake materials to mimic candidates in a political campaign, which he said could become unenforceable under the moratorium.

“In this particular environment, when we see so much misinformation, we want to make sure that states have the opportunity to regulate that,” Brown said.

Brown also cited a Washington law prohibiting the distribution of fabricated sexual images without consent, which includes penalties for those who possess or distribute the materials.

“That law would be undermined and invalidated if this was to pass in Congress,” Brown said.

The inclusion of the artificial intelligence provision in the “One Big Beautiful Bill” surprised many lawmakers – including some who had voted to support it – and created an unlikely political alignment. After the bill cleared the House, Georgia Republican Rep. Marjorie Taylor Greene said she was unaware the moratorium was included and would have voted against the legislation if she had known.

“We have no idea what AI will be capable of in the next 10 years and giving it free rein and tying states’ hands is potentially dangerous. This needs to be stripped out in the Senate,” Greene wrote on X.

Cantwell said Thursday she has voiced concern about the provision to Texas Sen. Ted Cruz, who chairs the Commerce, Science and Transportation Committee.

During the conversation with Cruz, Cantwell said she “may have mentioned that I didn’t think that I agreed with Marjorie Taylor Greene, our Congress counterpart on the other side of the Capitol much, but she also wants this language taken out.”

“So, I think a lot of people were just caught by surprise, a lot of members didn’t even know it was in there,” Cantwell said. “And now that they are understanding that it’s in there, they’re expressing their concerns.”

During an appearance on Bloomberg last week, Cruz said the moratorium is a “very sound policy.”

“There is a real danger of state legislation, and in particular states like California, adopting a heavy-handed, nanny state approach, much like the European Union, which would cripple America’s leadership in AI,” Cruz said.