There’s a lot of AI legislation going on. Or at least, a lot of legislative buzz: working groups, frameworks, proposed bills. Less clear how much will actually make it into law.

Multistate.ai, which tracks state and local AI policy, has identified nearly a thousand bills introduced in state legislatures in 2025 – at least one in every state. That’s a big jump from 635 in all of 2024.

At the federal level, the Brennan Center reported that in the 118th Congress, over 150 AI-related bills were introduced; none were passed. An AI Working Group formed in January 2024 released a report in July, and an AI Task Force formed in February 2024 delivered its final report in December. A Senate AI Working Group released its report in May 2024, endorsing a number of AI-specific laws which, as far as I can tell, weren’t passed.

The 119th Congress appears set to keep pace, despite the dumpster fire currently burning through American government. There are two House subcommittees (under Judiciary and Financial Services), a working group (AI and Energy), and a Senate AI Caucus. And as of now, at least 83 bills have been introduced that mention artificial intelligence.

So when you’re calling on your Members of Congress to encourage them to act on AI safety, where do you start?

First, I figure that specific activity doesn’t matter that much. As I said in a previous post, the staff I can actually get access to are mostly just tallying. If I come saying I’m concerned about AI, the mark goes beside ‘AI’; if I come to say I’m concerned about a particular piece of legislation, the mark goes beside the bill number (I’m not sure this is literally what’s happening, but it feels that way). So just mentioning AI will increase the salience of AI by increasing the tally for that issue. But because there are so many AI bills out there, and the landscape is changing, mentioning a specific bill will probably mostly just increase the salience of AI.

Still, there are better and worse ways to do AI safety, so why not throw my weight behind a specific bill? While I’m not an expert by any means, I do have informed opinions about what will make AI safer. More importantly, some legislation has been endorsed by people much better-informed than I am. So in my contacts, I’ve been advocating for two specific legislative proposals. I’m not wedded to them, so if the landscape changes I’ll change my tactics, but for now I think raising them increases my effectiveness.

FAIRMA

The Federal Artificial Intelligence Risk Management Act (FAIRMA, H.R. 6936, S. 3205) is a bill that died in the 118th Congress and has yet to be introduced in the 119th. Though it had broad bipartisan support, the change of administration created enough uncertainty that it was never brought to a vote. The goal of the bill was to require Federal agencies to use NIST’s AI Risk Management Framework (AI RMF). The AI RMF is a set of guidelines that individuals and institutions can use to think through and manage risks associated with the adoption of AI technologies. It was created with broad input from industry, academia, and policymakers. The AI RMF was intended as a voluntary set of guidelines; FAIRMA would make it obligatory within the Federal government.

I first became aware of this bill when researching the AI RMF, which had impressed me as addressing concerns about careless implementations of AI across institutions. That led me to a press release on an AI safety policy report that endorsed the bill. Being in the legal profession and living in DC, the implementation of AI in government was something I’d considered and had concerns about, so it caught my eye immediately. That press release was also coincidentally co-authored by CAIS Executive Director Dan Hendrycks (who also wrote the book for the class that generated this project), which gave me some confidence that I wasn’t missing anything.

CREATE AI

Another concern that stands out to me is the concentration of power among AI companies. I tend to think that a diversity of AIs is better, and favor open source and broader access to the tools and resources necessary to create AI models. That position is not shared by everyone in the AI safety space, and I recognize that it’s not without its risks. But I was reassured to find a bill that seems to enjoy broad support: the Creating Resources for Every American To Experiment with Artificial Intelligence Act (CREATE AI, H.R. 2385, S. 2714).

This act was introduced in the 118th Congress and never made it to a vote, but has already been reintroduced in the House in the 119th Congress. It comes out of a task force within the National Science Foundation created to advise on how to build a “National Artificial Intelligence Research Resource” to increase access to the infrastructure necessary to research and build AIs.

The bill was also strongly endorsed in an open letter signed by an impressive group of industry and academic institutions in the AI space, and the fact that it’s already been reintroduced in the 119th Congress suggests it could still have legs. NSF funding is in the sights of the current administration, but I think that could actually be an asset for this bill: for Democrats, it shows support for the NSF; for Republicans, it shows reform of the NSF. Something for everyone.

So these are the two bills I mention on my calls. I’ve also added the Members of Congress who work on them to my contact list, on the theory that recognizing them for what they’re doing will encourage them to keep doing it, which could be as effective as recruiting new supporters to the cause – more on my strategy around that in a future post.


There is a Senate hearing coming up on Thursday, May 8th, that I plan to attend, “Winning the AI Race: Strengthening U.S. Capabilities in Computing and Innovation”. Representatives from OpenAI, AMD, CoreWeave, and Microsoft will be testifying. Should be fun!