New York state lawmakers want to impose new safety requirements for the world’s most advanced artificial intelligence models, but federal lawmakers are threatening to preempt the new rules before they’re ever implemented.
The state Legislature passed a bill last week that would require the developers of the largest, state-of-the-art AI models to come up with comprehensive safety plans laying out steps they’ve taken to reduce the risk of causing serious harm.
The measure — known as the Responsible AI Safety and Education, or RAISE, Act — would also require the developers to tell the state about major security incidents, such as a model acting on its own without a user prompting it. The bill would make developers liable for certain violations of the law, clearing the way for the state attorney general to seek civil fines worth tens of millions of dollars.
Tech companies and their trade organizations are already pressuring Gov. Kathy Hochul to veto the measure, arguing that it would stifle innovation in a transformative field. And it all comes as Congress considers prohibiting states from regulating AI for the next 10 years, a move that could effectively kill the measure before it has a chance to take effect.
But the bill’s supporters are urging Hochul to sign it, arguing that the feds aren’t moving quickly enough to regulate a fast-moving, rapidly changing AI industry with enormous potential to reshape the world.
“I think in many ways AI policy should be at the federal level, but the feds are taking their time and they haven't done anything there,” said Assemblymember Alex Bores, a Manhattan Democrat. “The beauty of the federal system is that we can allow the states to experiment to try new things.”
The RAISE Act was among hundreds of bills lawmakers passed in the waning days of the annual lawmaking session at the state Capitol in Albany. Bores sponsored it alongside state Sen. Andrew Gounardes, a Brooklyn Democrat.
The bill would apply to developers of so-called frontier AI models — the largest, most-cutting-edge technology in the field — that are developed, deployed or set to operate in New York.
The developers would be required to put their safety protocols in place before deploying a model. The protocols would be subject to third-party review and would be made available to the state Division of Homeland Security and Emergency Services as well as the state Attorney General’s Office.
The bill says the safety protocols would be designed to reduce the chance of “critical harm” — incidents that are “caused or materially enabled” by the developer’s AI model and result in 100 or more injuries or deaths or more than $1 billion in damage. It requires developers to account for the possibility of AI leading to the creation of chemical, biological or even nuclear weapons.
The attorney general would be able to seek fines of up to $10 million for a first violation, and up to $30 million for each subsequent violation.
California lawmakers made a previous attempt to regulate AI developers in a similar way last year, but the measure faced significant opposition from some corners of Silicon Valley, and Gov. Gavin Newsom ultimately vetoed it.
Tech:NYC — a trade group that includes Google and Meta, both of which have significant AI models — opposes the RAISE Act.
Julie Samuels, Tech:NYC’s president and CEO, said her organization is not opposed to state regulation, though it would prefer a national standard. But the group says regulation should be based on specific uses — such as rules for AI used in health care decision-making processes — rather than the RAISE Act’s blanket approach.
“We want to be doing all we can to incentivize the development of smart, responsible AI here in New York state, and I am fundamentally concerned that the passage of this bill would push us the other way,” she said.
Hochul has been a major supporter of the Empire AI initiative, a public-private consortium housed at the University at Buffalo that is dedicated to researching and developing artificial intelligence. Earlier this year, she and state lawmakers approved measures regulating AI companions, the apps and characters that provide emotional support to people.
Avi Small, a spokesperson for the governor, said Hochul would review the legislation.
Congress, meanwhile, continues to debate whether it should temporarily prohibit states from regulating AI altogether.
The House of Representatives passed a 10-year moratorium on such state regulation as part of President Donald Trump’s wide-ranging tax bill last month. The Senate hasn’t voted on its version of the bill yet, but a committee inserted language that softened the provision, effectively allowing states to regulate AI if they’re willing to give up federal broadband funding.
Hochul, a Democrat, wrote a letter last week to Senate Majority Leader John Thune and Minority Leader Charles Schumer urging them to reject the House version of the measure.
“If this federal prohibition remains in reconciliation, the impact is not merely a bureaucratic moratorium; it undermines the states’ fundamental right and responsibility to protect the safety, health, privacy, and economic vitality of its citizens,” Hochul wrote.