The Wrong Fix: Why New York’s RAISE Act Misses the Mark on AI Safety
New York lawmakers are considering legislation that, on the surface, appears to address a pressing concern: how to manage the risks associated with advanced artificial intelligence systems. But the RAISE Act, short for the Responsible AI Safety and Education Act, is the wrong solution. It would discourage responsible development, create legal uncertainty, and ultimately do little to enhance safety.
While the bill is framed as targeting only the most resource-intensive “frontier” models, the obligations it imposes go far beyond technical thresholds. Developers would be expected to bear legal responsibility for harms they did not cause and could not control. That is a deeply flawed idea, and one that would chill development well beyond the handful of frontier labs the bill claims to target.
The core problem is this: the RAISE Act makes upstream AI developers responsible for how others use their models, even when those models are deployed independently or modified by third parties. If a model released in good faith for legitimate uses is later misused, the original developer would have to prove they were not responsible for the outcome. That inverts the traditional logic of liability, under which fault must be proven rather than disproven, and places an impossible burden on creators.
Imagine a world in which every automobile manufacturer is liable whenever a criminal misuses a car it built: Ford would be held responsible because someone drove one of its trucks to the scene of a crime. That is what the RAISE Act proposes to do to artificial intelligence developers.
Developers already devote significant resources to testing, red-teaming, and safeguarding their models. But it is not possible to anticipate every future use case, especially when bad actors are involved. No amount of foresight can guarantee that a tool will never be misused. To treat model creators as guarantors of downstream behavior is to ask the impossible. Once that becomes the standard, the rational response will be to withhold models entirely rather than risk costly litigation.
The bill’s backers have attempted to address concerns about small developers by introducing a compute cost threshold. Under the revised proposal, only models trained with a certain level of computational expense would be subject to the bill’s requirements. On paper, that may sound like a reasonable compromise. In practice, it introduces new problems without fixing the old ones.
Compute cost is a poor proxy for risk. Some of the most powerful models developed in the past year have achieved high performance with relatively modest budgets. A well-executed training run using efficient techniques can now rival the output of older, much more expensive systems. That means smaller teams may still develop highly capable models, with all the same potential benefits and risks, but without triggering the bill’s oversight requirements.
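To see why in rough numbers, consider a back-of-envelope sketch, written here as a few lines of Python for concreteness. It uses the common rule of thumb that training compute is roughly six FLOPs per parameter per training token; the per-FLOP price, the model sizes, and the $100 million threshold are illustrative assumptions for this sketch, not figures drawn from the bill.

```python
# Illustrative sketch: why a dollar-cost threshold on training compute is
# easy to design around. Every number below is an assumption chosen for
# illustration, not a figure from the RAISE Act.

def training_cost_usd(params: float, tokens: float, usd_per_flop: float) -> float:
    """Estimate training cost via the common ~6 * params * tokens FLOPs heuristic."""
    flops = 6 * params * tokens
    return flops * usd_per_flop

THRESHOLD = 100e6   # hypothetical regulatory cutoff: $100M in compute spend
PRICE = 2e-18       # assumed blended price per FLOP (hardware, energy, overhead)

# A large training run of the kind the bill plainly intends to cover.
big = training_cost_usd(params=1e12, tokens=15e12, usd_per_flop=PRICE)

# A leaner run that reaches comparable capability through efficiency
# techniques (better data, distillation) at a fraction of the compute.
lean = training_cost_usd(params=7e10, tokens=10e12, usd_per_flop=PRICE)

for name, cost in [("big run", big), ("lean run", lean)]:
    status = "covered" if cost > THRESHOLD else "exempt"
    print(f"{name}: ~${cost / 1e6:.0f}M -> {status}")
```

Under these assumptions the large run costs roughly $180 million and is covered, while the lean run costs under $10 million and escapes the bill entirely, despite potentially comparable capability. The cutoff tracks spending, not risk.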
If the concern is public safety, there is no good reason to exclude developers simply because they spent less money training their models. Doing so undermines the logic of the bill itself. It creates an arbitrary standard and invites developers to design around it. Worse, it suggests that some systems can escape scrutiny, not because they are less capable, but because they are more efficient.
Open-source AI development communities would also be placed in a difficult position under this legislation. These communities have contributed meaningfully to safety, transparency, and innovation. Researchers and small organizations have released tools that allow for greater scrutiny, wider participation, and independent oversight of AI systems. Yet they are often the least well-positioned to absorb legal risk.
The RAISE Act would create a chilling effect. It would encourage developers to lock up their work, limit distribution, and avoid sharing models that could otherwise be valuable to researchers, nonprofits, or smaller companies. In doing so, it would reinforce the dominance of the most powerful players in the AI space, those who can afford teams of lawyers and negotiate exemptions behind closed doors.
If lawmakers are serious about AI safety, they should focus on incentives for transparency, support for independent audits, and frameworks that encourage collaboration between the public and private sectors. Real accountability comes from engagement, not blanket liability. It comes from standards that reflect how these systems are actually built and used, not from headline-friendly rules that collapse under practical scrutiny.
There is no question that AI development carries serious responsibilities, but the right answer is not to make model creators legally liable for every action someone else might take. That will only slow progress, reduce transparency, and limit access to tools that can be used for good.
The RAISE Act, in its current form, fails to strike the right balance between innovation and responsibility. New York should step back, take a broader view, and work toward a more thoughtful and realistic approach to AI governance, one that promotes safety without stifling the very innovation it should seek to protect.