New York’s RAISE Act Is Now Law: What It Means for New York Businesses


Dec 23, 2025

By: Moish E. Peltz, Esq. and Kyle M. Lawrence, Esq. 

Governor Kathy Hochul has signed the Responsible AI Safety and Education (RAISE) Act into law, making New York a leading state in regulating the most advanced “frontier” artificial intelligence models. In short, the statute targets only the largest developers training extremely powerful AI models, requires public safety protocols, mandates rapid reporting of serious AI incidents, and authorizes state oversight and enforcement. For New York businesses, the immediate takeaway is twofold: (1) most companies will not be regulated directly as “large developers,” but they may feel the effects through vendor diligence and risk management expectations, and (2) the law adds momentum to the emerging national debate over federal preemption of state AI laws.

What the RAISE Act Covers and Who It Regulates

The RAISE Act amends the General Business Law to create a new Article 44‑B for “frontier” AI model safety and transparency. The Act defines a “frontier model” as either: (a) an artificial intelligence model trained using greater than 10^26 computational operations (e.g., integer or floating‑point operations), the compute cost of which exceeds $100 million; or (b) an artificial intelligence model produced by applying knowledge distillation to a model meeting clause (a). It scopes “large developers” to entities that have trained at least one frontier model, the compute cost of which exceeds $5 million, and have spent over $100 million in aggregate compute costs in training frontier models. Academic research at accredited universities is excluded. That scoping is intentional: the law is aimed at a handful of very large model developers, not the everyday user of AI‑enabled tools or the average New York startup. However, the Act can reach smaller developers that produce a model via knowledge distillation from a qualifying frontier model, potentially bringing regular enterprise users within the scope of the Act even if their own training spend or compute is significantly lower.

Substantively, the Act requires covered developers to implement and publish a written safety and security protocol, retain the unredacted protocol and testing records for as long as the model is in use plus five years, and implement safeguards to prevent an unreasonable risk of “critical harm,” a term focused on catastrophic outcomes such as mass casualty events or billion‑dollar property losses materially enabled by the model. The statute further prohibits deploying a frontier model if doing so would create an unreasonable risk of critical harm. It also compels rapid reporting of safety incidents within 72 hours of learning of the incident or facts establishing a reasonable belief that an incident occurred. In addition, the Act includes employee protections, such as anti‑retaliation safeguards for workers who raise AI safety concerns in good faith and mechanisms for internal reporting to designated, accountable personnel.

As enacted, the law authorizes enforcement by the New York Attorney General. News from the Governor’s office and contemporaneous reporting indicate that New York also plans to stand up an oversight function, with annual assessments and transparency reporting, and that penalties will be capped at $1 million for a first violation and $3 million for subsequent violations. Those figures contrast with earlier bill versions that contemplated higher maximums and third‑party audits; the Governor and legislative leaders signaled that agreed chapter amendments will formalize the oversight office and align penalties with the final deal that enabled signature, while preserving the law’s core transparency and safety obligations.

Practical Implications for New York Businesses

The most immediate compliance burden falls on a small cohort of “large developers” training frontier models at scale. They must publish appropriately redacted safety protocols, document testing procedures, and stand up incident‑reporting processes capable of meeting the 72‑hour clock. They should expect regulators to be able to obtain unredacted materials upon request, and should plan to update posted protocols as models evolve and industry best practices mature.

Downstream, many New York companies will experience the RAISE Act indirectly through commercial relationships, as new diligence questionnaires and incident‑notification covenants flow from large developers to enterprise customers. The Act’s definitions and benchmarks may also shape enterprise AI risk frameworks, even for businesses that are not directly covered.

It is also worth noting the broader New York landscape. Earlier this month, Governor Hochul signed separate “first‑in‑the‑nation” measures enhancing AI transparency in advertising and protecting post‑mortem rights of publicity. Businesses should anticipate more domain‑specific measures over the coming years.

How New York’s Law Compares to California’s SB 53

California’s Transparency in Frontier Artificial Intelligence Act (SB 53) is the closest U.S. analogue. Both laws are compute‑threshold statutes focused on the largest frontier model developers, and both require public, standardized disclosures and incident reporting tied to catastrophic risk. California’s SB 53 defines frontier models slightly differently (at a 10^26 compute threshold with revenue‑based tiers), requires public governance and cybersecurity frameworks with deployment transparency, sets a 15‑day incident‑reporting window (24 hours where harm is imminent), and pairs strong whistleblower protections with a $1 million penalty cap.

New York’s RAISE Act hews to the same frontier‑model universe, but it sets a faster 72‑hour safety‑incident clock, requires publication of a model‑specific safety and security protocol and underlying test procedures sufficient to permit replication, and, on the face of the statute, bars deployment when unreasonable catastrophic risk is present. For multi‑state developers, the prudent path will be to harmonize to the stricter elements across both regimes.

Relation to the EU AI Act’s General‑Purpose AI Model Rules

The EU AI Act takes a different, much more stringent approach. It establishes a comprehensive, risk‑based framework for high‑risk AI systems and, crucially for frontier models, creates obligations on providers of “general‑purpose AI” (GPAI) models. All GPAI providers must maintain technical documentation, share specified information with downstream system providers, adopt copyright‑compliance policies, and publish a sufficiently detailed summary of training data. Compared to New York, the EU regime is broader in scope and more prescriptive about documentation, downstream transparency, and governance across the AI value chain. New York, by contrast, is targeted to catastrophic risk from the very largest frontier models and focuses on safety protocols, testing transparency, and rapid incident reporting inside the state. Companies operating transatlantically should expect EU‑level documentation and incident‑management practices to set a high baseline; satisfying those should materially ease alignment with New York’s and California’s disclosures and reporting.

The Federal Backdrop: Tension with the Trump Executive Order

The RAISE Act also lands amid escalating federal‑state tension. Days before Governor Hochul signed the bill, the White House issued an executive order instructing the Department of Justice to challenge “onerous” state AI laws, directing the Department of Commerce to identify state statutes for potential litigation or funding conditions, and asking federal regulators to consider preemptive national reporting standards. The order frames state transparency and safety regimes as a burdensome patchwork that threatens U.S. AI leadership and hints at Dormant Commerce Clause arguments. State officials and commentators immediately questioned the legality of broad preemption by executive order and predicted near‑term litigation. In practical terms, New York businesses should assume the RAISE Act and parallel state measures remain in force unless and until a court says otherwise.

Conclusion

While the action items are clear for covered AI developers, for the vast majority of New York businesses that are not “large developers,” the priority is governance by contract and enterprise risk management. Inventory where frontier‑class models touch your products and operations, expect your vendors to come into compliance with the RAISE Act over time, keep your AI policies up to date, and maintain playbooks to escalate and respond if a supplier reports a safety incident that could affect you. Contact FRB's Artificial Intelligence Practice Group at (212) 203-3255 or by filling out the form below to speak with an attorney about translating these obligations into practical, defensible governance for your business.

DISCLAIMER: This summary is not legal advice and does not create any attorney-client relationship. This summary does not provide a definitive legal opinion for any factual situation. Before the firm can provide legal advice or opinion to any person or entity, the specific facts at issue must be reviewed by the firm. Before an attorney-client relationship is formed, the firm must have a signed engagement letter with a client setting forth the Firm’s scope and terms of representation. The information contained herein is based upon the law at the time of publication.

Have Questions? Contact Us