A Narrowly Focused Bill Aimed at Preventing AI-Fueled Disasters

New York state lawmakers have passed a bill that aims to prevent frontier AI models from contributing to disaster scenarios, including the death or injury of more than 100 people, or more than $1 billion in damages. The RAISE Act, which is now headed for Governor Kathy Hochul’s desk, represents a significant win for the AI safety movement, which has lost ground in recent years as Silicon Valley and the Trump administration have prioritized speed and innovation.

The bill, championed by safety advocates including Nobel laureate Geoffrey Hinton and AI research pioneer Yoshua Bengio, would establish America’s first set of legally mandated transparency standards for frontier AI labs, requiring the world’s largest labs to publish thorough safety and security reports on their frontier AI models that detail potential risks and mitigation strategies. These reports would have to be made publicly available, allowing for greater scrutiny and accountability from regulators, researchers, and the public.

The RAISE Act also requires AI labs to report safety incidents, such as concerning AI model behavior or bad actors stealing an AI model. This would help to identify and address potential issues before they escalate into full-blown disasters. If signed into law, the RAISE Act would empower New York’s attorney general to bring civil penalties of up to $30 million against tech companies that fail to live up to these standards.

The bill’s transparency requirements apply to companies whose AI models were trained using more than $100 million in computing resources and are being made available to New York residents. This would include some of the world’s largest and most powerful AI models, such as those developed by OpenAI and Google.

While similar to California’s controversial AI safety bill, SB 1047, the RAISE Act was designed to address criticisms of previous AI safety bills. Unlike SB 1047, the RAISE Act does not require AI model developers to include a “kill switch” on their models, nor does it hold companies that post-train frontier AI models accountable for critical harms. However, the bill does require AI labs to implement robust safety and security protocols, including regular testing and auditing of their models.

Despite the bill’s narrowly focused approach, Silicon Valley has pushed back hard on the RAISE Act. Andreessen Horowitz general partner Anjney Midha called the bill “stupid” and claimed it would only hurt the US at a time when its adversaries are racing ahead. However, state Senator Andrew Gounardes, the bill’s co-sponsor, said he deliberately designed it not to apply to small companies and that it would not stifle innovation among tech companies.

The RAISE Act is now headed for Governor Hochul’s desk, where she could sign the bill into law, send it back for amendments, or veto it altogether. If signed into law, the RAISE Act would be a significant step towards establishing AI safety standards in the US and would set a precedent for other states to follow. It could also provide a model for federal regulation, which safety advocates argue is long overdue.

The passage of the RAISE Act is a major victory for the AI safety movement, which has been advocating for greater transparency and accountability in the development and deployment of AI systems. It is a recognition that the risks associated with AI are real and that they need to be addressed through a combination of technical, regulatory, and societal measures.
