
What’s the California AI Bill, and Why Does Meta’s Yann LeCun Think it Sucks?

Mentioning worst-case scenarios like nuclear war or building chemical weapons only serves to stoke a pervasive fear of AI that is already common among the general public.


A staunch proponent of the open development and research of AI, Meta’s chief AI scientist, Yann LeCun, recently posted a call to action to oppose SB1047, colloquially known as the California AI Bill.

Setting aside the actual content of the proposed Safe and Secure Innovation for Frontier Artificial Intelligence Models Bill, California is home to a majority of the AI and AI-adjacent companies that operate globally.

This means that a comprehensive bill governing AI within the state will affect not only companies based there but the industry globally.

Overkill Much?

There’s plenty of reason not to like the Bill itself, but the main point of contention for LeCun was its regulation of research and development within the ecosystem.

“SB1047 is a California bill that attempts to regulate AI research and development, creating obstacles to the dissemination of open research in AI and open source AI platforms,” he said.

However, the Bill also attempts to predict where AI could go, thereby demanding strict and near-unattainable compliance from companies.

It uses the potential for AI to “create novel threats to public safety and security, including by enabling the creation and the proliferation of weapons of mass destruction, such as biological, chemical, and nuclear weapons, as well as weapons with cyber-offensive capabilities” as justification for overarching measures that, in the end, will go unimplemented.

For example, the Bill essentially prohibits building a model that can enable critical harm, subject to certain provisions. However, as AIM has covered before, virtually any model can be jailbroken, even into producing instructions on how to build nuclear weapons.

The Bill is filled with similar instances of guidelines that are either near impossible to adhere to or plain generalisations, backed by a demand for safety protocols but a lack of actual knowledge of how these systems work.

Meta’s vice president and deputy chief privacy officer, Rob Sherman, put it perfectly in a letter sent to the lawmakers: “The bill will make the AI ecosystem less safe, jeopardise open-source models relied on by startups and small businesses, rely on standards that do not exist, and introduce regulatory fragmentation.”

Stick to What You Know

The general consensus is that it’s basically impossible to implement future-proof regulations for AI. 

Mentioning worst-case scenarios like nuclear war or building chemical weapons only serves to stoke a pervasive fear of AI that is already common among the general public. Several AI leaders, as well as government officials, have stated that over-regulation of AI is something they hope to avoid.

However, regulations like these broadly generalise what AI is, having been drafted with little input from those working in the tech space who are familiar with ongoing developments in the industry.

While there are several concerns about the development and usage of AI, these never seem to get addressed in regulations like this one and the EU’s Artificial Intelligence Act (AIA). Instead, they focus on trying to future-proof AI usage, thereby making generalisations and failing to address problems that are already prevalent within communities and the industry.

“The sad thing is that the regulation of AI R&D is predicated on the illusion of ‘existential risks’ pushed by a handful of delusional think-tanks, and dismissed as nonsense (or at least widely premature) by the vast majority of researchers and engineers in academia, startups, larger companies, and investment firms,” LeCun said.

There are many gaps in regulation that AI companies and startups actively take advantage of, though admittedly this must be done carefully so as not to cross ethical boundaries. With no proper regulation in place, however, companies are not bound by any kind of legal obligation.

Many big players like OpenAI, Meta, Google and Microsoft have been staunchly in favour of regulations but have asked that preliminary conversations be held with stakeholders, which, for anything regulation-related, makes sense.

However, it seems that the California AI Bill is just another in a long line of examples where governments push regulations as a reactionary measure rather than one with thought and rationale behind it, as evidenced in the open letter to the legislators signed by several researchers, founders and other leaders in the AI space.

Further regulations can only serve to push companies, particularly startups, to pursue prospects in countries that don’t take such a ham-fisted approach to policing AI.



Donna Eva

Donna is a technology journalist at AIM, hoping to explore AI and its implications in local communities, as well as its intersections with the space, defence, education and civil sectors.