AI Whistleblowers Stand in the Way of AGI?

AI is the most transformative innovation any of us will see in our lifetimes. While the concerns are real, there is good reason to think we can deal with them.

In an unconventional move, though not the first of its kind, a group of current and former employees, primarily from AI behemoth OpenAI, has urged their employers to stop enforcing non-disclosure agreements (NDAs) and to empower whistleblowers to speak openly if the firms they work for prioritise growth and profit over safety.

Seven former and four current employees of OpenAI, along with two Google DeepMind employees (one current and one former), have signed the open letter.

They claim that since AI companies have only weak obligations to share information about their technologies with governments, and none with civil society, “current and former employees are among the few people who can hold them accountable to the public”.

Wide-ranging secrecy agreements, however, put whistleblowers at risk of losing their equity in the business if they choose to come forward.

Yoshua Bengio, Geoffrey Hinton, and computer scientist Stuart Russell, who together provided crucial research that resulted in the development of modern AI and later became some of its most vocal detractors, all backed the letter. 

The letter’s authors expressed deep concern about the incentives of AI businesses to avoid governance and responsibility, underlining the importance of transparency and accountability in the industry.

(Embedded post on X from Joshua Segren, co-founder of ShopCierge.ai)

A representative for OpenAI responded to the letter stating that the business is proud of its “track record of providing the most capable and safest AI systems” and believes in its scientific approach to addressing risk. 

The representative also said that the company acknowledges that “rigorous debate is crucial given this technology’s significance”.

A Little Overboard, Perhaps?

In an interview, Daniel Kokotajlo, a former researcher in OpenAI’s governance division, raised the concern that “some employees believed” Microsoft had improperly tested and deployed a new version of GPT-4 on Bing in India.

Microsoft denies the claim.

While those calling for greater transparency and stronger whistleblower protections deserve credit, it is important to consider the other side.

Some of the former employees are affiliated with the radical effective altruism movement, which tends to focus on the most catastrophic risks and emphasises the long-term effects of our actions. This includes the possibility that an out-of-control AI system could take over and wipe out humanity.

Critics have accused the movement of spreading apocalyptic scenarios regarding technology without proper backing.


(Embedded post on X from Rachid Flih, co-founder of open-source platform Panora)

One of the signatories, Kokotajlo, said that before joining OpenAI, he predicted that artificial general intelligence (AGI), an AI capable of human-like cognition, wouldn’t arrive until 2050.

Now, he says there’s a 50 percent chance the tech will arrive by 2027. He also believes there’s a 70 percent chance that this advanced AI will destroy humanity.

AI scientists like Yann LeCun of Meta have called people concerned about the field’s rapid advancement “doomers”, claiming that their “misplaced sense of urgency reveals an extremely distorted view of reality”.

Leopold Aschenbrenner published a 165-page study outlining a path from GPT-4 to superintelligence, its risks, and the difficulty of aligning such intelligence with human aims. Aschenbrenner was a member of OpenAI’s superalignment team and was sacked in April for allegedly leaking confidential information.

These scenarios have been discussed extensively before, but the claims remain unproven (at least for now).

Another key demand is the signatories’ opposition to NDAs that keep company insiders from raising risk-related concerns. Scrapping such agreements, however, carries legal hazards, including the risk of intellectual property leaking out. For sensitive information such as private data, trade secrets, or creative ideas that provide a competitive edge, NDAs offer an essential layer of protection.

‘Profit Over Safety’

The letter comes after two of OpenAI’s senior staffers, co-founder Ilya Sutskever and key safety researcher Jan Leike, quit last month. After leaving, Leike claimed that OpenAI had abandoned a culture of safety in favour of “shiny products”.

A Ploy to Impede AGI? 

Former OpenAI board member Helen Toner claimed in an interview that aired last week that CEO Sam Altman frequently misled and concealed facts from the board, particularly on safety procedures.

According to her, the board “was not informed in advance” about ChatGPT and actually learned about it on Twitter. (OpenAI did not dispute this outright, but in a statement it expressed disappointment that Toner continues to revisit these concerns.)

Elon Musk, who owns a rival chatbot and an AI business, will not be outdone. He is suing OpenAI on the grounds that it prioritises profits and its Microsoft partnership over the benefit of humanity.

Joshua Achiam, a research scientist at OpenAI, criticised the open letter on social media, arguing that employees going public with safety fears would make it harder for labs to address sensitive issues.

In a post on X, he said, “I think you are making a serious error with this letter. The spirit of it is sensible. However, disclosing confidential information from frontier labs, well-intentioned, can be outright dangerous. This letter asks for a policy that would, in effect, give safety staff carte blanche to make disclosures at will, based on their judgement.”

He isn’t the only one. “It would be more helpful if they raise specific problems with current or upcoming systems than just vaguely point to process generalities,” said Arun Rao, Meta’s lead product manager for GenAI.

Andrew Mayne, a former OpenAI employee and founder of Interdimensional.ai, echoed the sentiment: “This has created a situation where people with good intentions could create a scenario in which the opposite of what they want to happen occurs.”

With both parties defending their positions on AI safety, it remains to be seen whether all this is merely noise on the path to AGI.

Anshul Vipat

Anshul Vipat is a tech aficionado with an enthusiasm for the latest innovations in the digital world. He also has a keen interest in travelling, exploring, and cooking.