
Open Source is the Future of AI

Open source brings much-needed clarity to the often black box of AI development


Recently, Hugging Face CEO Clem Delangue took up the fight for open-source AI in a LinkedIn post, arguing that open-source AI may lag in some areas because of its emphasis on ethical transparency, but is ultimately safer and more sustainable.

That transparency comes at a cost, Delangue said. “Since open-source projects must be transparent, developers are forced to make responsible decisions a priority, which occasionally means sacrificing performance,” he wrote, adding, “Long-term, open-source = safer, more ethical AI!”

Delangue isn’t the only one to say this. Rahul Roy-Chowdhury, CEO of Grammarly, has said that open-source AI has the potential to bring much-needed transparency.

In a post on LinkedIn, he wrote, “Open source brings sunlight to what’s often a black box of AI development. To understand if an AI tool is safe, secure, and trustworthy, transparency is vital. And technology’s best model for transparency is the use of open source.”

Meta’s CEO Mark Zuckerberg has a different perspective on open-source AI. In a recent interview, he said, “I don’t view it as giving it away, I view it as you guys all making it better for me.”

The Bigger Picture 

Hugging Face’s CSO Thomas Wolf likened open-source models to sports without doping.

“Performances can be slightly lower but in the end, transparency is more sustainable, healthy and exciting to watch and participate in!” he added.

There are also advocates of the idea that opening up AI models to a larger audience encourages rigorous analysis and accountability. In a recent talk with Mark Zuckerberg, NVIDIA CEO Jensen Huang said, “Open-sourcing AI models allows for better safety and transparency, as the models can be scrutinised by the broader community.”

Don’t Be Too Open

Meanwhile, OpenAI CEO Sam Altman has a different view on open source. Altman wants AI models to be subject to some form of regulation, so much so that he recently wrote an opinion piece in The Washington Post titled “Who Will Control the Future of AI?”

In the article, he argues that to ensure AI benefits the greatest number of people possible, the world needs a US-led global coalition of like-minded countries, along with an innovative new strategy.

Regulation Time?

Altman has been advocating for AI regulation since as early as 2015. 

Last month, in a podcast, he said that he is keen on AI being regulated by an international agency. “I think there will come a time in the not-so-distant future, like we’re not talking decades and decades from now, where frontier AI systems are capable of causing significant global harm,” he said.

Altman has also been backed by OpenAI co-founder Elon Musk, who recently said that AI has “great, great promise” but needs regulation. Musk added that AI is a “bigger risk to society than cars or planes or medicine”.

Even Andrew Ng, founder of DeepLearning.AI, agrees with the notion. However, he said, “Regulation should be applied to AI applications, not to general-purpose AI technology.”

Legislation to regulate the fast-changing technology is already underway. In March, the EU approved the Artificial Intelligence Act, which will categorise AI risk and ban unacceptable use cases. 

President Joe Biden also signed an executive order last year calling for greater transparency from the makers of the world’s biggest AI models. This year, the state of California has been leading the charge on regulating AI.

Fight For Open Source

Not many agree with Altman, though. Zuckerberg, for one, believes organisations should control their own destiny rather than get locked into a closed vendor. “Many organisations don’t want to depend on models they cannot run and control themselves,” he said.


There are many like him. Andrew Ng has emphasised the global stakes involved in open-source software. “If the attempts to squash open-source software succeed, almost all nations will simply be losers…the United States as well,” he said. His comments highlight how important open source is to fostering innovation and guaranteeing that everyone has access to the technology.

The Perils of Open Source AI

In a letter, Zuckerberg argued that most of these concerns are unfounded and framed Meta’s strategy as a democratising force in AI development.

“Open-source will ensure that more people around the world have access to the benefits and opportunities of AI, that power isn’t concentrated in the hands of a small number of companies, and that the technology can be deployed more evenly and safely across society,” he wrote. “It will make the world more prosperous and safer.” 

The main focus of Zuckerberg’s statement was the launch of Llama 3.1, which the company claims is the first large-scale open-source language model to reach the so-called “frontier” of artificial intelligence capabilities.

The Case For Open Source

One popular counterargument to open-sourcing AI is that hostile states could utilise the technology to compromise the national security of other nations or that bad actors could use it maliciously.

In a recent interview, Zuckerberg conceded the point, but countered that closed-source approaches are not immune either, since their code is still susceptible to theft and exploitation.

According to Zuckerberg, open-source AI will be better at tackling “unintentional” kinds of harm than closed AI, because transparent systems are more open to examination and correction.

For this reason, he argues, “open-source software has historically been more secure.”

Yann LeCun agreed with Zuckerberg’s assertion. In a post on LinkedIn, he listed the many benefits of open-source AI: it gives users the flexibility to modify and enhance models to meet particular requirements, to share knowledge with a wide audience, and to build AI solutions more quickly and cheaply.

Globally, open-source AI fosters diversity by enabling individuals from many backgrounds to use and contribute to AI technology. It also makes AI more widely available by preventing the consolidation of power in a small number of large corporations.

Not Many Agree With The Notion

However, not everyone shares this view. Geoffrey Hinton, for example, has gone so far as to compare open-sourcing AI models to open-sourcing nuclear weapons.

Hinton has said that open-source models are quite risky, and he wishes governments would prohibit businesses from releasing AI models as open source, because “bad actors can fine-tune them for all sorts of bad things.”

His case is that open access makes it simpler for bad actors to create dangerous applications, such as bioweapons.

Open-source models do carry some risk: once released, they can be altered, and harmful derivatives cannot be recalled. The persistent threat to the open-source ecosystem is also highlighted by recent hacking cases, such as malicious packages uploaded to the Python Package Index (PyPI) and the npm registry, which racked up millions of downloads of compromised code before being detected.

These attacks underline the importance of security when using open-source libraries. Developers need to be careful about the packages they depend on and make sure they come from trusted sources.
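One simple, widely used safeguard is to verify a downloaded archive against a published checksum before installing it. The following is a minimal Python sketch of that idea; the archive name and expected digest are placeholders, and in practice the digest would come from a source the developer already trusts, such as the project's release notes or a lock file.

import hashlib
import sys

def sha256_of(path: str) -> str:
    # Compute the SHA-256 digest of a file, reading it in chunks so large
    # archives do not have to fit in memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    # Placeholder values: the archive would come from a package index and
    # the expected digest from the project's published checksums.
    archive = "example_package-1.0.0.tar.gz"
    expected = "0000000000000000000000000000000000000000000000000000000000000000"

    if sha256_of(archive) != expected:
        sys.exit("Checksum mismatch: refusing to install this archive")
    print("Checksum OK")

Tools such as pip's hash-checking mode automate the same comparison across an entire list of dependencies.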

As the debate continues, the future of AI development may very well hinge on the balance between the openness of innovation and the need for security.


Anshul Vipat

Anshul Vipat is a tech aficionado, enthusiastic about the latest innovations in the digital world. He also takes a keen interest in travelling, exploring and cooking.