
Nobody is as Responsible as Microsoft & Google in AI

Microsoft has developed extensive policies to support responsible AI, collaborating with OpenAI and independently managing a safety review process, putting it ahead in the race


Illustration by Nikhil Kumar

Recently, on The Decoder podcast, when asked about OpenAI’s Sora potentially being trained on YouTube videos, Google CEO Sundar Pichai agreed it would be inappropriate and implied that such an action would violate YouTube’s terms and conditions.

“We don’t know the details. Our YouTube team is following up and trying to understand. We have terms and conditions and we would expect people to abide by those terms and conditions,” Pichai added.

Against this backdrop, AIM looked at how big tech companies are approaching responsible AI. Google and Microsoft appear to be making significant strides in deploying AI while addressing ethical concerns.

Based on scores given by AIM, Microsoft and Google rank the highest in terms of responsible AI.

Pay the Dues Where Needed

Pichai, expressing his empathy towards creative content creators, said, “I can understand how emotional a transformation this is, and I think part of the reason you saw even, through Google I/O, when we’re working on products like music generation, we have really taken an approach by which we are working first to make tools for artists. So the way we have taken that approach in many of these cases is to put the creator community as much at the centre of it as possible.”

Exactly a month ago, YouTube CEO Neal Mohan confirmed that using YouTube videos for training AI models violates the platform’s terms of service. However, Mohan could not confirm whether OpenAI had indeed used YouTube videos.

“From a creator’s perspective, when they upload their hard work to our platform, they have certain expectations… Lots of creators have different sorts of licensing contracts in terms of their content on our platform,” Mohan said. 

Pichai also noted that YouTube is essentially a licensing business: Google licenses a lot of content from creators and pays them back through its advertising model. He said the music industry has a huge licensing relationship with YouTube that benefits both sides.

In contrast, The New York Times filed a lawsuit against OpenAI last year, alleging unauthorised use of its published work to train AI models and citing copyright infringement.

However, OpenAI has since partnered with several news agencies to train its AI models using content from these organisations.

Similarly, Apple has licensed training data for its AI from Shutterstock. The deal, reportedly worth between $25 million and $50 million, covers Shutterstock’s entire image, video, and music library.

Last year, Apple also began negotiations with major news and publishing organisations, seeking permission to use their material in developing generative AI systems.

Is Openness a Big Factor for Tech Companies?

In a recent Wall Street Journal interview, OpenAI CTO Mira Murati was asked what data the company had used to train Sora. Her response, “Actually, I am not sure,” went viral, though she elaborated that the company had stuck to “publicly available data and licensed data.”

With the new GPT-4o model, OpenAI has come under scrutiny due to allegations that it used actress Scarlett Johansson’s voice without permission for one of the model’s voices, Sky. The voice was quickly pulled after users noted its striking similarity to Johansson’s voice in the 2013 film Her.

This highlights that OpenAI currently lacks full transparency regarding its training data, although it is gradually improving in this area.

As mentioned before, OpenAI recently signed content and product partnership agreements with The Atlantic and Vox Media, deals that will help the AI firm train and improve its products.

Also, a few days ago, OpenAI struck a deal with News Corp, granting its chatbots access to new and archived material from The Wall Street Journal, the New York Post, MarketWatch, Barron’s, and others.

The deal, reportedly worth $250 million, marks a significant increase from just a few months ago, when OpenAI was offering a mere $1 million for media licensing to train its large language models.

Meanwhile, Meta AI chief Yann LeCun recently confirmed that Meta has acquired $30 billion worth of NVIDIA GPUs to train its AI models. With that compute in hand, Meta’s current AI efforts centre on refining and training more advanced versions of its Llama 3 models.

Reports also suggest that Meta is considering paying news organisations for content to train its language models, making its generative AI products, including Meta AI, more effective and competitive.

On a similar note, AI startup Karya, which works with Microsoft, employs and pays over 30,000 rural Indians to create high-quality speech, text, image, and video datasets for training LLMs in 12 Indian languages.

AI Safety Policies So Far

Recently, OpenAI released its safety policy, which states, “We believe in a balanced, scientific approach where safety measures are integrated into the development process from the outset. This ensures that our AI systems are both innovative and reliable and can deliver benefits to society.” 

Similarly, Microsoft has developed policies to support responsible capability scaling and has collaborated with OpenAI on new frontier models using Azure’s supercomputing infrastructure. It also independently manages a safety review process and participates in a joint deployment safety board with OpenAI that reviews models, including GPT-4.

While Apple doesn’t have an AI safety policy as such, it appears to be correcting this with recent hiring plans. Apple is also expected to partner with OpenAI in the coming weeks, which could spur a formal AI safety policy.

At Google Cloud’s Next ’23, VP of Cloud Security Sunil Potti unveiled GCP’s security strategy built on leveraging Mandiant expertise, integrating security into innovations, and providing expertise across environments. 

This expands on the Security AI Workbench, introduced in April and powered by Google’s Sec-PaLM. Potti emphasised generative AI’s potential to tackle evolving threats, tool proliferation, and talent shortages, enhancing security operations across various applications.

Similarly, AWS’s responsible AI policy states, “We are committed to developing AI responsibly, taking a people-centric approach that prioritises education, science, and our customers, to integrate responsible AI across the end-to-end AI lifecycle.”

Meanwhile, the responsible AI policy at Meta focuses on five pillars – privacy and security, fairness and inclusion, robustness and safety, transparency and control, and accountability and governance. 

OpenAI Has a Safety Board. What About the Others?

Recently, OpenAI formed a safety and security committee responsible for making recommendations on critical safety and security decisions for all OpenAI projects. The discussions revolved around the likely early arrival of GPT-5 and how the committee will serve as a safety bunker for OpenAI. 

In addition to being led by OpenAI board directors, the group will also include technical and policy experts to guide it. However, this announcement came right after OpenAI disbanded its superalignment team, led by Ilya Sutskever and Jan Leike.

Similarly, as part of safeguarding AI responsibility, Google has established the Responsible AI and Human-Centred Technology (RAI-HCT) team. This team is tasked with conducting research and developing methodologies, technologies, and best practices to ensure that AI systems are built responsibly.

Recently, a Bloomberg report stated that Microsoft has increased its Responsible AI team from 350 to 400 members to ensure the safety of its AI products. Microsoft also released its Responsible AI report, highlighting the creation of 30 responsible AI tools over the past year, the expansion of its Responsible AI team, and the mandate for teams developing generative AI applications to measure and map risks throughout the development cycle.

Additionally, Microsoft has brought on Inflection AI and DeepMind co-founder Mustafa Suleyman to steer its AI initiatives ethically.

At last year’s AWS re:Invent conference, AWS’s responsible AI lead Diya Wynn highlighted the importance of using AI responsibly. She emphasised creating a culture of responsibility and a holistic approach to AI within organisations. 

She cited a recent survey showing that 77% of respondents are aware of responsible AI and 59% see it as essential for business. However, leaders aged 18 to 44 are more familiar with the concept than those over 45, and only a quarter of respondents have begun developing a responsible AI strategy, with most lacking a dedicated team.

Similar to OpenAI, Meta disbanded its Responsible AI team last year, reallocating members to various groups within the company. However, unlike at OpenAI, most team members moved to its generative AI division, where they continue to address AI-related harms and support responsible AI development across Meta.

Microsoft Leads The Way

Microsoft has developed extensive policies to support responsible AI, collaborating with OpenAI while independently managing a safety review process, putting it ahead in the race. However, companies like Meta and Google are doing just as much to ensure their AI technology is safe and ethically built. Soon, with the tide turning, most companies, including Apple and OpenAI, may strengthen their teams to ensure a responsible approach to AI.



Gopika Raj

With a Master's degree in Journalism & Mass Communication, Gopika Raj infuses her technical writing with a distinctive flair. Intrigued by advancements in AI technology and its future prospects, her writing offers a fresh perspective in the tech domain, captivating readers along the way.