
OpenAI CTO Mira Murati is an Absolute PR Disaster

OpenAI has a history of bad PR but knows how to turn a crisis into an opportunity. 


During a recent podcast at Johns Hopkins University, Mira Murati, the chief technology officer of OpenAI, acknowledged the criticism that ChatGPT has received for being overly liberal and emphasised that this bias was unintentional. 

“We’ve been very focused on reducing political bias in the model behaviour. ChatGPT was criticised for being overly liberal, and we’re working really hard to reduce those biases,” said Murati. 

However, Murati has yet to offer specific details or measures on how these biases will be addressed, framing the work instead as part of OpenAI's ongoing effort to make the model more balanced and fair.

In an interview back in March, however, Murati was asked where the video data used to train Sora came from. The CTO feigned ignorance, claiming she did not know the answer, and the exchange made her the talk of the town on social media.

Netizens were quick to create memes highlighting her as “an absolute PR disaster”.

OpenAI Needs No Safety Lessons

OpenAI has a history of bad PR, but it knows how to turn a crisis into an opportunity. In an earlier discussion at Dartmouth, Murati focused on safety, usability, and reducing biases to democratise creativity and free up humans for higher-level tasks.

In a recent post on X, she said that to make sure these technologies are developed and used in a way that does the most good and the least harm, OpenAI works closely with red-teaming experts from the early stages of research.

“You have to build them alongside the technology and actually in a deeply embedded way to get it right. And for capabilities and safety, they’re actually not separate domains. They go hand in hand,” she added.

Notably, her optimism about AI stems from the belief that developing smarter and more secure systems will lead to safer and more beneficial outcomes. However, she is now facing questions about ChatGPT’s perceived liberal bias.

Meanwhile, OpenAI’s former chief scientist Ilya Sutskever launched Safe Superintelligence shortly after leaving the company in May 2024, allegedly due to disagreements with CEO Sam Altman over AGI safety and advancement.

In an apparent response to this and to ward off safety concerns, OpenAI formed a Safety and Security Committee led by directors Bret Taylor, Adam D’Angelo, Nicole Seligman, and Altman.

Murati to the Rescue 

In a July 2023 discussion with Microsoft CTO Kevin Scott, Murati expressed concerns about the prevailing uncertainty in the AI field, emphasising the need for clear guidance and decision-making processes. 

She highlighted the challenge of determining which aspects of AI to prioritise, develop, release, and position effectively. “When we began building GPT more than five years ago, our primary focus was the safety of AI systems,” said Murati.

Murati highlighted the risks of letting humans directly set goals for AI systems, since complex, opaque processes can cause serious errors or unintended consequences. Her team therefore shifted its focus to reinforcement learning from human feedback (RLHF) to ensure AI’s safe and effective development.

In short, after GPT-3 was developed and released through the API, OpenAI was able to integrate AI safety into real-world systems for the first time.

An Accidental PR Win

Murati’s acknowledgement of ChatGPT’s perceived liberal bias and her emphasis that this bias was unintentional represent a significant and positive step towards the responsible use of AI. 

Her willingness to address criticism openly demonstrates a commitment to transparency and accountability, both of which are crucial for the ethical development of technology.

Murati’s approach not only seeks to rectify past concerns but also underscores a proactive stance on refining AI systems to better serve diverse user needs. This openness fosters trust and shows that OpenAI is dedicated to addressing issues constructively. 

Murati’s tryst with responsible AI is not new-found. In a 2021 interview, she discussed AI’s potential for harm, emphasising that unmanaged technology could lead to serious ethical and safety concerns. Some critics argued that Murati’s comments were too alarmist or did not fully acknowledge the positive potential of AI. 

While Murati aimed to promote responsible AI, the backlash led to broader debates on the technology’s future and its societal impacts.

Not to forget the ‘OpenAI is nothing without its people’ campaign started by Murati during Sam Altman’s ousting. One thing is for sure: Murati is truly mysterious, and no one knows what she’s going to say next to the media. We are not complaining! 



Tarunya S

As a passionate enthusiast of caffeine and journalism, I transform tech into words. I enjoy mountain hikes as much as binge-watching new Netflix series.