
Why Amazon Q Deserves Another Chance 

Amazon Q hasn’t been released yet, and criticisms are already mounting.


Illustration by Nikhil Kumar

At re:Invent, amid much fanfare, AWS introduced Amazon Q, a generative AI chatbot designed specifically for businesses. The company claimed that, unlike OpenAI’s ChatGPT, it is much safer and more secure. Contrary to these assertions, however, Amazon Q has come under the spotlight for all the wrong reasons. 

Barely three days after the launch, concerns began to rise among employees regarding the accuracy and privacy of the chatbot. Q is reportedly “suffering from significant hallucinations” and has been implicated in leaking sensitive data, such as the locations of AWS data centres, internal discount programmes, and unreleased features.

In response, Amazon quickly released a statement that said, “No security issue was identified as a result of that feedback. We appreciate all of the feedback we’ve already received and will continue to tune Q as it transitions from being a product in preview to being generally available.” 

A Case for Amazon’s Q

A distinguishing feature of Amazon Q, as highlighted at re:Invent, is that employees can use it to complete tasks in popular systems like Jira, Salesforce, ServiceNow, and Zendesk. For example, an employee could ask it to open a ticket in Jira or create a case in Salesforce.

Interestingly, Amazon Q hasn’t even been generally released yet, and criticisms are already mounting. Being in preview, it is expected to undergo corrections as necessary. 

“Companies need to realise that it is incredibly difficult to make an LLM that doesn’t hallucinate. At best they can minimise it to some degree, but won’t be able to get rid of it. What OpenAI did with GPT-4 is a herculean act that others may not be able to easily imitate,” said Nektarios Kalogridis, founder and CEO of DeepTrading AI, addressing concerns about Amazon Q.

Moreover, Q cannot be blamed directly for hallucinating, as it can work with any of the models found on Amazon Bedrock, AWS’s repository of AI models, which includes Meta’s Llama 2 and Anthropic’s Claude 2. 

The company said customers who use Q typically choose which model works best for them, connect to the Bedrock API for that model, use it to learn from their data, policies, and workflows, and subsequently deploy Amazon Q. If there are instances of hallucination, they could therefore stem from any of the aforementioned underlying models.
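To make the workflow concrete, here is a minimal sketch of how a customer might call a chosen Bedrock model, in this case Claude 2. The helper function and the sample question are illustrative assumptions, not Amazon Q internals; the request format shown is the one Claude 2 expects on Bedrock.

```python
import json

# Hypothetical helper: builds the request body for Claude 2 on Amazon Bedrock,
# which takes a Human/Assistant-style prompt and a 'max_tokens_to_sample' limit.
def build_claude_request(question: str, context_chunks: list[str], max_tokens: int = 300) -> str:
    context = "\n".join(context_chunks)
    prompt = (
        f"\n\nHuman: Answer using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\n\nAssistant:"
    )
    return json.dumps({"prompt": prompt, "max_tokens_to_sample": max_tokens})

# Actually sending the request requires AWS credentials and the boto3 SDK
# (shown for illustration only):
# import boto3
# client = boto3.client("bedrock-runtime")
# response = client.invoke_model(
#     modelId="anthropic.claude-v2",
#     body=build_claude_request("What is our ticketing workflow?",
#                               ["Tickets are opened in Jira and triaged weekly."]),
# )
```

Swapping in Llama 2 or another Bedrock model would mean changing the `modelId` and adapting the request body to that model’s format, which is precisely the flexibility the company describes.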

Moreover, ChatGPT has also had its share of issues with leaking sensitive information. Most recently, it leaked private and sensitive data when asked to repeat the word ‘poem’ indefinitely. But that hasn’t deterred enterprises from using ChatGPT. 

Like Amazon Q, OpenAI’s ChatGPT Enterprise hasn’t been made broadly available yet. OpenAI’s COO, Brad Lightcap, revealed in a recent interview that ‘many, many, many thousands’ of companies are on the waiting list for ChatGPT Enterprise. Since November, 92% of Fortune 500 companies have used ChatGPT, a significant increase from 80% in August.

Enterprise Chatbots are the Future 

Despite the concerns, Amazon Q comes with great benefits. 

Just like ChatGPT Enterprise, Amazon Q will also allow customers to connect to their business data, information, and systems, so it can synthesise everything and provide tailored assistance to help employees solve problems, generate content, and take actions relevant to their business. 

The above features are a result of retrieval-augmented generation (RAG), which retrieves data relevant to a question or task and supplies it as context to the LLM. However, RAG carries a risk of data leakage, as Amazon Q’s case shows.

Ethan Mollick, professor at Wharton, expressed that RAG has its own advantages and disadvantages. “I say it a lot, but using LLMs to build customer service bots with RAG access to your data is not the low-hanging fruit it seems to be. It is, in fact, right in the weak spot of current LLMs — you risk both hallucinations & data exfiltration.” 

OpenAI introduced something similar at DevDay with its Assistants API, which includes a function called ‘Retrieval’ that is essentially RAG. It enhances an assistant with knowledge from outside the model, such as proprietary domain data, product information, or documents provided by users. 

Apart from OpenAI and AWS, Cohere is quietly collaborating with enterprises to incorporate generative AI capabilities. 

Cohere was one of the first to understand the importance of RAG as a method to reduce hallucinations and keep a chatbot’s knowledge current. In September, it introduced the Chat API with RAG, which lets developers combine user inputs, data sources, and model outputs to create strong product experiences.

Despite the concerns being raised about hallucinations and data leaks, enterprises cannot completely ditch generative AI chatbots, as the technology is only going to get better over time. This is just the beginning. 


Siddharth Jindal

Siddharth is a media graduate who loves to explore tech through journalism and putting forward ideas worth pondering about in the era of artificial intelligence.