Amazon News, Stories and Latest Updates https://analyticsindiamag.com/news/amazon/ Artificial Intelligence news, conferences, courses & apps in India Fri, 26 Jul 2024 09:03:21 +0000

Generative AI Could End NEET Cheating and Paper Leaks for Good https://analyticsindiamag.com/industry-insights/ai-in-education/generative-ai-could-end-neet-cheating-and-paper-leaks-for-good/ Fri, 26 Jul 2024 09:03:20 +0000

The fact that the NEET irregularities were caught only after the results were out, and not during the exam itself, raises concerns about how effective NTA’s proctoring technology is.

The post Generative AI Could End NEET Cheating and Paper Leaks for Good appeared first on AIM.


The National Testing Agency (NTA) recently came under fire following allegations of malpractice in conducting the 2024 National Eligibility cum Entrance Test (NEET).

Several students scoring perfect marks in the test raised alarms, leading to allegations of cheating and paper leaks. While the NTA bears the brunt of the scandal, it also raises concerns about the reliability of the test itself.

The fact that these irregularities were caught only after the results were out, and not during the exam itself, raises concerns about how effective NTA’s proctoring technology is. This is particularly timely, since the testing agency made much of its use of AI for surveillance this year.

“Generative AI can play a huge role in assessment content authoring. For example, our Ai-levate product includes an AI-powered service called item rewriter, which can create five different versions of the same question while maintaining its core essence and difficulty level,” Adarsh Sudhindra, VP of growth and strategy at Excelsoft, told AIM.

This, he explained, helped ensure that questions weren’t repeated and made it difficult for anyone to memorise a specific question in the event of a leak.
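Excelsoft has not published how its item rewriter works internally. As a rough illustration only, a paraphrasing service like this can be driven by a prompt template along the following lines; the function name and prompt wording here are hypothetical, not Ai-levate's actual implementation:

```python
def item_rewriter_prompt(question: str, n_variants: int = 5) -> str:
    """Build a prompt asking an LLM for paraphrased variants of an exam question.

    The constraint in the prompt mirrors the requirement described in the
    article: same concept and difficulty, different surface form.
    """
    return (
        f"Rewrite the following exam question in {n_variants} different ways. "
        "Each version must test exactly the same concept at the same difficulty "
        "level, but use different wording and, where possible, different "
        "numbers that lead to the same answer.\n\n"
        f"Question: {question}"
    )

# The resulting string would be sent to whichever LLM backs the service.
print(item_rewriter_prompt(
    "A ball is dropped from a height of 20 m. How long does it take to land?"
))
```

With per-candidate variants like this, a leaked paper matches at most one candidate's question set, which is the anti-memorisation property Sudhindra describes.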

“A proper use of technology at various stages in the assessment life cycle can significantly improve the overall security of testing. There are points where human error can occur; if we can automate all those points and plug those leaks and bring in safe, secure practices of using technology, this would not happen,” said Sudhindra.

Where Does GenAI Fit into Proctoring and Assessment?

Mysuru-based edtech company Excelsoft has been providing AI-based proctoring services for about eight years. Ai-levate is just one of several products that the company offers, with plans to integrate GenAI within their e-learning offerings as well. 

With AI-driven products contributing 10 to 20% of the company’s top line, its proctoring solution easyProctor is used in several countries, including the US and the UK, to proctor large-scale examinations. 

Apart from its broader feature set, easyProctor is primarily an AI-powered solution that uses AI algorithms and real-time monitoring to proctor examinations and tests, both offline and online.

Now, while easyProctor itself doesn’t use generative AI for proctoring, Sudhindra said that it had a huge scope for assessments, with services like the item rewriter.

It ensures that examinations with massive question banks, like NEET, do not end up with questions that are similar to or in conflict with one another. This is a common problem, with grace marks awarded every year due to poorly framed questions.

For proctoring, the company also has its own set of models, fine-tuned specifically for that purpose, having switched over from Amazon Bedrock and SageMaker. 

“Since 2021, we’ve been developing, training, and fine-tuning our models specifically for the proctoring use case. We used to use Amazon’s services a couple of years ago, but the cost and trainability were a challenge.

“So we have developed our own models and they are on par with Amazon’s models because we are able to train and fine-tune it for the proctoring use case. Whereas some of these open models out there are not built for proctoring,” he said.

This means multi-modal capabilities fine-tuned specifically for the purpose of proctoring, including multi-camera monitoring, environmental scanning, facial recognition, screen and audio monitoring, keystroke analysis, and malpractice detection, all powered by their AI models.

The company’s success shows that AI does have a place in the proctoring field, especially since, according to Sudhindra, Excelsoft’s easyProctor is currently employed by 12 major universities in India.

Despite competing in India with major companies like ProctorU, Honorlock, and Eklavvya, the company has bagged several major government partnerships, with European countries and US state governments, to proctor their driver’s licence and civil service exams, respectively.

Why Doesn’t the NTA Employ Similar Methods?

Well, they sort of do. The exact details of the NTA’s use of AI in the NEET examinations are unclear. 

However, what we do know is that they made use of AI-powered CCTV surveillance, analytics tools, face recognition and post-examination analysis, with the command centre also using AI to assess issues of malpractice. In fact, the NTA also released details of malpractice issues being called out this year, thanks to the use of these AI-powered measures.

But, as Sudhindra said, there are still points of exposure that can be rectified when it comes to the entire process of proctoring and assessment. “Everywhere there is human exposure, wherever you can bring in technology and automate this, you are increasing the overall security of the assessment process. So, like I said, the construction of the exams can be all digitised,” he said.

Additionally, he suggested the use of a single central cloud server with limited access, increasing accountability and reducing the possibility of leaks. He also spoke about the costs of these measures. 

While the amount of AI integration would determine the cost, overall, it was less expensive to employ these methods than to hire more proctors to plug these gaps.

“The ratio of the proctor to the candidate also matters. It’s 1:8, which is the current industry standard. With augmentation, it can go to 1:16. With a little more AI, it can go to 1:32, thereby bringing the cost down and decreasing the bandwidth also,” he pointed out.
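Taken at face value, those ratios translate directly into staffing needs. A quick sketch of the arithmetic (the candidate count below is illustrative, not a figure from the article):

```python
import math

def proctors_needed(candidates: int, ratio: int) -> int:
    """Proctors required at a proctor-to-candidate ratio of 1:ratio,
    rounding up so every candidate is covered."""
    return math.ceil(candidates / ratio)

# Hypothetical pool of one million candidates at each ratio Sudhindra cites:
candidates = 1_000_000
for ratio in (8, 16, 32):
    print(f"1:{ratio} -> {proctors_needed(candidates, ratio):,} proctors")
```

Each doubling of the ratio halves the headcount, which is where the claimed cost and bandwidth savings come from.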

Potentially, with more time and as the NTA gets used to using AI-powered solutions, the accountability of major exams like NEET can improve, ensuring that students are given a fair chance when it comes to pursuing their career goals.

Amazon Announces ‘Trusted AI Challenge’ for LLM Coding Security https://analyticsindiamag.com/ai-news-updates/amazon-announces-trusted-ai-challenge-for-llm-coding-security/ Mon, 08 Jul 2024 14:02:06 +0000

Amazon’s AI challenge sort of mimics OpenAI’s method of building responsible AI.

The post Amazon Announces ‘Trusted AI Challenge’ for LLM Coding Security appeared first on AIM.


Amazon has announced a global university competition focused on responsible AI for LLM coding security. The ‘Amazon Trusted AI Challenge’ is offering $250,000 in sponsorship and monthly AWS credits to each of the 10 teams selected for the competition, which begins in November 2024. The winning team will have a chance at $700,000 in cash prizes. 

The students will participate in a tournament-style competition in which they either develop AI models or form red teams, with the aim of improving the AI user experience, preventing misuse, and helping users create safer code. 

Model developers will focus on adding security features to AI models that generate code, while testers will create automated methods to test these models. Each round of the competition will also involve multiple interactions, allowing teams to improve their models and techniques by identifying strengths and weaknesses.

Improving AI Through Red Teaming

“We are focusing on advancing the capabilities of coding LLMs, exploring new techniques to automatically identify possible vulnerabilities and effectively secure these models,” said Rohit Prasad, senior vice president and head scientist, Amazon AGI.

“The goal of the Amazon Trusted AI Challenge is to see how students’ innovations can help forge a future where generative AI is consistently developed in a way that maintains trust, while highlighting effective methods for safeguarding LLMs against misuse to enhance their security,” said Prasad.  

Amazon’s AI challenge is a promising way to build more robust and secure coding systems by collaborating with some of the brightest young minds in the industry. Similar methods have been adopted by other companies, including OpenAI, which runs cybersecurity and bounty challenges. Its last competition invited people to help frame ways to deploy responsible AI models.

Watch Out, Chatbots! Amazon Metis is Almost Here https://analyticsindiamag.com/ai-trends-future/amazon-metis/ Fri, 28 Jun 2024 12:30:00 +0000

With the anticipated launch of Metis, Amazon is poised to dispel any notion of 'falling behind in the AI technological race’.

The post Watch Out, Chatbots! Amazon Metis is Almost Here  appeared first on AIM.


With Anthropic’s Claude 3.5 Sonnet currently leading the chatbot race, tech giant Amazon wishes not to be left behind. The company has entered the chatbot competition, developing its own consumer-focused AI chatbot, Metis, which will be unveiled later this year. 

The chatbot will be available via a web browser and powered by one of the company’s proprietary AI models, Olympus. According to reports, Olympus outperforms Amazon’s publicly available AI model Titan.

The RAG Advantage

In a market packed with chatbots like ChatGPT, Microsoft Copilot, Anthropic’s Claude, Meta’s Llama, and Perplexity.ai, the most pressing question is, “What can Amazon do differently?” Amazon is late to the AI game, and with so many already established names, simply matching the competition will not suffice.

Amazon’s Metis will use retrieval-augmented generation (RAG) to tap into information beyond the data on which its model was built. For example, Metis should be able to provide the most recent stock values, which non-RAG chatbots cannot.

RAG merges information retrieval with natural language generation, allowing AI to access and incorporate specific external facts into its replies, increasing their effectiveness and accuracy.

“The main advantage of RAGs over LLMs is that the former is based entirely on a proprietary dataset that the owner of the RAG can control, allowing for more targeted applications,” Renat Abyasov, CEO of AI business Wonderslide, said in an interview.

Using RAG also offers real-world advantages. According to a recent study published in the NEJM AI journal, RAG can significantly increase the performance of LLMs in answering medical inquiries.

Once a business like OpenAI, Google, or Amazon has trained an AI model on a large dataset for weeks or months, there is usually no way to update the model with fresh information. RAG overcomes this by supplementing an AI output with external data.
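In code, the RAG loop described above is essentially "retrieve, then prepend". The sketch below is a minimal illustration, not any vendor's implementation: it uses a toy word-overlap retriever in place of the embedding search over a vector index that production systems use, and it stops at prompt construction rather than making an actual LLM call.

```python
def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Toy retriever: rank documents by word overlap with the query.
    Real systems rank by embedding similarity over a vector index instead."""
    q = set(query.lower().split())
    ranked = sorted(corpus,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Augment the user question with retrieved context before calling the model."""
    context = "\n".join(retrieve(query, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "ACME stock closed at 102 today.",   # a fresh fact the base model never saw
    "The capital of France is Paris.",
    "ACME reported quarterly earnings yesterday.",
]
prompt = build_prompt("What did ACME stock close at today?", corpus)
# `prompt` is what gets sent to the LLM in place of the bare question.
```

Because the fresh fact travels inside the prompt, the underlying model needs no retraining to answer with current data, which is exactly the advantage claimed for Metis over non-RAG chatbots.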

Roadblocks Continue? 

Despite ambitious expectations, reports indicate that Amazon’s AI-powered version of its virtual assistant, Alexa, is still not ready. Former employees claim that Amazon lacks the data and chips needed to run the LLM that powers the new Alexa.

However, Amazon has denied these allegations, claiming that these former employees are unaware of the company’s current Alexa AI activities.

Amazon’s founder and former CEO, Jeff Bezos, has said that Amazon needs to catch up in the AI race. He was concerned about keeping pace with rivals OpenAI, Microsoft, and Google.

Bezos has also been emailing Amazon executives to inquire why more AI startups are not using its cloud services.

While the debate over Amazon’s performance in AI continues, the corporation has made significant strides in AI service delivery.

In November, Amazon released a preview of Amazon Q, a generative AI assistant that can be tailored to specific organisations. According to reports, Amazon is working on an improved version of Alexa that will use its Titan AI model.

With the anticipated launch of Metis, Amazon is poised to dispel any notion of ‘falling behind in the AI technological race’. This AI chatbot, coupled with the recent progress in AI service delivery, could mark a significant turning point for the international tech behemoth.

Nobody is as Responsible as Microsoft & Google in AI https://analyticsindiamag.com/ai-insights-analysis/nobody-is-as-responsible-as-microsoft-google-in-ai/ Tue, 04 Jun 2024 11:21:20 +0000

Microsoft has developed extensive policies to support responsible AI, collaborating with OpenAI and independently managing a safety review process, putting them ahead in the race

The post Nobody is as Responsible as Microsoft & Google in AI appeared first on AIM.


Recently, on The Decoder podcast, when asked about OpenAI’s Sora potentially being trained on YouTube videos, Google CEO Sundar Pichai agreed it would be inappropriate and implied that such an action would violate YouTube’s terms and conditions.

“We don’t know the details. Our YouTube team is following up and trying to understand. We have terms and conditions and we would expect people to abide by those terms and conditions,” Pichai added.

Against this backdrop, AIM looked at what big tech companies are doing to build responsible AI. Google and Microsoft appear to be making significant strides in responsible AI use and in addressing ethical concerns. 

From the scores given by AIM, Microsoft and Google rank the highest in terms of responsible AI.

Pay the Dues Where Needed

Pichai, expressing his empathy towards creative content creators, said, “I can understand how emotional a transformation this is, and I think part of the reason you saw even, through Google I/O, when we’re working on products like music generation, we have really taken an approach by which we are working first to make tools for artists. So the way we have taken that approach in many of these cases is to put the creator community as much at the centre of it as possible.”

Exactly a month ago, YouTube CEO Neal Mohan confirmed that using YouTube videos for training AI models violates the platform’s terms of service. However, Mohan couldn’t be sure if OpenAI had indeed used YouTube videos.

“From a creator’s perspective, when they upload their hard work to our platform, they have certain expectations… Lots of creators have different sorts of licensing contracts in terms of their content on our platform,” Mohan said. 

Additionally, Pichai added that YouTube is essentially a licensing business where Google licenses a lot of content from creators and pays them back through its advertising model. He said the music industry has a huge licensing relationship with YouTube that is beneficial for both sides. 

Contrastingly, last year the New York Times filed a lawsuit against OpenAI, alleging unauthorised use of its published work to train their AI, citing copyright issues related to its written content. 

However, OpenAI has since partnered with several news agencies to train its AI models using content from these organisations.

Similarly, Apple has licensed training data from Shutterstock, in a deal worth between $25 million and $50 million for its entire image, video, and music database. 

Last year, Apple also began negotiations with major news and publishing organisations, seeking permission to use their material in developing generative AI systems.

Is Openness a Big Factor for Tech Companies?

In a recent Wall Street Journal interview, OpenAI CTO Mira Murati was asked about the kind of data the company had used in Sora. Murati’s response went viral, where she said, “Actually, I am not sure,” elaborating that they had stuck to “publicly available data and licensed data.”

With the new GPT-4o model, OpenAI has come under scrutiny over allegations that it used actress Scarlett Johansson’s voice without permission for one of the model’s voices, Sky. The voice was quickly pulled after users noted its striking similarity to Johansson’s voice in the 2013 film Her.

This highlights that OpenAI currently lacks full transparency regarding its training data, although they are gradually improving in this area.

As mentioned before, OpenAI recently signed content and product partnership agreements with The Atlantic and Vox Media, helping the artificial intelligence firm boost and train its products.

Also, a few days ago, OpenAI gained access to News Corp publications, granting OpenAI’s chatbots access to new and archived material from the Wall Street Journal, the New York Post, MarketWatch, Barron’s, and others.

This deal, closed at $250 million, marks a significant increase from just a few months ago, when OpenAI offered a mere $1 million for media licensing to train its large language models.

Meanwhile, Meta AI chief Yann LeCun recently confirmed that Meta has obtained $30 billion worth of NVIDIA GPUs to train its AI models. Having acquired the GPUs, Meta’s current AI work is focused on refining and training more advanced editions of its Llama 3 models.

Reports also suggest that Meta is considering paying news organisations to better train its AI language models, making its generative AI offerings, including Meta AI, more effective and competitive. 

On a similar note, AI startup Karya employs and pays over 30,000 rural Indians to create high-quality datasets in speech, text, images, and videos for training LLMs in 12 Indian languages.

AI Safety Policies So Far

Recently, OpenAI released its safety policy, which states, “We believe in a balanced, scientific approach where safety measures are integrated into the development process from the outset. This ensures that our AI systems are both innovative and reliable and can deliver benefits to society.” 

Similarly, Microsoft developed policies to support responsible capability scaling and collaborated with OpenAI on new frontier models using Azure’s supercomputing infrastructure. It also independently managed a safety review process and participated in OpenAI’s safety board to review models, including GPT-4.

While Apple doesn’t have an AI safety policy as such, it appears to be correcting this with its recent hiring plans. Apple is also likely to partner with OpenAI in the coming weeks, which could spur a potential AI safety policy.  

At Google Cloud’s Next ’23, VP of Cloud Security Sunil Potti unveiled GCP’s security strategy built on leveraging Mandiant expertise, integrating security into innovations, and providing expertise across environments. 

This expands on the Security AI Workbench, introduced in April, with Google’s Sec-PaLM. Potti emphasised generative AI’s potential to tackle evolving threats, tool proliferation, and talent shortages, enhancing security operations in various applications.

Similarly, at AWS, their policy said, “We are committed to developing AI responsibly, taking a people-centric approach that prioritises education, science, and our customers, to integrate responsible AI across the end-to-end AI lifecycle.”

Meanwhile, the responsible AI policy at Meta focuses on five pillars – privacy and security, fairness and inclusion, robustness and safety, transparency and control, and accountability and governance. 

OpenAI Has a Safety Board, What About the Others? 

Recently, OpenAI formed a safety and security committee responsible for making recommendations on critical safety and security decisions for all OpenAI projects. The discussions revolved around the likely early arrival of GPT-5 and how the committee will serve as a safety bunker for OpenAI. 

In addition to being led by OpenAI board directors, the group will include technical and policy experts to guide it. However, this announcement came right after OpenAI disbanded its superalignment team, led by Ilya Sutskever and Jan Leike.

Similarly, as part of safeguarding AI responsibility, Google has established the Responsible AI and Human-Centred Technology (RAI-HCT) team. This team is tasked with conducting research and developing methodologies, technologies, and best practices to ensure that AI systems are built responsibly.

Recently, a Bloomberg report stated that Microsoft has increased its Responsible AI team from 350 to 400 members to ensure the safety of its AI products. Microsoft also released its Responsible AI report, highlighting the creation of 30 responsible AI tools over the past year, the expansion of its Responsible AI team, and the mandate for teams developing generative AI applications to measure and map risks throughout the development cycle.

Additionally, Microsoft has brought on Inflection AI and DeepMind co-founder Mustafa Suleyman to steer its AI initiatives ethically.

At last year’s AWS re:Invent conference, AWS’s responsible AI lead Diya Wynn highlighted the importance of using AI responsibly. She emphasised creating a culture of responsibility and a holistic approach to AI within organisations. 

She cited a recent survey showing that 77% of respondents are aware of responsible AI and 59% see it as essential for business. However, younger leaders, aged 18 to 44, are more familiar with the concept than those over 45, and only a quarter of respondents have begun developing a responsible AI strategy, with most lacking a dedicated team.

Similar to OpenAI, Meta dispersed its Responsible AI team last year, reallocating members to various groups within the company. However, unlike OpenAI, most team members transitioned to the generative AI sector to continue addressing AI-related harms and support responsible AI development across Meta.

Microsoft Leads The Way

Microsoft has developed extensive policies to support responsible AI, collaborating with OpenAI and independently managing a safety review process, putting it ahead in the race. However, companies like Meta and Google are doing just as much to ensure their AI tech is safe and ethically built. Soon, with the tide changing, most companies, including Apple and OpenAI, may strengthen their teams to ensure a responsible approach to AI.

AWS and GenAI Help Fractal Analytics Reduce Call Handling Time by up to 15% https://analyticsindiamag.com/ai-news-updates/aws-and-genai-help-fractal-analytics-reduce-call-handling-time-by-up-to-15/ Fri, 24 May 2024 09:44:05 +0000

The pilot showed a 10-15% reduction in average data retrieval time and a 30% call deflection rate due to self-service capabilities.

The post AWS and GenAI Help Fractal Analytics Reduce Call Handling Time by up to 15% appeared first on AIM.


Fractal Analytics, a leading AI solutions provider for Fortune 500 companies, has effectively reduced call handling time by up to 15% using its latest innovation, dubbed Knowledge Assist, on AWS. 

Traditionally, data retrieval from multiple internal sources is time-consuming and often involves unstructured data, increasing the complexity of queries. With Knowledge Assist, Fractal aims to make knowledge retrieval more efficient within large enterprises.

It chose to build Knowledge Assist on AWS, leveraging Amazon Bedrock for its generative AI capabilities. In addition, Fractal used Amazon Elastic Container Service (ECS) to build connectors for Knowledge Assist, and Amazon OpenSearch Service for vector/semantic search. The SaaS application layer runs on Amazon Elastic Kubernetes Service (EKS) and AWS Lambda serverless compute.
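For context, invoking a model on Amazon Bedrock typically looks like the sketch below. This is based on AWS's public SDK (boto3) rather than Fractal's actual code: the model ID is one of Bedrock's published Claude identifiers and may differ by region, the payload follows the Anthropic Messages format Bedrock documents, and `ask_claude` is a hypothetical helper name.

```python
import json

def claude_request_body(question: str, max_tokens: int = 512) -> str:
    """Build the Anthropic Messages payload that Claude models on Bedrock accept."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": question}],
    })

def ask_claude(question: str) -> str:
    """Send the payload to Bedrock and return the model's text reply.
    Requires boto3 and AWS credentials, so it is not executed here."""
    import boto3
    client = boto3.client("bedrock-runtime")  # region comes from your AWS config
    resp = client.invoke_model(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",
        body=claude_request_body(question),
    )
    return json.loads(resp["body"].read())["content"][0]["text"]
```

Because Bedrock exposes every hosted model behind this same `invoke_model` API, swapping LLMs means changing a model ID and payload shape rather than re-hosting anything, which is the experimentation speed Radhakrishnan describes below.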

“The generative AI space is evolving rapidly. Being able to choose from various LLMs on Amazon Bedrock, which we can swiftly implement or experiment with, along with the ability to use the platform as an API without hosting concerns, helps us experiment and scale faster,” said Fractal Analytics Client Partner for Products and Accelerators Ritesh Radhakrishnan.

Knowledge Assist adheres to stringent security and privacy standards, protecting data within each client’s network through private endpoints and end-to-end encryption. Personally identifiable information is masked before storage in the analytics layer.

During a six-month pilot programme, nearly 500 knowledge workers in contact centres adopted Knowledge Assist, handling hundreds of thousands of queries monthly and managing complex data from over 10,000 documents in PDF, DOC, and PPT formats. The pilot showed a 10-15% reduction in average data retrieval time and a 30% call deflection rate due to self-service capabilities.

Clients reported improved customer and employee satisfaction, less supervisor involvement, and enhanced upsell opportunities due to more available time on each call. Radhakrishnan explained that customers received faster and better answers, leading to improved customer satisfaction scores (CSAT). Agents experienced less frustration as they no longer needed to search multiple systems for answers.

Knowledge Assist also enhances compliance by providing the latest information, reducing instances of customers receiving incorrect or outdated information. This leads to a higher level of first-time issue resolution.

Moving forward, Fractal plans to implement more automated LLM evaluations and generate fresh insights into calls to help clients proactively address recurring issues and reduce call volumes. This continuous innovation in AI-driven solutions underscores Fractal’s commitment to improving business outcomes through advanced technology.

Fractal has been riding the GenAI wave for a long time. The company entered the generative AI space last June by introducing Flyfish, an all-round generative AI platform for digital sales. It then unveiled Kalaido.ai, India’s first Indian-languages text-to-image diffusion model. Currently, it is also leveraging GenAI for insurance and transforming the fashion value chain with vision intelligence.

Microsoft Eats into Amazon’s Cloud Market Share https://analyticsindiamag.com/ai-origins-evolution/microsoft-eats-into-amazons-cloud-market-share/ Fri, 03 May 2024 12:39:11 +0000

Microsoft Azure inches closer to Amazon with 25% cloud market share.

The post Microsoft Eats into Amazon’s Cloud Market Share  appeared first on AIM.


The $76-billion global cloud infrastructure services market has once again been captured by the big three with a 67% combined market share. Amazon continues to dominate the cloud market with a 31% share, taking a 1% hit from the previous year. Microsoft, on the other hand, has been surging forward. 

Microsoft Azure is the King

Microsoft Azure has shown steady growth in the cloud sector, capturing an increasing share of the market. The recent quarter’s cloud revenue was $35.1 billion, up 23% year-on-year (YoY). The company is closely trailing Amazon with a global cloud market share of 25%.

Microsoft’s large spread of AI offerings across its enterprise suite is proving to be its golden egg (the goose being OpenAI). 

“Our AI innovation continues to build on our strategic partnership with OpenAI. More than 65% of the Fortune 500 now use Azure OpenAI Service,” said Microsoft chief Satya Nadella, in a recent earnings call.  

Nadella also confirmed that the quantity of Azure deals valued at over $100 million rose by over 80% compared to the previous year, while the number of deals exceeding $10 million more than doubled. 

Guided by Nadella’s strategic brilliance, Microsoft’s cloud share has been advancing by 1% each quarter, mirroring the deliberate steps of the king on a chessboard.

Copilot Mode ON

Microsoft’s Copilot is proving to be the backbone for AI-powered products for its customers. “30,000 customers across every industry have used Copilot Studio to customise Copilot for Microsoft 365 or build their own, up 175% quarter-over-quarter,” said Nadella.  

In the earnings announcements, Nadella spoke at length about Copilot’s applications across domains. He claimed that almost 60% of Fortune 500 companies use Copilot, with adoption accelerating across industries and companies such as Amgen, BP, Cognizant, Koch Industries, Moody’s, Novo Nordisk, NVIDIA, and Tech Mahindra purchasing over 10,000 seats.

“We’re not stopping there. We’re accelerating our innovation, adding over 150 Copilot capabilities since the start of the year,” said Nadella. 

While Microsoft skyrockets, Google has maintained its 11% share of the cloud market.


Google Cloud Remains Resilient

Google witnessed staggering growth in the recent quarter with 15% revenue growth YoY and a net income of $23.7 billion, which is a jump of 57% from the previous year. The company attributes a considerable chunk of growth to Google Cloud. 

“Today, over 60% of funded GenAI startups and nearly 90% of GenAI unicorns are Google Cloud customers,” said Google chief Sundar Pichai. The company posted an operating income of $900 million on cloud services. The company even acknowledged that the growth across the cloud is underpinned by the benefits of AI.

In the cloud, Google has announced over 1,000 new products and features in the past eight months. 

AI Integration Continues for AWS 

Though Amazon saw a 1% dip in the recent results, it is not backing down in any way. AWS’s segment sales increased 17% YoY to hit $25 billion, and the company has been investing extensively in bringing AI to its platform. 

Recently, AWS announced the general availability of Amazon Q, the company’s most advanced AI-powered assistant. Amazon Q comes in three forms: one for developers, one for enterprises, and Q Apps, which lets companies build generative AI apps using their company data.

“The combination of companies renewing their infrastructure modernisation efforts and the appeal of AWS’ AI capabilities is reaccelerating AWS’ growth rate,” said Andy Jassy, Amazon’s president and CEO. The company is at a $100 billion annualised revenue run rate. 

Amazon Bedrock, AWS’s generative AI service that allows users to leverage the latest LLMs for building AI applications, also witnessed remarkable numbers in the recent quarter. Amazon confirmed that thousands of organisations worldwide are using Amazon Bedrock. 

The post Microsoft Eats into Amazon’s Cloud Market Share  appeared first on AIM.

]]>
Fibe Leverages Amazon Bedrock to Increase Customer Support Efficiency by 30%  https://analyticsindiamag.com/intellectual-ai-discussions/fibe-leverages-amazon-bedrock-to-increase-customer-support-efficiency-by-30/ Wed, 24 Apr 2024 09:39:26 +0000 https://analyticsindiamag.com/?p=10118962

Anil Sinha, chief technology officer of Fibe, told AIM that the team finds Anthropic’s Claude 3, hosted on Bedrock, most useful for their work.

The post Fibe Leverages Amazon Bedrock to Increase Customer Support Efficiency by 30%  appeared first on AIM.

]]>

Pune-based consumer lending startup Fibe is exploring generative AI applications in customer service and risk assessment. It recently released a chatbot supported by LLMs via Amazon Bedrock, which has improved its customer support efficiency by 30%. 

At AWS Fintech Forum held in Bengaluru earlier this month, AIM caught up with Anil Sinha, chief technology officer of Fibe. He said that the team finds Anthropic’s Claude 3, hosted on Bedrock, most useful for their work. The company also employs Amazon Comprehend for natural language processing, which is used to analyse sentiment and quality in customer calls. 

Amazon Bedrock provides a choice of top-notch models, from Amazon’s own first-party offerings (like Amazon Titan) to families of foundation models from AI-focused companies like Meta, AI21 Labs, Anthropic, Cohere, Mistral, and Stability AI. Last year, Amazon invested $4 billion in Anthropic. 
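For readers curious what calling one of these hosted models looks like, here is a minimal, hypothetical sketch (not Fibe’s actual code; the model ID, region, and prompt are purely illustrative) of building a request body for an Anthropic model on Bedrock. The invocation itself appears as a comment because it needs AWS credentials:

```python
import json

# Illustrative placeholder, not Fibe's production configuration.
MODEL_ID = "anthropic.claude-3-sonnet-20240229-v1:0"

def build_request(prompt: str, max_tokens: int = 512) -> str:
    """Serialise a single-turn chat request in the Messages API shape
    that Anthropic models on Bedrock accept."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })

body = build_request("Summarise the customer's repayment query.")
print(json.loads(body)["messages"][0]["role"])  # → user

# With credentials configured, the call itself is one line via boto3:
#   import boto3
#   client = boto3.client("bedrock-runtime", region_name="ap-south-1")
#   resp = client.invoke_model(modelId=MODEL_ID, body=body)
```

Because Bedrock exposes every hosted family behind the same `invoke_model` call, swapping Claude for another provider is largely a matter of changing the model ID and request schema.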

“The investments made with Anthropic and others is helping us to bring this choice to customers, but then customers can also use open source models with Hugging Face set up on SageMaker”, Pandurang Nayak, head of startup solutions architects AWS India, told AIM at the event.  

Fibe’s Growth Story

“We have been using AWS for the past seven to eight years now, over which time we went from a single-product offering to a diverse portfolio that now includes personal loans, embedded finance, and more,” Sinha told AIM. He said that this expansion was facilitated by AWS’ robust infrastructure, which provided the scalability and security necessary to manage increased demand and complexity. 

Fibe primarily caters to young salaried professionals and has been able to disburse more than six million loans worth ₹20,000 crore in near real-time since its inception. The startup leverages AWS ML services to streamline KYC processing, employing features such as optical character recognition, face match, and selfie deduplication. 

Along similar lines, it has developed FibeShield, a proprietary algorithm-based product, crafted with AWS tools such as Amazon Neptune, AWS Lambda, and Amazon S3. FibeShield uses graph ML, device fingerprinting, and geo-fencing to effectively identify fraud by revealing hidden connections and duplications among users.

“AWS credits and the Activate program have been helpful in providing us with resources to experiment freely from the time we started out. The support from AWS, including technical advice and regulatory guidance, has enabled us to focus more on innovation rather than infrastructure management,” added Sinha. 

​​Since its inception in 2013, AWS Activate has provided $6 billion in credits to startups around the globe to help them build solutions in the cloud.

Why Enterprises Choose AWS

Besides Fibe, Nayak highlighted several success stories demonstrating the scalability AWS offers. “One such success story is with EaseBuzz, a Pune-based company that grew fourfold in two years using AWS Spot Instances, optimising their infrastructure costs while scaling their operations massively,” he added. 

Another example is fintech startup Ring, whose data science team has been using its tools like Amazon Rekognition and Amazon Textract to process customer data rapidly, reducing its NPA by almost 20-25% and improving overall collection efficiency by 30%. Its loan application processing times have also halved. 

“We’ve always worked backwards from what our customer needs are. In fact, 90% of the features in our products are developed from customer feedback, while the remaining 10% come from developments like AWS Lambda. We maintain this approach as part of our culture, continuously adapting to what customers need and offering them the flexibility to use technology in various ways,” added Nayak. 

Other startups building on AWS include Yubi, inVOID, Decentro, Setu, and PayU. 

AWS has been instrumental in shaping early-stage startups through programs like the AWS Activate Program, Startup Architecture Challenge, AccelerateHer Program, Public Sector Startup Ramp, and more. 

Looking ahead, AWS is committed to continuing its support for startups by enhancing its service offerings and reducing the “undifferentiated heavy lifting” that often slows down innovation. The goal is to allow startups to focus more on their core products and customer experiences rather than on managing infrastructure.

The post Fibe Leverages Amazon Bedrock to Increase Customer Support Efficiency by 30%  appeared first on AIM.

]]>
Alexa Saves Young Girl from Monkey Attack, Aims to Aid Older Adults Too https://analyticsindiamag.com/ai-news-updates/alexa-saves-young-girl-from-monkey-attack-aims-to-aid-older-adults-too/ Mon, 15 Apr 2024 10:40:47 +0000 https://analyticsindiamag.com/?p=10118215 AWS IoT Alexa

Kid-friendly features of Alexa include interactive games, quizzes, nursery rhymes, animal sounds, and its ability to answer questions about spellings, general knowledge, history and science

The post Alexa Saves Young Girl from Monkey Attack, Aims to Aid Older Adults Too appeared first on AIM.

]]>

Alexa’s ability to produce animal sounds through the Wild Planet skill recently helped save a 13-year-old girl and her 15-month-old niece from a monkey attack in Basti, Uttar Pradesh. By asking “Alexa, kutte ki awaz nikalo” (“Alexa, make a dog’s sound”), the girl was able to scare away the monkeys. 

“The option to access a number of useful kid-friendly experiences with simple voice commands makes Alexa a great addition for a family with young kids. Parents often tell us how Alexa has become a companion in their parenting journeys,” says Dilip R.S., Director and Country Manager for Alexa, Amazon India.

From listening to Indian folktales to playing animal sounds, Indian households with young kids who use Alexa at home are two times more engaged than other users. Parents of young kids take Alexa’s help in managing their day-to-day parenting tasks and keeping their kids engaged by asking Alexa for rhymes, stories, games, GK-related questions, and more.

Users enjoy the ease and convenience of giving simple voice commands to Alexa in Hindi, English, and Hinglish – making the AI a great aid for parents and companion for kids.

“While it is a great learning and entertainment tool for kids, Alexa can help parents manage their day-to-day tasks better. Whether it is controlling smart home appliances with voice while juggling numerous tasks or asking for a bedtime story as part of their child’s daily routine, Alexa’s right there to help them,” Dilip adds. 

Today, families across India are asking Alexa for information, games, quizzes, music, stories, help managing day-to-day tasks, and much more. In fact, weekends are family time with Alexa: last year, requests to Alexa rose 15% over the weekends, with many of the music requests being for kids’ music. 

The top five most popular songs for kids on Alexa are: Baby Shark, Lakdi Ki Kathi, Johnny Johnny Yes Papa, Wheels on the Bus, and Twinkle Twinkle Little Star. Indian folktales, like Akbar Birbal, Tenali Raman, and Panchatantra stories, see high interest from customers, especially in Hindi. In 2023, customers asked for these stories an average of 34 times every hour.  

The post Alexa Saves Young Girl from Monkey Attack, Aims to Aid Older Adults Too appeared first on AIM.

]]>
AWS Teams Up with Minfy for Cloud and AI Boost through Global Expansion https://analyticsindiamag.com/ai-news-updates/aws-teams-up-with-minfy-for-cloud-and-ai-boost/ Thu, 28 Mar 2024 07:49:37 +0000 https://analyticsindiamag.com/?p=10117298

Over the next four years, the strategic partnership will support US$500 million in overall business growth through international expansion. 

The post AWS Teams Up with Minfy for Cloud and AI Boost through Global Expansion appeared first on AIM.

]]>

AI solutions and cloud-native system integrator Minfy Technologies has announced a multiyear strategic collaboration agreement (SCA) with hyperscaler AWS India to improve the former’s use of cloud services and AI. Over the next four years, the SCA will support US$500 million in overall business growth through international expansion. 

Minfy caters to large-scale enterprises, public sectors, and growing businesses. Now, the company will assist global enterprises in various sectors in utilising AI and cloud technologies effectively. One notable initiative is the Swayam.ai app store, which caters to industries like healthcare, aerospace, logistics, manufacturing, and the public sector, offering tools such as intelligent chatbots and sentiment analysis.

The company wants to expand its reach in the U.S., Australia, Malaysia, and the Philippines, focusing on enhancing market strategies, hiring local talent, and developing customer-centric solutions. With a track record of over 500 AWS projects, Minfy targets a broader international presence.

Under this collaboration, Minfy will help transition clients’ workloads to AWS, particularly in healthcare, logistics, and manufacturing sectors. The goal is to facilitate AI integration, cloud-driven transformation, and the development of digital capabilities to improve operational efficiency.

The Hyderabad-based company plans to leverage AWS’s advanced AI and ML capabilities, such as Amazon SageMaker and AWS Inferentia, to power Swayam.ai’s generative AI solutions. Additionally, the partnership will utilise AWS’s cloud services like Amazon Elastic Kubernetes Service and Amazon RedShift for efficient database management and modernisation of IT infrastructure.

Over the next four years, Minfy intends to upskill its workforce in core AWS competencies, focusing on healthcare, data analytics, and ML. This includes training over 1,000 professionals and establishing a Cloud Centre of Excellence to centralise knowledge and improve solution access globally.

“AWS is committed to helping local partners like Minfy drive growth and expand internationally,” said Chris Casey, head, partner management, APJ, AWS.

Recently, Bengaluru-based agritech startup Cropin Technology and AWS India signed a Memorandum of Understanding (MoU) to enable Cropin to build an AI-powered solution to address the pressing issues of global hunger and food insecurity.

The post AWS Teams Up with Minfy for Cloud and AI Boost through Global Expansion appeared first on AIM.

]]>
Amazon Warns Employees Not to Use Generative AI Tools  https://analyticsindiamag.com/ai-news-updates/amazon-warns-employees-not-to-use-generative-ai-tools/ Fri, 23 Feb 2024 09:50:11 +0000 https://analyticsindiamag.com/?p=10113792

Amazon's spokesperson said the company has been developing generative AI models for a long time, and employees use them every day.

The post Amazon Warns Employees Not to Use Generative AI Tools  appeared first on AIM.

]]>

Amazon has cautioned its employees against using third-party generative AI tools for work, according to multiple internal memos viewed by Business Insider.

“While we may find ourselves using GenAI tools, especially when it seems to make life easier, we should be sure not to use it for confidential Amazon work,” the company warned employees in a recent email. “Don’t share confidential Amazon, customer, or employee data when using 3rd party GenAI tools. Generally, confidential data would be data that is not publicly available.”

Amazon’s internal third-party generative AI use and interaction policy, viewed by BI, warns that the companies offering generative AI services may take a license to or ownership over anything employees input into tools like OpenAI’s ChatGPT.

“This means that any outputs such as email, PRFAQs, internal wiki pages, code, confidential information, documentation, pre-launch and strategy materials may be extracted, reviewed, used, and distributed by the owners of the generative AI,” the policy states. “All Amazonians must abide by our standard Amazon policies for confidential information and security for any inputs to generative AI.”

Amazon is not the first big company to restrict the internal use of generative AI tools. Samsung and Apple are among the big-league names that have banned ChatGPT and similar tools. 

Some of these companies are particularly sensitive about the technology because their competitor Microsoft has invested heavily in OpenAI and can claim rights to the models’ outputs. At one point, even Microsoft briefly pulled the in-house tool from its employees. 

Amazon’s spokesperson, Adam Montgomery, said the company has been developing generative AI and large machine learning models for a long time, and employees use its AI models every day.

“We have safeguards in place for employee use of these technologies, including guidance on accessing third-party generative AI services and protecting confidential information,” Montgomery said.

The post Amazon Warns Employees Not to Use Generative AI Tools  appeared first on AIM.

]]>
NVIDIA’s Stock Momentum Swings, Google & Amazon Take Lead Again  https://analyticsindiamag.com/ai-news-updates/nvidias-stock-momentum-swings-day-before-q4-results/ Wed, 21 Feb 2024 05:41:43 +0000 https://analyticsindiamag.com/?p=10113395 NVIDIA’s Parakeet Surpasses OpenAI's Whisper v3 in Speech Recognition

"If you think that Nvidia's quarter is one and done, you're also thinking that AI is one and done."

The post NVIDIA’s Stock Momentum Swings, Google & Amazon Take Lead Again  appeared first on AIM.

]]>

On the same day NVIDIA surpassed Elon Musk’s Tesla as the most traded stock on the market, its shares had their worst day since October 2023, just a day before the chip maker’s highly anticipated fourth-quarter results. 

The American chip maker started this year with a 47% year-to-date gain, momentum carried over from its 2023 rally. But the tide has shifted, and the company lost $78 billion in market value.

On Tuesday morning, its stock price was $741 per share, giving it a market cap of $1.83 trillion (ahead of Amazon’s $1.75 trillion and Alphabet’s $1.78 trillion). NVIDIA shares then dipped as much as 8.6%, sending the market cap as low as $1.67 trillion before the stock began to recover. Now, Alphabet and Amazon are once again ahead of the chip maker.

“Open your eyes, people,” CNBC’s Jim Cramer said. “If you think that Nvidia’s quarter is one and done, you’re also thinking that AI is one and done.” The long-term supporter of the company is not wrong, since Wall Street continues to bet big on AI, from which NVIDIA has emerged as a winner. 

All eyes are on the chip maker, given the rising demand for its chips and the upcoming GTC conference in San Jose, CA. The March event is expected to bring updates on Blackwell, NVIDIA’s next-gen architecture; the company has already confirmed that its 2024-25 roadmap features Blackwell chips, namely the B100 and GB200. 

The teaser video gave a glimpse of generative AI’s features like the WPP/NVIDIA engine for digital advertising, the newly introduced Chat with RTX, an industrial metaverse powered by SyncTwin, AI art by Refik Anadol Studio, and OpenAI creating code for Blender animations.

Wall Street analysts are currently focused on the company’s demand outlook for its AI-enabled H100 GPU chips, which can sell for upwards of $40,000. 

The post NVIDIA’s Stock Momentum Swings, Google & Amazon Take Lead Again  appeared first on AIM.

]]>
Amazon Demos the Largest text-to-speech AI Model,  Big Adaptive Streamable TTS with Emergent Abilities https://analyticsindiamag.com/ai-news-updates/amazon-demos-the-largest-text-to-speech-ai-model-big-adaptive-streamable-tts-with-emergent-abilities/ Thu, 15 Feb 2024 06:26:30 +0000 https://analyticsindiamag.com/?p=10112943

This model sets a new benchmark for speech synthesis.

The post Amazon Demos the Largest text-to-speech AI Model,  Big Adaptive Streamable TTS with Emergent Abilities appeared first on AIM.

]]>

Amazon has demoed BASE TTS (Big Adaptive Streamable TTS with Emergent abilities), a text-to-speech model. It was trained on 100,000 hours of public-domain speech data, mainly in English but also including German, Dutch, and Spanish, setting a new standard for natural speech. 

The model pairs a 1-billion-parameter Transformer with a convolution-based decoder for efficient text-to-speech conversion. It introduces a new approach to analysing speech that distinguishes between different voices, and it employs a technique called byte-pair encoding to compress the speech data, enhancing the model’s efficiency and speed in processing and generating speech. 
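Byte-pair encoding itself is a simple, well-known compression idea: repeatedly merge the most frequent adjacent pair of tokens into a single new token. The toy sketch below runs on characters for clarity; BASE TTS applies the same idea to discrete speech codes rather than text.

```python
from collections import Counter

def most_frequent_pair(tokens):
    """Count adjacent token pairs and return the most common one."""
    pairs = Counter(zip(tokens, tokens[1:]))
    return max(pairs, key=pairs.get)

def bpe_merge(tokens, num_merges):
    """Repeatedly replace the most frequent adjacent pair with a merged token."""
    for _ in range(num_merges):
        pair = most_frequent_pair(tokens)
        merged, out, i = pair[0] + pair[1], [], 0
        while i < len(tokens):
            if i < len(tokens) - 1 and (tokens[i], tokens[i + 1]) == pair:
                out.append(merged)  # collapse the pair into one token
                i += 2
            else:
                out.append(tokens[i])
                i += 1
        tokens = out
    return tokens

print(bpe_merge(list("aaabdaaabac"), 2))
# → ['aaa', 'b', 'd', 'aaa', 'b', 'a', 'c']
```

After two merges, the eleven original symbols become seven: each merged token now stands in for a frequent pattern, which is exactly how BPE shortens long token sequences for the model.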

BASE TTS shows new or ‘emergent’ capabilities as it’s trained with more data. With over 10,000 hours of training, it understands text better, allowing it to produce speech that sounds right for the context. The model can also handle complex language features like compound nouns and emotional expressions, showing its versatility. 

One example provided in the paper: ‘In the classroom, filled with the chatter of students sharing their holiday stories and the rustling of new textbooks, Mrs. Thompson, excited to embark on a new academic year, prepared a lesson that would challenge and inspire her students.’

BASE TTS was developed from the idea that larger text-to-speech systems would get better with scale. It not only produces high-quality speech but also shows new skills, like pronouncing difficult texts correctly and using the right emotional tone, and it performs better than other large text-to-speech systems, making it a leading model.

In another example, the audio shifts tone and whispers for the sentence, ‘A profound sense of realisation washed over Matty as he whispered, “You’ve been there for me all along, haven’t you? I never truly appreciated you until now.”’

BASE TTS could improve user experiences and help languages with few resources. It can mimic speaker characteristics with little reference audio, offering new ways to create synthetic voices for people who cannot speak. Amazon decided not to share BASE TTS openly to avoid misuse, highlighting ethical considerations in using advanced AI.

These capabilities, which eluded speech models until now, seem possible, as BASE TTS demonstrates. The research team also highlights the importance of diverse speech data in representing different languages, ethnicities, dialects, and genders. They call for more research on how data affects the model and ways to make voice technology more inclusive.

Another similar model is MetaVoice, an open-source, 1.2B-parameter foundation model for TTS. 

The post Amazon Demos the Largest text-to-speech AI Model,  Big Adaptive Streamable TTS with Emergent Abilities appeared first on AIM.

]]>
Ex-Nvidia & Ola Exec Launches RagaAI for testing and fixing AI https://analyticsindiamag.com/ai-news-updates/ex-nvidia-ola-exec-launches-ragaai-for-testing-and-fixing-ai/ Tue, 23 Jan 2024 13:20:51 +0000 https://analyticsindiamag.com/?p=10111376

Led by tech pioneer Gaurav Aggarwal, multimodal AI testing platform RagaAI emerges from stealth mode

The post Ex-Nvidia & Ola Exec Launches RagaAI for testing and fixing AI appeared first on AIM.

]]>

RagaAI, an AI-focused startup, has come out of stealth mode and closed a $4.7 million seed funding round. Pi Ventures spearheaded the round, joined by international investors such as Anorak Ventures, TenOneTen Ventures, Arka Ventures, Mana Ventures, and Exfinity Venture Partners. 

RagaAI addresses the need to ensure the performance, safety, and reliability of AI models by providing companies with an automated and comprehensive AI testing platform. RagaAI is backed by advisors from Amazon, Google, Meta, Microsoft, and NVIDIA.

RagaAI DNA

RagaAI DNA, the foundational model behind RagaAI, uses automation to detect, diagnose, and fix issues. Offering over 300 different tests, it can identify problems such as data drift, edge cases, poor data labelling, bias in data, and many more. It is a multimodal platform that supports LLMs, images/videos, 3D, audio, NLP, and structured data, and the company claims it reduces 90% of the risks while accelerating AI development by more than 3x. 
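RagaAI’s actual tests are proprietary, but the flavour of one such check, data drift, can be illustrated with a toy sketch: compare live feature values against the distribution seen at training time and flag a large standardised mean shift. The threshold and sample numbers below are made up purely for illustration.

```python
from statistics import mean, stdev

def drift_score(reference, current):
    """Standardised shift of the live mean away from the reference mean."""
    return abs(mean(current) - mean(reference)) / (stdev(reference) or 1.0)

def has_drifted(reference, current, threshold=2.0):
    """Flag drift when the mean shift exceeds `threshold` reference std-devs."""
    return drift_score(reference, current) > threshold

# Reference: feature values seen at training time; live: values in production.
ref = [0.9, 1.0, 1.1, 1.0, 0.95, 1.05]
live = [1.9, 2.1, 2.0, 2.2, 1.95, 2.05]
print(has_drifted(ref, live))  # → True
```

Production systems use far richer statistics (per-feature distribution tests, embedding-space distances, and so on), but the core loop is the same: baseline, compare, alert.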

Tech Pioneer

RagaAI was founded in January 2022 by Gaurav Aggarwal, who comes from a rich technology background in computer vision and machine learning. He previously worked with Texas Instruments before roles heading the mobility business at Ola and at computing giant NVIDIA. 

“At Ola & NVIDIA, I saw the significant consequences of AI failures due to lack of comprehensive testing. Our Foundation Models “RagaAI DNA” is already solving this problem across large fortune 500 companies,” said Gaurav Aggarwal, CEO and founder of RagaAI. 

The founding team at RagaAI has a collective AI expertise of over 50 years. The company has already provided solutions for companies in various sectors including ecommerce, automotive, and others, with multiple use cases. 

RagaAI’s funding round will be used to advance research and development, with a focus on improving AI testing tools. 

“Driven by their patent-pending drift detection technology, RagaAI, an AI testing platform, is well-suited to solve these massive problems for the AI deployments globally. At pi Ventures, we believe in backing founders who can create disruptive solutions for global impact. In our view, Gaurav and his stellar team at Raga are fulfilling that goal in a big way. We are pleased to be associated with them,” said Manish Singhal, founding partner of pi Ventures that spearheaded the funding round. 

The post Ex-Nvidia & Ola Exec Launches RagaAI for testing and fixing AI appeared first on AIM.

]]>
The Real Reason behind Big Tech’s Recent Mass Layoffs https://analyticsindiamag.com/ai-origins-evolution/the-real-reason-behind-big-techs-recent-mass-layoffs/ Sat, 20 Jan 2024 06:30:00 +0000 https://analyticsindiamag.com/?p=10111096

A result-driven, volatile hiring process has consistently resulted in mass firings

The post The Real Reason behind Big Tech’s Recent Mass Layoffs appeared first on AIM.

]]>

Recently, several big tech companies, including Google and Amazon, have been laying off a significant number of employees. While some, like Paytm and Dropbox, cite AI advancements as a reason for layoffs, this isn’t universally the case; often the driver is quarterly-results-focused cost-cutting and restructuring. 

Citing Salesforce’s layoff strategy, Zoho chief Sridhar Vembu said that such companies are constantly engaged in hiring and firing sprees driven by short-term, quarterly financial goals. He noted that this approach is common in publicly traded companies, which focus on immediate cost-cutting and restructuring, often leading to a volatile employment environment.

Quarterly-Driven Layoffs

At the start of 2023, Salesforce announced a 10% layoff of its workforce, which resulted in more than 7,000 employees losing their jobs. The two quarters following the firing showed promising numbers.

Technology advisor and writer Gergely Orosz observed that after firing employees and posting good quarterly results, the company went on a hiring spree. Shortly after, it announced a hiring freeze, hinting at how Q4 2023 must have fared for the company.

Recently, Citigroup said it would cut 20,000 jobs, about 10% of its workforce, owing to a disappointing net loss in its fourth-quarter results. Citigroup CEO Jane Fraser had announced a restructuring in September that would contribute to the job cuts. The bank is trying to cut over $50 billion of expenses and believes the layoffs will save about $2.5 billion this year. 

The same holds for Indian IT. At TCS’ recent earnings call, the company confirmed hiring a number of freshers in the previous quarter and said it will go ahead with its plan to hire 40,000 freshers in 2024. However, Milind Lakkad, chief HR officer of TCS, said, “Depending on the overall condition and [in order to] drive efficiency, if the headcount needs to be reduced, then so be it.”

Similarly, American multinational investment management corporation BlackRock announced plans to lay off 3% of its workforce, i.e. 600 employees, to defend its profit margins. However, the publicly traded company also said that it will continue to grow its workforce.

Situation-based Hiring and Firing 

When the pandemic led to many losing their jobs globally, specific domains thrived. Amid the digital economic boom of the period, a number of corporations in the relevant sectors resorted to mass hiring, and in many cases, over-hiring. With the pandemic waning, many of these companies then had to let employees go to stay afloat.

Venture capitalists and investors also stepped up the pressure on their invested startups to either reduce their headcount or slow down the hiring process in order to conserve cash. 

Interestingly, the founder of Dukaan, a DIY e-commerce platform, became infamous last year for firing 90% of the company’s support staff in favour of an AI chatbot. Though founder Suumit Shah claimed that an AI chatbot had replaced the support workforce, the company had already been through two rounds of layoffs before he could attribute the cuts to AI. 

Not all job losses could be attributed to AI, as author and founder Dave Birss said, “AI is a fantastic scapegoat for unpleasant business decisions.” 

Lean wins 

Unlike venture-capital- or investor-backed companies, Vembu’s bootstrapped Zoho does not succumb to external forces dictating firing decisions. By adopting an agile and lean methodology, the company has been able to duck the mass-layoff trend. 

In an earlier exclusive interaction with AIM, Vembu said that Zoho will never resort to layoffs and will only internally repurpose its employees, though it will caution them about new technologies. 

Hiring pattern in big tech companies. Source: PragmaticEngineer

An engineer from Apple mentioned that despite almost all of Apple’s teams being understaffed, employees had to learn to achieve their tasks with the current workforce. The company also did not cave in to demands for high salaries in response to resignation pressures. 

Apple has become synonymous with not resorting to mass hiring or firing based on company results. Tim Cook had previously mentioned that layoffs are the last resort, an approach that is independent of quarterly results. 

The post The Real Reason behind Big Tech’s Recent Mass Layoffs appeared first on AIM.

]]>
Microsoft’s Generative AI Brilliance Reshapes Retail https://analyticsindiamag.com/ai-breakthroughs/microsofts-generative-ai-brilliance-reshapes-retail/ Fri, 12 Jan 2024 11:17:30 +0000 https://analyticsindiamag.com/?p=10110532

Microsoft’s growing collaboration with some of the biggest retail giants in the world is a big win for all

The post Microsoft’s Generative AI Brilliance Reshapes Retail appeared first on AIM.

]]>

At CES 2024, amid a slew of big-tech partnership announcements, Walmart and Microsoft unveiled a strategic partnership to bring generative AI-powered search features to the retail giant’s platform. 

The new features are said to enhance the digital shopping experience on the platform, something every major retail giant is increasingly looking to adopt. In the process, they also allow big tech companies to win in a big way. 

Microsoft: The Solid Pillar of Walmart

Doug McMillon and Satya Nadella at CES 2024. Source: CES

Walmart CEO and President Doug McMillon invited Microsoft’s CEO Satya Nadella onstage at the CES event to not only talk about their new AI-enabled features but also their long-standing commitment that started in 2018. 

Six years ago, the retail giant tied up with Microsoft, making it its preferred and strategic cloud provider. The cloud innovation projects comprised AI- and ML-based data-platform solutions designed to address customer-facing services and internal business applications. 

With e-commerce sales rising industry-wide, Walmart has been no exception: in Q3 2023, its US online sales grew 24% and global online sales grew 15%.

Net Sales of Walmart E-commerce. Source: Statista

Microsoft Loves All Retail

Microsoft has not only partnered with Walmart but a number of retail companies to bring advanced tech capabilities to their systems. In 2018, Microsoft announced its partnership with British retail company Marks and Spencer (M&S) to test out AI capabilities in a retail environment. 

A number of consumer goods companies have partnered with Microsoft to provide innovative retail solutions. Companies such as Unilever, Coca-Cola Bottling Company, Nestle, Pepsico and many others already have a strategic collaboration with these brands. Last year, Carrefour announced their partnership with Microsoft and OpenAI to bring an AI-powered chatbot to allow a smoother customer experience. 

Retail is the second-biggest industry (after technology) using Microsoft’s suite of cloud-based business applications, Dynamics 365. With ChatGPT integration on the rise in retail companies, mostly for building personalised chatbots, Microsoft’s Azure OpenAI Service dominates. 

Amazon: The Tough Challenger

With the renewed Microsoft-Walmart collaboration using Azure OpenAI Service on top of Walmart’s proprietary data, the newly built AI-powered features are said to deliver a more personalised search experience on Walmart’s platform: a capability Amazon has already mastered. 

Amazon’s advanced marketplace features not only enhance the customer experience but also improve the seller experience. Retailers use Amazon Personalize to analyse customer data, purchase history, market trends, and preferences for personalised product recommendations. 

Generative AI capabilities enable advanced customisation, which enhances the search experience, also Amazon’s USP. According to a consumer report from last year, online shoppers start their search on Amazon: over 50% of online shoppers use Amazon as their search destination, while 39% of consumers use search engines such as Google and Bing, closely followed by Walmart at 34%. 

In addition to being the supreme king of e-commerce retail, the company’s cloud service has been adopted by a number of retail companies. AWS offers technology solutions that help companies use their customer data to boost engagement, supply-chain distribution, and other retail functions.  

Big Tech’s Retail Efforts Continue 

While Microsoft and Amazon are established players in retail, new tech partnerships are also emerging. Recently, IBM and SAP collaborated to build AI solutions for the consumer packaged goods and retail sector, aimed at helping these companies with operations and logistics such as product distribution, transportation planning, automating order settlements, and more. 

With the promising adoption of big tech products in the retail industry, the prowess of tech giants allows them to establish dominance in the sector. Offering solutions spanning office products, analytics, operational assistance, and more, Microsoft is comfortably positioned to lead in retail.

The post Microsoft’s Generative AI Brilliance Reshapes Retail appeared first on AIM.

]]>
Samsung Unveils New and Improved Home Robot Ballie https://analyticsindiamag.com/ai-news-updates/samsung-unveils-new-and-improved-home-robot-ballie/ Wed, 10 Jan 2024 07:47:15 +0000 https://analyticsindiamag.com/?p=10110316

Round, yellow and rolling around, Ballie acts as a home assistant with new projection features

The post Samsung Unveils New and Improved Home Robot Ballie appeared first on AIM.

]]>

The sweet, adorable titular robot from ‘Wall-E’ may not be real, but Samsung’s round robot ‘Ballie’ is. First unveiled in 2020 as a device resembling a small yellow bowling ball, Ballie has returned in a new version, its most striking feature being the ability to project things onto the floor and ceiling. 

The Rolling Robot 

The AI home robot, Ballie, comes with multiple features: the ball can project videos, calls and other media. Serving as a personal home assistant, Ballie can autonomously navigate through the home to accomplish various tasks, and through integrated connections with other home appliances, it can offer assistance in various situations. Unlike other home assistants from Google and Amazon, Ballie’s mobility sets it apart. 

Ballie continuously learns user patterns and habits, allowing for more intelligent and personalised services. A user can interact with the robot via their phone. The launch video shows a user exercising to a workout video projected on the ceiling. With a 1080p projector featuring two lenses, Ballie can automatically adjust its projection based on wall distance and lighting conditions. 

This is probably the first of many announcements in the field of robotics this year. Considering the number of announcements and developments in the last few months, including Google DeepMind’s new research models that were announced at the start of the year, it seems likely that 2024 will be the year of robotics. 

Major tech players in the robotics space, such as Tesla and Figure, are also on track to unveil new features in the coming months. 

The post Samsung Unveils New and Improved Home Robot Ballie appeared first on AIM.

]]>
Will 2024 be the Year of Robotics? https://analyticsindiamag.com/industry-insights/robotics/will-2024-be-the-year-of-terminator/ Mon, 08 Jan 2024 12:30:00 +0000 https://analyticsindiamag.com/?p=10110195

With 2023 witnessing a number of upgrades to various robotics projects, what does 2024 hold for this field?

The post Will 2024 be the Year of Robotics? appeared first on AIM.

]]>

Robotics expert Rodney Brooks, founder of iRobot and Robust.ai, recently said that the AI hype is headed for a brutal reality check. As per Brooks, the 60+ year history of AI has followed a typical hype cycle and will soon witness a lull. 

While the fade in AI hype is yet to happen, advancements in robotics are on an upward trajectory. It’s only been a week into the new year, and Google DeepMind has already released a flurry of updates in robotics.

Deep Into Robotics

Last week, Google DeepMind released three robotics research systems—AutoRT, SARA-RT and RT-Trajectory—that help robots make faster decisions and better understand and navigate their environments. The models assist with data collection, speed, and generalisation. 

AutoRT helps in harnessing the potential of large language models by collecting more experiential and diverse data. SARA-RT (Self-adaptive robust attention for robotics transformers) employs an ‘up-training’ method to transform robotics transformer models into more efficient versions.

RT-Trajectory automatically adds 2D trajectory sketches to training videos, thereby helping the model in learning low-level robot control policies by providing practical visual cues. 

Sara RT-2 model for manipulation tasks. Source: Google DeepMind Blog

Robots for Utility

Google DeepMind is building these state-of-the-art robotics models with the vision of integrating them into future robots, and they are mostly focused on general-purpose use. 

Last week, Stanford University introduced Mobile ALOHA, a system designed to replicate bimanual mobile manipulation tasks necessitating whole-body control. The project was supported by Google DeepMind, and the technology addresses the limitations of traditional imitation learning from human demonstrations.  

These general-purpose robots have been demonstrated helping with multiple tasks such as cooking, cleaning, lifting weights, and other manual activities. Industrial robots have been the biggest use case for robotics. The warehouse robotics market is estimated to hit $7.93 billion in 2024 and is expected to reach $17.91 billion by 2029, at a CAGR of 17.7%.  

While general purpose robots are finding increasing use cases across industries, research and development of humanoid robots is not far behind. 

Multifaceted Humanoid Robots 

In the last few months, Tesla’s Optimus has received multiple upgrades, inching it closer to the vision Elon Musk built it for. When unveiled in 2022, though impressive, the humanoid was unable to execute tasks and could only manage a tentative waving gesture.

However, almost a year later, in September 2023, the humanoid was able to pick and sort objects, navigate its surroundings, and even do yoga. 

Last month, further updates were announced as Optimus Gen 2 incorporated new actuators and sensors that enable two degrees of freedom for greater movement, along with improved hand movements. 

Tesla’s Optimus runs on neural networks, whereas other humanoid robots, such as those created by Boston Dynamics, run on rule-based systems. Boston Dynamics, known for its robot dogs, also gave its humanoid robot Atlas major updates last year.

Furthermore, the company partnered with entertainment company Neon Group to create robot-driven experiences for entertainment and educational purposes. AI robotics company Figure recently released updates for its Figure-01 humanoid, demonstrating the robot making coffee, a skill it learned from watching humans. 

Amazon is also testing humanoid robots in select warehouses in the US. It is trialling Digit, a robot that can imitate human movements and be used for lifting and handling items in warehouses. 

Year of Robotics? 

While 2024 may well be termed the year of robotics, considering how researchers have already shared updates within a week of the new year, the real breakthrough is still awaited. Hoping for robotics’ GPT-4 moment is not as simple as it seems. 

With massive investments and prolonged periods for testing and implementation of each individual task, developments in this field are slow. However, companies are not abandoning these ambitious projects. 

Interestingly, in November last year, China’s Ministry of Industry and Information Technology (MIIT), which oversees the country’s industrial sector, stated that China will achieve mass production of humanoid robots by 2025, aiming for major breakthroughs that establish a humanoid robot innovation system.  

The post Will 2024 be the Year of Robotics? appeared first on AIM.

]]>
SpaceX is All About Collaboration https://analyticsindiamag.com/ai-origins-evolution/spacex-is-all-about-collaboration/ Wed, 06 Dec 2023 09:10:01 +0000 https://analyticsindiamag.com/?p=10104249

Space is expensive, and Elon Musk knows it.

The post SpaceX is All About Collaboration appeared first on AIM.

]]>
Why SpaceX is All About Collaboration

Most recently, Amazon announced that it has procured three Falcon 9 launches from SpaceX to facilitate the deployment of its Project Kuiper mega-constellation, which is a direct competitor to Musk’s Starlink.

This development follows closely on the heels of a lawsuit filed against Amazon approximately two months ago by shareholders challenging the company’s decision to exclude SpaceX, widely regarded as the most dependable rocket company globally, from its initial round of launch contract considerations.

Funnily enough, Elon Musk posted on X that launching competitors’ satellites is not an issue for him. “Fair and square,” he said. This came after SpaceX launched its batch of 23 Starlink satellites on November 27.

Why Amazon finally chose SpaceX

In 2019, Amazon had ordered launches for 77 Kuiper satellites from Blue Origin, United Launch Alliance, Arianespace and ABL. But delays in the development of those rockets led Amazon to change its plans. 

The company twice switched the rocket that its first pair of Kuiper prototypes would fly on, in an effort to expedite development, before the mission launched in October this year. This strategic move is anticipated to cost Amazon billions of dollars.

Then, in 2023, came a lawsuit against Jeff Bezos and Amazon, filed by Amazon shareholders Cleveland Bakers and Teamsters Pension Fund. They claimed that Bezos did not spend even an hour discussing the possibility of using any other space company before choosing Blue Origin, his own space company. 

According to the legal complaint, Amazon management briefed the audit committee in July 2020 about ongoing discussions with Blue Origin, Arianespace, ULA, and an undisclosed fourth company for Kuiper launch contracts. The lawsuit alleges that, perplexingly, SpaceX, recognised as the world’s most famous, reliable, and obvious launch provider, was not even presented as an option during these discussions.

Meanwhile, the three Falcon 9 missions, scheduled for liftoff from mid-2025 onwards, are integral to Amazon Kuiper’s ambitious plan of establishing a constellation of 3,236 satellites in low Earth orbit. The US Federal Communications Commission has mandated that Amazon deploy at least half of this satellite count by 2026.

Amazon is also expecting to invest upwards of $10 billion to build Kuiper. Earlier this year, the company broke ground on a $120 million pre-launch processing facility in Florida.

The SpaceX deal marks the latest shift in Amazon’s strategy, and possibly an acceptance of its fate amidst the timeframe, as the company pushes to get Kuiper to space in time to meet federal regulations. 

SpaceX ♥ Competition ♥ Collaboration

SpaceX has been launching satellites for its customers and competitors all along. Just recently, it launched South Korea’s 425 reconnaissance satellite and 24 other rideshare payloads. 

Moreover, Capella Space, an American space technology company, is also launching its Acadia-4 and Acadia-5 satellites with SpaceX, after having continuously worked with Rocket Lab for earlier satellite launches. 

In June, SpaceX also launched Indian startup Azista BST Aerospace’s satellites for remote-sensing capabilities. Founder Sunil Indruti said he chose SpaceX’s Falcon 9 over ISRO’s PSLV because the former had a slot for the satellite in the rocket. 

On the other hand, in July, L&T, ISRO, and IN-SPACe together decided to compete with SpaceX by focusing on the SSLV for on-demand launches, which is exactly what SpaceX has been doing by launching satellites for other companies. 

Musk has consistently praised ISRO, which has been launching several other companies’ satellites into space, and he has been collaborating with Indian space companies for a long time. In 2021, Musk announced that he would partner with Indian firms to build satellite communications equipment. 

This is not the first time Musk has worked with a competitor. Starting in January this year, SpaceX deployed more than 40 satellites for OneWeb by the end of March. OneWeb, a British broadband operator majority-owned by Bharti Enterprises, might be considered a rival to Musk’s Starlink. But Musk doesn’t care.

SpaceX and OneWeb had earlier been unhappy working together and filed opposing comments with the Federal Communications Commission (FCC) over sharing radio frequencies in space. But in June 2022, the conflict between SpaceX and OneWeb was surprisingly resolved when both companies agreed to work together.

Reliance recently announced that it is working on its own satellite service, JioSpaceFiber, to compete with Musk’s Starlink. It would be ideal for Reliance to either launch the satellites through ISRO, or perhaps get SpaceX to do it. It is clear that Musk knows space is about collaboration, not competition.

The post SpaceX is All About Collaboration appeared first on AIM.

]]>
Why Amazon Q Deserves Another Chance  https://analyticsindiamag.com/ai-origins-evolution/why-amazon-q-deserves-another-chance/ Tue, 05 Dec 2023 12:30:13 +0000 https://analyticsindiamag.com/?p=10104208

Amazon Q hasn’t been released yet, and criticisms are already mounting.

The post Why Amazon Q Deserves Another Chance  appeared first on AIM.

]]>

At re:Invent, amid much fanfare, AWS introduced Amazon Q, a generative AI chatbot specifically designed for businesses. The company claimed that, unlike OpenAI’s ChatGPT, it is much safer and more secure. Contrary to these assertions, however, Amazon Q has come under the spotlight for all the wrong reasons. 

Barely three days after the launch, concerns began to rise among employees regarding accuracy and privacy of the chatbot. Q is reportedly “suffering from significant hallucinations” and has been implicated in leaking sensitive data, such as the locations of AWS data centres, internal discount programs, and unreleased features.

Predictably, Amazon quickly released a statement that said, “No security issue was identified as a result of that feedback. We appreciate all of the feedback we’ve already received and will continue to tune Q as it transitions from being a product in preview to being generally available.” 

A Case for Amazon’s Q

As highlighted at re:Invent, employees can use Q to complete tasks in popular systems like Jira, Salesforce, ServiceNow, and Zendesk, which is unique to Amazon Q. For example, an employee could ask it to open a ticket in Jira or create a case in Salesforce.

Interestingly, Amazon Q hasn’t been released yet, and criticisms are already mounting. Being in preview, it’s expected to undergo corrections as necessary. 

“Companies need to realise that it is incredibly difficult to make an LLM that doesn’t hallucinate. At best they can minimise it to some degree, but won’t be able to get rid of it. What OpenAI did with GPT-4 is a herculean act that others may not be able to easily imitate,” said Nektarios Kalogridis, founder and CEO of DeepTrading AI, addressing concerns about Amazon Q.

Also, we cannot blame Q directly for hallucinating as it can work with any of the models found on Amazon Bedrock, AWS’s repository of AI models, which includes Meta’s Llama 2 and Anthropic’s Claude 2. 

The company said customers who use Q choose which model works best for them, connect to the Bedrock API for that model, use it to learn their data, policies, and workflows, and subsequently deploy Amazon Q. Therefore, any instances of hallucination could stem from any of the aforementioned models.

Moreover, ChatGPT has also had its share of issues with leaking sensitive information. Most recently, it leaked private and sensitive data when asked to repeat the word ‘poem’ indefinitely. But that hasn’t deterred enterprises from using ChatGPT. 

Like Amazon Q, OpenAI’s ChatGPT Enterprise hasn’t been made widely available yet. OpenAI’s COO, Brad Lightcap, revealed in a recent interview that ‘many, many, many thousands’ of companies are on the waiting list for ChatGPT Enterprise. Since November, 92% of Fortune 500 companies have used ChatGPT, a significant increase from 80% in August.

Enterprise Chatbots are the Future 

Despite the concerns, Amazon Q comes with great benefits. 

Just like ChatGPT Enterprise, Amazon Q will also allow customers to connect to their business data, information, and systems, so it can synthesise everything and provide tailored assistance to help employees solve problems, generate content, and take actions relevant to their business. 

The above features are a result of RAG (retrieval-augmented generation), which retrieves data relevant to a question or task and provides it as context for the LLM. However, RAG comes with a risk of potential data leaks, as seen with Amazon Q.
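The RAG pattern can be sketched in a few lines. The snippet below is a toy illustration, not Amazon Q’s or OpenAI’s actual implementation: it substitutes bag-of-words cosine similarity for the embedding models real products use, and the document store and function names are invented for the example.

```python
from collections import Counter
import math

# Toy document store; in a real deployment these would be chunks of
# a company's internal documents, indexed with an embedding model.
DOCS = [
    "Refund requests must be filed within 30 days of purchase.",
    "Enterprise customers get 24/7 priority support.",
    "Quarterly reports are published on the internal wiki.",
]

def _vec(text):
    # Bag-of-words term counts as a crude stand-in for embeddings.
    return Counter(text.lower().split())

def _cosine(a, b):
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve(query, k=1):
    """Return the k documents most similar to the query."""
    q = _vec(query)
    return sorted(DOCS, key=lambda d: _cosine(q, _vec(d)), reverse=True)[:k]

def build_prompt(query):
    """Assemble the augmented prompt that would be sent to the LLM."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

The data-leak risk the article mentions is visible even in this sketch: whatever `retrieve` pulls in is handed verbatim to the model, so sensitive documents in the store can surface in responses.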

Ethan Mollick, professor at Wharton, expressed that RAG has its own advantages and disadvantages. “I say it a lot, but using LLMs to build customer service bots with RAG access to your data is not the low-hanging fruit it seems to be. It is, in fact, right in the weak spot of current LLMs — you risk both hallucinations & data exfiltration.” 

OpenAI introduced something similar at DevDay with the Assistants API, which includes a function called ‘Retrieval’ that is essentially a RAG feature. It augments the assistant with knowledge from outside OpenAI’s models, such as proprietary domain data, product information, or documents provided by users. 

Apart from OpenAI and AWS, Cohere is quietly collaborating with enterprises to incorporate generative AI capabilities. 

Cohere was one of the first to understand the importance of RAG as a method to reduce hallucinations and keep a chatbot’s knowledge current. In September, Cohere introduced the Chat API with RAG, allowing developers to combine user inputs, data sources, and model outputs to create strong product experiences.

Despite the concerns being raised about hallucination and data leaks, enterprises cannot completely ditch generative AI chatbots, which are only going to get better over time. This is just the beginning. 

The post Why Amazon Q Deserves Another Chance  appeared first on AIM.

]]>
AI-Powered Innovation: Lentra’s Role in Shaping the Future of Indian Banking https://analyticsindiamag.com/intellectual-ai-discussions/ai-powered-innovation-lentras-role-in-shaping-the-future-of-indian-banking/ Thu, 16 Nov 2023 12:41:21 +0000 https://analyticsindiamag.com/?p=10103160

“What we've seen with generative AI is the ability for it to seemingly reason about test scenarios that could be interesting but may have been overlooked,” said Rangarajan Vasudevan, CDO of Lentra

The post AI-Powered Innovation: Lentra’s Role in Shaping the Future of Indian Banking appeared first on AIM.

]]>

Dealing with highly critical data in one of the most regulated industries, the digital lending space has transformed massively in India. While AI and ML models have long been part of such platforms, the evolution and advancement of generative AI is now finding its way here as well.

With more than a decade of experience in the digital lending space, Lentra AI, a digital lending SaaS platform, has been a prominent player in empowering major banks in India, including HDFC, Standard Chartered, and Federal Bank. 

AI Enabling Nuanced Personalisation

In an exclusive interaction with AIM, Rangarajan Vasudevan, chief data officer at Lentra, spoke about the involvement of machine learning models in the field of digital payments, which have been implemented for credit scoring and credit decisioning for over decades. “It’s not new. Citibank pioneered it a long time ago, and now everybody is caught up on it. However, I think what’s changing of late is the emphasis on how we create these persona-specific positioning models,” he said. 

The earlier generic approach, where a single scorecard built from ML models was pushed to broad demographics, has given way to a more nuanced method. For instance, a Gen Z customer from a tier-two or tier-three town differs from a Gen Z customer in an urban centre, so the same scorecard cannot apply to both; it has to be tailored to individual persona types. This underlines the importance of breaking down an ML model into a ‘consortium of models’ that can be applied across different persona categories. 
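The ‘consortium of models’ idea can be illustrated with a minimal sketch: business logic routes each applicant to a persona-specific scorecard. The personas, weights and field names below are invented for illustration and are not Lentra’s actual models or criteria.

```python
# Hypothetical persona-specific scorecards; in practice each would be a
# trained model (e.g. XGBoost) rather than a hand-weighted formula.

def urban_genz_scorecard(applicant):
    # Urban Gen Z: weight digital-footprint signals more heavily.
    return 0.6 * applicant["digital_activity"] + 0.4 * applicant["income_stability"]

def tier3_genz_scorecard(applicant):
    # Smaller towns: digital data is sparser, so weight income stability more.
    return 0.3 * applicant["digital_activity"] + 0.7 * applicant["income_stability"]

def route(applicant):
    """Business-logic orchestration: pick the scorecard for the applicant's persona."""
    if applicant["age"] <= 27 and applicant["city_tier"] == 1:
        return urban_genz_scorecard
    return tier3_genz_scorecard

def credit_score(applicant):
    """Score an applicant with the model chosen for their persona."""
    return route(applicant)(applicant)
```

The routing layer is ordinary, auditable business logic, which keeps the overall system explainable to a regulator even though each scorecard can be a separate trained model.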

Vasudevan went on to explain how nuanced models are already in place at Lentra, noting that one of their flagship case studies, which has not been released yet, is in the agri space. “There was a big push by the government earlier on what credit scheme to have for the Kisaan sector (farmers), and we were one of the pioneers to have worked with our major client in rolling out a version which is highly tailor-made in terms of positioning to that sector. The models are very different from what you would normally do when you try to push these kinds of products,” he said. 

Consortium of ML Models

A mix of models is something that Lentra has always worked on. “We are a VC-backed company, so our core USP is the innovation that we keep having to do, otherwise there’s not much to it. The models are ours and are proprietary to us, and it’s something that we have grown in-house,” he said. 

Vasudevan also mentioned that they work on top of open-source platforms too. “There are platforms on top of which we build our own models. We use scikit-learn and PySpark, along with corresponding bindings to TensorFlow, PyTorch and others.”

Working in one of the most regulated spaces in the industry, Lentra has to ensure that its models are ethically fair and explainable. For instance, the reasoning for lending to specific categories of the population should be explainable to a non-techie or a regulator. With this in mind, Lentra has restricted itself to models such as XGBoost and random forests, which are easier to explain. 

“A consortium of models where the models themselves are orchestrated using elaborate business logic, which makes it slightly more complex than just directly using an XGBoost,” he said. For cases where the regulatory burden is less, they resort to deep learning models where they don’t have to worry about explainability.   

Vasudevan concluded with the need for a collaborative, innovative approach and for bringing a vernacular angle to the models, so as to drive far more meaningful and practical change in the geography. “The vernacular angle is just starting to get tapped into that too only because folks like Microsoft or Amazon are releasing expansions of those models to the local market,” he said.  

Generative AI With Caveats

While Lentra has been pioneering its in-house ML models alongside continuous work on open-source platforms, the company has also experimented with generative AI. Beyond improved employee productivity, the biggest use case for generative AI has been identifying test cases. “What we’ve seen with generative AI is the ability for it to seemingly reason about what could be interesting test scenarios that you might have missed,” said Vasudevan. 

Speaking about a request form where a user needs to enter an income range and age bracket, generative AI has helped in coming up with test cases. “It’s a very simple form and if I give that kind of a form to GenAI, it is exactly able to reason around 15 different test scenarios that you’ve got to work through and make sure that your product is capable of handling those test scenarios. For instance, what if we give an age group such as 15 to 18 where lending is not legally permitted in some countries, what would we do in this case?” explained Vasudevan. 

Discussing the limitations based on the experiments Lentra conducted on ‘ChatGPT family of GenAI tools’, consistency was the biggest problem. “So to be able to have a specific type of output consistently, for the same or similar type of input is like a given in the world of software like this one, the software is deterministic. We would give an input with a stimuli, and we’ll get some output. That’s very, very common, people just take it for granted. But with this particular experiment, what we saw was the same input in an experiment benchmark that we did back in February or March resulting in a sudden accuracy coverage of our test case which was then repeated in June, giving different results. The numbers were completely off,” said Vasudevan.  

The experiment that gave close to 80% accuracy earlier, gave only 10% accuracy when tested in June. “There was a lot of theorising at that stage because I think it is not just us, but a couple of other companies, who had also highlighted this, but nobody got an answer clearly from OpenAI. So we wouldn’t know if it’s a case of the model itself performing badly or OpenAI did something with transformer models and decided to compress them or, whatnot,” added Vasudevan. 

However, having said that, Vasudevan confirmed that they are in the middle of further experimenting and that in the long run, they should be building their own internally trained language models. 

The post AI-Powered Innovation: Lentra’s Role in Shaping the Future of Indian Banking appeared first on AIM.

]]>
Amazon is Building Olympus, to Compete with ChatGPT https://analyticsindiamag.com/ai-news-updates/amazon-is-building-olympus-to-compete-with-chatgpt/ Wed, 08 Nov 2023 04:30:46 +0000 https://analyticsindiamag.com/?p=10102707

Olympus is anticipated to outperform its predecessor, "Titan".

The post Amazon is Building Olympus, to Compete with ChatGPT appeared first on AIM.

]]>
Amazon is Building Olympus, to Compete with ChatGPT

Amazon was quiet for a while when it came to developing generative AI chatbots. Now, it is intensifying its efforts in conversational AI software, positioning itself to rival industry leaders like OpenAI and Microsoft for a share of the corporate customer market. 

As per a report from The Information, the e-commerce giant is set to unveil its latest LLM, codenamed “Olympus,” which could also pave the way for enhanced features across multiple Amazon platforms, including its online retail store, the Alexa voice assistant on devices like the Echo, and AWS.

Read: AWS’ Generative AI Play for Bedrock

Expected to be officially announced by AWS in December, Olympus is anticipated to outperform its predecessor, “Titan,” a group of Greek-named LLMs currently offered to cloud customers. The delay of Titan’s launch last year was attributed to its inferior performance compared to OpenAI’s ChatGPT.

The timeline for Olympus’s development and deployment remains uncertain. Presently, AWS provides just one Titan model to its customers for the development of applications featuring personalization and search capabilities. 

Meanwhile, two additional Titan models, designed to empower customers in creating applications offering ChatGPT-like text responses or summarizing lengthy text passages, have limited availability. AWS also offers LLMs developed by other providers, including Anthropic.

Overseeing the development of Olympus is Rohit Prasad, Amazon’s Head Scientist for Artificial General Intelligence, as confirmed by a source with direct knowledge of the matter.

The post Amazon is Building Olympus, to Compete with ChatGPT appeared first on AIM.

]]>
Big Tech Eyes NFL’s Gold https://analyticsindiamag.com/ai-origins-evolution/big-tech-eyes-nfls-gold/ Mon, 30 Oct 2023 12:00:00 +0000 https://analyticsindiamag.com/?p=10102230

One of world’s biggest sporting events, the National Football League (NFL) has now partnered with all major tech companies including Amazon, Google and Apple

The post Big Tech Eyes NFL’s Gold appeared first on AIM.

]]>

Being one of the most watched sporting events in the world, the National Football League’s (NFL) championship game is broadcast in over 130 countries in more than 30 languages. The final game, the Super Bowl, is the most watched broadcast every year in the US, with over 115 million viewers tuned in to this year’s edition in February. While the game dominates the country, it is backed not just by the world’s greatest brands pouring in millions of dollars, but also by the biggest tech companies. 

Amazon Brings AI to NFL

In Amazon’s Q3 earnings, the NFL stood out among the many high points: subscription revenue, driven by the NFL, increased 14% to $10.2 billion. The company is in the second season of an 11-year, $11-billion exclusive deal to distribute Thursday Night Football (TNF, NFL games scheduled on Thursday evenings during the season) through Prime Video. 

Amazon has brought AI to TNF, making it an interactive experience for viewers. Features such as ‘X-Ray’ give fans real-time access to live statistics and data, while ‘Rapid Recap’ generates 13 two-minute-long highlights to help viewers catch up on games, among other features. 

‘Prime Vision with Next Gen Stats’, powered by AWS, provides insights by capturing real-time data on players’ location, speed, and acceleration using sensors hidden within their shoulder pads. Amazon collects over 300 million data points per season to train the machine learning models that extract insights from every game. Pass and position insights are shown in real time, giving viewers the ability to observe and predict game strategy, akin to a quarterback (a key player). 

As per Amazon’s Q3 results, the TNF season opener attracted 15.1 million viewers, and was the Prime Video’s most watched TNF game ever. The first six games brought an average of 12.9 million viewers, which was an increase of 25% from the previous season. Interestingly, Amazon has committed to paying $1 billion annually for the exclusive streaming rights to NFL games. 

Tussle for Broadcast Rights 

Last year, the National Football League revealed a multi-year deal with Google, giving YouTube TV and YouTube Primetime Channels exclusive rights to distribute NFL Sunday Ticket, which allows viewers to watch Sunday afternoon NFL games not typically available on local channels. 

The deal is said to be worth around $2 billion annually for seven years. DirecTV was previously paying $1.5 billion a year for the rights. However, as per a new report, YouTube is said to lose over $8.86 billion from now to 2029, with yearly declines of about $1.27 billion. 

Interestingly, Apple was one of the frontrunners to bag the deal; however, the agreement did not go through, as Apple reportedly wanted to pay less so it could offer the product at lower prices. Though this deal failed, Apple was not completely left out: the NFL announced Apple Music as the new sponsor of the Super Bowl halftime show from 2023. Taking over from Pepsi, the sponsor for the previous 10 years, Apple will pay $50 million annually over a five-year span. 

Big ‘Technology’ Partners

In 2021, Cisco, an enterprise networking and security company, signed a multi-year deal to become the official technology partner for NFL. The partnership aims to create a unified platform and establish a robust technological foundation for NFL’s operations and communications with improved speed, intelligence and security measures. 

Every NFL stadium’s replay control room is built on Cisco technology, and almost all of the league’s official partners and two-thirds of NFL stadiums, including Super Bowl hosts SoFi Stadium in LA and State Farm Stadium in Arizona, are powered by Cisco technology.  

In Tech Mahindra’s recent Q2 earnings call, CP Gurnani, the company’s CEO and MD, announced that they are working with the NFL. In 2018, the company signed a multi-year deal to be the technology, analytics and strategy partner for the Jacksonville Jaguars (an NFL team).  

The Best ‘Playground’

The first Super Bowl, held in 1967, drew close to 50 million viewers and was the only edition to be broadcast on two networks (CBS and NBC). Viewership has more than doubled since: the latest, 57th edition in February reached 115.1 million viewers, making it the most watched Super Bowl, and the most popular TV program, of all time. 

Tech firms understand that the NFL guarantees a large audience, and there's no better platform than the game's coverage. By partnering with such a major sporting event, companies leverage their technology and brand power: a probable win for them.

The post Big Tech Eyes NFL’s Gold appeared first on AIM.

]]>
Bigtech Gets an AI Safety Guru https://analyticsindiamag.com/intellectual-ai-discussions/bigtech-gets-an-ai-safety-guru/ Wed, 25 Oct 2023 12:01:22 +0000 https://analyticsindiamag.com/?p=10101985 Bigtech Gets an AI Safety Guru Now

Anthropic, Google, Microsoft, and OpenAI have jointly revealed the appointment of the executive director of the Frontier Model Forum

The post Bigtech Gets an AI Safety Guru appeared first on AIM.

]]>
Bigtech Gets an AI Safety Guru Now

After uniting in July to announce the formation of the Frontier Model Forum, Anthropic, Google, Microsoft, and OpenAI have jointly revealed the appointment of Chris Meserole as the inaugural executive director of the forum. Simultaneously, they’ve introduced a groundbreaking AI Safety Fund, committing over $10 million to stimulate research in the realm of AI safety.

Chris Meserole brings a wealth of experience in technology policy, particularly in governing and securing emerging technologies and their future applications. His new role entails advancing AI safety research to ensure the responsible development of frontier models and mitigate potential risks. Moreover, he will also oversee the identification of best safety practices for these advanced AI models.

Meserole expressed his enthusiasm for the challenges ahead, emphasising the need to safely develop and evaluate the powerful AI models. “The most powerful AI models hold enormous promise for society, but to realise their potential we need to better understand how to safely develop and evaluate them. I’m excited to take on that challenge with the Frontier Model Forum,” said Chris Meserole.

Who is Chris Meserole?

Before joining the Frontier Model Forum, Meserole served as the director of the AI and Emerging Technology Initiative at the Brookings Institution, where he was also a fellow in the Foreign Policy program.

The Initiative, founded in 2018, sought to advance responsible AI governance by supporting a diverse array of influential projects within the Brookings Institution. These initiatives encompassed research on the impact of AI on issues like bias and discrimination, its consequences for global inequality, and its implications for democratic legitimacy.

Throughout his career, Meserole has concentrated on safeguarding large-scale AI systems from the potential risks arising from either accidental or malicious use. His endeavours include co-leading the first global multi-stakeholder group on recommendation algorithms and violent extremism for the Global Internet Forum on Counter Terrorism. He has also published and provided testimony on the challenges associated with AI-enabled surveillance and repression. 

Additionally, Meserole organised a US-China dialogue on AI and national security, with a specific focus on AI safety and testing and evaluation. He’s a member of the Christchurch Call Advisory Network and played a pivotal role in the session on algorithmic transparency at the 2022 Christchurch Call Leadership Summit, presided over by President Macron and Prime Minister Ardern.

Meserole’s background lies in interpretable machine learning and computational social science. His extensive knowledge has made him a trusted advisor to prominent figures in government, industry, and civil society. His research has been featured in notable publications such as The New Yorker, The New York Times, Foreign Affairs, Foreign Policy, Wired, and more.

What’s next for the forum?

The Frontier Model Forum was established to share knowledge with policymakers, academics, civil society, and other stakeholders, to promote responsible AI development, and to support efforts to leverage AI in addressing major societal challenges.

The announcement says that as AI capabilities continue to advance, there is a growing need for academic research on AI safety. In response, Anthropic, Google, Microsoft, and OpenAI, along with philanthropic partners like the Patrick J McGovern Foundation, the David and Lucile Packard Foundation, Eric Schmidt, and Jaan Tallinn, have initiated the AI Safety Fund, with an initial funding commitment exceeding $10 million. 

The AI Safety Fund aims to support independent researchers affiliated with academic institutions, research centres, and startups globally. The focus will be on developing model evaluations and red teaming techniques to assess and test the potentially dangerous capabilities of frontier AI systems.

This funding is expected to elevate safety and security standards while providing insights for industry, governments, and civil society to address AI challenges.

Additionally, a responsible disclosure process is being developed, allowing frontier AI labs to share information regarding vulnerabilities or potentially dangerous capabilities within frontier AI models, along with their mitigations. This collective research will serve as a case study for refining and implementing responsible disclosure processes.

In the near future, the Frontier Model Forum aims to establish an advisory board to guide its strategy and priorities, drawing from a diverse range of perspectives and expertise.

The AI Safety Fund will issue its first call for proposals in the coming months, with grants expected to follow soon after.

The forum will continue to release technical findings as they become available. Furthermore, they aim to deepen their engagement with the broader research community and collaborate with organisations like the Partnership on AI, MLCommons, and other leading NGOs, government entities, and multinational organisations to ensure the responsible development and safe utilisation of AI for the benefit of society.

The post Bigtech Gets an AI Safety Guru appeared first on AIM.

]]>
Amazon is Testing Humanoids in its Warehouses https://analyticsindiamag.com/ai-news-updates/amazon-is-testing-humanoids-in-its-warehouses/ Fri, 20 Oct 2023 09:42:36 +0000 https://analyticsindiamag.com/?p=10101818 Amazon is Testing Humanoids in its Warehouses

Digit has a carrying capacity of up to 35 pounds (16 kg).

The post Amazon is Testing Humanoids in its Warehouses appeared first on AIM.

]]>
Amazon is Testing Humanoids in its Warehouses

Amazon is experimenting with humanoid robots in select US warehouses, marking a significant step in its automation endeavors. The tech giant aims to optimise efficiency by introducing these robots, named ‘Digit’, which emulate human movements for tasks such as moving and handling items.

Created by Agility Robotics, a company supported by Amazon and headquartered in Corvallis, Oregon, Digit is a versatile robot. This 5 feet 9 inches (175cm) tall, 143-pound (65 kg) machine possesses the ability to walk in multiple directions, including forward, backward, and sideways, as well as the capability to crouch. Additionally, Digit has a carrying capacity of up to 35 pounds (16 kg).

Amazon’s decision to implement robotic workers comes amid concerns about its treatment of warehouse staff, with reports of grueling conditions and high turnover rates. The company has faced lawsuits and allegations of fostering a challenging work environment.

While labour unions express apprehension about the potential for job losses due to automation, Amazon contends that its robotic systems have created numerous new job categories, emphasising the integral role of human workers in the fulfilment process. The company has already deployed over 750,000 robots in its operations, working alongside human employees to address repetitive tasks.

Unlike conventional wheeled robots used in Amazon warehouses, Digit’s legged design enables it to navigate obstacles like steps and stairs. Amazon is currently conducting trials to evaluate its compatibility and safety when working alongside human workers.

Amazon Robotics’ chief technologist, Tye Brady, stresses that human workers are irreplaceable and dismisses the notion of fully automated warehouses, highlighting humans’ problem-solving abilities and higher-level thinking.

Scott Dresser of Amazon Robotics describes Digit as a prototype, and the company’s experience suggests that new technologies create jobs and support growth, as they require human intervention for maintenance.

As part of its ongoing automation efforts, Amazon has previously introduced wheeled robots for goods transportation within its warehouses and initiated drone deliveries in select US regions. It plans to begin drone deliveries in Italy and the UK by the end of 2024.

The post Amazon is Testing Humanoids in its Warehouses appeared first on AIM.

]]>
6 Most Exciting New Updates in PyTorch 2.1  https://analyticsindiamag.com/ai-mysteries/6-most-exciting-new-updates-in-pytorch-2-1/ Thu, 05 Oct 2023 12:03:39 +0000 https://analyticsindiamag.com/?p=10101163

PyTorch 2.1 released a host of updates and improved their library. They also added support for training and inference of Llama 2 models powered by AWS Inferentia.

The post 6 Most Exciting New Updates in PyTorch 2.1  appeared first on AIM.

]]>

PyTorch recently released a new update, PyTorch 2.1. The update offers automatic dynamic shape support in torch.compile, distributed checkpointing for saving and loading distributed training jobs across multiple ranks in parallel, and torch.compile support for the NumPy API. 

In addition, it has released beta updates to the PyTorch domain libraries TorchAudio and TorchVision. Lastly, the community has added support for training and inference of Llama 2 models powered by AWS Inferentia2.

This will make running Llama 2 models on PyTorch quicker, cheaper and more efficient. This release was the effort of 784 contributors with 6,682 commits. 
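The NumPy API support mentioned above can be sketched roughly as follows (a minimal illustration assuming PyTorch 2.1 or newer; the function and values are made up for the example):

```python
import numpy as np
import torch

# PyTorch 2.1 lets torch.compile trace functions written against the
# NumPy API, translating NumPy calls into PyTorch operations under the hood.
@torch.compile
def weighted_sum(x, y):
    return np.sum(x * y)

# Compiled NumPy functions still accept and return NumPy values.
result = weighted_sum(np.ones(4), np.full(4, 2.0))
```

Decorating an unmodified NumPy function this way is what lets existing NumPy code pick up the compiler without a rewrite.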

New Features of PyTorch 2.1 

  • The new feature updates include the addition of AArch64 wheel builds, which allow devices with 64-bit ARM architecture to use PyTorch. 
  • PyTorch can now be compiled natively on Apple M1 machines instead of being cross-compiled from x86, which caused performance issues. Native compilation improves performance and makes PyTorch easier to use directly on Apple M1 processors. 

Improvements 

  • Python Frontend: torch.device can now be used as a context manager to change the default device. This is a simple but powerful feature that can make your code more concise and readable.
  • Optimisation: NAdamW is a new optimiser that builds on AdamW and stands out for its stability and efficiency, making it a superior choice for faster and more accurate model training.
  • Sparse Frontend: Semi-structured sparsity is a new type of sparsity that can be more efficient than traditional sparsity patterns on NVIDIA Ampere and newer architectures. 
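The torch.device context-manager behaviour in the list above can be sketched as follows (a minimal illustration; the tensor shape is arbitrary):

```python
import torch

# In PyTorch 2.1, a torch.device can act as a context manager that
# changes the default device for tensors created inside the block.
with torch.device("cpu"):
    x = torch.randn(2, 2)  # allocated on the device set by the context

# Outside the block, the default device reverts to its previous value.
```

Replacing "cpu" with "cuda" would make every tensor created inside the block land on the GPU without per-call `device=` arguments.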

PyTorch’s TorchAudio v2.1 Library

The new update has introduced key features like the AudioEffector API for audio waveform enhancement and Forced Alignment for precise transcript-audio synchronisation. The addition of TorchAudio-Squim models allows estimation of speech quality metrics, while a CUDA-based CTC decoder improves automatic speech recognition efficiency. 

In the realm of AI music, new utilities enable music generation using AI techniques, and updated training recipes enhance model training for specific tasks. However, users need to adapt to changes like updated FFmpeg support (versions 6, 5, 4.4) and libsox integration, impacting audio file handling.

These updates expand PyTorch’s capabilities, making audio processing and AI music generation more efficient and precise. With enhanced alignment, speech quality assessment, and faster speech recognition, TorchAudio v2.1 is a valuable upgrade. 

TorchRL Library 

PyTorch has enhanced the RLHF components, making it easy for developers to build an RLHF training loop with limited RL knowledge. TensorDict enables easy interaction between datasets (say, Hugging Face datasets) and RL models. TorchRL has also added new algorithms, offering a wide range of solutions for offline RL training and making it more data efficient. 

Plus, TorchRL can now work directly with hardware, like robots, for seamless training and deployment. It has added essential algorithms and expanded its supported environments, for faster data collection and value function execution.

TorchVision Library

The TorchVision library is now 10%-40% faster. “This is mostly achieved thanks to 2X-4X improvements made to v2.Resize(), which now supports native uint8 tensors for Bilinear and Bicubic mode. Output results are also now closer to PIL’s!” reads the blog. 

Additionally, TorchVision now supports CutMix and MixUp augmentations. The previous beta transforms are now stabilised, offering improved performance for tasks like segmentation and detection. 

Llama 2 Deployment with AWS Inferentia2 using TorchServe

For the first time, PyTorch has deployed the Llama 2 model for inference using Transformers Neuron and TorchServe, via Amazon SageMaker on EC2 Inferentia2 instances. Inferentia2 features 3x higher compute and 4x more accelerator memory, resulting in up to 4x higher throughput and up to 10x lower latency. 

The optimisation techniques from the AWS Neuron SDK enhance performance while keeping costs low, and the Llama 2 deployment post also shares benchmarking results. 

The framework is integrated with Llama 2 through AWS Transformers Neuron, enabling seamless usage of Llama-2 models for optimised inference on Inf2 instances.

The post 6 Most Exciting New Updates in PyTorch 2.1  appeared first on AIM.

]]>
Genpact, AWS to Fight Financial Crime with Generative AI https://analyticsindiamag.com/ai-news-updates/genpact-aws-to-fight-financial-crime-with-generative-ai/ Fri, 29 Sep 2023 10:14:36 +0000 https://analyticsindiamag.com/?p=10100909 genpact aws

Genpact has actively engaged with numerous riskCanvas clients to enhance the detection, investigation, and prevention of various financial crime threats.

The post Genpact, AWS to Fight Financial Crime with Generative AI appeared first on AIM.

]]>
genpact aws

Genpact has announced an expanded collaboration with Amazon Web Services (AWS) aimed at revolutionising financial crime risk operations through the integration of generative AI and LLMs. The partnership entails the integration of Genpact’s cloud-based financial crime suite, riskCanvas, with Amazon Bedrock, yielding significant efficiencies and benefits for clients, including Apex Fintech Solutions.

Leveraging its existing relationship with AWS, Genpact is merging its intellectual assets and industry expertise with AWS’s generative AI capabilities. The seamless integration of Amazon Bedrock FMs into Genpact’s riskCanvas financial crimes software suite aims to unlock substantial value, enhance speed, and improve accuracy in detecting, investigating, and preventing financial crime threats for businesses.

This integration enables experts to review outputs and adopt a guided decision-making process, offering comprehensive summaries and analyses of potential financial crime activities. This results in improved efficiency and precision.

Genpact has actively engaged with numerous riskCanvas clients to enhance the detection, investigation, and prevention of various financial crime threats. As a result, Genpact is delivering accelerated efficiencies and significant impact to clients in the finance and capital markets sectors, with a notable example being their collaboration with Apex Fintech Solutions.

Justin Morgan, Head of Financial Crimes Compliance at Apex Fintech Solutions, emphasised the importance of staying ahead of financial criminals: “With the addition of generative AI features to Genpact’s riskCanvas, our analysts will be able to produce Suspicious Activity Report (SAR) narratives and case summaries at the click of a button using inputs from millions of data points. We expect this will reduce time spent on case summarizations by 60%, allowing our analysts to spend more time identifying truly suspicious financial activity.”

By utilising approved client data from the secure riskCanvas ecosystem and Amazon Bedrock’s secure data handling capabilities, highly accurate outcomes are generated while maintaining robust data protection standards.

Atul Deo, General Manager, Amazon Bedrock at AWS, emphasised the importance of responsible AI implementation: “Amazon Bedrock is rooted in secure data handling, encrypting all data and allowing users to customise models privately. Integrated with Genpact’s riskCanvas, this powerful combination enables our mutual customers to enhance productivity in investigating, detecting, and preventing financial crime threats.”

BK Kalra, Global Business Leader, Financial Services, Consumer and Healthcare at Genpact, highlighted the growing need for generative AI in financial crime operations: “Genpact’s expanded relationship with AWS represents a pivotal step in redefining the operations landscape for enterprises. Together we can unlock untapped value, and fuel significant growth opportunities for our clients, solidifying our commitment to delivering valuable business impact.”

The post Genpact, AWS to Fight Financial Crime with Generative AI appeared first on AIM.

]]>
After Amazon and Microsoft, Oracle Introduces Generative AI Healthcare Solutions  https://analyticsindiamag.com/ai-news-updates/after-amazon-and-microsoft-oracle-introduces-generative-ai-healthcare-solutions/ Thu, 21 Sep 2023 09:45:00 +0000 https://analyticsindiamag.com/?p=10100429

Oracle Corp. introduces healthcare innovations, including cloud-based EHR, generative AI, and public APIs, to streamline patient care and provider efficiency.

The post After Amazon and Microsoft, Oracle Introduces Generative AI Healthcare Solutions  appeared first on AIM.

]]>

Oracle Corp. announced several enhancements to its healthcare solutions at its flagship event being held in Las Vegas. This includes new cloud-based electronic health record (EHR) capabilities, generative AI services, public Application Programming Interfaces (APIs), and back-office enhancements designed for the healthcare industry.

The new Oracle Health EHR platform will offer a modern interface and intuitive, guided processes that improve patient and provider experiences with easy-to-use, consumer-grade applications. The platform will also provide convenient self-service options that empower patients while reducing provider burden and administrative workloads.

For providers, taking advantage of a host of new features will not only save time, but also increase efficiency. For instance, using generative AI services, providers will be able to create personalised treatment plans based on the patient’s medical history, current condition, and preferences. The platform will also leverage natural language processing and machine learning to extract relevant information from clinical notes and generate accurate documentation and billing codes.

“Our goal is to deliver one of the industry’s best, most functionally rich EHR systems to reduce wasted time, eliminate redundant processes, and add value every step of the way for practitioners and the patients they serve,” said Travis Dalton, executive vice president and general manager of Oracle Health. 

Oracle Health will be making its clinical and financial resources, such as vitals, appointments, and orders, available via public Application Programming Interfaces (APIs). These new APIs will enable integration with Oracle’s clinical solutions and allow partners, customers, and third-party vendors to create more advanced customizations, as well as net new experiences and workflows:

  • Generative AI capabilities: Clinical Digital Assistant enables providers to leverage generative AI together with voice commands to reduce manual work. For physicians, the multimodal voice and screen-based assistant participates in appointments using generative AI to automate note taking and to propose context-aware next actions, such as ordering medication or scheduling labs and follow-up appointments. 
  • Human resources enhancements: To help healthcare organisations support their complex staffing needs, Oracle added new AI-powered workforce management within Oracle Fusion Cloud HCM. With AI-powered healthcare scheduling and EHR insights, managers can match the best suited workers to the appropriate assignment based on real-time patient and workforce data.
  • Finance and supply chain enhancements: Using Oracle’s existing applications, Oracle will enable healthcare organisations to consolidate disconnected systems and automate critical processes while providing the flexibility needed to support new delivery models ranging from tele-health to home- and community-based care. 

Before Oracle, Amazon’s AWS launched HealthScribe and HealthImaging in July to improve the efficiency and accuracy of EHR using generative AI. In April, Microsoft partnered with Epic, America’s largest EHR to enhance medical records and improve patient-doctor interaction using ChatGPT-4. 

The post After Amazon and Microsoft, Oracle Introduces Generative AI Healthcare Solutions  appeared first on AIM.

]]>
Cloudera and AWS Forge Strategic Collaboration to Enhance Data Solutions https://analyticsindiamag.com/ai-news-updates/cloudera-and-aws-forge-strategic-collaboration-to-enhance-data-solutions/ Thu, 07 Sep 2023 07:15:20 +0000 https://analyticsindiamag.com/?p=10099644 cloudera

Cloudera and AWS are collaborating to make it easier for customers to use credits for faster cloud workload migration and get the Cloudera Data Platform (CDP) on AWS.

The post Cloudera and AWS Forge Strategic Collaboration to Enhance Data Solutions appeared first on AIM.

]]>
cloudera

Cloudera, the data company specializing in enterprise AI, has officially entered into a Strategic Collaboration Agreement (SCA) with Amazon Web Services, Inc. (AWS). This agreement implies that Cloudera is committed to making cloud-based data management and analytics on AWS better and more widespread. Cloudera will harness AWS services to foster ongoing innovation and cost efficiency for customers using the Cloudera open data lakehouse on AWS, specifically tailored for enterprise generative AI.

The company is part of the AWS Independent Software Vendor (ISV) Workload Migration Program (WMP) Partner ecosystem. They also have a Cloudera Data Platform (CDP) Public Cloud listing on the AWS Marketplace. This makes it easier for customers to use credits for faster cloud workload migration and CDP procurement on AWS.

With its primary focus on elevating the open data lakehouse experience, Cloudera has chosen AWS to manage critical components of CDP, such as data in motion, the data lakehouse, the data warehouse, the operational database, AI/machine learning, master data management, and end-to-end security. This strategic decision enables customers to swiftly transition to CDP in the cloud without requiring application refactoring, while also supporting hybrid deployments. 

Furthermore, Cloudera has seamlessly integrated CDP with AWS services, including Amazon Simple Storage Service (Amazon S3), Amazon Elastic Kubernetes Service (Amazon EKS), Amazon Relational Database Service (Amazon RDS), and Amazon Elastic Compute Cloud (Amazon EC2), providing customers with a tightly woven platform that reduces costs and capitalizes on AWS’s latest innovations. Cloudera customers gain access to AWS native services without the need for self-managed integrations.

Paul Codding, Executive Vice President of Product Management at Cloudera, stated, “Deepening our collaboration with AWS gives customers even more reasons to choose to run the Cloudera Data Platform on AWS. With tighter hardware and AWS service integration, customers get the best possible experience with strong security and governance, along with new cost reduction options to support their most critical analytical workloads.”

David Wroe, Principal Software Engineer & Solution Architect for Be The Match, a global leader in cell therapy, noted, “Our move to CDP Public Cloud on AWS for Be The Match’s search and match platform has resulted in significant cost savings for the organization and a reduction in the infrastructure maintenance expense measured in millions of dollars. As a non-profit, this affords us tremendous operational flexibility that was not previously possible.”

AWS and Cloudera will partner to expand cloud-native data management and data analytics capabilities on AWS, in addition to jointly developing marketing and co-selling initiatives for customers.

PhonePe announced in mid-August this year that it had chosen the Cloudera Data Platform (CDP) to improve operational efficiency. CDP will facilitate the migration of some workloads to the cloud while maintaining on-premise operations. As a growing fintech company, PhonePe explained its decision by saying it aimed to address data-scaling challenges by transitioning to a hybrid data platform. 

The post Cloudera and AWS Forge Strategic Collaboration to Enhance Data Solutions appeared first on AIM.

]]>
Myntra’s New Generative AI Tool Will Surprise You https://analyticsindiamag.com/innovation-in-ai/myntras-new-generative-ai-tool-will-surprise-you/ Mon, 04 Sep 2023 12:52:26 +0000 https://analyticsindiamag.com/?p=10099446 Myntra’s New Generative AI Tool Will Surprise You

Myntra recently launched a first-of-its-kind solution in e-commerce in India and possibly globally called MyFashionGPT. It’s going to change the way you shop entirely.

The post Myntra’s New Generative AI Tool Will Surprise You appeared first on AIM.

]]>
Myntra’s New Generative AI Tool Will Surprise You

“I am going to Goa for a vacation, show me what I can wear,” we asked, and within seconds, the new tool, MyFashionGPT, fetched results for shorts, t-shirts, sunglasses, hats and sunscreen. 

The brainchild of Myntra, the new tool enables users to search using natural language and gives relevant suggestions based on customer queries. “This is a first-of-its-kind solution in e-commerce in India and possibly globally,” averred Myntra’s chief technology and product officer, Raghu Krishnananda, in an exclusive interaction with AIM. 

He said that Myntra used ChatGPT for query understanding and then leveraged its own search infrastructure to fetch relevant and related products from its catalogue and show them as collections. The company is working on more generative AI features, which will launch on the platform in the near future.

Tech Stack 

Myntra has been using both proprietary and open-source algorithms, depending on the use case. Krishnananda believes that open-source algorithms provide a quicker path to market. “When proprietary data is involved, using a hosted model would be the right approach where we train the open source models on Myntra-specific knowledge such as the product taxonomy.” The company has also developed its own AI models and combined multiple models to solve specific use cases, especially in image science applications.

Myntra is currently leveraging AzureAI services that give access to OpenAI models such as ChatGPT3.5, Dall-E, etc. “We are looking at privately hosted models as well as managed service models based on the use cases, and we will continue to have partnerships that serve this need.” 

Myntra’s latest tool MyFashionGPT, is integrated with ChatGPT3.5. “For text-related generative AI, we use ChatGPT3.5 and for image-related generative AI, we use Stable Diffusion-based models in conjunction with other internally developed models.” A number of other AI-based solutions in Myntra (non-generative AI) such as MyStylist have been developed in-house. 

Myntra’s Generative AI Prowess

Synonymous with fashion and lifestyle, Myntra has been aggressively pushing to bring generative AI onto its platform, with the larger goal of enhancing customer experience. “We have been using AI for more than five years now and see huge benefits. In that sense Myntra is an AI-first company,” said Krishnananda. 

Myntra’s adoption of AI-based solutions has not only helped customers but also sellers. “AI-based solutions such as trend identification, demand prediction, and others are helping sellers bring the right merchandise and assortment and stay ahead of the trend,” said Krishnananda. Furthermore, its inventory and route optimisation algorithms have helped improve logistics.  

While Myntra may have carved an AI niche in the fashion segment, other e-commerce players have also dived into the generative AI wave with a number of use cases (see below). 

Source: Paxcom Report

Tech giant Amazon, which has already been implementing generative AI solutions on AWS and other services, is also working on bringing the same to its e-commerce vertical. The company is testing AI-generated customer review highlights that present concise summaries of written reviews to help shoppers make quick purchasing decisions. 

To cater to small-scale sellers, last week Amazon launched its virtual assistant ‘सहAI’ (sahai). The AI tool will help sellers list their products online, analyse sales trends and thereby assist with improving sales. 

Challenges Galore

Training and inference for very large proprietary generative AI models is a challenge for any company, and it is easier said than done. “We are working to take ‘smaller’ open-source models and fine-tune them on our own data,” said Krishnananda, emphasising the safety and cost benefits of this approach, without revealing which smaller models (the likes of Llama, Vicuna, etc.) are being used. 

Myntra believes that it faces fewer challenges than most when it comes to adopting generative AI in its workflow. Krishnananda also spoke about how adoption is being driven not just on the platform front, but also within teams. “The tech team is taking measures to democratise the use of Generative AI by providing internal APIs that the broader tech team can play with, as well as organise tech talks and knowledge sharing sessions,” he concluded, adding that they are building in-house frameworks for low-cost fine-tuning and inference using GPUs. 

The post Myntra’s New Generative AI Tool Will Surprise You appeared first on AIM.

]]>
Diamond Cut Diamond: Amazon Combats AI-Generated Reviews with AI  https://analyticsindiamag.com/innovation-in-ai/diamond-cut-diamond-amazon-combats-ai-generated-reviews-with-ai/ Thu, 17 Aug 2023 11:22:13 +0000 https://analyticsindiamag.com/?p=10098661

Amazon is leveraging AI to present review highlights and encouraging authentic feedback

The post Diamond Cut Diamond: Amazon Combats AI-Generated Reviews with AI  appeared first on AIM.

]]>

Amazon recently introduced AI-generated customer review highlights, which present concise summaries of common themes and sentiments from written reviews, helping shoppers quickly gauge if a product suits their needs. The summarisation tool, in testing since earlier this year, is available to select mobile users in the U.S. 

The new AI-powered feature surfaces product insights and allows easy access to reviews highlighting specific attributes of products like “ease of use” or “performance.” 

By leveraging AI to present review highlights and encouraging authentic feedback, Amazon strives to make the shopping journey clearer and more transparent for its customers. 

Essentially, the technology derives its functionality from Amazon’s Community Guidelines, which act as parameters for its machine learning models. These models analyse multiple data points to detect risk, while expert investigators trained in fraud-detection techniques work to prevent fake reviews. The analysis encompasses various data points, such as account relationships, sign-in activities, review histories, and indicators of unusual behaviour.

“Our goal is to ensure that every review in Amazon’s stores is trustworthy and reflects customers’ actual experiences. Amazon welcomes authentic reviews—whether positive or negative—but strictly prohibits fake reviews that intentionally mislead customers,” said David Montague, Vice President of Selling Partner Risk at Amazon. 

“We continue to innovate on our proactive technology to detect fake reviews and other indications of unusual behaviour,” he added.

This is in line with the company’s ‘Rekognition Content Moderation’ system, which it uses to screen harmful images in product reviews. The system combines machine learning with human-in-the-loop review; it began by automating roughly 40% of image decisions and has gradually improved from there. Some self-managed models were transitioned to the Amazon Rekognition Detect Moderation API for better accuracy.

The migration streamlined the architecture, reducing effort and costs. Rekognition Content Moderation’s accuracy also cut the need for human review and the associated expenses, yielding significant benefits for product review moderation.
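The human-in-the-loop pattern described above is commonly implemented as confidence-threshold routing: confident model decisions are handled automatically, and uncertain ones are escalated to a person. A generic sketch — the thresholds and labels here are illustrative, not Amazon's — might be:

```python
def route_image_decision(moderation_confidence,
                         auto_threshold=0.9,
                         review_threshold=0.5):
    """Route a moderation decision based on model confidence.

    High-confidence results are handled automatically; uncertain
    ones go to a human reviewer, mirroring the human-in-the-loop
    setup described above. Thresholds are illustrative.
    """
    if moderation_confidence >= auto_threshold:
        return "auto_reject"   # model is confident the image is harmful
    if moderation_confidence >= review_threshold:
        return "human_review"  # uncertain: escalate to an investigator
    return "auto_allow"        # model is confident the image is safe

decisions = [route_image_decision(c) for c in (0.95, 0.7, 0.1)]
```

Raising `review_threshold` shrinks the human-review queue at the cost of letting more uncertain images through automatically, which is the trade-off behind the gradually improving automation rate the article describes.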

Amazon is strategically incorporating artificial intelligence into its product offerings. Instead of emphasising prominent AI chatbots or imaging tools, the company is concentrating on services that enable developers to build their own generative AI tools using its AWS cloud infrastructure.

Recently, it partnered with Meta to bring Llama 2 to AWS. Although Amazon won’t share details of its AI/ML models, the review summarisation tool could well be based on Meta’s Llama 2 70B model.

Earlier this year, Amazon’s CEO, Andy Jassy, said that generative AI holds significant implications for the company’s future. This is evidenced by the ongoing generative AI initiatives across Amazon’s various business units.

More Questions

However, this recent announcement about using AI to combat fake reviews raises questions about potential bias in the summary generation process. While AI can condense vast amounts of information into summaries, there’s concern that Amazon’s profit motives might influence how the AI presents information. 

This could favour high-margin products and established brands, potentially disadvantaging smaller sellers with limited marketing budgets. 

Legal Action

Amazon’s customer reviews have been a vital part of its platform since 1995, so it makes sense to keep improving their utility with AI. However, with the advent of fake-review brokers, reviews have lost much of their credibility. Reports indicate that up to 40% of reviews on the platform are potentially fake.  

Amazon’s commitment to combating fake reviews is further demonstrated by its recent legal action against brokers suspected of promoting the creation of fraudulent Amazon reviews.

“Another way we fight fake reviews is through legal action. Not only are we targeting the source of the problem but we’re sending a clear message that there’s no place for abuse in our stores and we will hold fraudsters accountable,” said Montague.

The Federal Trade Commission also recently proposed a rule to ban deceptive online reviews, aiming to enhance credibility. The rule’s development, starting in 2019, has involved cases against misleading claims and fake reviews. The proposed rule prohibits selling or soliciting fake reviews, including fabricated profiles, AI-generated content, and reviews from non-users, with penalties for violations. 

Other prohibited activities include buying positive/negative reviews for any product, allowing reviews from leadership/affiliates without proper disclosure, operating review sites as “independent” for one’s products, suppressing reviews through threats/intimidation, and selling fake engagement metrics like followers and video views.

Scope for More

Amazon’s efforts have yielded results, as the company reported blocking over 200 million suspected fake reviews in the past year using these methods. The retail platform acknowledges that a collaborative approach involving private sector entities, consumer groups, and governments is crucial for effectively addressing the problem.

Despite Amazon’s endeavours, consumer groups believe that more needs to be done to combat the widespread issue of fake reviews. While Amazon’s use of AI and legal action against fake-review operators have shown progress, these groups emphasise the need for stronger legislative measures and further cooperation to ensure a genuine and trustworthy online shopping experience.

]]>