Innovation in AI | AIM
https://analyticsindiamag.com/innovation-in-ai/

Google-Backed Cropin’s New AI Platform Could Tackle Food Crisis | Wed, 17 Jul 2024
https://analyticsindiamag.com/innovation-in-ai/google-backed-cropin-launches-real-time-agri-intelligence-platform-sage/

Cropin Sage enables users to make informed decisions based on historical, present, and future data on cultivation practices, crops, irrigation, climate, and soil.

In 2022, Cropin launched what it claimed was the ‘world’s first cloud for agriculture’. Now, it is adding generative AI into the mix. 

The Bengaluru-based agritech firm just launched Cropin Sage, which it claims is the ‘world’s first real-time agri-intelligence platform’.

Powered by Google Gemini, Cropin Sage enables users to make informed decisions based on historical, present, and future data on cultivation practices, crops, irrigation, climate, and soil.

To provide accurate data at such a scale, Cropin has partitioned the global map into 5×5 km grids, assigning a unique ID to each. Users can query Sage regarding crop cultivation feasibility within specific grids. Sage then provides data visualisation outputs in response to these inquiries.

Sage, which sits inside the Cropin cloud, then processes this data into different grid sizes and aggregates it at different temporal frequencies, including yearly, seasonal, monthly, weekly, and daily, based on customer requirements.
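Cropin has not published how these grid identifiers are derived, but the basic idea of snapping a coordinate to a roughly 5×5 km cell can be illustrated in a few lines of Python. The cell-size approximation, ID format and function name below are assumptions for the sketch, not Cropin’s actual scheme.

```python
import math

# ~5 km expressed in degrees of latitude (1 degree of latitude is ~111 km).
# Illustrative only: a production grid would handle longitude shrinkage
# towards the poles and edge cases far more carefully.
CELL_DEG = 5.0 / 111.0

def grid_id(lat: float, lon: float) -> str:
    """Snap a latitude/longitude pair to a ~5x5 km cell and return its ID."""
    row = math.floor((lat + 90.0) / CELL_DEG)
    col = math.floor((lon + 180.0) / CELL_DEG)
    return f"r{row}_c{col}"

# A farm near Bengaluru falls into one cell; Sage-style aggregations
# (daily, weekly, monthly, seasonal, yearly) can then be keyed on that ID.
print(grid_id(12.97, 77.59))
```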

Gemini Queries Large Datasets

Sage leverages Cropin’s advanced crop models, which analyse crop performance in detail. It also leverages a smart climate model, which integrates data spanning the last 40 years and current conditions and forecasts weather for the coming year. 

“We integrate Gemini with our climate models, crop models, and proprietary knowledge graphs. These elements are fused together to provide comprehensive solutions,” Krishna Kumar, chief executive officer at Cropin, told AIM. 

At its core, Sage makes sense of enormous datasets spanning terabytes. Each dataset could easily exceed 100 gigabytes, depending on the country and the availability of historical data, in addition to other datasets covering temperature, climate, and even socioeconomic indicators.

In fact, Sage uses Gemini to query these large datasets and generate responses in a consumable manner within seconds. The startup’s proprietary data spans over 350 crops and 10,000 varieties in 103 countries.

Sage does not present the data in a text format but leverages a visualisation tool to transform complex datasets into intuitive graphs, charts, and interactive displays that enhance understanding and decision-making.

To do so, Sage uses the Gemini 1.5 Flash model to convert user queries into SQL queries, which allows it to present grid-based data on a user-friendly, visually appealing platform.
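Cropin has not shared Sage’s prompts or schema, but the general text-to-SQL pattern it describes looks roughly like the sketch below, assuming the google-generativeai Python package, an API key, and a hypothetical crop_stats table. The real system would then run the SQL against its warehouse and chart the results rather than return raw text.

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # assumption: key supplied by the caller
model = genai.GenerativeModel("gemini-1.5-flash")

# Hypothetical table; Cropin's real schema is not public.
SCHEMA = "crop_stats(grid_id TEXT, crop TEXT, year INT, yield_t_per_ha REAL, rainfall_mm REAL)"

def to_sql(question: str) -> str:
    """Ask the model to translate a natural-language question into one SQL query."""
    prompt = (
        f"You write SQL for the table {SCHEMA}. "
        "Return only a single SQL statement, with no explanation.\n"
        f"Question: {question}"
    )
    return model.generate_content(prompt).text.strip()

# The generated SQL is executed against the grid-level warehouse, and the
# resulting rows are handed to a visualisation layer instead of shown as text.
print(to_sql("Average paddy yield per year for grid r2284_c5712 since 2015"))
```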

It also leverages Google Kubernetes Engine (GKE) services to scale its operations in real-time as demand increases, processing massive volumes of data. 

According to Kumar, Sage will initially provide information on 13 crops, expanding to 15 by the end of the year. These crops, which include corn, paddy, soy, wheat, onion, potato, sugarcane, cocoa, coffee, and cotton, among others, account for 80% of total global production.

Moreover, users can seek data on these 13 crops irrespective of where they are. “They could be anywhere, any country, but if they seek data on these crops, they will get the data,” Kumar emphasised.

Who Benefits from Sage?

Rajesh Jalan, the CTO and head of engineering at Cropin, who was also present during the conversation with Kumar, told AIM that the model is highly accurate. 

Hallucinations remain a persistent issue with LLMs. In fact, upon Gemini’s release, it generated controversial responses that led Google CEO Sundar Pichai to deem them ‘unacceptable’.

However, according to Jalan, this matters less here because Sage surfaces the underlying data for every query. “The customers can see all the data and if the data is wrong, the customer will be able to figure it out.”

So far, the company has nearly 250 B2B customers worldwide and has digitised over 30 million acres of farmlands, positively impacting over 7 million farmers worldwide. 

Cropin Sage will benefit CPG players, seed manufacturers, food processors, multilateral organisations, financial institutions and governments, according to Kumar.

Global food systems today face substantial challenges, with a range of adverse factors significantly hindering farmers’ capacity to meet food production demands.

This issue is exemplified by the cocoa crisis, which has deeply affected chocolate manufacturers. Cocoa prices have surged by 400% within a year, posing significant affordability challenges for many in the chocolate industry.

“Consider an enterprise in the food and agribusiness sector with a supply chain spanning multiple countries, like PepsiCo, for instance. If an unstable supply chain caused by a changing climate pushes them to expand potato production to a new country, they can leverage Sage to predict suitable locations for sourcing and investments,” Kumar said.

Sage can also benefit multiple governments, especially in countries which lack good agricultural data. 

“For instance, we’re collaborating with the Kenyan government on corn production challenges, analysing grid-level data to understand climate impacts and crop trends, addressing food security concerns comprehensively,” he added.

Akṣara – Cropin Micro Agri Model 

Earlier this year, Cropin announced the launch of ‘akṣara’, the sector’s first purpose-built open-source micro language model (µ-LM) for climate-smart agriculture. 

Built on top of Mistral’s foundational models, the startup aims to make agri-knowledge accessible to everyone in the ecosystem. “We trained it with data for five countries and nine crops and now we are embedding the model with Sage,” Kumar revealed.

While Sage is a data-intensive platform, akṣara is a knowledge platform, and according to Jalan, they are complementary to each other. 

“We have complete clarity on how they can be used independently and also in combination to provide users with knowledge and data,” he said.

Sage is Multilingual; Voice Capabilities Coming Soon

Sage is currently available in English but does a decent job when prompted in Hindi as well. The startup revealed that more languages could be added going forward.

Moreover, at Google I/O, the tech giant unveiled Project Astra, a first-of-its-kind initiative to develop universal AI agents capable of perceiving, reasoning, and conversing in real-time. 

Project Astra is built on Gemini, and according to Kumar, the same voice capabilities would come to Sage as well.

“The adoption will rely heavily on voice interaction in natural language across different regions worldwide. You simply ask a question in your preferred language. This will be really beneficial for those less adept at typing or with limited literacy in agriculture,” Kumar adds.

However, he also emphasises that the startup will take action when the time is right, as it depends largely on their capacity to manage this transition effectively.

As GitHub Begins Technical Preview of Copilot Workspace, an Engineer Answers How it Differs from Devin | Mon, 29 Apr 2024
https://analyticsindiamag.com/innovation-in-ai/as-github-begins-technical-preview-of-copilot-workspace-an-engineer-answers-how-it-differs-from-devin/

There are fundamental differences between GitHub Copilot Workspace and Devin, even though they both are designed to solve similar problems

At GitHub Universe 2023, CEO Thomas Dohmke introduced the world to GitHub Copilot Workspace, which he believes can reimagine the very nature of the developer experience itself.

Within Copilot Workspace, developers can ideate, plan, develop, test, and execute code using natural language.

Introduced in 2022, GitHub Copilot has emerged as the world’s most popular AI developer tool. The Microsoft-owned company now anticipates Copilot Workspace to be the next evolutionary step.

“There are various ways in which Copilot Workspace can help a developer throughout the software development journey. One of the most important benefits is its ability to help developers get started on a task,” Jonathan Carter, head of GitHub Next, told AIM in an exclusive interview. 

Research conducted by GitHub indicates that initiating a project is frequently one of the most daunting aspects of software development. 

“Particularly deciding how to approach a task, which files to look through, and how to consider the pros and cons of various solutions. Copilot Workspace reduces that cognitive burden by meeting developers where a new task often begins–a GitHub issue–and synthesising all of the information in that issue to inform a sequenced plan for developers to iterate through,” Carter said.

GitHub Copilot Workspace vs Devin

Earlier this year, Cognition Labs announced Devin, dubbed the world’s first AI software engineer.

Upon its announcement, Devin had the developer community talking, as it effectively cleared multiple engineering interviews at top AI companies and also fulfilled actual tasks on the freelance platform Upwork.

However, Carter believes there are fundamental differences between Copilot Workspace and Devin, even though they both are designed to solve similar problems. “At a high-level, Devin and Copilot Workspace are working towards similar goals–reimagining the developer environment as an AI-native workflow. 

“That said, we don’t view GitHub Copilot Workspace as an ‘AI engineer’; we view it as an AI assistant to help developers be more productive and happier,” Carter said.

Nonetheless, the biggest differentiator between the two AI tools is that Devin includes a build/test/run agent that attempts to self-repair errors. 

“We initially built a similar capability, which you can see in the demo we gave at GitHub Universe in November, but ultimately decided to scope it out for the technical preview to focus on optimising the core user experience,” Carter pointed out. 

“Our research has shown that developers value sequenced functions for AI-assistance, and we want to ensure Copilot Workspace meets developers’ needs to build confidence and trust in the tool before we invest in new features, including productising our build/run/test agent,” he said.

Opening GitHub Copilot Workspace for technical preview

Similar to Devin, GitHub now plans to give developers early access to test the newest AI tool for software development. Starting today, GitHub will begin the technical preview for GitHub Copilot Workspace.

“We’re looking forward to getting Copilot Workspace into the hands of a diverse group of developers so we can understand where they’re getting the most value from it today and where we can make adjustments to make it even more valuable in the future,” Carter said.

Developers who had access to Devin spoke highly of the AI tool. Yet Devin landed in troubled waters after a software developer claimed, in a YouTube video, that the demo video that Cognition Labs released earlier this year was staged.

Although the startup provided some clarification, assessing the pros and cons of an AI tool is challenging until it undergoes extensive testing. With the technical preview, GitHub aims to accomplish precisely that.

“It’s hard to say what limitations developers will find with Copilot Workspace until they use it at scale, and that’s exactly why we do technical previews.”

Mobile compatibility, an advantage?

Copilot Workspace encourages exploration by allowing developers to edit, regenerate, or undo every part of their plan as they iterate to find the exact solution they need. 

It also increases their confidence by providing developers with integrated tools to test and validate that the AI-generated code performs as expected.

Copilot Workspace also “boosts collaboration with automatic saved versions and context of previous changes so developers can immediately pick up where their teammate left off,” Carter noted.

Moreover, the tool is also mobile-compatible, which GitHub believes is a huge advantage for developers. 

“Personally, I love taking walks in between meetings and I often find myself thinking through a new idea while I’m on-the-go. With Copilot Workspace on mobile, I can easily create a plan for bringing that idea to life, and even test and implement it, all from my phone. 

“We’re also excited about Copilot Workspace on mobile because it allows developers to collaborate from wherever they may be. If a colleague sends me a link to their Workspace, I can explore and review it from my phone just as easily as I could from my computer,” Carter added.

Making developers efficient 

Carter expects Copilot Workspace to provide immediate improvements for developers in terms of efficiency when “you consider how long it typically takes to read through an issue, explore related files, and put together an implementation plan without Copilot Workspace. What has historically taken hours can now be done in seconds.”

However, he considers the productivity improvements to be an incidental outcome of the broader advantages of Copilot Workspace in terms of enhancing clarity of thought, fostering exploration, and boosting confidence.

“For example, on my team at GitHub Next, I have front-end developers doing back-end work with the help of Copilot Workspace and vice-versa. 

“Being able to tackle projects outside of your specific area of expertise confidently is a huge benefit, and when you think of doing this at scale you can imagine how much more productive developer teams can be,” Carter said.

T-Hub Supported MATH is Launching AI Career Finder to Create AI Jobs | Tue, 23 Apr 2024
https://analyticsindiamag.com/innovation-in-ai/t-hub-incubated-math-is-launching-ai-career-finder-to-create-ai-jobs/

MATH will create 500 AI-related jobs by 2025.

Over the years, T-Hub has grown to become the largest innovation hub in the world. Since its inception in 2015, T-Hub, led by the Telangana government, has nurtured over 2000 startups. 

Just last month, the Machine Learning and Artificial Intelligence Technology Hub (MATH) was established at T-Hub. It aims to foster AI innovation by bridging the gap between startups, corporates, academia, investors, and governments.

In the midst of concerns about job displacement caused by AI, MATH aims to generate employment opportunities in the field of AI within the country. The initial objective is to create 500 AI-related jobs by the end of 2025.

MATH CEO Rahul Paith believes AI will likely take over redundant jobs that involve repetitive and routine tasks, such as data entry, administrative work, and basic customer service roles.

“However, it’s essential to recognize that the rise of AI will also create a demand for new jobs, particularly those requiring human skills such as creativity, critical thinking, problem-solving, and emotional intelligence. 

“Roles that involve working alongside AI systems, such as AI trainers, data scientists, machine learning engineers, and AI ethicists, will become increasingly prevalent,” he told AIM.

Creating 500 AI jobs

One of the primary goals of MATH besides nurturing AI startups is to create AI and AI-related jobs in the country. “Our vision is to generate over 500 AI-related jobs by 2025 and support more than 150 startups annually,” he said.

To enable this, MATH aims to foster a supportive environment for AI-first startups, providing them with resources, mentorship, and access to networks crucial for growth. 

“In the first year, we are aiming to onboard over a hundred startups. These startups are AI-first, deeply involved in either building around AI, utilising AI, or contributing to the AI ecosystem,” he said.

Besides undertaking initiatives like talent development programmes, industry-academia collaborations, and targeted investment in AI research and development, MATH is launching its own job portal.

Called AI Career Finder, the platform is dedicated to nurturing and empowering the next generation of AI/ML talent. Paith said the platform is designed to serve as a central hub for connecting startups seeking top AI/ML professionals with candidates searching for exciting opportunities in the field. 

By leveraging AI Career Finder, MATH aims to streamline talent acquisition and placement, thereby strengthening its efforts to catalyse job creation in the AI sector.

“Additionally, MATH has identified key sectors such as healthcare and clean tech as prime areas for AI integration and growth. By facilitating collaborations and investments in these sectors, MATH aims to amplify job opportunities within AI-related fields.”

AI Programmes

MATH has also launched a few programmes designed to foster AI innovation in the startup ecosystem. “MATH Nuage is our pioneering initiative, through which we provide comprehensive support and guidance to aspiring entrepreneurs navigating the complex landscape of AI innovation,” Paith said.

Its key components include Virtual Partner Support, which connects startups with strategic partners for insights and resources. 

“Similarly, Mentor Desk Support offers guidance from seasoned professionals, Funding Desk Support facilitates securing investment, and Access to Data Lake enables startups to access a vast collection of data for AI model development,” Paith explained.

Another flagship programme launched by MATH is called the AI Scaleup Programme, which aims to accelerate AI innovation and entrepreneurship. 

“This initiative is geared towards supporting startups at the scale-up stage, providing them with the resources, mentorship, and networking opportunities needed to propel their growth and success in the AI market.”

A mini data centre 

MATH has also set up a mini data centre with GPU capabilities to help AI startups with AI training and inferencing. “In comparison to constructing a complete data centre, the mini data centre (called MINI DC) offers powerful computing abilities at a much lower price.”

The data centre helps startups meet their high-performance computing (HPC) needs and is loaded with NVIDIA A100 GPUs. “The mini data centre’s infrastructure ensures efficient deployment of trained models, enabling startups to bring their AI solutions to market quickly,” Paith said.

Closing the funding gap

Along with T-Hub, MATH also assists startups in securing funding through various channels, including venture capital firms, angel investors, and government grants. “We provide support in preparing funding proposals, pitching to investors, and negotiating investment terms,” Paith said.

However, he believes investors, incubators, and government agencies must collaborate to close the funding gap and foster a risk-tolerant climate. “Investors must acknowledge the extended value proposition of deeptech startups and their capability to make a significant social and economic difference. 

“Moreover, investing in specialised education and training programmes is essential to develop a strong deeptech talent pool.

“Through creating a joint ecosystem, we can enable Indian deeptech startups to flourish and emerge as global pioneers in innovation,” he said.

Quora’s Poe Eats Google’s Lunch | Wed, 17 Apr 2024
https://analyticsindiamag.com/ai-origins-evolution/quoras-poe-eats-googles-lunch/

Poe was also the first AI chatbot platform that allowed users to pick from a list of LLMs to chat with. Perplexity AI came in much later.

Poe, the AI chatbot platform by Quora, recently introduced a new feature – multi-bot chat – which enables users to engage with several AI models concurrently within a single conversation thread. This comes as a boon for people who wish to chat with multiple models in a single go. 

This capability has two key components: context-aware recommendations with bots to compare answers to your query and the ability to call any bot on Poe into your chat simply by @-mentioning it. This lets you easily compare results from various bots and discover optimal combinations of models to use the best tool for each step in a workflow.

For instance, a user could leverage GPT-4 for analysis, Claude for creative writing, and DALL-E 3 for image generation — all within one thread. Poe aims to streamline how people find the optimal combination of bots for their needs. 

What about Perplexity?

Poe was the first AI chatbot platform to let users pick from a list of LLMs to chat with. Perplexity, which follows a similar model, came into the picture much later. Poe offers a comprehensive platform where users can engage with multiple AI models seamlessly.

Perplexity, on the other hand, excels as a potent search engine driven by large language models. With Perplexity, users have to switch between models one at a time, but with Poe’s new multi-bot chat, different models can be accessed within a single thread.

Poe’s features include AI chat, model selection, and the integration of multiple models, ensuring a diverse and personalised experience. Additionally, Poe empowers users to craft their own chatbots, leveraging existing models as a base for personalisation and experimentation. Meanwhile, Perplexity features search, answering questions, and access to advanced language models. It enhances users’ ability to obtain relevant information efficiently.

Perplexity as a potent search engine helps while browsing the internet. If something is not clear, a feature or anything else, one can simply take a screenshot and ask Perplexity about it. 

The AI-powered answer engine has been in a quest to establish itself as a Google alternative. 

Quora marking up the future

A Reddit comment reads, “If you’re solely concerned about GPT-4 and Claude, just focus on Perplexity. While both have similar writing capabilities, Perplexity stands out for its ability to search the internet, a feature Poe lacks.”

Amid the ongoing debate over which option to pick, Quora CEO Adam D’Angelo anticipates Poe evolving into a valuable platform for diverse applications, one that bridges this divide and significantly streamlines the effort required for AI developers to reach a broad user base.

The latest breakthrough in Poe captivates users, enticing them to embrace it instantly. If Quora continues to pioneer novel pathways, offering consistently engaging user experiences, it stands a chance at establishing itself as the quintessential platform for the generative AI era.

Zoho Collaborates with Intel to Optimise & Accelerate Video AI Workloads | Mon, 08 Apr 2024
https://analyticsindiamag.com/innovation-in-ai/zoho-collaborates-with-intel-to-optimise-accelerate-video-ai-workloads/

Zoho has witnessed significant performance improvements in AI workloads with 4th Gen Intel Xeon processors.

Recently, ManageEngine, the IT enterprise wing of Zoho, confirmed that it would be investing $10 million in NVIDIA, AMD and Intel, aiming to unleash generative AI offerings for its customers. 

Following this development, Zoho is now collaborating with Intel to optimise and accelerate its video AI workloads for users. This will empower efficiency, reduce total cost of ownership (TCO), and optimise performance. 

Interestingly, the Intel collaboration will most likely enhance Zoho’s existing team communication platform Cliq, which allows users to interact over audio and video calls. Further, last October, the company unveiled its smart conference rooms solution on Cliq, which lets users customise room devices such as TV screens for video conferences.

Why Intel? 

Santhosh Viswanathan, the vice president and MD at Intel, posted that the company has collaborated with Zoho to leverage Intel® Xeon® processors and the OpenVINO™ toolkit to empower its Video AI Assistant.

Zoho said it is working closely with Intel to accelerate key AI workloads, including CCTV surveillance video analytics and optical character recognition (OCR). Leveraging Intel’s expertise in hardware and software optimisation, Zoho achieved significant improvements in both performance and cost-effectiveness across these vital AI applications.  

Further, it said that it is using hardware accelerators, including Intel® Xeon® Scalable processors and the Intel® Distribution of OpenVINO™ toolkit, to help organisations achieve faster processing speeds, lower latency, and better scalability. These tools are crucial for quick decision-making and real-time processing in tasks like surveillance, digitising documents, and analysing text, enabling organisations to attain an optimal TCO.

Optimising Video Analytics to Manage AI Workloads

The team said that their solution revolves around the multifaceted nature of video analytics using CCTV surveillance cameras. This necessitates the implementation of sophisticated algorithms adept at various tasks, including enhancing recording quality, detecting objects, and tracking individuals. 

Likewise, Tesseract OCR, an OCR engine, plays a pivotal role in digitising text from scanned images, video frames, or documents while providing robust support for multiple global languages. The team said that Camera Image Quality Analyzer is a key tool that helps identify cameras with suboptimal recording quality due to environmental factors such as dust, fog, or spider webs. 

The system then raises a ticket and reports the issue for tracking and resolution. Moreover, to optimise operational efficiency and sustainability, the Video AI Assistant offers people-counting functionality: in large spaces where sensor sensitivity may be reduced, it can count people using the camera feed and adjust the air conditioning accordingly.
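Zoho has not detailed how the Camera Image Quality Analyzer scores a feed, but a common, generic heuristic for a defocused or obscured camera is the variance of the Laplacian of a frame. The OpenCV sketch below illustrates that technique only, with an arbitrary threshold and a hypothetical snapshot file; it is not a description of Zoho’s implementation.

```python
import cv2

def is_degraded(frame_path: str, threshold: float = 100.0) -> bool:
    """Flag a frame as blurry or obscured when its Laplacian variance is low.

    Low variance of the Laplacian indicates a lack of sharp edges, which is
    what dust, fog or defocus tend to produce. The threshold is arbitrary
    and would need per-camera tuning in practice.
    """
    gray = cv2.imread(frame_path, cv2.IMREAD_GRAYSCALE)
    focus_score = cv2.Laplacian(gray, cv2.CV_64F).var()
    return focus_score < threshold

# A monitoring job could run this on periodic snapshots from every camera
# and raise a ticket for any feed that stays below the threshold.
print(is_degraded("cam_07_snapshot.jpg"))  # hypothetical snapshot file
```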

In terms of text digitisation, with Intel® Xeon® features and associated software optimisations, Zoho’s AI Assistant utilises Tesseract OCR technology to convert printed text into digital form. This enables the AI assistant to generate relevant search terms, facilitating document management and streamlining information retrieval processes.
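The core Tesseract call behind this kind of digitisation is straightforward; the snippet below uses the pytesseract wrapper purely as a generic illustration (the file name and language code are assumptions), whereas a production pipeline like Zoho’s would add accelerated pre- and post-processing around it.

```python
from PIL import Image
import pytesseract

# Extract text from a scanned document or a video frame exported as an image.
image = Image.open("scanned_invoice.png")  # hypothetical input file
text = pytesseract.image_to_string(image, lang="eng")

# The extracted text can then be indexed to generate search terms for
# document management and faster information retrieval.
print(text[:200])
```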

Impacting the Real World

When implemented in real-world scenarios, harnessing the capabilities of 4th Gen Intel® Xeon® processors, Zoho has seen significant performance improvements for specific AI workloads. In one benchmarked deployment, the Camera Image Quality Analyzer scans all the cameras on the premises, flags those with degraded output and escalates them to the respective teams, thereby reducing manual checks and errors.

Shailesh Kumar Davey, co-founder of ManageEngine, emphasised, “At Zoho, we collaborate with leading technology industry vendors in improving the TCO of our infrastructure and solutions to ensure we offer the best value to our customers.”

Zoho recognised the necessity for a robust platform to meet the high-performance requirements of video AI assistants. Hence, collaborating with Intel provides a comprehensive approach that serves as an ideal solution for accelerating AI workloads and overcoming challenges across various applications.

Rakuten Certified as Best Firm for Data Scientists for the 2nd Time | Mon, 08 Apr 2024
https://analyticsindiamag.com/ai-highlights/rakuten-certified-as-best-firm-for-data-scientists-for-the-2nd-time/

The Best Firm For Data Scientists certification surveys a company’s data scientists and analytics employees to identify and recognise organisations with great company culture.

Rakuten has once again been certified as the Best Firm for Data Scientists to work for by Analytics India Magazine (AIM) through its workplace recognition programme. 

The Best Firm For Data Scientists certification surveys a company’s data scientists and analytics employees to identify and recognise organisations with great company cultures. AIM analyses the survey data to gauge the employees’ approval ratings and uncover actionable insights. 

“I extend my deepest gratitude to our exceptional team of Data and AI professionals whose dedication and brilliance have led us to this recognition. With a culture fueled by innovation, usage of cutting-edge technology, collaboration and strong business communication, we’re proud to be the premier destination where AI talent thrives and revolutions begin”, said Anirban Nandi, Head of AI Products & Analytics (Vice President) at Rakuten India.

The analytics industry currently faces a talent crunch, and attracting good employees is one of the most pressing challenges that enterprises are facing.

The certification by Analytics India Magazine is considered a gold standard in identifying the best data science workplaces and companies participate in the programme to increase brand awareness and attract talent. 

Best Firms for Data Scientists is the biggest data science workplace recognition programme in India. To nominate your organisation for the certification, please fill out the form here.

This Indian Logistics Company Developed an LLM to Enhance Last-Mile Delivery | Tue, 02 Apr 2024
https://analyticsindiamag.com/innovation-in-ai/this-indian-logistics-company-developed-an-llm-to-enhance-last-mile-delivery/

Bulls.ai can significantly improve delivery quality by up to 60%, increase operational efficiency, and slash logistics costs by as much as 30%

Last-mile delivery is particularly tricky in India due to poor infrastructure, population density, vague or incomplete addresses and complex street layouts.

In India, a peculiar challenge arises with 80% of addresses depending on landmarks up to 1.5 kilometres away. This reliance on landmarks makes geolocation difficult for logistics companies, resulting in an average deviation of approximately 500 metres between the given address and the actual doorstep.

To solve this problem, Gurugram-based logistics company Ecom Express developed a solution powered by language models trained on data from nearly 2 billion parcels delivered by the company since its inception in 2012.

Bulls.ai improves delivery quality by up to 60%

Called Bulls.ai, the solution improves operational accuracy by correcting, standardising, and predicting geo-coordinates for addresses across the length and breadth of India, not just in metros and Tier-1 cities but also in the hinterlands of Tier-2 cities and beyond, where address quality deteriorates, according to Manjeet Dahiya, head of machine learning and data sciences at Ecom Express Limited.

“Bulls.ai helps in identifying the correct last-mile delivery centre to deliver the shipment based on the consignee address, reduce misroutes, determine junk addresses and correct incomplete addresses/PIN codes to route the shipment correctly. 

“It geocodes the address consignee’s location on the map, assisting our field executives,” Amit Choudhary, chief product and technology officer at Ecom Express Limited told AIM.

Bulls.ai can significantly improve the delivery quality by up to 60%, increase operational efficiency, and slash logistics costs by as much as 30%.

“It also helps in reducing misroutes from 7% to 2%, resulting in 5% more shipments reaching the correct last-mile centre at the first go,” Dahiya added.

So far, the company has opened the API to its customers to validate their user addresses. “We have first started with our existing customers and have showcased Bulls.ai to them in our customer panel.”

What makes Bulls.ai unique

The solution is powered by three models, with 354 million, 773 million and 1.5 billion parameters respectively, trained on 8.4 billion tokens representing 80 million address and geo-coordinate pairs.

Ecom Express has a nationwide presence, spanning all 28 states of the country. It extends its services to over 2,700 towns across more than 27,000 PIN codes, effectively reaching over 95% of India’s population. Over the years, the company has accumulated a substantial amount of data through its extensive operations.

“The architecture is built in a decoder-only transformer pattern, specifically GPT2. It has been trained from scratch and the dataset of the historic addresses that we have delivered in the past is the key data. The training approach is distributed data parallel,” Choudhary said.

Currently, there are no similar solutions in the market. What makes Bulls.ai unique is the training dataset, according to the company. Moreover, existing LLMs like GPT models or the LLaMA models are not tailored to address this particular challenge and do not have the capability to output the geo-coordinates of an address. 

“This is a domain specific LLM and no such LLM exists. For instance, the domain of GPT4/LLaMA is very different from the domain of address and location data. These models cannot tell the geo-coordinates of addresses. Achieving good results with these models will require fine-tuning with significantly large data, which would effectively be a pre-training,” Dahiya explained.

Choudhary said that his team encountered a few challenges when training the model. “For example, a number of optimizations were needed to improve the training speed and reduce the GPU memory footprint, such as an 8-bit optimizer, mixed precision training, and gradient checkpointing.

“This allowed us to train bigger models, faster. During inference, it is quite challenging to use bigger models for real-time predictions as they could be slow. We used pruning of the models to make these bigger models faster at the time of inference,” he said.
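Ecom Express has not released Bulls.ai’s training code, but the optimisations Choudhary lists map onto standard PyTorch and Hugging Face plumbing. The sketch below shows that generic combination on a small GPT-2-style configuration; the vocabulary size, model dimensions, learning rate and batch handling are placeholders, not the company’s actual settings.

```python
import torch
from transformers import GPT2Config, GPT2LMHeadModel
import bitsandbytes as bnb

# Small decoder-only (GPT-2 style) config; Bulls.ai's actual models are
# 354M/773M/1.5B parameters trained on address and geo-coordinate pairs.
config = GPT2Config(vocab_size=32_000, n_layer=12, n_head=12, n_embd=768)
model = GPT2LMHeadModel(config).cuda()
model.gradient_checkpointing_enable()  # trade extra compute for less memory

optimizer = bnb.optim.Adam8bit(model.parameters(), lr=3e-4)  # 8-bit optimizer states
scaler = torch.cuda.amp.GradScaler()   # mixed precision training

def train_step(input_ids: torch.Tensor) -> float:
    """Run one optimisation step on a batch of tokenised address sequences."""
    optimizer.zero_grad(set_to_none=True)
    with torch.cuda.amp.autocast():
        loss = model(input_ids=input_ids, labels=input_ids).loss
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
    return loss.item()

# For multi-GPU training the model would additionally be wrapped in
# torch.nn.parallel.DistributedDataParallel, as the article describes.
```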

Can LLMs solve other last-mile delivery problems?

LLMs can also play a pivotal role in addressing many other last-mile delivery challenges inherent in this crucial phase of the supply chain. They can optimise route planning, enhance delivery scheduling, and streamline communication between drivers and customers.

“Apart from address and location intelligence, we see applications in understanding the descriptions of the goods to identify dangerous goods and not route them through the air,” Choudhary said.

Computer vision models are already being used by logistics and supply chain companies to identify dangerous or defective goods; however, multimodal LLMs could potentially do a much better job.

“Understanding of goods is also necessary to figure out the category of the goods. Fraud detection at consignee and seller is another important aspect from the logistics point of view that can be solved through generative AI,” Choudhary added.

Perplexity AI Pro Reviews – Should you Buy the Pro Version? | Tue, 02 Apr 2024
https://analyticsindiamag.com/innovation-in-ai/perplexity-ai-reviews/

Though with a few caveats, Perplexity has been on top of its game, and everyone is adoring it!

A few months ago, the company gave AIM access to Perplexity Pro, predominantly for research and analysis purposes. The results have been promising.

The biggest pain point for any kind of research is the source and authenticity of information. Unlike most available LLMs that simply generate results, Perplexity substantiates its answers with source links so the veracity of the content can be checked, which AIM users found to be the biggest plus point.

“I actually liked it. The sources are quite reliable,” said a research associate at AIM.

With the option to select from a list of the latest AI models, including Claude 3 Opus, Mistral Large and GPT-4 Turbo, users can experiment with any of them. A range of features such as ‘discover’, ‘focus’ and ‘attach’ allows users to explore and cater search to their specific needs. 

“Whenever I don’t understand a feature or anything else while browsing the internet, I just take a screenshot and ask Perplexity about it,” said a video journalist at AIM, who also uses the application to improve the workflow and check the grammar of his scripts. 

Interestingly, we observed that Perplexity generated answers faster than ChatGPT.

Perplexity Pro Lacks Depth, Hallucination Persists  

Perplexity Pro provided various features that worked extremely well for us; however, it was not perfect. When generating in-depth responses to a particular context, ChatGPT fared better than Perplexity Pro. The latter was unable to handle complex instructions. 

In a rare few occurrences, the chatbot even generated incorrect information. When prompted again about the incorrect information, it generated the right one. Like every LLM, Perplexity Pro is also not free from hallucinations. 

(Screenshots: Perplexity Pro’s responses about Apple M3 chips, with sources)

It was also noticed that the order of search results closely resembled Google Search results. This sparked conversation around how Perplexity is possibly a Google wrapper. However, we have not observed an exhaustive list of search results that proves the same. 

Nothing like Perplexity AI

Despite its flaws, Perplexity has become the talk of the town. The AI-powered answer engine has been in a quest to establish itself as a Google alternative. In the process, the company is actively partnering with device-makers, especially smartphones.

The company recently partnered with Nothing, where the buyers of their latest smartphones get a free subscription of Perplexity Pro. 

Not just smartphone makers but even operators have become strategic partners. The company recently announced its partnership with Korea’s largest telecommunications company, SK Telecom, through which 32M+ subscribers will get access to Perplexity Pro.

Perplexity also announced its pplx-online LLM APIs to power Rabbit R1, an AI-powered gadget that uses a large action model (LAM). 

Last month, Perplexity partnered with Yelp to improve local searches and help users find information on local restaurants and businesses, a probable step to combat Google reviews. 

Perplexity recently incorporated Databricks’ latest open-source LLM DBRX, which is said to outperform GPT-3.5 and other powerful AI models like LLaMA and Mistral.

Not just Databricks, Perplexity has been open to embracing and offering all kinds of closed-source models through APIs and answer engines, be it the latest Claude 3 Opus, Mistral Large, or Google Gemma. Perplexity is quick at its game.

Further, Aravind Srinivas, Perplexity’s CEO and co-founder, recently announced that Copy AI, which is launching a GTM platform, is collaborating with Perplexity AI. “They chose to use our APIs for this, and we’re also providing six months of Perplexity Pro for free to current Copy AI subscribers,” he said. 

Even NVIDIA chief Jensen Huang had earlier mentioned that he uses Perplexity, a company they have invested in, ‘almost everyday’. 

The Future of Search? 

The testimonials of big tech leaders such as Huang and Jeff Bezos may sound inflated considering they have invested in the AI company, but going by the growing number of Perplexity users, the company is surely capturing a wide audience. 

The company has over 10 million monthly users. 

Further, they are even offering the model in various languages, such as Korean, German, French and Spanish. 

While they aim to compete with Google and Sundar Pichai, whom Srinivas admires, things are looking good for Perplexity AI and Aravind Bhai. AIM loves Perplexity and Google, equally.

Compare Perplexity AI Vs ChatGPT Vs Bing Chat

| Comparison | Perplexity AI | ChatGPT | Bing Chat |
|---|---|---|---|
| Accuracy | Up-to-date information | May not offer current data | Real-time internet access |
| Reliability | Provides citations | Good for conversational use | Comprehensive responses |
| User Experience | Ideal for in-depth research | Engaging and natural user experience | Flexible and wide application coverage |
| Best Use Case | Factual accuracy | Generating creative content | Multimodal inputs |

Will StarCoder 2 Win Over Enterprises? | Wed, 20 Mar 2024
https://analyticsindiamag.com/innovation-in-ai/will-starcoder-2-win-over-enterprises/

StarCoder 2 is trained on 619 programming languages

Code completion has been one of the most prominent use cases of large language models (LLMs). GitHub Copilot, the popular AI tool,  has been used by over a million developers and 200,000 enterprises. 

However, widely used code generation tools like GitHub Copilot, AWS CodeWhisperer or Google Duet AI are not open source. Enterprises are unaware of the specific code on which these models are trained, which is a significant concern, especially for those in highly scrutinised industries.

Thus, project BigCode, an open scientific collaboration run by Hugging Face and ServiceNow Research, was born. It recently released StarCoder 2, which is trained on a larger dataset (7.5 terabytes) than its predecessors and on 619 programming languages.

StarCoder 2 comes in three sizes – 3-billion,  7-billion and 15-billion-parameter models.

While there are a few open-source code LLMs, the StarCoder 2 15-billion-parameter model, trained by NVIDIA, matches and at times even surpasses 33-billion-parameter models, like Code Llama, on many evaluations.

How do enterprises benefit from open source?

According to Leandro von Werra, machine learning engineer at Hugging Face and co-lead of the BigCode project, StarCoder 2 will empower the developer community to build a wide range of applications more efficiently with full data and training transparency. 

Besides the fact that StarCoder 2 is free to use, it also brings in added benefits for developers and enterprises, according to Werra.

“For many companies, using GitHub Copilot is tricky from a security perspective, because it requires employing the endpoint that Copilot uses, which is not retained in their environments. You’re sending parts of your code to that endpoint and you have no control over where exactly that code goes.

“Given that code represents a crucial aspect of intellectual property for many companies, we’ve received numerous inquiries requesting an open version to utilise such services securely,” Werra told AIM.

Moreover, enterprises don’t know what codes went into the model during the training process. This lack of transparency poses a potential liability for the enterprise, especially if the model generates copyrighted code.

However, Werra adds that this is a problem which even his team has not been able to solve. “We’re doing licence detection, but it’s not 100% accurate. It’s nearly impossible to do it at that scale 100% correctly, but at least we provide full transparency in what went into it and how we filter data,” he said.


Fine-tuning StarCoder 2

While the above-mentioned points relate to security, the biggest benefit of StarCoder 2 for enterprises is that they can take the model and fine-tune it on their own enterprise data.

Indeed, many enterprises, for instance, emphasise their unique coding style or internal standards, which may differ from codebases used in training code LLMs.

“By leveraging their own codebase, they streamline processes, avoiding the need for extensive rewriting, such as fixing styles or updating docstrings, often accomplished effortlessly. 

Alternatively, they can fine-tune the model for specific use cases, catering to tasks like text-to-SQL code conversion or translating legacy COBOL code to modern languages. This ability to fine-tune models based on their data enables companies to address specialised needs effectively,” Werra said.

For example, while a dedicated model may be more comprehensive for a specific SQL use case, fine-tuning allows for customisation, providing flexibility to tackle various scenarios—a prospect that excites enterprises. 
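As a rough illustration of what such fine-tuning can look like, the sketch below attaches a LoRA adapter to the publicly released bigcode/starcoder2-3b checkpoint using Hugging Face transformers and peft; the target modules, hyperparameters and training data are assumptions rather than a recommended recipe.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_id = "bigcode/starcoder2-3b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Attach a small LoRA adapter instead of updating all 3B parameters.
lora = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # assumption: attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()

# From here, the adapter would be trained on the company's own codebase
# (for example with the Trainer API) so that completions follow internal
# style guides, docstring conventions or text-to-SQL patterns.
```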

So far, StarCoder 2 is already being used by ServiceNow, which also trained the 3-billion-parameter StarCoder 2 model. Besides, a dozen other enterprises have started leveraging StarCoder 2, according to Werra. 

Previously, VMware, an American cloud computing and virtualisation technology company, successfully deployed a fine-tuned version of StarCoder. 

Businesses subject to stringent security regulations, such as those in the financial or healthcare sectors, would most likely adopt open-source models. These companies face challenges in sharing data with third parties due to heightened scrutiny.

It is important to note that other code LLMs, like Code Llama, can be fine-tuned. However, Meta has not released the datasets besides stating that it has been trained on widely available public data.

Will enterprises pivot to open-source?

Using open-source technologies comes with its own set of challenges. Despite the promised benefits of StarCoder 2 and the adoption by a handful of enterprises, the question that arises is: will we see a wider adoption by enterprises? 

Werra believes that it is probable, as many enterprises initially opt for closed LLMs due to their accessibility and ease of use. However, as companies mature and streamline their use cases, there is a growing desire for models that offer total control. This trend holds true for code LLMs as well.

“Decades ago, software development primarily relied on off-the-shelf solutions. However, the landscape has changed, with many companies, especially IT firms, crafting their own software solutions at the core of their operations. 

“Similarly, a parallel trend is emerging with LLMs. While off-the-shelf models serve a broad range of tasks competently, for more specialised or dedicated applications, fine-tuning an open model remains the preferred approach,” Werra said.

Based on open-source principles 

The BigCode team has open-sourced the model weights and dataset. “We released Stack V1 a year ago, and now we have released Stack v2,” Werra said.

However, even though the models are released under an OpenRAIL licence, there are some restrictions.

For example, “You can’t use the model to extract Personally Identifiable Information (PII) from the pretrained data or generate potentially malicious code,” Werra warned. Nonetheless, StarCoder 2 is available for commercial use.

Missing NVIDIA GTC 2024 Would be a Foolish Sin | Mon, 11 Mar 2024
https://analyticsindiamag.com/innovation-in-ai/missing-nvidia-gtc-2024-would-be-a-foolish-sin/

NVIDIA GTC 2024 is only going to be bigger and bolder this year with high anticipation of next-gen GPUs and generative AI applications.

“In six weeks, I am going to tell everybody about a whole bunch of things we are working on – the next generation of AI,” said NVIDIA chief Jensen Huang, last month at the World Governments Summit 2024, referring to the upcoming GTC 2024, that will be held from March 17 to 21 at San Jose, California. 

Showcasing AI Artwork and the Future of Work

The futuristic theme of NVIDIA GTC 2024 will touch upon the amalgamation of AI and art. Turkish-born, globally acclaimed media artist and pioneer in the aesthetics of machine learning, Refik Anadol, will display his art in a large generative AI installation called ‘Large Nature Model’. The art pioneer recently showcased his work at the World Economic Forum in Davos and even had it splattered across the Las Vegas Sphere

A ‘Poster Gallery’ will showcase over 80 research posters across myriad topics focusing on how accelerated computing and AI are transforming the way they work. People will have the opportunity to chat with the authors of these posters.

The event will have close to 1,000 sessions, 300+ exhibits and 20+ technical workshops, with something for everybody. The sessions, led by prominent leaders, cater to 20 industries, including agriculture, aerospace, construction, gaming, retail, semiconductors, telecommunications, robotics, and many more.

In addition to the compelling reasons mentioned above, which should have already convinced you to attend the conference, the opportunity to meet NVIDIA’s witty and highly active X user and ML engineer, Bojan Tunguz, should seal this deal for you. 

Riveting Sessions in the Pipeline

In his keynote, Huang will talk about how NVIDIA’s accelerated computing platform is driving the next wave in AI, cloud technologies and sustainable computing. 

The CEO will also host a panel session on ‘Transforming AI’ with some of the emerging AI startup leaders, including Aidan Gomez, co-founder and CEO of Cohere; Ashish Vaswani, co-founder and CEO of Essential AI; Niki Parmar, co-founder of Essential AI; Llion Jones, co-founder and CTO of Sakana AI; Lukasz Kaiser, member of technical staff at OpenAI; and other authors of the paper ‘Attention Is All You Need’. 

A must-attend session, indeed! 

Another session on ‘Beyond RAG Basics’ by developer advocate engineers of NVIDIA, will discuss the various architectural techniques that can be used to improve the quality of responses by a RAG system. If you wish to know about how agents, copilots and assistants are built, then this cannot be missed at any cost. 

The year 2024 is poised to be the one for robotics and NVIDIA’s varied sessions on the subject are sure to pique the interest of many. A session on ‘Using Omniverse to generate first-person experiential data for humanoid robots’ by Geordie Rose, co-founder and CEO of Sanctuary AI, will discuss how NVIDIA Omniverse can be used to create synthetic first-person experience for robots, and even talk about AGI.

There’s also something for people looking to learn about tools in the media and entertainment industry. ‘Lessons on Video Generation Models from Research to Production’, by Anastasis Germanidis, the CTO and co-founder of Runway, will talk about their Gen-1 and Gen-2 video generation models from research to production. 

While this is just the tip of the iceberg, an elaborate schedule of all the NVIDIA GTC sessions can be found here. If attending GTC in person is the concern, fret not because all the GTC sessions are virtually available. Make sure to plan well ahead of time. 

Wait, there is more… 

At last year’s GPU Technology Conference (GTC), NVIDIA unveiled a series of new products, including their new DGX Cloud, inference platforms, systems for the Omniverse platform, AI foundations for building LLMs and more. With the staggering upward trajectory of NVIDIA ever since, it is only obvious that the company’s focus on AI will continue. 

Here are some of the forecast launches for GTC 2024. 

Next-gen GPUs: NVIDIA’s next-generation GPU, codenamed ‘Blackwell’, is one of the most highly anticipated launches for GTC 2024. B100 and GB200 Blackwell graphics processors for enterprise computing are expected to be released. The B100 GPU is said to almost double the performance of the Hopper H200.

The NVIDIA RTX 5000 series, which is expected to be based on the new Blackwell architecture, is a line of high-performance graphics cards said to leverage the latest GDDR7 (Graphics Double Data Rate 7) memory.

Rumours suggest that it will mostly be ready before CES 2025, and a confirmation on this may come during GTC 2024. 

GPUs for Mining: The cryptomarket has been heating up in anticipation of NVIDIA’s GTC 2024. Jules Urbach, co-founder and CEO of OTOY, a leading global cloud rendering company, will give his first live talk since pre-COVID at the upcoming event. 

In a 2013 keynote speech with Huang, Urbach unveiled the first cloud GPU rendering pipeline. It is possible that dedicated GPUs for mining, similar to NVIDIA Cmp Hx, can be unveiled at GTC 2024, considering how OTOY has been closely associated with the crypto market.  

Patryk Pusch, CEO and founder of Gamer Hash, a decentralised computing network, tweeted that a major partnership announcement will happen. 

An Unconventional Collab: Samsung Electronics is expected to unveil its 12-layer High Bandwidth Memory 3E (HBM3E) at GTC 2024. HBM3E stacks are set to enhance NVIDIA’s H200 AI GPUs. It integrates 12 DRAM (dynamic random access memory) layers, with a density of 288 Gbit or 36 GB per stack.

Soon, everyone will be a gamer: NVIDIA is rumoured to be working on a PC gaming handheld device. With its rival AMD’s dominance in handheld gaming PCs, NVIDIA’s re-entry into this space will differ from previous attempts (such as the NVIDIA Shield Portable), as the new device will be powered by NVIDIA’s GPU. While there is no confirmed release date at this time, it would be delightful if it were mentioned at GTC 2024.

Talking about how everyone will be able to code in the future, Huang also believes that AI and NVIDIA will facilitate an ecosystem that will make everyone a gamer. “Someday, everybody’s going to be a gamer,” he said.   

Focus on Sustainability: NVIDIA first announced its plans to build Earth-2 at the GTC 2022 conference. The goal of Earth-2 is to create a digital twin of Earth in Omniverse to predict and monitor climate change and develop strategies to mitigate and adapt to it. The platform has not been released yet, and improvements or modifications may be announced during the upcoming conference. 

Focus on Science: NVIDIA has made significant progress in the life sciences domain with the NVIDIA BioNeMo framework. Furthermore, Huang has also remarked, in the context of ChatGPT, on using generative AI to solve ‘real-world problems’ such as dissolving plastics, reducing carbon emissions, and so much more. It is highly likely that new announcements in this domain will be made.

Robotics Thrives: Robotics is a key theme for the conference with a number of robotics innovators convening to discuss and showcase the use of AI for autonomous machines. Announcements around robotics for daily use will likely be made at the GTC.

Quantum Madness: Foraying into quantum computing, NVIDIA released CUDA Quantum, a platform for integrating and programming quantum processing units (QPUs), GPUs and CPUs, last year. It is possible that this year’s GTC would see the release of more advanced versions of the quantum-enabling platform to take on the likes of IBM and Google that are making significant strides in the field. 

Chipmaker King: NVIDIA had earlier announced plans to build semiconductor fabrication plants to boost the AI ecosystem in Japan with the help of startups there. The company is also venturing into the custom AI chips market to take on its competitors. It wouldn't be a surprise if a significant announcement on custom chips were made during GTC 2024.

With a futuristic vision, this year's GTC is sure to not only push AI advancements but also bring AGI into the conversation. Huang has shared his views on AGI, predicting its achievement within five years under specific conditions (i.e., if defined as acing human-level competitive exams). It's going to be quite exciting!

The post Missing NVIDIA GTC 2024 Would be a Foolish Sin appeared first on AIM.

Voice Slowly Catching Up on Multimodal AI Features https://analyticsindiamag.com/innovation-in-ai/voice-slowly-catching-up-on-multimodal-ai-features/ Sat, 02 Mar 2024 05:30:00 +0000 https://analyticsindiamag.com/?p=10114889

The sudden growth of lip-sync and voice integrated features to complement AI-generated videos is helping ‘voice’ find prominence in a multimodal model.

The post Voice Slowly Catching Up on Multimodal AI Features appeared first on AIM.


Eleven Labs, a voice technology research company that develops AI for speech synthesis and text-to-speech software, recently added voice to videos generated by Sora, showcasing a holistic example of what voice can bring to AI-generated videos. While this is not the first such development, the voice modality is increasingly being brought to the forefront.

It’s Not All Easy with Voice 

Voice as an AI modality is considered uniquely difficult to build interfaces for, as it employs probabilistic AI, in contrast to the more deterministic, machine learning-based voice services such as Apple's Siri and other home assistant products.

Technology investor Michael Parekh believes that getting AI voice modality right on devices will take a long time. "It's going to be a long road to get it right, likely as long as it took to even get the previous versions like Apple Siri, Google Nest, and Amazon Alexa/Echo especially, to barely tell us the time, set timers, and play some music on demand," he said.

Voice is also being chosen as a primary mode of interaction, evident in devices such as the Rabbit R1. The Humane Ai Pin, a small, futuristic wearable AI device that can be pinned to one's clothing, relies on finger gestures and voice for operation.

SoundHound Inc, an AI voice and speech recognition company that has been developing technologies for speech recognition, NLP and more for nearly two decades, had predicted as early as 2020: "Although voice does not need to be the only method of interaction (nor should it be), voice assistants will soon become a primary user interface in a world where people will never casually touch shared surfaces again."

Voice for Video 

The stream of AI voice integration announcements has spiked in the last few weeks. Pika Labs, which creates AI-powered tools for generating and editing videos, came into the limelight a few months ago with $55 million in funding. It recently announced early access to a 'Lip Sync' feature for Pro users that adds voice and dialogue to AI-generated videos.

Alibaba's EMO (Emote Portrait Alive), an AI generator that creates expressive portrait videos using audio2video diffusion models, was released last week as direct competition to Pika Labs. The company released videos in which still images were made to talk and sing with expressive facial gestures.

Voice features have also been integrated to simplify podcasts. Eleven Labs has partnered with Perplexity on 'Discover Daily', a daily podcast narrated by Eleven Labs' AI-generated voices, yet another example of how combining voice technology with other functionalities can create tangible use cases.

Theme for 2024 

Multimodal AI was among the top three AI trends that Microsoft identified for 2024. "Multimodality has the power to create more human-like experiences that can better take advantage of the range of senses we use as humans, such as sight, speech and hearing," said Jennifer Marsman, principal engineer in AI (Office of the CTO) at Microsoft.

Microsoft's efforts in the same direction are reflected in its AI offering, Microsoft Copilot. Catering to enterprises and consumers alike, Copilot's multimodal capabilities can process various formats, including images, natural language and Bing search data. Multimodal AI also powers Microsoft Designer, a graphic design tool for creating designs, logos, banners and more from a simple text prompt.

The latest AI kid on the block, Perplexity, has also integrated multimodal features: Pro users can upload images and get relevant answers based on them. There is a common theme across all these functionalities, which raises the question: is 'voice' merely an added feature?

Big Tech’s Foray Into Voice 

OpenAI released ChatGPT's voice feature, which allows one to converse with the model easily, almost six months after launching the multimodal GPT-4, fully integrating voice capability into the model. Google Gemini, Google's most powerful AI model, is also multimodal.

While the advancements are promising, misuse risks still persist, the most prominent being deepfakes. With an increasing number of companies entering the space, adding voice to AI-generated videos only increases the potential for abuse, and stringent copyright and privacy laws may be the only saviour.

The post Voice Slowly Catching Up on Multimodal AI Features appeared first on AIM.

Saving Lives One Beat at a Time, Dozee Redefines Hospital Safety https://analyticsindiamag.com/innovation-in-ai/saving-lives-one-beat-at-a-time-dozee-redefines-hospital-safety/ Tue, 27 Feb 2024 13:43:42 +0000 https://analyticsindiamag.com/?p=10114223

Dozee's sensors track patient health and movements wirelessly, making their hospital stays safer and easier.

The post Saving Lives One Beat at a Time, Dozee Redefines Hospital Safety appeared first on AIM.


Hospitalised patients are often at a heightened risk of fall-related injuries. The World Health Organization (WHO) has recorded patient falls to be among the most frequent and serious mishaps in hospitals, with rates ranging from 3 to 5 per 1,000 bed-days. More than one-third of these incidents result in injury, worsening clinical outcomes and increasing the financial burden on healthcare systems.

That is where Bengaluru-based Dozee comes to the rescue. Last week, the company added a new fall prevention alert feature to its health monitoring device, which uses ballistocardiography (BCG) sensors to detect movements as minor as the patient's heartbeat.

In conversation with AIM, Gaurav Parchani, the co-founder and CTO of Dozee, said that the device was built to enable quicker treatment without additional effort from the nursing staff or encumbering the patient with wires. "The whole purpose is to reduce code blue, or emergency transfers to the ICU. We find out when the patient is in danger within minutes of the symptoms showing up in their body."

Founded in 2015 by Parchani and Mudit Dandwate, the company offers contactless patient monitoring and early warning systems using BCG. With an accuracy rate of 98.4%, Dozee sensors have been installed in about 37 hospitals across the country and have been integrated with 17 Apollo Hospitals under their Enhanced Connected Care Programme.

Why BCG? 

A predecessor to ECG (electrocardiography), BCG measures the mechanical forces generated when the heart beats and during respiration. If there are any abnormal changes in the body, the machine alerts the nursing staff with visual or auditory signals.

When ECG came about in the mid 1900s, it was more effective and accurate. “The primary reason was there is a lot of noise associated with mechanical vibration,” Parchani noted. 

This is where technologies like noise filtering and wavelet analysis, which enhance the desirable signals, come in. Parchani explained, "In the first stage we clean the data and improve the quality. Then we identify the major body movements and remove them to establish a baseline. Each sensor captures the heart, respiration, blood pressure, early warning signs etc."

The raw data is then sent to a secure cloud service, predominantly AWS in Dozee's case, where it is converted into valuable biomarkers. "There is just a lot of data, which makes it difficult to scale. This is one of our biggest challenges. Imagine around 400 million API calls hitting your servers on a daily basis," Parchani exclaimed.

According to him, the company could well be sitting on one of the world's largest piles of BCG data.

The entire operation is time-sensitive. To keep up, the team uses a lot of Long Short-Term Memory (LSTM) and Recurrent Neural Network (RNN) models. "We have a plethora of models; in the case of early warning systems, for instance, we use CNN models." Dozee uses many open-source models, modifying the last few layers for its specific needs.
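Based on the stages Parchani outlines, cleaning the signal, removing gross body movements and then estimating vitals, a simplified pipeline might look like the sketch below. This is purely illustrative: the filter settings, thresholds and the naive spectral heart-rate estimate are our own assumptions, not Dozee's production code, which relies on the deep learning models mentioned above.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def clean_bcg(raw, fs=250):
    """Stage 1 (assumed): band-pass filter to suppress noise outside the typical BCG band."""
    b, a = butter(2, [0.5, 20.0], btype="bandpass", fs=fs)
    return filtfilt(b, a, raw)

def remove_gross_motion(signal, k=4.0):
    """Stage 2 (assumed): drop samples dominated by large body movements to form a baseline."""
    mask = np.abs(signal - np.median(signal)) < k * np.std(signal)
    return signal[mask]

def estimate_heart_rate(baseline, fs=250):
    """Stage 3 (assumed): naive spectral peak in the 0.7-3 Hz band (~42-180 bpm).
    Dozee reportedly uses LSTM/RNN/CNN models for such biomarkers instead."""
    spectrum = np.abs(np.fft.rfft(baseline - baseline.mean()))
    freqs = np.fft.rfftfreq(len(baseline), d=1.0 / fs)
    band = (freqs > 0.7) & (freqs < 3.0)
    return float(freqs[band][np.argmax(spectrum[band])] * 60)

# Hypothetical usage on one minute of simulated sensor data sampled at 250 Hz
raw = np.random.randn(250 * 60)
heart_rate = estimate_heart_rate(remove_gross_motion(clean_bcg(raw)))
```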

The company is also partnering with Wellysis, a South Korean firm that builds wearable ECG devices, to identify cardiac abnormalities in ECG data as well.

Parchani told AIM that the team relies on Python internally, but for deployment uses a hybrid of Golang and Python to handle the huge volume of new data hitting the servers. "Golang worked very well for us in terms of scaling this whole infrastructure," he added.

With so many layers added, the machines are tweaked to the needs of each hospital. For example, the fall prevention alert isn’t installed on all the devices; it’s mainly for nursing homes, or for differently-abled patients. 

In an independent study, this novel device was proven to save lives, relieve nurses from additional work and reduce the pressure on hospitals. 

The post Saving Lives One Beat at a Time, Dozee Redefines Hospital Safety appeared first on AIM.

Now Open Source Projects Can Make Money https://analyticsindiamag.com/innovation-in-ai/now-open-source-projects-can-make-money/ Tue, 20 Feb 2024 10:30:27 +0000 https://analyticsindiamag.com/?p=10113275

Polar emerges as a game-changer in open source funding, giving developers new ways to monetise their work beyond traditional donations.

The post Now Open Source Projects Can Make Money appeared first on AIM.


Open source development has been the reason for the rapid growth of tech. Yann LeCun, the famous proponent of open source projects, takes every opportunity to elaborate on how vital open source development is. 

The sustainability of open source projects, however, depends on the financial returns they see over time. "There is a lot of unnecessary friction today to sponsor specific features, issues or milestones for open source projects," said Birk Jernström, the founder of Polar.

The company, founded in 2022, is a platform that manages the subscriptions and payments for people who create and support open-source software. It also offers tools for working with data.

Many open source projects start with being freely available and eventually seek funding. Red Hat for example, known for its Linux distribution, monetised by selling subscriptions for technical support, updates, and training to businesses. This model helped fund continuous open-source development while providing enterprise-level services. 

Alternatively, Blender, a 3D creation suite, supports its development through the Blender Development Fund, donations, and paid services like professional training and Blender Cloud subscriptions. 

For smaller projects, platforms like Patreon or Open Collective let supporters donate monthly or per project. GitHub Sponsors allows direct donations to developers. These models rely on voluntary support, which may not match the actual effort needed for development.

However, Polar takes it a step further and allows funding for specific features, issues, or milestones. This drives the project in the direction that is valued by its customers, and it motivates developers, who know they will get paid for hitting clear goals.

Jernström clarified the difference from existing funding platforms, pointing out, “There is no one-size-fits-all solution to this and that’s what we want to build, one platform for multiple solutions.”

Polar is changing open-source funding 

GitHub is keen on giving developers options to choose how they want to monetise their work. In 2019, it launched GitHub Sponsors, but as one user pointed out, it is little more than 'coffee money' between individuals. Polar, according to Jernström, gives maintainers the option to be 'entrepreneurs'.

Donation platforms like Open Collective and Stack Aid, among others, have allowed individuals and companies to pledge financial support directly towards specific issues or feature requests in open-source projects. Polar intends to go beyond this 'coffee money' funding and provide a steady stream of income.

Jernström explained, “Donations and sponsorships are great when they happen. Problem is, they rarely do. In order to drive meaningful (full-time work) capital to OSS initiatives, I believe it has to charge for add-on value and that such services and subscriptions are mutually beneficial.”

Polar facilitates the sale of add-on services, subscriptions, or premium features, and in turn maintainers can craft offerings that align with their project's goals and their community's needs. The platform takes a 10% commission, inclusive of the 5% charged for Stripe transactions.
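As a back-of-the-envelope illustration of that fee structure (our own arithmetic, not an example published by Polar), a hypothetical $100 sale would break down as follows:

```python
sale = 100.00                       # hypothetical gross sale
platform_fee = 0.10 * sale          # Polar's 10% commission
stripe_portion = 0.05 * sale        # the part of that commission covering Stripe charges
maintainer_payout = sale - platform_fee
print(maintainer_payout)            # 90.0; roughly 5.0 of the 10.0 fee goes to Stripe
```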

“As an ecosystem, we should be focused on how we can get 10x, 100x and then 1000x funding. Five percent of nothing is nothing. That’s the real problem in OSS today. Let’s fix that first,” he said. 

This could include anything from offering paid support, consulting, custom development work, access to premium features, or early access to new releases.

This is an upgrade from voluntary support, making it easy for backers to financially support the issues and features they care about. By handling the financial transactions, tax considerations, and potentially even compliance issues, Polar lets developers focus on the projects themselves.

Ease and transparency

Transparency has always been very important to open source funding. Open Collective, for example, is designed around transparency, with all financial transactions visible to the public by default. Expenses, income, and budgets are tracked and displayed on the platform, and contributors can see how funds are used and allocated.

Polar goes the same route: maintainers have complete control over which issues or features they want to highlight for funding through the platform. This ensures that they can align any external funding with their project's roadmap and priorities.

Maintainers can set goals for funding specific initiatives within their projects, providing clarity to potential backers about what their contributions will support.

Andreas Kling, a key contributor to SerenityOS and Ladybird who uses Polar for funding, said, "We've been using Polar for funding GitHub issues for a couple of months now, and it always makes me super happy when I see someone collect a reward!"

SerenityOS is a Unix-like OS with a classic desktop interface and user-friendly design, supported by an active developer community. Ladybird is its companion lightweight web browser, offering fast and secure browsing seamlessly integrated with the OS. 

Kling added, “I’m super happy to see Polar take on the task of becoming a Merchant of Record and abstracting away much of the complexity for all developers.” 

By addressing these critical and often overlooked aspects of open-source project maintenance, Polar is setting a precedent for how platforms can support the sustainable development of open-source software. 

The post Now Open Source Projects Can Make Money appeared first on AIM.

NVIDIA Researchers Make Indic AI Model to Talk to their Spouses’ Indian Parents https://analyticsindiamag.com/innovation-in-ai/these-nvidia-researchers-made-an-indic-ai-model-to-talk-to-their-spouses-indian-parents/ Mon, 19 Feb 2024 05:30:00 +0000 https://analyticsindiamag.com/?p=10113142 These NVIDIA Researchers Made an Indic AI Model to Talk to their Spouses’ Indian Parents

The four researchers triumphed in the LIMMITS ’24 challenge, which tasked participants with replicating a speaker’s voice in real-time in different languages.

The post NVIDIA Researchers Make Indic AI Model to Talk to their Spouses’ Indian Parents appeared first on AIM.


NVIDIA researchers, Akshit Arora and Rafael Valle, wanted to speak to their wives’ families in their native languages. Arora, a senior data scientist supporting one of NVIDIA’s major clients, speaks Punjabi, while his wife and her family are Tamil speakers, a divide he has long sought to bridge. Valle, originally from Brazil, faced a similar challenge as his wife and family speak Gujarati.

“We’ve tried many products to help us have clearer conversations,” said Valle. This motivation led them to build multilingual text-to-speech models that could convert their voices into different languages in real time, and eventually to winning competitions.

Arora, in an exclusive interview with AIM, shed more light on this. "When this competition came to our radar, it occurred to us that one of the models that we had been working on, called P-Flow, would be perfect for this kind of a competition," said Arora. The story is also narrated in his latest blog post.

Arora and Valle, along with Sungwon Kim and Rohan Badlani, triumphed in the LIMMITS ’24 challenge, which tasked participants with replicating a speaker’s voice in real-time in different languages. Their innovative AI model achieved this feat using only a brief three-second speech sample. 

Fortunately, Kim, a deep learning researcher at NVIDIA's Seoul office, had been working for some time on an AI model well suited to the challenge. For Badlani, having lived in seven different Indian states, each with its own dominant language, inspired his involvement in the field.

The Signal Processing, Interpretation, and Representation (SPIRE) Laboratory at IISc in Bangalore orchestrated the MMITS-VC challenge, which stood as one of the major challenges within the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP) 2024. 

In this challenge, a total of 80 hours of Text-to-Speech (TTS) data were made available for Bengali, Chhattisgarhi, English, and Kannada languages. This additional dataset complemented the Telugu, Hindi, and Marathi data previously released during LIMMITS 23.

Never seen before

The competition included three tracks where the models were tested. “One of the models got into the top leaderboard on all the counts for one of the tracks,” said Arora. “In these kinds of competitions, not even a single model performs best on both the tracks. Every model is good at certain things and not good at other things,” explained Arora.

NVIDIA’s strategy for Tracks 1 and 2 revolves around the utilisation of RAD-MMM for few-shot TTS. RAD-MMM works by disentangling attributes such as speaker, accent, and language. This disentanglement enables the model to generate speech for a specific speaker, language, and accent without the need for bilingual data.

In Track 3, NVIDIA opted for P-Flow, a rapid and data-efficient zero-shot TTS model. P-Flow utilises speech prompts for speaker adaptation, enabling it to produce speech for unseen speakers with only a brief audio sample. Part of Kim’s research, P-Flow models borrow the technique large language models employ of using short voice samples as prompts so they can respond to new inputs without retraining.
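Conceptually, that prompting approach can be sketched as below. Every name here (load_pflow, encode_prompt, synthesise) is a hypothetical stand-in used only to illustrate conditioning a TTS model on a roughly three-second reference clip; it is not NVIDIA's actual API.

```python
import soundfile as sf  # assumed audio I/O library

def zero_shot_tts(model, reference_wav: str, text: str, language: str):
    """Condition a prompt-based TTS model on a short speech sample,
    mirroring how P-Flow is described as using ~3-second prompts."""
    audio, sample_rate = sf.read(reference_wav)
    prompt = model.encode_prompt(audio[: 3 * sample_rate])   # keep ~3 seconds of audio
    return model.synthesise(text=text, language=language, speaker_prompt=prompt)

# Hypothetical usage: speak Tamil text in the voice captured in a short sample
# waveform = zero_shot_tts(load_pflow(), "speaker_3s.wav", "வணக்கம்", language="ta")
```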

One of the unique things about P-Flow is its zero-shot capability. "Our zero-shot TTS model happened to perform the best in the zero-shot category on the speaker similarity and naturalness scores," said Arora. The team will also present this model at GTC 2024.

A long project

Last year, the researchers also used RAD-MMM, developed by NVIDIA's Applied Deep Learning Research team, to build "VANI" or "वाणी", a very lightweight, multilingual, accent-controllable speech synthesis system. This was also used in the competition.

The journey began nearly two years ago when Arora and Badlani formed the team to tackle a different version of the challenge slated for 2023. Although they had developed a functional code base for the so-called Indic languages, winning in January required an intense sprint, as the 2024 challenge came onto their radar just 15 days before the deadline.

P-Flow is set to become a part of NVIDIA Riva, a framework for developing multilingual speech and translation AI software, included in the NVIDIA AI Enterprise software platform. This new capability will enable users to deploy the technology within their data centres, personal systems, or through public or private cloud services.

Arora expressed hope that their customers would be inspired to explore this technology further. “I enjoy being able to showcase in challenges like this one the work we do every day,” said Arora.

The post NVIDIA Researchers Make Indic AI Model to Talk to their Spouses’ Indian Parents appeared first on AIM.

Diggibyte Technologies is Certified as Best Firm for Data Engineers https://analyticsindiamag.com/ai-highlights/diggibyte-technologies-is-certified-as-best-firm-for-data-engineers/ Wed, 07 Feb 2024 06:30:00 +0000 https://analyticsindiamag.com/?p=10112143 Diggibyte Award

The Best Firm certification surveys a company’s data and analytics employees to identify and recognise organisations with great company cultures

The post Diggibyte Technologies is Certified as Best Firm for Data Engineers appeared first on AIM.


Diggibyte Technologies is certified as the Best Firm For Data Engineers to work for by Analytics India Magazine (AIM) through its workplace recognition programme. 

The Best Firm certification surveys a company’s data and analytics employees to identify and recognise organisations with great company cultures. AIM analyses the survey data to gauge employee approval ratings and uncover actionable insights.

“We are delighted to share the exciting news that Diggibyte Technologies Pvt Ltd has been honoured as the Best Firm for Data Engineers. This prestigious recognition is a reflection of our exceptional team and the outstanding culture we’ve fostered. A sincere thank you to our remarkable Data & Analytics team, whose talent and dedication have been instrumental in achieving this milestone. It’s not just about data; it’s about our great culture that propels us forward.

Here’s to celebrating our success and the thriving culture that continues to inspire innovation and excellence,” said Lawrance Amburose, CFO and Managing Director, India at Diggibyte Technologies.

“Our team is our biggest asset, and this recognition propels us towards our commitment to fostering an innovative and creative ecosystem within the Data & Analytics space. We take immense pride in being recognized as Best Firm for Data Engineers at Diggibyte Technologies, attracting exceptional young talent.

I extend my gratitude to our team and our valued customers. Together, we aspire to set the standards as thought leaders and market pioneers in Data & Analytics. Thank you, AIM, for selecting Diggibyte as the Best Firm for Data Engineers,” said Sekhar PVR – COO at Diggibyte Technologies.

The analytics industry currently faces a talent crunch, and attracting good employees is one of the most pressing challenges enterprises face.

The certification by Analytics India Magazine is considered a gold standard in identifying the best data science workplaces and companies participate in the programme to increase brand awareness and attract talent. 

The Best Firms certification is the biggest data science workplace recognition programme in India. To nominate your organisation for the certification, please fill out the form here.

The post Diggibyte Technologies is Certified as Best Firm for Data Engineers appeared first on AIM.

Synaptics is Open to Packaging its Chips in India  https://analyticsindiamag.com/innovation-in-ai/synaptics-is-open-to-testing-and-packaging-its-chips-in-india/ Sat, 23 Dec 2023 11:29:15 +0000 https://analyticsindiamag.com/?p=10106039

Everything from design and testing to implementation is seamlessly managed by its extensive team based here in India

The post Synaptics is Open to Packaging its Chips in India  appeared first on AIM.


Synaptics, a fabless semiconductor company that pioneered the development of touch-sensitive pads for laptops and computers, is betting on India’s semiconductor capabilities. AIM recently caught up with Michael Hurlston, CEO at Synaptics, during his visit to Synaptics’ base in Bengaluru, India.  

“If you look at the semiconductor companies like Qualcomm, AMD or Intel, which have a footprint in India, what they typically do is they do a piece of the solution here and they do a piece of the solution in the US and they do a piece of the solution in Europe,” Hurlston said.

But in Synaptics’ case, everything from design and testing to implementation is seamlessly managed by its extensive team based here in India. “This team oversees the entire process, including crucial aspects like customer engagement and design,” he added.

Synaptics' underlying semiconductor technology is currently found in laptops developed by the top three players in the PC market: Lenovo, HP and Dell. Over the years, the San Jose-based company has expanded into other markets, including wireless connectivity, the Internet of Things (IoT) and AI, with its human interface technologies.

Betting on India’s semiconductor capabilities 

“In India, we consider ourselves fortunate to have access to an exceptional talent pool for semiconductor engineering. The country boasts top-notch education, a highly competitive job market, and an abundance of skilled engineers, making it one of the best in the world,” Hurlston said.

Moreover, Synaptics believes it is contributing to India's Design Linked Incentive (DLI) scheme in an indirect way. Unlike other semiconductor companies, which design only portions of their chips in India, what Synaptics does in India is end-to-end.

Though it's not a 'Made in India' product, since India does not own the Intellectual Property (IP) rights, Synaptics is helping the talent pool in India get hands-on experience with end-to-end product design.

So far, Synaptics has developed one wireless chip in India and another is in development. As it continues to grow its wireless business, the company will further consolidate its efforts to handle everything locally.

“Our expertise spans a comprehensive array of wireless semiconductor technologies, extending to diverse processing technologies, encompassing various microprocessors,” Hurlston continued, “We excel in developing different sensors, including touch sensors, capacitive sensors, and computer vision. A substantial portion of these technologies originates from India in various capacities.”

On the manufacturing side

Synaptics’ chips are manufactured by Taiwan Semiconductor Manufacturing Company (TSMC) and it’s too early to talk about the possibility of Synaptics fabricating their chips in the country, given India still does not have a fabrication unit. 

However, Hurlston adds that on the manufacturing side, Synaptics is open to getting the testing and packaging of its chips done in the country.

Micron, a US-based semiconductor company, announced its plans earlier this year to set up an outsourced semiconductor assembly and testing (OSAT) plant in India and others are expected to follow. 

But for Synaptics, the aim will always be to manufacture its product at the most economical cost. The company is presently assessing various OSAT vendors. If a vendor based in India comes up with a much lower cost proposition, then Synaptics is open to conducting testing and packaging operations in the country.

According to Hurlston, Synaptics has been approached by a few OSAT vendors seeking investments to set up their units in India; however, they had to decline. 

“We declined to invest because we believe it will take a considerable amount of time before these facilities are fully operational. Once they are ready, we have committed to working with and partnering with the groups involved to initiate test packaging and testing.” 

Hurlston adds that one of the companies that reached out to Synaptics for investment was ASIP. “While we may not currently invest, we are open to collaboration should they establish these facilities.”

Pivoting to IoT at an unlikely time 

In 2019, Hurlston was appointed as the CEO of Synaptics and under his leadership, the company has redirected its efforts towards expanding its IoT business. Interestingly, Synaptics’ pivot to IoT came at a time when major businesses like IBM and Google had shut down their respective IoT divisions.

In fact, in 2020, Synaptics acquired Broadcom’s wireless IoT business assets and manufacturing rights. Hurlston believes companies failed in IoT because they didn’t understand the market. 

However, Hurlston sees huge opportunities in two market segments: wireless connectivity and processors. The wireless connectivity market currently stands at roughly USD 9 billion, while the processor market is valued at around USD 24 billion. That is the size of the opportunity.

“For us, IoT encompasses a multitude of technologies. In our foray into the IoT landscape, we are honing our focus on processors, specifically embedded processors and endpoint processors, along with wireless connectivity. These two domains are our primary areas of concentration.

“We believe we possess the right technology in processors and wireless. By establishing the necessary infrastructure, we can effectively tap into these expansive markets,” he said.

Synaptics' pivot to IoT has resulted in a 30% increase in its revenue, amounting to USD 1.74 billion, with the IoT business growing by 80%.

 Building a niche in AI 

Besides IoT, Synaptics also sees a huge opportunity in AI, according to Hurlston. The company sees an opportunity to run AI on the edge, especially in the PC, smart devices and automobile sectors. 

“We embed a neural network into our processors, facilitating a machine learning model to run on the chip itself. Recently, we released a chip designed for laptops, running a simple machine learning model that turns off the screen when no one is in front, conserving 30 to 35% of battery life for computer makers.”
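The kind of edge logic Hurlston describes can be outlined in a few lines. The sketch below is a simplified illustration with hypothetical camera, model and display objects; it is not Synaptics' firmware, and the 10-second timeout is our assumption.

```python
import time

SCREEN_OFF_AFTER = 10  # seconds without a detected person (assumed threshold)

def presence_loop(camera, npu_model, display):
    """Poll a low-power sensor, run a small on-chip presence model,
    and blank the display when nobody is in front of the laptop."""
    last_seen = time.monotonic()
    while True:
        frame = camera.capture_low_res()        # hypothetical sensor call
        if npu_model.person_present(frame):     # hypothetical on-chip inference
            last_seen = time.monotonic()
            display.wake()
        elif time.monotonic() - last_seen > SCREEN_OFF_AFTER:
            display.sleep()                     # conserves battery when unattended
        time.sleep(0.5)
```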

Hurlston notes that Dell is a significant adopter of this technology, and the PC manufacturer aims to integrate it across their entire laptop range due to its substantial impact on battery life. Additionally, both HP and Lenovo have also embraced this technology.

This technology can be applied in automotive settings, featuring a computer vision model running on the edge. It detects and triggers an alarm if the driver shows signs of falling asleep. “While it may not be as prevalent in India, drowsy driving among long-haul truckers in the United States poses significant risks, leading to major accidents.”

Synaptics plans to take this technology to Tata Motors, one of the biggest automotive companies in India and globally. 

“These are very simple things and we’re not trying to do gene sequencing or some complex facial recognition. Instead, our emphasis is on executing very simple functions, running machine learning models directly on the chip at the edge. This approach provides us with a significant advantage,” Hurlston concluded.

The post Synaptics is Open to Packaging its Chips in India  appeared first on AIM.

Could Samsung Alter Market Dynamics with its 2nm Chips? https://analyticsindiamag.com/innovation-in-ai/could-samsung-alter-market-dynamics-with-its-2-nm-chips/ Fri, 15 Dec 2023 05:30:00 +0000 https://analyticsindiamag.com/?p=10104932

Samsung claims its 2nm chips provide a 25% increase in power efficiency at the same clock speeds and complexity.

The post Could Samsung Alter Market Dynamics with its 2nm Chips? appeared first on AIM.


After failing to attract top customers with its 3nm chips, Samsung now turns its attention to 2nm chips to gain significant market share. Samsung Foundry, the second biggest semiconductor chip manufacturer in the world, announced earlier this year that it will fabricate 2nm semiconductor chips in 2025, which will be the most advanced semiconductor technology to date.

Samsung claims its 2nm chips provide a 25% increase in power efficiency at the same clock speeds and complexity. Additionally, they boast a 12% performance improvement and a 5% reduction in die area compared to the second-generation 3nm chips introduced earlier this year.

Samsung has also outlined a detailed roadmap, indicating its commencement of mass production for the 2-nanometer process in mobile applications by 2025. Subsequently, the company plans to extend its use to high-performance computing in 2026 and automotive applications in 2027.

“Samsung is never satisfied with No. 2 as a business, as a company. We’re very aggressive,” Jon Taylor, Samsung’s corporate vice president of fab engineering, said earlier in an interview.

The 2 nm chips will possibly power most upcoming consumer devices, such as laptops, PCs, tablets and mobile phones, as well as AI hardware such as Graphics Processing Units (GPUs).

Tough competition from TSMC

However, Samsung is not the only one making the 2 nm chips. Taiwan Semiconductor Manufacturing Company (TSMC) will mass produce its 2 nm chips during the same period. Interestingly, there is another player in the mix — Intel — which is reportedly planning to launch its 2 nm technology next year.

TSMC currently leads the global advanced foundry market with over 60% market share, according to consultancy firm TrendForce. Samsung, which holds around 25% market share, would want to attract more customers with its 2 nm technology and change the market dynamics.

Given it wants to gain significantly from its 2 nm chips, some speculate that Samsung is contemplating the possibility of bypassing extensive 3-nanometer production and directly advancing into the fabrication processes of 2-nanometer technology.

The South Korean company also revealed that it aims to achieve mass production of 1.4 nm chips by 2027, a goal that underscores its ambitious roadmap in semiconductor technology.

Moreover, to keep up with the global demand, Samsung has confirmed its commitment to expanding chip manufacturing capacity by establishing new manufacturing lines in Pyeongtaek, South Korea, and Taylor, Texas.

Luring customers with 2 nm tech

Both Samsung and TSMC have already shown their 2 nm technology to potential clients, according to reports. The Financial Times reported that TSMC has already shown its N2 technology to NVIDIA and Apple. Samsung, on the other hand, is trying to lure customers like NVIDIA by providing the 2 nm chips at a relatively low cost.

Very recently, FT also reported that Qualcomm could replace TSMC with Samsung’s 2 nm chips in its next-generation smartphone processors. Qualcomm’s Snapdragon processors power a significant number of flagship Android phones on the market.

Currently, Snapdragon 8 Gen 2 is Qualcomm’s most advanced mobile phone processor and is powered by TSMC’s 3 nm chips. Interestingly, Samsung was earlier involved in the development of the previous generation of Snapdragon processors.

Even though unconfirmed, landing Qualcomm again would be significant for Samsung. Nonetheless, it’s noteworthy that while Samsung was the first to announce 3 nm chips, TSMC secured the majority of orders from companies like Qualcomm, NVIDIA, AMD, and major tech giants such as Microsoft, AWS, and Google, all engaged in developing their own AI chips.

Staying on course with its roadmap will be crucial for Samsung. Reports from China in September this year suggested that TSMC could delay mass production of its 2 nm chips until 2026, which could potentially give Samsung a head start.

‘Multi-vendor strategy’ the best bet for Samsung

Earlier this year, South Korean media Chosun Biz, citing industry sources, reported that NVIDIA is considering subcontracting a part of its AI GPUs to Samsung for manufacturing due to growing constraints in capacity supply from TSMC.

Anticipating NVIDIA replacing TSMC with Samsung might be a stretch, but adopting a multi-vendor strategy seems to be the most prudent path for Samsung. NVIDIA buying chips from both TSMC and Samsung is the most favourable situation for the South Korean company.

Many analysts also predict companies opting for a multi-vendor strategy to mitigate supply chain constraints similar to those experienced by NVIDIA with TSMC.

Banking on Gate-All-Around technology

Samsung, utilising its proprietary gate-all-around (GAA) transistor architecture, succeeded in producing 3 nm chips ahead of TSMC. Despite this achievement, the company faced challenges in securing major customers and received criticism for its efforts.

However, Samsung Foundry CTO Jeong Ki-tae believes the GAA process is a technology that will last in the future and it will be very difficult to find any further improvements in FinFET technology.

TSMC plans to leverage the GAA architecture for its 2 nm chips. Samsung Electronics President Kyung Kye-hyeon, in a lecture at KAIST in Daejeon, said that once TSMC switches to GAA technology, Samsung will be on par with them.

The post Could Samsung Alter Market Dynamics with its 2nm Chips? appeared first on AIM.

Is AlphaCode 2 a Q* Moment for Google? https://analyticsindiamag.com/innovation-in-ai/is-alphacode-2-a-q-moment-for-google/ Wed, 13 Dec 2023 11:38:07 +0000 https://analyticsindiamag.com/?p=10104825

AlphaCode 2 from Google DeepMind reshapes competitive programming with its advanced AI, tackling complex challenges with a unique, efficient approach.

The post Is AlphaCode 2 a Q* Moment for Google? appeared first on AIM.


Google DeepMind last week released AlphaCode 2, an update to AlphaCode, along with Gemini. This version has improved problem-solving capabilities for competitive programming. Last year when AlphaCode was released, it was compared to Tabnine, Codex and Copilot. But with this update AlphaCode definitely stands way ahead. 

AlphaCode 2 approaches problem-solving by using a set of “policy models” that produce various code samples for each problem. It then eliminates code samples that don’t match the problem description. AlphaCode 2 employs a multimodal approach that integrates data from diverse sources, including web documents, books, coding resources, and multimedia content.

This approach has been compared to the curious Q* from OpenAI. Instead of being a tool that regurgitates information, Q* is rumoured to be able to solve maths problems it has not seen before. The technology is speculated to be an advancement in solving basic maths problems, a challenging task for existing AI models.

Now, while Q* is only speculation, AlphaCode 2 performed better than 85% of competitors on average. It solved 43% of problems within 10 attempts across 12 coding contests with more than 8,000 participants, up from the original AlphaCode's 25% success rate.

However, like any AI model, AlphaCode 2 has its limitations. The whitepaper notes that AlphaCode 2 involves substantial trial and error, operates with high costs at scale, and depends significantly on its ability to discard clearly inappropriate code samples. The whitepaper suggests that upgrading to a more advanced version of Gemini, such as Gemini Ultra, could potentially address some of these issues.

What sets AlphaCode 2 apart

The AlphaCode 2 Technical Report presents significant improvements. Enhanced by the Gemini model, AlphaCode 2 solves 1.7 times more problems and surpasses 85% of participants in competitive programming. Its architecture includes powerful language models, policy models for code generation, mechanisms for diverse sampling, and systems for filtering and clustering code samples. 

To reduce redundancy, a clustering algorithm groups together code samples that are “semantically similar.” The final step involves a scoring model within AlphaCode 2, which identifies the most suitable solution from the largest 10 clusters of code samples, forming AlphaCode 2’s response to the problem.

The fine-tuning process involves two stages using the GOLD training objective. The system generates a vast number of code samples per problem, prioritising C++ for quality. Clustering and a scoring model help in selecting optimal solutions. 
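Putting the pieces together, the sample-filter-cluster-score loop described in the whitepaper can be summarised in pseudocode. The sketch below is our own simplification; the helper functions (passes_example_tests, cluster_by_behaviour) and the sampling budget are placeholders, not DeepMind's implementation.

```python
def alphacode2_solve(problem, policy_models, scorer, n_samples=100_000, top_k=10):
    """Simplified outline: massive sampling, filtering against example tests,
    clustering semantically similar programs, then scoring the largest clusters.
    passes_example_tests and cluster_by_behaviour are placeholder helpers."""
    # 1. Generate many candidate programs from the fine-tuned policy models.
    per_model = n_samples // len(policy_models)
    samples = [m.generate(problem.statement) for m in policy_models for _ in range(per_model)]

    # 2. Discard candidates that do not match the problem description / fail the example tests.
    survivors = [s for s in samples if passes_example_tests(s, problem.examples)]

    # 3. Group semantically similar programs (e.g. identical outputs on generated inputs).
    clusters = cluster_by_behaviour(survivors, problem)

    # 4. Score a representative from each of the ~10 largest clusters and submit the best one.
    largest = sorted(clusters, key=len, reverse=True)[:top_k]
    return max((cluster[0] for cluster in largest), key=lambda s: scorer.score(s, problem))
```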

Tested on Codeforces, AlphaCode 2 shows remarkable performance gains. However, the system still faces challenges in trial and error and operational costs, marking a significant advancement in AI’s role in solving complex programming problems.

Compared to other code generators, AlphaCode 2 shows a unique strength in competitive programming. GitHub Copilot, on the other hand, powered by OpenAI Codex, serves as a broader coding assistant. Codex, an AI system developed by OpenAI, is particularly adept at code generation due to its training on a vast array of public source code.

In the emerging field, other notable tools like EleutherAI’s Llemma and Meta’s Code Llama bring their distinct advantages. Llemma, with its 34-billion parameter model, specialises in mathematics, even outperforming Google’s Minerva. Code Llama, based on Llama 2, focuses on enabling open-source development of AI coding assistants, offering a unique advantage in creating company-specific AI tools.

AlphaCode 2 has a different approach compared to other AI coding tools. It uses machine learning, code sampling, and problem-solving strategies for competitive programming. These features are tailored for complex coding problems. Other tools like GitHub Copilot and EleutherAI’s Llemma focus on general coding help and maths problems. 

A Close Contest  

For OpenAI, Q* reportedly represents a significant advancement towards AI capable of solving maths problems it hadn't seen before. This breakthrough, said to involve Ilya Sutskever's work, led to the creation of models with enhanced problem-solving abilities.

However, the rapid advancement in this technology has raised concerns within OpenAI about the pace of progress and the need for adequate safeguards for such powerful AI models.

While both AlphaCode 2 by Google DeepMind and the speculated Q* represent significant advancements in AI, they are not yet widely available to the public. 

The post Is AlphaCode 2 a Q* Moment for Google? appeared first on AIM.

TSMC: The Wizard Behind AI’s Curtain https://analyticsindiamag.com/innovation-in-ai/tsmc-the-wizard-behind-ais-curtain/ Mon, 04 Dec 2023 07:01:57 +0000 https://analyticsindiamag.com/?p=10104043

TSMC anticipates a substantial CAGR of nearly 50% in the AI sector from 2022 to 2027.

The post TSMC: The Wizard Behind AI’s Curtain appeared first on AIM.


At AWS re:Invent, the hyperscaler unveiled the next generation of two AWS-designed chip families — AWS Graviton4 and AWS Trainium2 — bringing improvements in price performance and energy efficiency across various customer workloads. 

Only a few weeks prior to the AWS announcement, Microsoft, which competes with AWS in the cloud space, also announced two homegrown chips—Microsoft Azure Maia 100 AI Accelerator and Azure Cobalt 100 CPU. Interestingly, both AWS and Microsoft’s homegrown chips will be developed by Taiwan Semiconductor Manufacturing Company Limited (TSMC).

The Taiwanese semiconductor giant also manufactures chips for Google’s Tensor Processing Units (TPU), which the tech giant announced at Google I/O 2016. Moreover, Google is also working on its own custom chip to power its Pixel smartphones and will replace Samsung with TSMC for chip manufacturing. 

Reportedly, Apple, the most popular smartphone maker, is also working towards introducing a host of generative AI features in iOS 18, again, built on TSMC’s N3E 3-nanometer node.

Revenue from AI to skyrocket 

TSMC’s advanced manufacturing processes enable the production of chips with increased computational capabilities, meeting the requirements of generative AI workloads.

With a market cap of USD 511.12 billion as of December 2023, TSMC is the world's 12th most valuable company. At present, approximately 6% of TSMC's overall revenue (USD 73.86 billion in 2022) is derived from AI. Nevertheless, the company envisions this figure doubling within the next four to five years. TSMC anticipates a substantial compound annual growth rate (CAGR) of nearly 50% in the AI sector from 2022 to 2027.
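For a rough sense of what a ~50% CAGR implies, here is a back-of-the-envelope calculation based only on the figures above (our own arithmetic, not TSMC's guidance):

```python
ai_revenue_2022 = 0.06 * 73.86          # ~USD 4.4 billion of revenue attributed to AI
cagr = 0.50                             # the ~50% growth rate anticipated for 2022-2027
projected_2027 = ai_revenue_2022 * (1 + cagr) ** 5
print(round(projected_2027, 1))         # ~33.7 (billion USD), illustrative only
```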

The recent developments by big tech underscore how important TSMC has become in the AI ecosystem. Not to forget, NVIDIA also relies on TSMC for the fabrication of its Graphics Processing Units (GPUs), which became the most sought-after product in the AI world this year. Funnily enough, Intel's Gaudi2 processors, which compete with the NVIDIA H100, are also based on TSMC's 7 nm process.

To keep up with the demand, TSMC is also planning to invest USD 2.87 billion to build a new plant that will handle the advanced packaging of high-performance semiconductors necessary for generative AI.

Moreover, despite having fabrication units in Taiwan, TSMC has also announced numerous expansion plans. In Arizona, US, TSMC is building a second semiconductor factory, with an increased investment from USD 12 billion to USD 40 billion. The new facility, known as Fab 21, is expected to start chip production on TSMC’s advanced N3 process technologies by 2026.

Recent reports indicate that TSMC is considering the establishment of a third fabrication facility in the US to develop 2 nm and 1 nm technology. Similar expansion plans are also being considered in Japan. Moreover, a second fab in Europe is also being evaluated.

Dependence on TSMC: A dangerous precedent?

TSMC’s 3 nm technology is highly important for AI chip companies. For AI applications, where processing large amounts of data with high precision is crucial, TSMC’s 3 nm process enables the development of more powerful and efficient AI chips as it allows for more transistors to be packed into a chip, resulting in improved performance, energy efficiency, and overall capabilities. 

TSMC is one of the few companies in the world that can reliably build chips at the leading edge of semiconductor technology, including advanced AI chips.  The company has already started high-volume production of its 3 nm technology in 2022, making it the industry’s most advanced semiconductor process.

However, the significant dependence of the AI industry on TSMC for chip manufacturing introduces potential risks reminiscent of previous challenges faced by industries relying heavily on specific suppliers. 

For example, the industry heavily depends on NVIDIA for its GPUs. However, a supply shortage left numerous companies in a scramble to acquire these GPUs. Could the AI sector’s reliance on TSMC set a similar precedent, given the semiconductor industry already heavily relies on Taiwan to meet global chip demands?

Notably, Samsung is the only other company with the 3 nm technology. However, TSMC holds a 60% market share of the third-party chip manufacturing business compared to Samsung Electronics, which holds 12%. 

Interestingly, Qualcomm, which is looking to disrupt the smartphone industry with its AI processors, earlier hinted at a dual-foundry strategy with TSMC and Samsung manufacturing simultaneously. However, Qualcomm has officially declared its decision not to enlist Samsung for its upcoming processors, emphasising once more the significance of TSMC in the AI domain.

The post TSMC: The Wizard Behind AI’s Curtain appeared first on AIM.

Now Everyone’s a Filmmaker, Thanks to Pika  https://analyticsindiamag.com/innovation-in-ai/now-everyones-a-filmmaker-thanks-to-pika/ Wed, 29 Nov 2023 06:56:36 +0000 https://analyticsindiamag.com/?p=10103822 Now Everyone’s a Filmmaker, Thanks to Pika

The 'ChatGPT moment' for generative AI video has finally arrived.

The post Now Everyone’s a Filmmaker, Thanks to Pika  appeared first on AIM.


Pika has taken the internet by storm, giving tough competition to Stability AI and RunwayML in text-to-video and image-to-video platforms. Pika Labs has introduced Pika 1.0, for creating and editing videos with AI, and aims to bring everyone’s creativity to life, as their blog says.

This new generative AI platform can edit and create videos in various styles, including anime, cinematic, and 3D animation, all within a new web experience. Until now, Pika was available only on Discord.

Furthermore, Pika has announced a $35 million Series A funding round led by Lightspeed Venture Partners. This brings its total funds raised to $55 million, with the pre-seed and seed rounds led by Nat Friedman and Daniel Gross. With this new funding, the co-founders want to expand their team to 20 people by next year.

“Our vision for Pika is to enable everyone to be the director of their own stories and to bring out the creator in each of us,” say the co-founders of Pika. The co-founders do not want to monetise the product right now, and that is how they aim to differentiate themselves from others in the field. 

Friedman said that even though there are well-funded companies like Runway and Stability AI, and behemoths like Adobe, in the same segment, Pika's pace is unmatched. He and Gross run a 2,500-plus GPU cluster called Andromeda, which they provide to the startups they invest in; Pika is one of them, utilising hundreds of those GPUs.

Pika is loved and supported by a long list of backers, including Elad Gil, angel investor; Adam D'Angelo, founder and CEO of Quora; Andrej Karpathy, research scientist at OpenAI; Clem Delangue, co-founder and CEO of Hugging Face; Craig Kallman, CEO of Atlantic Records; Alex Chung, co-founder of Giphy; Zach Frankel; Aravind Srinivas, CEO of Perplexity; Vipul Ved Prakash, CEO of Together; Mateusz Staniszewski, CEO of ElevenLabs; and Keith Peiris, CEO of Tome.

Better than RunwayML and StabilityAI?

“My co-founder and I are creative at heart. We know firsthand that making high-quality content is difficult and expensive, and we built Pika to give everyone, from home users to film professionals, the tools to bring high-quality video to life,” said Demi Guo, Pika co-founder and CEO. “Our vision is to enable anyone to be the director of their stories and to bring out the creator in all of us.”

“We’re not trying to build a product for film production,” she said in a recent interview. “What we’re trying to do is something more for everyday consumers — people like me and [Meng] who are creators at heart, but not that professional.”

The initial iteration of Pika debuted in beta on Discord in late April 2023 and currently boasts over 500,000 users who produce millions of videos on a weekly basis. Pika enthusiasts on Discord dedicate up to 10 hours daily to craft videos using the platform. Videos created with Pika have gained widespread attention on social media; for instance, the #pikalabs hashtag on TikTok has accumulated nearly 30 million views.

We tested the Discord version of the model, as the new one is still waitlisted. The first version does not improve much on RunwayML or Stability AI's latest Stable Video Diffusion, which offers similar functionality. But the promos for the new version of Pika definitely show its prowess.

Pika 1.0 brings new functionalities that enable AI-based video editing and the creation of videos in various novel styles:

  • Text-to-Video and Image-to-Video: Simply input a few lines of text or upload an image to Pika, and the platform leverages AI to produce concise, high-quality videos.
  • Video-to-Video: Reimagine your current videos in diverse styles, incorporating various characters and elements, all while preserving the original video’s structure. For instance, transform a live-action video into an animated format.
  • Expand: Enlarge the canvas or alter the aspect ratio of a video. For example, convert a video from a TikTok 9:16 format to a widescreen 16:9 format, with the AI model predicting content beyond the original video border.
  • Change: Employ AI to edit video content, such as altering clothing, introducing new characters, modifying the environment, or adding props.
  • Extend: Lengthen the duration of an existing video clip using AI.

Why everyone loves Pika

CEO Guo began her journey at Harvard University, earning a bachelor’s degree in mathematics. Guo continued to demonstrate her commitment to tech innovation in roles like Tech & Innovation Chair at the Harvard China Forum and director at the Harvard MIT Math Tournament. 

After co-founding Hacklodge, she became a scholar in the inaugural batch of the Neo Fellowship. Following a successful undergraduate journey, she pursued a master's degree in computer science at Harvard and later a PhD in computer science at Stanford University, co-advised by professors Ron Fedkiw and Chris Manning, and interned at Microsoft Bing.

The other co-founder and CTO, Chenlin Meng, is also from the Stanford AI Lab, where she specialised in generative AI and diffusion models. Before joining Pika Labs, she gained experience as an intern at Google AI. Advised by Prof Stefano Ermon at Stanford, she was enthusiastic about exploring the wide-ranging applications of generative AI.

Guo said in a recent interview that she entered an AI filmmaking contest announced by Runway and didn’t even place even though they had the most technically advanced team. “It just didn’t look that good,” she says of the film. “I was so frustrated.” In April, both the co-founders dropped out of Stanford and started building an “easier” AI video generator and came up with Pika.

The post Now Everyone’s a Filmmaker, Thanks to Pika  appeared first on AIM.

Phonemakers Now Want to Run LLM On-Device https://analyticsindiamag.com/innovation-in-ai/generative-ai-poised-to-disrupt-smartphone-market/ Thu, 23 Nov 2023 11:07:25 +0000 https://analyticsindiamag.com/?p=10103589

To boost sales and usher in the next wave of innovations in smartphones, phonemakers are turning their attention to generative AI.

The post Phonemakers Now Want to Run LLM On-Device appeared first on AIM.


In 2022, global smartphone shipments declined by 12% to 1.2 billion units, the lowest in a decade. The decline was mostly due to poor consumer demand. A growing number of consumers are not opting for device upgrades, primarily because they see minimal distinctions between the current and the earlier models, which already boast advanced cameras and processors.

To ramp up sales and usher in the next wave of innovations in smartphones, phonemakers are now turning their attention to generative AI. At the recently held Samsung AI Forum, the South Korea-based manufacturing conglomerate showcased an on-device AI technology that incorporates Gauss, its generative AI models, with various smart devices, the KoreaTimes reported.

However, Samsung’s plan is not just about bringing generative AI to its devices. It plans to bring out the ‘most powerful AI phone’ to date, disrupting the smartphone market significantly.

By incorporating genAI across its entire range of devices, Samsung aims to secure a substantial market share, but formidable competition looms from rivals such as Apple, Google, and Chinese smartphone manufacturers.

Phonemakers now believe generative AI could be a key selling point to boost their sales in the market. Alex Katouzian, SVP and GM of Qualcomm’s mobile, compute, and XR, told indianexpress.com that if there was something most likely to trigger an upgrade cycle for smartphones, it was generative AI.

Race to bring generative AI to smartphones

Apple, the most popular smartphone maker, is also working towards introducing a host of generative AI features in iOS 18, which is expected to be released in 2024, Bloomberg reported.

Apple, which has been working on generative AI for years, was surprised by the genAI explosion after OpenAI opened ChatGPT to the world. The company is now looking to significantly improve Siri, its voice assistant feature, with generative AI.

Additionally, Apple is exploring the integration of AI features into other first-party apps, such as Apple Music, potentially introducing auto-generated playlists, akin to Spotify. Notably, applications like Pages and Keynote are in line for substantial AI enhancements too.

Furthermore, Apple is contemplating various approaches to implement generative AI, weighing options like a comprehensive on-device experience, a cloud-based model, or a hybrid combination of both.

Google, currently trailing in the AI arms race, is also looking to heavily monetise generative AI in the phone market. Last month, the tech giant released the Pixel 8 series, which comes with genAI features such as Magic Editor, call assistant, Audio Magic Eraser, and Photo Unblur, among others.

Bindu Reddy, CEO of Abacus.AI, posted on X that Search is a dying business but that, luckily for Google, the Android space presents a growth opportunity that can offset it.

Tough competition from China

Over the past few years, Chinese phonemakers have made significant headway in the smartphone space. Many of them are already in the race to be the first to bring generative AI to smartphones.

Xiaomi, which accounts for around 11% of the global smartphone market compared to Samsung’s 22%, has already introduced a host of such capabilities on its smartphones. Xiaomi 14, which runs on the Snapdragon 8 Gen 3 SoC, can run an AI model with 1.3 billion parameters locally on the phone.

In August, Xiaomi’s founder and CEO, Lei Jun, revealed the integration of generative AI capabilities into the company’s digital assistant, Xiao Ai.

Similarly, earlier this month, Vivo launched the X100 smartphone, which runs a 7 billion parameter language model and a 1 billion parameter vision model locally on the device. Other Chinese phonemakers, such as Oppo and Realme, are following suit.

Hardware

Bringing generative AI capabilities to smartphones, Samsung recently unveiled the Exynos 2400 processor, which features AMD’s latest RDNA3-based Xclipse 940 GPU. The company claims Exynos 2400 offers a 1.7x increase in CPU performance and a 14.7x boost in AI performance compared to the Exynos 2200.

Reports also suggest that Exynos 2400 will debut on Samsung Galaxy S24 and Samsung Galaxy S24+ handsets, which will have generative AI capabilities.

In the hardware space, companies such as Qualcomm and MediaTek are making great strides. Qualcomm’s Snapdragon processors power some of the most popular Android phones in the world. Notably, many Samsung smartphones are also powered by the Snapdragon processors.

The Snapdragon 8 Gen 3 processor is already powering the Xiaomi 14 series of smartphones. Last month, the San Diego-based semiconductor company also announced the Snapdragon X Elite, which it says can run generative AI models with over 13 billion parameters on-device and delivers 4.5 times faster AI processing than competing chips.
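
For a rough sense of why these particular parameter counts are the ones phonemakers quote, a back-of-the-envelope sketch of weight memory at different quantisation levels is shown below; these are ballpark figures, not vendor-published numbers, and real deployments also need memory for the KV cache and runtime overhead.

```python
# Rough rule-of-thumb estimate of the memory an on-device LLM needs just to
# hold its weights; real deployments also need room for the KV cache and
# runtime overhead. Figures are ballparks, not vendor specifications.
def weight_memory_gb(params_billion: float, bits_per_weight: int) -> float:
    """Approximate weight memory in GB for a model of the given size."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

for params in (1.3, 7.0, 13.0):
    for bits in (16, 8, 4):
        print(f"{params:>4}B @ {bits:>2}-bit: ~{weight_memory_gb(params, bits):.1f} GB")

# A 7B model quantised to 4 bits is roughly 3.5 GB of weights, which is why
# aggressive quantisation is what makes such models plausible on phones with
# 8-12 GB of RAM, while anything much larger stays in premium territory.
```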

Similarly, earlier this week, MediaTek launched the Dimensity 8300 chipset, which comes with full generative AI support facilitated by the APU 780 AI processor integrated into the chipset.

“The Dimensity 8300 unlocks new possibilities for the premium smartphone segment, offering users in-hand AI, hyper-realistic entertainment opportunities, and seamless connectivity without sacrificing efficiency,” according to the company.

Notably, the Vivo X100 smartphone is powered by MediaTek’s Dimensity 9300. The new Dimensity 8300 will compete directly with Qualcomm’s Snapdragon and Samsung’s Exynos chips.

However, as it stands, these hardware advances mean generative AI capabilities will, for now, remain limited to premium smartphones. Samsung’s success will also depend on its ability to bring generative AI quickly to its smartphones across all price segments.

The post Phonemakers Now Want to Run LLM On-Device appeared first on AIM.

]]>
Hansa Cequity Certified as Best Firm for Data Scientists for the 2nd Time https://analyticsindiamag.com/ai-highlights/hansa-cequity-certified-as-best-firm-for-data-scientists-for-the-2nd-time/ Tue, 21 Nov 2023 09:30:00 +0000 https://analyticsindiamag.com/?p=10103417

The Best Firm For Data Scientists certification surveys a company’s data scientists and analytics employees to identify and recognise organisations with great company culture.

The post Hansa Cequity Certified as Best Firm for Data Scientists for the 2nd Time appeared first on AIM.

]]>

Hansa Cequity has once again been certified as the Best Firm for Data Scientists to work for by Analytics India Magazine (AIM) through its workplace recognition programme. 

The Best Firm For Data Scientists certification surveys a company’s data scientists and analytics employees to identify and recognise organisations with great company cultures. AIM analyses the survey data to gauge the employees’ approval ratings and uncover actionable insights. 

“We are delighted to be chosen as the Best Firm For Data Scientists by Analytics India Magazine. At Hansa Cequity, we are integrating the power of data and AI to solve strategic marketing problems in a holistic manner with extreme passion and customer-centricity. We are continuously innovating with cutting-edge AI algorithms & cloud with a sharp focus to bring a much-needed paradigm shift in the industry. We will continue to provide the data-rich insights to the CXO community that enable the positive impact to their business topline,” said Prasad Kothari, Head – Data Science & AI at Hansa Cequity.

The analytics industry currently faces a talent crunch, and attracting good employees is one of the most pressing challenges for enterprises.

The certification by Analytics India Magazine is considered a gold standard in identifying the best data science workplaces and companies participate in the programme to increase brand awareness and attract talent. 

Best Firms for Data Scientists is the biggest data science workplace recognition programme in India. To nominate your organisation for the certification, please fill out the form here.

The post Hansa Cequity Certified as Best Firm for Data Scientists for the 2nd Time appeared first on AIM.

]]>
Microsoft is Making Employees Super Lazy https://analyticsindiamag.com/innovation-in-ai/microsoft-is-making-employees-super-lazy/ Sat, 18 Nov 2023 04:30:00 +0000 https://analyticsindiamag.com/?p=10103228

“77% of people who use Copilot told us that they just don't want to go back to working without it,” said Jared Spataro at Microsoft Ignite 2023

The post Microsoft is Making Employees Super Lazy appeared first on AIM.

]]>

Microsoft products have become indispensable to office-goers, and the Ignite event only clarified the company’s plans to hook everyone further to its products. Earlier this year, when we wrote about Microsoft making employees lazy, little did we know that eight months later, the company would announce over 100 new updates, including advanced AI features for its office suite, to further boost employee productivity and deepen dependence on its tools, perhaps making employees even lazier in the process. 

“77% of people who use Copilot told us that they just don’t want to go back to working without it,” said a confident Jared Spataro, CVP of Modern Work and Business Applications at Microsoft, at the company’s annual conference for developers and IT professionals, Microsoft Ignite 2023.

Copilot To Steer Employees

As if it were the word of the day, Microsoft mentioned ‘Copilot’ more than 250 times during the conference. Two weeks ago, Microsoft announced the general availability of Copilot for Microsoft 365 at $30 per user per month. With Copilot integrated across Microsoft’s work-suite applications, including Teams, Outlook, Excel, and others, a person can pull together information from their emails, meetings, chats, and documents to complete tasks. 

Microsoft released findings from a survey of 297 Copilot users in the Microsoft 365 Early Access Program. It found that 70% of Copilot users were more productive and 68% said it improved the quality of their work. Rajesh Jha, Microsoft’s vice-president of experiences and devices, said that Windows 365 has been adopted by over 60% of Fortune 500 companies. 

Copilot also brings personalisation by matching one’s tone: it can analyse a user’s previously sent mail to understand their unique style and sound like them, blurring the line between human and machine-generated responses. 

Furthermore, Microsoft launched Copilot Studio which is a low-code tool tailored for customising Microsoft Copilot for Microsoft 365 and building standalone copilots. It offers a set of conversational capabilities including custom GPTs, generative AI plugins and more. 

Interestingly, Microsoft also announced that it is renaming its AI-powered search chatbot, Bing Chat, to Copilot. 

Employees in a ‘Loop’  

Microsoft Loop, a collaborative workspace application for managing tasks and projects, was officially launched at the Ignite event. Loop, considered a Notion competitor, was previewed earlier this year, and with its seamless integration with other Microsoft apps such as Teams chat and Outlook, Loop pages can be shared across them. In the process, the company is building a closed system that keeps users within the Microsoft ecosystem. 

Microsoft Loop Interface. Source: Microsoft 

Interestingly, Copilot assistant is also available within Loop, helping with tasks such as crafting text and summarising it within the same space. 

Go ‘Teams’

Satya Nadella at Microsoft Ignite 2023. Source: Youtube 

As if things weren’t simplified enough, Microsoft Teams, the organisation-focused platform for real-time collaboration and communication released in 2017, had its share of the limelight at the event. Microsoft CEO Satya Nadella said that more than 320 million users rely on Teams to stay “productive and connected.” 

Nadella also confirmed that there are more than 2,000 apps in the Teams store, with apps such as Adobe, ServiceNow, and Workday having more than 1 million active users, and that 145,000 custom line-of-business applications have been built in Teams. 

To elevate the collaborative workspace experience further, Nadella announced that Mesh will be generally available from January. A cloud-based platform for mixed reality, Microsoft Mesh allows users in different locations to join immersive spaces, logging in via digital avatars. 

Interestingly, Mesh was first announced in 2021, but it is only now coming to fruition. With this string of product features and enhancements, the workplace only keeps getting simpler through Microsoft. At this pace, it is hard to imagine what the fate of employees will be.

The post Microsoft is Making Employees Super Lazy appeared first on AIM.

]]>
Amazon’s PartyRock Jams Past OpenAI https://analyticsindiamag.com/innovation-in-ai/amazons-partyrock-jams-past-openai/ Fri, 17 Nov 2023 11:24:56 +0000 https://analyticsindiamag.com/?p=10103218

Making GPTs yesterday’s news.

The post Amazon’s PartyRock Jams Past OpenAI appeared first on AIM.

]]>

Recently, AWS announced the launch of PartyRock, an approachable Amazon Bedrock playground that lets developers, and indeed anybody, build generative AI applications without any hassle, much like OpenAI’s GPT Builder. 

Providing a creative space for everyone to express themselves, the application is accessible regardless of coding expertise and allows creators to build applications tailored to their own preferences and needs.

Just like OpenAI is making everyone an app developer with GPT Builder, Amazon with PartyRock is effortlessly seizing the creative and user-friendly realm, providing a seamless environment for building and exploring generative AI applications in just a few simple steps. In simple terms – letting everyone build AI apps. 

From Bedrock to PartyRock

Similar to OpenAI’s GPTs, PartyRock allows users to create customised LLM-based apps with personal information. For example, users can build an app that generates superhero names for their dog or themselves, one that provides information about certain places or ratings on food, or even one that generates game content such as levels, characters, or storylines.

The app builder is powered by Anthropic’s Claude 2, and users can give prompts to start generating their desired app. The user interface of Amazon’s app-building platform is minimal and attractive. Whether generating text-based responses or chaining prompts, the platform encourages users to explore and enhance their knowledge of generative AI capabilities. It also offers reliable, easy-to-share options, recognising the importance of the community.

PartyRock builds on Amazon Bedrock’s API-based interface for accessing foundation models (FMs) from Amazon and other top AI providers, such as AI21 Labs, Anthropic, Cohere, Meta, and Stability AI, through a single API. With this interface, users have a strong base to test out different generative AI approaches, and the platform makes working with foundation models less complicated and more accessible to anyone.
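
For illustration, here is a minimal sketch of what calling a Bedrock-hosted model through that single API looks like with boto3, assuming AWS credentials are configured and access to Claude 2 has been enabled in the account; PartyRock itself is a no-code layer on top of Bedrock, so this is indicative rather than how PartyRock apps are built.

```python
# Minimal sketch: invoking a Bedrock-hosted model through the single
# InvokeModel API with boto3. Assumes AWS credentials are configured and
# Claude 2 access is enabled; the prompt and region are illustrative.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({
    "prompt": "\n\nHuman: Suggest a superhero name for a golden retriever.\n\nAssistant:",
    "max_tokens_to_sample": 200,
    "temperature": 0.7,
})

response = bedrock.invoke_model(
    modelId="anthropic.claude-v2",        # same call shape works across providers
    contentType="application/json",
    accept="application/json",
    body=body,
)

print(json.loads(response["body"].read())["completion"])
```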

Moreover, it has been made easier for users to share the apps they have created with their friends and the community. By providing a straightforward means to publish links on social media platforms using #partyrockplayground, AWS aims to foster a vibrant community of creators inspiring each other through their generative AI creations.

What’s the rock in PartyRock?

The best part about Amazon’s PartyRock is that it offers a limited-time free trial without requiring a credit card, unlike OpenAI’s GPT Builder, which requires a ChatGPT Plus subscription to start, making it accessible to a much larger audience. It seems like Amazon is actually making everyone an app developer. 

“With PartyRock’s introduction, the conventional view of generative AI development as a challenging and specialised field is changing,” says Amazon. But it is just one step after OpenAI announced the same with GPT Builder. Apart from the free trial, what sets PartyRock apart is yet to be seen.

The rise of these agent building platforms such as OpenAI’s and now Amazon’s seems like the next frontier for LLM development. It is not just a playground for creating fun applications; rather, it also serves as an educational tool.

The post Amazon’s PartyRock Jams Past OpenAI appeared first on AIM.

]]>
7 NVIDIA Announcements Made at Microsoft Ignite https://analyticsindiamag.com/innovation-in-ai/7-nvidia-announcements-made-at-microsoft-ignite-2023/ Wed, 15 Nov 2023 18:12:14 +0000 https://analyticsindiamag.com/?p=10103095

“Our partnership with NVIDIA spans every layer of the Copilot stack — from silicon to software,” said the Microsoft chief.

The post 7 NVIDIA Announcements Made at Microsoft Ignite appeared first on AIM.

]]>

Microsoft and NVIDIA entered a decade-long partnership earlier this year amid the generative AI craze. While the latter, with its hardware prowess, is already leading the race, Microsoft too enjoys an upper hand, thanks to its deal with OpenAI. Throughout the year, the two parties have made several announcements hand-in-hand in the AI landscape. 

“Our partnership with NVIDIA spans every layer of the Copilot stack — from silicon to software — as we innovate together for this new age of AI,” said Satya Nadella, chairman and CEO of Microsoft, at the ongoing Ignite conference. 

Here are the 7 NVIDIA-related announcements made at the event that caught our attention: 

H100- and H200-based virtual machines come to Microsoft Azure

Microsoft has introduced the NC H100 v5 VM series for Azure, featuring the industry’s first cloud instances with NVIDIA H100 NVL GPUs. These virtual machines have the combined power of PCIe-based H100 GPUs connected via NVIDIA NVLink, delivering nearly 4 petaflops of AI computing and 188GB of HBM3 memory. 

This setup is a game-changer for mid-range AI workloads, offering up to 12x higher performance on models like GPT-3 175B. Moreover, Microsoft plans to integrate the NVIDIA H200 Tensor Core GPU into Azure next year, catering to larger model inferencing with enhanced memory capacity and bandwidth using the latest-generation HBM3e memory.

Confidential Computing with NCC H100 v5 VMs

Microsoft is expanding its NVIDIA-powered services with the introduction of NCC H100 v5 VMs. These confidential virtual machines leverage NVIDIA H100 Tensor Core GPUs, ensuring the confidentiality and integrity of data and applications in use, in memory.

These GPU-enhanced confidential VMs will enter private preview soon, providing Azure customers with unparalleled acceleration while maintaining data security.

AI Foundry Service

NVIDIA has introduced an AI foundry service to supercharge the development and tuning of custom generative AI applications for enterprises and startups deploying on Microsoft Azure.

The foundry service pulls together three elements — a collection of NVIDIA AI Foundation Models, the NVIDIA NeMo framework and tools, and NVIDIA DGX Cloud AI supercomputing services. This will give enterprises an end-to-end solution for creating custom generative AI models. 

Businesses can then deploy their customised models with NVIDIA AI Enterprise software to power generative AI applications, including intelligent search, summarisation and content generation.

Partnership with Amdocs 

Building on the AI foundry service, NVIDIA has also partnered with Amdocs, a key player in communications and media services, which will leverage the service to optimise enterprise-grade LLMs for the telco and media sectors. This collaboration builds on the existing Amdocs-Microsoft partnership. 

AI Foundation Models More Accessible

Microsoft and NVIDIA are democratising access to AI Foundation Models, allowing developers to experience them through a user-friendly interface or API directly from a browser. These models, including popular ones like Llama 2, Stable Diffusion XL, and Mistral, can be customised with proprietary data. 

Optimised with NVIDIA TensorRT-LLM, these models deliver high throughput and low latency, running seamlessly on any NVIDIA GPU-accelerated stack. These foundation models are accessible through the NVIDIA NGC catalogue, Hugging Face, and the Microsoft Azure AI model catalogue.

Omniverse Cloud’s Simulation Engines

NVIDIA also launched two new simulation engines on Omniverse Cloud hosted on Microsoft Azure: the virtual factory simulation engine and the autonomous vehicle (AV) simulation engine. 

As automotive companies transition to AI-enhanced digital systems, these simulation engines aim to save costs and reduce lead times. Omniverse Cloud serves as a platform-as-a-service, unifying core product and business processes for automakers. 

TensorRT-LLM Upgrade for Windows

An upcoming update to TensorRT-LLM, open-source software that enhances AI inference performance, will add support for new large language models. This update makes demanding AI workloads more accessible on desktops and laptops with RTX GPUs, starting at 8GB of VRAM. 

TensorRT-LLM for Windows will soon be compatible with OpenAI’s Chat API, letting developers run projects locally on a PC with RTX. The upcoming release of TensorRT-LLM v0.6.0 promises improved inference performance, up to 5x faster, and support for additional popular LLMs, including Mistral 7B and Nemotron-3 8B.
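
In practice, Chat API compatibility means an OpenAI-style client can simply be pointed at a local server instead of the cloud. The sketch below is illustrative only: the endpoint, port, and model name are assumptions, not values NVIDIA has documented.

```python
# Illustrative pattern only: point an OpenAI-compatible client at a local,
# TensorRT-LLM-backed server instead of the cloud. The URL, port, and model
# name are assumptions made for the example, not documented NVIDIA values.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",   # hypothetical local endpoint
    api_key="not-needed-for-local",        # local servers typically ignore the key
)

response = client.chat.completions.create(
    model="mistral-7b",                    # whichever model the local server exposes
    messages=[{"role": "user", "content": "Summarise what TensorRT-LLM does."}],
)
print(response.choices[0].message.content)
```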

The post 7 NVIDIA Announcements Made at Microsoft Ignite appeared first on AIM.

]]>
Zomato Cooks Generative AI with Microsoft Azure https://analyticsindiamag.com/innovation-in-ai/zomato-cooks-generative-ai-with-microsoft-azure/ Tue, 14 Nov 2023 12:22:04 +0000 https://analyticsindiamag.com/?p=10103006 zomato

“Because the people are lazy,” said Vaibhav Bhutani, the head of generative AI at Zomato, in the first ever episode of Azure Innovation Podcast.

The post Zomato Cooks Generative AI with Microsoft Azure appeared first on AIM.

]]>
zomato

Zomato has been investing so aggressively in generative AI that it now has a dedicated role, head of generative AI, held by Vaibhav Bhutani, who is working on a couple of interesting use cases, including a multi-agent system for suggesting food options and enhancing user engagement and experience. 

Zomato believes that it is on a mission to “power India’s changing lifestyles.” With three main divisions—Zomato, Blinkit, and Hyperpure—the company is dedicated to providing better food experiences for more people. From revolutionising instant commerce to tackling malnutrition through its feeding arm, Zomato’s mission is multifaceted, aiming to cater to diverse aspects of society.

Bhutani discussed how Zomato uses generative AI to deliver a differentiated customer experience on the first episode of the Azure Innovation Podcast with Ross Kennedy, VP of Digital Natives (see below).

https://youtu.be/jcyQdEahaf0?si=FIbKjX-4lpImoFon

People need convenience because they are lazy

“People are lazy, and it’s very hard to have a conversation these days,” said Bhutani, emphasising that having conversations is the only way to collect data and that it should become easier. This is one of the metrics that Zomato is chasing with generative AI. 

“Is chasing a chatbot an everyday use case, or is it not? And while building all these bots, one of the biggest learnings that I have had is that the UI of these bots is what truly matters,” said Bhutani, highlighting that the company focuses on and thinks critically about UI and UX design. 

Bhutani said that he had a ton of options for how to use generative AI. He emphasised that OpenAI’s GPT models through Microsoft Azure emerged as the natural choice, given Azure’s robust commitment to data privacy and security. The LLMs provided on the Azure OpenAI Service became the foundation of Zomato’s generative AI architecture.

Bhutani further explained the adoption of a multi-agent system, in which AI agents communicate with each other to offer comprehensive responses to customers. The integration with Zomato’s ecosystem involves creating functions that seamlessly leverage generative AI, contributing to a cohesive user experience.
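
As an illustration of that pattern, the sketch below shows a chat model on Azure OpenAI deciding to call a function wired into a food-delivery backend. The endpoint, deployment name, API version, and the search_restaurants function are all assumptions made for the example, not details of Zomato’s actual integration.

```python
# Illustrative sketch of function calling on Azure OpenAI: the model decides
# which backend function to call and with what arguments. Endpoint, deployment
# name, API version, and search_restaurants are assumptions, not Zomato's code.
import json
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://example.openai.azure.com",  # hypothetical resource
    api_key="YOUR_KEY",
    api_version="2023-07-01-preview",                    # assumed version with function calling
)

functions = [{
    "name": "search_restaurants",
    "description": "Find restaurants serving a given dish near the user",
    "parameters": {
        "type": "object",
        "properties": {"dish": {"type": "string"}},
        "required": ["dish"],
    },
}]

response = client.chat.completions.create(
    model="gpt-35-turbo",  # Azure deployment name, assumed
    messages=[{"role": "user", "content": "I'm craving biryani tonight."}],
    functions=functions,
    function_call="auto",
)

call = response.choices[0].message.function_call
if call:
    print(call.name, json.loads(call.arguments))  # e.g. search_restaurants {'dish': 'biryani'}
```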

“For me, adopting generative AI is a more personal journey,” said Bhutani, drawing from his experience at Spyne, a SaaS AI photography tool, where he recognised the potential of generative AI to enhance Zomato’s capabilities. The journey began with the creation of Recipe Rover at Blinkit, an AI-generated recipe tool featuring thousands of recipes, images, and text, built using GANs and the company’s proprietary data. 

Bhutani believes that, because of the amount of experimentation Zomato has already done with generative AI, it is definitely going to be a copilot for the company. “We’re working with different teams who need high impact co-pilots so that we can enable large fleets of our team to make much better decisions and faster decisions,” he said. 

Generative AI in Food

In September, Zomato released its Zomato AI Buddy. Going beyond the limitations of traditional chatbots, Zomato AI stands as an intelligent and intuitive foodie companion, dedicated to understanding and satisfying users’ ever-changing preferences, dietary requirements, and even their current moods.

One of the standout features of Zomato AI is its multiple agent framework, which equips it with a diverse range of capabilities to serve customers at any given moment. The framework provides Zomato AI with a variety of prompts for different tasks and activities. 

For instance, if you’re craving a specific dish, the AI will swiftly present you with a widget listing all the restaurants that serve your desired meal. If you’re uncertain about what to order, Zomato AI can suggest a list of popular dishes or restaurants, eliminating the guesswork from your meal selection.

Zomato’s competitor Swiggy, another food delivery platform in India, started using generative AI in July. Swiggy’s Amitkumar Banka told AIM that the company is using generative AI to create customised food images based on specific requirements on its platform, and this is helping it serve millions of customers. “We are using generative AI to put the name, description, and image of food items to individual users based on browsing behaviour not only on Swiggy but on the entire internet,” he said. 

Similar to Zomato, Swiggy also recently unveiled a new feature called ‘What To Eat’ that allows users to explore options based on their mood and cravings.

The post Zomato Cooks Generative AI with Microsoft Azure appeared first on AIM.

]]>
Foxconn to Steer the Electric Wheel  https://analyticsindiamag.com/innovation-in-ai/foxconn-to-steer-the-electric-wheel/ Fri, 10 Nov 2023 09:36:50 +0000 https://analyticsindiamag.com/?p=10102896

Diversification opportunities are ample for Foxconn and the company is going all-in on electric cars next.

The post Foxconn to Steer the Electric Wheel  appeared first on AIM.

]]>

The shift towards electric vehicles (EVs) has been clearly evident in the past decade. Car companies had been working on clean energy long before Tesla’s Elon Musk brought attention to it. But in the recent past, every tech company, be it phonemakers or electronics manufacturer Foxconn, has been trying to put its own model on the road.

Last week, the Taiwanese electronics giant announced that it is taking a gamble on the EV business. 

Diversification opportunities are ample for Foxconn, and the company is going all-in on cars next. The Taiwanese company has already set up fabs and plans further expansion through several automotive production sites across the globe, including India. 

The Desi Connection 

India boasts millions of EV owners, with motorbikes, scooters, and rickshaws constituting over 90% of these vehicles. As per a Bloomberg report, sales rose to 75,000 units in the nine months through September, more than double the volume during the same period in 2022. 

A report by the International Energy Agency (IEA) revealed that in 2022, more than half of India’s three-wheeler registrations were electric-powered. 

One of the reasons for the company’s India push could be the rising demand for passenger EVs in the country.

The surge in the EV market can be attributed, in part, to a government initiative: a $1.3 billion scheme aimed at promoting EV manufacturing within the country while offering discounts to customers. Moreover, charging points across the nation have increased tenfold, reported Elizabeth Connolly, an analyst specialising in energy technology and transport at the IEA.

Even as Foxconn charts a course to become an assembler of electric vehicles in India, it has long been indecisive about its relationship with the country. The company, which has also pursued chipmaking plans in India, has a track record of waging bidding wars, misleading state governments, and backing out of deals after governments have officially announced them. 

Machine Hopping

The second reason for Foxconn’s shift from making iPhones to cars is the dip in the smartphone market in recent years. It seems the company plans to diversify its business.

Similar to Foxconn, Apple has also been building an autonomous EV, dubbed Titan, for almost a decade, and it may finally be on the way. But thanks to its founder Steve Jobs, the company has long maintained a culture of secrecy, and no official information about the project has made it to mainstream media yet. 

The last insider leak on the internet dates back almost a year, which is unusual for a significant project in limbo. Interestingly, in a little over two decades, Apple has applied for 248 car-related patents and even hired Lamborghini’s top executive, Luigi Taraborrelli, to help design its product in the car space.

While Apple has been regularly launching refreshed versions of its existing product lines, the company’s car project has an uncertain future. Prominent Apple analyst Ming-Chi Kuo says Apple’s car plans have “lost all visibility,” and the Cupertino-based tech company must look at alternate strategies to make headway in a highly competitive automotive space.

As the single largest manufacturer of electronics ambitiously forges ahead in the EV market, news from Apple should also be expected sometime soon. But with Foxconn’s leap into the EV sector, there are a lot of ‘ifs’ involved. Even though the Apple-Foxconn relationship has long weathered the tides of the tech industry, Foxconn has a well-documented history of making plans and backing out at the last moment. 

“Foxconn has the reputation for being one of the most opaque companies in an opaque world,” Lawrence Tabak, the author of ‘Foxconned: Imaginary Jobs, Bulldozed Homes, and the Sacking of Local Government’ described Foxconn’s habit of backing out of deals. “It is very normal for them to make stagey announcements that involve politicians, business executives, pomp, and circumstance purely based on speculation,” Tabak added.

As of now, all that stakeholders can do is keep their fingers crossed and hope this does not turn out to be another of the company’s false promises as it looks to steer the electric wheel.

The post Foxconn to Steer the Electric Wheel  appeared first on AIM.

]]>
NVIDIA RTX Brings Alan Wake 2 to Life https://analyticsindiamag.com/innovation-in-ai/nvidia-rtx-brings-alan-wake-2-to-life/ Thu, 09 Nov 2023 10:14:39 +0000 https://analyticsindiamag.com/?p=10102817 Alan Wake 2 Showcases the NVIDIA RTX Prowess

If you don't have an RTX GPU, you can also play this game on NVIDIA GeForce NOW.

The post NVIDIA RTX Brings Alan Wake 2 to Life appeared first on AIM.

]]>
Alan Wake 2 Showcases the NVIDIA RTX Prowess

Alan Wake 2, the latest game developed by Remedy Entertainment, is finally here, and it is nothing short of a beautiful marvel. Beyond a doubt, much of that wonder can only be fully experienced on an NVIDIA GPU, taking full advantage of it. 

Alan Wake 2 introduces gamers to a world where fully ray-traced graphics have reached new heights, all thanks to the NVIDIA GeForce RTX 40 Series GPUs. Players embark on a journey to explore two beautifully crafted yet terrifying worlds, seeking to unravel the mysteries of a supernatural darkness that has trapped the titular character in a never-ending nightmare.

The game’s fully ray-traced, path-traced visuals are a masterpiece, combining ray-traced lighting, reflections, and shadows into a unified, breathtaking solution. The result is an unparalleled level of realism and immersion, with visuals that redefine the gaming experience.

Even if you don’t have a high-end GeForce RTX PC or laptop, you can still enjoy Alan Wake 2 through NVIDIA GeForce NOW Ultimate. This cloud gaming service allows you to stream the game with the same technologies as GeForce RTX 40-Series owners, including DLSS 3.5 and Reflex. With over 1,700 games available, you can play Alan Wake 2 and many other PC titles on a wide range of devices.

“The new Ray Reconstruction feature in DLSS 3.5 renders our fully ray-traced world more beautifully than ever before, bringing you deeper into the story of Alan Wake 2,” says Tatu Aalto, Lead Graphics Programmer at Remedy Entertainment.

Match made in gaming heaven

Until the introduction of NVIDIA’s GeForce RTX GPUs with RT Cores and the AI-powered acceleration of DLSS, real-time full ray tracing in video games was an impossible dream. 

Transparent and opaque reflections meticulously recreate their surroundings at full resolution, immersing players in the game world. Indirect and direct light bounces up to three times, while techniques such as Screen Space Reflections, Screen Space Ambient Occlusion, and rasterized Global Illumination are unified into a single algorithm, resulting in naturally lit environments with exceptional detail and realism.

To provide the definitive gaming experience, Alan Wake 2 incorporates the complete suite of DLSS technologies, designed to maximise frame rates and image quality using AI. Super Resolution accelerates frame rates for all GeForce RTX gamers. Frame Generation boosts performance on GeForce RTX 40 Series GPUs by up to 4.5 times.

Cyberpunk 2077 also recently started using DLSS 3.5. Apart from this, other Remedy games such as Quantum Break and Control have also worked flawlessly with previous versions of DLSS.

Reflex minimises system latency, enhancing gameplay responsiveness. And the groundbreaking Ray Reconstruction replaces multiple hand-tuned ray tracing denoisers with a unified AI model, taking ray-traced effects and full ray tracing to new heights.

Activating ray tracing and DLSS in Alan Wake 2 on a GeForce RTX GPU automatically enables Ray Reconstruction. This feature replaces two denoisers with a unified AI model, enhancing the quality of ray tracing and making the game more immersive and realistic. In addition to these benefits, Ray Reconstruction runs up to 14% faster in benchmarks, further boosting performance for GeForce RTX gamers.

Northlight shines 

The team behind Alan Wake 2’s in-house engine, Northlight, has introduced several exciting new technologies and tools to enhance the game’s performance and visual quality.

One of them is the Data-Oriented Game Object Model, based on an entity component system (ECS), which optimises memory efficiency and enables efficient parallel execution. 

This change allows the game engine to support a varying number of hardware cores efficiently, resulting in more dynamic and expansive game worlds. ECS also played a crucial role in simplifying the creation of the Scattering tool for mass-authoring vegetation, making the development process more efficient.
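
For readers unfamiliar with the pattern, a minimal, illustrative ECS sketch is shown below: entities are plain IDs, components live in separate per-type tables, and systems iterate over whichever entities have the components they need. It only conveys the idea and bears no resemblance to Northlight’s actual implementation.

```python
# Minimal, illustrative entity component system (ECS): entities are integer
# IDs, components sit in per-type tables, and systems iterate over entities
# that have the components they need. Purely conceptual, not engine code.
from dataclasses import dataclass

@dataclass
class Position:
    x: float
    y: float

@dataclass
class Velocity:
    dx: float
    dy: float

class World:
    def __init__(self):
        self.next_id = 0
        self.positions = {}   # entity id -> Position
        self.velocities = {}  # entity id -> Velocity

    def spawn(self, pos=None, vel=None):
        eid, self.next_id = self.next_id, self.next_id + 1
        if pos is not None:
            self.positions[eid] = pos
        if vel is not None:
            self.velocities[eid] = vel
        return eid

def movement_system(world, dt):
    # Runs over every entity that has both components; in a real engine this
    # data layout is what makes systems easy to run in parallel.
    for eid, vel in world.velocities.items():
        pos = world.positions.get(eid)
        if pos is not None:
            pos.x += vel.dx * dt
            pos.y += vel.dy * dt

world = World()
world.spawn(Position(0.0, 0.0), Velocity(1.0, 2.0))
movement_system(world, dt=0.016)
```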

Moreover, the game’s non-player characters (NPCs) now utilise animation-driven movement combined with distance-based Motion Matching, a new system that improves movement quality and provides more control over animation usage. This change results in more realistic NPC movements and contributes to the game’s overall immersion.

The wind system is built on Signed Distance Fields (SDF) methods and employs wind boxes to create a realistic and smoothly varying wind strength field between indoor and outdoor areas.

Northlight has transitioned from a proprietary scripting language to Luau, an embeddable scripting language derived from Lua. Luau exposes a comprehensive set of engine functionality and supports live editing, making it a versatile tool for level scripting and gameplay systems. 

The adoption of Luau empowered the game team to prototype and implement various game features and visual effects without requiring assistance from engine programmers.

Leveraging the power of NVIDIA’s RTX 40 Series GPUs, Alan Wake 2 delivers unparalleled visuals, thanks to full ray tracing and DLSS 3.5. Whether you’re playing on a high-end PC or experiencing it through GeForce NOW, Alan Wake 2 offers a captivating and visually stunning adventure that sets new standards in the gaming industry.

The post NVIDIA RTX Brings Alan Wake 2 to Life appeared first on AIM.

]]>
10 Major AI Updates at GitHub Universe https://analyticsindiamag.com/innovation-in-ai/githubs-10-major-ai-updates-at-universe-2023/ Wed, 08 Nov 2023 17:10:00 +0000 https://analyticsindiamag.com/?p=10102753

Building on the existing global user base, GitHub has made major AI announcements

The post 10 Major AI Updates at GitHub Universe appeared first on AIM.

]]>

GitHub’s parent company Microsoft is seeing big growth in the generative AI business: CEO Satya Nadella told Wall Street that paying customers for GitHub Copilot rose by 40% in the September quarter from the prior quarter.

“We have over 1 million paid copilot users in more than 37,000 organizations that subscribe to copilot for business,” said Nadella, “with significant traction outside the United States.” Building on this global user base, the platform has made major new AI announcements at its ongoing annual conference, GitHub Universe 2023.

In the official statement announcing the launch, Thomas Dohmke, CEO, GitHub, said: “In March, we shared our vision of a new future of software development with Copilot X, where AI infuses every step of the developer lifecycle. Our vision has manifested itself into a new reality for the world’s developers.” He further stated, “Just as GitHub was founded on Git, today we are re-founded on Copilot.”

Here are 10 AI updates made at GitHub Universe 2023: 

Copilot Chat

With GitHub’s latest Copilot Chat, the platform is making natural language the go-to programming language for developers. Now you can debug and find errors with ease, just by chatting.

The chatbot, powered by OpenAI’s GPT-4, will be generally available in December 2023. Apart from users who have a GitHub Copilot subscription, it will also be available for free to verified teachers, students, and maintainers of popular open-source projects.

Slash Commands and Context Variables 

Fixing or improving code has never been easier! GitHub introduces slash commands and context variables, making tasks like code fixes and test generation a breeze with simple commands like /fix and /tests.

Inline Chat Integration

Say hello to the new inline Copilot Chat, enabling developers to discuss specific lines of code seamlessly within their coding flow and editor.

One-Click Actions

GitHub’s Copilot now offers powerful shortcuts with just a click! Speed up your development process with smart actions that streamline tasks like fixing suggestions, reviewing pull requests, and generating responses.

JetBrains Suite Integration

Copilot Chat will be soon available in the JetBrains suite of IDEs, making it easier for users to access AI-powered assistance directly within their preferred coding environment.

The feature is available to preview starting today. 

Chat on Mobile App and GitHub.com

GitHub is bringing Copilot Chat to their mobile app, ensuring that developers can access its powerful features even while on the move, enhancing their coding experience anytime, anywhere.

The bot will also be available on GitHub.com. Combined with the power of GitHub’s advanced code search, Copilot Chat will be able to understand and help with the latest changes to popular open-source projects.

Copilot Enterprise

GitHub Copilot initially boosted developers’ speed by 55% as an IDE autocomplete function. Now, GitHub is introducing Copilot for Enterprise which will help teams in codebase orientation, documentation creation, personalized suggestions, and swift pull request reviews. 

The feature will be generally available from February 2024 at $39 per user per month.

GitHub Copilot Partner Program

GitHub is teaming up with over 25 leading partners, including Datastax, LaunchDarkly, Postman, Hashicorp, and Datadog, to expand Copilot’s capabilities and create an ecosystem of AI-driven coding solutions.

AI-Powered Security 

GitHub Copilot employs an LLM-based security system that detects and blocks insecure code patterns, such as hardcoded credentials and SQL injections. Going forward, GitHub Advanced Security will feature AI-powered tools to uncover and mitigate vulnerabilities and sensitive data in code, for upgraded application security.
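
The patterns such a system targets are familiar ones; the snippet below is an illustrative example of a hardcoded credential and an injectable query alongside safer equivalents, not output from GitHub’s scanner.

```python
# Illustrative example of patterns an AI-powered security check would flag,
# with safer equivalents; this is not GitHub's scanner output.
import os
import sqlite3

API_KEY = "sk-live-1234567890abcdef"       # flagged: hardcoded credential
API_KEY_SAFE = os.environ.get("API_KEY")   # better: read the secret from the environment

def find_user_insecure(conn: sqlite3.Connection, name: str):
    # flagged: SQL injection via string interpolation
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(conn: sqlite3.Connection, name: str):
    # parameterised query, the kind of fix such a tool would suggest
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()
```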

GitHub Copilot Workspace 

GitHub Next’s research team has introduced the AI-driven GitHub Copilot Workspace to help developers translate ideas into code. This upcoming platform signals GitHub’s continued exploration of the future of software development. 

Set for a 2024 launch, Copilot Workspace will enable seamless code creation through natural language and AI.

The post 10 Major AI Updates at GitHub Universe appeared first on AIM.

]]>
Everything You Need to Know about the First-Ever OpenAI DevDay 2023 https://analyticsindiamag.com/innovation-in-ai/everything-you-need-to-know-about-the-first-ever-openai-devday-2023/ Mon, 06 Nov 2023 21:24:18 +0000 https://analyticsindiamag.com/?p=10102613

ChatGPT has a whopping 100 million weekly active users now. 

The post Everything You Need to Know about the First-Ever OpenAI DevDay 2023 appeared first on AIM.

]]>

OpenAI definitely took everyone by surprise with its slew of announcements. At DevDay, OpenAI’s first-ever developer conference, held in San Francisco on November 6, 2023, the team introduced GPTs: customised AI agents that users can design for specific purposes. These agents can be created in natural language (no coding skills required) for personal, professional, or public use.

GPTs will allow users to instruct ChatGPT, share knowledge, and define tasks without complex manual input.

Along with this, OpenAI plans to launch the GPT Store for public sharing in the coming months, showcasing verified builders’ creations, which users can easily discover and which builders can potentially monetise.

Moreover, developers can link GPTs to the real world through APIs for data integration and specific task automation. Businesses can create internal GPTs to streamline processes in various departments, as seen in Amgen and Bain & Company’s successful use cases.

Here’s a glimpse of all the announcements made at OpenAI DevDay 2023. 

GPT-4 Turbo with a 128K Context Window

OpenAI introduced GPT-4 Turbo, an upgraded version of GPT-4 that boasts a 128K context window, allowing it to process the equivalent of over 300 pages of text in a single prompt, with knowledge extending up to April 2023.

The model offers six significant improvements and comes in two forms: one specialised in text analysis and another proficient in both text and image understanding. Both are initially available as a preview through the API, and OpenAI plans to make them widely accessible soon, at a competitive price of $0.01 per 1,000 input tokens and $0.03 per 1,000 output tokens. 

Notably, it is much more cost-effective, being three times cheaper for input tokens and twice as affordable for output tokens compared to GPT-4.

GPT-4 Turbo Vision now also accepts images as inputs within the Chat Completions API, enabling tasks like generating image captions, detailed real-world image analysis, and document reading with figures.
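
A minimal sketch of calling both variants with the OpenAI Python SDK (v1+) follows; the model identifiers are the ones announced at launch and may have changed since, and the image URL is a placeholder.

```python
# Minimal sketch using the OpenAI Python SDK (v1+); model identifiers are the
# launch-time names and the image URL is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Text: GPT-4 Turbo preview with the 128K context window
chat = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[{"role": "user", "content": "Summarise the DevDay announcements in one line."}],
)
print(chat.choices[0].message.content)

# Vision: image inputs through the same Chat Completions API
vision = client.chat.completions.create(
    model="gpt-4-vision-preview",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in this image?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
    max_tokens=300,
)
print(vision.choices[0].message.content)
```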

Assistants API, Retrieval, and Code Interpreter

Developers also now have the Assistants API, a tool that helps them create AI-driven assistants for various applications. These assistants can perform specific tasks, leveraging extra knowledge and utilising models and tools.

The API offers features like Code Interpreter and Retrieval, streamlining complex tasks and allowing the development of high-quality AI applications.

The API also introduces persistent and infinitely long threads, simplifying message handling. Assistants can access tools like Code Interpreter for running Python code, Retrieval for external knowledge, and Function calling to invoke defined functions.

Developers can try the Assistants API beta via the Assistants playground.
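
A minimal sketch of that flow with the Python SDK is below; the maths-tutor assistant, its instructions, and the polling loop are illustrative.

```python
# Minimal sketch of the Assistants API beta flow: create an assistant with the
# Code Interpreter tool, start a thread, run it, and read the reply. The
# maths-tutor example and polling loop are illustrative.
import time
from openai import OpenAI

client = OpenAI()

assistant = client.beta.assistants.create(
    name="Maths Tutor",
    instructions="Answer maths questions, running Python when helpful.",
    tools=[{"type": "code_interpreter"}],   # Retrieval would be {"type": "retrieval"}
    model="gpt-4-1106-preview",
)

thread = client.beta.threads.create()       # persistent, effectively unbounded thread
client.beta.threads.messages.create(
    thread_id=thread.id, role="user", content="What is 3 * (12 + 7)?"
)

run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)
while run.status in ("queued", "in_progress"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

messages = client.beta.threads.messages.list(thread_id=thread.id)
print(messages.data[0].content[0].text.value)  # newest message first
```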

DALL·E 3 Integration 

DALL·E 3 can be incorporated into various applications and products by developers via OpenAI’s Images API, simply by specifying “DALL·E 3” as the model.

Prominent organisations like Snapchat, Coca-Cola, and Shutterstock have harnessed DALL·E 3’s capabilities to automatically create images and designs for their customers and marketing initiatives.
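
A minimal sketch of the call with the OpenAI Python SDK (v1+) is below; the prompt and size are illustrative.

```python
# Minimal sketch of generating an image with DALL·E 3 through the Images API;
# the prompt and size are illustrative.
from openai import OpenAI

client = OpenAI()

result = client.images.generate(
    model="dall-e-3",
    prompt="A watercolour poster of a robot serving chai at a street stall",
    size="1024x1024",
    n=1,
)
print(result.data[0].url)  # hosted URL of the generated image
```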

Text-to-Speech (TTS) Enhancement

The ChatGPT maker also introduced Audio API, a text-to-speech tool featuring six distinct voices: Alloy, Echo, Fable, Onyx, Nova, and Shimmer. It’s ready for immediate use, with pricing starting at $0.015 for every 1,000 input characters. 

Last, but definitely not least, we can now also access the latest iteration of OpenAI’s open-source automatic speech recognition model, Whisper large-v3, with improved performance across various languages.
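
A minimal sketch of both audio directions with the Python SDK follows; the file path is a placeholder, and "whisper-1" is the generic API model identifier rather than a claim about which Whisper version the hosted endpoint serves.

```python
# Minimal sketch of the Audio APIs: text-to-speech with one of the six voices,
# then transcription of the result with Whisper. The file path is a
# placeholder; "whisper-1" is the generic API model identifier.
from openai import OpenAI

client = OpenAI()

speech = client.audio.speech.create(
    model="tts-1",
    voice="alloy",   # one of Alloy, Echo, Fable, Onyx, Nova, Shimmer
    input="OpenAI DevDay introduced a text-to-speech API.",
)
speech.stream_to_file("devday.mp3")

with open("devday.mp3", "rb") as audio:
    transcript = client.audio.transcriptions.create(model="whisper-1", file=audio)
print(transcript.text)
```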

To address the ever-growing concerns around copyright infringement, OpenAI has introduced Copyright Shield: the company will step in to support customers and cover legal expenses for copyright infringement claims arising from ChatGPT Enterprise and developer platform features.

Although OpenAI hasn’t reached AGI yet, chief Sam Altman showed great excitement about teaming up with Microsoft, stressing their strong teamwork. Meanwhile, Microsoft’s Satya Nadella echoed similar sentiments, emphasising their goal to support people and organisations worldwide and the importance of AI that genuinely empowers.

Read more: OpenAI is the New Apple 

The post Everything You Need to Know about the First-Ever OpenAI DevDay 2023 appeared first on AIM.

]]>