AI Origins & Evolution | AIM | https://analyticsindiamag.com/ai-origins-evolution/ | Thu, 15 Aug 2024

Cognizant is Offering a Very Generous Salary of INR 2.5 LPA for Freshers
Wed, 14 Aug 2024 | https://analyticsindiamag.com/ai-insights-analysis/cognizant-is-offering-a-very-generous-salary-of-inr-2-5-lpa-for-freshers/

The IT industry seems to love its employees a little too much. It is no revelation that the salaries offered by Indian tech giants are far too low, but Cognizant has set a new record this time and is being heavily trolled for it. 

“My driver makes way more than that…,” commented one user on X when they came across the news that Cognizant is offering a salary of Rs 2.52 lakh per annum for a third-year undergraduate, which works out to roughly Rs 21,000 per month. 

The saddest part about this is that it’s the same package that was offered in 2002. Nothing has changed. “No house, no free commutation, no free food. All this to be managed in just 18 to 19K rupees after PF deduction in metro cities,” added the user.

The salary is peanuts. “Degrees have become useless in India,” said a user, echoing a discussion a while back about how developers in India are more likely to be unemployed if they are educated.

Expanding Base and Capabilities, But Not Pay

Adding to all of this, Cognizant is expanding its operations across the country. Most recently, it opened its first centre in Indore, Madhya Pradesh, a move set to create over 1,500 jobs with the potential to grow to 20,000 in the future. Cognizant’s expansion in Indore adds to its existing presence in cities like Bengaluru, Bhubaneswar, Chennai, and others across India.

Hyderabad has now become Cognizant’s biggest centre globally, displacing Chennai from the top spot with around 57,000 employees.

Cognizant has also gone on a spree of partnerships for generative AI capabilities aimed at its employees. Just this year, it partnered with Microsoft and ServiceNow to build a generative AI-powered digital workspace for its workforce. 

The IT giant was also actively hiring in FY23, taking on around 60,000 freshers, unlike TCS, Infosys, or Wipro; its numbers for FY24 are yet to be revealed. But it seems this hiring comes at the cost of not offering its employees decent pay. 

“2.52 LPA is very generous. What will the graduates do with so much money?” remarked a user on X, while another added that this is why the younger generation is content with making reels on Instagram.

This year, Cognizant also gave poor salary hikes, with developers on Reddit pointing out that this was despite the company having the highest-paid CEO in Indian IT, earning Rs 186 crore per year. “Why can’t they have minimum wage criteria in all the sectors,” asked a user on X. Another said that they get more money from cashbacks than this.

The truth is that there are people ready to take any job they can get, and this salary is more than appealing to them. As Debarghya ‘Deedy’ Das puts it, “It’s simple. If you think it’s too low, find another job. If you want 186 cr, start the next Cognizant.” 

The Same Story Across Indian IT

Some people argue that though these companies offer low salaries, they are also upskilling employees with generative AI to make them ready for the future market, and possibly preparing them for business.

If the wage is too low, “Cognizant should get 0 employees, then they will realise they raise the salary. If they got 10,000 employees doing this, then you are wrong and it’s not too low,” Das replied to another post. 

Case in point: TCS recently had 80,000 job openings that it could not fill, citing a skill gap as the reason. The company is still looking for people for its Ninja, Digital, and Prime roles. Interestingly, the Ninja category offers a package of Rs 3.3 LPA for various roles, while the other two offer a range between Rs 9 and Rs 11.5 LPA.

Most of the IT giants have stopped hiring, but people would still love to work there just to start somewhere. Moreover, there is almost no hike at TCS. To put inflationary pressures into perspective, Jio and Airtel have hiked their prices by 25%, but TCS gave its employees only around a 1% hike. 

There is an oversupply of engineers in India, which makes it clear that all these job openings can be filled easily. This makes Cognizant’s risky bet not so risky, as the backlash will die down as soon as the seats are filled. 

But this also explains why graduates are shying away from IT companies: they get better salaries at startups and even GCCs in India. Indian IT giants are not even trying to lure new joiners anymore.

Generative AI is a Threat to Frontend Developers
Wed, 14 Aug 2024 | https://analyticsindiamag.com/ai-origins-evolution/generative-ai-is-a-threat-to-frontend-developers/

Overuse of generative AI tools has made frontend development more complex.


A few days back, a developer named Jason predicted that 99% of frontend development as we know it today will be fully automated within three years. “The 1% will be branding and curating. 2-3 individuals will do as much as 20 does today,” he said in a post on X.

Jason isn’t wrong. HTML, CSS and JavaScript are still considered the foundation of frontend development: HTML provides both structure and content, CSS styles that content, and JavaScript gives it interactivity. 



However, the frontend development landscape of today is incredibly vast, with bundlers like Webpack and Rollup, automated task runners like Gulp and Grunt, and CSS preprocessors like Sass and Less that let developers modularise CSS code and work with control flow commands and other utilities in it.

There are also powerful testing tools like Puppeteer and Cypress, and UI frameworks and libraries like Vue.js, Angular, and React.


Additionally, for many small businesses, hiring a web developer has become outdated. These days, it’s unlikely that a neighbourhood bakery, dentist, or artist will hire a developer and pay them tens of thousands of dollars to create a website from the ground up. They will instead visit website-building and hosting software, select a preferred template, and pay as little as $20 per month.

The Threat of Generative AI

AI tools are also taking up some frontend development tasks. A tool like Anima lets you convert Figma designs to React code. Sketch2Code converts wireframe sketches into HTML pages. These AI coding tools can increase developer productivity, allowing them to build products faster.


Debugging AI-generated code can be difficult, though. You may end up spending even more time than you would have if you had written the code yourself.

Too Complex

This echoes what AIM has said earlier: overuse of such tools has made frontend development more complex. Many developers on Reddit have found web development frustrating. 

One developer went on to say that, “Web development is f**** stupid”, while another wrote, “I started off learning video games and desktop apps, but by the time I finished college, I realised most of software engineering is web apps, which I despise.”

Developers say that the dependency management with npm and Yarn (JavaScript package managers) is a nightmare and “frameworks like React, Redux, and Next.js are constantly changing for no reason, making the entire process unnecessarily complicated”.

The hate for web development is not new. A simple search on Google, X, Quora, Reddit, or any community platform throws up thousands of posts complaining about the constantly changing paradigms of web development, going back more than a decade.

The Threat of Low-Code and No-Code Tools

Even large corporations have some low- and no-code use cases. For instance, Upwork uses Webflow to add new pages to its website and change it in real time without consulting the IT team. 

As a result, the engineering team can concentrate on the final product instead of making quick tweaks or upgrades that the marketing team may do on their own. 

Without a doubt, low-code tools may help front-end developers with some jobs. For instance, they can create templates that can be used to generate designs rather than having to start from scratch. 

Too Early to Call It Quits

However, not everyone shares this negative view. Some see the challenges of web development as opportunities for interesting projects. “There are so many interesting projects in web development. People think it’s just building simple websites, but web apps can have so many interesting challenges and depth,” a user noted.

Moreover, simply using AI tools won’t help you build a website.  According to the McKinsey study “Unleashing developer productivity with generative AI,” AI code tools produced “incorrect coding recommendations and even introduced errors in the code.”

The project or organisation’s context is unknown to the tools. You have to provide the context as a developer, meaning you need to be great at prompt engineering.

AI code generation tools are also more suitable for simpler tasks, like generating code snippets. They do not generate helpful code in more complex use cases.

This Mumbai-Based Startup Has Released India’s Very Own Harvey AI
Tue, 13 Aug 2024 | https://analyticsindiamag.com/ai-origins-evolution/this-mumbai-based-startup-has-released-indias-very-own-harvey-ai/


Mumbai-based legal tech company LexLegis has set itself apart as “India’s answer to Harvey AI”, opening its tool for access this week.

LexLegis AI has been trained on one crore legal documents aggregated over 25 years. The AI tool is aimed at legal professionals, offering detailed analyses for legal research.

“It is to help simplify and demystify the legal complexities for everyone and to save time on the vast amounts of time that we’re spending on legal research. The tool enables users to efficiently navigate through thousands of pages and extract meaningful, actionable information,” said co-founder and managing director Saakar S Yadav.

The legal research company, which was founded in 1998, has reinvented itself this year with the goal of building an LLM for Indian law. The company was founded by the late S C Yadav, who served as the Chief Commissioner of Income Tax, and his son Saakar S Yadav.

Over the years, the company worked on several legal tech solutions. Shortly after its founding, the company developed and launched a search engine catered specifically towards legal professionals and tax consultants, to help in the understanding of the taxation domain.

They also built the largest database of judgments in India in 2004, followed a decade later by the development of the National Judicial Reference System (NJRS), the world’s largest repository of appeals for the Income Tax Department.

With LexLegis AI, the company has leveraged its 25 years of experience within the industry to offer an overarching tool to help legal professionals, businesses and researchers cut down on the time used to research and find citations for relevant cases.

Speaking on the tool, Yadav stated that while it currently focuses on tax law, the aim is to incorporate all fields of law.

While previously AIM has covered tools to assist legal professionals, this is one of the first Indian-made LLMs for law, focusing solely on the Indian legal system.

Stuck in Bengaluru Traffic? Don’t Blame BMTC Buses
Tue, 13 Aug 2024 | https://analyticsindiamag.com/ai-origins-evolution/stuck-in-traffic-dont-blame-bmtc/


Bengaluru is synonymous with traffic. If you have ever been stuck in your car or on your bike at Silk Board, K.R. Puram, Marathahalli, or the Outer Ring Road, you have surely blamed BMTC and KSRTC buses for occupying the entire road and blocking your way. 


“Public transport occupies a much lesser space compared to any personal mobility vehicle, car, two-wheeler. Plus public transport emits a much lesser tailpipe emission per passenger they carry,” said Dr Ashish Verma, Convenor, IISc Sustainable Transportation Lab, in a conversation with AIM.


This holds true for a full bus: public transport, especially eco-oriented buses, emits less per passenger than two-wheelers or cars. In the same respect, a bus that occupies the space of two cars can carry as many people as ten cars. Note, however, that these calculations assume a full bus, which is not always the case. 
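The arithmetic behind this point can be sketched quickly. The footprints and occupancy figures below are illustrative assumptions for the sake of the sketch, not BMTC data:

```python
# Back-of-envelope comparison of road space consumed per passenger.
# Assumptions (not official figures): a bus takes roughly the road
# space of 2 cars, an average car carries ~1.5 people, a full bus ~50.

CAR_ROAD_SPACE_M2 = 10.0                      # assumed footprint of one car
BUS_ROAD_SPACE_M2 = 2 * CAR_ROAD_SPACE_M2     # a bus spans about two cars

def space_per_passenger(road_space_m2: float, passengers: float) -> float:
    """Road space consumed per person carried."""
    return road_space_m2 / passengers

full_bus = space_per_passenger(BUS_ROAD_SPACE_M2, 50)       # 0.4 m2/person
avg_car = space_per_passenger(CAR_ROAD_SPACE_M2, 1.5)       # ~6.7 m2/person
near_empty_bus = space_per_passenger(BUS_ROAD_SPACE_M2, 2)  # 10.0 m2/person

print(f"full bus: {full_bus:.1f} m2/person")
print(f"average car: {avg_car:.1f} m2/person")
print(f"near-empty bus: {near_empty_bus:.1f} m2/person")
```

Under these assumed numbers, a full bus is over an order of magnitude more space-efficient than a car, while a near-empty bus is actually worse per passenger, which is exactly the caveat above.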

There have been many instances of buses making rounds of the city with only one or two passengers, occupying more space and adding to traffic.


So, how is BMTC addressing this concern, alongside optimising its buses and routes? 

BMTC recently introduced Namma BMTC, a passenger information app for phones, under the Government of India’s Nirbhaya scheme. Developed by MCT Cards & Technology and Amnex Infotechnologies, the app can be used in both Kannada and English and offers comprehensive details about bus schedules, routes, stops, and other relevant details for public transportation.

Buses: Bengaluru’s Unsung Lifeline

“We have a footfall of around 30 lakh passengers on a daily basis in buses which is almost six times than the metros, and we can proudly say that it is the lifeline of Bangalore where people, mostly low-income household bank on buses for the daily commute,” said A.V. Surya Sen, Director, IT, BMTC. 

In Bangalore, there are currently more than one crore private vehicles on the road. In comparison, only around 6,800 buses operate in the city, a fleet nowhere near the private vehicle population, yet one that carries some 30 lakh passengers daily. So, according to Sen, blaming the volume of traffic solely on the number of buses is unfair. 

Nonetheless, even with the availability of various modes of transportation, buses retain a crucial function. They are not only vital for connectivity and convenience but also serve as a means of providing essential services to the public. Buses can complement the higher-level public transport system while also offering distinct advantages. 

“Additionally, buses are a cost-effective and affordable option, especially for individuals from low-income households. Hence, buses contribute to equitable transportation accessibility,” added Verma.


AI is Solving Traffic

The Namma BMTC app is a testament to the successful implementation of AI and ML algorithms. “The use of ML and AI is instrumental in providing effective solutions. The availability of extensive historical and legacy data allows us to make accurate predictions about bus arrival times and create algorithms for determining bus schedules and locations,” added Sen.

Having a comprehensive mobile application that offers a range of functions, including real-time tracking, trip planning, identifying nearby bus stops, and suggesting optimal travel routes, is an important aspect of an information system. However, it is essential to understand that the app itself does not provide a complete solution. It is just one part of a larger set of actions and initiatives that need to be taken.

According to Verma, the app in the future will also function as an Automated Fare Collection (AFC) system, enabling mobile ticketing, mobile passes, and digital wallets, revamping the process of purchasing tickets and travelling on buses by introducing digital payment methods. 

Compared to other metropolitan cities like Mumbai, Chennai, and Delhi, BMTC is extremely expensive. 

However, the government is taking steps to address the issue of affordability, and it is expected that improvements will be made in this regard.

“It is important to note that service quality and affordability are two separate aspects. Urban public transportation should be viewed as a service rather than a profit-oriented business, and efforts should be made to enhance service conditions while making it more affordable,” commented Verma. 


Moving Towards a Greener City

According to a 2021 NASSCOM report, Bangalore is the top destination for IT professionals in India, with over 1 million people moving to the city each year in search of work. With employees returning to the office as remote work ends, buses are in higher demand, along with places to rent. 

Public transportation promotes sustainability by reducing emissions, improving air quality, alleviating traffic congestion, and fostering social equity. It efficiently carries more passengers, lowers greenhouse gas emissions, reduces pollutants, saves time and money, and provides access to opportunities for those without cars. Furthermore, in terms of carrying a single passenger, public transport emits significantly lower levels of tailpipe emissions per passenger. 

“Public transportation is a viable solution to combat global warming and carbon emissions. Prioritising public transportation and reducing reliance on private vehicles leads to a cleaner environment and improved quality of life,” concluded Verma.

The headline of the story has been updated to show that BMTC is not the reason behind Bengaluru’s traffic.

India’s Fabless Semiconductor Supply Chain is Far from Fabulous
Tue, 13 Aug 2024 | https://analyticsindiamag.com/ai-origins-evolution/indias-fabless-semiconductor-supply-chain-is-far-from-fabulous/


The semiconductor industry in India is fabless, not fabulous, meaning that while designs are conceived within the country, the actual manufacturing of chips occurs overseas. PSMC and Tata Electronics have a roadmap for building a fab in India by 2026, but until then, Indian companies rely on TSMC and others to manufacture chips designed in India. This creates a security issue within the semiconductor supply chain.

To explain the possible risks in the semiconductor supply chain and how to protect against them, Shashwath TR, CEO and founder of Mindgrove Technologies, spoke with AIM. “When I send our design to a foundry, I am trusting them with the intellectual property,” said Shashwath.

Founded in 2021, Mindgrove Technologies is a Chennai-based semiconductor company focusing on the design and production of Systems on Chips (SoCs). The fabless startup secured $2.32 million in seed funding last year from investors led by Sequoia Capital India (now Peak XV Partners).

In May, the company unveiled India’s inaugural commercial high-performance SoC (system on chip) dubbed Secure IoT, which would be in production by next year. “When it does go into production, that is the revenue generating thing for us,” added Shashwath. 

Secure IoT’s production is based on MPW (Multi-Project Wafer). This enables cost-effective prototyping and low-volume production, reducing the cost of a full prototyping wafer run to 10% or even 5% of the initial price.
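The economics of an MPW shuttle can be sketched with a toy calculation: several designs share one mask set and wafer run, so each project pays only for its slice. The cost figure and overhead margin below are made-up assumptions, not Mindgrove’s numbers:

```python
# Hypothetical illustration of why a Multi-Project Wafer (MPW) run is
# cheaper than a dedicated run: the full run's cost is split across the
# projects sharing the wafer, plus a small aggregation overhead.

def mpw_cost_per_project(full_run_cost: float, num_projects: int,
                         overhead: float = 0.10) -> float:
    """Cost each project pays when sharing one wafer run.

    overhead models the shuttle operator's aggregation margin (assumed).
    """
    return full_run_cost * (1 + overhead) / num_projects

full_run = 1_000_000.0  # assumed cost of a dedicated prototyping run
shared = mpw_cost_per_project(full_run, num_projects=20)

print(f"share of full-run cost per project: {shared / full_run:.1%}")
```

With 20 projects on the shuttle, each pays around 5-6% of a dedicated run, which is roughly the 5-10% range quoted above.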

The company also plans to release a prototype of its Vision SoC by next year, or within another 18 months. 

But There is a Bigger Issue

When it comes to manufacturing chips outside India, trust is foundational. But it is also the first point of vulnerability in the supply chain. Foundries like TSMC assure clients that they will not misuse or replicate the designs, but the risk cannot be entirely eliminated.

Two primary risks emerge when outsourcing semiconductor manufacturing: design theft and malicious tampering. While the former requires advanced state-level capabilities, the latter could be more subtle, involving the insertion of backdoors or other vulnerabilities into the chip design.

“As far as stealing the design is concerned, it’s really hard to do—it takes a state actor,” Shashwath emphasised, underscoring the sophisticated level of expertise required to extract a chip design post-manufacture. 

Even with the guarantees provided by foundries like TSMC, the potential for tampering remains. This concern is mitigated through rigorous verification processes, including non-functional verification, silicon validation, and extensive testing. A more subtle risk lies in the potential for unauthorised modifications during the manufacturing process. 

“Somebody in the middle of the supply chain could add something into the design, making it insecure,” Shashwath said. To mitigate this, companies must implement rigorous verification processes, including functional and non-functional tests, to ensure that the final product aligns with the original design and is free of tampering. “We can measure the power output from different things in simulation and then see if they match in real life,” Shashwath noted, highlighting the meticulous checks in place to ensure chip integrity.

Despite these precautions, the semiconductor industry has yet to experience a proven attack at the silicon level. However, the potential for such an attack exists. “There are usually three or four papers a year talking about these possibilities,” Shashwath remarked, highlighting the ongoing research into theoretical vulnerabilities.

The Risk Does Not End at the Foundry

Once a chip is manufactured, the One-Time Programmable (OTP) memory, which stores critical cryptographic keys, remains unprogrammed until the packaging stage. This introduces another layer of vulnerability—if compromised, it could render the entire chip insecure. 

Shashwath explained, “An insecure root key can be provisioned on that chip, which makes the entire chip insecure.”

To counter this, companies like Mindgrove Technologies employ multiple strategies, including the ability to disable compromised keys via updates. “We have space for four root keys inside the chip. If any root key is known to be compromised, we can write an OT update to the chip, which will disable that key,” Shashwath shared, illustrating a proactive approach to potential threats.
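The multi-slot revocation approach Shashwath describes can be sketched in a few lines. The slot count matches his description, but the layout and names here are hypothetical, purely to illustrate the one-way nature of one-time-programmable (OTP) fuses:

```python
# Hypothetical sketch of root-key revocation via OTP fuse bits: the chip
# holds several root-key slots plus revocation bits that can only ever be
# burned from 0 to 1, so disabling a compromised key is permanent.

class OtpKeyStore:
    def __init__(self, num_slots: int = 4):
        self.slots = [f"root-key-{i}" for i in range(num_slots)]
        self.revoked = [0] * num_slots  # OTP fuse bits, start unburned

    def revoke(self, slot: int) -> None:
        # Burning a fuse is one-way; there is no operation to reset it to 0.
        self.revoked[slot] = 1

    def active_keys(self) -> list:
        return [k for k, r in zip(self.slots, self.revoked) if not r]

store = OtpKeyStore()
store.revoke(0)  # root key 0 found compromised: disable it forever
print(store.active_keys())  # ['root-key-1', 'root-key-2', 'root-key-3']
```

The design choice worth noting is the asymmetry: because fuses only burn one way, an attacker who compromises a key cannot later un-revoke it, and the chip stays usable as long as at least one uncompromised slot remains.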

In such a complex landscape, the role of government regulations and industry standards becomes crucial. Agencies like the RBI, the Ministry of External Affairs, and the Ministry of Defence in India often prescribe specific security measures for chips used in sensitive applications. These measures are not only about protecting the chip itself but also about ensuring that the entire supply chain remains secure. 

“There is a long consultation process of what kind of regulations you need,” Shashwath mentioned, acknowledging the collaborative effort required across the ecosystem.


Much More is Needed

While companies like Mindgrove Technologies are pioneering in securing their products, Shashwath admits that the broader challenge lies in creating a consensus across the industry. “The ecosystem has to agree on what to do,” he said, reflecting the shared responsibility of all stakeholders in the semiconductor supply chain.

“Everybody who’s making systems on chip will be working on this, especially if they have some kind of relationship with the government or security-focused applications,” Shashwath explained, adding that every company, from Netrasemi and Sima to AMD, Intel, and Qualcomm, is concerned about this. 

As India looks to the future, the establishment of a domestic fab by PSMC and Tata Electronics is a significant step toward reducing reliance on foreign foundries. “When it comes, it’ll be great. We hope to be able to use a wholly indigenous supply chain,” Shashwath remarked, expressing optimism about the potential for a more secure and self-reliant semiconductor industry in India.

However, he remains realistic about India’s current capabilities when asked about building something like NVIDIA. “We have to learn to walk before we can run, before we can fly,” Shashwath observed, highlighting the importance of focusing on consumer appliances, electronics, and smart devices before venturing into more complex areas like supercomputing.

Fancy Engineering Programmes are Making Graduates Ineligible for Jobs in India
Mon, 12 Aug 2024 | https://analyticsindiamag.com/ai-origins-evolution/are-fancy-engineering-programmes-making-graduates-ineligible-for-jobs-in-india/


The allure of innovative and specialised engineering programmes, such as a PGP in AI or an 11-month course in ML, offered by new-age universities in India, is undeniable. These courses are designed to prepare India’s current youth for the jobs of the future. 


However, are there any employers in India actually hiring for such roles, specifically outside the startup sector?

Prof. V Ramgopal Rao, group Vice-Chancellor of BITS Pilani Campuses and former director of IIT Delhi, voiced a growing concern. He observed that while these innovative programs may seem exciting, they could inadvertently limit students’ prospects. “Students graduating with fancy discipline titles will neither be eligible for master’s programs in India nor for any government jobs,” Rao warned. 

The crux of the problem lies in the rigid eligibility criteria of government jobs and higher education admissions, which are slow to adapt to the rapidly evolving educational landscape. But the allure of these courses is itself part of the problem. 

The rigidity could leave students in a precarious position, with limited options other than joining private industries or seeking opportunities abroad. At the same time, “Students and parents need to be mindful of what they are getting into,” concluded Rao, while also adding that there is nothing wrong with these programmes as they are approved by the government in the first place.

Voices from Academia

These concerns are echoed by many in the field. Mahadevan Chandramouleeswaran, an investor at Akshamala Tech Services, shared a personal account. His daughter, a top scorer in a five-year integrated M Tech software engineering course, found her application for an assistant professor position rejected because her degree did not align with the traditional 4+2-year degree format. 

The academic community is increasingly aware of the challenges posed by these new programs. Dr. Shashi Bhushan Arya, a professor and national expert in aluminium recycling, noted a decline in enrollment rates in core branches of engineering. He suggested that universities may be using fancy degree titles as a marketing strategy to fill seats, particularly in expensive management quotas. 

This trend raises questions about the long-term viability and recognition of these programmes, even as the number of STEM graduates in India increases rapidly.

However, not all voices are critical. Prof Ravikumar Bhaskaran, life fellow of IIT Kharagpur, pointed out that IIT Kharagpur has a history of introducing new B.Tech programs in emerging fields, often ahead of other institutions. While these programs initially faced scepticism regarding job prospects and postgraduate opportunities, they eventually gained acceptance and recognition. 

Bhaskaran’s perspective offers a glimmer of hope, suggesting that new programs, given time and support, can find their place in the academic and professional landscape.

The root of the problem, as many experts suggest, lies in the inflexibility of current policies. Dharmendra Saraswat, a professor of agricultural and biological engineering, urges policymakers to bring about the necessary changes to match the evolving landscape of engineering education in India. 

A Call for Policy Reform?

Similar cases have also been reported by others. The mismatch between new-age degrees and existing eligibility criteria isn’t just an isolated issue; it’s a systemic one. As Prof Rao highlighted, this problem isn’t limited to undergraduate programs. 

He recently chaired a committee for scientist recruitment at a CSIR lab, where candidates from top institutions like IIT Bombay and IIT Delhi were disqualified because their degrees didn’t match the advertised requirement of “Electronics engineers.” Such cases underline the urgent need for coordination across government bodies and educational institutions to ensure that degree titles and job requirements align.

The rigidity of the system is further exemplified by Reuben Mathew, an aerospace engineer, who recounted his experience of being rejected for an engineering position at DRDO/NAL due to a mismatch between his degree title and the traditional degree names listed in the job advertisement. 

Mathew’s frustration is a shared sentiment among many graduates of specialised programs, who find themselves excluded from opportunities despite possessing the necessary skills and experience.

Without such policy changes, graduates from these interdisciplinary programs may find themselves at a disadvantage. Dr. Chennakesava Kadapa, a lecturer in mechanical engineering, sees an opportunity in this challenge. He argues that IITs and other premier institutions should update the GATE exam and introduce new master’s programs in fields like AI, robotics, and sustainable energy.

This proactive approach could ensure that new-age programs are recognised and valued, both in academia and the job market.

The rise of specialised engineering programs is a response to the evolving demands of technology and industry. However, without corresponding changes in the eligibility criteria for government jobs and postgraduate programs, these degrees may leave students stranded.

The post Fancy Engineering Programmes are Making Graduates Ineligible for Jobs in India appeared first on AIM.

To Be an AI Agent is to be Curious | https://analyticsindiamag.com/ai-origins-evolution/to-be-an-ai-agent-is-to-be-curious/ | Mon, 12 Aug 2024 08:30:00 +0000

By combining human and AI curiosity, we can leverage their unique strengths to compound our creative potential.

The post To Be an AI Agent is to be Curious appeared first on AIM.


Most AI approaches rely heavily on programs developed by humans. They can only do what they were programmed to do and learn only what they are taught. When faced with new environments, these systems get stuck. This is slowly changing with the rise of AI agents. But is that enough?

In a recent interview, British neuroscientist Karl Friston underscored a transformative potential in current AI agents: the integration of curiosity. 

“Their inability to independently select training data limits their capacity for genuine intelligence. While they excel at data processing and prediction, they lack the curiosity and independent thought essential for true scientific advancement and problem-solving,” he said.

Why is curious AI even necessary? To answer this, consider the following research papers, which illustrate the need for curiosity.

A “linear vs. loopy maps” study found that while LLMs do well on simple, linear tasks, they struggle with complex ones that involve cycles or dead ends.

This limitation was investigated in further detail in the “TravelPlanner” study, which showed that LLMs frequently perform poorly in complicated decision-making scenarios that require weighing a variety of constraints and possible outcomes.

Another significant vulnerability concerns an LLM’s capacity to efficiently retrieve and apply the knowledge it was trained on. Studies of “LLM lookup capabilities” have revealed that these models’ performance in locating pertinent data can vary, leading to errors and imprecise results.

How Can Being Curious Solve the AI Problem?

Nick Clegg, the president of global affairs at Meta, has called AI models stupid, similar to how Yann LeCun calls them ‘dumb’. “They can process information and identify patterns incredibly fast, but they don’t truly understand the world in the same way humans do. They’re essentially sophisticated pattern-matching machines,” he said.

The idea is to compound curiosity with intelligence. By combining human and AI curiosity, we can leverage their unique strengths to compound our creative potential. The intuitive nature of human curiosity can work in tandem with the computational power of AI curiosity to accelerate discoveries and drive innovation. 

In the words of Sonya Huang, partner at Sequoia: “As the models get bigger and bigger, they begin to deliver human-level, and then superhuman results.”

This proved true in the new algorithm developed by researchers at MIT’s Improbable AI Laboratory and CSAIL. The paper highlighted that the algorithm automatically piques interest when needed, but when not, it stifles it until the agent gathers enough information from its surroundings to decide what to do.

When evaluated on more than 60 video games, the method performed well on both hard and easy exploration tasks, whereas prior algorithms could only handle a hard or easy domain on their own. 

“Previously what took, for instance, a week to successfully solve the problem, with this new algorithm, we can get satisfactory results in a few hours,” co-author Zhang-Wei Hong said. This efficiency is crucial in real-world applications where time and resources are limited.

The study expands on past research by OpenAI that showed how AI agents driven by curiosity could succeed in challenging gaming environments such as Montezuma’s Revenge. 

Making AI Agents More Curious

Over the years, scientists have worked on algorithms for curiosity, but copying human inquisitiveness has been tricky. For example, most methods aren’t capable of assessing AI agents’ gaps in knowledge to predict what will be interesting before they see it. 

TEXPLORE-VENIR, for instance, is a reinforcement learning method created by Todd Hester and Peter Stone. It incentivises programs to find new knowledge and reduce uncertainty. In contrast to traditional approaches that concentrate only on reaching predetermined goals, it offers intrinsic rewards for comprehending novel concepts, such as places or recipes. This improves curiosity-driven exploration and learning efficiency.

Another example is IBM’s Project Debater, which IBM claims is the first-ever AI system designed to meaningfully engage with humans in a debate, though it aims to win competitive debates rather than genuinely explore the nuances of the topics discussed.
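The intrinsic-reward idea behind methods like TEXPLORE-VENIR can be illustrated with a toy, count-based sketch (this simplified bonus scheme is our illustration, not the actual TEXPLORE-VENIR algorithm): the agent earns a bonus for visiting unfamiliar states, and the bonus decays as a state becomes familiar.

```python
import math
from collections import defaultdict

class CuriousAgent:
    """Toy count-based intrinsic motivation: total reward is the
    extrinsic reward plus a novelty bonus that shrinks as a state
    is visited more often."""

    def __init__(self, bonus_scale=1.0):
        self.visit_counts = defaultdict(int)
        self.bonus_scale = bonus_scale

    def intrinsic_bonus(self, state):
        # Novel states (low visit count) yield a large bonus; familiar ones, little.
        self.visit_counts[state] += 1
        return self.bonus_scale / math.sqrt(self.visit_counts[state])

    def total_reward(self, state, extrinsic_reward):
        return extrinsic_reward + self.intrinsic_bonus(state)

agent = CuriousAgent()
first = agent.total_reward("room_A", 0.0)                    # first visit: bonus = 1.0
later = [agent.total_reward("room_A", 0.0) for _ in range(99)]
# After 100 visits the bonus has decayed to 1/sqrt(100) = 0.1
```

Even this toy version shows why curiosity helps exploration: a goal-less agent still has a reason to seek out states it has rarely seen.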

Take chatbots, for example. It is common to see chatbots that can answer frequently asked questions (FAQs), but customer service quality could improve significantly if chatbots had a certain level of perceived emotional intelligence, which can be achieved by injecting curiosity-driven behaviours.

In healthcare, more curious models could accelerate drug discovery by exploring vast chemical spaces with greater efficiency. In robotics, curious AI could enable robots to adapt to new environments and tasks more rapidly.


Making AI agents curious is definitely something to look forward to. 

RLHF is NOT Really RL | https://analyticsindiamag.com/ai-origins-evolution/rlhf-is-not-really-rl/ | Sun, 11 Aug 2024 05:32:23 +0000

Unlike true RL, where the reward is clear and directly tied to success, RLHF relies on subjective human judgments, making it less reliable for optimising model performance.

The post RLHF is NOT Really RL appeared first on AIM.


OpenAI co-founder Andrej Karpathy recently expressed disappointment in Reinforcement Learning from Human Feedback (RLHF), saying, “RLHF is the third (and last) major stage of training an LLM, after pre-training and supervised finetuning (SFT). My rant on RLHF is that it is just barely RL, in a way that I think is not too widely appreciated.” Yann LeCun wholeheartedly agrees.

He explained that Google DeepMind’s AlphaGo was trained using actual reinforcement learning (RL). The computer played games of Go and optimised its strategy based on rollouts that maximised the reward function (winning the game), eventually surpassing the best human players. “AlphaGo was not trained with reinforcement learning from human feedback (RLHF). If it had been, it likely would not have performed nearly as well,” said Karpathy. 

However, Karpathy agrees that for tasks that are more open-ended, like summarising an article, answering tricky questions, or rewriting code, it’s much harder to define a clear goal or reward. In these cases, it’s not easy to tell the AI what a “win” looks like. Since there’s no simple way to evaluate these tasks, using RL in these scenarios is really challenging.

Not everyone aligns with Karpathy’s view. Pierluca D’Oro, a PhD student at Mila and researcher at Meta, who is building AI agents, argues that AlphaGo has a straightforward objective: to win the match. “Yes, without any doubt RL maximally shines when the reward is clearly defined. Winning at Go, that’s clearly defined! We don’t care about how the agent wins, as long as it satisfies the rules of the game,” D’Oro said.

He explained that as humans will interact with AI agents in the future, it is important for LLMs to be trained with human feedback. “AI agents are designed to benefit humans, who are not only diverse but also incredibly complex, beyond our full understanding,” he said. “For humans, it often comes from things like human common sense, expectations, or honor.”

Here, Karpathy also agrees. “RLHF is a net helpful step in building an LLM assistant,” he said, adding that LLM assistants benefit from the generator-discriminator gap. “It is significantly easier for a human labeller to select the best option from a few candidate answers than to write the ideal answer from scratch,” he explained, citing the example prompt ‘generate a poem about paperclips’.

An average human labeller might struggle to create a good poem from scratch as an SFT example, but they can more easily select a well-written poem from a set of candidates.
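In practice, those labeller choices are typically turned into a training signal through a pairwise (Bradley-Terry style) reward-model loss. A minimal numeric sketch, where the scalar scores are hypothetical stand-ins for a real neural reward model’s outputs:

```python
import math

def pairwise_rm_loss(score_chosen: float, score_rejected: float) -> float:
    """Bradley-Terry style loss used in RLHF reward-model training:
    small when the model scores the labeller's chosen answer above
    the rejected one, large when the ranking is inverted."""
    margin = score_chosen - score_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Hypothetical scores: a correct ranking gives a small loss...
good = pairwise_rm_loss(score_chosen=2.0, score_rejected=-1.0)
# ...an inverted ranking gives a large one, pushing the model to fix it.
bad = pairwise_rm_loss(score_chosen=-1.0, score_rejected=2.0)
```

This is why preference data is cheap to collect relative to written demonstrations: each label is just a binary choice between candidates.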

Karpathy goes on to explain that using RLHF in complex tasks like Go wouldn’t work well because the feedback (“vibe check”) is a poor substitute for the actual goal. The process can lead to misleading outcomes and models that exploit flaws in the reward system, resulting in nonsensical or adversarial behavior. 

Unlike true RL, where the reward is clear and directly tied to success, RLHF relies on subjective human judgments, making it less reliable for optimising model performance, he says.

“This is a bad take. When interacting with humans, giving answers that humans like *is* the true objective,” responded Natasha Jaques, senior research scientist at Google AI, to Karpathy’s critique.

She says that while human feedback is limited compared to something like infinite game simulations (e.g., in AlphaGo), this doesn’t make RLHF less valuable. Instead, she suggests that the challenge is greater but also potentially more impactful because it could help reduce biases in language models, which has significant societal benefits.

“Posting this is just going to discourage people from working on RLHF, when it’s currently the only viable way to mitigate possibly severe harms due to LLM biases and hallucinations,” she replied to Karpathy.

Moving Away from RLHF

Yann LeCun from Meta AI has constantly been talking about how the trial-and-error method of RL for developing intelligence is a risky way forward. For example, a baby does not identify objects by looking at a million samples of the same object, or trying dangerous things and learning from them, but instead by observing, predicting, and interacting with them even without supervision. 

Meta has been bullish on self-supervised learning for quite some time. Self-supervised learning is ideal only for large corporations like Meta, which possess terabytes of data to train state-of-the-art models. 

On the other hand, OpenAI recently introduced Rule-Based Rewards (RBRs), a method designed to align models with safe behaviour without extensive human data collection. 

According to OpenAI, while reinforcement learning from human feedback (RLHF) has traditionally been used, RBRs are now a key component of their safety stack. RBRs use clear, simple, and step-by-step rules to assess whether a model’s outputs meet safety standards. 

When integrated into the standard RLHF pipeline, RBRs help balance helpfulness with harm prevention, ensuring the model behaves safely and effectively without the need for recurrent human inputs.
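OpenAI has not released RBR code, but the core idea of scoring outputs against simple, checkable rules can be sketched as follows (the rules, phrasings, and weights below are invented for illustration, not OpenAI’s actual rubric):

```python
def rule_based_reward(response: str) -> float:
    """Toy rule-based reward: a weighted sum of simple, checkable
    predicates over the model's output. Rules and weights here are
    invented for illustration only."""
    text = response.lower()
    rules = [
        # (rule description, predicate result, weight)
        ("contains a polite refusal", "i can't help with that" in text, 1.0),
        ("avoids judgemental language", "you should be ashamed" not in text, 1.0),
        ("stays concise", len(text.split()) < 100, 0.5),
    ]
    return sum(weight for _, passed, weight in rules if passed)

safe = rule_based_reward("I can't help with that request.")            # 2.5
unsafe = rule_based_reward("You should be ashamed. " + "word " * 200)  # 0.0
```

Because every rule is deterministic, this kind of reward needs no fresh human labels once the rules are written, which is the appeal OpenAI describes.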

Similarly, Anthropic recently introduced Constitutional AI, an approach to train AI systems, particularly language models, using a predefined set of principles or a “constitution” rather than relying heavily on human feedback.

Meanwhile, Google DeepMind, known for its paper “Reward is Enough”, which claims intelligence can be achieved through reward maximisation, recently introduced another paper detailing Foundational Large Autorater Models (FLAMe).

FLAMe is designed to handle various quality assessment tasks and address the growing challenges and costs associated with the human evaluation of LLM outputs. 

Meta, which recently released Llama 3.1, opts for self-supervised learning rather than RLHF. For the post-training phase of Llama 3.1, Meta employed SFT on instruction-tuning data along with Direct Preference Optimisation (DPO).

DPO is designed to directly enhance the model’s performance based on human preferences or evaluations, rather than relying solely on traditional reinforcement learning or supervised learning methods.
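The DPO objective fits in a few lines: it raises the policy’s preference margin for the chosen answer over the rejected one, relative to a frozen reference model, with no explicit reward model. A numerical sketch for one preference pair, using hypothetical log-probability values:

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO loss for one preference pair: pushes the policy to prefer
    the chosen answer over the rejected one, measured relative to a
    frozen reference model, without a separate reward model."""
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Hypothetical log-probs: the policy already prefers the chosen answer...
aligned = dpo_loss(-5.0, -9.0, ref_logp_chosen=-7.0, ref_logp_rejected=-7.0)
# ...versus a policy that prefers the rejected answer instead.
misaligned = dpo_loss(-9.0, -5.0, ref_logp_chosen=-7.0, ref_logp_rejected=-7.0)
```

The reference model term keeps the policy from drifting too far from its starting point, which is the role the KL penalty plays in classic RLHF.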

Meta isn’t stopping there either. It recently published another paper titled “Self-Taught Evaluators,” which proposes building a strong generalist evaluator for model-based assessment of LLM outputs. This method generates synthetic preferences over pairs of responses without relying on human annotations.

Another paper from Meta titled “Meta-Rewarding Language Models: Self-Improving Alignment with LLM-as-a-Meta-Judge” allows LLMs to improve by judging their own responses instead of relying on human labellers. 

In line with this, Google DeepMind also proposed another new algorithm called reinforced self-training (ReST) for language modelling. It follows a similar process of removing humans from the loop by letting language models build their own policy from a single initial command. While ReST finds application in various generative learning setups, it particularly excels at machine translation.

NVIDIA AI Summit India 2024 – 5 Key Things to Expect | https://analyticsindiamag.com/ai-origins-evolution/nvidia-ai-summit-india-2024-5-key-things-to-expect/ | Fri, 09 Aug 2024 08:17:09 +0000

Blackwell is Coming to India.

The post NVIDIA AI Summit India 2024 – 5 Key Things to Expect appeared first on AIM.


NVIDIA has announced its AI Summit 2024, scheduled between October 23 and 25 at the Jio World Convention Centre in Mumbai, India. The summit will feature three days of presentations, hands-on workshops, and networking opportunities aimed at connecting industry experts and exploring advancements in artificial intelligence.

“With accelerated computing infrastructure, research and AI skilling at scale, India has the potential to become the intelligence capital of the world. The upcoming NVIDIA AI Summit is the first-of-its-kind event with a significant focus on India,” said Vishal Dhupar, managing director (Asia-South), NVIDIA. 

“It promises to be topical, relevant for India and is a must-attend for developers, startups and enterprises,” he added.

1. Blackwell is Coming to India 

NVIDIA chief Jensen Huang is set to attend the event and participate in a fireside chat. There’s a strong possibility that Huang will announce the availability of NVIDIA’s latest Blackwell GPUs for the Indian market. 

According to a report, NVIDIA’s newly announced Blackwell GPUs are expected to begin shipping as early as October. Mumbai-based data center and managed cloud infrastructure provider Yotta Data Services is set to benefit from this early release.

“We are in early talks with Nvidia to source Blackwell GPUs as part of our order, and are in the process of finalising all details,” said Sunil Gupta, the co-founder and chief executive of Yotta.

“We’re looking at procuring around 1,000 Blackwell B200 GPUs by October, which would be equivalent to around 4,000 ‘H100’ GPUs. While a timeline isn’t clear yet, we’re expecting the delivery of Blackwell GPUs before the end of this year, and complete our full existing order by the next fiscal,” Gupta said.

The entire order between Yotta and Nvidia, Gupta said, is worth ~$1 billion.

Notably, the production of Blackwell chips has been delayed by three months or more due to design flaws, which could impact customers such as Meta platforms, Google, and Microsoft, who have collectively ordered tens of billions of dollars worth of the chips.

Earlier this year, Yotta, an elite partner of NVIDIA, received the first shipment of 4,000 GPUs. Yotta plans to scale up its GPU inventory to 32,768 units by the end of 2025. Last year, the company announced that it would import 24,000 GPUs, including NVIDIA H100s and L40S, in a phased manner.

2. Data Centres Loading 

During his last visit to India, Huang announced NVIDIA’s partnership with Reliance to develop a foundational large language model tailored to India’s diverse languages and generative AI needs. However, there have been no updates since. It is likely that Reliance will announce something significant during the summit.

Reliance is working with NVIDIA to build AI infrastructure that is over an order of magnitude more powerful than the fastest supercomputer in India today. NVIDIA will provide access to its most advanced GH200 Grace Hopper Superchip and NVIDIA DGX™ Cloud, an AI supercomputing service in the cloud. 

Reliance said that it will create AI applications and services for their 450 million Jio customers and provide energy-efficient AI infrastructure to scientists, developers and startups across India.

Similarly, NVIDIA partnered with Tata to build an AI supercomputer powered by the next-generation NVIDIA® GH200 Grace Hopper Superchip. TCS will utilize this AI infrastructure to develop and process generative AI applications. Additionally, TCS announced plans to upskill its 600,000-strong workforce through this partnership. 

In a recent earnings call, TCS reported having over $1.5 billion in its AI pipeline, encompassing 270 projects. 

Last year, Infosys expanded its alliance with NVIDIA to train 50,000 employees on NVIDIA’s AI technology, integrating these tools with Infosys Topaz to create generative AI solutions for enterprises.

Similarly, Netweb Technologies also partnered with NVIDIA to manufacture NVIDIA’s Grace CPU Superchip and GH200 Grace Hopper Superchip MGX server designs. This partnership supports the Make in India initiative by building a local ecosystem to address demands around AI and accelerated computing applications for both government and private enterprises.

3. Partnership with Indian AI Startups 

It’s highly likely that Indian AI startups will also make their presence felt at the event. Earlier this year, Dhupar said that he found Krutrim, Sarvam AI, and Immersio to be the three ‘most-exciting’ AI startups from India.

NVIDIA chief Huang believes the future of AI lies in physical AI. Recently, Bengaluru-based startup Control One launched India’s first physical AI agent for the global market. They released a video showcasing this agent, which responds to voice commands via a unique operating system. Control One has already integrated this OS into a forklift.

Control One is also an NVIDIA Inception Partner, which grants the startup access to cutting-edge GPU technology and crucial expertise for developing and scaling its AI systems.

Recently, another Indian AI startup, KissanAI, got accepted into the NVIDIA Inception Program.

Who knows, NVIDIA might partner with People+ AI’s Open Cloud Compute as well. OCC seeks to create an open network of compute resources, making it easier for businesses, especially startups, to access the compute power they need without being locked into specific cloud providers.

Meanwhile, NVIDIA recently partnered with Thapar Institute of Engineering and Technology to advance AI education and research. 

Through this technical collaboration, the university will host formidable infrastructure: its current 227 petaflops of AI performance will be expanded to over 500 petaflops, tailored for the most demanding AI and deep learning tasks. NVIDIA might follow suit by partnering with more universities. 

4. Made In India PCs 

Recently, NVIDIA announced a collaboration with six Indian PC builders—The MVP, Ant PC, Vishal Peripherals, Hitech Computer Genesys Labs, XRig, and EliteHubs—to launch Made-in-India PCs equipped with RTX AI. This initiative aims to bring advanced AI technology to Indian gamers, creators, and developers.

“Our vision is deeply rooted in the commitment to India’s future in AI and computing. With India’s AI market projected to reach $6 billion by 2027, the opportunity is immense,” said Dhupar. 

“There is an opportunity for India to be the capital of intelligence. The country has the skill sets and talent that understand how to work with a computer,” he added.

The new PCs are part of NVIDIA’s ongoing commitment to gaming and technological advancement. The inclusion of RTX AI technology in these systems offers gamers enhanced performance and visual experiences. 

5. Partnership with Indian Govt

Given Huang’s previous engagements with Indian leaders, including a meeting with Prime Minister Narendra Modi, his address may also touch upon NVIDIA’s plans for collaboration with India in AI and chip manufacturing. 

The Indian government is actively supporting AI development through the IndiaAI mission, launched in March 2024, which aims to position India as a global AI leader by investing in infrastructure and supporting startups. The mission includes an INR 10,300 crore investment to expand AI infrastructure and make GPUs more accessible. As many as 10,000 GPUs will be made available to startups and a marketplace will be created to benefit R&D facilities and startups.

Humanoids: The New Employees Who Work Cheap and Never Complain | https://analyticsindiamag.com/ai-origins-evolution/owning-a-humanoid-will-soon-be-cheaper-than-humans/ | Thu, 08 Aug 2024 13:38:15 +0000

Human labour can rack up additional costs, something that cannot happen with a humanoid.  

The post Humanoids: The New Employees Who Work Cheap and Never Complain  appeared first on AIM.


Brett Adcock, founder of Figure AI, predicted that everyone will own a robot in the future, much like everyone owns a car or phone today.

Interestingly, the robotics company unveiled its second-generation humanoid robot, Figure 02. The company said it is one step closer to its goal of selling production humanoids to industrial users, with the newer design refining every element of the original Figure 01.

The first-generation robot, Figure 01, took its first steps within a year of its development. As technology advances, owning a humanoid could indeed become more cost-effective than employing human workers.

While the initial investment in robotics may be high, the long-term savings in wages, benefits, and training can be substantial. Meanwhile, Figure AI has secured both investment and a strong partnership deal from OpenAI.

According to a report, Goldman Sachs estimates that the cost of the Figure 01 humanoid is around $30,000 to $150,000 per unit. But with scaled production and wider adoption in factories, the cost could come down in the long run. 

Interestingly, labour costs at major US automakers like Ford, GM, and Stellantis are approximately $64 per hour, though this is expected to rise to $150 per hour. These figures include wages, healthcare expenses, bonuses, and other benefits. Furthermore, human labour can rack up additional costs, something that cannot happen with a humanoid.  
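A rough back-of-the-envelope comparison of these figures (the single-shift utilisation assumption is ours, and it ignores maintenance, energy, and financing):

```python
def breakeven_hours(robot_cost_usd: float, labour_rate_per_hour: float) -> float:
    """Hours of displaced labour after which the robot's purchase
    price is recouped (maintenance, energy, and financing ignored)."""
    return robot_cost_usd / labour_rate_per_hour

# Goldman Sachs' estimated range for Figure 01, against ~$64/hour labour:
low = breakeven_hours(30_000, 64)    # 468.75 hours, under 3 months of 40-hour weeks
high = breakeven_hours(150_000, 64)  # 2,343.75 hours, a little over a working year
```

Even at the top of the estimated price range, the payback window is on the order of a year of full-time labour, which is why the cost argument gets so much attention.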

Bengaluru-based Control One shared similar sentiments. The startup has focused on the warehousing sector, where there is a significant labour shortage and a high demand for automation-based solutions, especially in the global market.

“The warehousing market is facing a huge labour crisis. Our system enables one person to manage multiple robots, effectively multiplying their productivity,” said Pranavan S, Control One’s founder and CEO, in an interaction with AIM.

However, this shift raises ethical questions about job displacement and the value of human labour. Balancing economic efficiency with social responsibility will be crucial as we navigate this transformation, ensuring that technological progress benefits society as a whole without exacerbating inequality.

Furthermore, Elon Musk recently stated that Tesla aims to produce “genuinely useful” humanoid robots to start operating in its factories next year.

Humanoid Race

Back in August 2021, Tesla introduced a prototype humanoid robot, which Musk believes could help humanity achieve quite ambitious goals. Later, at an event in October 2022, Musk expressed his hope to eventually produce millions of Optimus robots.

Optimus, a Tesla-built humanoid robot, weighs 56 kg, stands 170 cm tall, and is priced under $20,000 (€18,000) for mass production.

However, the robot had limited capabilities and Musk stated that he wouldn’t assign it more complex tasks because he “didn’t want it to fall on its face.” He said, “There’s still a lot of work to be done to refine Optimus. I think Optimus is going to be incredible in five or ten years.”  

Robot for the House

Bindu Reddy, CEO of Abacus.AI, posted on X, saying that the next trillion dollar company will be the one that ships a mass-market humanoid robot under $30k, capable of handling household chores like laundry, loading the dishwasher and cooking. 

While humanoids may be employed in factories and warehouses, a bigger application probably lies in the household. For instance, Figure’s humanoid can be dubbed a robot housekeeper capable of performing a variety of household chores.

This humanoid robot is designed as a general purpose solution capable of thinking, learning, and interacting with its environment. It is set to support the global supply chain and address labour shortages by performing structured and repetitive tasks. 

The robot utilises AI and machine learning algorithms to understand and execute tasks such as cleaning and organising. Equipped with sensors and robotic arms, it can navigate complex home environments, ensuring efficiency and precision. 

Interestingly, the company shared an image of the humanoid being shipped to its first customer, which turned out to be automotive giant BMW.

Similarly, the German robotics company NEURA has unveiled a video of their humanoid robot, 4NE-1. It is one of the first to participate in the early access NVIDIA Humanoid Robot Developer Programme.

Not That Easy

Several users have expressed their views about how owning humanoid robots would be impractical for certain tasks and also very expensive. 

Despite the advancements predicted, building a robot that matches human intelligence remains extremely costly. Certain aspects of the manufacturing process cannot be easily scaled, and it takes years to develop a single human-like robot.

“Robots need to be able to deal with uncertainty if they’re going to be useful to us in the future. They need to be able to deal with unexpected situations and that’s sort of the goal of a general purpose or multi-purpose robot, and that’s just hard,” said Robert Playter, CEO of Boston Dynamics, in an interview with Lex Fridman last year.

Playter emphasised the immense difficulty of advancing robotics. Boston Dynamics, which started developing general-purpose robots in the early 2000s, only introduced its humanoid robot Atlas in 2013. Besides facing challenges in securing investments for robotics, training robots has always been a significant hurdle.

Simpler Robots 

While humanoids are the advanced version, simpler home-grown alternative robotic solutions are being developed. For instance, one Indian student has developed an AI-powered machine that completes homework in his handwriting. 

The machine, developed by Devadath PR, a robotics and automation engineering undergrad student, has now garnered significant attention, with over 1,000 people inquiring about purchasing it. He built the device using parts from his old 3D printer and is now working on a second prototype.

Is OpenAI Intel’s Biggest Regret Ever? | https://analyticsindiamag.com/ai-origins-evolution/is-openai-intels-biggest-regret-ever/ | Thu, 08 Aug 2024 11:10:36 +0000

Intel had the opportunity to acquire a 15% stake in OpenAI for $1 billion in 2017.

The post Is OpenAI Intel’s Biggest Regret Ever? appeared first on AIM.


Intel has been going through some trouble lately. The company’s recent earnings fell short of analysts’ expectations, resulting in a 26% single-day selloff that brought its market cap below $100 billion for the first time in three decades. 

Also, CEO Patrick Gelsinger announced to employees last week that the company would cut about 15,000 jobs, roughly 15% of its workforce, as part of a significant cost-cutting measure. All this while he has also been posting proverbs from the Bible, which made people worry even more about the company. 

But this unfortunate time could have played out differently had the company made that one single investment back in 2017. According to reports, in 2017 and 2018, the tech giant had the opportunity to acquire a 15% stake in OpenAI for $1 billion. Additionally, Intel could have secured another 15% stake by offering OpenAI its hardware at cost, according to the sources.

This would have given Intel a 30% stake in OpenAI, arguably the leader in generative AI for the past few years. OpenAI sought Intel as an investor to reduce its dependence on NVIDIA, whose chips are possibly powering the entire AI world right now.

A Bet Gone Wrong

Intel declined the offer, partly because it doubted the near-term viability of generative AI models in 2018 and believed this would prevent a timely return on investment. 

Cut to the present, Intel is pushing hard to establish a strong presence in the AI industry. Once a world leader in chips, Intel failed to capitalise on the AI boom that propelled NVIDIA to become one of the most valuable companies globally. 

But it is not just Gelsinger who is possibly praying for his business. NVIDIA chief Jensen Huang is also reportedly paranoid about the future of his company. In a recent podcast with Lex Fridman, Perplexity AI chief Aravind Srinivas revealed that he once asked Huang how he handles success and stays motivated. 

To this, Huang had replied, “I am paranoid about going out of business. Every day I wake up in a sweat, thinking about how things could go wrong.” Huang explained that in the hardware industry, planning two years in advance is crucial because fabricating chips takes time. 

“You need to have the architecture ready. A mistake in one generation of architecture could set you back by two years compared to your competitor,” Huang said. This definitely puts into perspective how a single investment could have changed Intel’s fortune, since even the CEO of the leading company is paranoid about things going wrong at any moment.

But Intel is Not a Sitting Duck

However, there is some positive news from Intel as well, which shows that the company is not giving up. 

For years, Intel focused on enabling CPUs, like those in laptops and desktops, for AI processes, rather than prioritising GPUs, which are more effective for AI calculations. In contrast, NVIDIA and AMD have thrived by concentrating on GPUs, while Intel largely missed the opportunity. 

However, in the third quarter, Intel plans to release its Gaudi 3 AI chip, which Gelsinger claims will outperform NVIDIA’s H100 GPUs, possibly even challenging NVIDIA’s Blackwell architecture.

Continuing with its focus on chips, Intel has also announced that Panther Lake and Clearwater Forest, the leading products on Intel 18A, are now out of the fab and running operating systems. Both are slated for production next year.

Several observers believe Gelsinger can get Intel back on its feet after it nearly lost the AI race. Had he been leading the company in 2017-18, the failed OpenAI deal might well have gone the other way. 

In May, Gelsinger said the company’s AI strategy was on the right track, a claim that made many think Intel was in denial. “We’re really starting to see that pipeline of activity convert,” he said.

But apart from GPUs, Intel’s CPU and NPU plans still appear strong, along with its focus on edge use cases and on-device AI. Since Intel holds the majority share of the laptop processor market, and the future of AI is racing towards smaller models, Intel might emerge within a year or two as the leader spearheading the AI PC game.

Intel anticipates shipping 40 million AI PCs in 2024, featuring over 230 designs ranging from ultra-thin PCs to handheld gaming devices. There are no PCs without Intel – that’s for sure.

Failed Deals and Poor Quarters are Part of the Game

Undoubtedly, failed AI deals are part of the business. Recently, Elon Musk’s xAI cancelled its $10 billion deal with Oracle, and Apple has reportedly walked away from an AI partnership with Meta.

Intel is not the first to fail to convert an OpenAI deal. Not many are aware that IT consulting giant Infosys, together with Musk, AWS, YC Research, and a few others, collectively pledged a sizable $1 billion to OpenAI back in 2015, when the latter began as a non-profit organisation. But the donation never turned into an investment.

Perhaps the bigger regret is OpenAI’s, for not having Intel as a partner. With the cost of running its NVIDIA-powered business and AI offerings weighing heavily on its revenue, Intel could have helped OpenAI build its own hardware by now.

Then again, owning such a large part of OpenAI might have been a failed strategy for Intel. With Microsoft now reportedly holding a 49% stake in OpenAI, things could have been quite different for all three companies. 

Moreover, Intel has a soft spot for India. It has partnered with several Indian companies, such as Krutrim, Bharti Airtel, and Zoho, to provide its enterprise and data centre computing services. Gelsinger’s interest in India may yet put Intel in the driving seat of the generative AI race. 

The post Is OpenAI Intel’s Biggest Regret Ever? appeared first on AIM.

]]>
https://analyticsindiamag.com/ai-origins-evolution/is-openai-intels-biggest-regret-ever/feed/ 0
Wait… Did OpenAI Just Solve ‘Jagged Intelligence’?  https://analyticsindiamag.com/ai-origins-evolution/wait-did-openai-just-solve-jagged-intelligence/ https://analyticsindiamag.com/ai-origins-evolution/wait-did-openai-just-solve-jagged-intelligence/#respond Wed, 07 Aug 2024 12:36:48 +0000 https://analyticsindiamag.com/?p=10131786

“9.11 > 9.9” is now in the OpenAI docs as a problem solved by requesting JSON-structured output to separate final answers from supporting reasoning.

The post Wait… Did OpenAI Just Solve ‘Jagged Intelligence’?  appeared first on AIM.

]]>

Today, OpenAI released its latest update featuring Structured Outputs in its API, a capability it claims improves reliability by ensuring precise and consistent adherence to output schemas. In OpenAI’s evals, gpt-4o-2024-08-06 achieved “100% reliability” in matching the requested output schemas, ensuring accurate and consistent data generation.

Amusingly, the OpenAI docs include the “9.11 > 9.9” problem as an example resolved by using JSON-structured output to separate the final answer from the supporting reasoning. This was a nod to ‘Jagged Intelligence’, the term Andrej Karpathy coined for LLMs’ tendency to stumble on trivially simple problems. 

This new feature helps ensure that the responses from models follow a specific set of rules (called JSON Schemas) provided by developers. OpenAI said that they took a deterministic, engineering-based approach to constrain the model’s outputs to achieve 100% reliability.
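To make the mechanics concrete, here is a sketch of what such a request body looks like, assembled locally with no network call. The field layout (a `response_format` carrying a `json_schema` type and a `strict` flag) follows OpenAI’s structured-outputs announcement; verify the exact shape against the current API reference before relying on it.

```python
# A JSON Schema the model's output must follow: the final answer and the
# supporting reasoning are kept in separate fields.
answer_schema = {
    "type": "object",
    "properties": {
        "reasoning": {"type": "string"},
        "final_answer": {"type": "string"},
    },
    "required": ["reasoning", "final_answer"],
    "additionalProperties": False,
}

# Request body shaped after OpenAI's structured-outputs announcement.
request_body = {
    "model": "gpt-4o-2024-08-06",
    "messages": [
        {"role": "user", "content": "Which is larger: 9.11 or 9.9?"}
    ],
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "math_answer",
            "strict": True,
            "schema": answer_schema,
        },
    },
}

print(request_body["response_format"]["type"])  # json_schema
```

With `strict` set, the API is expected to reject any completion that does not parse against `answer_schema`, which is what the “100% reliability” claim refers to.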

“OpenAI finally rolled out structured outputs in JSON, which means you can now enforce your model outputs to stick to predefined schemas. This is super handy for tasks like validating data formats on the fly, automating data entry, or even building UIs that dynamically adjust based on user input,” posted a user on X.

OpenAI has used the technique of constrained decoding. Normally, when a model generates text, it can choose any word or token from its vocabulary. This freedom can lead to mistakes, such as adding incorrect characters or symbols.

Constrained decoding is a technique used to prevent these mistakes by limiting the model’s choices to tokens that are valid according to a specific set of rules (like a JSON Schema).
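A toy illustration of the idea (a deliberately tiny sketch, not OpenAI’s implementation): at every step the sampler may only pick from the tokens the target format still allows, so malformed output is impossible by construction.

```python
VOCAB = ['{', '}', '"answer"', ':', '"9.11"', '"9.9"', 'banana']

# Allowed tokens at each position for the fixed shape {"answer": <value>}.
ALLOWED = [
    {'{'},
    {'"answer"'},
    {':'},
    {'"9.11"', '"9.9"'},   # only valid value tokens are permitted here
    {'}'},
]

def constrained_decode(score):
    """score(token) -> float; pick the best *allowed* token at each step."""
    out = []
    for allowed in ALLOWED:
        candidates = [t for t in VOCAB if t in allowed]  # mask the rest
        out.append(max(candidates, key=score))
    return ''.join(out)

# Even a scorer that loves 'banana' cannot emit it: the mask forbids it.
silly_score = lambda t: 1.0 if t == 'banana' else len(t) * 0.1
print(constrained_decode(silly_score))  # prints {"answer":"9.11"}
```

Production systems apply the same masking idea against a full JSON Schema compiled into a grammar, rather than a hand-written position list.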

A Stop-Gap Mechanism for Reasoning?

Arizona State University professor Subbarao Kambhampati argues that while LLMs are impressive tools for creative tasks, they have fundamental limitations in logical reasoning and cannot guarantee the correctness of their outputs. 

He said that GPT-3, GPT-3.5, and GPT-4 are poor at planning and reasoning, which he believes involves time and action. These models struggle with transitive and deductive closure, with the latter involving the more complex task of deducing new facts from existing ones.

Kambhampati aligns with Meta AI chief Yann LeCun, who believes that LLMs won’t lead to AGI and that researchers should focus on gaining animal intelligence first. 

“Current LLMs are trained on text data that would take 20,000 years for a human to read. And still, they haven’t learned that if A is the same as B, then B is the same as A,” LeCun said.

He has even advised young students not to work on LLMs. LeCun is bullish on self-supervised learning and envisions a world model that could learn independently.

“In 2022, while others were claiming that LLMs had strong planning capabilities, we said that they did not,” said Kambhampati, adding that their accuracy was around 0.6%, meaning they were essentially just guessing.

He further added that LLMs are heavily dependent on the data they are trained on. This dependence means their reasoning capabilities are limited to the patterns and information present in their training datasets. 

Explaining this phenomenon, Kambhampati said that when the old Google PaLM was introduced, one of its claims was its ability to explain jokes. He said, “While explaining jokes may seem like an impressive AI task, it’s not as surprising as it might appear.”

“There are humour-challenged people in the world, and there are websites that explain jokes. These websites are part of the web crawl data that the system has been trained on, so it’s not that surprising that the model could explain jokes,” he explained. 

He added that LLMs like GPT-4, Claude, and Gemini are ‘stuck close to zero’ in their reasoning abilities. They are essentially guessing plans for the ‘Blocks World’ concept, which involves ‘stacking’ and ‘unstacking’, he said.

This is consistent with a recent study by DeepMind, which found that LLMs often fail to recognise and correct their mistakes in reasoning tasks. 

The study concluded that “LLMs struggle to self-correct their reasoning without external feedback. This implies that expecting these models to inherently recognise and rectify their reasoning mistakes is overly optimistic so far”.

Meanwhile, OpenAI reasoning researcher Noam Brown also agrees. “Frontier models like GPT-4o (and now Claude 3.5 Sonnet) may be at the level of a “smart high schooler” in some respects, but they still struggle on basic tasks like tic-tac-toe,” he said.

Interestingly, Apple recently relied on plain prompt engineering for several of its Apple Intelligence features, and someone on Reddit unearthed the prompts. 

The Need for a Universal Verifier 

To tackle the problem of accuracy, OpenAI has introduced a prover-verifier model to enhance the clarity and accuracy of language model outputs.

In the Prover-Verifier Games, two models are used, the Prover, a strong language model that generates solutions, and the Verifier, a weaker model that checks these solutions for accuracy. The Verifier determines whether the Prover’s outputs are correct (helpful) or intentionally misleading (sneaky).
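A drastically simplified sketch of that setup (a hypothetical toy, not OpenAI’s training code): a “prover” proposes worked answers, some deliberately sneaky, and a weaker “verifier” only checks the final arithmetic, ignoring the prose.

```python
import random

def prover(a, b, sneaky=False):
    """Return (claimed_sum, explanation); a sneaky prover lies plausibly."""
    answer = a + b + (1 if sneaky else 0)
    return answer, f"{a} plus {b} equals {answer}"

def verifier(a, b, claimed):
    """Weak check: recompute the sum only; the explanation is ignored."""
    return claimed == a + b

random.seed(0)
helpful, caught = 0, 0
for _ in range(100):
    a, b = random.randint(0, 9), random.randint(0, 9)
    sneaky = random.random() < 0.5
    claimed, _ = prover(a, b, sneaky=sneaky)
    ok = verifier(a, b, claimed)
    if sneaky and not ok:
        caught += 1          # sneaky output rejected
    if not sneaky and ok:
        helpful += 1         # helpful output accepted

print(helpful, caught)
```

In this toy, the verifier catches every sneaky answer because arithmetic is trivially checkable; Kambhampati’s objection below is precisely that most real domains lack such a cheap, reliable verification signal.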

Kambhampati said, “You can use the world itself as a verifier, but this idea only works in ergodic domains where the agent doesn’t die when it’s actually trying a bad idea.”

He further said that even with end-to-end verification, a clear signal is needed to confirm whether the output is correct. “Where is this signal coming from? That’s the first question. The second question is, how costly will this be?”

Also, OpenAI is currently developing a model with advanced reasoning capabilities, known as Q* or Project Strawberry. Rumours suggest that, for the first time, this model has succeeded in learning autonomously using a new algorithm, acquiring logical and mathematical skills without external influence.

Kambhampati is somewhat sceptical about this development as well. He said, “Obviously, nobody knows whether anything was actually done, but some of the ideas being discussed involve using a closed system with a verifier to generate synthetic data and then fine-tune the model. However, there is no universal verifier.”

Chain of Thought Falls Short

Regarding the Chain of Thought, Kambhampati said that it basically gives the LLM advice on how to solve a particular problem. 

Drawing an analogy with the Blocks World problem, he explained that if you train an LLM on three- or four-block stacking problems, it can improve on those specific problems. Increase the number of blocks, however, and its performance drops sharply. 

Kambhampati quipped that Chain of Thought and LLMs remind him of the old proverb, “Give a man a fish, and you feed him for a day; teach a man to fish, and you feed him for a lifetime.” 

“Chain of Thought is actually a strange version of this,” he said. “You have to teach an LLM how to catch one fish, then how to catch two fish, then three fish, and so on. Eventually, you’ll lose patience because it’s never learning the actual underlying procedure,” he joked.

Moreover, he said that this doesn’t mean AI can’t perform reasoning. “AI systems that do reasoning do exist. For example, AlphaGo performs reasoning, as do reinforcement learning systems and planning systems. However, LLMs are broad but shallow AI systems. They are much better suited for creativity than for reasoning tasks.” 
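Kambhampati’s point that exact reasoners exist is easy to demonstrate: a few lines of classical search solve Blocks World instances outright. Below is a minimal sketch (stacks and moves simplified from the standard formulation, with empty stacks standing in for the table):

```python
from collections import deque

# States are tuples of stacks (bottom -> top); a move takes the top block
# of one stack and places it on another. Breadth-first search guarantees
# a shortest plan -- exact reasoning, no guessing.

def moves(state):
    stacks = [list(s) for s in state]
    for i, src in enumerate(stacks):
        if not src:
            continue
        block = src[-1]
        for j in range(len(stacks)):
            if i == j:
                continue
            nxt = [list(s) for s in stacks]
            nxt[i].pop()
            nxt[j].append(block)
            yield f"move {block} to stack {j}", tuple(tuple(s) for s in nxt)

def plan(start, goal):
    start = tuple(tuple(s) for s in start)
    goal = tuple(tuple(s) for s in goal)
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        for action, nxt in moves(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [action]))
    return None  # goal unreachable

# Unstack C from A, then build the tower A-B-C on stack 0.
print(plan([["A", "C"], ["B"], []], [["A", "B", "C"], [], []]))
```

The search is exhaustive and provably correct on small instances, which is exactly the property Kambhampati says LLMs lack: scaling the block count slows this planner down, but never makes it wrong.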

Google DeepMind’s AlphaProof and AlphaGeometry, based on a neuro-symbolic approach, recently won a Silver Medal at the International Maths Olympiad. Many feel, to an extent, that neuro-symbolic AI will keep the generative AI bubble from bursting.

Last year, AIM discussed the various approaches taken by big-tech companies, namely OpenAI, Meta, Google DeepMind, and Tesla, in the pursuit of AGI. Since then, tremendous progress has been made. 

Lately, OpenAI also appears to be working on causal AI, judging by its job postings, such as those for data scientists, which emphasise expertise in causal inference.

LLM-based AI Agents will NOT Lead to AGI 

Recently, OpenAI developed a structured framework to track the progress of its AI models toward achieving artificial general intelligence (AGI). OpenAI CTO Mira Murati claimed that GPT-5 will reach a PhD level of capability, while Google’s Logan Kilpatrick anticipates AGI will emerge by 2025.

Commenting on the hype around AI agents, Kambhampati said, “I am kind of bewildered by the whole agentic hype because people confuse acting with planning.”

He further explained, “Being able to make function calls doesn’t guarantee that those calls will lead to desirable outcomes. Many people believe that if you can call a function, everything will work out. This is only true in highly ergodic worlds, where almost any sequence will succeed and none will fail.”

The post Wait… Did OpenAI Just Solve ‘Jagged Intelligence’?  appeared first on AIM.

]]>
https://analyticsindiamag.com/ai-origins-evolution/wait-did-openai-just-solve-jagged-intelligence/feed/ 0
Nagarro Targets Net Zero Emissions by 2050 https://analyticsindiamag.com/ai-origins-evolution/nagarro-targets-net-zero-emissions-by-2050/ https://analyticsindiamag.com/ai-origins-evolution/nagarro-targets-net-zero-emissions-by-2050/#respond Wed, 07 Aug 2024 07:03:51 +0000 https://analyticsindiamag.com/?p=10131715

To achieve this, the company has come up with an ‘eco-digital’ strategy.

The post Nagarro Targets Net Zero Emissions by 2050 appeared first on AIM.

]]>

The carbon footprint of the global information technology & computing industry almost equals or, at times, exceeds that of the aviation industry. The bad news is that it is going to substantially increase as generative AI gets embedded into the global economy.

Therefore, there is a greater need for IT companies to adopt sustainable practices and innovative solutions to reduce their environmental impact. 

Nagarro, a global player in digital engineering with a strong presence in India, has pledged net-zero carbon emissions by 2050. To achieve this, the company has come up with an ‘eco-digital’ strategy guided by three fundamental principles – green, inclusive, and ethical. 

“I see it playing a crucial role because the eco-digital strategy is designed not just to change technology but to shift human behaviour. With a workforce of 18,000 employees, the objective is to ensure that everyone understands and embraces this change towards sustainability,” Ashish Agarwal, the global BU head and custodian of sustainability at Nagarro, told AIM.

This becomes highly critical because the IT industry was quick to integrate generative AI into their existing solutions and has reported multiple proofs of concept (PoC) and revenue directly from generative AI.

Nagarro, which reported over $1 billion in revenue in FY 2023 for the first time, has also launched various LLM-powered solutions for its clients.

Generative AI Carbon Footprint

However, these models, which primarily run on the cloud, require significant energy. Google, which has invested heavily in generative AI and is the third-biggest cloud service provider in the world, saw its carbon emissions soar by 50% in the generative AI era. 

According to reports, it costs OpenAI millions of dollars every day just to keep ChatGPT running. 

“When generating not just text, but especially new videos or images, the energy consumption can be extremely high. Therefore, it’s crucial to use such technologies judiciously and evaluate whether they are truly necessary.

“Cloud computing plays a major role in this and is often used continuously throughout the day. However, cloud resources do not always automatically shut down or scale back, leading to the use of capacity that may remain idle. This can result in a substantial increase in energy consumption due to the scale of cloud operations,” Agarwal said.

As part of its eco-digital strategy, Nagarro plans to implement sustainable practices to ensure the responsible use of AI. Some of the best practices include evaluating whether every piece of software or API call is truly necessary when making decisions. 

“Consider if some functions can be managed locally rather than through multiple API calls. Additionally, assess whether the image or video quality is required for the end user’s needs and the context in which it will be used. Making these considerations can lead to more efficient and responsible use of resources,” Agarwal added.

Nagarro has also developed a dashboard that enables both customers and the company to monitor their cloud and energy consumption. It provides insights into policies that the customer can implement to reduce their carbon footprint. 

The dashboard visually displays current carbon consumption and projects potential reductions based on specific policies, which are automatically suggested based on their cloud usage patterns.

Green Software 

As part of its eco-digital strategy, Nagarro is also exploring ways to build software that consumes less energy. Besides designing energy-efficient software, the IT company is ensuring that its software is backward compatible. 

The goal, according to Agarwal, is to extend the lifespan of devices such as laptops and mobile phones. This is particularly relevant because, in the past, enterprises often discarded old devices every few years. 

“To address this, it is important to design software that is both backward and forward compatible, allowing devices to be used effectively in the long term while accommodating new technologies as they emerge,” he said.

The best practices also include deciding which programming language to use. For instance, C is far more energy-efficient than Python because it compiles directly to machine code and executes very quickly. There are also tools, such as Cython, that can translate Python code into a lower-level form closer to machine code.
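The underlying point, that the same computation costs far fewer CPU cycles when it runs in compiled code, can be seen even inside Python itself. In this sketch, CPU time stands in as a crude proxy for energy: Python’s built-in `sum()` loops in C inside the interpreter, while the explicit loop executes bytecode step by step.

```python
import timeit

data = list(range(100_000))

def python_loop():
    total = 0
    for x in data:       # each iteration runs interpreted bytecode
        total += x
    return total

def builtin_sum():
    return sum(data)     # the loop runs in compiled C code

t_loop = timeit.timeit(python_loop, number=50)
t_sum = timeit.timeit(builtin_sum, number=50)
print(f"interpreted loop: {t_loop:.3f}s, C-backed sum: {t_sum:.3f}s")
```

The two functions produce identical results; only the time (and hence energy) spent differs, which is the trade-off Agarwal is pointing at.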

Moreover, Nagarro is training its employees in best practices by partnering with Terra.do, an edtech startup and climate careers platform, to develop a specialised curriculum. Approximately 2,000 Nagarro employees are currently receiving training through this programme.

“In terms of accessibility, global standards are well-established and effectively addressed. However, on the software engineering side, standards are not as developed. This is an area that will need to evolve and improve over time,” Agarwal said.

Ensuring Eco Digital Success 

Besides training its employees, Nagarro’s approach includes assigning a sustainability number to every project. Currently, the company gives every engineering project a carbon number, reflecting factors such as Microsoft Office 365 usage, hardware, travel, and hotel stays. 

“While we aim to advance toward a sustainability rating in the future, our current approach involves tracking the carbon footprint of all our engineering projects. Although this is still an early stage in our journey, we ensure that every project has a carbon number associated with it,” Agarwal said.
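As a hypothetical sketch of such a per-project carbon number, summing emissions across the factors mentioned above (the emission factors below are invented placeholders, not Nagarro’s figures):

```python
# kgCO2e per unit -- illustrative placeholders only.
EMISSION_FACTORS_KG = {
    "office365_user_months": 2.0,
    "laptop_months": 8.0,
    "flight_hours": 90.0,
    "hotel_nights": 15.0,
}

def project_carbon_kg(usage):
    """usage: {factor: units}; returns the project's total kgCO2e."""
    return sum(EMISSION_FACTORS_KG[k] * units for k, units in usage.items())

demo = {"office365_user_months": 60, "laptop_months": 60,
        "flight_hours": 10, "hotel_nights": 12}
print(project_carbon_kg(demo))  # 120 + 480 + 900 + 180 = 1680.0
```

Even a simple roll-up like this makes projects comparable, which is the prerequisite for the sustainability rating Agarwal describes.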

Though these are ambitious steps, the carbon footprint and energy consumption associated with training and inferencing LLMs remain substantial. While Nagarro’s efforts may appear modest, similar adoption across the industry could make a meaningful impact.

The post Nagarro Targets Net Zero Emissions by 2050 appeared first on AIM.

]]>
https://analyticsindiamag.com/ai-origins-evolution/nagarro-targets-net-zero-emissions-by-2050/feed/ 0
Soon, You Will Be Able to Control Your Reality with Your Thoughts  https://analyticsindiamag.com/ai-origins-evolution/soon-you-will-be-able-to-control-your-reality-with-your-thoughts/ https://analyticsindiamag.com/ai-origins-evolution/soon-you-will-be-able-to-control-your-reality-with-your-thoughts/#respond Tue, 06 Aug 2024 08:35:15 +0000 https://analyticsindiamag.com/?p=10131568

If not, AI companies will

The post Soon, You Will Be Able to Control Your Reality with Your Thoughts  appeared first on AIM.

]]>

Your thoughts shape your reality, whether you realise it or not. Henry Ford famously said, “Whether you think you can, or you think you can’t—you’re right.” While this statement encapsulates profound truth and wisdom, here come AI companies trying their best to control your thoughts, feelings, and how you navigate in this present reality. 

A general belief is that humans have an inner voice, an internal dialogue that helps in decision-making – the ‘I think, therefore, I am’ ideology. 

Now, with AI in the picture, many are bound to turn to it for advice on relationships, finances, health, and career development. Simply put, AI is getting louder as the third voice, alongside the external voices of our peers and our inner voice. You can even talk to your AI friend (real, not imaginary) for $99. 

Recently, OpenAI rolled out its advanced voice feature in ChatGPT to selected users, and the response has been very positive. “I spent an hour having an in-depth, zero-latency conversation about molecular biology with ChatGPT using the new voice mode while driving yesterday,” said Andrew Mayne, the founder of Interdimensional AI. 

The OpenAI voice feature can sing, count numbers quickly, beatbox, help you practice dialogues, tell stories, and more. Overall, it’s like having a super knowledgeable friend.

Meanwhile, Meta recently launched AI Studio powered by the company’s latest model, Llama 3.1. This platform allows anyone to create and discover AI characters and enables creators to build AI as an extension of themselves to reach more fans. 

Users can create an AI that teaches them how to cook, assists with Instagram captions, and generates memes to entertain friends—the possibilities are endless. This will definitely lead to more human-AI conversations. 

Not only this, Meta already knows what we think and like. It is well known that Meta employs machine learning to predict the kind of content users prefer. With the advent of AI agents, it will have even greater control over our thoughts and feelings. 

Previously, Meta used Llama 2 for personalisation and recommendations, as revealed by Amey Porobo Dharwadker, AI engineer at Meta, in an exclusive interview with AIM.

Meanwhile, Google has also entered the personal AI agents market. The tech giant recently hired the co-founders of Character.AI, Noam Shazeer and Daniel De Freitas, along with several other key team members. 

Character.AI allows users to create their own chatbots, defining their personalities, traits, and backstories. These characters can be based on real people, fictional characters, or entirely new creations. Users can also chat with a variety of characters, including celebrities, and historical and fictional figures. 

Now, with AI offering opinions to humans, it is becoming increasingly difficult to ignore this third voice, the ‘AI voice’, that might eventually evolve into the AGI voice – who knows?

What about the Inner Voice? 

AI is permeating our inner voice as well. Neuralink has successfully implanted its device in a second patient, designed to enable paralysed individuals to use digital devices solely through thinking. Elon Musk revealed this in a recent podcast with Lex Fridman. 

He further said that 400 electrodes in the second implant are providing signals, even though Neuralink’s website states that its implant uses 1,024 electrodes. 

“I don’t want to jinx it, but it seems to have gone extremely well with the second implant. There’s a lot of signal and many electrodes, it’s working very well. In a few years, it’s going to be gigantic because we’ll dramatically increase the number of electrodes,” said Musk. 

He added that the company plans to have 10 Neuralink implants in place by the end of this year, depending on regulatory approval. 

“The long-term aspiration of Neuralink is to improve the AI-human symbiosis by increasing the bandwidth of the communication,” said Musk, explaining that if humans are too slow, AI is simply going to get bored waiting for you to spill out a few words. “If the AI can communicate at terabits per second and you’re communicating at, you know, bits per second, it’s like talking to a tree,” quipped Musk.

Musk further said that Neuralink might help with AI safety as well. “I think maybe within the next year or two, someone with a Neuralink implant will be able to outperform a pro gamer because the reaction time would be faster,” said Musk.

Noland Arbaugh, a 30-year-old man, became the first human to receive a Neuralink brain implant. Arbaugh now browses the web, plays computer games, and performs tasks that were previously impossible for him.

Apart from Neuralink, Neurotech startup Synchron recently announced that it has connected its brain implant to Apple’s Vision Pro headset. This breakthrough enables patients with limited physical mobility to control the device using only their thoughts.

Synchron is developing a brain-computer interface (BCI) designed to help patients with paralysis operate technology, such as smartphones and computers, with their minds. The company has implanted its BCI in six patients in the US and four in Australia. As these technologies continue to evolve, the third voice of AI will only grow stronger, reshaping our reality in unimaginable ways. Meta AI chief Yann LeCun believes this is already happening, stating, “AI platforms will control what everybody sees.”

The post Soon, You Will Be Able to Control Your Reality with Your Thoughts  appeared first on AIM.

]]>
https://analyticsindiamag.com/ai-origins-evolution/soon-you-will-be-able-to-control-your-reality-with-your-thoughts/feed/ 0
Can Observability Help Prevent Another CrowdStrike Outage? https://analyticsindiamag.com/ai-origins-evolution/could-observability-help-prevent-another-crowdstrike-outage/ https://analyticsindiamag.com/ai-origins-evolution/could-observability-help-prevent-another-crowdstrike-outage/#respond Tue, 06 Aug 2024 06:01:56 +0000 https://analyticsindiamag.com/?p=10131472 observability

Real-time feedback mechanisms in observability solutions notify teams immediately, reducing the mean time to detect and respond.

The post Can Observability Help Prevent Another CrowdStrike Outage? appeared first on AIM.

]]>
observability

The Microsoft-CrowdStrike outage on July 19 was possibly the biggest in IT history, costing Fortune 500 companies alone over $5 billion in losses. The outage, caused by a faulty update, brought call centres, hospitals, banks, and airports across the globe to a complete halt for a few hours.

The outage underscores the fragility of modern technology, revealing how vulnerabilities can disrupt critical systems and signalling the need for robust safeguards and resilience in digital infrastructure.

Observability players were quick to point out the importance of comprehensive monitoring and real-time analytics in detecting and mitigating such vulnerabilities, emphasising that enhanced visibility can prevent or minimise the impact of future disruptions.

AIM inquired with both established and new observability providers about whether their solutions can help prevent similar outages in the future. 

Rohit Ramanand, GVP of engineering (India) at New Relic, said that full-stack observability platforms provide real-time insights into system performance and health, making them invaluable for preventing outages, or mitigating them when they occur.

New Relic is one of the dominant players in the space, with over 80,000 customers worldwide and over 12,000 customers in India alone.

Observability Can Avert System Downtime

“Observability enhances operational efficiency through three key mechanisms. First, it enables early issue detection with real-time insights, allowing engineering teams to resolve problems before they impact customers,” Ramanand told AIM.

Second, it offers a unified source of truth, streamlining the process of identifying root causes during outages by consolidating data from various sources.

Lastly, AI-driven observability platforms leverage historical data to build predictive models, helping to foresee and mitigate similar issues in the future. This integrated approach ensures a more proactive and efficient management of potential disruptions.
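The early-detection mechanism described above can be sketched in a few lines (a toy rolling z-score detector, not any vendor’s implementation): keep a rolling window of a metric and alert when a new sample deviates sharply from recent behaviour.

```python
from collections import deque
from statistics import mean, stdev

def make_detector(window=30, threshold=4.0):
    history = deque(maxlen=window)
    def observe(value):
        alert = False
        if len(history) >= 10:  # need some history before judging
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                alert = True
        history.append(value)
        return alert
    return observe

detect = make_detector()
latencies = [100, 102, 99, 101, 98, 100, 103, 97, 101, 100, 99, 500]
alerts = [t for t, v in enumerate(latencies) if detect(v)]
print(alerts)  # the latency spike at index 11 trips the alert
```

Production platforms replace the z-score with learned baselines and seasonality models, but the shape, stream in, compare against recent history, alert on deviation, is the same.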

AIM also posed the same question to Middleware, a new-age startup based in San Francisco with roots in Ahmedabad. Sawaram Suthar, the founding director, echoed a similar sentiment. 

Suthar believes observability solutions can significantly help prevent a situation like the CrowdStrike outage. 

“Development and operations teams can collect metrics on performance, latency, and error rates, enabling proactive responses to anomalies. Furthermore, they can centralise logs to gain a unified view of system activity and streamline root cause analysis,” he said.

Suthar also adds that the real-time feedback mechanisms in observability solutions notify teams immediately, reducing the mean time to detect and respond. “We ourselves have helped companies achieve over 20% reduction in time to resolution,” he said.

“We’ve noticed that debugging often ends up accounting for 50% of a developer’s effort. With observability tools, they can focus on building applications, dedicating only about 10% of their time to debugging and problem resolution,” he added.

Can AI Help Enterprises Prepare Better?

Even if enterprises believe they are monitoring all aspects, they can still encounter blind spots without the right tools, highlighting the importance of full-stack observability. 

AI’s ability to examine historical data and make predictions could help organisations act early and take a preventive approach.

New Relic’s AI capabilities help enterprises monitor AI-specific metrics like token usage, costs, and response quality and integrate with traditional application performance monitoring.

Having an integrated view of metrics, events, traces, and logs simplifies and accelerates root cause identification. “Comprehensive application performance monitoring (APM) capabilities enhance anomaly detection, leading to quicker remediation,” Ramanand said.

Besides implementing robust monitoring and logging, enterprises should develop automated alerting and notification systems, regularly conduct system audits and develop disaster recovery and business continuity plans, according to Suthar.

However, sometimes an outage might be inevitable. Enterprises should be well-equipped to mitigate risk and minimise the impact through effective response strategies and robust contingency plans.

“With observability, organisations gain a deeper understanding of systems to identify how to mitigate incidents when they do occur and ultimately prevent such events from reoccurring,” Ramanand added.

Observability in the Generative AI Era

Overall, the observability market is projected to grow from $2.4 billion in 2023 to $4.1 billion by 2028, reflecting a compound annual growth rate (CAGR) of 11.7% over the forecast period, according to a MarketsandMarkets research report.

Moreover, an increasing number of observability providers have begun incorporating generative AI into their products and services. Additionally, companies are developing solutions to monitor LLMs as enterprises integrate these models into their business operations.

An AIM Research report revealed that several leading players in the AI observability market, including Dynatrace, Datadog, and New Relic, have expanded their offerings to include observability capabilities tailored for GenAI-infused applications, addressing the specific needs of this emerging field.

Another interesting observation from the report is that around 80% of the companies offering tools for generative AI observability are startups, and most of them have been established in the last three years. This signifies the growing prominence of observability, especially in the era of generative AI supremacy.

The post Can Observability Help Prevent Another CrowdStrike Outage? appeared first on AIM.

]]>
https://analyticsindiamag.com/ai-origins-evolution/could-observability-help-prevent-another-crowdstrike-outage/feed/ 0
Most AI Startups are Destined to Fail – Even the Funded Ones https://analyticsindiamag.com/ai-origins-evolution/most-ai-startups-are-destined-to-fail-even-the-funded-ones/ https://analyticsindiamag.com/ai-origins-evolution/most-ai-startups-are-destined-to-fail-even-the-funded-ones/#respond Mon, 05 Aug 2024 12:38:48 +0000 https://analyticsindiamag.com/?p=10131356 Most AI Startups are Destined to Fail, Even the Funded Ones

For some, AI is a bubble, for some it is a tree. Regardless, many AI startups would burst or fall off that tree.

The post Most AI Startups are Destined to Fail – Even the Funded Ones appeared first on AIM.

]]>
Most AI Startups are Destined to Fail, Even the Funded Ones

Every startup wants to be big someday, and most successful businesses started out as startups. But the truth is, somewhere down the line, most of them either die or get acquired by bigger companies. An acquisition can mean a notable exit for investors, but the startup’s story ends there. 

“Most startups are destined to die. Even the funded ones,” said Kunal Shah, the founder of CRED, adding that the success of a startup is mostly a miracle. “And a miracle doesn’t happen with a team that’s looking for stability and dislikes ambiguity,” he said, adding that startups need problem solvers.

The same is true of the current crop of AI startups globally. The ones that started their journey a few years ago are now either getting acquired or gradually dying for lack of funds. Only a few companies, such as OpenAI, Anthropic, Mistral, Hugging Face, and a handful of others, are still actively raising money, and these will eventually be remembered as the prodigies of the AI generation.

The AI Bubble is Here

With Google recently hiring away the founders of Character.AI, the long-term viability of AI startups is in question. The same happened when Mustafa Suleyman of Inflection AI joined the Microsoft AI research team, when Amazon took over Adept AI’s team, and with Snowflake’s acquisition of Neeva and Canva’s acquisition of Leonardo.ai.

Going by that logic, Reka AI or even Cohere might meet the same fate. With Emad Mostaque leaving, Stability AI is going through unstable times, and the same could happen to Midjourney. 

What options do startups really have apart from getting acquired? As the discussion around an AI bubble intensifies, raising funds gets tougher as investors grow increasingly wary. Even companies that thrived during the AI boom are now encountering difficulties.

Most of these startups are also not making money. Lensa, for example, an AI photo-generation company with a great product and strong marketing, could not differentiate itself even while generating revenue. Other companies quickly built similar offerings into their own products, leaving Lensa with no defensibility.

This is the problem with most AI startups. “The problem with AI is that just as quickly as you can create a great product, another copycat can emerge and undercut you,” said David Chen, the CEO of Kapsule. The other challenges he highlighted were finding use cases and distribution. 

While India currently positions itself as the AI use-case capital of the world, powered by jugaad rather than VC money, the long-term strategy of its startups also seems questionable. Though they know how to run businesses without large investment, most of them have the inherent goal of getting acquired, as competing with big tech is not what they strive to do.

A few, such as Sarvam AI, Krutrim, TWO.AI, may have received a decent amount of funding, but long-term plans still remain unclear.

Sriharsha Putrevu, the co-founder of Retail Technology Group, said that current AI startups will fail because they are focused not on value creation but merely on valuation. “Startup is a repeatable, scalable, sustainable business model not just a cash burning, user acquiring business models in hope of making profits in decades,” he said. 

Sales Cure All Problems

When it comes to starting a company, raising funds is the relatively easy part (not that it is easy in itself), as the wave of recent AI funding rounds shows. The harder part is finding the right market, the right niche, and a fit that actually turns a profit. 

Nor can the problem be solved simply by building the best AI model, since any competitor can learn from it and build an even better one as the frontier of AI keeps moving. The shrinking cost of building AI models helps startups, but it also draws more competitors into the field.

Take, for example, OpenAI’s GPT-4 being constantly dethroned by Meta’s Llama 3, Anthropic’s Claude, or Google’s Gemini. And these are the players already competing for the top spot; what chance do new startups have?

If AI is like electricity, startups need to carve out a niche and solve problems in a specific field, since competing to offer the ‘best electricity’ makes no sense. That is what the current Indian AI landscape is focused on: building use cases for AI instead of building the next LLM. Maybe this will help them sustain, but for how long remains the question. 

Moreover, since investors in India are extra wary of pouring money into startups, these companies face strong headwinds. Many are running low on money and edging towards acquisition, or perhaps extinction. 

All these problems can be solved with sales, and for that, startups need to move fast. For some, AI is a bubble; for others, it is a tree. Either way, many AI startups will burst or fall off that tree.

The post Most AI Startups are Destined to Fail – Even the Funded Ones appeared first on AIM.

]]>
https://analyticsindiamag.com/ai-origins-evolution/most-ai-startups-are-destined-to-fail-even-the-funded-ones/feed/ 0
Indian Engineers are a Different Breed https://analyticsindiamag.com/ai-origins-evolution/indian-engineers-are-a-different-breed/ https://analyticsindiamag.com/ai-origins-evolution/indian-engineers-are-a-different-breed/#respond Mon, 05 Aug 2024 08:54:43 +0000 https://analyticsindiamag.com/?p=10131325 Indian Engineers are a Different Breed

Burning the midnight oil, fixing bugs, and driving autos to combat loneliness.

The post Indian Engineers are a Different Breed appeared first on AIM.

]]>
Indian Engineers are a Different Breed

It is often said that Indian techies are stealing jobs across the globe, but the reason goes beyond outsourcing and cheap labour: Indian techies are dedicated to their work and ready to put in overtime without hesitation. 

A week ago, Roshan Patel, the founder and CEO of Arrow Payments, shared a screenshot of a conversation with his employee. Patel asked if the employee would like a break, as he had been working nonstop for quite a while. To his surprise, the engineer replied, “I don’t need a break sir. My body is a vessel for the company to find product market fit.”

“Indian engineers are a different breed,” Patel remarked. That certainly seems to be true. Shravan Tickoo, the founder of Rethink Systems, shared the same story with a twist, claiming the employee had been messaging at 4 am, which was not the case. 

He revealed that, in reality, a WhatsApp bot had been sending random answers to the CEO whenever he texted the engineers. In the comments, Pooja Goel narrated a story about a family member who used to schedule messages to his manager for 2 am, just to make it look like he had been working past midnight.

Still, it does happen a lot of the time. Time zones do not seem to matter to Indians.

Tickoo funnily added that startups from the HSR Layout in Bengaluru have now started searching for this employee to hire him as the CTO of their companies. “Since that day, Mr Narayana Murthy and Bhavish Aggarwal have been considering this engineer as their heir apparent,” he added in jest.

Indian Engineers Eat Code for Breakfast

Not everyone is happy being the overworked employee, though. People appreciated Patel’s message encouraging the employee to take a break, but were also concerned about the broader trend of overwork. “This is actually PTSD [post-traumatic stress disorder] from working under toxic Indian management for years,” said one user. “Colonial hangover of subservience,” remarked another.

However, some argue that working long hours doesn’t necessarily mean increased productivity. “Yet the American-born engineer who takes 1-hour lunch breaks and two-week PTO is still more productive,” noted a user. 

Ayush Jaiswal shared a screenshot of a chat with his friend, who had been working 18 hours a day for the past two months, and whose productivity and work-life balance had both taken a hit. “Take a break, the world won’t end,” commented a user.

Arguably, AI tools are making life better for these developers, enabling them to accomplish much more in less time. There are plenty of jobs for 10x developers who can take on the work of their peers and still put in overtime. 

What’s the Bug?

In Karnataka, the state IT/ITeS employees union (KITU) has reported that the government may soon raise IT employees’ working hours to 14 (12 hours plus 2 hours of overtime). This might appeal to employees who would now be compensated for the extra work they are already doing. 

However, this overlooks the critical issue: the bug is the overwork culture itself. The relentless pursuit of recognition and better pay leads many Indian engineers to work excessive hours, sacrificing their health and personal lives. This culture of overwork can lead to burnout, reduced productivity over time, and a skewed work-life balance.

When it comes to working for startups, most Indian employees get attached to the product they are building, and if fairly compensated, they are happy to continue working over and above what is expected of them. 

Those who are not fairly compensated resort to working on multiple projects. Recently, a photo of a Microsoft employee driving a Namma Yatri auto to combat loneliness went viral, which also illustrates what happens when overwork becomes an addiction.

The underlying issue seems to be the desire to earn more: freshers want to learn as much as possible to compete with, or perform as well as, senior colleagues. When Indians look at the salaries of their counterparts in the West, the common perception is that those counterparts are no more skilled, yet earn far more. So they put in longer hours in the hope of upskilling, or of winning recognition from their bosses and, with it, better pay. 

While the dedication and hard work of Indian engineers are commendable, it is important to address the underlying culture of overwork and its long-term impact on health and productivity.

The post Indian Engineers are a Different Breed appeared first on AIM.

]]>
https://analyticsindiamag.com/ai-origins-evolution/indian-engineers-are-a-different-breed/feed/ 0
Microsoft Azure Crushes Cloud Competition, Leaves AWS and Google Cloud Behind https://analyticsindiamag.com/ai-origins-evolution/microsoft-azure-crushes-cloud-competition-leaves-aws-and-google-cloud-behind/ https://analyticsindiamag.com/ai-origins-evolution/microsoft-azure-crushes-cloud-competition-leaves-aws-and-google-cloud-behind/#respond Mon, 05 Aug 2024 08:24:27 +0000 https://analyticsindiamag.com/?p=10131318

Microsoft announced plans to spend more money this fiscal year to enhance its AI infrastructure, even as growth in its cloud business has slowed, suggesting that the AI payoff will take longer than expected.

The post Microsoft Azure Crushes Cloud Competition, Leaves AWS and Google Cloud Behind appeared first on AIM.

]]>

Microsoft has once again claimed the cloud crown, despite mixed market reactions to its latest results.

In the previous quarter, Microsoft Azure notably encroached on AWS’s market share, and it continues to ride this wave. In the latest quarter, Microsoft’s Intelligent Cloud revenue, which includes the company’s server products and cloud services, rose to $28.5 billion, a 19 per cent increase year over year. This segment now constitutes nearly 45 per cent of Microsoft’s total revenue. 

Meanwhile, AWS reported revenue of $26.28 billion, also a 19 per cent increase, surpassing analysts’ expectations of $26.02 billion, according to StreetAccount. Google Cloud experienced a 29 per cent rise in revenue, reaching $10.3 billion, slightly above the projected $10.2 billion.

Combined, AWS, Google Cloud and Microsoft Azure accounted for a whopping 67 per cent share of the $76 billion global cloud services market in Q1 2024, according to new data from IT market research firm Synergy. It remains to be seen, however, whether Microsoft Azure has increased its market share. 

Source: Statista

In Generative AI We Trust 

One thing common among the three hyperscalers is that they have kept their faith in generative AI even though it hasn’t really started to pay off. 

Microsoft announced plans to spend more money this fiscal year to enhance its AI infrastructure, even as growth in its cloud business has slowed, suggesting that the AI payoff will take longer than expected.

Microsoft CFO Amy Hood explained that this spending is essential to meet the demand for AI services, adding that the company is investing in assets that “will be monetised over 15 years and beyond.” Meanwhile, CEO Satya Nadella said that Azure AI now boasts over 60,000 customers, marking a nearly 60% increase year-on-year, with the average spending per customer also on the rise. 

“For the next quarter, we expect Azure’s Q1 revenue growth to be 28% to 29% in constant currency,” said Hood. “Growth will continue to be driven by our consumption business, including AI, which is growing faster than total Azure.”

On similar lines, Google is facing increasing AI infrastructure costs. “The risk of under-investing far outweighs the risk of over-investing for us. Not investing to stay ahead in AI carries much more significant risks,” warned Google CEO Sundar Pichai.

In other news, Meta chief Mark Zuckerberg said that training Llama 4 will require ten times the compute used to train Llama 3.

The Azure OpenAI service provides access to best-in-class frontier models, including GPT-4o and GPT-4o mini. Apart from that, Azure also offers in-house built AI models like Phi-3, a family of powerful small language models, which are being used by companies like BlackRock, Emirates, Epic, ITC, and Navy Federal Credit Union.

“With Models as a Service, we provide API access to third-party models, including as of last week, the latest from Cohere, Meta, and Mistral. The number of paid Models as a Service customers more than doubled quarter over quarter, and we are seeing increased usage by leaders in every industry from Adobe and Bridgestone to Novo Nordisk and Palantir,” said Nadella.

Microsoft is trying hard not to be dependent on OpenAI and has listed the startup as its competitor in generative AI and search. This move might have come after OpenAI cozied up to Apple by integrating ChatGPT into Siri.

Similarly, AWS Bedrock has been constantly adding new models to its offerings. “Bedrock has recently added Anthropic’s Claude 3.5 models, which are the best-performing models on the planet, Meta’s new Llama 3.1 models, and Mistral’s new Large 2 models,” said Amazon chief Andrew Jassy.

Last year, Amazon also announced its generative AI model called Q. “With Q’s code transformation capabilities, Amazon has migrated over 30,000 Java JDK applications in a few months, saving the company $260 million and 4,500 developer years compared to what it would have otherwise cost,” said Jassy.  

Google is also quite bullish on Gemini. Most recently, Google DeepMind’s new Gemini 1.5 Pro experimental version, 0801, was tested in the Arena over the past week, gathering over 12K community votes.

For the first time, Google Gemini has claimed the 1st spot, surpassing GPT-4 and Claude-3.5 with an impressive score of 1300, and also achieving the first position on the Vision Leaderboard.

Google Vertex AI includes all models from the Gemini and Gemma families, such as Gemini 1.5 Pro, Gemini 1.5 Flash, and Gemma 2. It also offers third-party models from Anthropic, Mistral and Meta.

“Uber and WPP are using Gemini Pro 1.5 and Gemini Flash 1.5 in areas like customer experience and marketing. We broadened support for third-party models including Anthropic’s Claude 3.5 Sonnet and open-source models like Gemma 2, Llama, and Mistral,” Pichai said.

Some of the notable customers of Google Cloud are Hitachi, Motorola Mobility, KPMG, Deutsche Bank, and Kingfisher, as well as the US government.

Building In-House AI Chips 

NVIDIA’s upcoming Blackwell chip has been delayed by three months or more due to design flaws, a setback that could impact customers such as Meta, Google, and Microsoft, who have collectively ordered tens of billions of dollars worth of the chips.

Ahead of this, all the hyperscalers, apart from utilising NVIDIA GPUs, have also been developing their own AI chips. “We added new AI accelerators from AMD and NVIDIA, as well as our own first-party silicon chips, Azure Maia, and we introduced the new Cobalt 100, which provides best-in-class performance for customers like Elastic, MongoDB, Siemens, Snowflake, and Teradata,” said Nadella. 

Google also recently launched Trillium, which was used by Apple to train its foundation models. “Trillium is the sixth generation of our custom AI accelerator, and it’s our best-performing and most energy-efficient TPU to date. It achieves a near five times increase in peak compute performance per chip and is 67 percent more energy efficient compared to TPU v5e,” said Pichai. 

“Year-to-date, our AI infrastructure and generative AI solutions for cloud customers have already generated billions in revenues and are being used by more than two million developers,” he added.

AWS has also developed its custom silicon chips. “We’ve invested in our own custom silicon with Trainium for training and Inferentia for inference. The second versions of those chips, with Trainium coming later this year, are very compelling on price performance. We are seeing significant demand for these chips,” Jassy said.

The post Microsoft Azure Crushes Cloud Competition, Leaves AWS and Google Cloud Behind appeared first on AIM.

]]>
https://analyticsindiamag.com/ai-origins-evolution/microsoft-azure-crushes-cloud-competition-leaves-aws-and-google-cloud-behind/feed/ 0
GitHub Thinks It’s Hugging Face https://analyticsindiamag.com/ai-origins-evolution/github-thinks-its-hugging-face/ https://analyticsindiamag.com/ai-origins-evolution/github-thinks-its-hugging-face/#respond Fri, 02 Aug 2024 12:19:14 +0000 https://analyticsindiamag.com/?p=10131248

With GitHub Models, over 100 million developers can now access and experiment with top AI models where their workflow is – directly on GitHub.

The post GitHub Thinks It’s Hugging Face appeared first on AIM.

]]>

Microsoft’s GitHub is on a roll. The company recently announced the launch of GitHub Models, which will offer developers access to leading LLMs, including Llama 3.1, GPT-4o, GPT-4o Mini, Phi 3, and Mistral Large 2. “You can access each model via a built-in playground that lets you test different prompts and model parameters, for free, right in GitHub,” the company said in its blog post.

“GitHub Models marks another transformational journey of GitHub. From the creation of AI through open source collaboration, to the creation of software with the power of AI, to enabling the rise of the AI engineer with GitHub Models – GitHub is the creator network for the age of AI,” said GitHub chief Thomas Dohmke. 

With GitHub Models, the platform seeks to be more than just an AI pair programmer reliant on OpenAI’s models. Developers now have access to the latest LLMs and can experiment to find the one that best suits their needs. Just as ‘natural language’ has become a prominent programming language, GitHub is positioning LLMs as the go-to framework to develop new software and products. 

With GitHub Models, over 100 million developers can now access and experiment with top AI models right where their workflow is: directly on GitHub. This allows developers to build AI applications wherever they manage their code. 

“With GitHub Models, developers can now explore these models on GitHub, integrate them into their dev environment in Codespaces and VS Code, and leverage them during CI/CD in Actions – all simply with their GitHub account and free entitlements,” explained Dohmke.
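The appeal Dohmke describes is that swapping between models becomes trivial when they all sit behind one OpenAI-style chat-completions interface. The sketch below is illustrative only: the endpoint URL and model identifiers are assumptions, no request is actually sent, and this is not official GitHub Models sample code.

```python
import json

# Hypothetical endpoint for an OpenAI-compatible chat-completions API.
ENDPOINT = "https://models.inference.ai.azure.com/chat/completions"

def build_request(model: str, prompt: str) -> str:
    """Build the JSON body for a chat completion; only the model field changes."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    })

# Comparing models is a one-field change per request:
for model in ["gpt-4o-mini", "llama-3.1-405b-instruct", "mistral-large"]:
    body = build_request(model, "Summarise this pull request.")
    print(model, len(body))
```

In practice, a developer would POST each body to the endpoint with a GitHub access token and compare the responses, which is essentially what the built-in playground automates.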

GitHub is high on confidence as its financial performance has been equally impressive. During Microsoft’s recent earnings call, chief Satya Nadella said, “Copilot is driving GitHub growth overall. GitHub’s annual revenue run rate is now $2 billion.”

He further added, “Copilot accounted for over 40% of GitHub’s revenue growth this year and is already a larger business than GitHub was when we acquired it.” The AI-powered coding assistant now has 1.3 million paid subscribers, marking a 30% increase quarter-over-quarter, according to Nadella.

Meanwhile, GitHub Copilot Business has secured 50,000 enterprise customers across various industries. This year, Accenture plans to deploy the tool to 50,000 developers. Other notable enterprise customers include Goldman Sachs, Etsy, and Dell Technologies, Nadella said.

Getting ‘Huggy’ With AI Developers 

GitHub Models appears to be inspired by Hugging Face, which also lets developers test out different models. Hugging Face offers Git-based code repositories; pre-trained models for NLP, computer vision, and audio tasks; datasets for translation and speech recognition; and Spaces for small-scale demos of machine learning applications.

Recently, NVIDIA developed a playground called NVIDIA NIM, which hosts several open-source models covering reasoning, vision, retrieval, biology, and more. LMSYS Chat and Groq also offer playgrounds for trying out different LLMs, but none of them offer the capability to write code and build apps.

Hugging Face recently partnered with NVIDIA to offer inference as a service. These new capabilities will enable developers to rapidly deploy leading LLMs, such as the Llama 3 family and Mistral AI models, with optimisation from NVIDIA NIM microservices running on NVIDIA DGX Cloud.

GitHub Models also looks very similar to Azure AI Studio, with Microsoft bringing it under GitHub so that developers already active on GitHub can experiment with it.

“Seems like this is a sales funnel for Azure’s OpenAI/LLM gateway with GitHub as a proxy(?). It’s a bit unclear. Regardless, I’d be pretty wary of adding either Azure or GitHub as a core dependency to any of my apps at this point with how poor uptime seems to be at both lately,” posted one user on Hacker News.

Even if it is true, this definitely seems to be an attempt by Microsoft to make GitHub a one-stop solution for developers to build AI applications.

The G in AGI Stands for GitHub?

Dohmke quipped, “The path to AGI will not be built without the source code and collaboration of the interconnected community on GitHub.” However, during his recent visit to India, he expressed different views on AGI.

“I think everybody has a different understanding of what AGI even means and what the G is. I think today I see no sign that machine learning models, or large language models, have sentience. They are not creative. They are machines created by us that help us with the things that we want to do,” he said in an interaction with AIM.

Moreover, he believes that GitHub will enable 100 million developers to become AI engineers, and that AI won’t take away developers’ jobs. “In many ways this new age of AI has actually created more demand for developers because now somebody also has to build all the AI systems.”

“When you embark on a new job, whether fresh out of college or transitioning from another company, the primary challenge is understanding the company’s operations and code bases, which could be thousands of files,” he said.

A tool like Copilot could be very useful for entry-level coders. Moreover, Dohmke believes that going forward, applicants will be expected to be adept at using AI tools like Copilot and ChatGPT.

“Some software companies have even begun incorporating Copilot into their interview processes, replacing traditional coding exercises with tasks that assess applicants’ ability to utilise these tools effectively,” Dohmke said.

The post GitHub Thinks It’s Hugging Face appeared first on AIM.

]]>
https://analyticsindiamag.com/ai-origins-evolution/github-thinks-its-hugging-face/feed/ 0
Perplexity AI is More Than Just a BS Generator https://analyticsindiamag.com/ai-origins-evolution/perplexity-ai-is-more-than-just-a-bs-generator/ https://analyticsindiamag.com/ai-origins-evolution/perplexity-ai-is-more-than-just-a-bs-generator/#respond Fri, 02 Aug 2024 10:41:47 +0000 https://analyticsindiamag.com/?p=10131244

“People don’t come to Perplexity to consume journalism; they come to Perplexity to consume facts”

The post Perplexity AI is More Than Just a BS Generator appeared first on AIM.

]]>

Perplexity AI chief Aravind Srinivas is the perfect example of grit and determination. Refusing to give up despite the surrounding noise claiming that his product will soon be overshadowed by lurking competitors, he is on his path to build the best AI search engine.

The company has faced accusations ranging from using Google’s search engine behind the scenes to plagiarising content from major media outlets, which have called it a bullshit machine. Despite these challenges, Srinivas somehow manages to come out on top. Perplexity currently has 10 million monthly users, and that number keeps growing. 

The company recently announced the Perplexity Publisher Program and has partnered with major publishing houses, including TIME, Der Spiegel, Fortune, Entrepreneur, The Texas Tribune, and WordPress. The plan is to introduce advertising within their related questions feature.

“Brands can pay to ask specific follow-up questions in our answer engine interface and on Pages. When Perplexity earns revenue from an interaction where a publisher’s content is referenced, that publisher will also earn a share,” the company said in its blog post.

This could be a good alternative for smaller brands that are unable to rank well in Google Search since Google married Reddit and rolled out their “helpful” content update. “This is something Wikipedia could have done long ago instead of the donation model,” posted a user on X. 

GenAI Search Would Be Nothing Without Journalists

“People don’t come to Perplexity to consume journalism; they come to Perplexity to consume facts,” said Dmitry Shevelenko, Perplexity’s chief business officer, indirectly taking the side of the journalists – a staggering contrast to Elon Musk’s view of making X the de facto platform for citizen journalism. 

Perplexity AI is helping the media by offering media houses free access to Online LLM APIs and developer support. This allows each publisher to create their own custom answer engine on their website. Visitors can ask questions and receive answers citing only that publisher’s content. 

“One of the key ingredients for our long-term success is that we need web publishers to keep creating great journalism that is loaded up with facts, because you can’t answer questions well if you don’t have accurate source material,” Shevelenko further added.

It seems true. In the long run, the key differentiator will likely be the media houses, as creating tools like Perplexity or SearchGPT is relatively straightforward. Recently, Dhravya Shah, an 18-year-old from India, developed an open-source alternative to SearchGPT.

Another example is Bishal Saha, a dropout from Lovely Professional University who created Omniplex, an open-source alternative to Perplexity AI, over a single weekend. Speaking to AIM, he said that despite initial job rejections from Perplexity AI, it was his tenacity that led him to create this alternative, which is now gaining traction.

Not just Perplexity, OpenAI has also formed numerous strategic media partnerships that include major media organisations such as Vox Media, Financial Times, The Atlantic, TIME, Le Monde, Prisa Media, Associated Press (AP), News Corp, BuzzFeed, Stack Overflow, and Shutterstock. These partnerships will allow OpenAI to integrate high-quality journalistic content into its AI models, providing users with accurate and up-to-date information while ensuring proper attribution to the original sources.

“Journalists’ content is rich in facts, verified knowledge, and that is the utility function it plays to an AI answer engine,” said Shevelenko.

SearchGPT Killer? 

This move is timely given OpenAI’s recent announcement of SearchGPT, which appears to be a direct replica of Perplexity AI with few differentiators. 

SearchGPT is likely to pair OpenAI’s own models, like GPT-4o mini, with a search engine, most likely Microsoft’s Bing, given how closely the company works with Microsoft, despite the tech giant listing OpenAI as a competitor.

In contrast, Perplexity AI allows users to choose from multiple LLMs, including GPT-4o, Claude 3, Llama 3.1 405B, and Sonar Large, which is based on the open-source Llama 3 model and trained in-house by Perplexity.

The most touted major competitor to Perplexity AI right now is Google, not OpenAI. Google itself is struggling to strike the right balance between search and generative AI. 

“I just got access to OpenAI’s new search tool SearchGPT. So far it’s quite good, however, I still think that Perplexity is slightly better in terms of performing complex AI searches,” posted a user on X. 

In a recent podcast with Lex Fridman, OpenAI chief Sam Altman said, “The intersection of LLMs plus search, I don’t think anyone has cracked the code on yet. I would love to go do that. I think that would be cool.”

Further, he said that OpenAI does not want to build another Google Search. “I find that (Google Search) boring. I mean, if the question is if we can build a better search engine than Google or whatever, then sure, we should.”

“Google shows you like 10 blue links, like 13 ads, and then 10 blue links, and that’s like one way to find information. But the thing that’s exciting to me is not that we can go build a better copy of Google Search, but that maybe there’s just a much better way to help people find, act on, and synthesise information,” said Altman.

Earlier this year, Google introduced AI Overviews, but it hasn’t received a strong response. Unlike AI Overviews, which struggles with complex, multi-layered questions, Perplexity effectively handles multiple queries at once, delivering accurate answers along with relevant links. 

Moreover, Google’s AI Overviews recently also came under fire for suggesting users should use glue to stick cheese to their pizza, based on a comment from an 11-year-old Reddit user. 

Search is a major domain for Google, with the tech giant generating $48.5 billion in revenue from this segment alone.

Currently, OpenAI’s search feature on GPT-4o is not as good as Perplexity AI’s; the latter has managed to keep its responses grounded, while GPT-4o tends to hallucinate and confidently generate wrong answers. 

“It looks similar to the early days of Perplexity. While it looks unpolished, very interested to see how they approach search and how they actually integrate this with ChatGPT,” said Elvis Saravia, co-founder of DAIR.AI, about OpenAI’s search feature.

He further added that other players, such as xAI with Grok and Meta with Meta AI, being giants in the publishing space, will also make attempts at search, as they are catching up fast.

However, one advantage OpenAI has is that it is constantly working on its voice features in LLMs, which will definitely make asking questions fun!

The post Perplexity AI is More Than Just a BS Generator appeared first on AIM.

]]>
https://analyticsindiamag.com/ai-origins-evolution/perplexity-ai-is-more-than-just-a-bs-generator/feed/ 0
There is No Need for a Databricks for the Postgres World https://analyticsindiamag.com/ai-origins-evolution/there-is-no-need-for-a-databricks-for-the-postgres-world/ https://analyticsindiamag.com/ai-origins-evolution/there-is-no-need-for-a-databricks-for-the-postgres-world/#respond Fri, 02 Aug 2024 07:33:22 +0000 https://analyticsindiamag.com/?p=10131219 There is No Need for a Databricks for the Postgres World

The future of MySQL is Postgres, which should be concerning.

The post There is No Need for a Databricks for the Postgres World appeared first on AIM.

]]>
There is No Need for a Databricks for the Postgres World

The database ecosystem has been going through a tectonic shift. Since last year, Postgres has been gaining popularity while MySQL has been declining, and cloud providers have been quick to build extensions into their offerings to provide Postgres services natively and connect different databases.

The latest example is Databricks announcing that LakeFlow Connect is available in public preview for SQL Server, Salesforce, and Workday, allowing the ingestion of data from different databases and enterprise apps. LakeFlow Connect is also native to Databricks’ Data Intelligence Platform, supporting both serverless compute and Unity Catalog governance.

“Ultimately, this means organisations can spend less time moving their data and more time getting value from it,” read the announcement blog, which listed the problems associated with connecting external sources, such as increased extraction time and data-preparation effort.

On the sidelines of Databricks’ Data+AI Summit 2024, CEO Ali Ghodsi said that two years ago, when he asked his customers whether they needed AI or easier access to their data on Databricks, most of them chose the latter. The company, however, was not at all interested in the problem at the time.

“This was two years ago. Then we started our journey towards ‘we need to do this’,” said Ghodsi. This led to the acquisition of Arcion for exactly this purpose, and ultimately to this announcement. “You don’t need to cobble it [databases] together yourself or use something else. It will just work seamlessly. So this is a big strategic area for us,” said Ghodsi.

The Old is New Again

Pondering this, Harsh Singhal, senior engineering manager of machine learning at Adobe, said that despite being an older piece of technology, Postgres is experiencing renewed interest and relevance in modern applications.

“Postgres has a large number of extensions to provide access to other databases and apps. And if there isn’t one available, developing an extension for a specific purpose is a very lucrative business,” he said, giving the example of Citus, which offered an open source extension to Postgres for distributed databases, increasing performance and scale for application developers.

The company was acquired by Microsoft in 2019, which is comparable to the acquisition of Arcion by Databricks.

Singhal poses the question: “Is there a need for a Databricks-like company for the Postgres ecosystem?”

Developing custom extensions for Postgres can be a profitable business. Databricks, a prominent data technology platform, is praised for its capabilities in the AI sector. The question, however, is whether a company focused solely on the Postgres ecosystem might be equally or more beneficial, given Postgres’ expanding feature set.

Not Really

There are companies, such as Crunchy Bridge, that still offer Postgres services on different clouds. But as offerings and capabilities in the ecosystem grow, it is very hard for such businesses to catch up and survive, since cloud providers offer similar services. Take Databricks, for example, or even Google, Microsoft, and Oracle.

Eventually, Crunchy Bridge might also be acquired by a bigger company, marking an end to this entire business segment.

People say that the future of MySQL is Postgres, which should be concerning: several companies used to offer relational database capabilities, and with the allure of MySQL dying, these companies are now in deep water. The same could happen with Postgres in the future, making it risky for anyone to venture into the field.

However, many MySQL applications still exist, and developers continue to use it. Although the industry trend is moving towards Postgres, transitioning from MySQL typically involves traditional migrations, which, while manageable for data, are quite challenging for applications.

Regardless, Postgres has become the future of data. But building a business around Postgres alone is tricky, as announcements like LakeFlow show that the big platforms can fold Postgres-focused offerings into their own stacks.

The post There is No Need for a Databricks for the Postgres World appeared first on AIM.

]]>
https://analyticsindiamag.com/ai-origins-evolution/there-is-no-need-for-a-databricks-for-the-postgres-world/feed/ 0
Kuku FM is Using Generative AI to Make Everyone a Full-Stack Creative Producer https://analyticsindiamag.com/intellectual-ai-discussions/kuku-fm-is-using-generative-ai-to-make-everyone-a-full-stack-creative-producer/ https://analyticsindiamag.com/intellectual-ai-discussions/kuku-fm-is-using-generative-ai-to-make-everyone-a-full-stack-creative-producer/#respond Fri, 02 Aug 2024 06:30:00 +0000 https://analyticsindiamag.com/?p=10131210 Kuku FM is Using Generative AI to Make Everyone a Full-Stack Creative Producer

"AI is going to be commoditised; everybody will have access to the tools. What will remain crucial is the talent pool you have – the storytellers."

The post Kuku FM is Using Generative AI to Make Everyone a Full-Stack Creative Producer appeared first on AIM.

]]>
Kuku FM is Using Generative AI to Make Everyone a Full-Stack Creative Producer

Kuku FM, a popular audio content platform backed by Google and Nandan Nilekani’s Fundamentum Partnership, is harnessing the power of generative AI to revolutionise how stories are created, produced, and consumed. This transformation is spearheaded by Kunj Sanghvi, the VP of content at Kuku FM, who told AIM that generative AI is part of their everyday work and content creation.

“On the generative AI side, we are working pretty much on every layer of the process involved,” Sanghvi explained. “Right from adapting stories in the Indian context, to writing the script and dialogues, we are trying out AI to do all of these. Now, in different languages, we are at different levels of success, but in English, our entire process has moved to AI.”

Kuku FM is leveraging AI not just for content creation but for voice production as well. The company uses Eleven Labs, ChatGPT APIs, and other available offerings to produce voices directly.

“Dramatic voice is a particularly specific and difficult challenge, and long-form voice is also a difficult challenge. These are two things that most platforms working in this space haven’t been able to solve,” Sanghvi noted. 

Beyond long-form audio, Kuku FM uses generative AI for thumbnail generation, visual asset creation, and description writing. Sanghvi said the team has custom GPTs for every process.

Compensating Artists

AI is playing a crucial role in ensuring high-quality outputs across various languages and formats. “In languages like Hindi and Tamil, the quality is decent, but for others like Telugu, Kannada, Malayalam, Bangla, and Marathi, the output quality is still poor,” said Sanghvi. 

However, the quality improves every week. “We put out a few episodes even in languages where we’re not happy with the quality to keep experimenting and improving,” Sanghvi added.

Beyond content creation, AI is helping Kuku FM in comprehensively generating and analysing metadata. “We have used AI to generate over 500 types of metadata on each of our content. AI itself identifies these attributes, and at an aggregate level, we can understand what makes certain content perform better than others,” he mentioned.

One of the most transformative aspects of Kuku FM’s use of AI is its impact on creators. The platform is in the process of empowering 5,000 creators to become full-stack creative producers. 

“As the generative AI tools become better, every individual is going to become a full-stack creator. They can make choices on the visuals, sounds, language, and copy, using AI as a co-pilot,” Sanghvi said. “We are training people to become creative producers who can own their content from start to end.”

When asked about the competitive landscape, including Amazon’s Audible and PocketFM, and about future plans, Sanghvi emphasised that AI should not be viewed as a moat but as a platform. “Every company of our size, not just our immediate competition, will use AI as a great enabler. AI is going to be commoditised; everybody will have access to the tools. What will remain crucial is the talent pool you have – the storytellers,” he explained.

Everyone’s a Storyteller with AI

In a unique experiment blending generative AI tools, former OpenAI co-founder Andrej Karpathy used the Wall Street Journal’s front page to produce a music video on August 1, 2024. 

Karpathy copied the entire front page of the newspaper into Claude, which generated multiple scenes and provided visual descriptions for each. These descriptions were then fed into Ideogram AI, an image-generation tool, to create corresponding visuals. Next, the generated images were uploaded into RunwayML’s Gen 3 Alpha to make a 10-second video segment.

Sanghvi also touched upon the possibility of edge applications of AI, like generating audiobooks in one’s voice. “These are nice bells and whistles but are not scalable applications of AI. However, they can dial up engagement as fresh experiments,” he said.

Kuku FM is also venturing into new formats like video and comics, generated entirely through AI. He said that the team does not go for shoots or design characters in studios. “Our in-house team works with AI to create unique content for video, tunes, and comics,” he revealed.

Sanghvi believes that Kuku FM is turning blockbuster storytelling into a science, making it more accessible and understandable. “The insights and structure of a story can now look like the structure of a product flow, thanks to AI,” Sanghvi remarked. 

“This democratises storytelling, making every individual a potential storyteller.” As Sanghvi aptly puts it, “The only job that will remain is that of a creative producer, finding fresh ways to engage audiences, as AI will always be biased towards the past.”

The post Kuku FM is Using Generative AI to Make Everyone a Full-Stack Creative Producer appeared first on AIM.

]]>
https://analyticsindiamag.com/intellectual-ai-discussions/kuku-fm-is-using-generative-ai-to-make-everyone-a-full-stack-creative-producer/feed/ 0
Indian AI Startups are Powered by Jugaad, Not VC Money  https://analyticsindiamag.com/ai-origins-evolution/indian-ai-startups-are-powered-by-jugaad-not-vc-money/ https://analyticsindiamag.com/ai-origins-evolution/indian-ai-startups-are-powered-by-jugaad-not-vc-money/#respond Thu, 01 Aug 2024 12:00:31 +0000 https://analyticsindiamag.com/?p=10131172

“I have never believed that funding is the reason why you or I succeed."

The post Indian AI Startups are Powered by Jugaad, Not VC Money  appeared first on AIM.

]]>

Unlike their US counterparts, Indian AI startups know how to run their businesses even without large investments. 

Indian AI music company Beatoven.ai recently raised $1.3 million in its pre-series A round led by Entrepreneur First and Capital 2B, bringing its total funding to $2.42 million. Today, it is challenging the likes of Udio and Suno AI, which have raised $10 million and $125 million, respectively. 

“Companies like Udio received money from a16z as they hold a completely different view on copyrights. They argue that training AI on copyrighted materials is indeed fair use and doesn’t amount to theft of intellectual property,” said Mansoor Rahimat Khan, the co-founder of Beatoven.ai, in an exclusive interaction with AIM.

Meanwhile, Beatoven’s model has been certified as “Fairly Trained” and also endorsed by AI for Music as an “ethically trained AI model”. Khan said that the company aims to pursue more licensing deals to expand its music libraries. He shared that when he started the company, his family members were the first to create data to train the models. 

“Because of the artist-first approach we took, none of them protested. In fact, they supported Beatoven. My initial set of recordings to build this model came from my family,” said Khan. 

Talking about jugaad, Khan said they use GPUs primarily for training, while CPUs are used for inference. “We have optimised it in such a way that during production we don’t use GPUs when the users are prompting. This helps us keep our costs low,” he said, adding that Indian labels, though, are not yet open to licensing deals. 
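Khan’s split of training on GPUs and production inference on CPUs can be sketched in a few lines. The routing function below is a hypothetical illustration of the idea; the names and logic are ours, not Beatoven’s:

```python
# Hypothetical sketch of Beatoven-style cost routing: expensive GPU capacity
# is reserved for training jobs, while user-facing inference falls back to CPU.
def pick_device(phase: str, gpu_available: bool = True) -> str:
    """Route training to GPU (if present) and production inference to CPU."""
    if phase == "train" and gpu_available:
        return "cuda"  # batched, throughput-bound work justifies GPU cost
    return "cpu"       # latency-tolerant inference stays on cheaper CPUs

print(pick_device("train"))      # cuda
print(pick_device("inference"))  # cpu
```

The saving comes from the fact that user prompts arrive sporadically, so idle GPU time during serving is pure cost, while training can be batched to keep GPUs fully utilised.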

There’s More 

Tech Mahindra’s CP Gurnani recently told AIM that they built Project Indus for under $5 million. Citing the example of ISRO’s Mars Mission, he said that they spent much less compared to NASA, thanks to the ‘frugal innovation’ and ‘jugaad’ mindset in India.

“I have never believed that funding is the reason why you or I succeed. You need the bare minimum to be able to do it. Your product has to succeed, your people have to believe in you, and there’s so much more,” said Gurnani.

In line with this belief, Bengaluru-based AI agent platform company Kogo aims to reach an annual recurring revenue (ARR) of INR 20 crore by 2025. The company has raised $3 million to date and reported a revenue of INR 2.5 crore for the fiscal year 2023-24.

Led by co-founder and CEO Raj K Gopalakrishnan, Kogo has developed patent-pending technology supporting small and large language models (SLMs and LLMs) for various industries. Kogo’s solutions span sales, research, operations, data management, and customer service. 

The company’s AI low-code platform, the KOGO AI Operating System, enables companies to quickly create AI agents that converse in Indic languages.

On the other hand, IIT-alum-founded startup Rabbitt.AI raised $2.1 million from TC Group of Companies. The GenAI startup enables businesses to create and deploy advanced AI applications with tools for custom LLM development, RAG fine-tuning, and data-centric AI. 

Their platform features MLOps integration and voice bot AI agents, and prioritises privacy-first strategies in AI deployment. Founder Harneet SN said that when he started the company, he applied all the tricks of the trade to keep the costs of running the company low. 

In the initial days, they used the free credits provided by Google and AWS. “We received $300,000 worth of credits from Google to optimise our operations,” said Harneet. He further said, “For the longest time, my website was hosted on Vercel, which is a free hosting tool, so I wasn’t even paying INR 700 to 1000 per month for website hosting.”

At the same time, Jio-backed startup TWO AI is making waves with its recent launch of ChatSUTRA and Geniya, alternatives to ChatGPT and Perplexity AI, respectively. The company has raised $20 million to date. This is minuscule when compared with the $13 billion raised by OpenAI and Perplexity’s $165 million. 

Another Indian startup, Unscript, recently converted a single photo into a full-fledged video, generating head and eye movements, facial expressions, voice modulations, and body language. It achieved studio-quality results in under two minutes, significantly reducing manual shooting effort. The company has raised $1.25 million to date.

Interestingly, this new upgrade surpasses Google’s VLOGGER and rivals Microsoft’s VASA-1 and Alibaba’s EMO. That makes it ideal for brands, marketing agencies, and virtual influencers, with over 50 top companies already benefiting from its cost-effective, scalable video production capabilities. 

India is Safe from AI Bubble 

While many in the West fear that the AI bubble might burst soon, the situation in India appears quite different, with investors hesitant to spend money. Despite the global AI hype generating a flood of funding, Indian investors remain cautious, focusing on only a select few opportunities. 

In an exclusive interview with AIM, Vishnu Vardhan, the founder and CEO of SML and Vizzhy, the creators of Hanooman, said that most Indian investors are reluctant to invest in research and deep-tech startups.

“Many VCs do not even have a thesis on how to invest in deep tech,” Vardhan noted, referring to the lack of informed deep-tech investors in the country. In India, funding for AI startups — including those working on infrastructure and services — dropped nearly 80% in 2023 to $113.4 million from $554.7 million in 2022, according to Tracxn data.

Funding for AI startups in India totalled $8.2 million in the April-June 2024 quarter. In contrast, AI startups in the US received $27 billion in the same period, representing nearly half of all startup funding in the country.

While these numbers seem reasonable, they are much lower than the global standard set by OpenAI, Anthropic, or Mistral, which have raised billions of dollars. 

For reference, Sarvam AI, which has announced its intention to build foundational AI models, has raised a total of $41 million, while Ola Krutrim raised $50 million, becoming India’s first generative AI unicorn.

Speaking with AIM, Soket AI Labs’ founder and CEO Abhishek Upperwal revealed that current funding levels are just enough to sustain AI research within a startup. “Yes, there are fewer funds available here as compared to any foreign markets, but I also believe that we can maybe make do with that particular fund and then ultimately grow in scale after the seed stage,” said Upperwal. 

He explained that for the seed stage in India, funding of $5 million or $10 million, roughly INR 40-50 crore, is still a decent amount. This is still minuscule compared to the hundreds of millions raised by companies in the West.

“If VCs can trust these companies in the generative AI space, we can do wonderful stuff for sure,” he said. 

If not investors, the Indian government can also play a role. Under the IndiaAI Mission, the Union government has allocated INR 551.75 crore to advance AI research and applications. Earlier, in March 2024, the Cabinet had approved INR 10,354 crore for the AI Mission, which includes a provision of 10,000 graphics processing units for use by startups and universities.

The post Indian AI Startups are Powered by Jugaad, Not VC Money  appeared first on AIM.

]]>
https://analyticsindiamag.com/ai-origins-evolution/indian-ai-startups-are-powered-by-jugaad-not-vc-money/feed/ 0
Why is Erlang the Top Paid Programming Language in 2024? https://analyticsindiamag.com/ai-origins-evolution/how-on-earth-is-erlang-the-top-paid-programming-language/ https://analyticsindiamag.com/ai-origins-evolution/how-on-earth-is-erlang-the-top-paid-programming-language/#respond Thu, 01 Aug 2024 08:04:58 +0000 https://analyticsindiamag.com/?p=10131119 How on Earth is Erlang the Top Paid Programming Language

Despite its low profile, the language's robustness makes it indispensable in certain sectors.

The post Why is Erlang the Top Paid Programming Language in 2024? appeared first on AIM.

]]>
How on Earth is Erlang the Top Paid Programming Language

While everyone agrees that AGI will be built on either Python or Rust, surprisingly, they are not the programming languages that pay the most. Erlang developers have clinched the top spot, with the highest reported median salary of $100,636 in the latest Stack Overflow Developer Survey 2024. 

Erlang has continued to stay on top for the past few years. Even in last year’s Stack Overflow Developer Survey, it reported the highest median salary, of $99,492. Despite its relatively low adoption rate, Erlang’s niche yet powerful capabilities have made it a highly valued skill in the tech industry.

Who Even Uses Erlang?

The survey posed a straightforward question to developers: “What is your current total annual compensation (salary, bonuses, and perks, before taxes and deductions)?” The results revealed that Erlang developers enjoyed the highest median salaries among their peers, a trend that has piqued the curiosity of many in the tech community.

Erlang, developed in Sweden, powers some of the most influential startups of the 2010s and even earlier. Companies such as Goldman Sachs, Ericsson, WhatsApp, Amazon, and PepsiCo have built large parts of their technology on Erlang, and rebuilding those stacks is nearly impossible.

Though new developers are shifting to modern programming languages such as Python and Rust, expertise in Erlang is still sought after by companies maintaining their tech stacks. 

The high salaries of Erlang developers reflect the language’s importance in specific industries. While it may not be as widespread as JavaScript or Python, Erlang’s efficiency in handling concurrent processes makes it a critical tool for telecommunications, banking, and messaging systems.
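Erlang’s pull in these sectors comes from its actor model: huge numbers of lightweight processes that share nothing and communicate only via message passing. The snippet below is a rough Python analogy of that pattern (Python threads are far heavier than Erlang processes; this only sketches the model, not Erlang’s actual semantics):

```python
import queue
import threading

# A minimal "actor": owns its inbox, shares no state, replies via messages.
def actor(inbox: queue.Queue, outbox: queue.Queue):
    while True:
        msg = inbox.get()
        if msg is None:       # poison pill: shut the actor down
            break
        outbox.put(msg * 2)   # "handle" the message and send a reply

inbox, outbox = queue.Queue(), queue.Queue()
worker = threading.Thread(target=actor, args=(inbox, outbox))
worker.start()

for n in (1, 2, 3):           # fire-and-forget sends, no shared memory
    inbox.put(n)
inbox.put(None)
worker.join()

results = [outbox.get() for _ in range(3)]
print(results)  # [2, 4, 6]
```

In Erlang, each such actor is a VM-level process costing a few hundred bytes, which is why a single node can juggle millions of concurrent connections, the property telecom and messaging systems pay for.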

Interestingly, Peter Ullrich, a developer with extensive experience in Elixir and Erlang, noted the disparity between the high pay and low adoption rates. “Elixir and Erlang jobs have the highest median salaries, but their adoption is still quite low. There aren’t many Erlang and Elixir developers, but those that exist are paid really handsomely.”

A helplessly curious discussion on Reddit brought forth people’s confusion: “Can someone explain to me why Erlang is the top #1 best paid language of SO survey 2024?”

“It’s probably because it’s a language a lot of senior devs tend to find. So the pay may be more based on the experience in general and not necessarily Erlang itself,” replied a user. Erlang is often used in mission-critical applications requiring high concurrency and maintainability. 

Low Profile Ninja

Despite its low profile, the language’s robustness makes it indispensable in certain sectors. Plus, the limited number of Erlang developers creates a high demand for those proficient in the language. Companies committed to a tech stack involving Erlang often find it challenging to switch technologies, leading to higher salaries for maintaining their systems.

“C programmers are common. Erlang and Zig programmers aren’t. Supply and demand,” is the simple reason. The rarity of Erlang’s expertise drives up the compensation for those few who master it.

Erlang’s presence in the programming landscape is steady. It is neither making a significant comeback nor declining. Instead, it maintains its niche, especially in areas where reliability and fault tolerance are critical. 

Golang and Kubernetes, while popular, are seen as complementary rather than direct competitors to Erlang’s OTP.

Niche skills with high demand and low supply command premium salaries. As technology evolves, the need for specialised knowledge in robust, scalable languages like Erlang will likely continue, maintaining its status as a top-paying skill. As of now, though, jobs that require Erlang are hard to find, and the few developers who fill them are paid the highest.

The post Why ErLang is the Top Paid Programming Language in 2024? appeared first on AIM.

]]>
https://analyticsindiamag.com/ai-origins-evolution/how-on-earth-is-erlang-the-top-paid-programming-language/feed/ 0
GenAI Is NOT a Bubble, It’s a Tree  https://analyticsindiamag.com/ai-origins-evolution/genai-is-not-a-bubble-its-a-tree/ https://analyticsindiamag.com/ai-origins-evolution/genai-is-not-a-bubble-its-a-tree/#respond Thu, 01 Aug 2024 05:57:52 +0000 https://analyticsindiamag.com/?p=10131093

And its branching...

The post GenAI Is NOT a Bubble, It’s a Tree  appeared first on AIM.

]]>

Many believe that the rush to adopt generative AI may soon lead to a bubble burst. OpenAI, creator of ChatGPT, faces high operating costs and insufficient revenue, potentially leading to losses of up to $5 billion in 2024 and risking bankruptcy within a year.

OpenAI is expected to spend nearly $4 billion this year on Microsoft’s servers and almost $3 billion on training its models. With its workforce of around 1,500 employees, expenses could reach up to $1.5 billion. In total, operational costs may hit $8.5 billion, while revenue stands at only $3.4 billion.

However, some believe otherwise. “As long as Sam Altman is CEO of OpenAI, OpenAI will never go bankrupt. He will continue to drop mind-blowing demos and feature previews, and raise billions. I am not being sarcastic, it’s the truth,” posted AI influencer Ashutosh Shrivastava on X. 

He added that with products like Sora, the Voice Engine, GPT-4’s voice feature, and now SearchGPT, anyone who thinks OpenAI will go bankrupt is simply underestimating Altman.

As OpenAI prepares to seek more funding in the future, it’s essential for Altman to create more bubbles of hype. Without this, the industry risks underestimating the full impact of generative AI. 

Chinese investor and serial entrepreneur Kai-Fu Lee is bullish about OpenAI becoming a trillion-dollar company in the next two to three years. “OpenAI will likely be a trillion-dollar company in the not-too-distant future,” said Lee recently. 

On the contrary, analysts and investors from major financial institutions like Goldman Sachs, Sequoia Capital, Moody’s, and Barclays have released reports expressing concerns about the profitability of the substantial investments in generative AI.

Sequoia Capital partner David Cahn’s recent blog, “AI’s $600B Question”, points out the gap between AI infrastructure spending and revenue. He suggests the industry needs to generate around $600 billion annually to cover investment costs and achieve profitability.

Early Signs of an AI Bubble? 

Microsoft shares fell 7% on Tuesday as the tech giant reported lower-than-expected revenue. Revenue from its Intelligent Cloud unit, which includes the Azure cloud-computing platform, rose 19% to $28.5 billion in the fourth quarter, missing analysts’ estimates of $28.68 billion.

Despite that, the company announced plans to spend more money this fiscal year to enhance its AI infrastructure, even as growth in its cloud business has slowed, suggesting that the AI payoff will take longer than expected. 

Microsoft CFO Amy Hood explained that the spending is essential to meet the demand for AI services, adding that the company is investing in assets that “will be monetised over 15 years and beyond.” CEO Satya Nadella also said that Azure AI now boasts over 60,000 customers, marking a nearly 60% increase year-on-year, with the average spending per customer also on the rise. 

Last week, Google’s cloud revenue exceeded $10 billion, surpassing estimates for Q2 2024. The company is, however, facing increasing AI infrastructure costs. Google CEO Sundar Pichai insists, “The risk of under-investing far outweighs the risk of over-investing for us.” He warned, “Not investing to stay ahead in AI carries much more significant risks.”

“If you take a look at our AI infrastructure and generative AI solutions for cloud across everything we do, be it compute on the AI side, the products we have through Vertex AI, Gemini for Workspace and Gemini for Google Cloud, etc, we definitely are seeing traction,” Pichai said, elaborating that the company now boasts over two million developers playing around with Gemini on Vertex and AI Studio. 

“AI is exceptionally expensive, and to justify those costs, the technology must be able to solve complex problems, which it isn’t designed to do,” said Jim Covello, Goldman Sachs’ head of global equity research.

Really? 

Recently, Google DeepMind’s AlphaProof and AlphaGeometry 2 AI models worked together to tackle questions from the International Math Olympiad (IMO). The DeepMind team scored 28 out of 42 – enough for a silver medal but one point short of gold.

Meanwhile, the word on the street is that OpenAI is planning to start a healthcare division focused on developing new drugs using generative AI. Recently, the startup partnered with Moderna to develop mRNA medicines. The company is already working with Whoop, Healthify, and 10BedICU in healthcare. 

JPMorgan recently launched its own AI chatbot, LLM Suite, providing 50,000 employees (about 15% of its workforce) in its asset and wealth management division with a platform for writing, idea generation, and document summarisation. This rollout marks one of Wall Street’s largest LLM deployments.

“AI is real. We already have thousands of people working on it, including top scientists around the world like Manuela Veloso from Carnegie Mellon Machine Learning,” said JP Morgan chief Jamie Dimon, adding that AI is already a living, breathing entity.

“It’s going to change, there will be all types of different models, tools, and technologies. But for us, the way to think about it is in every single process—errors, trading, hedging, research, every app, every database—you’re going to be applying AI,” he predicted. “It might be as a copilot, or it might be to replace humans.”

Investor and Sun Microsystems founder Vinod Khosla is betting on generative AI and remains unfazed by the surrounding noise. “These are all fundamentally new platforms. In each of these, every new platform causes a massive explosion in applications,” Khosla said. 

Further, he acknowledged that the rush into AI might lead to a financial bubble where investors could lose money, but emphasised that this doesn’t mean the underlying technology won’t continue to grow and become more important.

Declining Costs

Dario Amodei, CEO of Anthropic, has predicted that training a single AI model, such as GPT-6, could cost $100 billion by 2027. In contrast, a trend is emerging towards developing small language models, more cost-efficient language models that are easier to run without requiring extensive infrastructure. 

OpenAI co-founder Andrej Karpathy recently said that the cost of building an LLM has come down drastically over the past five years due to improvements in compute hardware (H100 GPUs), software (CUDA, cuBLAS, cuDNN, FlashAttention) and data quality (e.g., the FineWeb-Edu dataset).

Abacus.AI chief Bindu Reddy predicted that in the next five years, smaller models will become more efficient, LLMs will continue to become cheaper to train, and LLM inference will become widespread. “We should expect to see several Sonnet 3.5 class models that are 100x smaller and cheaper in the next one to two years.”
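Karpathy’s point about falling costs and Reddy’s 100x prediction follow partly from a common scaling rule of thumb: training compute is roughly C ≈ 6ND FLOPs for N parameters and D training tokens, so shrinking the model shrinks compute linearly. A quick sanity check (the parameter and token counts below are illustrative assumptions, not figures from any real model card):

```python
# Rule-of-thumb training cost: C ≈ 6 * N * D FLOPs,
# for N parameters and D training tokens.
def train_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute in FLOPs."""
    return 6 * n_params * n_tokens

# Illustrative numbers only:
big = train_flops(70e9, 15e12)     # a 70B-parameter run on 15T tokens
small = train_flops(0.7e9, 15e12)  # a 100x smaller model on the same data
print(round(big / small))  # 100: compute scales linearly with parameter count
```

The same linearity works at inference time, which is why a model 100x smaller is, to first order, 100x cheaper to serve per token.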

The Bigger Picture 

Generative AI isn’t represented by a single bubble like the dot-com era but manifests in multiple, industry-specific bubbles. For example, generative AI tools for video creation, such as Sora and Runway, demand much more computational power than customer care chatbots. Despite these variations, generative AI is undeniably a technology with lasting impact and is here to stay.

“I think people are using ‘bubble’ too lightly and without much thought, as they have become accustomed to how impressive ChatGPT or similar tools are and are no longer impressed. They are totally ignoring trillion-dollar companies emerging with countless new opportunities. Not everything that grows is a bubble, and we should stop calling AI a bubble or a trend. It is a new way of doing things, like the internet or smartphones,” posted a user on Reddit. 

“AI is more like…a tree. It took a long time to germinate, sprouted in 2016, became something worth planting in 2022, and is now digging its roots firmly in. Is the tree bubble over now? Heh. Just like a tree, AI’s impact and value will keep growing and evolving. It’s not a bubble; it’s more like an ecosystem,” said another user on Reddit. 

The Bubblegum effect: The issue today is that investors are using OpenAI and NVIDIA as benchmarks for the AI industry, which may not be sustainable in the long term. While NVIDIA has had significant success with its H100s and B200s, it cannot afford to become complacent. 

The company must continually innovate to reduce training costs and maintain its edge. This concern is evident in NVIDIA chief Jensen Huang’s anxiety about the company’s future.

“I am paranoid about going out of business. Every day I wake up in a sweat, thinking about how things could go wrong,” said Huang. 

He further explained that in the hardware industry, planning two years in advance is essential due to the time required for chip fabrication. “You need to have the architecture ready. A mistake in one generation of architecture could set you back by two years compared to your competitor,” he said.

NVIDIA’s success should not be taken for granted, even with the upcoming release of its latest GPU, Blackwell. Alternatives to NVIDIA are increasingly available, particularly for inference tasks, including Google TPUs and Groq. Recently, Groq demonstrated impressive inference speed with Llama 3.1, and Apple selected Google TPUs over NVIDIA GPUs for its model training needs.

Most recently, AI hardware company Etched.ai unveiled Sohu, a chip purpose-built to run transformer models. Etched claims that Sohu can process over 500,000 tokens per second with Llama 70B, and that one 8xSohu server replaces 160 H100s. According to the company, “Sohu is more than ten times faster and cheaper than even NVIDIA’s next-generation Blackwell (B200) GPUs.”
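
Taking the company’s claims at face value (these are vendor marketing figures, not independent benchmarks), the arithmetic implied by the quoted numbers is straightforward:

```python
# Sanity-check the Sohu claims using only the figures quoted above.
# All numbers are Etched's own claims, not measured results.
server_throughput_tps = 500_000  # claimed tokens/sec on Llama 70B per 8xSohu server
chips_per_server = 8
h100s_replaced = 160             # H100s one 8xSohu server claims to replace

per_chip_tps = server_throughput_tps / chips_per_server
h100_equivalents_per_chip = h100s_replaced / chips_per_server

print(per_chip_tps)               # 62500.0 tokens/sec per Sohu chip
print(h100_equivalents_per_chip)  # 20.0 H100s per Sohu chip, by the vendor's claim
```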

Meta recently released Llama 3.1, which is currently competing with GPT-4o. Meta chief Mark Zuckerberg is confident that Llama 3.1 will have a similar impact on the AI ecosystem as Linux had on the operating system world. Moreover, Meta also recently launched AI Studio, which allows creators to build and share customisable AI agents.

In contrast, “I hate the AI hype and, at the same time, I think AI is very interesting,” said Linus Torvalds, the creator of the Linux kernel, in a recent conversation with Verizon’s Dirk Hohndel. When asked if AI is going to replace programmers and creators, Torvalds asserted that he doesn’t want to be a part of the AI hype.

He suggested that we should wait ten years before making broad announcements, such as claiming that jobs will be lost in the next five years. 

Bursting the Bubble 

With AI representing more than just a single bubble, some of these bubbles may burst. Gartner predicts that by the end of 2025, at least 30% of generative AI projects will be abandoned after the proof-of-concept stage due to factors such as poor data quality, inadequate risk control, escalating costs, and unclear business value.

Some start-ups that thrived during the initial AI boom are now encountering difficulties. Inflection AI, founded by ex-Google DeepMind veterans, secured $1.3 billion last year to expand their chatbot business. However, in March, the founders and some key employees moved to Microsoft. Other AI firms, like Stability AI, which developed a popular AI image generator, have faced layoffs. The industry also contends with lawsuits and regulatory challenges.

Meanwhile, Karpathy is confused as to why state-of-the-art LLMs can perform extremely impressive tasks (e.g. solving complex math problems) while simultaneously struggling with some very dumb ones, such as incorrectly determining that 9.11 is larger than 9.9. He calls this “Jagged Intelligence.”
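
The 9.11-versus-9.9 failure is easy to pin down numerically. One commonly suggested (though unconfirmed) explanation is that models conflate decimal numbers with dotted version strings, where the ordering genuinely flips:

```python
# As decimal numbers, 9.11 < 9.9, because 0.11 < 0.90.
assert 9.11 < 9.9

# But parsed as dotted version numbers (e.g. software releases),
# 9.11 comes *after* 9.9, since the integer component 11 > 9.
def as_version(s: str) -> tuple:
    return tuple(int(part) for part in s.split("."))

assert as_version("9.11") > as_version("9.9")
print("numeric: 9.11 < 9.9; version-style: 9.11 > 9.9")
```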

The post GenAI Is NOT a Bubble, It’s a Tree  appeared first on AIM.

Meet The Brains Behind AI Anchors on Doordarshan and Aaj Tak https://analyticsindiamag.com/ai-origins-evolution/meet-the-brains-behind-ai-anchors-on-doordarshan-and-aaj-tak/ https://analyticsindiamag.com/ai-origins-evolution/meet-the-brains-behind-ai-anchors-on-doordarshan-and-aaj-tak/#respond Thu, 01 Aug 2024 04:30:00 +0000 https://analyticsindiamag.com/?p=10131023 Meet The Brains Behind AI Anchors on Doordarshan and Aaj Tak

In the coming months, the startup will launch AI anchors for DD Sports and DD News.

The post Meet The Brains Behind AI Anchors on Doordarshan and Aaj Tak appeared first on AIM.


India’s public sector broadcaster Doordarshan recently introduced two AI anchors – Krish and Bhoomi – who deliver weather forecasts, commodity prices, farming trends, updates on agricultural research, and information on state welfare programmes to millions of farmers.

Enabling this is a Delhi-based startup called Personate AI. Incorporated in 2021, the startup is helping broadcasters, media houses and even content creators develop virtual AI agents.

Last year, Aaj Tak became the first broadcaster in India to host an AI anchor named Sana. The virtual anchor is multilingual and provides news updates multiple times throughout the day.

“We introduced India’s first AI anchor, Sana, at the India Today Conclave in the presence of Prime Minister Narendra Modi. Following that, we launched several anchors for various brands, including Vendhar TV in South India and ongoing campaigns for Zee.

“Since then, AI anchors have been developed for multiple news channels, including Russian media,” Rishab Sharma, chief technology officer & co-founder at Personate.ai, told AIM.

The startup has developed eight different AI anchors for Aaj Tak and successfully launched them across its regional channels. Last month, Modi’s interview with the channel was also translated and broadcast in seven hyperlocal languages using the startup’s technology.

Building on this success, the startup approached Prasar Bharati, which was impressed with its vision and chose to integrate AI anchors for DD Kisan. Sharma also revealed that AI anchors will soon be introduced for other Doordarshan channels, including DD Sports and DD News.

Personate.ai is also co-founded by Akshay Sharma, who serves as the CEO. Completely bootstrapped and profitable, the company has over 25 enterprise customers. 

Personate AI Studio

The startup has created an AI studio that allows users to produce a clone or a digital avatar of themselves. To date, the startup has collaborated with five media houses, achieving an average return on investment (ROI) of approximately 160%, according to Sharma.

“It’s adding more viewers per minute. Indian viewers have become accustomed to viewing AI content, be it a reel or a video shot on social media,” he said.

Personate’s AI studio replaces the human component with a synthetic one. This synthetic element can be a clone of an actual human or content creator, or a fully synthetic personality.

For Aaj Tak, the startup also created a clone of their managing editor Anjana Om Kashyap. This was done by taking a short video clip, typically five minutes, of the individual. With AI, the clone was ready within minutes to read anything on screen using text-to-speech technology.

In contrast, creating a synthetic anchor involves designing from scratch. “This process includes pixel manipulation, designing the body, overlaying textures and clothing, and stitching the face. For a synthetic person, we spend about a week crafting the design,” Sharma revealed.

The personality is created with a 3D model by adding textures and shapes to the body. Once the model is ready, AI steps in to control its movements. 

Explaining in the context of video games, Sharma said that traditionally, animators decided how a character moved. “Here, the role of the animator shifts to generative AI, which now directs how the character behaves and interacts,” he said.

Large Vision Models 

Personate has developed a large vision model (LVM), a generative AI model similar to an LLM, except that it generates pixels instead of text. Popular examples of LVMs include OpenAI’s Sora and Google’s Imagen.

The AI model translates a 3D anchor to a 2D screen, ensuring that the output is ultra-realistic. On a 2D screen, the challenge is how to rotate the model and adjust its positioning relative to camera angles, even though there is no actual camera. 

“The goal is for the model to behave as if it’s responding to the camera, creating a convincing and lifelike appearance,” Sharma said.

One of the biggest challenges in training an LVM is data: models like OpenAI’s Sora are trained on trillions of data points. According to Sharma, Personate’s AI model too is trained on multi-trillion data points.

The startup tapped into the past experiences of the founders to collect data to train the model. Sharma revealed he began his journey with the Indian Space Research Organisation (ISRO) and later worked with Reliance.

Currently, Personate.ai is the only Indian company with such capabilities. Beyond India, Synthesia, a startup based in London, offers similar solutions.


Synthesia’s platform enables users to create videos using pre-generated AI avatars or by generating digital representations of themselves, which they call artificial reality identities. The startup is backed by NVIDIA and is being leveraged by the United Nations.

Why Mark Zuckerberg Is Selfish With Open Source https://analyticsindiamag.com/ai-origins-evolution/why-mark-zuckerberg-is-selfish-with-open-source/ https://analyticsindiamag.com/ai-origins-evolution/why-mark-zuckerberg-is-selfish-with-open-source/#respond Tue, 30 Jul 2024 12:30:00 +0000 https://analyticsindiamag.com/?p=10130763

The post Why Mark Zuckerberg Is Selfish With Open Source appeared first on AIM.

With the release of Llama 3.1, Mark Zuckerberg has established himself as the king of open-source AI. Contrary to popular belief, Zuckerberg has admitted that his pursuit of an open-source strategy is driven by somewhat selfish reasons: he wants to influence how the models are developed and integrated into the social fabric.

“We’re not pursuing this out of altruism, though I believe it will benefit the ecosystem. We’re doing it because we think it will enhance our offerings by creating a strong ecosystem around contributions, as seen with the PyTorch community,” said Zuckerberg at SIGGRAPH 2024.

“I mean, this might sound selfish, but after building this company for a while, one of my goals for the next ten or 15 years is to ensure we can build the fundamental technology for our social experiences. There have been too many times when I’ve tried to build something, only to be told by the platform provider that it couldn’t be done,” he added.

Zuckerberg does not want the AI industry to follow the path of the smartphone industry, as seen with Apple. “Because of its closed ecosystem, Apple essentially won and set the terms. Apple controls the entire market and profits, while Android has largely followed Apple. I think it’s clear that Apple won this generation,” he said.

He explained that when something becomes an industry standard, other folks’ work starts to revolve around it. “So, all the silicon and systems will end up being optimised to run this thing really well, which will benefit everyone. But it will also work well with the system we’re building, and that’s, I think, just one example of how this ends up being really effective,” he said.

Earlier this year, Meta open-sourced Horizon OS built for its AR/VR headsets. “We’re basically making the Horizon OS that we’re building for mixed reality an open operating system, similar to what Android or Windows was. We’re making it so that we can work with many different hardware companies to create various kinds of devices,” said Zuckerberg.

Jensen Loves Llama 

NVIDIA chief Jensen Huang could not agree more with Zuckerberg. He said that using Llama 2, NVIDIA has developed fine-tuned models that assist engineers at the company. 

“We have an AI for chip design and another for software coding that understands USD (Universal Scene Description) because we use it for Omniverse projects. We also have an AI that understands Verilog, our hardware description language. We have an AI that manages our bug database, helps triage bugs, and directs them to the appropriate engineers. Each of these AIs is fine-tuned based on Llama,” said Huang.

“We fine-tune them, we guardrail them. If we have an AI designed for chip design, we’re not interested in asking it about politics, you know, and religion and things like that,” he explained. Huang joked that an AI chip engineer is costing them just $10 an hour.

Moreover, Huang said he believes the release of Llama 2 was “the biggest event in AI last year.” He explained that this was because suddenly, every company, enterprise, and industry—especially in healthcare—was building AI. Large companies, small businesses, and startups alike were all creating AIs. It provided researchers with a starting point, enabling them to re-engage with AI. And he believes that Llama 3.1 will do the same.

Army of AI Agents 

Meta released AI Studio yesterday, a new platform where people can create, share, and discover AIs without needing technical skills. AI Studio is built on the Llama 3.1 models. It allows anyone to build and publish AI agents across Messenger, Instagram, WhatsApp, and the web.

Taking a dig at OpenAI, Zuckerberg said, “Some of the other companies in the industry are building one central agent.

“Our vision is to empower everyone who uses our products to create their own agents. Whether it’s the millions of creators on our platform or hundreds of millions of small businesses, we aim to pull in all your content and quickly set up a business agent.” 

He added that this agent would interact with customers, handle sales, take care of customer support, and more. 

Forget Altman, SAM 2 is Here 

While the world is still awaiting the voice features in GPT-4o as promised by Sam Altman, Meta released another model called SAM 2. Building upon the success of its predecessor, SAM 2 introduces real-time, promptable object segmentation capabilities for both images and videos, setting a new standard in the industry.

SAM 2 is the first model to unify object segmentation across both images and videos. This means that users can now seamlessly apply the same segmentation techniques to dynamic video content as they do to static images.

One of the standout features of SAM 2 is its ability to perform real-time segmentation at approximately 44 frames per second. This capability is particularly beneficial for applications that require immediate feedback, such as live video editing and interactive media.
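
At the quoted 44 frames per second, that leaves a per-frame compute budget of roughly 23 milliseconds, which is the constraint that makes live editing workflows feasible:

```python
# Per-frame time budget implied by the ~44 fps figure quoted above.
fps = 44
frame_budget_ms = 1000 / fps
print(round(frame_budget_ms, 1))  # 22.7 ms to segment each frame
```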

Huang said this would be particularly useful since NVIDIA is now training robots, believing that the future will be physical AI. “We’re now training AI models on video so that we can understand the world model,” said Huang.

He added that they will connect these AI models to the Omniverse, allowing them to better represent the physical world and enabling robots to operate in these Omniverse worlds.

On the other hand, this model would be beneficial for Meta, as the company is bullish on its Meta Ray-Ban glasses. “When we think about the next computing platform, we break it down into mixed reality, the headsets, and the smart glasses,” said Zuckerberg.

Soon, There Could be More AI Agents Than Humans  https://analyticsindiamag.com/ai-origins-evolution/soon-there-could-be-more-ai-agents-than-humans/ https://analyticsindiamag.com/ai-origins-evolution/soon-there-could-be-more-ai-agents-than-humans/#respond Tue, 30 Jul 2024 12:09:29 +0000 https://analyticsindiamag.com/?p=10130759

Everyone could have an AI Agent built into their smartphones

The post Soon, There Could be More AI Agents Than Humans  appeared first on AIM.

Meta CEO Mark Zuckerberg recently said there could soon be more AI agents in the world than humans. 

While speaking to Rowan Cheung on a podcast, he said there are millions of small businesses in the world, and in the future all of them could have AI agents carrying out functions such as customer support and sales.

“I think every business – just like they have an email address, a website, and a social media presence today – will have an AI agent that their customers can talk to in the future,” he said.

AI agents will not just be limited to businesses; content creators too could have their own AI agents. He said there are over 200 million people on Meta’s social media platforms who consider themselves creators. 

However, they struggle with limited time to engage with their communities, which desire more interaction. According to Zuckerberg, a potential breakthrough could involve integrating social media data with AI to reflect creators’ values and goals. 

This could result in interactive, artistic artefacts and diverse AI agents tailored to individual needs and functions.

AI Agents for Everyone 

In a recent interaction with AIM, Okta’s president for business operations, Eugenio Pace, said every employee in an organisation could have their own AI agents carrying out certain aspects of their day-to-day tasks. 

On the consumer front too, there could be an AI agent built into your smartphone which could make a reservation, book a hotel or order a pizza on your behalf. 

Venture capitalist Vinod Khosla predicts that most consumer interactions online will involve AI agents handling tasks and filtering out marketers and bots. 

By 2025, the world is projected to have 7.4 billion smartphone users, which could potentially mean 7.4 billion AI agents. These AI agents need not be limited to smartphones; they would be accessible through your PCs or voice-activated virtual assistants like Alexa.

If this happens, Zuckerberg’s vision of more AI agents than humans would indeed come true. What’s more, most individuals could have multiple AI agents that could be leveraged for different purposes, be it for business or personal use. 

But do we have the technology to enable this?

Everyone is Building One

Earlier this year, OpenAI unveiled GPT-4o, demonstrating its impressive ability to converse almost indistinguishably from a human. 

While OpenAI is yet to make the voice capabilities of GPT-4o available, according to reports, the startup is working on a new AI technology under the code name Strawberry to significantly enhance the reasoning capabilities of its AI models.

At Google I/O 2024, the tech giant unveiled Project Astra, a first-of-its-kind initiative to develop universal AI agents capable of perceiving, reasoning, and conversing in real-time.

In the same podcast, Zuckerberg reiterated that if every business were indeed going to have its own AI agent, he would want to be the enabler for these businesses. 

Very recently, speaking to NVIDIA chief Jensen Huang at the ACM SIGGRAPH conference, Zuckerberg announced AI Studio, which helps create custom AI chatbots and characters. 

Built with Llama 3.1, AI Studio lets creators build an extension of themselves or an AI agent based on certain interests. 

That is not all. Hyperscalers like Microsoft, Google, and AWS have announced capabilities that can help enterprises build AI agents. Earlier this month, at the AWS New York Summit, the cloud provider announced that AI agents built through Amazon Bedrock would have enhanced memory and code interpretation capabilities.

Interestingly, many startups, too, are building AI agents for specific use cases. YC-backed Floworks recently announced ThorV2, which it claims is better than GPT-4o. Similarly, another Indian startup called KOGO is helping enterprises build AI agents in Indian languages. 

Laying the Groundwork 

Companies are not only developing AI agents but also laying the groundwork for their implementation. Meanwhile, others are discovering new business opportunities in the emerging field.

For instance, Pace revealed that Okta, which sells subscriptions to enterprises for its identity software, could provide the same for AI agents, thus significantly boosting their revenue. 

Meanwhile, Beckn Protocol, which is an open specification that creates a common language for interoperability among different digital platforms and networks, could prove to be useful in the era of AI agents.

Sujith Nair, the CEO and co-founder of FIDE, earlier told AIM that a foundational contract structure will be needed when two AI agents communicate on humans’ behalf.

“This necessitates a programmable, machine-readable method of contracting in real-time, and achieving this requires an interoperable protocol like Beckn,” he said. 

Beckn ensures there’s a verification process for these transactions when two AI agents (buyer and seller) are talking to each other.

Web Development was Designed by Satan https://analyticsindiamag.com/ai-origins-evolution/web-development-was-designed-by-satan/ https://analyticsindiamag.com/ai-origins-evolution/web-development-was-designed-by-satan/#respond Tue, 30 Jul 2024 10:36:43 +0000 https://analyticsindiamag.com/?p=10130750 Web Development was Designed by Satan for Software Engineers

Web development may be seen by some as a "necessary evil”, but it remains an integral part of the software engineering landscape.

The post Web Development was Designed by Satan appeared first on AIM.


Web development is often seen as a chaotic field by many in the software engineering community. This sentiment was aptly captured by a user who famously went on a ranting rampage in his post ‘Web development is f*** stupid’. 

“I have never seen such poorly written languages as JavaScript and TypeScript in my life,” he wrote in his Reddit post.  

He further added that the dependency management with npm and Yarn (JavaScript package managers) is a nightmare and “frameworks like React, Redux, and Next.js are constantly changing for no reason, making the entire process unnecessarily complicated”.

Despite the strong language, his viewpoint resonates with many developers who find web development frustrating. One user commented, “I hate how most of the software engineering is web development.” Another added, “I started off learning video games and desktop apps, but by the time I finished college, I realised most of software engineering is web apps, which I despise.”

Why the Hate?

Many developers feel that web development is plagued by poor management and constant changes. “There’s always some new framework everyone thinks will be the holy grail, but it’s always just as bad as everything else,” one user lamented. 

Another agreed, saying that no other area of software engineering releases as many updates and new frameworks as web apps, making it hard to keep up with.

However, not everyone shares this negative view. Some see the challenges of web development as opportunities for interesting projects. “There are so many interesting projects in web development. People think it’s just building simple websites, but web apps can have so many interesting challenges and depth,” a user noted. 

Another pointed out, “Web applications today are just the desktop applications of yesterday, thanks to advancements in computing power.”

In web development, frameworks often become obsolete within a few years, replaced by newer technologies. Developers must learn these new frameworks, update existing code, and hope for smooth transitions. And now, there are people who say that TypeScript is going to replace JavaScript, which brings back the same conversation about the changing landscape of web development. 

Too Much to Deal With

Web developers in software development agencies face even greater challenges. While some agencies stick to a specific technology stack, most take on any web-related projects they can find.

Others believe that the constant updates and new frameworks are a necessary part of the evolution of technology. “Web development has nothing to do with websites. It has everything to do with web applications. The reason web dev jobs are so common is because 99.9% of software is delivered through a browser or mobile app now,” a user explained.

Despite the criticism, some developers find web development to be intuitive and refreshing. “And no one needs to switch to every latest web framework either. If you have to, it’s not that bad if your foundation is solid.”

Web development may be seen by some as a “necessary evil”, but it remains an integral part of the software engineering landscape. 

“Computers are literally designed based on the principle of abstraction. Even if you’re coding in binary, you’re still abstracting away a ton of stuff,” said a user in the discussion. Web development, with all its layers and complexities, is just another step in the ongoing evolution of technology.

The hate for web development is not new. A simple search on Google, X, Quora, Reddit, or any community platform throws up thousands of posts complaining about the constantly changing paradigm of web development, going back more than a decade.

Regardless, according to the Stack Overflow Developer Survey 2024, JavaScript has remained the most popular programming language over the past decade. This can be attributed to the fact that JS is used almost everywhere, not that it is loved by everyone.

Skilled Techies Don’t Want to Work for Big-tech Companies Anymore https://analyticsindiamag.com/ai-origins-evolution/skilled-techies-dont-want-to-work-for-big-tech-companies-anymore/ https://analyticsindiamag.com/ai-origins-evolution/skilled-techies-dont-want-to-work-for-big-tech-companies-anymore/#respond Tue, 30 Jul 2024 08:17:07 +0000 https://analyticsindiamag.com/?p=10130716 Skilled Techies Don’t Want to Work for Big-tech Companies Anymore

The same goes for Indian IT.

The post Skilled Techies Don’t Want to Work for Big-tech Companies Anymore appeared first on AIM.


Despite the allure of big tech’s big-ticket jobs, many top-tier talents from Ivy League schools, award-winning researchers, and prolific authors are shunning giants like Google, Microsoft, Amazon, and Meta to work for smaller or mid-sized companies. 

A Reddit discussion highlighted that this prevailing sentiment stems from a desire to escape the corporate politics often inherent in large organisations. As one ML engineer puts it, “Why deal with the politics of a big company when you can get funding for your projects?” 

The freedom and autonomy that smaller companies promise can often be more appealing than the bureaucratic hurdles of big tech.

Burnout is another critical factor. Many skilled professionals are now prioritising work-life balance over the prestige associated with working for a tech giant. This sentiment echoes a broader cultural shift where mental health and personal well-being are becoming increasingly important. 

For Indian tech professionals, one of the main attractions of big-tech jobs is the higher salary package, coupled with their long-standing desire to work at one of these prestigious companies.

Less Inspiring Work

Financial motivations, while important, are not always the driving force. The nature of the work itself also plays a role. Some ML experts find the projects at big-tech companies less inspiring. 

“Most of the projects at MAANG [Meta, Amazon, Apple, Netflix, and Google] companies are boring,” one contributor mentioned. There’s a preference for roles where they can have a more significant impact on the AI roadmap, which smaller firms often provide.

Moreover, the elaborate hiring processes at big-tech companies can be off-putting. As an ML engineer pointed out, “Getting into MAANG is an entirely separate field that requires you to study and practise an entire hobby/career path unrelated to your ML expertise.” Busy ML leaders might not have the time or inclination to master the intricate and often lengthy recruitment processes of these giants.

Additionally, the work environment and corporate culture in these tech giants can be stifling. One ex-employee described their experience: “Google was a fun, exciting, and innovative place to work in 2004. Twenty years later, it’s decayed into the same bland, vapid, beige-coloured evil as Microsoft.” 

The transformation of these workplaces over time often leads to disillusionment among those who seek dynamic and innovative environments.

Another compelling reason is the opportunity for greater research agency and visibility at smaller firms. “I prefer smaller! Much cosier, less politics, and most importantly: waaaaay more research agency,” said an ML professional. 

In smaller companies, top talents often have more freedom to pursue their research interests without the constraints of a rigid corporate structure. As one ML researcher summarised, “It’s a trade-off for sure, but you get more autonomy. R&D changes so fast, so not having that autonomy can feel a little scary.”

At the same time, it is undeniable that big tech produces some of the top research. People who cite autonomy as the reason, though correct, overlook the opportunity to produce SOTA research at big tech.

So, while big-tech companies can offer substantial salaries, many skilled professionals find that the trade-offs in terms of autonomy, work-life balance, and ethical considerations make smaller firms more attractive.

The Same Goes for IT

The situation is only a little different for Indian IT. Though research and development at these companies needs good talent, Indian researchers do not want to join them. 

According to several predictions, the number of CS graduates by 2025 is going to be three or four times higher than in 2020. This shows the huge supply of graduates in the field. But a matching number of jobs is not available in the Indian sector. Forget big tech; even Indian IT is not attractive to graduates in the country. 

Though India is seeing an increase in talent retention, there seems to be a surplus of underskilled STEM graduates.

In India, the situation is complex. It is extremely difficult to find good, or even decent, software engineers with coding skills for such small compensation. Meanwhile, the ones who have the skills are either already working for startups at a higher package, or have moved abroad for better opportunities. 

The reluctance of recent graduates to pursue careers in Indian IT can be attributed to the prolonged stagnation of entry-level salaries, which have remained at INR 3.5-4 LPA for over a decade. High-paying product companies with compensation packages ranging from Rs 10-20 LPA have become more attractive.
