Machine Learning News, Stories and Latest Updates
Artificial Intelligence news, conferences, courses & apps in India
https://analyticsindiamag.com/news/machine-learning/

Synthetic Data Generation in Simulation is Keeping ML for Science Exciting
https://analyticsindiamag.com/ai-breakthroughs/synthetic-data-generation-in-simulation-is-keeping-ml-for-science-exciting/ | Fri, 09 Aug 2024

Simulations allow researchers to generate vast amounts of synthetic data, which can be critical when real-world data is scarce, expensive, or challenging to obtain.

If only AI could create infinite streams of data for training, we wouldn’t have to deal with the problem of not having enough of it. Data scarcity is what keeps many questions in science unexplored, as only a limited amount of real-world data is available for training.

This is where AI is taking up a crucial role with the help of simulation. The integration of data generation through simulation is rapidly becoming a cornerstone in the field of ML, especially in science. This approach not only holds promise but is also reigniting enthusiasm among researchers and technologists. 

As Yann LeCun pointed out, “Data generation through simulation is one reason why the whole idea of ML for science is so exciting.”

Simulations allow researchers to generate vast amounts of synthetic data, which can be critical when real-world data is scarce, expensive, or challenging to obtain. For instance, in fields like aerodynamics or robotics, simulations enable the exploration of scenarios that would be impossible to test physically.
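
As a rough illustration of this workflow, the sketch below generates synthetic training data from a toy projectile "simulator" and fits a surrogate model to it. The physics, feature names, sample sizes, and model choice are all invented for illustration and are not tied to any project mentioned in this article.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

def simulate_range(velocity, angle_deg, g=9.81):
    # Toy "simulator": projectile range on flat ground, plus sensor-like noise.
    angle = np.radians(angle_deg)
    return (velocity ** 2) * np.sin(2 * angle) / g + rng.normal(0, 0.5, size=velocity.shape)

# Sample simulator inputs to generate as much synthetic data as we like.
velocity = rng.uniform(5, 50, size=10_000)
angle_deg = rng.uniform(5, 85, size=10_000)
X = np.column_stack([velocity, angle_deg])
y = simulate_range(velocity, angle_deg)

# Train a cheap surrogate model on the simulated data.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
surrogate = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)
print("Surrogate R^2 on held-out simulated data:", round(surrogate.score(X_test, y_test), 3))
```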

Richard Socher, the CEO of You.com, highlighted that while there are challenges, such as the combinatorial explosion in complex systems, simulations offer a pathway to manage and explore these complexities. 

Synthetic Data is All You Need?

This echoes what Anthropic chief Dario Amodei said about producing quality data synthetically: that it seems feasible to create an infinite data generation engine that can help build better AI systems.

“If you do it right, with just a little bit of additional information, I think it may be possible to get an infinite data generation engine,” said Amodei, while discussing the challenges and potential of using synthetic data to train AI models.

“We are working on several methods for developing synthetic data. These are ideas where you can take real data present in the model and have the model interact with it in some way to produce additional or different data,” explained Amodei. 

Taking the example of AlphaGo, Amodei said that those little rules of Go, the little additional piece of information, are enough to take the model from “no ability at all to smarter than the best human at Go”. He noted that the model there just trains against itself with nothing other than the rules of Go to adjudicate.

Similarly, OpenAI is a big proponent of synthetic data. The former team of Ilya Sutskever and Andrej Karpathy has been a significant force in leveraging synthetic data to build AI models. 

The development at OpenAI is testimony to how far generative AI has come across the ecosystem, though not everyone agrees that it will achieve AGI with the current approach to model training. Microsoft is also researching in this direction; its paper ‘Textbooks Are All You Need’ is a testament to the power of synthetic data.

Google’s AlphaFold, which is spearheading protein structure prediction for drug discovery, could also benefit immensely from synthetic data. At the same time, relying on such data in a sensitive field like science can be worrying.

Synthetic Data is Too Synthetic

However, the potential of simulations extends beyond mere data generation. Giuseppe Carleo, another expert in the field, emphasised that the most exciting aspect is not just fitting an ML model to data generated by an existing simulator. 

Instead, true innovation lies in training ML models to become advanced simulators themselves—models that can simulate systems beyond the capabilities of traditional methods, all while remaining consistent with the laws of physics.

This is becoming possible with synthetic data generated by agentic AI models, which are proliferating across the field. Models that can test, train, and fine-tune themselves on the data they create are an exciting prospect for the future of AI research.

Moreover, the discussion around simulations also touches on broader applications. Sina Shahandeh, a researcher in the field of biotechnology, for example, suggested that the ultimate simulation could model entire economies using an agent-based approach, a concept that is slowly becoming feasible.

Despite the excitement, the field is not without its sceptics. Stephan Hoyer, a researcher with a cautious outlook on AGI, pointed out that simulating complex biological systems to the extent that training data becomes unnecessary would require groundbreaking advancements. 

He believes this task is far more challenging than achieving AGI. Similarly, Jim Fan, senior AI scientist at NVIDIA, said that while synthetic data is expected to have a noteworthy role, blind scaling alone will not suffice to reach AGI.

When it comes to science, using synthetic data can be tricky. But generating it in simulation shows promise, as models can be tried and tested without being deployed in real-world applications. Besides, the possibility of effectively infinite data is what keeps ML exciting for researchers.

You See, C is Still the King in the Sea of Languages
https://analyticsindiamag.com/ai-insights-analysis/c-language/ | Thu, 01 Aug 2024

Despite the drawbacks, the world still benefits from C even though higher-level languages are more commonly used

In a recent experiment, TalentNeuron’s machine learning lead, Andriy Burkov, optimised a Python-based text processing task by rewriting it in C with the help of the AI assistant Claude. The Python implementation took 63 minutes, while the C version completed the task in just 2.3 minutes, a significant performance boost.

With all the new “modern” languages out today, how is C still believed to be the fastest and “closest to the machine”? 

C Over Python

While Python is renowned for its simplicity and ease of use, it is also known for its slower execution times. C, on the other hand, is renowned for its speed. This is primarily because it can be compiled straight into assembly or machine code before being executed.

C programs execute quickly primarily because they are translated into machine code prior to execution. Since machine code is the language that computers comprehend directly, no additional translation is required when the program is operating. 

Pre-translation prevents the needless extra steps that can impede the speed of programs written in other languages that may require real-time translation into machine code. C programs can operate substantially more quickly by omitting this translation stage during execution.
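
To make the contrast concrete, the hedged sketch below uses Python's standard-library dis module to show the bytecode CPython produces for a small, made-up text-processing function. The interpreter executes this bytecode instruction by instruction at runtime, whereas a C compiler would emit native machine code once, ahead of time.

```python
import dis

def count_vowels(text: str) -> int:
    # A tiny text-processing kernel: count the vowels in a string.
    return sum(1 for ch in text if ch in "aeiou")

# CPython compiles this function to bytecode that its virtual machine
# interprets at runtime; an equivalent C function would already be
# native machine code by the time the program runs.
dis.dis(count_vowels)
print(count_vowels("synthetic data generation"))
```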

Thanks to its portability and effectiveness, C is frequently used to implement compilers, libraries, and interpreters for other programming languages. The reference implementations of interpreted languages such as PHP, Python, and Ruby are written in C.

Another factor that distinguishes C from other languages is that it is a smaller, simpler language. In a discussion on Reddit, a developer named Blargh said, “It hasn’t changed very much in 30 years. Its limitations are usually tolerable for the problem domain it was designed for.”

There isn’t Much That’s Special About C

Having said that, many developers believe speed is natural to C and not a unique thing. In a Stack Overflow discussion, developer Sebastian Karlsson pointed out that C lacks features like garbage collection, dynamic typing and other facilities.

“Newer languages have all these, hence there is additional processing overhead which will degrade the performance of the application,” he added.

If anything, that is actually a minus point for C. “These require programmers to manually manage memory allocation and deal with static typing,” said Karlsson.

In another Reddit discussion, a developer pointed out the same problem. “If you don’t mind that a program might behave unpredictably when given bad inputs, a C program might run faster because it doesn’t always check for things like whether you’re accessing an array outside its bounds. This lack of checks can make C code run faster than code in other languages that do include these safety checks.”

However, if you need your program to be very secure and free from exploits like arbitrary code execution, you would need to add extra safety checks in C, making it cumbersome.
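
As a small illustration of the kind of runtime check being discussed, the Python snippet below (with an invented list and index) shows the bounds check an interpreted language performs on every indexing operation; a plain C array access compiles to a raw memory read with no such guard, which is faster but can silently misbehave on bad input.

```python
scores = [10, 20, 30]

try:
    print(scores[7])          # Python checks bounds on every access...
except IndexError as err:
    print("caught:", err)     # ...and raises instead of reading stray memory.
```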

For some, Rust presents a compelling alternative to C, particularly for projects that prioritise safety and maintainability without sacrificing performance. So much so that Microsoft CTO Mark Russinovich said, “It’s time to halt starting any new projects in C/C++ and use Rust for those scenarios where a non-GC language is required. For the sake of security and reliability, the industry should declare those languages as deprecated.”

C is Still the King

Despite the drawbacks, the world still benefits from C even though higher-level languages are more commonly used. A majority of the work on Microsoft’s Windows kernel is done in C, however some are done in assembly languages. 

With almost 85% of the market share, the most popular operating system in the world has been running on a C-written kernel for many years. 

Additionally, Linux is primarily written in C, with some assembly code. The Linux kernel is used by roughly 97% of the 500 most powerful supercomputers in the world.

The kernels for iOS, Android, and Windows Phone are also written in C. These are essentially versions of the current macOS, Linux, and Windows kernels adapted for mobile devices. Thus, C-written kernels power the devices you use on a daily basis.

Google Slashes Computer Power Needed for Weather Forecast by 2-15 Days
https://analyticsindiamag.com/ai-trends-future/google-slashes-computer-power-needed-for-weather-forecast-by-2-15-days/ | Mon, 29 Jul 2024

The NeuralGCM seems like a significant advancement in pure ML-based modelling at first glance.

Google Research has developed a breakthrough hybrid general circulation model (GCM) that combines cutting-edge machine learning components with conventional physics-based techniques to improve weather forecasting.

This innovative research on Neural General Circulation Models, which was published in Nature, demonstrates how NeuralGCM may improve weather and climate prediction accuracy beyond that of standalone machine-learning models and traditional GCMs.

NeuralGCM, which was created in collaboration with the European Centre for Medium-Range Weather Forecasts (ECMWF), enhances simulation efficiency and accuracy by fusing ML with conventional physics-based modelling.

Breakthrough in Climate Modelling

Google CEO Sundar Pichai has called it a breakthrough in climate modelling. Google claims that, compared with the existing gold-standard physics-based models, this approach offers more accurate weather forecasts over 2-15 day horizons.

Besides, it is also capable of reproducing temperatures over the last 40 years more accurately than traditional atmospheric models. Unlike traditional models, NeuralGCM combines traditional physics-based modelling with ML for improved simulation accuracy and efficiency. 

According to Stephan Hoyer, an AI researcher at Google Research, NeuralGCM is a combination of physics and AI.
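
NeuralGCM's published design is a differentiable dynamical core with learned components, and its real code is far more involved than anything shown here. Purely as a toy illustration of the general "physics step plus learned correction" idea, here is a sketch in which a cheap physics step is corrected by a term fitted to residuals; every function, feature, and number is invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def coarse_physics_step(x, dt=0.1):
    # Cheap, approximate "physics": simple linear damping of each grid cell.
    return x - dt * 0.5 * x

def true_step(x, dt=0.1):
    # Stand-in for the expensive, high-fidelity model we want to emulate:
    # the same damping plus a nonlinear term the coarse physics ignores.
    return x - dt * (0.5 * x + 0.2 * x ** 3)

# Generate training states and the residual the coarse physics misses.
states = rng.normal(size=(2000, 1))
residuals = true_step(states) - coarse_physics_step(states)

# "Learn" a correction: a least-squares fit on simple features (x, x^3).
features = np.hstack([states, states ** 3])
coef, *_ = np.linalg.lstsq(features, residuals, rcond=None)

def hybrid_step(x, dt=0.1):
    # Physics tendency plus learned correction: the core idea behind hybrid models.
    correction = np.hstack([x, x ** 3]) @ coef
    return coarse_physics_step(x, dt) + correction

x0 = np.array([[1.5]])
print("coarse:", coarse_physics_step(x0), "hybrid:", hybrid_step(x0), "true:", true_step(x0))
```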

To prove their claim, the researchers used a defined set of forecasting tests called WeatherBench 2 to compare NeuralGCM against other models. NeuralGCM performed comparably to other machine-learning weather models like Pangu and GraphCast for three- and five-day forecasts.  

Not The Only One

While NeuralGCM can be called a breakthrough in climate modelling, it isn’t the only one. NVIDIA Earth-2 is a full-stack, open platform that combines physical simulations and machine learning models, like FourCastNet and GraphCast, with NVIDIA’s tools for data visualisation. 

However, unlike NeuralGCM, Earth-2 focuses on creating a virtual representation of Earth to quickly and accurately simulate and visualise the global atmosphere.

Then, there is the AI2 Climate Emulator (ACE), developed by the Allen Institute for Artificial Intelligence (AI2). ACE focuses on quickly mimicking complex climate models using deep learning, allowing researchers to run fast simulations and test climate scenarios efficiently.

Not A Big Achievement

“An important advance in atmospheric modelling and long-term weather prediction, but not necessarily a giant leap in climate prediction.” This was how Texas A&M University atmospheric sciences professor R Saravanan described the findings. Saravanan was not involved in the research.

“The NeuralGCM seems like a significant advancement in pure ML-based modelling at first glance,” Saravanan remarked. “In reality, the opposite is true—the paper emphasises the shortcomings of purely ML-based approaches.”

NASA’s Goddard Institute for Space Studies director Gavin Schmidt said that scientists estimate global heating from greenhouse gases as a range due to climate’s inherent chaos, similar to weather predictions like a “40% chance of rain”. 

“Physics-based models can better address this uncertainty by simulating the underlying physics, while AI models, lacking this capability, struggle to account for the inherent unpredictability,” Schmidt added.

He also cautioned about the latest findings, saying machine learning isn’t a replacement for physics. He claimed that because “weather models don’t conserve [things like] energy and water”, “you end up with massive drifts”, which cause systems that are trained on meteorological data to gradually diverge from reality.

Furthermore, Schmidt said that merely using meteorological data to train an AI does not ensure that the results it produces will adhere to these physical bounds.

From a Small Town in Maharashtra to Silicon Valley: Aqsa Fulara’s Inspiring Journey with Google
https://analyticsindiamag.com/intellectual-ai-discussions/from-a-small-town-in-maharashtra-to-silicon-valley-aqsa-fularas-inspiring-journey-with-google/ | Tue, 16 Jul 2024

Fulara is responsible for scaling AI and ML products including Recommendations AI and now Meridian models.

Aqsa Fulara, an AI/ML product manager at Google since 2017, grew up in Sangli, a small town in Maharashtra. Like many other women, she faced societal norms that often discouraged women from pursuing higher education far from home.

“Coming from a community where moving out of my parent’s home to a hostel for higher education was frowned upon, I put all my energy towards getting into this prestigious engineering college in the same city,” Fulara told AIM in an exclusive interview.  

Her dedication paid off when she was admitted to Walchand College of Engineering, where she did her BTech in computer science and engineering. This academic achievement was just the beginning. 

Fulara’s passion for learning and her desire to push the boundaries led her to the University of Southern California (USC), where she pursued a master’s degree in engineering management, and since then “there was no looking back!”, Fulara shared gleefully. 

“While my experiences in India provided me with a solid technical foundation and analytical approach to solving problems, my experiences at USC and Stanford focused a lot more on practical applications of cutting-edge technology,” she added. 

According to recent surveys, compared to other developing countries, fewer women in India reported being discouraged from pursuing scientific or technical fields (37% vs. 65%). The primary challenges faced by women students in India are high-stress levels (72%), difficulties in finding internships (66%), and a gap between their expectations and their current curriculum (66%).

Fulara’s path to AI and ML was not marked by a single dramatic moment but rather a gradual buildup of curiosity and fascination with technology. Her inclination towards solving problems and understanding complex systems drew her to this field. 

“That led me to my capstone project on behaviour recognition and predicting traffic congestion in large-scale in-person events and thus, building products for congestion management,” she added. 

Leadership Mantra: Building the Culture of Innovation

If you’re familiar with Google’s Vertex AI Search, you likely know about Recommendations AI. Now branded as Recommendations from Vertex AI Search, this service leverages state-of-the-art machine learning models to provide personalised, real-time shopping recommendations tailored to each user’s tastes and preferences. 

One of the key figures in scaling this product is Fulara, who has been instrumental in its growth since 2021. Fulara has also been the force behind the highly acclaimed products in Google Cloud’s Business Intelligence portfolio, such as Team Workspaces and Looker Studio Pro. 

Fulara considers Looker Studio as one of her favourite projects. “Imagine having a personal data analyst assistant who can provide customised recommendations and help you make informed decisions,” she added. 

Having worked with Google for over seven years now, one thing that Fulara values most about the company is the freedom to explore and innovate. “Whether it’s pursuing a 20% project in a new domain, growing into a particular area of expertise, or participating in company-wide hackathons, Google provides much space for creativity and innovation,” she shared. 

This environment has allowed her to pivot her career towards product management, building on her AI experiences and focusing on delivering business value through customer-centric solutions.

Leading AI product development comes with its own set of challenges. “AI products have a larger degree of uncertainty and ambiguity, with challenges in terms of large upfront investment, uncertain returns, technical feasibility, and evolving regulations,” she explained. 

To manage these challenges, Fulara fosters a culture of experimentation and agility. “We release MVPs for testing far ahead of production cycles to rigorously test and benchmark on production data and user behaviours,” she added, allowing her team to make informed decisions even with incomplete information.

Fulara emphasises the importance of managing scope creep tightly and sharing outcome-based roadmaps upfront. “We’re solving for problem themes, not necessarily just churning out AI features,” she noted. This strategy helps maintain focus and adapt to changes quickly. 

Future of AI 

Looking ahead, Fulara sees generative AI, personalised recommendations, and data analytics as transformative forces in the coming decade, making data and insights more accessible and workflows more collaborative. 

AI and ML models are becoming increasingly pervasive, assisting in personalised shopping journeys, optimising marketing strategies, and improving data-driven decision-making across various industries.

Why Data Quality Matters in the Age of Generative AI
https://analyticsindiamag.com/ai-insights-analysis/generative-ai/ | Thu, 04 Jul 2024

GenAI can augment human intelligence by identifying patterns and correlations that humans may miss.

In the dynamic realm of data engineering, the integration of Generative AI is not just a distant aspiration; it’s a vibrant reality. With data serving as the catalyst for innovation, its creation, processing, and management have never been more crucial.

“While AI models are important, the quality of results we get are dependent on datasets, and if quality data is not tapped correctly, it will result in AI producing incorrect results. With the help of Gen AI, we are generating quality synthetic data for testing our models,” said Abhijit Naik, Managing Director, India co-lead for Wealth Management Technology at Morgan Stanley.

Speaking at AIM’s Data Engineering Summit 2024, Naik said that the Gen AI, machine learning, neural network, and deep learning models we have today are the next stage of automation after the RPA era.

“Gen AI will always generate results for you. And when it generates results for you, sometimes it hallucinates. So, what data you feed becomes very critical in terms of the data quality, in terms of the correctness of that data, and in terms of the details of data that we feed,” Naik said. 

However, it’s important to note that human oversight is crucial in this process, Naik added. When integrated carefully into existing pipelines and with appropriate human oversight, GenAI can augment human intelligence by identifying patterns and correlations that humans may miss.

The task of documenting every aspect of how generative models function and the knowledge they derive from data is a complex one. This underscores the need for caution and thorough understanding when integrating Generative AI.

Due to their vast size and training on extensive unstructured data, generative models can behave in unpredictable, emergent ways that are difficult to document exhaustively.

"This unpredictability can lead to challenges in understanding and explaining their decisions," Naik said.

GenAI in Banking

Naik emphasised GenAI’s importance in the banking and finance sectors. “It can generate realistic synthetic customer data for testing models while addressing privacy and regulatory issues. This helps improve risk assessment,” he added.

This is especially critical when accurate data is limited, costly, or sensitive. A practical example could be creating transactional data for anti-fraud models.

Gen AI models, including Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and massive language models like GPT, may generate synthetic data that mimics the statistical features of real-world datasets. 
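
As a minimal sketch of that idea, the PyTorch snippet below trains a tiny GAN on made-up "transactions" and then samples synthetic ones. The features, architecture, and hyperparameters are invented for illustration; this is not any bank's actual pipeline, and a production system would add far more rigorous privacy and fidelity checks.

```python
import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in "real" transactions (amount, hour of day); in practice this would
# come from a governed, privacy-reviewed dataset.
real = np.column_stack([
    np.random.lognormal(mean=3.0, sigma=1.0, size=5000),  # transaction amounts
    np.random.randint(0, 24, size=5000).astype(float),    # hour of day
])
mean, std = real.mean(axis=0), real.std(axis=0)
real_t = torch.tensor((real - mean) / std, dtype=torch.float32)

latent_dim = 8
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 2))
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # Discriminator: tell real batches from generated ones.
    real_batch = real_t[torch.randint(0, len(real_t), (128,))]
    fake_batch = G(torch.randn(128, latent_dim)).detach()
    d_loss = bce(D(real_batch), torch.ones(128, 1)) + bce(D(fake_batch), torch.zeros(128, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: produce samples the discriminator accepts as real.
    g_loss = bce(D(G(torch.randn(128, latent_dim))), torch.ones(128, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# Sample synthetic transactions and map them back to the original scale.
with torch.no_grad():
    synthetic = G(torch.randn(1000, latent_dim)).numpy() * std + mean
print(synthetic[:5])
```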

For example, Capital One and JPMorgan Chase use GenAI to strengthen their fraud and suspicious activity detection systems. Morgan Stanley implemented an AI tool that helps its financial advisors find data anomalies in detecting fraud, and Goldman Sachs uses GenAI to develop internal software.  Customers globally can benefit from 24/7 accessible chatbots that handle and resolve concerns, assist with banking issues, and expedite financial transactions.

A recent study showed that banks that move quickly to scale generative AI across their organisations could increase their revenues by up to 600 bps in three years.

“Of course, the highly regulated nature of banking/finance requires careful model governance, oversight, explainability and integration into existing systems,” Naik concluded.

Hacking India’s Electronic Voting Machine is Next to Impossible
https://analyticsindiamag.com/ai-insights-analysis/indias-evm-dilemma-can-they-truly-be-hacked/ | Tue, 18 Jun 2024

Musk set the ball rolling with his tweet doubting the reliability of EVMs. However, research shows that EVMs are impossible to hack

Elon Musk, the CEO of Tesla and SpaceX, whipped up a political storm of sorts in India ever since he wrote that post on X criticising electronic voting machines (EVMs).

Musk set the ball rolling with his tweet doubting the reliability of EVMs, citing media allegations of vote abnormalities in hundreds of EVMs during the elections in Puerto Rico.

Rajeev Chandrasekhar, who led the previous government’s electronics and information technology ministry, responded to Musk by stating that the X owner’s comment implied that “no one can build a secure digital hardware”.

He added that he’d be happy to run a tutorial for Musk on how to build a secure EVM. Musk responded to the BJP leader, saying, “Anything can be hacked.”

How EVMs Work

An EVM is made up of two units: the control unit and the ballot unit, which are connected by a cable.

The Election Commission of India (ECI) now uses third-generation EVMs, called M3 machines, which are not connected to the internet. These lack the physical components to connect to Bluetooth or Wi-Fi, making them resistant to remote hacking efforts.

Each EVM functions as an independent device, akin to a rudimentary calculator, and does not require an external power source. Instead, it is powered by an internal battery installed by Bharat Electronics Limited (BEL).

Raising Questions

India’s election authorities have consistently stated that voting machines cannot be tampered with, and physical interference, if any, is quickly detectable.

However, these claims have been disputed on several occasions.

In 2010, University of Michigan researchers attached a homemade device to a machine and were able to modify the outcome by sending text messages from a cell phone. Indian authorities denied the assertion, stating that simply obtaining the equipment to tamper with would be difficult.

In 2017, Saurabh Bharadwaj, an Aam Aadmi Party (AAP) politician, revealed how easily a dummy EVM could be hacked.

Bharadwaj alleged that a “manipulator” may enter the polling booth during voting and insert a unique secret code, directing the following votes to a specific candidate. He also stated that an EVM’s motherboard could be swapped in a mere 90 seconds.

Can EVMs be Hacked?

In the 2011 municipal and state primaries in Pennsylvania (USA), experts concluded that the EVMs used there were remotely controlled.

Indian electronic voting machines, on the other hand, operate differently. 

EVMs are stand-alone machines that are not networked or connected to the internet. Machines in the United States are connected to a server and operate through the internet, rendering them vulnerable to cyberattacks.

Layers Of Protection

In principle, EVMs could be attacked in two ways: wirelessly or through a wired connection. According to several cybercrime and election specialists, EVM hacking is a tremendously difficult task. EVMs are not networked devices; therefore, hacking one would necessitate modifying the machine itself.

This means that anyone seeking to hack an EVM cannot do so remotely and must have physical access to the machines themselves, necessitating coordination with the EVM manufacturing authority, the ECI, and corporations that manufacture the chips used in EVMs.

EVMs are currently created by only two public sector units in India, and the engineers who make them have no idea where an EVM they have manufactured will be deployed.

First, take a quick glance at the conventional architecture of an EVM. Each one-time programmable microcontroller includes instructions for the machine, such as storing one vote in the EVM memory for candidate A when button A is pressed.

It is important to note that this device can only be programmed once. This means that the physical microcontroller chip must be modified EVM by EVM in order to change the microcontroller’s functionality.

This is not viable, for two reasons. First, each saved programme has a checksum (derived from the unique sequence of instructions) that is recorded on the device. If a hacker successfully replaced the microcontrollers in the EVMs (by desoldering the old ones and soldering on new ones), the checksums would change, indicating a malicious attack.
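
The ECI has not published the exact checksum scheme used on the machines, but the general principle can be sketched with an ordinary cryptographic hash. In the hedged example below, the byte strings are placeholders rather than real EVM firmware; the point is simply that any change to the stored program yields a different digest.

```python
import hashlib

def checksum(firmware: bytes) -> str:
    # A hash over the stored instructions: any change to the program,
    # however small, produces a completely different digest.
    return hashlib.sha256(firmware).hexdigest()

original = b"\x01\x02\x03\x04 store one vote for the pressed button"
tampered = b"\x01\x02\x03\x04 store one vote for candidate A"

print(checksum(original))
print(checksum(tampered))
print("match:", checksum(original) == checksum(tampered))  # False
```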

Not pre-decided

In addition, the system in use today requires the EC officer to use double randomisation, meaning the EVM used in each polling booth is randomly assigned at the last minute. In addition, the candidate list and the order of the candidates in each EVM are not pre-decided. 

This means that there would be no way to change the microcontroller behaviour to favour a candidate in advance.

Next, we’ll look at the microcontroller’s associated memory, which stores results. This memory normally contains the number of votes for each contender. To gain access, hackers must physically open each EVM, bypass the microcontroller, read micro-traces, and modify memory contents.

Moreover, when the results are announced, the total is also tallied, so a hacker changing the per-candidate counts would have to take care that the total is preserved.

A successful hack would result in the physical destruction of the EVM and clear evidence of the attack.

Finally, someone may hack the EVM’s display, insert an alternate display unit, and display erroneous results. This would necessitate the production of new circuit boards ahead of time, as well as the repetition of many of the loops outlined above.

Again, it goes without saying that a simple inspection of the EVM (which is always conducted in public and in front of each candidate’s representatives) would disclose hacking.

Additional measures

Despite the numerous methods used to construct and design EVMs to prevent tampering, concerns regarding tampering persist. In the 2010s, the ECI opted to implement an additional layer of protection known as the Voter-Verified Paper Audit Trail (VVPAT).

The VVPAT enables each EVM to record each vote by generating a voter slip, which is displayed to voters. This serves two purposes: voters are quickly informed that their vote has been registered, and the slips are collected and counted at the end of the voting process.

In conclusion, while it is impossible to declare with absolute certainty that EVMs cannot be hacked, it is undeniable that EVMs are arguably the greatest voting technology available today. While they may have shortcomings, their built-in checks and balances create a system that is incredibly difficult to tamper with or hack into. 

AI Can Never Replace Excel
https://analyticsindiamag.com/ai-insights-analysis/ai-can-never-replace-excel/ | Mon, 17 Jun 2024

Businesses continue to rely on Excel, so much so that financial specialists claim you'll "have to pry Excel out of their cold, dead hands" before they ever stop using it.

The world is built on spreadsheets. Even before the introduction of modern machine learning tools, most of the data of the world was stored on Excel spreadsheets. Cut to today, and that’s still the case.

As with many other tools, though, the advent of applications such as ChatGPT led some to believe there would be no need for Excel sheets anymore, since data analysis could simply be done by letting LLMs make sense of the data.

The AI Revolution in Excel

One of the strongest arguments against Excel was that it was becoming obsolete. Excel is not especially user-friendly, and the application rounds off very large figures, which reduces the accuracy of its computations.

Excel is also a stand-alone application that is not fully integrated with other corporate systems. It does not provide sufficient control because users do not have a clear and consistent view of the quotations sent by their representatives, as well as the history of those quotes.

So, what does the future of Excel look like? We can anticipate a significant role for AI. Microsoft has been steadily integrating AI aspects into Excel, and with its ever-expanding capabilities, it is set to continue revolutionising the software, sparking excitement about the future possibilities.

Microsoft unveiled Copilot, its innovative AI tool, in mid-March 2023. This cutting-edge technology is set to revolutionise the functionality of Microsoft Office programs, including Word, Excel, PowerPoint, Outlook, and Teams.

Consider the practical implications of Copilot in Excel. By harnessing natural language processing (NLP) AI techniques, it empowers users to ask questions in plain language and receive accurate, context-aware answers. This transformative technology enhances Excel’s usability, providing valuable recommendations and precise results.

By integrating Copilot into the Microsoft 365 suite, Microsoft is placing generative AI tools in front of over a billion of its users, possibly changing how large segments of the global workforce communicate with one another.

One example is the Analyze Data feature found in the most recent versions of Excel. NLP can also recommend functions, formulas, and features the user may not know, making it easier to identify the best answer.

Python in Excel

The introduction of Python in Excel has given the application a major boost. Python in Excel uses Anaconda, a prominent Python distribution notable for letting developers manage multiple Python environments.

Now users can do advanced data analysis within the familiar Excel interface leveraging Python, which is available on the Excel ribbon.

Access to Python allows users to use Python objects within cell functions and calculations. Consider a Python object being referenced or its data used in a PivotTable. 

Even popular libraries such as scikit-learn, Seaborn, and Matplotlib can be utilised with Excel. This allows Python-created visualisations, data models, and statistical calculations to be combined with Excel functions and plugins.
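
As a hedged sketch of the kind of analysis this enables, here is an ordinary pandas/seaborn/scikit-learn snippet of the sort one might place in a Python in Excel cell. Inside Excel, the DataFrame would normally be populated from a worksheet range rather than built in code, and the column names and figures below are invented.

```python
import pandas as pd
import seaborn as sns
from sklearn.linear_model import LinearRegression

# Stand-in for a worksheet range; in Python in Excel this DataFrame would
# typically come from a cell reference rather than being constructed here.
df = pd.DataFrame({
    "ad_spend": [10, 20, 30, 40, 50, 60],
    "revenue":  [25, 48, 70, 95, 118, 140],
})

# A quick model and chart of the kind analysts build in spreadsheets.
model = LinearRegression().fit(df[["ad_spend"]], df["revenue"])
print("Estimated revenue per unit of ad spend:", round(model.coef_[0], 2))

sns.regplot(data=df, x="ad_spend", y="revenue")
```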

The integration of Python cements Excel’s position in data analytics, suggesting that Excel’s utility in the workplace is far from diminishing. Last year, Microsoft announced that it would be experimenting with GPT in its office applications.

Interestingly, Microsoft has been following this path all along: acquiring GitHub, investing heavily in OpenAI (which helped it build Copilot for writing Python code and tap ChatGPT’s Code Interpreter), adding AI assistance to Power BI, and now bringing Python to Excel.

Looking at the progress of Excel, a future where AI writes Python in our Excel sheets cannot be ruled out. Moreover, the talk of AGI being built around Python makes the future of Excel look even more secure.

Why AI in Excel Is a Win-Win Situation

Microsoft Excel is the world’s most popular spreadsheet software. Approximately 54% of organisations use Excel. According to a recent report by corporate planning company Board International, 55% of organisations undertake their enterprise planning, including budgeting and sales forecasting, on spreadsheets.

Four out of five Fortune 500 companies use Excel, and over two billion people worldwide use spreadsheets.

Excel and spreadsheets became popular in the 1980s, and despite attempts by competitors, Excel and its ilk continue to dominate. Office workers today utilise tools like Excel to visualise, analyse, organise, distribute, and present chunks of corporate data—whether exported from databases or created on the fly.

Businesses continue to rely on Excel, so much so that financial specialists claim you’ll “have to pry Excel out of their cold, dead hands” before they ever stop using it.

6 Incredible Ways LLMs are Transforming Healthcare
https://analyticsindiamag.com/ai-mysteries/6-incredible-ways-llms-are-transforming-healthcare/ | Fri, 14 Jun 2024

Large language models are reshaping healthcare, moving from exploration to practical use

Last year, Google decided to explore the use of large language models (LLMs) for healthcare, resulting in the creation of Med-PaLM, a large language model designed for medical purposes.

The model achieved an 85% score on USMLE MedQA, which is comparable to an expert doctor and surpassed similar AI models such as GPT-4.

Just like Med-PaLM, several LLMs positively impact clinicians, patients, health systems, and the broader health and life sciences ecosystem. As per a Microsoft study, 79% of healthcare organisations reported using AI technology currently.

The use of such models in healthcare is only expected to grow due to the ongoing investments in artificial intelligence and the benefits they provide. 

LLMs in Medical Research

Recently, Stanford University researchers used an LLM to find a potential new heart disease treatment. Using MeshGraphNet, an architecture based on graph neural networks (GNNs), the team created a one-dimensional Reduced Order Model (1D ROM) to simulate blood flow.

MeshGraphNet provides various code optimisations, including data parallelism, model parallelism, gradient checkpointing, cuGraphs, and multi-GPU and multi-node training, all of which are useful for constructing GNNs for cardiovascular simulations.

https://twitter.com/Jousefm2/status/1772151378279899345
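
MeshGraphNet itself is an NVIDIA architecture that operates on simulation meshes and is not reproduced here. Purely as a toy illustration of the graph-neural-network family it belongs to, the sketch below builds a tiny GNN over a path graph using PyTorch Geometric, a different, general-purpose library; the node features, targets, and sizes are all invented.

```python
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv

# A toy vessel represented as a path graph: node i connects to node i + 1.
num_nodes = 6
edges = torch.tensor([[i, i + 1] for i in range(num_nodes - 1)], dtype=torch.long)
edge_index = torch.cat([edges.t(), edges.flip(1).t()], dim=1)  # undirected

x = torch.rand(num_nodes, 2)         # per-node features, e.g. radius and pressure
target = x.sum(dim=1, keepdim=True)  # synthetic stand-in for a flow quantity

class TinyGNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = GCNConv(2, 16)
        self.conv2 = GCNConv(16, 1)

    def forward(self, x, edge_index):
        h = torch.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)

model = TinyGNN()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
for _ in range(200):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(x, edge_index), target)
    loss.backward()
    optimizer.step()
print("Final training loss:", float(loss))
```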

Llama in Medicine

Researchers at the Yale School of Medicine and the School of Computer and Communication Sciences at the Swiss science and technology institute EPFL used Llama to bring medical know-how into low-resource environments.

One such example is Meditron, a suite of large multimodal medical foundation models built on Llama. Meditron assists with queries on medical diagnosis and management through a natural language interface.

This tool could be particularly beneficial in underserved areas and emergency response scenarios, where access to healthcare professionals may be limited.

According to a preprint in Nature, Meditron has been trained in medical information, including biomedical literature and practice guidelines. It’s also been trained to interpret medical imaging, including X-ray, CT, and MRI scans.

Bolstering Clinical Trials

Quantiphi, an AI-first digital engineering company, uses NVIDIA NIM to develop generative AI solutions for clinical research and development. These solutions, powered by LLMs, are designed to generate new insights and ideas, thereby accelerating the pace of medical advancements and improving patient care.

Likewise, ConcertAI is advancing a broad set of translational and clinical development solutions within its CARA AI platform. The Llama 3 NIM has been incorporated to provide population-scale patient matching for clinical trials, study automation, and research.

Data Research

Mendel AI is developing clinically focused AI solutions to understand the nuances of medical data at scale and provide actionable insights. It has deployed a fine-tuned Llama 3 NIM for its Hypercube copilot, offering a 36% performance improvement. 

Mendel is also investigating possible applications for Llama 3 NIM, such as converting natural language into clinical questions and extracting clinical data from patient records.

Advancing Digital Biology

Techbio companies, pharmaceutical firms, and life sciences platform providers use NVIDIA NIM for generative biology, chemistry, and molecular prediction.

This involves using LLMs to generate new biological, chemical, and molecular structures or predictions, thereby accelerating the pace of drug discovery and development.

Transcripta Bio, a company dedicated to drug discovery, has a ‘Rosetta Stone’ to systematically decode the rules by which drugs affect the expression of genes within the human body. Its proprietary AI modelling tool, Conductor AI, discovers and predicts the effects of new drugs at transcriptome scale.

It also uses Llama 3 to speed up intelligent drug discovery. 

NVIDIA BioNeMo is a generative AI platform for drug discovery that simplifies and accelerates both the training of models on your own data and the scaled deployment of those models for drug discovery applications. BioNeMo offers the quickest path to both AI model development and deployment.

Then there is the AtlasAI drug discovery accelerator, being developed by Deloitte and powered by the BioNeMo, NeMo and Llama 3 NIM microservices.

Medical Knowledge and Medical Core Competencies

One way to enhance the medical reasoning and comprehension of LLMs is through a process called ‘fine-tuning’. This involves providing additional training with questions in the style of medical licensing examinations and example answers selected by clinical experts. 

This process can help LLMs to better understand and respond to medical queries, thereby improving their performance in healthcare applications.
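
As a minimal sketch of what such fine-tuning looks like in code, here is a tiny supervised training loop using Hugging Face Transformers. The base model (distilgpt2) is a small, generic stand-in rather than a medical LLM, and the question-answer pairs are invented placeholders, not real licensing-exam content.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "distilgpt2"  # small generic stand-in for a medical base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Invented exam-style pairs; a real run would use expert-curated data.
examples = [
    {"q": "Which vitamin deficiency causes scurvy?", "a": "Vitamin C."},
    {"q": "What is the first-line imaging test for suspected stroke?",
     "a": "A non-contrast CT scan of the head."},
]
texts = [f"Question: {e['q']}\nAnswer: {e['a']}" for e in examples]

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
labels = batch["input_ids"].clone()
labels[batch["attention_mask"] == 0] = -100  # ignore padding in the loss

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for epoch in range(3):
    outputs = model(**batch, labels=labels)
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    print(f"epoch {epoch}: loss {outputs.loss.item():.3f}")
```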

Examples of such tools are First Derm, a teledermoscopy application for diagnosing skin conditions, enabling dermatologists to assess and provide guidance remotely, and Pahola, a digital chatbot for guiding alcohol consumption. 

ChatDoctor, created by fine-tuning the 7B version of the LLaMA model on an extensive dataset of 100,000 patient-doctor dialogues extracted from a widely used online medical consultation platform, is designed to comprehend patient inquiries and offer precise advice.

Could AI have Prevented the Exit Poll Mess in India?
https://analyticsindiamag.com/ai-insights-analysis/could-ai-have-prevented-the-exit-poll-mess-in-india/ | Tue, 11 Jun 2024

GNN and BERT are some of the techniques for predicting and performing sentiment analysis for exit polls.

As the vote counting for the grand 2024 Indian Lok Sabha elections began on June 4, many people resorted to social media platforms like X to express their dissatisfaction with the exit poll results, calling out their inaccuracies and calling for more reliable methods.

Various pollsters predicted the incumbent National Democratic Alliance (NDA) would secure 350-400 seats. However, the alliance only managed to secure 293 seats, with the BJP winning 240 seats.

With exit polls having strayed way off the mark even in the past, could AI emerge as a potential game-changer here?

In Comes AI

“Instead of directly questioning individuals—which can introduce social desirability bias—we extrapolate their opinions from their online interaction. This method minimises bias and eliminates the need for lengthy and tedious interviews,” said Matteo Serafino, chief data scientist at KCore Analytics, in an exclusive interaction with AIM.   

The data research company used AI to predict voter preferences from data collected on people's online activity on social media: what they were reading, writing, and reacting to.

This data, collected in real-time, was then analysed using AI algorithms that take into account various factors that could affect elections, such as inflation, thereby providing more accurate predictions.

Streamlining Data

“We compile a basket of users with identified preferences, akin to a sample in traditional polling. This data is then integrated with the macroeconomic and historical data through a reweighting process, leading to our final insights. Crucially, this is all done while preserving user privacy,” said Serafino. 

KCore converts unstructured input, including text, audio, and images, into structured data for analysis using techniques from network theory, natural language processing (NLP), and computer vision. 

It employed Graph Neural Networks (GNN) for predictions and Bidirectional Encoder Representations from Transformers (BERT) for sentiment analysis.
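
As a small illustration of the sentiment-analysis half of that stack, here is a Hugging Face Transformers snippet using a stock BERT-family sentiment model. The model choice and the example posts are placeholders, not KCore's actual setup, and a real pipeline would use a domain-tuned model over far more data.

```python
from transformers import pipeline

# A general-purpose English sentiment model stands in for a domain-tuned one.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

posts = [
    "The new manifesto actually addresses unemployment, impressed.",
    "Another rally, another round of empty promises.",
]
for post, result in zip(posts, classifier(posts)):
    print(f"{result['label']:8s} {result['score']:.2f}  {post}")
```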

It’s Not New Though

Numerous AI startups have already developed models to forecast elections. 

Expert.ai, a software startup specialising in natural language processing, employed AI to examine social media remarks regarding Donald Trump and Joe Biden in the months leading up to the 2020 US elections.

The company’s AI interprets the emotions conveyed in social media posts and predicts how these will translate into votes. Using NLP, it classifies the attitude expressed in posts using over 80 distinct emotional categories.

Another AI company, Unanimous.ai, used its programme to survey people in the United States in September 2020. It united vast groups of individuals via the internet, forming a “swarm intelligence” that magnified the members’ collective knowledge and ideas.

Unanimous.ai correctly predicted the presidential election victor in 11 states.

Outdated Traditional Methods

In a typical exit poll, voters are interviewed as they leave the building after voting. Surveyors are trained and stationed at polling booths, and data is traditionally collected using pen and paper (now digitally). 

However, the accuracy of the results can vary depending on many factors. These include sample size, demographic representation, structured questionnaires, random telephone or in-person interviews for fairness, and data compilation in a timely manner. 

Thus, results can be distorted.

Yeshwant Deshmukh, the founder of C-Voter, one of India’s major polling organisations, identified sample sizes and limited resources as problems. He claims that polling in India is as complex as polling in a diverse region like the European Union, but “pollsters don’t have that kind of budget”.

With such challenges, AI-driven exit polls are the key to having close-to-accurate results. “In the future, traditional pollsters will integrate AI algorithms with their existing data. Given the continuous decline in response rates for traditional surveys, a gradual shift is anticipated, although the current industry mindset may resist such change,” said Serafino.

6 Incredible Ways AI is Helping Wildlife Conservation
https://analyticsindiamag.com/ai-insights-analysis/6-incredible-ways-ai-is-helping-wildlife-conservation/ | Sun, 09 Jun 2024

AI has transformed conservation techniques using advanced technologies such as machine learning, computer vision, and predictive modelling

While biodiversity and wildlife may not immediately spring to mind when considering AI, conservation agencies have long employed a range of technologies to monitor and ensure the well-being of ecosystems and wildlife. 

As per research, the market for AI in forestry and wildlife was estimated to be worth US $1.7 billion in 2023. It is projected to expand at a compound annual growth rate (CAGR) of 28.5% to reach US $16.2 billion by 2032. 

Let’s look at some of the top use cases of AI in wildlife conservation.

AI-Powered Wildlife Monitoring 

Conventional techniques frequently depended on manual observation, which was labour-intensive and prone to human error. AI-powered monitoring systems with cutting-edge sensors and cameras help address this.

Real-time tracking, identification, and detection of animals by these technologies can gather information about their habitat preferences and population dynamics. Machine learning algorithms analyse the resulting large-scale datasets, allowing researchers to derive meaningful insights.

For instance, wildlife officials track the movement of animals in the Kanha-Pench corridor in Madhya Pradesh using the TrailGuard AI camera-alert system.

It runs on-the-edge algorithms to detect tigers and poachers and transmit real-time images to designated authorities responsible for managing prominent tiger landscapes.
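
TrailGuard's actual on-device models are not public. As a rough sketch of the kind of edge inference involved, here is a generic pretrained image classifier in PyTorch/torchvision; the model choice is a stand-in for a purpose-built tiger and poacher detector, and the image file name is a placeholder.

```python
import torch
from torchvision import models
from PIL import Image

# A general-purpose ImageNet classifier stands in for a bespoke detector.
weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()

image = Image.open("camera_trap_frame.jpg").convert("RGB")  # placeholder path
batch = preprocess(image).unsqueeze(0)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)[0]

top = probs.topk(3)
labels = weights.meta["categories"]
for p, idx in zip(top.values.tolist(), top.indices.tolist()):
    print(f"{labels[idx]:20s} {p:.2f}")
```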

Guardians of the Wild

Many national parks have installed camera traps – or cameras with infrared sensors, deployed in forests to monitor the movement of potential poachers – that harness the power of AI. 

Recently, Susanta Nanda, a wildlife enthusiast and an Indian Forest Service (IFS) officer, shared images on X of intruders captured by an AI-enabled camera at Similipal Tiger Reserve in Odisha. This quick response time, made possible by AI, not only helped apprehend intruders but also deterred potential poachers.


Indian Forest Service officer Susanta Nanda. Source: X

AI-based surveillance systems, under the name Gajraj, will soon be installed in elephant corridors across the country.

Species Identification


Using AI for camera detection. Image source: X/@ai_conservation

The Wildbook project uses AI for species identification. Its algorithms identify individual animals based on their distinct physical features, such as the pattern of spots on a giraffe or the shape of a whale's tail. This automated method greatly reduces the time and effort scientists need for species identification.

Satellite Imagery to Track Endangered Wildlife

SilviaTerra (now known as NCX) creates comprehensive maps of forests by analysing satellite imagery. These maps offer important information about the kinds of trees found there, how well-maintained the forests are, and how much carbon they can store. This information is essential for managing forests in a way that lessens the effects of climate change.

An Eagle’s Eye for The Wild

TRAFFIC, a well-known non-governmental organisation that works on the worldwide trade in wild animals and plants, has created an AI programme that analyses internet data about the trade in wildlife.

The "AI Wildlife Trade Analyst", an AI tool, can interpret enormous volumes of data from many internet sources, such as social media, online forums, and e-commerce platforms. Information about wildlife commerce, including species names, items, prices, and locations, is identified and categorised. The data is then used to produce insights regarding the trade's scope, makeup, and patterns.

PATTERN, which was created with the aid of Microsoft Azure AI Custom Vision, is an end-to-end computer vision platform and AI service that offers a user-friendly interface for labelling photos. 

Habitat Analysis

An example of the land-cover mapping work around part of the Chesapeake Bay. Image source: Microsoft

The High-Resolution Land Cover Project of the Chesapeake Conservancy used AI to produce a high-resolution map of the watershed of the Chesapeake Bay, which is roughly 100,000 square miles. Compared to traditional 30-metre resolution land cover data, the map's one-metre resolution offers 900 times more information.

It's important to note that implementing AI technologies in wildlife conservation can be costly and may require significant technical expertise. Despite these challenges, AI's benefits and potential applications in wildlife conservation are vast and promising.

AI Whistleblowers Stand in the Way of AGI?
https://analyticsindiamag.com/ai-insights-analysis/ai-whistleblowers-stand-in-the-way-of-agi/ | Fri, 07 Jun 2024

AI is the most transformative innovation any of us will see in our lifetimes. While the concerns are real, there’s a good reason to think that we can deal with them.

In an unconventional move, but not the first, a group of current and former employees, primarily from AI behemoth OpenAI, has urged their employers to prohibit non-disclosure agreements (NDAs) and empower whistleblowers to speak openly if the firms they work for prioritise growth and profit over safety.

Seven former and four current employees of OpenAI, along with two Google DeepMind employees (one present and one former), have signed the open letter.

They claim that since governments have little to no obligation to receive information about AI companies’ technologies and civil society has none, “current and former employees are among the few people who can hold them accountable to the public”.

Wide-ranging secrecy agreements, however, put whistleblowers at risk of losing their equity in the business if they choose to come forward.

Yoshua Bengio, Geoffrey Hinton, and computer scientist Stuart Russell, who together provided crucial research that resulted in the development of modern AI and later became some of its most vocal detractors, all backed the letter. 

The letter’s authors expressed deep concern about the incentives of AI businesses to avoid governance and responsibility, underlining the importance of transparency and accountability in the industry.

Joshua Segren, co-founder of ShopCierge.ai. Source: X

A representative for OpenAI responded to the letter stating that the business is proud of its “track record of providing the most capable and safest AI systems” and believes in its scientific approach to addressing risk. 

The representative also said that the company acknowledges that “rigorous debate is crucial given this technology’s significance”.

Little Overboard, Perhaps?

In his interview, Daniel Kokotajlo, a former researcher in OpenAI's governance division, raised the concern that "some employees believed" Microsoft had improperly tested and distributed a new version of GPT-4 on Bing in India.

Microsoft refutes the claim.

While those calling for greater transparency and protections for whistleblowers need to be celebrated, it’s important to study the other side.

Some of the former employees are affiliated with the radical effective altruism movement, which tends to focus on the most catastrophic risks and emphasises the long-term effects of our actions. This includes the possibility that an out-of-control AI system could take over and wipe out humanity.

Critics have accused the movement of spreading apocalyptic scenarios regarding technology without proper backing.


Rachid Flih, co-founder of open-source platform Panora.

One of the signatories, Kokotajlo, said that before joining OpenAI, he predicted that artificial general intelligence (also known as AGI), an AI capable of human-like cognition, wouldn’t arrive until 2050. 

Now, he says there’s a 50 percent chance the tech will arrive by 2027. He also believes there’s a 70 percent chance that this advanced AI will destroy humanity.

AI scientists like Yann LeCun of Meta have called people concerned about the field’s rapid advancement “doomers”, claiming that their “misplaced sense of urgency reveals an extremely distorted view of reality”.

Leopold Aschenbrenner published a 165-page study outlining a path from GPT-4 to superintelligence, its risks, and the difficulty of aligning such intelligence with human aims. Aschenbrenner was a member of OpenAI's superalignment team and was sacked in April for allegedly leaking confidential information.

This has been discussed extensively before, but his claims still remain unproven (at least for now).

Another key demand is the signatories' opposition to NDAs that keep company insiders from raising risk-related concerns. Doing away with such agreements, however, carries legal hazards, including the infringement of intellectual property. For sensitive information such as private data, trade secrets, or creative ideas that provide a competitive edge, NDAs offer an essential layer of protection.

'Profit Over Safety'

The letter comes after two of OpenAI’s senior staffers, co-founder Ilya Sutskever and key safety researcher Jan Leike, quit last month. Leike claimed that OpenAI had abandoned a culture of safety in favour of “shiny products” after he left.

A Ploy to Impede AGI? 

Former OpenAI board member Helen Toner claimed in an interview that aired last week that OpenAI CEO Sam Altman frequently misled and concealed facts from the board, particularly on safety procedures.

According to her, the board "was not informed in advance" about ChatGPT and actually learned about it on Twitter. (In a statement, OpenAI did not address the claims outright, but expressed disappointment that Toner continued to revisit these concerns.)

Elon Musk, who owns a rival chatbot and AI business, will not be outdone. He is suing OpenAI on the grounds that it prioritises profits and its Microsoft partnership over the benefit of humanity.

Joshua Achiam, research scientist at OpenAI, criticised the open letter on social media, arguing that employees going public with safety fears would make it harder for labs to address sensitive issues.

In a post on X, he said, “I think you are making a serious error with this letter. The spirit of it is sensible. However, disclosing confidential information from frontier labs, well-intentioned, can be outright dangerous. This letter asks for a policy that would, in effect, give safety staff carte blanche to make disclosures at will, based on their judgement.”

He isn't the only one. "It would be more helpful if they raise specific problems with current or upcoming systems than just vaguely point to process generalities," said Arun Rao, Meta's lead product manager for GenAI.

Similarly, former OpenAI employee and founder of Interdimensional.ai Andrew Mayne echoed the same. He said, “This has created a situation where people with good intentions could create a scenario in which the opposite of what they want to happen occurs.”

With both parties defending their positions on AI safety, it remains to be seen whether all this is merely noise on the path to AGI.

The post AI Whistleblowers Stand in the Way of AGI? appeared first on AIM.

]]>
EPAM’s Elaina Shekhter Envisions a Future with Human-AI Agents https://analyticsindiamag.com/ai-trends-future/epams-elaina-shekhter-envisions-a-future-with-human-ai-agents/ Wed, 05 Jun 2024 09:59:54 +0000 https://analyticsindiamag.com/?p=10122520 EPAM’s Elaina Shekhter Envisions a Future with Human-AI Agents

GenAI is not about replacing humans but about enhancing and augmenting human intelligence and decision-making

The post EPAM’s Elaina Shekhter Envisions a Future with Human-AI Agents appeared first on AIM.

]]>
EPAM’s Elaina Shekhter Envisions a Future with Human-AI Agents

The power of generative AI (GenAI) is already reshaping our work environments and daily lives, marking a significant turning point. GenAI is propelling us towards a future of truly enterprise-wide AI, a realm that was once the domain of specialised functions only, as articulated by EPAM chief marketing and strategy officer Elaina Shekhter.

At AIM’s Data Engineering Summit (DES) 2024, Shekhter emphasised the role of GenAI in shaping our future. “GenAI, as a transformative agent, is not just a glimpse into the future, but a tool that helps us shape it. With its capabilities advancing rapidly, we can expect to see new tools emerging frequently.

“It took us about 40,000 years to get to the point of fire and cook our food. It took us several 1,000s of years to build basic technology. And it’s taken us about 1,000 years to get from basic agricultural societies to the steam locomotive. This tells us that no matter what, change is inevitable,” she said.

"It's changing very quickly. This calls for adaptiveness. Now, whether it's a disruption or an opportunity is difficult to predict," she added.

Wave Of Change

Shekhter envisions the future of GenAI in three waves. The first wave, which we are currently in, is about humans with copilots. It is not transformative yet, but the second wave, humans with agents, will be. In this stage, a discerning eye will be able to tell whether it is interacting with a human or an AI, underscoring the continued significance of human involvement.

Wave three will be a very subtle but pivotal flip, where it isn’t going to be humans with agents; it will be agents with humans. In this stage, humans would assist agents with their tasks. This wave may occur in specialised domains like customer service in the next few years. Broader impacts on work and society will take longer. 

One Step At A Time

Shekhter, however, had a word of caution: “Generative AI will likely become more integrated into our lives, with agents helping or replacing humans in many tasks. This could lead to major productivity gains but also disruption.”

People expect organisations to continue to be human-centric. There's an element of this responsible AI mandate that lands directly on the desk of the engineer. We must develop software with security and the responsible use of AI in mind, and enterprises are also obliged to establish clear lines of responsibility so that we bring people along.

Shekhter reassured the audience that GenAI is not about replacing humans but about enhancing and augmenting human intelligence and decision-making. “This technology is designed to make us better at what we already do, not to replace us,” she said.

Shekhter further underscored the importance of responsible AI use. She emphasised, “AI must be used responsibly and only for the benefit of humanity. It’s crucial that we don’t let technology control us, but rather, we control the technology.”

Responsible AI By Design

Businesses can already safely and responsibly integrate GenAI tools into their workflows. But as GenAI further permeates enterprise technology stacks, it will expand beyond simply automating single tasks.

“Future advances in natural language processing, computer vision, robotics and other AI subfields will further accelerate GenAI’s impacts across many industries and applications.

“AI is transformative. It is scary. It has the potential to take over. It does. And anyone who doesn’t believe that there’s a real threat, as well as a real opportunity, isn’t paying attention,” she concluded.

The post EPAM’s Elaina Shekhter Envisions a Future with Human-AI Agents appeared first on AIM.

]]>
OpenAI’s GPT-4 Shows Prowess in Picking Stocks https://analyticsindiamag.com/ai-insights-analysis/openais-gpt-4-shows-prowess-in-picking-stocks/ Mon, 03 Jun 2024 12:30:00 +0000 https://analyticsindiamag.com/?p=10122354 OpenAI’s GPT-4 Shows Prowess in Picking Stocks

GPT-4 outperformed the majority of human financial analysts, who registered an accuracy rate of 53% to 57%.

The post OpenAI’s GPT-4 Shows Prowess in Picking Stocks appeared first on AIM.

]]>
OpenAI’s GPT-4 Shows Prowess in Picking Stocks

Researchers at the University of Chicago’s Booth School of Business have demonstrated that OpenAI’s GPT-4 can perform as well as or even better than human experts in financial statement interpretations. 

Using a method called chain-of-thought prompting, the researchers guided GPT-4 to simulate the mental processes of a human financial analyst. This allowed the model to analyse financial statements and forecast future market movements.

The team trained the model to produce precise predictions by teaching it to recognise patterns, calculate financial ratios, and synthesise the resulting data. The study claimed that GPT-4 could forecast the direction of future profits with 60% accuracy, outperforming the majority of human financial analysts, who averaged between 53% and 57% accuracy.
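
The staged reasoning described above is straightforward to reproduce in spirit. The snippet below is an illustrative sketch rather than the researchers' actual prompt or code; the model name is a stand-in for the GPT-4-Turbo configuration used in the study, and the financial figures are placeholders.

```python
# Illustrative chain-of-thought prompt for financial statement analysis
# (not the study's actual prompt); figures and model name are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

statement = (
    "Revenue: 120.4 (prior year 102.1); Net income: 9.8 (prior year 11.2); "
    "Total assets: 310.5 (prior year 298.7); Total liabilities: 201.3 (prior year 188.0)"
)

prompt = (
    "You are a financial analyst. Using only the anonymised figures below:\n"
    "1. Identify notable trends across the line items.\n"
    "2. Compute key ratios (e.g. net margin, leverage) and note how they changed.\n"
    "3. Synthesise these observations into a short narrative.\n"
    "4. Conclude whether earnings are more likely to rise or fall next year.\n\n"
    + statement
)

response = client.chat.completions.create(
    model="gpt-4-turbo-preview",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```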

The researchers concluded, "LLM prediction does not stem from its training memory. Instead, we discover that the LLM produces insightful narratives regarding a business's potential performance."

Skeptics Aren’t Convinced

However, it’s crucial to exercise caution and not draw excessive conclusions from these findings.

ChatGPT answers user questions based on the data it was trained on and the prompts it receives; to get accurate output, the question itself must be framed correctly.

On Hacker News, a user pointed out that the researchers' artificial neural network model, which they used as a benchmark, is from 1989 and cannot be compared to the most advanced models utilised by financial analysts today.

On X, AI researcher Matt Holden questioned the researchers' assertions, stating that it is improbable for GPT-4 to select equities that can outperform a broad index like the S&P 500. These concerns reflect the ongoing debate about the effectiveness of AI in stock market analysis.

In one experiment, when asked about the performance of the S&P 500, ChatGPT reported a 26.9% net return for the index over the previous year, even though it had actually dropped 20%.

These examples demonstrate both the potential and the limitations of AI in stock market analysis.

Researchers from Virginia Tech, Queen's University, and JPMorgan AI Research have looked at how ChatGPT and GPT-4 performed on simulated Chartered Financial Analyst (CFA) exams. The outcomes were not that impressive: the researchers found that, under the tested conditions, ChatGPT would likely be unable to pass CFA Levels I and II.

This suggests that while ChatGPT shows promise, it still has a long way to go before it can match the expertise of human financial analysts.

For now, the market seems to have chosen to stick with more traditional approaches, emphasising the continued importance of human discretion in financial analysis.

The post OpenAI’s GPT-4 Shows Prowess in Picking Stocks appeared first on AIM.

]]>
Soon, LLMs Can Help Humans Communicate with Animals https://analyticsindiamag.com/ai-trends-future/soon-llms-can-help-humans-communicate-with-animals/ Mon, 03 Jun 2024 11:30:00 +0000 https://analyticsindiamag.com/?p=10122346 LLMs Can Help Humans Communicate With Animals

Understanding non-human communication can be significantly aided by the insights provided by models like OpenAI’s GPT-3 and Google's LaMDA

The post Soon, LLMs Can Help Humans Communicate with Animals appeared first on AIM.

]]>
LLMs Can Help Humans Communicate With Animals

A common cliche in the language industry is that translation helps break the language barrier. Since the late 1950s, researchers have been attempting to understand animal communication. Now, scientists are turning to large language models such as Google's LaMDA and OpenAI's GPT-3 to take that work further.

By studying massive datasets, which can include audio recordings, video footage, and behavioural data, researchers are now using machine learning to create programmes that can interpret animal communication, among other things.

Closer to Reality

The Earth Species Project (ESP) seeks to build on this by utilising AI to address some of the industry’s enduring problems. With projects like mapping out crow vocalisations and creating a benchmark of animal sounds, ESP is establishing the groundwork for further AI research. 

The organisation's first peer-reviewed paper, published in Scientific Reports, presented a technique that could separate a single voice from a recording of numerous speakers, demonstrating the impressive strides being made in animal communication research with the help of AI.

Scientists refer to the complex task of isolating and understanding individual animal communication signals in a cacophony of sounds as the cocktail-party problem. From there, the organisation started evaluating the information captured by biologgers (animal-borne sensors) to pair behaviours with communication signals.

ESP co-founder Aza Raskin stated, “As human beings, our ability to understand is limited by our ability to perceive. AI does widen the window of what human perception is capable of.”

Easier Said than Done

A common mistake is assuming that sound is the only form of communication animals employ. Visual and tactile signals are just as significant in animal communication as auditory ones, highlighting the intricate nature of this field.

For example, when beluga whales communicate, specific vocalisation cues show their social systems. Meerkats utilise a complex system of alarm cries in response to predators based on the predator’s proximity and level of risk. Birds also convey danger and other information to their flock members in the sky, such as the status of a mating pair.

These are only a few challenges researchers must address while studying animal communication.

To do this, Raskin and the ESP team are incorporating some of the most popular and consequential innovations of the moment, generative AI and large language models, into a suite of tools to actualise their project. These advanced technologies can understand and generate human-like responses in multiple languages, styles, and contexts using machine learning.

Understanding non-human communication can be significantly aided by the insights provided by models like OpenAI’s GPT-3 and Google’s LaMDA, which are examples of such generative AI tools.

ESP has recently developed the Benchmark for Animal Sounds, or BEANS for short, the first-ever benchmark for animal vocalisations. It established a standard against which to measure the performance of machine learning algorithms on bioacoustics data.

It has also created the Animal Vocalisation Encoder based on Self-Supervision, or AVES. This is the first foundation model for animal vocalisations and can be applied to many downstream applications, including signal detection and categorisation.

The nonprofit is just one of many groups that have recently emerged to translate animal languages. Some organisations, like Project Cetacean Translation Initiative (CETI), are dedicated to attempting to comprehend a specific species — in this case, sperm whales. CETI’s research focuses on deciphering the complex vocalisations of these marine mammals. 

DeepSqueak is another machine learning technique developed by University of Washington researchers Kevin Coffey and Russell Marx, capable of decoding rodent chatter. Using raw audio data, DeepSqueak identifies rodent calls, compares them to calls with similar features, and provides behavioural insights, demonstrating the diverse approaches to animal communication research.

ChatGPT for Animals

In 2023, an X user named Cooper claimed that GPT-4 helped save his dog’s life. He ran a diagnosis on his dog using GPT-4, and the LLM helped him narrow down the underlying issue troubling his Border Collie named Sassy.

Though achieving AGI may still be years away, Sassy’s recovery demonstrates the potential practical applications of GPT-4 for animals.

While the idea is astonishing in and of itself, developing a foundational tool to comprehend all animal communication is challenging. Animal data is hard to obtain and requires specialised expertise to annotate, in contrast to human language data, which humans can annotate relatively easily.

Compared to humans, animals have a far more limited range of sounds, even though many of them form sophisticated, complex communities. This means that the same sound can have multiple meanings depending on the context in which it is used. The only way to determine meaning is to examine the context, which includes the caller's identity, relationships with others, hierarchy, and past interactions.

Yet, this might be possible within a few years, according to Raskin. "We anticipate being able to produce original animal vocalisations within the next 12 to 36 months. Imagine if we could create a synthetic crow or whale that would seem to them to be communicating with one of their own. The plot twist is that, before we realise what we are saying, we might be able to engage in conversation", Raskin says.

This “plot twist”, as Raskin calls it, refers to the potential for AI to not only understand animal communication but also to facilitate human-animal communication, opening up new possibilities for conservation and coexistence.

The post Soon, LLMs Can Help Humans Communicate with Animals appeared first on AIM.

]]>
Google Research Introduce PERL, a New Method to Improve RLHF https://analyticsindiamag.com/ai-news-updates/google-research-introduce-perl-a-new-method-to-improve-rlhf/ Tue, 19 Mar 2024 09:26:48 +0000 https://analyticsindiamag.com/?p=10116725 Google Search is Killing the SEO Experience

The paper introduces PERL, a method using LoRA for efficient training of language models with RLHF to reduce computational costs and complexity while maintaining comparable performance.

The post Google Research Introduce PERL, a New Method to Improve RLHF appeared first on AIM.

]]>
Google Search is Killing the SEO Experience

Google Research has introduced a new technique called Parameter Efficient Reinforcement Learning (PERL), which aims to make the process of aligning LLMs with human preferences more efficient and accessible. 

The published research paper is available here.

The researchers propose using a parameter-efficient method called Low-Rank Adaptation (LoRA) to fine-tune the reward model and reinforcement learning policy in the Reinforcement Learning from Human Feedback (RLHF) process. 

In PERL, LoRA, a method that fine-tunes a small number of parameters, is applied to make training more efficient. It’s used in both the reward model and the reinforcement learning (RL) policy of language models by attaching LoRA adapters to specific parts. 

During training, only these adapters are updated, leaving the main part of the model unchanged. This approach reduces the number of parameters that need to be updated and the memory required, speeding up training and making it possible to align models with less computational power.
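
For intuition, the snippet below is a minimal sketch, not the PERL implementation itself, of how LoRA adapters can be attached to a reward model using Hugging Face's transformers and peft libraries; the backbone model and hyperparameters are illustrative.

```python
# Minimal LoRA sketch (illustrative, not PERL's code): wrap a reward model so
# that only small adapter matrices are trained, not the full backbone.
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, get_peft_model

base = "roberta-base"  # illustrative backbone choice
reward_model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=1)

lora_config = LoraConfig(
    r=8,                                # rank of the low-rank update matrices
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["query", "value"],  # attention projections to adapt
    task_type="SEQ_CLS",
)
reward_model = get_peft_model(reward_model, lora_config)
reward_model.print_trainable_parameters()  # only the LoRA adapters are trainable
```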

The team conducted extensive experiments on seven datasets, including two novel datasets called ‘Taskmaster Coffee’ and ‘Taskmaster Ticketing,’ which they released as part of this work. 

The results showed that PERL performed on par with conventional RLHF while training faster and using less memory. This finding is significant because the computational cost and complexity of the RLHF process have hindered its adoption as an alignment technique for large language models. This advancement could lead to wider adoption of RLHF as an alignment technique, potentially improving the quality and safety of large language models.

The post Google Research Introduce PERL, a New Method to Improve RLHF appeared first on AIM.

]]>
[Exclusive] Pushpak Bhattacharyya on Understanding Complex Human Emotions in LLMs  https://analyticsindiamag.com/intellectual-ai-discussions/pushpak-bhattacharyya-on-understanding-complex-human-emotions-in-llms/ Wed, 14 Feb 2024 10:30:14 +0000 https://analyticsindiamag.com/?p=10112873 Pushpak Bhattacharyya on Understanding Complex Human Emotions in LLMs

Bhattacharyya has been working on emotional and sentimental problems in NLP since his Master’s at IIT Kanpur, and has published over 350 papers.

The post [Exclusive] Pushpak Bhattacharyya on Understanding Complex Human Emotions in LLMs  appeared first on AIM.

]]>
Pushpak Bhattacharyya on Understanding Complex Human Emotions in LLMs

“A sentence can either be positive, negative or neutral – not a mix of all three. Emotionality, on the other hand, has this exciting thing about it that it can have a mixture of emotions within it,” said IIT Bombay professor and computer scientist Pushpak Bhattacharyya, explaining the complexity of understanding emotions in language. 

Earlier this month, OpenAI CEO Sam Altman was seen experimenting on X, where he posted, “Is there a word for feeling nostalgic for the time period you’re living through at the time you’re living it?” The next thing you know, everyone was on ChatGPT asking what the word was. And many demonstrated creativity crafting their own versions of the word – like ‘Nowstalgia’. This is truly human. But can it be implemented on LLM chatbots?  

Bhattacharyya knows the answer. "Chatbots that are polite, and understand sentiment, emotion, etc give rise to better businesses. Chatbots that are closer to human beings, emotional and sentiment, bring commercial profits along, which is quite motivating," he added, highlighting a study which reported that organisations using polite chatbots benefitted more than those using generic ones.

Bhattacharyya has been working on these emotional and sentimental problems in NLP since his master’s at IIT Kanpur, and has published over 350 papers. “What got me interested in linguistics, emotions, and AI was the similarities between the words of different languages and their respective sounds,” Bhattacharyya said. He narrated how he used to collect proverbs from each country and city he visited to understand how language operates. 

Bhattacharyya told AIM that he is also working on Plutchik's wheel of emotions, with its eight primary emotions, at the Centre for Indian Language Technology (CFILT) lab at IIT Bombay, a follow-up to his recent paper, 'Zero-shot multitask intent and emotion prediction from multimodal data'. This problem deals with combining different types of emotions within one context, a foundational problem that he said no one has taken up before. "At our CFILT lab, we take up problems that no one else has before, which includes not just Indian languages," he added.

What Indian LLMs need

Bhattacharyya emphasised building a trinity model for creating Indic language models, which means deciding on one language, one task, and one domain per model. "For example, creating a model in Konkani for question answering on agriculture, or a sentiment analysis system for railway reservation in Manipuri," he explained, saying that these models are easier to build and can be connected later into a larger model.

Talking about Indic models built on top of Llama, Bhattacharyya said that though these models are a step in the right direction, it is essential to also build them for specific tasks and domains, and then gradually expand into other tasks. 

Referring to the recent news about Altman raising $7 trillion to make AI chips, Bhattacharyya said that India also needs similar efforts in making indigenous chips. "Being self-sufficient in hardware is very crucial because we cannot wait for the switches and GPUs from outside. We should make in-house capabilities that will facilitate AI research and further development," he said.

AI Awareness 

"In our scriptures, buddhi (wisdom) is above manas (mind) and then comes our body and sensory organs," Bhattacharyya said, explaining that every generation of students is smarter than the last and the volume of information also increases. "It is important for these students to learn the basics and not just get stuck with what is exciting, for instance, instant-gratification tasks like programming," he added, talking about the need for students to focus on larger problems, but by starting small.

“Introducing basic mathematics along with programming starting from Class 5 is necessary for education systems, along with teaching them about every other field like social science and others, to give them proper alignment with the world they are building for,” added Bhattacharyya about having holistic education for students.

During his PhD, he spent an extensive amount of time at MIT AI Lab and Stanford University, studying different flavours of AI with pioneers in the field such as Marvin Minsky and the father of modern linguistics, Noam Chomsky. “But my interest in linguistics started way back when I was in Class 4 in school,” said Bhattacharyya.

“Today’s AI takes a lot more computation when compared to when I started doing AI,” Bhattacharyya said. He also highlighted how Bill Gates said in the early 1990s that NLP will drive computation requirements forward. “My advice to young people starting in the field would be to understand the fundamentals of mathematics, build foundations and then stick to a problem for a longer time,” he added.

A Complex Human 

Starting his BTech at IIT Kharagpur studying digital electronics, Bhattacharyya came across a circuit board made for adding two numbers. Unlike others, who did not think much of it, he was astounded that a lifeless system made of diodes and resistors had decision-making capabilities. This got him interested in studying intelligence outside the bodies of humans and animals, leading him to AI.

"I'm one of the few NLP researchers who give equal importance to linguistics and computation," he beamed. He added that his course draws on both fields, and that his Master's thesis was focused on Sanskrit-to-Hindi machine translation. Talking about his 2017 paper on sarcasm detection, Bhattacharyya said the researchers were able to build a computational algorithm that could detect sarcasm in text, which would benefit psychology, cognitive science, and philosophy as well.

The post [Exclusive] Pushpak Bhattacharyya on Understanding Complex Human Emotions in LLMs  appeared first on AIM.

]]>
Top 7 Hugging Face Spaces to Join https://analyticsindiamag.com/industry-insights/ai-in-space-tech/top-7-hugging-face-spaces-to-join/ Wed, 17 Jan 2024 09:30:00 +0000 https://analyticsindiamag.com/?p=10110945

Integrating the Hugging Face hub enhances the experience by allowing easy utilisation of existing models and datasets.

The post Top 7 Hugging Face Spaces to Join appeared first on AIM.

]]>

Hugging Face Spaces is an exceptional platform for hosting and deploying machine learning (ML) models and applications. With its user-friendly interface and accessibility, it offers a seamless experience for sharing ML projects globally, facilitating real-time collaboration, and showcasing one’s work in an impressive portfolio for potential employers or clients.

With the arrival of GenAI, many new models and apps have been appearing on Hugging Face, allowing researchers and developers to create their own Spaces and models. Leveraging Gradio, Streamlit, Docker, or static HTML, users can effortlessly create interactive web interfaces or self-contained applications for their ML models.

Integrating the Hugging Face hub enhances the experience by allowing easy utilisation of existing models and datasets. 

The platform’s scalability accommodates projects of varying sizes, ensuring deployment flexibility. Since Hugging Face Spaces is cost-free for most use cases and prioritises security, it provides a secure environment for hosting ML applications.
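
As a rough illustration, a Space built with Gradio can be as small as a single app.py file; the snippet below is a hedged sketch rather than any particular Space's code, and the model it wraps is only an example.

```python
# app.py: a minimal Gradio demo of the kind commonly hosted on a Hugging Face Space.
# The model choice here is illustrative; any Hub model can be wrapped this way.
import gradio as gr
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")

def complete(prompt: str) -> str:
    # Return a short continuation of the user's prompt
    return generator(prompt, max_new_tokens=50)[0]["generated_text"]

demo = gr.Interface(fn=complete, inputs="text", outputs="text", title="Demo Space")
demo.launch()  # Spaces run this automatically when app.py is pushed to the repo
```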

Draw to Search Art by Merve 

This captivating and innovative space on Hugging Face transforms simple sketches into stunning works of art through the power of AI. It offers users the delightful experience of seeing their sketches come to life. The AI model analyses key elements and matches them with a vast database of high-quality images from WikiArt

Beyond being a mere image search engine, Draw to Search Art is an open-source platform for exploration and inspiration. Users can gain new perspectives on their artistic vision, learn from professional artists, and fuel their creativity by discovering visually similar artworks.

Check out the space here

PhotoMaker by TencentARC 

This particular space helps with photo editing by offering an AI-powered platform that effortlessly transforms ordinary photos into captivating works of art. With an extensive array of pre-trained artistic styles ranging from classic Van Gogh to futuristic and anime aesthetics, users can easily add a touch of creativity to their images.

Its user-friendly interface sets PhotoMaker Style apart, allowing casual photo enthusiasts and aspiring artists to fine-tune effects, adjust details, and explore personalised visions without complex editing skills. The platform’s high-quality results, courtesy of a vast dataset of high-resolution images, ensure clarity and detail retention in artistic transformations. 

Beyond its core features, PhotoMaker Style encourages creative exploration by connecting users within the Hugging Face Spaces platform, fostering a community where individuals can share creations and discover new artistic styles.

Check out the space here

Open LLM Leaderboard by HuggingFaceH4 

This space is an engaging and accessible front-row experience of the landscape of large language models (LLMs). Unlike traditional research papers, this space offers a dynamic platform with comprehensive tracking of state-of-the-art LLMs, presenting their performance on various tasks through interactive visualisations.

Beyond being a mere data repository, the leaderboard is a valuable resource for learning about AI, gaining inspiration, and participating in discussions on ethics and the future of these powerful models.

Whether you’re an AI expert, a curious learner, or simply intrigued by artificial intelligence, the Open LLM Leaderboard is a must-visit space, providing a window into the cutting edge of AI development and showcasing the thrilling race towards more advanced language models. 

Check out the space here.  

Segment Anything Web by Xenova 

Built by Xenova, the developer behind the Transformers.js library, this space brings Meta's Segment Anything Model (SAM) to the web browser. Users upload an image, click on the regions they care about, and the model produces segmentation masks that isolate those objects, with the heavy lifting happening client-side rather than on a remote server.

Because everything runs in the browser, there is no need for coding expertise or a GPU-backed deployment, making it an accessible way to experiment with modern image segmentation.

Whether you're a researcher, designer, or developer prototyping computer-vision features, Segment Anything Web makes extracting objects from images a seamless experience.

Check out the space here

ReplaceAnything by Model Scope

Hosted on Hugging Face Spaces by the ModelScope team, ReplaceAnything is an AI-powered image editing tool for high-quality content replacement. It keeps a user-specified subject in a photo untouched while regenerating everything around it, for instance swapping the background or surrounding objects based on a text prompt.

The workflow is straightforward: upload an image, mark the region or subject to preserve, describe what the rest of the scene should look like, and let the model synthesise the result.

This makes it useful for tasks such as product photography, portrait backdrops, and creative compositing, without requiring manual masking or editing expertise.

Check out the space here

TinyLlama

This Hugging Face Space, developed by VatsaDev, introduces an open-source small language model built on the Llama 2 architecture. With just 1.1 billion parameters, TinyLlama is compact yet versatile, proving adept at tasks such as text generation, translation, question answering, summarisation, and code generation.

Its speed in generating text or answering queries in milliseconds and high accuracy make it a standout choice for applications on devices with limited resources. 

TinyLlama’s availability on Hugging Face extends to various spaces, showcasing its creative potential. Noteworthy examples include the TinyLlama Chat Space for interactive conversations with a TinyLlama-powered bot and the LlamaReviews Space, where amusing product reviews are generated using TinyLlama. 

Check out the space here

Pheme by PolyAI

This leading company, specialising in conversational AI for customer service, has established a notable presence on Hugging Face, a platform for open-source AI models and tools. With a verified profile and listed team members, PolyAI focuses on contributing to the open-source AI community by sharing Text-to-Speech (TTS) models and datasets. 

Notably, their Pheme Space offers a TTS model trained on high-quality audio for realistic and expressive voices, currently running on Hugging Face infrastructure. The availability of three TTS models, including PolyAI/BigVGAN-L, and datasets such as PolyAI/minds14, PolyAI/evi, and PolyAI/banking77, underscores PolyAI’s commitment to enhancing dialogue systems. 

This integration of commercial expertise with open-source contributions makes PolyAI a valuable addition to the Hugging Face platform, offering insights into realistic voice generation for customer service applications and linking industry advancements and the broader AI community. 

Check out the space here

The post Top 7 Hugging Face Spaces to Join appeared first on AIM.

]]>
7 Must-Read Generative AI Books https://analyticsindiamag.com/ai-mysteries/7-must-read-generative-ai-books-for-unleashing-your-technology-prowess/ Thu, 28 Dec 2023 06:05:46 +0000 https://analyticsindiamag.com/?p=10109589 generative ai books

Here AIM has listed the top seven must read generative AI books of 2023

The post 7 Must-Read Generative AI Books appeared first on AIM.

]]>
generative ai books

Generative AI has gained significant attention in 2023. As everyone is busy experimenting with it and building innovative applications and tools for the betterment of humanity, it becomes increasingly important to understand the basics and technical nuances, and not just fall prey to the hype.

Here, AIM has listed the top seven must-read generative AI books of 2023 for machine learning engineers and data scientists looking to enhance their understanding and skills in the field.


Generative AI with Python and TensorFlow 2 

by Joseph Babcock and Raghav Bali

Generative AI with Python and TensorFlow 2 by Joseph Babcock and Raghav Bali gives you a glimpse of the evolution of generative models, from Boltzmann machines to VAEs and GANs, walks through implementing them in TensorFlow, and helps you stay updated on deep neural network research.

Access the book here.

Generative Deep Learning 

By David Foster (Author) & Karl Friston (Foreword)

Generative Deep Learning by David Foster, with a foreword by Karl Friston, teaches machine learning engineers and data scientists how to create generative deep learning models using TensorFlow and Keras, including VAEs, GANs, Transformers, normalising flows, energy-based models, and denoising diffusion models. It covers deep learning basics and advanced architectures, providing tips for efficient learning and creativity.

Access the book here.

Generative AI with LangChain

By Ben Auffarth

Generative AI with LangChain by Ben Auffarth explores the functions, capabilities, and limitations of LLMs like ChatGPT and Bard, and how to use the LangChain framework to build production-ready applications. It covers transformer models, attention mechanisms, training and fine-tuning, data-driven decision-making, and automated analysis and visualisation using pandas and Python, along with heuristics for model usage. The goal is to provide a comprehensive understanding of LLMs and their potential for enhancing our understanding of the world.

Access the book here.

Generative AI on AWS

by Chris Fregly, Antje Barth, Shelbee Eigenbrode

You’ll learn the generative AI project life cycle including use case definition, model selection, model fine-tuning, retrieval-augmented generation, reinforcement learning from human feedback, and model quantization, optimization, and deployment. And you’ll explore different types of models including large language models (LLMs) and multimodal models such as Stable Diffusion for generating images and Flamingo/IDEFICS for answering questions about images.

Access the book here.

Artificial Intelligence & Generative AI for Beginners

by David M. Patel

For those eager to delve into the world of AI, particularly the buzz around generative AI, and seeking practical ways to harness tools like ChatGPT, MidJourney, or RunwayML for both business and personal advancement, this comprehensive guide is an invaluable resource. It begins with an exploration of AI’s history and its key components, delves into machine learning types, and discusses the crucial roles of data and algorithms. The guide further elucidates the major fields of AI, including NLP, computer vision, and robotics. In its deep dive into generative AI, it explains the concept, types, and offers business case studies, alongside a step-by-step approach to building and developing generative AI models. The final part focuses on practical applications in various fields like copywriting and graphic design, presenting the best AI tools of 2023 and addressing ethical considerations.

Access the book here.

Generative AI in Practice

by Bernard Marr

In Generative AI in Practice, renowned futurist Bernard Marr offers readers a deep dive into the captivating universe of GenAI. This comprehensive guide not only introduces the uninitiated to this groundbreaking technology but outlines the profound and unprecedented impact of GenAI on the fabric of business and society. It’s set to redefine all our jobs, revolutionize business operations, and question the very foundations of existing business models. Beyond merely altering, GenAI promises to elevate the products and services at the heart of enterprises and intricately weave itself into the tapestry of our daily lives. 

Access the book here.

The Equalizing Quill

by Angela E. Lauria

As AI technology rapidly advances, AI-assisted book writing is becoming increasingly accessible to writers of all backgrounds. Learning how to unlock the potential of large language models is critical for communities who have been disenfranchised and are ready to make a bigger impact on society’s thinking. It is time to read The Equalizing Quill and finally make your voice heard.

Access the book here.

The post 7 Must-Read Generative AI Books appeared first on AIM.

]]>
2024 is the Year of AMD https://analyticsindiamag.com/ai-breakthroughs/2024-is-the-year-of-amd/ Tue, 26 Dec 2023 08:33:27 +0000 https://analyticsindiamag.com/?p=10106503 2024 is the Year of AMD

"I am encouraged with the progress that we're making on hardware and software and certainly with the customer set,” said Lisa Su.

The post 2024 is the Year of AMD appeared first on AIM.

]]>
2024 is the Year of AMD

AMD has surely taken a stance to bolster itself for 2024. From its partnerships with leading companies such as Microsoft, Oracle, and Meta around the MI300X, to spearheading the AI PC revolution with Ryzen AI and its software play with ROCm, the company is highly optimistic about the future, tapping into the AI boom.

“I think what we’ve seen is the adoption rate of our AI solutions has given us confidence in not just the Q4 revenue number, but also sort of the progression as we go through 2024,” said Lisa Su, the CEO of AMD.

Su announced at a recent earnings call that AMD expects revenue of $400 million from GPUs in the fourth quarter, and for it to exceed $1 billion by the end of 2024. "This growth would make MI300 the fastest product to ramp to $1 billion in sales in AMD history," she said.

Su now expects the data centre AI market to be around $400 billion by 2027, roughly 2.7 times her previous estimate of $150 billion for the same period.

Su highlighted that the market is huge and there will be multiple winners in this market. “From our standpoint – we’re playing to win and we think the MI300 is a great product, but we also have a strong road map beyond that for the next couple of generations.”

“But overall, I would say that I am encouraged with the progress that we’re making on hardware and software and certainly with the customer set,” she said. 

Ryzen AI PCs

Leading OEMs such as Acer, Asus, Dell, HP, Lenovo, and Razer are set to feature the Ryzen 8040 Series processors announced at the event. These were announced along with the Ryzen AI 1.0 software for seamless deployment of models on the hardware, making it more convenient for users to harness the power of AI in various computing scenarios.

The significance of the mobile CPU market is likely to grow as well, especially with the presence of formidable competitors. AMD's emphasis on chiplet technology has proven beneficial, evident in its lower waste rate compared to Intel's monolithic approach. Intel has also acknowledged this advantage and introduced its first chiplet-based lineup, Meteor Lake, which is still new.

In PCs, there are now more than 50 notebook designs powered by Ryzen AI in the market, said Su. “We are working closely with Microsoft on the next generation of Windows that will take advantage of our on-chip AI Engine to enable the biggest advances in the Windows user experience in more than 20 years.”

AMD still holds the upper hand due to its longer and more profound experience with this technology.

Then it’s about GPUs

"People believe that AMD GPUs are not that suited for machine learning, but the company has been increasingly proving everyone wrong," Jungwhan Lim, head of the AI group at Moreh, told AIM. AMD GPUs seem to have witnessed a surge in community adoption, proving their mettle in the field of AI.

The testimonial from Moreh is just one example. Several AI companies and startups are partnering with AMD, proving its prowess in the market. Databricks, which is giving close competition to the big players in the AI race, tested AMD GPUs throughout 2023 and only revealed it later. The same is the case with Lamini, another AMD partner.

All of these companies narrate a similar story of AMD rising as the market faces a shortage of GPUs. Gregory Diamos, co-founder of Lamini, said, "We have figured out how to use AMD GPUs, which gives us a relatively large supply compared to the rest of the market."

Not just GPUs, AMD is strategically partnering with the Ultra Ethernet Consortium (UEC) to enhance its inter-chip networking technology and challenge NVIDIA’s dominance. The collaboration involves incorporating Broadcom’s next-gen PCIe switches, supporting AMD’s Infinity Fabric technology for improved data transfer speeds between CPUs.

The open software approach

Su also highlighted that Lamini is enabling enterprise customers to easily deploy production-ready LLMs fine-tuned for their specific data on Instinct MI250 GPUs with minimal code changes. Lamini co-founders claimed that they have the most competitive perf/$ on the market right now, “because we figured out how to use AMD GPUs to get software parity with CUDA, trending beyond CUDA.” 

Then comes the software approach. It has been discussed intensively that NVIDIA's real moat for AI has always been CUDA, its parallel computing framework. But AMD is closing in, slowly but surely, with ROCm, and has made it a focus. "As important as the hardware is, software is what really drives innovation," Lisa Su said of ROCm.

For this, the company has partnered with Nod.ai and Mipsology, which is helping the company build its software stack on par with NVIDIA.

"ROCm runs out of the box from day one," said Ion Stoica, co-founder of Databricks, highlighting that it was very easy to integrate within the Databricks stack after the acquisition of MosaicML, with just a little optimisation. "We have reached beyond CUDA," said Sharon Zhou, co-founder of Lamini, adding that ROCm is production-ready.

All of this comes while NVIDIA is planning to release its GH200 AI accelerators starting in the first quarter of 2024. Though CUDA can still be touted as better than ROCm on performance, AMD is hell-bent on making its offering better.

The post 2024 is the Year of AMD appeared first on AIM.

]]>
LangChain, Redis Collaborate to Create a Tool to Improve Accuracy in Financial Document Analysis https://analyticsindiamag.com/ai-news-updates/ai-models-revolutionised-the-field-of-natural-language-processing/ Wed, 20 Dec 2023 12:44:25 +0000 https://analyticsindiamag.com/?p=10105297

As ChatGPT faces challenges in accurately answering complex questions derived from Securities and Exchange Commission filings

The post LangChain, Redis Collaborate to Create a Tool to Improve Accuracy in Financial Document Analysis appeared first on AIM.

]]>

The emergence of advanced AI models has revolutionised the field of natural language processing, enabling machines to analyse, interpret, and respond to human language with increasing accuracy and sophistication. However, despite the significant advancements achieved in these models, some AI-powered assistants, such as ChatGPT, still face challenges in accurately answering complex questions derived from Securities and Exchange Commission filings. Researchers from Patronus AI discovered that even the best-performing AI model configuration, OpenAI’s GPT-4-Turbo, can only answer 79% of questions correctly on Patronus AI’s new test.

Partnering with LangChain, Redis has produced the Redis RAG template, optimized for creating factually consistent, LLM-powered chat applications. By leveraging Redis as the vector database, the template ensures rapid context retrieval and grounded prompt construction, making it a crucial tool for developers to create chat applications that provide responsive and precise AI responses.

The Redis RAG template is a REST API that allows developers to interact with public financial documents, such as Nike's 10-K filings. The application uses FastAPI and Uvicorn to serve client requests via HTTP. It also uses UnstructuredFileLoader to parse PDF documents into raw text, RecursiveCharacterTextSplitter to split the text into smaller chunks, and the 'all-MiniLM-L6-v2' sentence transformer from Hugging Face to embed text chunks into vectors. Moreover, it utilizes Redis as the vector database for real-time context retrieval and the OpenAI 'gpt-3.5-turbo-16k' LLM to generate answers to user queries.
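
To make the flow concrete, the sketch below strings the same components together with LangChain; it is an illustrative reconstruction rather than the template's actual source, and the file path, index name, and Redis URL are assumptions.

```python
# Illustrative RAG flow with LangChain and Redis (not the template's source code).
# The PDF path, index name, and Redis URL are placeholders.
from langchain.document_loaders import UnstructuredFileLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import Redis
from langchain.chat_models import ChatOpenAI
from langchain.chains import RetrievalQA

docs = UnstructuredFileLoader("nike-10k.pdf").load()          # parse the PDF into raw text
chunks = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=100
).split_documents(docs)                                       # split the text into chunks

embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
vectorstore = Redis.from_documents(
    chunks, embeddings,
    redis_url="redis://localhost:6379",
    index_name="financial_docs",
)                                                             # store chunk vectors in Redis

qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model_name="gpt-3.5-turbo-16k"),
    retriever=vectorstore.as_retriever(),
)
print(qa.run("What were Nike's revenues in the latest fiscal year?"))
```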

In a recent interaction with AIM, Redis CTO Yiftach Shoolman said, "Your data is everywhere: on your laptop, on the organisation repository, on AWS S3, on Google Cloud Storage, whatever. You need a platform to bring the data to a vector database like Redis, cut to pieces based on the relevant knowledge."

Criticising ChatGPT, he said, "ChatGPT doesn't know anything because it was not trained on your data," adding that users need to look for the data relevant to their request in the knowledge base they have just created.

The RAG template offers deployable reference architectures that blend efficiency with adaptability, providing developers with a comprehensive set of options to create factually consistent, LLM-powered chat applications with responsive and precise AI responses.

LangChain's hub of deployable architectures also includes tool-specific chains, LLM chains, and technique-specific chains, which reduce the friction in deploying APIs. LangServe is central to deploying these templates, using FastAPI to transform LLM-based chains or agents into operational REST APIs, enhancing both accessibility and production readiness.
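
As a rough sketch of that last step, LangServe's add_routes can expose a chain over HTTP with a few lines of FastAPI code; the chain below is a trivial stand-in, not the RAG template's retriever-backed pipeline.

```python
# Hedged sketch of serving a chain with LangServe (the chain here is a stand-in,
# not the RAG template's actual retriever-backed pipeline).
from fastapi import FastAPI
from langserve import add_routes
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_template("Answer using the filing context: {question}")
chain = prompt | ChatOpenAI(model_name="gpt-3.5-turbo-16k")

app = FastAPI(title="RAG template API")
add_routes(app, chain, path="/rag")  # exposes /rag/invoke, /rag/batch, /rag/stream

# Run locally with: uvicorn app:app --reload
```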

The post LangChain, Redis Collaborate to Create a Tool to Improve Accuracy in Financial Document Analysis appeared first on AIM.

]]>
Apple Smoothly Crafts ‘Mouse Traps’ for Humans https://analyticsindiamag.com/ai-origins-evolution/apple-smoothly-crafts-mouse-traps-for-humans/ Wed, 20 Dec 2023 09:30:00 +0000 https://analyticsindiamag.com/?p=10105256

Apple's iOS 17.2 update enables recording spatial videos and experiencing them on a particular device. Guess which one?

The post Apple Smoothly Crafts ‘Mouse Traps’ for Humans appeared first on AIM.

]]>

Apple surely knows how to develop amazing products and thrives on its ever-evolving ecosystem. But, what many don’t know is, it’s a trap – the ‘Apple Trap’ – for once you’re in, you are a happy captive.  

The tech giant recently released a new software update, iOS 17.2. With this, iPhone 15 Pro and iPhone 15 Pro Max users can now record spatial videos, which can be brought to life on the Apple Vision Pro, expected to be released early next year.

Experiencing spatial videos on Apple Vision Pro. Source: Apple Blog

In a Walled Garden

It’s not new that Apple has always pushed for a closed ecosystem. It serves as a way to not only give its users products exclusive to the club, but to retain those users through new products that work best when all Apple products are owned. With the Continuity feature, one can use their Mac with other Apple devices, allowing for smarter work and seamless transitions between the devices. 

The first Apple wearable, the Apple Watch, released in 2015, continues to sync only with iPhones. At the time of launch, even though iOS held only about 29% of the market while Android had over 70%, Apple stuck to its philosophy of launching the watch only for its niche club. The same goes for the easy pairing and content sync between iPhones and MacBooks.

Similarly, Apple TV, which operates on tvOS, the operating system developed by Apple, comes with related products such as the Siri Remote and an Apple TV+ subscription. Clubbed with other home devices such as the HomePod, the whole home setup is addressed.

Push Towards Hardware

As per a recent Bloomberg report, Apple is said to be concentrating heavily on its wearables segment in 2024. The iPhone, which has always been at the center of annual Apple product launches, will see a new version; however, it will have no significant upgrades. Instead, Apple will be focusing on its other hardware products this year.

In addition to Vision Pro, the most-anticipated Apple product for the coming year, the company will be releasing advanced versions of the AirPods and Apple Watch. Healthcare, one of the sought-after categories Apple is now chasing, will be the main focus of these product launches.

The company has announced plans to add more health-detection features, particularly for hypertension and sleep apnea, in its next series of watches. The AirPods, which also assist people with hearing impairments, will likely gain more advanced features.

Apple's new patent shows AirPods with brain wave-detecting sensors that could measure brain activity, muscle movement, blood volume, and more. It is a given that these features will require an iPhone, and how the company integrates them across other devices remains to be seen.

AI, rather ML, For its Users

Adamant about referring to 'machine learning' and not 'AI' while announcing hardware updates, Apple is going big on building an ecosystem in the PC category too. Moving beyond wearables, Apple announced a string of next-generation chips for the MacBook on the eve of Halloween. The M3, M3 Pro and M3 Max, built on a 3-nanometre process with an improved Neural Engine, will run high-performance ML models with improved speed and efficiency.

By announcing its own chips, Apple is going head-to-head against Intel, AMD and Qualcomm. With these offerings, Apple is trying to capture a share of the PC chip market, which is currently dominated by Intel with 68.4%, followed by AMD with 31.6%. In 2020, Apple parted ways with Intel after 15 years when it launched the M1 chip to power its MacBooks, indicating Apple's long-term vision of not relying on any potential competitor.

Apple is also building its own internal chatbot, Apple GPT, on its Ajax framework, and is working towards bringing generative AI features to its voice assistant Siri and other Mac products. The move further fortifies Apple's closed ecosystem and pushes for more user adoption.

While companies such as OpenAI and Microsoft are way ahead on AI developments, Apple's presence in both hardware and software gives it an advantage, thereby locking in its customers forever.

The post Apple Smoothly Crafts ‘Mouse Traps’ for Humans appeared first on AIM.

]]>
Lights, Camera, Action! Womenpreneur Duo Reinvent Text-to-Video AI https://analyticsindiamag.com/ai-breakthroughs/lights-camera-action-these-two-indian-womenpreneurs-reinvent-text-to-video-ai/ Tue, 12 Dec 2023 11:37:53 +0000 https://analyticsindiamag.com/?p=10104711

Lica is a pathbreaking platform that works on converting every form of writing, such as documents, presentations, emails, and many others, into captivating videos.

The post Lights, Camera, Action! Womenpreneur Duo Reinvent Text-to-Video AI appeared first on AIM.

]]>

A former product manager who has worked with Microsoft, Snap and Waymo entered a hackathon in May this year, where Replit CEO Amjad Masad was one of the judges. The solo woman hacker won against hundreds of other hackers. "If you're going to start a company, I'm going to invest in you," Masad assured her.

However, having a full-time job and not feeling confident about starting a company at that time, Priyaa K went about her work.

After a few months, she quit her job and joined South Park Commons (a community of technologists and builders), and the word got out. Masad reached out: "I heard you quit your job. So, when are you going to raise money?" he asked.

Thus began Lica!

Companies such as Midjourney, Runway and, most recently, Pika, which employ generative AI to transform text into images or video, have been built for a specific niche, with creators using these tools for the creative and movie industries alone. Furthermore, the generation starts from a pure text prompt.

However, a storytelling platform for all modes of writing was missing, and a couple of Indian-origin women were on a mission to change that. Priyaa K and Purvanshi Mehta have been working on Lica, a platform that works on converting every form of writing such as documents, presentations, emails, and many others, into captivating videos. 

“We are building something where video storytelling, which is the most effective form of storytelling, can be democratised for people who don’t have access to the most powerful video editors or even have the knowledge to be able to use them effectively,” said Priyaa, co-founder of Lica World, an AI startup in San Francisco, in an exclusive interaction with AIM

Need for Lica

"Lights, Camera, Action," or Lica, arose from the need to address common modes of office communication, such as PowerPoint and other office products, that were built in the 90s and early 2000s and have not evolved since. Priyaa believes that every time a story needs to be told, for example, a developer presenting technical documentation, or a journalist writing a brief for their executive team, they are constrained within the paradigm of a tool.

“We realised video storytelling, specifically interactive videos, are really going to be a powerful tool for storytelling because you can embed more information within a video than in text. You can encode more things more succinctly in a video because it’s visual and it’s the closest proxy you have to watch something live in front of you. So that’s why we chose videos,” said Priyaa. 

Furthermore, with a number of large language models and research teams working on similar platforms of text-to-video, there still exists a big market gap as Priyaa believes that a person needs an interface to connect with all these AI models because they are all “disparate models in disparate silos in disparate applications”. 

A Mix of Models

Lica uses a mix of proprietary and open-source models and looks to give humans control over fine-tuning. The models will involve different levels of human intervention, allowing a person to create customised videos depending on the occasion, and even fine-tune at various stages to change the background, voiceover, and more, as per one's directorial style.

Journey to Effective Communication

The startup is not only backed by Amjad Masad, but also by Replit VP of AI Michele Catasta, and Village Global, a venture capital firm chaired by Reid Hoffman and backed by Jeff Bezos, Bill Gates, Mark Zuckerberg and others. Lica recently closed a pre-seed round of funding.

After winning the hackathon where Priyaa met Amjad, he tweeted about her win, and the post went viral. VC Vinod Khosla recently referred to that tweet while talking about AI predictions. 

Amjad’s tweet after Priyaa’s win at the hackathon. Source: X

Combining Forces

At Microsoft, Priyaa had worked with the designer team on presentation, design and automation, and also helped start Microsoft Designer, which used to look quite different back then. “After working there for a couple of years, I got a lot of faith and I could see a lot of feedback from users, like no matter how cool AI is, it’s always five steps behind what’s there on TikTok or Instagram, where consumers enjoy a form of communication that business users don’t,” said Priyaa.

Mehta comes from a machine learning and applied research background in building large language models, or rather ‘the intersection of graph intelligence and multimodal models’, and has also worked with Microsoft, building features for the Microsoft 365 suite. “We kind of joined forces together to build Lica where anyone can create videos in AI,” said Priyaa. 

Stemming from the need to build effective communication with audiences, such as teachers looking for ways to engage students who liked TikTok and not PowerPoint, Priyaa had long had the idea of building agents for video. Having been a public speaker and a competitive debater throughout school and college, she has learned how to tell an effective story. She also believes that anyone can tell a good story, but tools are the limitation. 

Still in the experiment phase, the beta programme is expected to start in January 2024 as a gradual rollout with specific users for feedback.

The post Lights, Camera, Action! Womenpreneur Duo Reinvent Text-to-Video AI appeared first on AIM.

]]>
How Amazon’s Silicon Innovation Is Instrumental in AWS Success https://analyticsindiamag.com/ai-breakthroughs/how-amazons-silicon-innovation-is-instrumental-in-aws-success/ Thu, 30 Nov 2023 11:07:03 +0000 https://analyticsindiamag.com/?p=10103924

Amazon’s foray into AI chips and processors started five years before ChatGPT

The post How Amazon’s Silicon Innovation Is Instrumental in AWS Success appeared first on AIM.

]]>

With a series of announcements showcasing Amazon’s prowess not only in the cloud space but also in generative AI, and even taking digs at OpenAI’s security flaws in the process, AWS re:Invent 2023 in Las Vegas had a lot to offer. Advancing its AI chip ambitions, the company announced two new AI chips, AWS Graviton4 and AWS Trainium 2, at the event. 

Almost a decade ago, Amazon realised that in order to consistently improve the cost-effectiveness of tasks, it needed to redefine general-purpose computing for the cloud era, thereby pushing innovation to the silicon level. “In 2018, we became the first major cloud provider to develop our own general compute processors,” said Adam Selipsky, CEO of Amazon Web Services, at the event. 

Releasing Graviton Power

Adam Selipsky at AWS re:Invent 2023. Source: AWS YouTube

AWS Graviton, a family of processors first released in 2018, was designed for the cloud computing infrastructure running in Amazon Elastic Compute Cloud (EC2). The fourth version, Graviton4, unveiled yesterday, is the most powerful and energy-efficient chip built by AWS, with 50% more cores and 75% more memory bandwidth than the previous version, Graviton3. 

Furthermore, Selipsky announced the preview of R8g instances, which are powered by Graviton4 and are part of the memory-optimised instance family. R8g instances are designed to deliver fast performance for large datasets and are energy efficient for memory-intensive workloads. 

Breaking Barriers 

In 2020, Andy Jassy, who was then the CEO of AWS, emphasised Amazon’s commitment to advancing the cost-effectiveness of machine learning training by investing in proprietary chips. With these chips, AWS lowered the cost barrier for ML training.

Similarly, the 2018 release of Graviton was meant to break open a processor market where Intel was comfortably placed; the lack of hardware options for building data centres and cloud services had given the incumbent an advantage. Furthermore, the power efficiency of Arm cores made Graviton well suited for mobile computing and for enterprises with extensive arrays of data centres, especially AWS. Today, Amazon has 150 Graviton-based instances across the EC2 portfolio and more than 50,000 customers. 

AWS Had It Planned All Along

With GPUs being the indispensable component of AI compute, AWS has strategically placed itself in that race, and it wasn’t a recent development. Trainium, a chip built for training machine learning models, and Inferentia, a chip optimised for running inference on those models, were released a few years ago. At the event, Selipsky unveiled the second version, Trainium 2. 

In 2015, Amazon acquired Annapurna Labs for $350 million, and in 2018, AWS launched Graviton, thereby entering the chip race.

Chirag Dekate, VP analyst at Gartner, had earlier said that Amazon’s true differentiation lies in its technical capabilities. “Microsoft does not have Trainium or Inferentia,” he said.  

The chips were designed to provide accelerated performance and cost-effectiveness for ML training workloads on the cloud platform. This was something that started before the ChatGPT rage, but probably gained steam after the launch of OpenAI’s chatbot.  

AWS has already found a number of partners that have utilised its AI chips. Companies such as Adobe, Deutsche Telekom and Leonardo AI have deployed Inferentia 2 for their generative AI models at scale. Similarly, Trainium has already been used by partners such as Anthropic, Databricks and Ricoh. The use cases extend to Amazon’s internal search team as well, where the chip has been used to train large deep learning models. 

Strategic Partnerships

Amazon’s partnership with AI research company Anthropic has been a crucial one for showcasing their combined power in the generative AI space. Amazon has not just invested in the company but also hosts Claude models on its Bedrock platform, taking the partnership beyond simple compute. 

Anthropic CEO and co-founder, Dario Amodei said that AWS is the “primary cloud provider for our mission critical workloads,” and that there are three components to their partnership – compute, customer and hardware. On the hardware aspect, Amodei spoke about working to optimise Trainium and Inferentia for their use cases.  

NVIDIA, the biggest competitor in AI chips, has also built a strategic partnership with AWS. NVIDIA chief Jensen Huang made an appearance at the AWS event and spoke about how AWS was the “world’s first cloud to recognise the importance of GPU accelerated computing.” The appearance was read as a show of combined strength, a symbiotic partnership favouring both parties. 

Amazon’s steady growth in silicon innovation has helped shape its stance in the current AI market. With crucial partnerships, AWS is proving to be an essential part of AI compute. 

The post How Amazon’s Silicon Innovation Is Instrumental in AWS Success appeared first on AIM.

]]>
KorrAI’s Mission against Urban Subsidence https://analyticsindiamag.com/ai-origins-evolution/korrais-mission-against-urban-subsidence/ Thu, 30 Nov 2023 06:12:00 +0000 https://analyticsindiamag.com/?p=10103893

The Y Combinator-incubated company uses machine learning and satellite data to secure a sustainable urban future through ground motion studies.

The post KorrAI’s Mission against Urban Subsidence appeared first on AIM.

]]>

The world is sinking, and some prominent cities in the world are facing the threat of extinction. India has witnessed land subsidence in Joshimath and other Himalayan regions. The threat of subsidence, or sinking of the ground because of underground material movement, has also been affecting cities like Jakarta and Venice, amongst others.

A study recently identified 200 urban locations in 34 countries that experienced land subsidence in the past century. Calculations suggest that nearly 2.2 million square kilometres of land, which is nearly 8% of the global land surface, is exposed to a high- to very-high probability for potential land subsidence, involving 1.2 billion urban inhabitants and threatening nearly $8.2 trillion in GDP. 

At this crucial stage, scientists have come together and are applying various methods, such as artificial recharge and deep soil mixing, to control the situation. Moreover, satellites and the imagery extracted from them can be used to study a region’s ground motion and geological build, and to anticipate and act on potential land subsidence (PLS) preemptively. 

KorrAI, an Indian-Canadian geospatial tech company, has been directly involved in providing the solution. It employs satellite imagery and machine learning to address the challenges arising from ground motion, identifying and mitigating risks associated with subsidence, landslides and settlement phenomena.

The Y Combinator-funded company has clientele spanning urban planners for mine sites, airports, and construction projects, property insurers, and private and public infrastructure entities that construct highways, railways, subways, and utility lines. 

The services provided by the company are becoming increasingly relevant, as an estimated one-fifth of the world’s population is expected to be affected by land subsidence by 2040, a phenomenon intricately linked to flooding and rising sea levels.

With a mission to harness technology to ensure safer, sustainable urban futures, KorrAI’s CEO Rahul Anand, in an exclusive interview with AIM, defined their goal, vision, and challenges. “In the US alone, land sinking causes an annual economic damage of approximately $4 billion,” Anand stated, drawing a concerning parallel to escalating wildfire damages, which have spiked to nearly $230 billion annually.

Anand co-founded KorrAI in 2020 along with Rob McEwan, a data scientist skilled in remote sensing with satellites. However, there is an interesting story behind how KorrAI’s focus shifted to this area: it began as a project funded by the Canadian Space Agency to construct cloud-based data processing pipelines for the RADARSAT Constellation Mission, specifically around SAR technology.

Back in 2020, Anand’s computer science and engineering background led him to stumble upon an intriguing computer vision problem. “A mining company manually searching for mineral deposits like gold, sparked my curiosity. I created a model to identify patterns, aiding them in locating these deposits in the wild.”

This chance encounter gave birth to myriad opportunities, steering Anand into uncharted territories. “It was a colossal market waiting to be tapped, especially with the escalating capabilities of satellite imagery this decade.” 

UrbanSAR and GlobalSAR

Anand detailed the company’s proprietary data pipelines: “In the past two years, we’ve honed our focus on ground motion monitoring, culminating in the development of two pivotal algorithms, UrbanSAR and GlobalSAR.”

He added that UrbanSAR is meticulously crafted for urban environments, while GlobalSAR is adept at handling diverse settings, spanning both urban and non-urban areas. Anand also detailed that their preferred use of radar satellites stems from the uninterrupted data flow, which empowers them to analyse and interpret ground movements over time with remarkable precision.

Radar satellites, equipped with Synthetic Aperture Radar (SAR), excel in detecting minuscule changes on the Earth’s surface, down to the millimetre level. Unlike optical satellites reliant on light, SAR systems can penetrate cloud cover, ensuring data collection irrespective of weather conditions or time of day.
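
To make the idea of ground-motion time-series analysis concrete, here is a toy sketch in Python (not KorrAI’s actual pipeline, and with entirely made-up numbers) of estimating a subsidence rate for a single ground point from repeated radar-derived displacement measurements:

import numpy as np

# Displacement (mm) of one ground point measured by repeated radar passes
# over two years; negative values mean the ground has moved downwards.
# All values are invented for illustration.
days = np.array([0, 60, 120, 180, 240, 300, 360, 420, 480, 540, 600, 660, 720])
displacement_mm = np.array([0.0, -1.1, -2.3, -2.9, -4.2, -5.0, -6.4,
                            -7.1, -8.3, -9.0, -10.2, -11.1, -12.3])

# Fit a linear trend: the slope is the deformation rate in mm per day
rate_per_day, _offset = np.polyfit(days, displacement_mm, 1)
print(f"Estimated subsidence rate: {rate_per_day * 365.25:.1f} mm/year")

A real pipeline tracks a vast number of such points and, as Anand describes next, first calibrates the raw measurements against GNSS stations to strip out atmospheric noise.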

Anand highlighted the technological prowess embedded in their infrastructure and detailed an essential aspect of their pipeline. “Our system integrates a crucial calibration mechanism using a global network of GNSS stations,” he added, explaining that the calibration process serves as a cornerstone, correcting the atmospheric noise and ensuring enhanced data accuracy.

Crucially, Anand underscored the backbone of their cutting-edge framework: an adaptive, scalable compute environment. “Our entire data processing framework operates within a highly adaptable compute infrastructure. It adjusts seamlessly based on demand, empowering us to offer insights with unparalleled flexibility and competitive rates in the market,” he said.

Looking to Launch Their Own Satellites

KorrAI employs multispectral and hyperspectral imagery from various space agencies and companies in their operations. “However, we have encountered limitations with this model, particularly due to the competitive nature of satellite tasking by governments and the cost structures set by private operators, which restrict our ability to develop new use cases,” he said.

“In light of these challenges, our strategy is to find a balance. We aim to leverage the broad array of existing data while also launching our own satellites to enable tailored data collection for our unique requirements, particularly for scenarios demanding extremely high spatial resolution.”

However, to counter problems arising from hosting vast amounts of data and to streamline the process, the company has inked a partnership with AWS. “These datasets can be enormous, reaching petabytes, so one of our key strategies for optimising processing time is to organise the raw data into an efficient structure. We’re collaborating with the AWS open data program to host the European Space Agency’s Sentinel-1 data coverage in North America,” said Anand, adding that they plan on expanding this initiative to India as well.

Y Combinator’s Influence

Being a Y Combinator (YC) company has also had a profound impact on the business. The company has drawn clientele from its partnership with YC and is looking to scale its operations globally; these are already spread across regions like Hong Kong, the Maldives, the US, Canada and Australia, and are tailored to regional needs. 

“Through Y Combinator, we’ve connected with diverse companies venturing into satellite launches,” he highlighted. “This partnership potential has been instrumental. Collaborating with satellite companies to develop custom payloads allows us to target high-value monitoring in the near future.”

Speaking of the holistic impact of YC beyond immediate business gains, Anand praised its ecosystem-building prowess. “YC’s contribution transcends mere business dealings,” he reflected. “It’s about investments, networking, and accessing a wealth of entrepreneurial expertise. It’s among the finest accelerators, fostering a community where founders can contribute and glean insights from thousands of experienced peers.”

The post KorrAI’s Mission against Urban Subsidence appeared first on AIM.

]]>
Hamas War Highlights Israel’s Cutting-Edge AI Military Tech https://analyticsindiamag.com/innovation-in-ai/hamas-war-highlights-israels-cutting-edge-ai-military-tech/ Tue, 17 Oct 2023 09:03:49 +0000 https://analyticsindiamag.com/?p=10101575

Israel's commitment to advanced military tech aids citizens' safety despite evolving threats

The post Hamas War Highlights Israel’s Cutting-Edge AI Military Tech appeared first on AIM.

]]>

Back in June at Tel Aviv University, OpenAI CEO Sam Altman had spoken about how Israel would play a huge role in the AI revolution, and rightly so. There can’t be a better time to showcase that prowess than now – a war. 

While international agencies have expressed shock at the intelligence failure, with former CIA director John Brennan saying that the situation “raises questions about Israeli intelligence capabilities… and whether their intelligence sources were compromised in some way”, Israel’s technological advancements have helped soften the brunt of this well-planned attack.

Over the years, Israeli Defence Forces (IDF), the Mossad, and the Shin Bet (Shabak) have intercepted many such attacks and defused them.

Israel has prioritised knowledge-building in machine learning and algorithm-driven warfare, and has invested in AI and its military applications because of its geopolitically volatile location. Of the many interesting tales, the use of AI to assassinate a leading nuclear scientist in Iran, in order to derail the country’s capacity-building, stands out.

In an audacious and technologically advanced assassination plot, Israeli intelligence agency Mossad orchestrated the remote-controlled killing of one of Iran’s top nuclear scientists, Mohsen Fakhrizadeh. The operation, which unfolded on November 27, 2020, near the town of Absard, east of Tehran, sheds light on a new era of covert killings, where a souped-up machine gun, controlled by artificial intelligence from over 1,000 miles away, executed the target.

Fakhrizadeh, a key figure in Iran’s military establishment, had long been on Israel’s hit list for his suspected involvement in the country’s nuclear weapons program. Israel had previously employed various methods to eliminate scientists involved in Iran’s nuclear ambitions, but Fakhrizadeh proved an elusive target. 

The scientist, though relatively unknown to the world, played a pivotal role in Iran’s nuclear program. He managed to build an underground network for acquiring sensitive technology and equipment from around the world. His secrecy and meticulous planning made it challenging for international inspectors to understand the true extent of Iran’s nuclear weapons program.

However, the physicist’s commitment to living a normal life despite the persistent threats against him, his love for domestic pleasures, and his insistence on driving his car to Absard made him vulnerable.

The operation marked a significant shift in the tactics for Mossad, as the agency had traditionally favoured field operatives for such missions. However, with the help of a high-tech, computerised sharpshooter equipped with artificial intelligence and multiple-camera eyes, Mossad successfully eliminated Fakhrizadeh without a single agent physically present at the scene.

This technologically sophisticated killing machine, capable of firing 600 rounds a minute, adds a new dimension to the world of remote-targeted killings. Unlike drones, which can be shot down and draw attention in the sky, this robot is inconspicuous and can be placed almost anywhere.

The operation’s execution was intricate. Mossad transported a remote-controlled machine gun, weighing about a ton, in parts, and reassembled it in Iran. The machine gun was designed to be mounted on a Zamyad pickup truck, making it inconspicuous and mobile. To ensure accuracy, additional cameras were placed on the truck to provide a comprehensive view of the surroundings.

To ensure that the right target was engaged, a fake disabled car was strategically positioned along Fakhrizadeh’s route, equipped with another camera. This car allowed the command room to positively identify the scientist and initiate the operation.

The assassin’s role was to monitor the situation from an undisclosed location, more than 1,000 miles away, adjusting the machine gun’s sights and firing the lethal shots with the help of artificial intelligence. The delay in communication caused by the distance, coupled with the car’s movement, posed significant challenges, but the AI was programmed to compensate for these factors.

As the convoy carrying Fakhrizadeh neared the designated kill zone, the remote-controlled machine gun fired a burst of bullets, striking the physicist’s car. Fakhrizadeh and his wife were in the vehicle, and though the initial shots may not have hit him, the car swerved and came to a stop. The AI-controlled shooter adjusted the sights and fired another burst, hitting Fakhrizadeh at least once.

After the attack, the scientist’s vehicle exploded as part of the cover-up.

This unprecedented operation to eliminate Fakhrizadeh, concealed behind a curtain of technology, left Iran shaken and further highlighted the vulnerability of high-profile targets in the face of evolving assassination techniques. It challenges traditional notions of intelligence operations and raises questions about the ethical and strategic implications of remote-controlled killing machines. 

Shin Bet Using Generative AI

Not just this, Israeli security agency Shin Bet recently revealed its usage of generative AI to counteract significant threats, marking a milestone in the integration of AI into national security.

Ronen Bar, the director of Shin Bet, made this announcement during the Cyber Week conference at Tel Aviv University. This innovation includes the development of their proprietary generative AI platform, comparable to systems like ChatGPT or Bard.

One of the primary advantages of AI in Shin Bet’s operations is its ability to efficiently analyse vast amounts of surveillance data. By detecting anomalies within this data, AI has become a crucial asset in filtering through overwhelming volumes of intelligence. Director Ronen Bar emphasised that AI has also taken on a secondary role in decision-making, operating as a partner during the process.

AI Tanks & More

Though the country is caught up in an all-out war, its advancements in using AI in warfare cannot be taken lightly. Mark Dubowitz, the CEO of the Foundation for Defence of Democracies, emphasised Israel’s commitment to becoming an “AI superpower” and its demonstration of this ambition in the defence sector. 

The country’s defence force, the IDF, has also consistently pioneered cutting-edge technologies in the decades since its establishment to ensure national security and maintain a qualitative edge. 

While its Iron Dome, which leverages AI to identify incoming short-range rockets and missiles and ensure they won’t hit critical assets or civilian areas, is widely discussed, there’s a lot more to the country’s tech capabilities in defence.

They are equipped with technology on every frontier, be it air, land or water. The F-16I “Sufa” represents a highly customized version of the F-16, allowing pilots to respond to threats with unmatched precision and agility.

The “Merkava IV”, Israel’s premier battle tank, combines firepower with versatility, designed for rough terrains. In September, Israel unveiled the “Barak”, a state-of-the-art main battle tank, which integrates artificial intelligence for streamlined operations. 

The “Trophy” system safeguards these armoured vehicles against anti-tank missiles, significantly enhancing the survivability of armoured units in the field, while the “Tzefa Shirion” clears paths through mined areas, protecting troops from hidden threats.

The “Namer”, also known as the “Leopard”, features the “Trophy” missile defence system, enhancing soldiers’ protection in the field. The IDF’s expertise in surveillance and reconnaissance is exemplified by drones like the “Eitan” and “Skylark I-LE”. Enhancing situational awareness, the “EyeBall” provides a 360-degree image of rooms for soldiers’ safety.

On the offensive front, the “Spike” rocket launcher offers precise targeting at considerable distances.

These innovations underscore Israel’s commitment to cutting-edge military technology, ensuring the safety and security of its citizens, even in the face of evolving threats and challenges caused by intelligence failure.

The post Hamas War Highlights Israel’s Cutting-Edge AI Military Tech appeared first on AIM.

]]>
What Can Java Do for Machine Learning? https://analyticsindiamag.com/innovation-in-ai/what-can-java-do-for-machine-learning/ Thu, 05 Oct 2023 05:46:45 +0000 https://analyticsindiamag.com/?p=10101123 Java role in ML

Java has flexible capabilities and vast libraries, and with endorsements from major tech companies, the language is gaining traction in machine learning.

The post What Can Java Do for Machine Learning? appeared first on AIM.

]]>
Java role in ML

Python and R are undoubtedly the most widely used languages for machine learning, and yet there is no dearth of developers who use Java for the same purpose. In fact, the language is slowly catching up with Python. 

Meanwhile, LinkedIn and Oracle released the Dagli and Tribuo frameworks, respectively, in 2020, adding to Java’s machine learning ecosystem alongside the Java Machine Learning Library (Java-ML). Together, these libraries give users access to an extensive range of machine learning tools, apart from wrappers and APIs to integrate different frameworks with Java.  

How Java is used in ML

Java is the go-to tool for many machine learning tasks. Users can create algorithms, build models, and easily launch applications with this language. The good thing about Java is its flexibility—it can handle everything from preparing data to making models. 

Evelyn Miller, data science lead at Magnimind Academy, said, “You should remember that Java gives support for development in any field you want, and data science is no different.”

Developers can use Java to make it easy for different parts of their app to talk to the ML features. Using third-party open-source libraries and frameworks, users can leverage Java to implement what any other language does. For instance, the open-source TensorFlow Java library can run on any JVM for building, training and deploying machine learning models.

Java also helps make the launch of machine learning applications smooth and offers libraries with specific tools for different tasks. Weka, a popular Java machine learning toolkit, provides a graphical interface for data preprocessing, modelling and evaluation. 

This library, developed by the University of Waikato, is about as old as the language itself. However, it is still among the most widely used libraries available, and its popularity continues to rise because of its flexible data mining software.

Even big tech companies, including Google, Amazon and Microsoft, are leveraging Java for machine learning. Google developers use Java for various applications; in fact, much of the Google Suite is built in Java. 

Apart from Weka, Apache Mahout is another framework widely used by enterprises like Facebook, LinkedIn, Twitter, and Yahoo. This is mostly because the framework is scalable. Complex data structures are manipulated in Java, which might not be possible in Python. 

This can be done using different frameworks: for example, Mahout uses a distributed linear algebra backend, while ADAMS (Advanced Data mining And Machine learning System) uses a tree-like workflow structure. This allows data to be manipulated in a variety of ways. 

Adopting Java

There are 8-10 million Java developers in the world. Frank Greco, a senior consultant at Google, said at a talk, “All the big tech companies are interested to know more about using Java for ML.” 

He, along with his peers, is working on promoting the language for ML. “Java’s role in ML will come as a revelation,” Greco said. His team has engaged with major players, the likes of Twitter, Oracle, IBM and Amazon. 

The excitement about using Java in ML is unanimous across these industry giants; there is a genuine interest in exploring how Java could be harnessed for ML. “It isn’t a case of dismissing Java in favour of Python; instead, all are keen to understand Java’s potential in the ML realm,” he explained. 

Greco helped build JSR 381, a Java-friendly API for visual recognition and generic ML that provides high-level abstractions. The API is not tied to any particular ML framework; developers can choose the framework that best suits their needs. 

“The goal was to make visual recognition and ML easy to use by non-experts,” he said. Amazon implemented this API, and Greco says it is a good starting point for the language. He said, “I believe that with feedback from the community, we can move this forward.” 

The post What Can Java Do for Machine Learning? appeared first on AIM.

]]>
Meet the Researcher Curing the Healthcare System with ML https://analyticsindiamag.com/intellectual-ai-discussions/meet-the-researcher-curing-the-healthcare-system-with-ml/ Tue, 26 Sep 2023 07:48:07 +0000 https://analyticsindiamag.com/?p=10100638

Ziad Obermeyer is bringing the long-delayed impact of ML in healthcare

The post Meet the Researcher Curing the Healthcare System with ML appeared first on AIM.

]]>

Modern technology is omnipresent in our lives, yet its application remains long delayed in healthcare. Among the global healthcare research community is Ziad Obermeyer, a professor at the University of California, Berkeley, who has been focused on the intersection of machine learning and health since 2017. 

There are a number of moonshots, large-scale initiatives backed by governments or conglomerates, that promise to revolutionise the sector but haven’t been able to come up with a pocket-friendly solution. Obermeyer, who made it to the TIME AI 100 list, has managed to chart a different course. He is working on a research project with MIT researchers to build AI-based diagnostics that can run on wearable devices. 

“Given how cheap and reliable technology is for acquiring data for laymen through a smartphone or a wearable device, it opens up a whole new way to access healthcare, outside the traditional medical system,” he told AIM in an exclusive interview. 

For countries like India, which have technological talent and an overstretched healthcare system, Obermeyer believes there’s the potential to leapfrog the cumbersome electronic health records and clumsy data problems observed in the US. Through the ongoing project, Obermeyer and his team are training algorithms for smartphones to diagnose conditions like heart attacks, Alzheimer’s, dementia and other cognitive problems. They are also testing whether the diagnostic tools can be deployed through community health workers in people’s homes, outside of governments, hospitals or other established firms, to deliver healthcare at a much lower cost, he explained. 

Tech-ing it Personally

At times as a researcher, Obermeyer found it increasingly difficult to access health data. “The data lives inside of silos controlled by hospitals, government and agencies. It’s hard to collaborate for research, build healthcare-based AI products and evaluate how they work,” he said, pointing to the growing dominance of a handful of big tech companies that have access to a huge amount of medical data.

“Ultimately, it shouldn’t be the case that AI in healthcare or anywhere else. That’s not really how we want markets to work or deliver value,” he stated. 

To solve the case in point, the healthcare practitioner launched the nonprofit Nightingale Open Science project in 2021 with $6 million in philanthropic funding from Eric Schmidt’s foundation. “We used that to build datasets in collaboration with health systems and governments around the world, making sure it is ethical and secure. We put that data on our cloud platform where any researcher in the world can access it for free. That is Nightingale’s open science mentality,” he explained.

While there’s no way to reduce the risk to zero, the team at Nightingale have multiple layers of process to protect patients’ privacy and make sure the data is ethically used.

First, the data is de-identified before it is taken out of the health system, in compliance with US as well as European Union laws. Secondly, the data is maintained on their own cloud platform to monitor how it is being put to use. Obermeyer highlighted that this doesn’t compromise the client’s IP. “Everything is logged and stored so if there’s any allegations of malfeasance, we can always go back to the record to figure out exactly what happened,” he said. 

Beyond these, there are two additional layers of ethical oversight — internal and external.  

Internally, Nightingale evaluates proposals to ascertain alignment with its own ethical standards. Externally, the health systems that serve as data sources wield a veto power, ensuring that nothing transpires that might not be in the best interest of the patients whose data is being used, he explained.

The Hallucination Issue 

One cannot talk about AI models without discussing hallucinations—a concern not lost on Obermeyer.

There’s no way for a layperson to check the output, and it’s incredibly dangerous for people to use chatbots without the resources to evaluate what they produce. The broader problem is how these AI models are currently evaluated. “We need meaningful benchmarks that can’t be gamed or memorised, but give us a real view of how things work,” Obermeyer suggested.  

In 2020, he co-founded Dandelion Health, a free public service platform for evaluating algorithms, starting with electrocardiograms, moving on to other data modalities, including notes. “Soon we’ll be able to evaluate large language models,” he revealed. 

“Anyone can upload an algorithm to the environment where their intellectual property is protected. We will run that algorithm on our data and give them feedback on the performance of their algorithm on any chosen metric. But this is not possible because they don’t have access to our data. Third-party independent evaluation is also important to know what these models are good at and where they can be really dangerous,” he added. 

The startup has agreements with a hedge fund and US health systems, through which the team can access their clinically generated data: electronic health records, images, waveforms, sleep monitoring studies, everything.

“We identify the data and make curated subsets available to AI product developers, startups and companies that want to build products for better health care. A company can come to us and say they need a specific dataset and we can create it for them to build AI algorithms and then develop them for clinics. The two separate projects are working towards the same goal; to get more people building and using and validating AI products,” Obermeyer said, in conclusion.

The post Meet the Researcher Curing the Healthcare System with ML appeared first on AIM.

]]>
SiMa.ai Revolutionises Deployment of ML Applications, from Weeks to Hours https://analyticsindiamag.com/ai-news-updates/sima-ai-revolutionises-deployment-of-ml-applications-from-weeks-to-hours/ Wed, 13 Sep 2023 07:06:08 +0000 https://analyticsindiamag.com/?p=10099950

SiMa's new browser-based visual programming makes AI & ML accessible to all, streamlining the creation & deployment of ML apps from months to minutes

The post SiMa.ai Revolutionises Deployment of ML Applications, from Weeks to Hours appeared first on AIM.

]]>

SiMa.ai has released Palette Edgematic to accelerate machine learning applications at the embedded edge. The debut delivers an on-ramp to AI and ML via a ‘no-code’ approach, letting users create and fine-tune ML applications from anywhere, through a web browser, in no time.

Palette Edgematic enables a “drag and drop” feature – a code-free approach, where users can create and deploy their models and complete computer vision pipelines automatically within minutes. It provides a direct path to implementation at the edge, eliminating the need for an intermediate step in the cloud. Users can also evaluate the performance and power consumption needs of their edge ML application in real time. 

The software is a free visual development environment designed for any organisation to get started and accelerate ML at the edge. It usually takes multiple weeks to evaluate and several months to deploy ML applications, as customers have to hand-optimise models and the entire end-to-end application to get the necessary performance and accuracy. 

“ML adoption at the edge has been slow due to the lack of easy-to-use software tools. Our Palette Edgematic software provides a no code visual GUI platform that enables customers to evaluate our MLSoC platform in a few hours,” said Gopal Hegde, senior vice president of engineering and operations at SiMa.ai. 

Using Palette Edgematic, developers can prototype and evaluate ML pipelines on edge devices within minutes and use real time data streams to measure KPIs. They can use the new visual canvas to iterate design and improve pipeline performance, eliminating tedious coding transitions for edge ML implementations. 

The Palette Edgematic can convert visual representations of the pipeline to executable code with the push of a button. 

SiMa.ai announced earlier this week its results in the most recent MLPerf 3.1 benchmark competition, where it competed against NVIDIA’s newest Jetson NX and claimed a lead of 85% in performance per watt. This positions them as a dominant player in the competitive world of edge AI. 

The post SiMa.ai Revolutionises Deployment of ML Applications, from Weeks to Hours appeared first on AIM.

]]>
The Go-To Friend for AI Programming https://analyticsindiamag.com/ai-origins-evolution/the-go-to-language-for-ai-programming/ Mon, 04 Sep 2023 09:30:00 +0000 https://analyticsindiamag.com/?p=10099418

Python still remains a dominant force in AI development, with more than 275,495 companies using it.

The post The Go-To Friend for AI Programming appeared first on AIM.

]]>

Yes, we are talking about Python. This modern programming language is ubiquitous in machine learning, data analysis and pretty much the entire tech ecosystem. If you scroll through Papers with Code, you’ll find most machine learning research is done using PyTorch, a framework built on Python. The language isn’t used only in research but also in scripting, automation, web development, testing and more. But why is the language so popular?

It has a simple and readable syntax that resembles natural language. With more than 137,000 libraries covering everything from data analysis and deep learning to computer vision and web development, Python serves as a general-purpose language with dynamic use cases. Python also enjoys strong community support from active developers who contribute to the growth of the language by creating libraries, frameworks and tools. The Python Package Index (PyPI), for example, hosts thousands of third-party packages that extend Python’s capabilities, enabling developers to solve complex problems efficiently. 

Python and AI

Python is used to build AI models more widely than any other language. Overall, it ranks as the second most used language because it is simple, direct and easy to learn. Python also allows computationally expensive libraries to be written in C and then imported as Python modules, meaning users do not have to write in C, which is clunkier and more difficult to work with. 

One way this is done is with CFFI, the C Foreign Function Interface for Python. The module allows Python to call into C libraries and, combined with tools like Cython, lets developers write Pythonic code while achieving speeds comparable to those of C, which is particularly useful for performance-critical applications. This is evident in its 30 million downloads per month.

This is not limited to C: other programming languages that provide C-compatible interfaces can interact with Python as well, by exposing a C layer around their functions.
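
As a minimal sketch of the mechanism (similar to the introductory example in the CFFI documentation, and assuming a Unix-like system where the standard C library can be loaded at run time), a C function can be declared and called directly from Python:

from cffi import FFI

ffi = FFI()

# Declare the C function we want to call, exactly as it appears in a C header
ffi.cdef("int printf(const char *format, ...);")

# Load the standard C library; passing None works on Linux and macOS
C = ffi.dlopen(None)

# Build a C string and call printf as if it were an ordinary Python function
arg = ffi.new("char[]", b"world")
C.printf(b"hi there, %s.\n", arg)

The same mechanism is what lets heavy numerical code live in compiled C while the user-facing API stays in Python.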

Most importantly, Python is better focused, as a community, on finding a Pythonic way to proceed, and then advocating for it, than previous cultures were. It has multiple independent communities of use: web, data science, ML and devops. The community also built the right kind of libraries, such as NumPy and pandas (for numerical computing and data analysis, respectively), which sealed the deal for it in the scientific and research communities. 
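
As a small illustrative sketch (the numbers are invented, not from any real benchmark), this is the kind of whole-column, vectorised workflow that NumPy and pandas make routine:

import numpy as np
import pandas as pd

# A tiny, invented table of model results
df = pd.DataFrame({
    "model": ["a", "b", "c"],
    "accuracy": [0.91, 0.87, 0.94],
    "latency_ms": [12.0, 8.5, 20.3],
})

# NumPy's vectorised functions apply directly to whole pandas columns
df["score"] = df["accuracy"] / np.log1p(df["latency_ms"])
print(df.sort_values("score", ascending=False))

Operating on whole columns rather than writing explicit loops is a large part of why the language feels natural for data and ML work.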

The language has also seen massive support from corporates: Google invested heavily in building TensorFlow, while PyTorch is primarily developed and maintained by Facebook’s AI Research (FAIR) lab, now part of Meta. It isn’t surprising that a bigger community usually means better support and more libraries, which in turn feeds back into the community’s growth. 

The Python Software Foundation has been responsible for maintaining and developing Python, and it is constantly adding new features and functionality. Users can be sure that the language will be supported for the foreseeable future, which makes Python a good choice for AI development. 

Other languages catching up?

While none of the other languages hold up to the breadth of development in Python, they are nonetheless used for specific purposes. Rust is gaining attention in AI development due to its focus on memory safety, performance, and concurrent programming. Rust is known for preventing common programming errors that can lead to security vulnerabilities. This is crucial for AI systems that handle sensitive data. Its memory management is more manual compared to Python, but this provides fine-grained control over resources. 

Ruby’s adoption in AI is not as widespread as Python’s, but its ease of use and community support make it an attractive choice for AI development in certain contexts. Ruby has gained attention especially in the context of web applications that leverage AI features, and it has libraries like TensorFlow.rb, which brings TensorFlow to the Ruby community, along with other AI-related gems. 

Python still remains a dominant force in AI development, with more than 275,495 companies using it. The language is beginner friendly while at the same time being used by experts for the development of AI thanks to its extensive documentation. 

The language has a bright future: it is now being taught to children in schools and is part of the curriculum for students as young as seven years old.

The post The Go-To Friend for AI Programming appeared first on AIM.

]]>
Modular’s Attempt to Steal NVIDIA’s Mojo https://analyticsindiamag.com/ai-insights-analysis/modulars-attempt-to-steal-nvidias-mojo/ Wed, 30 Aug 2023 11:30:03 +0000 https://analyticsindiamag.com/?p=10099268

Modular’s software enables developers to run models on non-NVIDIA servers from the likes of AMD, Intel and Google, potentially addressing the GPU crunch

The post Modular’s Attempt to Steal NVIDIA’s Mojo appeared first on AIM.

]]>

AI startup Modular recently raised $100 million in a funding round led by General Catalyst, Google Ventures, SV Angel, Greylock, and Factory. 

The 20-month-old startup, founded by Google and Apple alumni Tim Davis and Chris Lattner, is making waves in the AI industry, offering software that aims to challenge the dominance of NVIDIA’s CUDA and similar alternatives by providing developers with a more accessible and efficient solution for building and combining AI applications.

“Modular’s technology scales across a wide range of hardware and devices, which gives users more flexibility as their needs evolve,” said Modular CEO and co-founder, Chris Lattner, in an exclusive interaction with AIM. 

NVIDIA’s CUDA software, used for machine learning, only supports its own GPUs, whereas Modular’s software aims to enable developers to run models on non-NVIDIA servers like those from AMD, Intel and Google, potentially providing a solution to the GPU crunch as demand from companies like Microsoft, Google, OpenAI, xAI and Meta strains supply. 

Lattner explained that Modular’s AI engine and Mojo have been designed with ease of use in mind, catering to machine learning engineers by employing a Python-based approach rather than a more intricate C++ foundation, as seen in CUDA. “When compared to CUDA for GPU programming, Modular’s engine and Mojo are easier to use and more familiar to ML engineers, notably being Python-based instead of C++ based,” he added.

He said that Modular exhibits remarkable scalability across an expansive spectrum of hardware and devices, giving users a higher degree of flexibility and ensuring that their AI solutions can evolve with their requirements. “The AI engine builds on these strengths to provide higher performance and productivity by combining individual operations into an efficient, optimised execution environment,” said Lattner. 

Modular’s tools seamlessly integrate into existing workflows, negating the need for wholesale rearchitecting or code rewriting in C++ or CUDA. This affords developers a frictionless transition and empowers them to unlock heightened productivity and performance without incurring exorbitant costs.

A cornerstone of Modular’s arsenal is the Mojo toolkit, which represents a concerted effort to simplify AI development across diverse hardware platforms. The Mojo programming language blends the ease of use associated with Python with features like caching and adaptive compilation techniques, targeting improved performance and scalability in AI development.

Towards an AI Future 

In an era where tech alumni-founded startups command high valuations, Modular’s approach to validating its commercial momentum and proving its value proposition to investors remains crucial. 

Modular’s journey is not without challenges. The adoption of a new programming language like Mojo can be hindered by Python’s established status in the machine-learning landscape. However, Lattner’s conviction in Mojo’s distinct advantages and its potential to revolutionise AI development processes remains unshaken.

Given the duo’s experience, the venture exhibits potential to make it big. For instance, Lattner has led the creation of Swift, a programming language by Apple, while Davis has led Google’s machine-learning product efforts, focusing on getting the models working directly on devices.

With a rapidly growing community of over 120,000 developers, Modular claims that it has gauged demand from thousands of prominent enterprises that are excited to deploy its infrastructure.

“We have been able to achieve tremendous momentum in only 20 months,” said Davis, Modular co-founder and President. “The financing will allow us to accelerate our momentum even more, scaling to meet the incredible demand we have seen since our launch,” he added, talking about the recent funding that it raised. 

Competition Galore

Lattner acknowledged that while Modular faces competition, a lot of companies are offering point solutions that fail to resolve the challenges across the AI infrastructure stack for developers and enterprises. Besides NVIDIA, some of its competitors include Cerebras and Rain among others. 

“There is no solution in the market that unifies the frameworks and compute architectures, and supports all their weird and wonderful features with really minimal migration pain,” said Lattner, stating the USP of the company. 

Further, he said that while others claim to be fast, a deeper dive forces one to change application- or model-specific code, and they don’t scale across different hardware types either.

Lattner also said that Modular’s technologies are designed to complement NVIDIA’s existing AI infrastructures and that the chip giant is an important partner in this endeavour. The overarching mission is to facilitate broader adoption of hardware among AI developers by unifying technology stacks, simplifying complexities, and making the process more accessible, he said.

Simply put, Modular’s strategy hinges on its holistic approach, which seeks to unify frameworks and compute architectures with minimal migration challenges. Unlike some competitors, Modular’s solutions aim to address end-to-end challenges, fostering accessibility, innovation, and ethical considerations in AI technology.

NVIDIA vs the World 

Modular is not alone; several other startups are challenging NVIDIA’s dominance in GPU manufacturing and the associated software that binds users to its chips. Notable companies in this competition include d-Matrix, Rain Neuromorphics and Tiny Corp. The collective aim is to transform the AI chip landscape by providing alternatives to NVIDIA’s products, which can be expensive for training and running machine-learning models. These startups are focusing on designing chips and software that they claim offer improved efficiency compared to NVIDIA’s GPUs.

Rain Neuromorphics, now known as Rain AI, is addressing the high costs of training and running machine-learning models on conventional GPUs. Its approach combines memory and processing, similar to human synapses, resulting in cooler and more energy-efficient operation compared to NVIDIA’s GPUs, which require continuous cooling and drive up electricity costs.

Tiny Corp, founded by George Hotz, the former CEO of Comma AI, focuses on open-source deep-learning tools named tinygrad. These tools aim to accelerate the training and running of machine-learning models.

However, NVIDIA stands apart and, according to Naveen Rao of Databricks, has separated itself from competitors. Despite the challenges and past bankruptcies of startups attempting to compete with NVIDIA, these companies are betting on the transformative potential of AI to gain traction in the competitive AI chip sector.

The post Modular’s Attempt to Steal NVIDIA’s Mojo appeared first on AIM.

]]>