Beyond NVIDIA becoming the world’s most valuable company, its chief Jensen Huang recently discussed which new AI applications he is most excited about going forward.
Huang Bets Big on Proactive Customer Service
Jensen believes that the future of customer service is going to change significantly.
“The number one most impactful AI application will probably be customer service,” said Huang, explaining that what matters about chatbots in customer service is the data flywheel: every conversation and engagement is captured, creating more data.
“Currently, we’re seeing data growing about 10x every five years. I would not be surprised to see data growing 100x every five years because of customer service,” he said, adding that it will help companies collect more data and insights to extract better intelligence and provide better service.
He further highlighted that this could usher in a time when companies contact customers and solve problems before they even arise. “Just like preemptive maintenance, we’re going to have proactive customer support,” Huang said, having earlier noted that every company’s business data is its gold mine.
GenAI is already changing the game for customer service. Today, many companies are leveraging it to supercharge their customer support.
Recently, Bland AI put up a billboard promoting its AI agent, which can handle all sorts of phone calls for businesses in any voice, and it created a buzz.
Automation Anywhere, a leader in AI-driven automation, also launched new AI Agents that can slash process task times from hours to minutes, increasing business impact up to tenfold in areas like customer service.
Velocity, a top Indian cash flow-based financing platform, launched Vani AI, India’s first AI-based interactive calling solution for financial institutions, which helps reduce operational costs by 20-30% while enhancing customer experience.
Fractal Analytics, a leading AI solutions provider for Fortune 500 companies, effectively reduced call handling time by up to 15% using its latest innovation, dubbed Knowledge Assist, on AWS.
During a six-month pilot programme, nearly 500 knowledge workers in contact centres adopted Knowledge Assist, handling hundreds of thousands of queries monthly and managing complex data from over 10,000 documents across PDF, DOC, and PPT formats. The pilot showed a 10-15% reduction in average data retrieval time and a 30% call deflection rate due to self-service capabilities.
Generative AI for Everyone
NVIDIA’s chief said that GenAI is everywhere and that we’re at the beginning of a new industrial revolution: instead of generating electricity, we’re generating intelligence.
“Recently, using GenAI, we made it possible to make regional weather predictions down to a couple of kilometres. It would have taken a supercomputer about 10,000 times more capability to predict weather down to a kilometre,” he added, saying he’s also excited that GenAI is being used to generate chemicals, proteins, and even physics, or physical AI.
Huang believes GenAI can enhance logistics and insurance, and also keep people out of harm’s way. From physical and biological applications to 3D graphics, digital twins, and virtual worlds for video games, every industry is involved in GenAI, according to him, and those that are not are simply not paying attention.
When asked how enterprises can make AI more sustainable, Huang said that sustainability has a lot to do with energy, and that we don’t need to put AI training data centres where the energy grid is already challenged.
“The Earth has a lot more energy, it’s just in the wrong places. We can capture that excess energy, compress it into an AI model, and then bring these AI models back to the society where we could use it,” he said, adding that AI doesn’t care where it went to school.
The Future is Small
Huang added that while today’s computing experience is retrieval-based, in the future it will be more contextual, more generative, and run right on the device via a small language model, dramatically reducing internet traffic.
“It’ll be much more generative with some retrieval to augment. The balance of computation will be dramatically shifted towards immediate generation. This way of computing is going to save a ton of energy and it’s very sensible,” he said.
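The balance Huang describes, mostly generation with “some retrieval to augment,” is essentially retrieval-augmented generation. Here is a minimal sketch of the retrieval half using a toy bag-of-words similarity search; the document text and function names are illustrative, not from any NVIDIA product:

```python
import math
import re
from collections import Counter

def vectorize(text):
    """Bag-of-words term-frequency vector for a piece of text."""
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, documents, k=2):
    """Return the k documents most similar to the query."""
    qv = vectorize(query)
    return sorted(documents, key=lambda d: cosine(qv, vectorize(d)),
                  reverse=True)[:k]

def build_prompt(query, documents, k=2):
    """Assemble a generation prompt augmented with retrieved context."""
    context = "\n".join(retrieve(query, documents, k))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

# Toy customer-service knowledge base (illustrative).
docs = [
    "Our return policy allows refunds within 30 days of purchase.",
    "Shipping is free on orders over $50.",
    "Support is available 24/7 via chat and phone.",
]
print(build_prompt("What is the return policy for refunds?", docs, k=1))
```

In a real on-device system, the bag-of-words scoring would be replaced by embeddings, and the assembled prompt would be fed to a local small language model, keeping most computation on the device as Huang suggests.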
Microsoft and Meta have also made announcements focused on small language models.
Huang highlighted that the big idea about the future of working with AIs is prompting, adding, “We’re going to have so many more interesting questions because we’re going to get a lot of answers very quickly.”
When asked how to best help customers and organisations get started today with GenAI, Huang said that users can leverage platforms like Databricks’ Data Intelligence Platform (DIP) and NVIDIA NIMs.
NIMs (NVIDIA Inference Microservices) are containerised AI microservices designed to accelerate the deployment of GenAI models across various infrastructures. They simplify the creation of GenAI applications such as copilots and chatbots by providing scalable deployment, advanced language model support, flexible integration, and enterprise-grade security, enabling developers to build powerful AI applications quickly.
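Once deployed, a NIM exposes an OpenAI-compatible HTTP API. A minimal sketch of what a call might look like, assuming a NIM container running locally; the port, endpoint path, and model name are illustrative assumptions, not a definitive setup:

```python
import json

# Hypothetical local endpoint: NIM containers serve an OpenAI-compatible
# chat completions API. Host, port, and model name below are assumptions.
NIM_URL = "http://localhost:8000/v1/chat/completions"

def build_chat_request(model, user_message, temperature=0.2):
    """Build the JSON payload for an OpenAI-compatible chat completion."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "temperature": temperature,
    }

payload = build_chat_request("meta/llama3-8b-instruct",
                             "Summarise our refund policy in one sentence.")
body = json.dumps(payload)

# Sending the request requires a running NIM container, e.g.:
# import urllib.request
# req = urllib.request.Request(
#     NIM_URL, data=body.encode(),
#     headers={"Content-Type": "application/json"})
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the request format follows the OpenAI chat schema, existing client code can typically be pointed at a NIM endpoint by changing only the base URL and model name.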
“Go get yourself a NIM on DIP,” he said, encouraging people to engage with AI.
“Whatever you do, just start and engage! GenAI is one of those things you can’t learn by watching or reading about. You just learn by doing. It is growing exponentially and you don’t want to wait and observe an exponential trend because in a couple of years you’ll be left so far behind. So, just get on the train, enjoy it and learn along the way!” suggested the NVIDIA chief.