
AI-Powered Innovation: Lentra’s Role in Shaping the Future of Indian Banking

“What we've seen with generative AI is the ability for it to seemingly reason about test scenarios that could be interesting but may have been overlooked,” said Rangarajan Vasudevan, CDO of Lentra


Dealing with highly critical data and operating in one of the most regulated industries, the digital lending space in India has transformed massively. While AI and ML models have long been part of such platforms, generative AI is now finding its way here as well.

With more than a decade of experience in the space, digital lending SaaS platform Lentra AI has been a prominent player in empowering major banks in India, including HDFC, Standard Chartered, and Federal Bank.

AI Enabling Nuanced Personalisation

In an exclusive interaction with AIM, Rangarajan Vasudevan, chief data officer at Lentra, spoke about the use of machine learning models in digital lending, where they have been applied to credit scoring and credit decisioning for decades. “It’s not new. Citibank pioneered it a long time ago, and now everybody is caught up on it. However, I think what’s changing of late is the emphasis on how we create these persona-specific positioning models,” he said.

The earlier, generic approach, where a single scorecard built from different ML models was pushed to broad demographics, has given way to a more nuanced method. For instance, a Gen Z applicant from a tier-two or tier-three town is different from a Gen Z applicant in a metro, so the scorecard you apply cannot be the same; it has to be tailored to individual persona types. Hence the importance of breaking down an ML model and creating a “consortium of models” that can be applied to different persona categories, as sketched below.
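
A minimal sketch of what such a consortium might look like in code: one scorecard model per persona, with simple routing logic in front. The persona names, routing rules, and use of scikit-learn’s GradientBoostingClassifier are illustrative assumptions, not Lentra’s actual implementation.

```python
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical persona buckets; a real segmentation would be far richer.
PERSONAS = ["gen_z_metro", "gen_z_tier_2_3", "salaried_metro", "agri"]

# One scorecard model per persona, each fitted on that segment's historical
# outcomes (fitting is omitted here; call .fit() per segment before scoring).
scorecards = {p: GradientBoostingClassifier() for p in PERSONAS}

def assign_persona(applicant: dict) -> str:
    """Toy routing rule: occupation, age band and location tier pick the persona."""
    if applicant.get("occupation") == "farmer":
        return "agri"
    if applicant["age"] <= 27:
        return "gen_z_metro" if applicant["city_tier"] == 1 else "gen_z_tier_2_3"
    return "salaried_metro"

def score(applicant: dict, features: list) -> float:
    """Route to the persona-specific scorecard and return a probability of default."""
    model = scorecards[assign_persona(applicant)]
    return model.predict_proba([features])[0][1]
```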

Vasudevan went on to explain that such nuanced models are already in place at Lentra, and spoke about one of their flagship case studies, not yet released, in the agri space. “There was a big push by the government earlier on what credit scheme to have for the Kisaan sector (farmers), and we were one of the pioneers to have worked with our major client in rolling out a version which is highly tailor-made in terms of positioning to that sector. The models are very different from what you would normally do when you try to push these kinds of products,” he said.

Consortium of ML Models

A mix of models is something that Lentra has always worked on. “We are a VC-backed company, so our core USP is the innovation that we keep having to do, otherwise there’s not much to it. The models are ours and are proprietary to us, and it’s something that we have grown in-house,” he said. 

Vasudevan also mentioned that they work on top of open-source platforms too. “There are platforms on top of which we build our own models. We use scikit-learn and PySpark, along with corresponding bindings to TensorFlow, PyTorch and others.”

Working in one of the most regulated spaces in the industry, Lentra has to ensure that its models are ethically fair and explainable. The reasoning for lending to specific categories of the population, for instance, should be explainable to a non-techie or a regulator. With this in mind, Lentra has restricted itself to models such as XGBoost and random forests, which make it easier to explain decisions.
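
One reason tree ensembles are favoured here is that their decisions can be summarised for a non-technical audience, for example through feature importances. The sketch below uses scikit-learn’s RandomForestClassifier on synthetic data; the feature names and data are illustrative assumptions, not Lentra’s actual scorecard inputs.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Illustrative applicant features; real scorecards would use many more signals.
feature_names = ["monthly_income", "credit_utilisation", "age", "months_at_job"]

# Synthetic data stands in for historical lending outcomes.
rng = np.random.default_rng(0)
X = rng.normal(size=(1_000, len(feature_names)))
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=1_000) > 0).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# A regulator-friendly summary: which inputs drove the decisions overall.
for name, importance in sorted(
    zip(feature_names, model.feature_importances_), key=lambda t: -t[1]
):
    print(f"{name:>20}: {importance:.2f}")
```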

The result, he said, is “a consortium of models where the models themselves are orchestrated using elaborate business logic, which makes it slightly more complex than just directly using an XGBoost.” For cases where the regulatory burden is lower, they resort to deep learning models and don’t have to worry about explainability.
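
As a rough illustration of what orchestrating a model score behind business logic might look like, here is a sketch; the gates and thresholds are invented for the example and do not reflect any actual lending policy.

```python
def decide(applicant: dict, model_score: float) -> str:
    """Hypothetical orchestration: hard business rules first, model score second.

    The rules and thresholds are illustrative, not an actual lending policy.
    """
    # Hard regulatory/business gates that no model score can override.
    if applicant["age"] < 18:
        return "reject: below legal lending age"
    if not applicant.get("kyc_verified", False):
        return "refer: KYC incomplete"

    # The persona-specific scorecard output (probability of default) drives the rest.
    if model_score < 0.05:
        return "approve"
    if model_score < 0.15:
        return "refer: manual review"
    return "reject: risk above threshold"
```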

Vasudevan concluded with the need for a collaborative, innovative approach and for bringing a vernacular angle to the models, so as to drive a far more meaningful and practical change in this geography. “The vernacular angle is just starting to get tapped into, only because folks like Microsoft or Amazon are releasing expansions of those models to the local market,” he said.

Generative AI With Caveats

While Lentra has been building its in-house ML models alongside continuous work on open-source platforms, the company has also experimented with generative AI. Beyond improved productivity among employees, the biggest use cases for generative AI have been in identifying test cases. “What we’ve seen with generative AI is the ability for it to seemingly reason about what could be interesting test scenarios that you might have missed,” said Vasudevan.

Citing the example of a loan request form where a user needs to enter an income range and age bracket, Vasudevan explained how generative AI has helped in coming up with test cases. “It’s a very simple form and if I give that kind of a form to GenAI, it is able to reason around 15 different test scenarios that you’ve got to work through and make sure that your product is capable of handling. For instance, what if we give an age group such as 15 to 18, where lending is not legally permitted in some countries, what would we do in this case?” explained Vasudevan.
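
As a rough illustration of the kind of prompt involved, the sketch below asks a model to enumerate test scenarios for such a form. It assumes the OpenAI Python client (openai>=1.0) with an API key in the environment; the form spec, prompt wording, and model name are assumptions for illustration, not what Lentra actually used.

```python
from openai import OpenAI  # assumes openai>=1.0 and OPENAI_API_KEY in the environment

client = OpenAI()

# Hypothetical spec for the simple loan request form discussed above.
form_spec = """
Loan request form fields:
- income_range: one of <3L, 3-10L, 10-25L, >25L (INR per year)
- age_bracket: one of 15-18, 19-25, 26-40, 41-60, 60+
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are a QA engineer for a digital lending product."},
        {
            "role": "user",
            "content": "List test scenarios, including edge cases such as legally "
                       "ineligible age brackets, for this form:\n" + form_spec,
        },
    ],
)

print(response.choices[0].message.content)
```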

Discussing limitations based on experiments Lentra conducted on the ‘ChatGPT family of GenAI tools’, Vasudevan said consistency was the biggest problem. “To be able to have a specific type of output consistently for the same or similar type of input is a given in the world of software like this one; the software is deterministic. We would give an input, a stimulus, and we’ll get some output. That’s very, very common, people just take it for granted. But with this particular experiment, what we saw was the same input in a benchmark that we did back in February or March resulting in a certain accuracy coverage of our test cases, and when it was repeated in June it gave different results. The numbers were completely off,” said Vasudevan.

The experiment that gave close to 80% accuracy earlier yielded only 10% when repeated in June. “There was a lot of theorising at that stage because I think it is not just us, but a couple of other companies, who had also highlighted this, but nobody got a clear answer from OpenAI. So we wouldn’t know if it’s a case of the model itself performing badly or OpenAI did something with transformer models and decided to compress them or whatnot,” added Vasudevan.
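
One way to catch this kind of drift is a small regression benchmark that periodically re-runs a fixed prompt set and measures how many expected scenarios the model still covers. The harness below is a generic sketch built for illustration, not the benchmark Lentra ran; generate_test_scenarios is a hypothetical wrapper around whatever GenAI call is in use.

```python
from typing import Callable, Iterable

def coverage(generated: Iterable[str], expected: Iterable[str]) -> float:
    """Fraction of expected scenarios that appear (by keyword) in the generated output."""
    text = " ".join(generated).lower()
    expected = list(expected)
    hits = sum(1 for keyword in expected if keyword.lower() in text)
    return hits / len(expected)

def run_benchmark(
    generate_test_scenarios: Callable[[str], list[str]],  # hypothetical GenAI wrapper
    prompts_with_expectations: list[tuple[str, list[str]]],
) -> float:
    """Re-run the same prompts and report average scenario coverage.

    Running this periodically and comparing against a stored baseline would
    surface the kind of ~80% to ~10% drop described above.
    """
    scores = [
        coverage(generate_test_scenarios(prompt), expected)
        for prompt, expected in prompts_with_expectations
    ]
    return sum(scores) / len(scores)
```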

That said, Vasudevan confirmed that they are in the middle of further experiments and that, in the long run, they should be building their own internally trained language models.



Vandana Nair

With a rare blend of engineering, MBA, and journalism degrees, Vandana Nair brings a unique combination of technical know-how, business acumen, and storytelling skills to the table. Her insatiable curiosity for all things startups, businesses, and AI technologies ensures there is always a fresh and insightful perspective in her reporting.