UHG
Search
Close this search box.

Generative AI to Drive Future Cybersecurity Threats

With generative AI readily available to the average user, a recent report published by Verizon Business predicts that threat actors can start leveraging AI as soon as this year.

Share

Illustration by Nikhil Kumar

While AI has been leveraged in several areas of expertise, one of the more alarming ones is in the field of cybersecurity, specifically, on the threat-actor side.

With generative AI becoming easily available to the average user, a recent report published by Verizon Business predicts that threat actors can start leveraging AI as soon as this year. “Threat actors of all types, from the most sophisticated nation-states and highly resourced criminal syndicates to solo cybercriminals, will eventually adopt AI,” the report stated.

This isn’t just the use of deepfakes or other such antagonistic general use cases, but the use of AI to take down companies, access confidential information, or just ease up the process of running scams on a large scale.

“They’ll go for the least sophisticated, lowest-effort applications first. Generating more phishing messages? Pretty simple. Translating social engineering attempts into multiple languages? Done,” the report further stated.

Speaking to AIM, Anshuman Sharma, who heads cybersecurity consulting services under the Verizon Threat Research Advisory Center (VTRAC), said that just because there isn’t a lot of discussion surrounding generative AI’s use amongst threat actors doesn’t mean that it’s non-existent.

“It isn’t discussed on the dark web or the clear web by the threat actors, but we already know that there are tools, like WormGPT and FraudGPT, available.

“Obviously, the organisation has to be prepared because three to five years down the line, we’ll move towards strong AI, which can learn things that weak AI can do, but it can use that learning in a different context altogether. It’s going to happen very soon,” Sharma said, elaborating on the research so far.

This, he emphasised, was a major reason why businesses need to start leveraging AI. Thanks to AI, being left behind is no longer an option, with cybersecurity becoming a more serious concern.

Meanwhile, when it comes to the security of AI systems being integrated, Sharma said that it was a relatively new area of concern.

The Weakest Security Link – AI or Humans?

Recently, AIM spoke to a few AI jailbreakers who position themselves as ethical hackers. These hackers focus on jailbreaking AI systems specifically to highlight flaws in the systems of both proprietary and open-source models.

However, while these are largely community-based initiatives for the betterment of generative AI, this may not be the case for much longer. 

“We just witnessed with CrowdStrike how one glitch can affect the entire world. It was reported that we were thrown into the Stone Age for a few hours, during which handwritten boarding cards were given. That’s the repercussion that it can have,” he explained.

Sharma elaborated that while it wasn’t directly caused by AI, it was a case of over-reliance on a third party tool that was given too much internal access, something that organisations integrating AI are doing as well.

“Right now, there is obviously a dependency on the providers and faith that the AI or machine learning products they deliver are helping us fight the battle against the bad guys using AI,” he said.

But, it’s still something to take note of, considering these are third-party tools accessing large amounts of confidential data. Already, while it isn’t public news as yet, Sharma said there have been cases of models being jailbroken for this very purpose.

“Those models were leveraged to bypass guardrails, using loaded language to nudge the LLMs to provide responses, which ideally, they shouldn’t provide,” he told AIM.

The reliance on LLMs and GenAI, however, is not unfounded and inevitable. As is often the case, organisational implementation of GenAI is directly to improve efficiency within a company. What could take hours for a human to do can be done in a few minutes by AI. 

“AI is not going to take your job, but a person with the knowledge of AI is definitely going to take your job. Whether we want it or not, we have to leverage these models because we may be bound by rules, but threat actors are not,” he said.

However, while this could pose a major problem for organisations, a larger problem looms ahead – human error. The reason why GenAI could be leveraged against a company is largely reliant on how the employees within that company use them.

As Sharma mentioned before, models were used to bypass guardrails, but the cause of this was an employee feeding ChatGPT with huge amounts of sensitive data that the model learnt from.

Additionally, as per Verizon’s 2024 Data Breach Investigation Report (DBIR), a significant amount of cybersecurity breaches were caused due to human involvement, with as many as 68% of incidents attributed to non-malicious human actions.

“These range from errors and privilege misuse to stolen credentials and social engineering, showing the persistent risk that human factors pose. Shockingly, breaches resulting from exploitation of these vulnerabilities have surged by 180% — almost triple last year’s rate,” the report stated.

Likewise, Sharma stated that phishing still posed a huge problem, and a lack of education meant that companies were more likely to be at risk thanks to humans rather than AI.

“Phishing remains a pervasive threat, occurring frequently and indiscriminately across all sectors. And AI-based phishing is even creating more issues for the good guys to manage. On the prevention side, it’s the one that always has to be human, which naturally becomes the weakest element in the chain,” he said.

Nevertheless, there are ways to curb this, both on the AI and human side.

Data, Control and Awareness

That’s the formula to prevent these attacks and potential attempts to jailbreak.

Sharma said that organisations potentially at risk of cyber attacks were largely dependent on how good the controls built into their datasets were, and how they were used for training purposes, in order to restrict the LLMs used.

“Technical controls, implementing it by testing it again on the jailbreaking side, and making the users aware – that’s the key to how we make it,” he said.

As previously reported by AIM, pentesting companies, including Verizon, have started taking up the testing of AI systems integrated into companies. Additionally, companies have also begun implementing zero-trust policies when it comes to generative AI usages.

Opportunities Galore 

Similarly, cybersecurity startups have begun sprouting, taking advantage of the gap, with the global AI in the cybersecurity market expected to be valued at $60.6 billion in 2028, growing from $22.4 billion last year.

Seemingly echoing this, larger companies like Google are offering specific programmes catered to AI cybersecurity startups. Additionally, with the niche in the market, several of these security startups, like Sydelabs, Oxeye, Helios, and YC-funded Cyberfend, have been acquired.

Interestingly, Sydelabs has worked on an AI firewall, preventing attempts at jailbreaking organisational AI systems. Similarly, LLM vaults are another method through which this issue is being addressed by startups like BoxyHQ and Skyflow

The need to integrate AI for organisations is vital, whether it’s to improve overall efficiency or fight off threat actors leveraging the same. Like Sharma warned, this could become a threat area that gains increasing significance in the next decade. 

However, with the increase in funding in the area, as well as startups taking the lead in bridging the gap, the fallout may not be as drastic as is expected.

📣 Want to advertise in AIM? Book here

Picture of Donna Eva

Donna Eva

Donna is a technology journalist at AIM, hoping to explore AI and its implications in local communities, as well as its intersections with the space, defence, education and civil sectors.
Related Posts
19th - 23rd Aug 2024
Generative AI Crash Course for Non-Techies
Upcoming Large format Conference
Sep 25-27, 2024 | 📍 Bangalore, India
Download the easiest way to
stay informed

Subscribe to The Belamy: Our Weekly Newsletter

Biggest AI stories, delivered to your inbox every week.

Flagship Events

Rising 2024 | DE&I in Tech Summit
April 4 and 5, 2024 | 📍 Hilton Convention Center, Manyata Tech Park, Bangalore
Data Engineering Summit 2024
May 30 and 31, 2024 | 📍 Bangalore, India
MachineCon USA 2024
26 July 2024 | 583 Park Avenue, New York
MachineCon GCC Summit 2024
June 28 2024 | 📍Bangalore, India
Cypher USA 2024
Nov 21-22 2024 | 📍Santa Clara Convention Center, California, USA
Cypher India 2024
September 25-27, 2024 | 📍Bangalore, India
discord-icon
AI Forum for India
Our Discord Community for AI Ecosystem, In collaboration with NVIDIA.