Generative AI has stirred major concerns in cybersecurity, with malicious actors leveraging the technology to their advantage. Recognising this threat, at Google Cloud Next ‘24 in Las Vegas, the company unveiled new measures to tackle these challenges head-on.
Gemini, Google’s flagship family of LLMs, is expanding its role in security operations across the investigation process, building upon previous releases like natural language search and case summaries. A new feature, available by the end of this month, will assist analysts throughout their workflow in Chronicle Enterprise and Chronicle Enterprise Plus. It recommends actions, conducts searches and creates detection rules to enhance response times.
Moreover, analysts can now request the latest threat intelligence from Mandiant cybersecurity consulting within the platform, with Gemini guiding them to relevant pages for deeper investigation. In 2022, the company acquired Mandiant, which specialises in dynamic cyber defence, threat intelligence, and incident response services.
Gemini enhances threat intelligence through conversational search across Mandiant’s database. VirusTotal now integrates OSINT reports for streamlined analysis. In the Security Command Center, teams can search for threats using natural language and receive critical alert summaries.
It also offers insights into cloud misconfigurations, vulnerabilities, and attack paths. AI is integrated into various security services, including Gemini Cloud Assist, offering IAM Recommendations, Key Insights, and Confidential Computing Insights for enhanced security posture and workload protection.
Similarly, it has also introduced Chrome Enterprise Premium, a solution designed to reinforce endpoint security for organisations. It is generally available now.
Google Cloud has over 513,775 customers with over 85% market share. In 2022, Google Cloud generated revenues of 26 billion U.S. dollars, which represents approximately nine percent of Google’s total revenues.
“More than 60% of funded gen AI startups and nearly 90% of gen AI unicorns are Google Cloud customers, including companies like Anthropic, AI21 Labs, Contextual AI, Essential AI, and Mistral AI who are using our infrastructure,” said Thomas Kurian, chief executive officer, Google Cloud, during the keynote session of the event.
Leading enterprises like Deutsche Bank, Estée Lauder, Mayo Clinic, McDonald’s, and WPP are building new generative AI applications on Google Cloud. Pfizer has accelerated data analysis from days to seconds, while 3M utilizes Gemini in Security Operations to streamline security management. Engineers at Fiserv’s Security Operations Center can now create detections and playbooks more efficiently, resulting in faster responses for analysts.
Google’s new measures come when DSCI, the data protection industry body, reported that over half a million new malware are detected daily, contributing to the already vast pool of one billion circulating malware programs. In 2023, cybersecurity defenders uncovered 400 million instances of malware across 8.5 million endpoints, highlighting the immense scale of the issue.
What are Others up to?
Google is not the only company injecting generative software to solve security issues. At the end of March, Google competitor Microsoft launched Security Copilot to streamline threat intelligence and prioritise security incidents by correlating data on attacks.
Microsoft claims that Security Copilot is the first and only generative AI security product that builds upon OpenAI’s GPT-4 AI to defend organisations at machine speed and scale without compromising customer data. The tool empowers defenders to mitigate risks and respond to security threats effectively.
“Frankly, the cybersecurity threat landscape has never been more challenging or more complicated,” said Microsoft CEO Satya Nadella, during the release of Security Copilot.
Adding on to Nadella, Vasu Jakkal, corporate vice president of security and compliance at Microsoft added, “With Security Copilot your data is always your data. It stays within your control, and it is not used to train the foundational AI models. In fact, it is protected by the most comprehensive enterprise compliance and security controls.
While Google Cloud and Microsoft Azure are trying hard to bolster their security measures, Oracle has leapt ahead. Governments, including India’s Ministry of Education, Bangladesh, and the US, favour Oracle Cloud Infrastructure due to its transparent cloud approach and robust data encryption.
Oracle’s 47 years of trust in governments globally stems from its commitment to data security, distinguishing it from other providers. Oracle database is unique in that it operates on multiple hyperscale clouds, whereas databases on Amazon or Google clouds are proprietary to those platforms and cannot be run elsewhere. Microsoft also partnered with Oracle in a multiyear agreement to enhance AI services. It will now use Oracle Cloud Infrastructure (OCI) AI and Microsoft Azure AI for daily Bing conversational searches.
LLMs are Prone to Vulnerability
“The number and sophistication of cybersecurity attacks continues to increase, and gen AI has the potential to tip the balance in favor of defenders, with Security Agents providing help across every stage of the security lifecycle: prevention, detection and response,” added Kurian.
Google’s Gemini, OpenAI’s ChatGPT and similar LLM-based chatbots are susceptible to security vulnerabilities that can lead to the generation of harmful content, disclosure of sensitive information, and execution of malicious actions.
Recently, a study by Texas-based threat research firm HiddenLayer discovered that attackers could induce Gemini to leak sensitive data by manipulating system prompts. The team also found they could coax Gemini into producing misinformation about elections and providing instructions on illegal activities like hotwiring cars.
Similarly, Microsoft and OpenAI collaborated on research revealing how threat actors use LLMs like GPT-4 to improve their attack game. Their strategy includes incorporating AI as a productivity tool in their offensive tactics, illustrating their tactics like LLM-informed reconnaissance, social engineering, and scripting tasks.
At the same time at an enterprise level, generative AI has brought new challenges for defenders in cybersecurity. These challenges include connecting various events like suspicious website visits, strange device activities, or unusual communications to detect potential threats from unknown sources. Both humans and machines find this task difficult, but AI aids in adapting to evolving attack techniques, assessing risks, and highlighting critical issues for analysts.
The main issue is distinguishing genuine threats from false alarms, which requires reducing irrelevant data. Moreover, AI-driven algorithms are needed to tackle cybersecurity issues as attacks become more machine-centric.
By the end of 2024, generative AI is expected to influence cybersecurity purchasing choices, with widespread integration across security operations indicating a growing trend towards AI-centric cybersecurity solutions.
“We are right to be worried about the impact (of AI) on cybersecurity. But AI, I think actually, counterintuitively, strengthens our defense on cybersecurity,” said Google CEO Pichai at Munich Security Conference in February,
So while generative AI is growing exponentially and touching our everyday lives, the tech titans behind it are also taking proactive steps to mitigate any threats or risks.