There’s a new trend emerging in the generative AI space — generative AI for cybersecurity — and Google is among those looking to get in on the ground floor.
At the RSA Conference 2023 today, Google announced Cloud Security AI Workbench, a cybersecurity suite powered by a specialized “security” AI language model called Sec-PaLM. An offshoot of Google’s PaLM model, Sec-PaLM is “fine-tuned for security use cases,” Google says — incorporating security intelligence such as research on software vulnerabilities, malware, threat indicators and behavioral threat actor profiles.
Cloud Security AI Workbench spans a range of new AI-powered tools, like Mandiant’s Threat Intelligence AI, which will leverage Sec-PaLM to find, summarize and act on security threats. (Recall that Google purchased Mandiant in 2022 for $5.4 billion.) VirusTotal, another Google property, will use Sec-PaLM to help subscribers analyze and explain the behavior of malicious scripts.
Elsewhere, Sec-PaLM will assist customers of Chronicle, Google’s cloud cybersecurity service, in searching security events and interacting “conservationally” with the results. Users of Google’s Security Command Center AI, meanwhile, will get “human-readable” explanations of attack exposure courtesy of Sec-PaLM, including impacted assets, recommended mitigations and risk summaries for security, compliance and privacy findings.
“While generative AI has recently captured the imagination, Sec-PaLM is based on years of foundational AI research by Google and DeepMind, and the deep expertise of our security teams,” Google wrote in a blog post this morning. “We have only just begun to realize the power of applying generative AI to security, and we look forward to continuing to leverage this expertise for our customers and drive advancements across the security community.”
Those are pretty bold ambitions, particularly considering that VirusTotal Code Insight, the first tool in the Cloud Security AI Workbench, is only available in a limited preview at the moment. (Google says that it plans to roll out the rest of the offerings to “trusted testers” in the coming months.) It’s frankly not clear how well Sec-PaLM works — or doesn’t work — in practice. Sure, “recommended mitigations and risk summaries” sound useful, but are the suggestions that much better or more precise because an AI model produced them?
After all, AI language models — no matter how cutting-edge — make mistakes. And they’re susceptible to attacks like prompt injection, which can cause them to behave in ways their creators didn’t intend.
That’s not stopping the tech giants, of course. In March, Microsoft launched Security Copilot, a new tool that aims to “summarize” and “make sense” of threat intelligence using generative AI models from OpenAI including GPT-4. In the press materials, Microsoft — similar to Google — claimed that generative AI would better equip security professionals to combat new threats.
The jury’s very much out on that. In truth, generative AI for cybersecurity might turn out to be more hype than anything — there’s a dearth of studies on its effectiveness. We’ll see the results soon enough with any luck, but in the meantime, take Google’s and Microsoft’s claims with a healthy grain of salt.