Large-language-model (LLM) systems can enhance and expedite cybersecurity analysis, addressing common challenges faced by security operations and threat intelligence teams. However, companies have been hesitant to adopt the technology due to a lack of familiarity and understanding.
To implement LLMs successfully, organizations need support and guidance from security leadership: leaders must identify problems the technology can realistically solve and evaluate how relevant it is to their specific environment. John Miller, head of Mandiant’s intelligence analysis group, highlights the importance of navigating the uncertainty surrounding LLMs and providing a framework for comprehending their impact.
At Black Hat USA, Miller and Ron Graf, a data scientist at Mandiant, part of Google Cloud, will demonstrate how LLMs can augment security personnel, improving the speed and depth of cybersecurity analysis.
Establishing a robust threat intelligence function requires three key components: relevant threat data, the ability to process and standardize that data effectively, and the capacity to interpret it in the context of the organization’s security concerns. LLMs can help close gaps in each of these areas by letting staff query threat data in plain, non-technical language and by disseminating the resulting intelligence to other teams within the organization, as in the sketch below. This maximizes the effectiveness of the threat intelligence function and enhances return on investment.
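A minimal sketch of that idea in Python, assuming a hypothetical `call_llm()` helper (standing in for whatever LLM API the organization actually uses) and a toy set of already-normalized indicator records; it shows how a plain-language question and structured threat data can be combined into a single prompt rather than requiring a query language:

```python
import json

# Hypothetical helper (not from the article): wrap whatever LLM API the
# organization already uses behind a single function.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("Connect this to your LLM provider of choice")

# Illustrative, made-up threat records standing in for data the intel
# function has already collected and standardized.
THREAT_RECORDS = [
    {"indicator": "203.0.113.7", "type": "ip", "campaign": "ExampleCampaign",
     "first_seen": "2023-06-01", "targets": ["finance"]},
    {"indicator": "invoice-update.example.com", "type": "domain",
     "campaign": "ExampleCampaign", "first_seen": "2023-06-12",
     "targets": ["finance", "legal"]},
]

def answer_question(question: str) -> str:
    """Answer a plain-language question using only the collected records."""
    prompt = (
        "You are assisting a corporate threat-intelligence team.\n"
        "Answer the question using ONLY the JSON records below. "
        "If the records do not contain the answer, say so.\n\n"
        f"Records:\n{json.dumps(THREAT_RECORDS, indent=2)}\n\n"
        f"Question: {question}"
    )
    return call_llm(prompt)

# The kind of question a non-technical stakeholder might ask:
# answer_question("Which business units do these indicators appear to target?")
```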
While LLMs and AI-augmented threat intelligence offer substantial benefits, the drawbacks should be weighed. LLMs can generate coherent threat analysis and save time, but they may also produce inaccuracies, so human analysts remain essential to validate LLM outputs and catch fundamental errors. Prompt engineering, that is, optimizing how questions are formulated, can further improve the quality of LLM responses, as the example below illustrates.
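One illustration of what prompt engineering can look like in practice (the specific wording and the hash are placeholders, not material from the talk): the same request is reformulated to constrain scope, require citations to supplied context, bound the output, and give the model an explicit way to say it does not know.

```python
# A loosely worded request that invites the model to guess.
vague_prompt = "Tell me about this malware hash: abc123..."

# The same request, engineered: fixed role, explicit rules, bounded output,
# and an "insufficient data" escape hatch to discourage fabrication.
engineered_prompt = """You are a threat-intelligence analyst assistant.
Task: summarize what is known about the file hash below.
Rules:
- Use only the context provided; do not speculate about attribution.
- Cite which context item supports each claim.
- If the context is insufficient, answer "insufficient data".
Output: at most 3 bullet points.

Context:
1. Sandbox report: hash abc123... observed beaconing to 203.0.113.7
2. Internal ticket: same hash found on two finance-department laptops

Hash: abc123..."""
```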
Graf emphasizes that keeping humans in the loop is crucial, and that chaining multiple models together, so that one model checks another’s output, can verify the integrity of results and minimize inaccuracies. This augmentation approach, combining AI with human expertise, has gained traction in the cybersecurity industry.
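A minimal sketch of what such chaining might look like, again assuming the hypothetical `call_llm()` wrapper from above: a second model pass checks the first draft against the source report, and anything it cannot support is flagged for the human analyst rather than passed along as fact.

```python
# Hypothetical wrapper around the organization's chosen LLM API.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("Connect this to your LLM provider")

def draft_analysis(raw_report: str) -> str:
    # First pass: produce a draft summary of the incident report.
    return call_llm(f"Summarize the key findings of this incident report:\n{raw_report}")

def verify_analysis(raw_report: str, draft: str) -> str:
    # Second pass: check every claim in the draft against the source text.
    return call_llm(
        "Compare the SUMMARY to the SOURCE. List any claim in the summary "
        "that is not supported by the source, or reply 'OK' if every claim "
        f"is supported.\n\nSOURCE:\n{raw_report}\n\nSUMMARY:\n{draft}"
    )

def analyze_with_review(raw_report: str) -> dict:
    draft = draft_analysis(raw_report)
    check = verify_analysis(raw_report, draft)
    # A human analyst still signs off; the flag simply routes drafts with
    # unsupported claims to the front of the review queue.
    return {
        "draft": draft,
        "verifier_findings": check,
        "needs_human_review": check.strip() != "OK",
    }
```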
Leading cybersecurity firms like Microsoft and Recorded Future have embraced LLMs to enhance their capabilities. Microsoft’s Security Copilot leverages LLMs to investigate breaches and hunt for threats, while Recorded Future employs LLMs to synthesize vast amounts of data into concise summaries, saving analysts considerable time.
Threat intelligence inherently deals with “Big Data,” necessitating extensive visibility into various aspects of attacks and attackers. LLMs and AI empower analysts to be more effective in this environment, enabling the synthesis of valuable insights from massive datasets. The combination of AI and human expertise is pivotal to unlocking the full potential of LLMs in threat intelligence.
In conclusion, adopting AI-augmented threat intelligence helps organizations address security shortcomings. By harnessing the power of LLMs and human intelligence, teams can synthesize intelligence effectively, strengthen their threat-intelligence capabilities, and achieve higher efficiency in cybersecurity analysis.