Evolving Cybersecurity Practices Through AI’s Large Language Models

By Gregor Stewart, Vice President of Artificial Intelligence at SentinelOne

The tech world has been abuzz about AI for well over a year. Yet, beyond creating photo-realistic images that involve too many fingers, it remains unclear to the cybersecurity community how exactly this technology can be implemented to their benefit. More pressingly, how can it help reduce the whopping 5 billion cyber attacks recorded in India in 2023, a figure that grew a staggering 63% from Q1 to Q4 of that year?

The reality today is that cybersecurity practitioners such as analysts, threat hunters, and others throughout the region are looking for a hay-coloured needle in a field of haystacks. Even if they looked right at it, there is so much happening around them that the needle is nearly impossible to identify. Not because it wasn’t seen, but because the searcher wasn’t told that the shape of the needle had changed.

This is just one example of what cybersecurity teams must deal with daily. Threats to organisations change continuously, putting at risk the security of those institutions and the privacy of the data they hold. This, in turn, affects the daily lives of everyday users who rely on connected technologies to pay for groceries, manage banking, drive cars, and beyond.

The AI implementation ‘How’
With such a shortage of skilled talent to address modern cybersecurity threats, companies have stopped asking why they need AI and begun to consider what implementation will look like.

The most significant difference will be the use of common language powered by large language models (LLMs). For comparison, today’s skilled professionals must work across various platforms, each with its own language that demands not only knowing what to ask but how to ask it.

While there are outlined directions, much of the fine-tuning comes from running queries over the course of a career and developing the finesse to extract what’s needed. However, by combining AI’s powerful ability to collect and analyse data from across platforms and sources with its ease of understanding common language, even junior team members can use plain human language to run queries across various tools, data sets, and far-reaching networks.

They don’t need to learn or master various querying languages, or acquire the wisdom to know how to ask the right questions. They can simply run a query such as “Can vulnerability ‘X’ be found anywhere in the network?”
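To make that concrete, below is a minimal sketch of how such a natural-language query flow might work. It assumes a hypothetical call_llm helper standing in for whichever LLM API an organisation uses and a hypothetical run_platform_query function standing in for the security platform’s search endpoint; neither refers to a real product’s SDK.

```python
import json

def call_llm(prompt: str) -> str:
    """Placeholder for any LLM completion API (an assumption, not a real SDK call)."""
    raise NotImplementedError("connect this to your LLM provider of choice")

def run_platform_query(query: dict) -> list:
    """Placeholder for the security platform's search endpoint (an assumption)."""
    raise NotImplementedError("connect this to your EDR/SIEM search API")

def ask(question: str) -> list:
    # Ask the model to translate plain English into a structured query,
    # so the analyst never has to learn the platform's own query language.
    prompt = (
        "Translate this analyst question into a JSON query with the fields "
        "'indicator', 'scope' and 'time_range':\n" + question
    )
    query = json.loads(call_llm(prompt))
    return run_platform_query(query)

# Example: ask("Can vulnerability 'X' be found anywhere in the network?")
```

The point is not the specific fields but the division of labour: the model handles the query language, while the analyst supplies the intent.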

Today’s artificial intelligence is already able to identify the value of the information being obtained and can even make suggestions to sharpen the practitioner’s request and assist in extracting more robust information.

Levelling up existing employees
As many companies across the region attempt to fill the tens of thousands of open cybersecurity roles, they can also leverage AI to simultaneously level up existing employees through suggestions and next-step recommendations.

I mentioned above the challenge of knowing how to properly query a platform for information; even getting to the point of knowing what to query for is time-consuming. Practitioners must ask: What is this alert I’m receiving? Is there a breach happening right now? If so, where is the breach coming from and what are my options for remediation? If not, why am I receiving this alert?

Artificial intelligence can already provide a greater wealth of information based on previous actions. For example, if an alert is triggered, AI can assist by:
Offering previous insight – “This alert is dismissed by 9/10 people and has a low likelihood of impacting your system, how would you like to proceed?”
Raising a red flag – “An event looks suspicious, click here to investigate further”
Making suggestions – If an Indicator of Compromise (IOC) appears, the system can suggest actions based on playbooks, such as forcing user re-authentication, quarantining the affected asset, or another pre-determined appropriate action.
Instead of wrestling with queries, languages, and schemas, a junior analyst can simply follow the prompts to keep operations running smoothly.
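As a rough illustration of that triage flow, consider the sketch below. The playbook actions, alert fields, and dismissal statistics are invented for illustration; in a real deployment they would come from the platform’s alert history and the organisation’s own playbooks.

```python
# Illustrative playbook actions keyed by IOC type (assumed names, not a real product's catalogue).
PLAYBOOK_ACTIONS = {
    "credential_theft": "force user re-authentication",
    "malware_ioc": "quarantine the affected endpoint",
}

def triage(alert: dict, dismissal_rate: float) -> str:
    """Turn a raw alert into a plain-language next-step prompt for the analyst."""
    if dismissal_rate >= 0.9:
        # Previous insight: most analysts dismissed this class of alert.
        return (f"This alert is dismissed by {round(dismissal_rate * 10)}/10 people and has "
                "a low likelihood of impacting your system. How would you like to proceed?")
    if alert.get("ioc_type") in PLAYBOOK_ACTIONS:
        # Playbook suggestion for a recognised indicator of compromise.
        return f"IOC detected. Suggested playbook action: {PLAYBOOK_ACTIONS[alert['ioc_type']]}."
    # Default: raise a red flag for further investigation.
    return "An event looks suspicious. Investigate further?"

# Example: triage({"ioc_type": "credential_theft"}, dismissal_rate=0.2)
```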

Near real-time matching of requests to database schemas, IDs, keys, and query types can strengthen a junior or senior employee alike, all through AI and basic language.
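One toy way to picture that matching, assuming an invented schema and using simple fuzzy string matching where a production system would rely on an LLM or embeddings:

```python
from difflib import get_close_matches

# An invented schema, used purely for illustration.
SCHEMA_FIELDS = ["src_ip", "dst_ip", "user_id", "process_name", "event_timestamp"]

def match_field(term: str) -> list:
    """Map an analyst's plain-language term onto the closest schema field names."""
    return get_close_matches(term.lower().replace(" ", "_"), SCHEMA_FIELDS, n=2)

# Example: match_field("source IP") returns ["src_ip"] against this toy schema.
```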

Making security proactive
Assisting teams in becoming proactive is crucial, as a cybersecurity team that remains reactive is inherently vulnerable. It’s essential for leaders to continuously motivate their teams to enhance their security awareness, even if it means taking small, ongoing steps.

With AI’s information included alongside its nudges, teams can fully analyse database and network activity, and the system can prompt users to take immediate action through straightforward yes-or-no questions. Regardless of the risk-assessment criteria, adopting more low-risk actions invariably leads to improved security measures.

There’s also a significant advantage in skill development that comes from taking proactive steps. For novices in the field, receiving suggestions and prompts for the ‘next step’ accelerates the learning process, eliminating the need for extensive shadowing of more experienced team members. Expressing these prompts in natural language that aligns with the user’s intent is key. While this method proves effective, it requires users to discern whether it aligns with their objectives and to make adjustments as necessary. Over time, users learn to interpret these instructions, akin to learning from a patient instructor.

Summarising these interactions allows for constructive feedback, suggesting alternative approaches for future tasks. This methodology not only facilitates immediate learning but also ensures that all actions, whether undertaken by an employee or AI, are documented. Such records and notebooks smooth out communication between human and machine, standardising processes.

Implementing the future, today
Looking towards the future, the current talent shortage in cybersecurity is not merely a temporary challenge but a structural one. Often, those responsible for setting company-wide security policies are detached from the everyday realities of cybersecurity work. The routine tasks associated with maintaining security standards are both tedious and stressful, leading to high attrition rates among professionals. Herein lies the potential for AI to revolutionise the field by automating mundane tasks.

This shift allows cybersecurity professionals to focus more on strategic security initiatives, thereby alleviating the drudgery that currently characterises the profession.
