
AI Is a Hammer: Who Is Using It, and How?


AI is a hammer. Anyone can use it to drive a nail into a wall. A skilled professional can carve an intricate statue with it. A criminal can use it to break into a house or a car. A child could hurt themselves with it. Ultimately, it's a tool, and how we use that tool dictates both what we accomplish and how we are perceived. With valuable insights from John Moretti, cybersecurity expert and Principal Solutions Architect at eSentire, we look at the practical uses of AI, both the pros and the cons, and why guardrails are becoming essential when implementing it in a business.

The Multitude of Madness: How AI Adds Value to an Organization's Security

Many solution providers now use artificial intelligence in nearly every area of the business: not only in sales and marketing but, most importantly, in the SOC (Security Operations Centre). With AI technologies, millions of signals can be ingested every day, coming from multiple sources in multiple environments for a multitude of reasons. So, what happens to these signals daily without a cybersecurity partner in charge? What we often hear from customers in response to this question is: "So-and-so in IT looks after firewalls." Unfortunately, in this context, that is likely not enough coverage.

With millions of daily signals, how can Barry and Barbara from IT identify and respond to genuine threats amidst the network noise? Too often, they can only react to an issue already occurring rather than proactively guard against threats. In fact, the answer to "when do they respond?" is often "when it's too late": only after a breach has happened is a provider engaged to help provide a solution.

In instances like these, AI is pivotal. A human can read a blog or monitor an analytics portal to assess whether the messaging applies to the business's security, and can decide whether action is needed, but only one item at a time. AI operates differently: it ingests data, normalizes it, applies machine-learned rule sets, and dictates the next action. In ninety-nine percent of cases, the action is simply, "It's okay, nothing to worry about here." However, the one percent that the AI identifies as a threat is where it gets interesting, and oftentimes crucial.
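To make that ingest-normalize-classify loop concrete, here is a minimal, hypothetical sketch in Python. The signal fields, the scoring rule, and the 0.9 threshold are illustrative assumptions for this article, not eSentire's actual pipeline, where a trained model would stand in for the simple scoring check below.

```python
# Hypothetical sketch of AI-assisted signal triage: ingest raw
# signals, normalize them onto a common schema, apply a (stand-in)
# learned rule set, and decide the next action. All field names
# and thresholds are illustrative assumptions.

def normalize(raw_signal: dict) -> dict:
    """Map source-specific fields onto a common schema."""
    return {
        "source": raw_signal.get("src", "unknown"),
        "event": raw_signal.get("event_type", "unknown"),
        "score": float(raw_signal.get("risk_score", 0.0)),
    }

def classify(signal: dict) -> str:
    """Stand-in for a machine-learned rule set: most signals are benign."""
    if signal["score"] >= 0.9:
        return "threat"   # the crucial ~1%
    return "benign"       # "nothing to worry about here"

def triage(raw_signals: list[dict]) -> list[dict]:
    """Return only the signals that need human attention."""
    escalations = []
    for raw in raw_signals:
        signal = normalize(raw)
        if classify(signal) == "threat":
            escalations.append(signal)
    return escalations

if __name__ == "__main__":
    feed = [
        {"src": "firewall", "event_type": "port_scan", "risk_score": 0.2},
        {"src": "endpoint", "event_type": "ransomware_beacon", "risk_score": 0.97},
    ]
    for alert in triage(feed):
        print("Escalate to SOC:", alert)
```

The point of the sketch is the ratio: of millions of inputs, almost everything falls through the benign branch, and only the rare escalation ever reaches a person.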

The Human in the Loop: Where AI Technology Meets Human Direction

For example, John explained to me that when eSentire's SOC analysts get an alert, the first action is to block the threat. They then use the underlying technology to support this action and assess whether they've handled similar threats before. The threat then moves into an investigation stage with real human beings. AI isn't replacing people in the organization. Instead, it's designed to remove the heavy lifting, eliminate the cumbersome cognitive tasks, and free us to do what we do best by nature: collaborate.
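As a rough illustration of that block-first, investigate-second ordering, here is a hypothetical sketch of the workflow. The class and function names, the indicator history, and the investigation queue are assumptions made for illustration, not eSentire's tooling.

```python
# Hypothetical sketch of a block-first SOC workflow: contain the
# threat immediately, check whether a similar threat was handled
# before, then queue the case for a human analyst.

from dataclasses import dataclass, field

@dataclass
class Alert:
    host: str
    indicator: str

@dataclass
class Soc:
    known_indicators: set[str] = field(default_factory=set)
    investigation_queue: list[Alert] = field(default_factory=list)

    def block(self, alert: Alert) -> None:
        print(f"Blocking {alert.indicator} on {alert.host}")

    def handle(self, alert: Alert) -> None:
        self.block(alert)                                   # step 1: block first
        seen_before = alert.indicator in self.known_indicators
        print("Seen similar threat before:", seen_before)   # step 2: history check
        self.investigation_queue.append(alert)              # step 3: human investigation
        self.known_indicators.add(alert.indicator)

soc = Soc()
soc.handle(Alert(host="finance-laptop-07", indicator="evil.example.com"))
```

Note that the human analyst is the last step, not the first: the machine contains and contextualizes, and the person decides.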

Bad Guy AI: The Other Side of AI Technologies

Unfortunately, AI is not just for the wholesome and virtuous; it is also exploited by malicious actors. The bad guys are using AI to enhance ransomware and malware capabilities, and they sell those capabilities on the black market, enabling less skilled criminals to launch attacks as a service. Much like bidding at an auction, these threat actors purchase pre-packaged ransomware online, widening the scope for potential attacks.

Experts like John have seen this in action. Cybercriminals use AI to bypass everyday technologies, enhancing their tools to overcome new firewalls and security measures.

The AI Spy

Threat actors are more like digital spies. They don't grab your finances and data and run, as in a Hollywood film. Instead, they wait and watch whilst avoiding detection. Threat actors breach systems and then do nothing for months; after all, if no alarm went off, why call the police? They use AI to listen, learn, and gather the data needed to elevate privileges, trying to map every role and every permission in the environment. AI is patient. Once the threat actors have that data, the cycle continues, and the data is sold to the highest bidder.

How does this play out in the real world? During a review of the MDR market and some rudimentary testing, John made a shocking discovery. In one example, the customer had been breached for five years without triggering any alerts. After further investigation, all of their usernames and passwords were found on the dark web.

Security Strategizing: Best Practices for AI

Securing Large Language Models (LLMs)

Most of us these days have heard of ChatGPT, a free, widely used LLM (Large Language Model). With an intuitive, easy-to-use interface, many people leverage it for tasks like organizing notes or rewriting documents. However, threat actors exploit these tools too, especially those that aren't guardrailed or regulated by major tech companies like Microsoft or Google.

Without safety measures in place, employees can input sensitive corporate information into an LLM, where it can be linked back to the organization. Companies are often slow to block access to unvetted tools, which frequently results in corporate data leaks.

Securing AI

AI now plays multiple roles: attacking, defending, and, most recently, needing its own security. The LLM is the most obvious place for any organization to start. To stay competitive, we need it, but it must be easy to use and non-restrictive. This is where providers like eSentire excel. With their free, secure LLM gateway, customers can install it in their environment, log and correlate the data, and monitor when someone is using it. John added that eSentire is taking it a step further this year, coming out with MDR (Managed Detection and Response) for LLMs to help protect against policy violations.
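To illustrate what a gateway in front of an LLM might do at its simplest, here is a hypothetical sketch that logs every prompt and blocks obviously sensitive content before forwarding. The regex patterns, log format, and forwarding stub are assumptions for illustration only; they are not a description of eSentire's gateway.

```python
# Hypothetical sketch of an LLM gateway: log every prompt for
# later correlation, flag obviously sensitive content against a
# policy, and only forward clean requests to the model.

import logging
import re

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("llm-gateway")

# Crude examples of patterns a data-loss policy might flag.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),           # US SSN-like numbers
    re.compile(r"(?i)\b(password|api[_ ]?key)\b"),  # credential keywords
]

def forward_to_llm(prompt: str) -> str:
    """Stand-in for the call to the real model endpoint."""
    return f"(model response to {len(prompt)} chars of input)"

def gateway(user: str, prompt: str) -> str:
    """Log the prompt, block policy violations, else forward."""
    log.info("user=%s prompt_chars=%d", user, len(prompt))
    for pattern in SENSITIVE_PATTERNS:
        if pattern.search(prompt):
            log.warning("Blocked prompt from %s: policy violation", user)
            return "Request blocked: possible sensitive data."
    return forward_to_llm(prompt)

print(gateway("barbara", "Rewrite these meeting notes."))
print(gateway("barry", "Our admin password is hunter2."))
```

Even this toy version shows the value: usage becomes visible and correlatable in the logs, and the riskiest prompts never leave the building.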

AI is a powerful tool with vast potential, but its impact depends on how we, as human beings, use and secure it. As it evolves, it's imperative that we implement safeguards and collaborate with experts to protect against threat actors and malicious attacks. AI must be integrated into threat detection, and steps should be taken to secure LLM gateways. For businesses to navigate the ever-changing landscape and stay ahead, keeping security and innovation progressing together, they need to understand both the capabilities and the risks of AI.


Tom Croft

Field Sales Engineer - UCaaS, CCaaS, WAN