Cyber Daily recently had the pleasure of chatting virtually with Craig Adams, Chief Product Officer of Rapid7, for a discussion on all things AI.
From how it can streamline the way teams respond to cybersecurity incidents to the dangers of not using it at all, Adams says AI should be part of every security team’s arsenal.
Cyber Daily: Many observers expect the AI bubble to eventually burst, but one area where it appears legitimately useful is cybersecurity. Can you explain why it is such an important part of the modern cybersecurity toolkit?
Craig Adams: So first, let’s start by saying that it’s been fascinating to follow AI’s journey in cybersecurity since [machine learning] models started to come into fashion only a few years ago, or, I should say, came back into fashion a few years ago.
Every security team’s first question was how to prevent the use of AI in their organization, how to stop AI-driven data exfiltration and the like. Today, I think we understand that fighting the use of AI is like fighting the tide; it’s going to happen. So the only questions are, firstly, can you secure it? And secondly, are you getting the benefits?
So to answer your question specifically, one of the things that AI is amazing at is reacting faster. One of the dirty secrets of security is that most security teams in any organization spend their time on non-malicious incidents. To use the needle-in-the-haystack analogy, they actually spend their time on the hay, not on the needle.
And there’s no better use case for AI than detecting threats faster and optimizing workflows, right? However, I still think that in the end you will want a human for the real corrective action.
I think Clippy is my mental model for AI, if you’re my age and remember the little paper clip. I don’t think Clippy is going to lock down everyone’s firewalls autonomously. But can AI tell you what to focus on and radically cut the number of steps in the process? Yes. I also think that most organizations are going to consume AI through what their vendors offer.
Cyber Daily: How does the human in the mix work with the AI? What is that balance?
Craig Adams: So if you stick with the needle analogy, because I think this is true of every security team, you know, the first thing a good AI initiative will do is suppress the need for you to investigate the known, benign things that your tools trigger on in the first place.
The second thing a good AI program will do is look at the workflow you would perform, starting from the initial identification of a threat, and automatically handle the majority of tasks from that point on. So it will say, “Great, when I get a threat alert, I do the following five things.” It will do those five things for you and, finally, summarize a recommendation.
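To make the workflow Adams describes concrete, here is a minimal sketch in Python of what such an automated triage pipeline might look like. Every field name, enrichment step and the known-benign list here is a hypothetical illustration for this article, not a description of Rapid7’s products or any real API.

```python
# Hypothetical sketch of an AI-assisted alert triage pipeline.
# All names, steps, and thresholds are illustrative only.
from dataclasses import dataclass, field


@dataclass
class Alert:
    source: str                    # e.g. "edr", "ids"
    indicator: str                 # hash, IP, or domain that fired the rule
    context: dict = field(default_factory=dict)


KNOWN_BENIGN = {"10.0.0.5", "backup-agent.exe"}   # learned from past triage


def lookup_asset(alert: Alert) -> str:
    return "workstation-42"        # stub: would query an asset inventory


def lookup_reputation(alert: Alert) -> str:
    return "unknown"               # stub: would query threat intelligence


def find_similar_incidents(alert: Alert) -> list:
    return []                      # stub: would search past cases


def triage(alert: Alert) -> dict:
    """Run the routine steps an analyst would do, then hand off."""
    # Step 1: suppress known-benign noise before a human ever sees it.
    if alert.indicator in KNOWN_BENIGN:
        return {"action": "auto_close", "reason": "known benign"}

    # Steps 2-4: the enrichment an analyst would otherwise do by hand.
    alert.context["asset"] = lookup_asset(alert)
    alert.context["reputation"] = lookup_reputation(alert)
    alert.context["similar"] = find_similar_incidents(alert)

    # Step 5: summarize and recommend, leaving the final call to a human.
    severity = "high" if alert.context["reputation"] == "malicious" else "low"
    return {"action": "escalate_to_human", "severity": severity,
            "summary": f"{alert.source} alert on {alert.indicator}"}


print(triage(Alert(source="ids", indicator="203.0.113.7")))
```

The design point is the hand-off at the end: the pipeline does the rote steps, but the escalation decision still lands with a person.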
And when it comes to security, I think we’re still going to rely on a human for the last call, but it’s a sea change. I will say that I still think we are in the early stages of adopting AI in security; although it’s the buzzword, most organizations are still trying to cut through the hype and figure out where AI is genuinely useful.
But I think optimizing the workflow, making recommendations, is a big step forward.
Cyber Daily: How far do you think AI can go in the mix? Looking five to ten years out, where do you think AI will fit into the security puzzle?
Craig Adams: It can go a long way.
I mean, a very apt analogy is the last major disruption, the cloud; and, you know, we’re still left with the question: “Where are we in terms of cloud adoption?”
I still think we’re in its infancy. I still think we’ll see, over a five-year period, a doubling of cloud usage by most organizations at a minimum. So when I look at the security world, we have this dynamic where everyone wants to be proactively informed of any abnormal behavior. Nobody wants another alert.
So I’m thinking about the extent to which security can be automated, and to be clear, automated either by suppressing benign alerts or through investigations and recommendations. I still think you’re going to see a human in the loop, particularly for remediation. I still think we’re going to want professionals to evaluate what steps to take, but there’s still a lot of room to maneuver.
Cyber Daily: Let’s turn things around and look at the other side of the coin: the risks of AI. I know you’ve talked in the past about AI engines being infiltrated. Can you explain that to me in detail?
Craig Adams: First, the obvious: I actually believe, provocatively, that the biggest risk of AI is that an organization doesn’t use it. So my number one risk, the existential threat, is non-use.
The second piece, though, is that once you actually have AI, you go through a progression. First of all, you need to protect your model. You need to make sure you have the same protections around a dynamic model that you would have otherwise. And then there is a lot to be said about model pollution, or how you can ensure that the model you have, and the recommendations that come from it, remain truly pure.
We work with our clients on best practices for routine audits: regularly having a human inspect what your AI model is producing to make sure you actually like the outcome. But I still think the biggest threat is non-use. The second is protecting your models. Third is regular auditing to detect bias.
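As an illustration of the audit practice Adams describes, here is a minimal sketch of a human review loop over a model’s verdicts. The model call, the 5 per cent sampling rate and the review queue are invented for demonstration.

```python
# Hypothetical sketch of a routine human audit over an AI model's output.
# The model call, sample rate, and queue are illustrative only.
import random

SAMPLE_RATE = 0.05  # send roughly 5% of verdicts to a human reviewer


def classify(alert: dict) -> str:
    """Stand-in for the model being audited."""
    return "benign"  # stub: a real model would score the alert


def audit_stream(alerts: list[dict]) -> list[dict]:
    review_queue = []
    for alert in alerts:
        verdict = classify(alert)
        # Sample a slice of *all* verdicts, not just escalations, so that
        # drift toward silently closing real threats gets caught too.
        if random.random() < SAMPLE_RATE:
            review_queue.append({"alert": alert, "verdict": verdict})
    return review_queue


if __name__ == "__main__":
    alerts = [{"id": i, "source": "ids"} for i in range(1000)]
    queue = audit_stream(alerts)
    print(f"{len(queue)} verdicts queued for human review")
```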
Cyber Daily: Have you ever seen examples of AI models being tampered with to change the outcome?
Craig Adams: You know, I would say that what I’ve seen is not so much tampering as model training bias.
There are notorious examples of AI hiring engines from some cloud providers that were trained on the existing employee base in order to analyze resumes. There was a gender bias in the existing staff, which was then reflected in the model itself. The second thing is we’ve seen examples of pollution, where attackers send additional queries in order to bias the training data set.
Because, again, everything in AI is based on your training data set.
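As a toy illustration of that point, here is a short, self-contained example of training-data poisoning via label flipping, using scikit-learn. The synthetic data set and 30 per cent flip rate are invented for demonstration, not drawn from the interview.

```python
# Toy demonstration of training-data poisoning via label flipping.
# Dataset, flip rate, and model choice are illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline: a model trained on clean labels.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# An attacker who can inject or relabel a slice of the training data
# flips 30% of the labels; the model silently absorbs the bias.
rng = np.random.default_rng(0)
poisoned_y = y_train.copy()
idx = rng.choice(len(poisoned_y), size=int(0.3 * len(poisoned_y)), replace=False)
poisoned_y[idx] = 1 - poisoned_y[idx]

poisoned = LogisticRegression(max_iter=1000).fit(X_train, poisoned_y)

print("clean accuracy:   ", clean.score(X_test, y_test))
print("poisoned accuracy:", poisoned.score(X_test, y_test))
```

The same mechanism is why the routine output audits described above matter: the degradation shows up in the model’s verdicts long before anyone inspects the training data.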
The last one is that we’ve seen hacked AI models. So, to be very clear, the newest parts of any architecture tend to be the ones with the least security. There’s a clear pattern from when people moved from on-premises to cloud, or from cloud to containers, or started using AI… We tend to innovate first and secure last, and so we have absolutely seen results get hacked because people have not put the same protections around artificial intelligence as they would around any other part of their infrastructure.
Cyber Daily: And what does that look like? What does the end result look like once a model has been hacked?
Craig Adams: So, several things.
So, first of all, it looks like ransomware incidents. One of the prize assets of every organization is its data, so it’s data exfiltration, locking up the model, and then paying a ransom or attempting recovery at the end.
Second, you see data pollution: specifically, insertions and deletions in the data sets. But the most likely event organizations will experience is a ransomware attack on their AI model.
And sure, there are different groups using different tactics, but the modern threat actor is very opportunistic. We do see targeted attacks, but people are looking for things on a large scale and at this point it’s just too easy, too tempting.
Cyber Daily: What about AI in the hands of the bad guys? Because we know that’s a revolution in itself.
Craig Adams: The future is now. To be very clear, in any sort of technical disruption, your first adopters are always those who seek to do harm.
So AI presents a variety of malicious use cases. The first is threat actors using it for spear phishing. It’s simply too easy and too effective at producing very targeted lures.
Second, you also see AI being used for reconnaissance, discovering a target’s environment. Given the amount of public information that exists, ranging from office locations to infrastructure analysis, we see bad actors using AI at scale because it allows them to use their time efficiently.
This is the curse of any technical disruption: your adversaries start using it quickly, while your defenders tend to adopt it slowly. And that’s why I come back to the biggest threat AI poses to any organization: not using it, and letting your adversary become more effective while you’re still trying to protect yourself the old way.