Today, we’re still discussing the same cybersecurity issues we were talking about 15 to 25 years ago, and we still haven’t solved the cybersecurity problem.
This poignant observation sets the stage for a compelling conversation with Andrew Hollister, Chief Information Security Officer (CISO) at LogRhythm.
With over 25 years of experience in software, infrastructure, and security roles across both the private and public sectors, Andrew is a seasoned professional with a keen understanding of the persistent challenges in the cybersecurity landscape. Having joined LogRhythm in 2012, Andrew brought with him a deep interest in leveraging machine-based analytics to tackle cybersecurity issues.
In our conversation with Andrew, we delve into the critical shifts and challenges anticipated in the cybersecurity landscape for 2024. We explore his perspectives on the current state of cybersecurity, the challenges organizations face, and the essential strategies required to mitigate risk in an ever-changing digital world.
What do you believe will be the most significant shift or challenge in the cybersecurity landscape in 2024, and how should organizations prepare for it?
As we look ahead, I believe the cybersecurity industry will continue to face a tight budget landscape, and this will force organizations to reevaluate their security tech stacks. Although cybersecurity budgets have traditionally been protected, that wasn’t always the case in 2023. The challenge is likely to continue in 2024, and as a result, organizations are going to have to make tough decisions about where they invest.
Generative Artificial Intelligence (generative AI) will also play a significant role in the cybersecurity landscape in 2024. This applies both from the perspective of how it can be used to make cybersecurity improvements and the risks it brings.
The biggest challenge around generative AI will be the safety of the data being entered into these AI tools. If organizations feed sensitive data, such as personally identifiable information (PII) or healthcare records, into generative AI tools, they face the risk of that data being targeted by threat actors. At the same time, the legal rights to the output of a generative AI tool still aren’t completely clear, which could lead to legal complications further down the road.
Generative AI is gaining prominence in various industries. In the context of cybersecurity, how do you foresee the risks and rewards of deploying generative AI? What precautions should organizations take when integrating this technology into their security systems?
The hype around generative AI continues to build, and more organizations are exploring ways to incorporate the technology wherever possible. Before rushing into adoption, the cybersecurity industry needs to take a step back and determine the benefits and role of generative AI, and where its true value lies.
I believe generative AI is best utilized by Security Operations Centers (SOCs) as an assistive and augmentative technology, rather than replacing human analysts. Success will depend on aligning AI tools with analyst workflows rather than wholly relying on them.
Another thing to be aware of is that the value of generative AI can vary from analyst to analyst depending on their level of experience. Newer analysts are less likely to have the experience to interpret and act on the information the AI provides. Generative AI may therefore be more useful to mid-level analysts, who have the knowledge to judge whether the advice it provides is accurate and should be followed.
One of the biggest challenges of deploying generative AI will remain around data confidentiality and leaks. Issues are likely to occur through poor security practice from the vendor side or poor policy on the customer side. Organizations that don’t steer their employees on how they should and shouldn’t use generative AI are likely to experience data privacy issues. Organizations can still adopt, experiment, and innovate with generative AI, but they must prioritize taking care of their data.
As AI continues to evolve, there are concerns about its potential to generate realistic fake content. How do you anticipate AI contributing to major confidential data risks?
The presence of fake content is already a challenge for organizations, and generative AI is taking this from bad to worse. On one side of the challenge, threat actors are leveraging AI-generated fake content to improve their phishing attempts and vastly boost the readability and believability of scam emails.
The second side of the issue is the potential of generative AI “hallucinating”, or in other words, simply making things up. Though it may be called artificial intelligence, it is essentially regurgitating information that is statistically likely to be correct based on what it has previously seen. Because of this, the information provided by generative AI tools may not always be accurate, whether or not anyone deliberately prompts it to mislead.
In terms of data leaks, if organizations don’t have the right policies, education, or technical controls in place, they put their data at risk. On the provider side, if appropriate controls aren’t in place, or best practices aren’t followed, the data will be at risk within the tool itself.
Human analysis has always been a critical component of cybersecurity. In the age of advanced automation and AI, how do you emphasize the importance of human expertise in detecting and responding to evolving cyber threats? Are there specific skills or qualities that cybersecurity professionals should prioritize to stay ahead of the game?
Generative AI won’t be equally useful to all security analysts, and it certainly won’t take away the need for cybersecurity as a specialism. Machines excel at churning through huge volumes of data and picking out patterns. In this sense, generative AI is great at adding context and meaning to the data, but it doesn’t have the insight a human being possesses.
For example, if my CEO travels to another country for a meeting and tries to log in from an unusual place, the system will raise an alert. I would know that it’s the CEO because I recognize his name. However, if the AI doesn’t have the full context of everybody’s titles and what those titles imply, it might make the hasty decision to lock the account down. A human analyst is likely to be more cautious about locking the CEO out and potentially disrupting business.
AI can be a helpful asset for data analysis, but ultimately any actions need to be governed by a human being. They can look at the data that’s been processed and determine if the outcome is what they expected and if they can automate that response in the future.
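This human-in-the-loop pattern can be sketched in a few lines. The following is a minimal illustration only, with an entirely hypothetical user, baseline, and function names; a real SOC would draw the baseline from historical telemetry rather than a hard-coded table:

```python
from dataclasses import dataclass

@dataclass
class LoginEvent:
    user: str
    country: str

# Hypothetical baseline of countries each user normally logs in from;
# in practice this would be learned from past authentication logs.
USUAL_COUNTRIES = {"ceo@example.com": {"US"}}

def triage(event: LoginEvent) -> str:
    """Flag unusual logins for human review instead of auto-locking."""
    usual = USUAL_COUNTRIES.get(event.user, set())
    if event.country in usual:
        return "allow"
    # An unfamiliar location raises an alert, but a human analyst makes
    # the lock/no-lock call, because the machine lacks business context
    # (e.g. the CEO travelling abroad for a meeting).
    return "escalate_to_analyst"
```

Only once analysts have repeatedly confirmed that a given outcome is what they expected would the response be a candidate for automation.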
Cloud computing has become an integral part of modern business operations. What specific security challenges do you foresee in the realm of cloud computing for 2024?
The biggest cloud computing security challenge facing organizations continues to be visibility. This has been an issue for years with on-premises environments, where organizations have data centers full of assets but struggle to maintain visibility and management of those assets and the data they host.
Now we have cloud environments full of information instead, and the problem has just become more widespread. More organizations are adopting cloud technologies and just assuming they are safe.
While we have the shared responsibility model and cloud providers are generally good at their part, most breaches we see come from the consumer’s side. This is where organizations really need visibility. Organizations need to understand how the cloud is being used by their team, for what purpose it’s being used, who has access to data, and where it is all stored. These will continue to be the big challenges, and they are unlikely to go away anytime soon.
The concept of a “Zero Trust” security model has gained traction in recent years. How do you see this approach evolving in the coming years, and what role can it play in addressing the dynamic nature of cybersecurity threats, especially in a world where remote work is becoming more prevalent?
Zero Trust is already a widely adopted security model. The US has mandated Zero Trust across many, if not all, of its agencies. There are now established standards in place for Zero Trust and it’s significant that governments all over the world are adopting it.
The role that a Zero Trust architecture plays has grown in importance as remote work becomes more prevalent. This is due to the focus on identifying both the user and their device, what data they need to access, and the sensitivity of the data. Using this approach enables organizations to safely grant access to remote workers, without having to worry about excess privileges and compromise of home or other networks outside of the organization’s control.
It is a substantial task to implement Zero Trust successfully. Organizations should build up their Zero Trust approach over time instead of expecting to do it all in one big bang.
Looking back at your previous predictions, have there been any surprises or unexpected developments in the cybersecurity landscape that challenged conventional wisdom? How has the reality of cybersecurity differed from what you might have predicted in the past?
In 2023, I predicted that ransomware attackers would stop encrypting data and instead simply steal it and use it to extort organizations. We have indeed witnessed these tactics over the past year.
For the most part, we’re still discussing the same issues we were talking about 15 to 25 years ago. Phishing has been around for 25 years, ransomware for 35 years, and supply chain attacks for about 40 years. These are still three of the biggest concerns facing organizations today.
The presence of these threats underlines that we haven’t solved the cybersecurity problem, and doing the basics of cybersecurity remains the core tenet of a successful program. By continuing with essentials, such as patching and vulnerability management, as well as backup and monitoring, organizations have a much better chance of defending their data from threat actors.
In terms of surprises, the rate of adoption for generative AI has been one of the latest unexpected developments. Generative AI has managed to capture the imagination of organizations both on the vendor and customer side. However, this has also come with compliance and privacy issues and organizations are still trying to learn how to properly use the technology without risking their sensitive data.
Considering the increasing interconnectedness of devices and systems, how do you see the role of cybersecurity evolving in the broader context of digital transformation? Are there specific aspects of this transformation that organizations should pay extra attention to from a security standpoint?
Security shouldn’t be an afterthought of your digital transformation program, but unfortunately it often is. Cybersecurity professionals are often only told about a new cloud system as it is being deployed, leaving no time to properly shore up defenses. Organizations then have to invest considerable time and money to retrofit security solutions onto their environment.
If organizations instead had their Chief Information Security Officer (CISO) or their team as a partner when planning their digital transformation, they’d save time and budget, as well as mitigate their threat risks.
A considerable part of digital transformation is centered on moving things to the cloud, which highlights the core issue of visibility. The cloud increases the surface area organizations need to maintain visibility of and provides threat actors with multiple access points. To counter this, essentials such as understanding the shared responsibility model are fundamental as well as gaining the needed level of monitoring and visibility into those cloud assets.
In your opinion, what are the most underestimated cybersecurity threats that organizations might not be giving enough attention to?
The human factor is often the most overlooked aspect of cybersecurity. With digital transformation rapidly increasing, there is an obsession with using the latest technologies. However, organizations are failing to take into account whether or not their employees or customers are actually cybersecurity savvy.
In 2023, Statista found that, on average, 60% of CISOs worldwide believed human error was their organization’s biggest cyber vulnerability. Human mistakes can result in considerable risk, so organizations should ensure their employees have a full understanding of cybersecurity basics and know how to spot potential attacks.
Weak authentication methods and failure to back up data and patch systems can lead to extensive reputational damage, financial losses, and even potential legal complications. Constant vigilance in these basic requirements is required to keep organizations secure in the ever-evolving threat landscape.
To conclude, what’s one thing you wish every person knew about cybersecurity?
If you do the basics of cybersecurity well, you can significantly reduce the risk of a damaging data breach. When a threat actor comes to knock on your door and finds you prepared, the chances are they’ll move on to the next target rather than expending the effort on you.
The Center for Internet Security (CIS) Critical Security Controls are a great starting point for organizations and outline the importance of maintaining the basics to build a secure foundation. Implementing two-factor authentication (2FA), keeping up with patching, and knowing what assets you are trying to protect are all crucial steps. It’s because these things aren’t done well and consistently that we’re still seeing the same problems we did 30 years ago.
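One of those basics, time-based 2FA codes, is standardized in RFC 6238 (TOTP) and is small enough to sketch with only the Python standard library. This is an illustrative implementation of the standard algorithm, not production authentication code:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time: int, step: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1)."""
    counter = for_time // step
    msg = struct.pack(">Q", counter)           # 8-byte big-endian time counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                 # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret: bytes, submitted: str, window: int = 1) -> bool:
    """Accept codes from the current step plus/minus `window` steps,
    tolerating modest clock skew between server and authenticator app."""
    now = int(time.time())
    return any(
        hmac.compare_digest(totp(secret, now + drift * 30), submitted)
        for drift in range(-window, window + 1)
    )
```

Against the RFC 6238 test vector (ASCII secret "12345678901234567890" at Unix time 59), this produces the code 287082, matching the published value truncated to six digits.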