The impossibility of “getting ahead” in cybersecurity
By Blancco | Best Security Solution of the Year 2023
As security professionals, we often talk about staying ahead of the attackers, gaining some time advantage or closing the gap, so that we are better prepared for emerging threats. Facing a highly dynamic cybersecurity landscape, we strive to “future-proof” ourselves by anticipating the bad guys’ next move. The implication is that if we did this well enough, and at scale, we could gain early awareness of attackers’ plans, disrupt them before execution, and checkmate them before they succeed.
That’s impossible to do with any consistency or at any scale. Here’s why:
We are busy designing, building, operating, and maintaining the information and control systems that run our organizations and communities. The bad guys are only looking to steal from and disrupt those systems. This is not a contest between two competing IT environments, each side bearing the burden of ownership and management. It is all about the one critically important environment that matters: ours. We have to manage a massively complex ecosystem of technologies; they only have to focus on getting into it.
Cyberattacks are about exploiting weaknesses in IT. Most of the research, development, and deployment of information technology is done by those who are using it for a variety of operations, from enabling internal IT (an organization’s central nervous system), to orchestrating telecommunications and the internet (society’s central nervous system), to delivering healthcare, financial services, retail experiences, and more. The only R&D burden for attackers is repurposing technology to exploit existing systems. This doesn’t truly qualify as an “arms race,” as cybersecurity is sometimes labeled.
The odds are overwhelmingly in the attacker’s favor. We have to get it right 100% of the time, while they only have to get it right once. Even if we could, in theory, “get ahead” of attackers, it would remain nearly impossible to sustain.
That’s not to say that watching the threat landscape is a waste of time. It still provides important indicators and warnings about imminent attacks, gives us a sense of where the attackers are investing their efforts, and helps the broader “us” (including law enforcement) to go after them, driving unwanted pressure as a means of deterrence and punishment. But that is only a stopgap measure.
Gaining an appreciation for new threats usually happens abruptly and unpleasantly when a successful attack has taken place. But cognitively speaking, it is just organizations catching up to the view of themselves that attackers have already developed and exploited. Therefore, threat modeling, while important, should be recognized for what it is: a never-ending game of catching up to the bad guys.
Because of this, cyber defense will never really be about getting ahead. But it can, and should, be about building future resilience.
Building “future resilience”
What security professionals can do is to focus on the technology environment that matters (their own) and build resilience against potential erosion of function, control, security, and privacy. This involves paying more attention to how current systems are built, considering how they might degrade or become compromised, challenging our own assumptions, and proactively bolstering resilience in anticipation.
This is increasingly important as the pace of change, particularly in IT, is accelerating. It will only get harder to try to get ahead by anticipating risks, engaging in threat modeling, and hunting for malicious actors. Building “secure-by-design” systems is a critical component of the future cyber frontier.
In practice, this means identifying the technology platforms and tools that we have built our information and control systems upon, examining the assumptions we have made about them, and – as far as possible – looking for ways to anticipate their compromise. This does involve some scenario planning, but from the point of view of determining where these technologies will eventually fail us, and how that might happen.
Let’s take a look at a few foundational technologies to see what future resilience-building might look like.
Artificial Intelligence
Artificial Intelligence (AI) is undoubtedly the most talked-about evolving technology today. Enterprises have been experimenting with AI for years, but Generative AI (GenAI) became mainstream in 2023. OpenAI recently unveiled GPT-4o, showcasing the technology’s ability to interact at an almost human level. ChatGPT’s sophistication is growing at an astounding rate, and while AI is rightly being embraced at every level of a business to improve operational efficiency and productivity, concerns over its appropriate use can’t be overstated.
In terms of future resilience-building, now is the time to take a look at how GenAI could go wrong, and take steps to strengthen its design, deployment, and use.
Data protection and privacy are important to this effort. If history is any indication, it will take time for legislators and regulators to develop a working knowledge about AI, understand their options, and develop a sufficient consensus to enact a law. Therefore, understanding the data protection risks associated with AI and GenAI applications is a first port of call for bolstering an organization’s security posture, and must be undertaken long before regulatory frameworks are established.
For example, inappropriate input of sensitive or protected data into GenAI tools must be mitigated through internal policies to prevent the exposure of corporate or personal information. It’s also possible for GenAI to uncover sensitive or protected data by deriving real information from existing data sources, such as personally identifiable information (PII) like home addresses, health conditions, or personal relationships. What’s more, GenAI is capable of creating new data which, once used by a person or organization, becomes sensitive or protected due to its nature and use (e.g., GenAI-assisted intellectual property or business plans).
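The first of these risks, sensitive data leaving the organization in prompts, can be enforced in code as well as in policy. Below is a minimal, hypothetical sketch of a prompt-scrubbing filter; the regex patterns and function names are illustrative assumptions only, and a production control would need far broader coverage (names, addresses, account numbers), typically via a dedicated data-loss-prevention or NER-based scanner.

```python
import re

# Illustrative patterns only -- real deployments need much broader
# coverage and would typically use a DLP product or NER model.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace recognizable PII with typed placeholders before the
    prompt leaves the organization's boundary."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Contact jane.doe@example.com or 555-867-5309, SSN 123-45-6789."))
```

Such a filter would sit between employees and any external GenAI endpoint, so that placeholders, not raw PII, are what cross the corporate boundary.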
The Large Language Models (LLMs) that underpin GenAI platforms, along with the data sets they are trained on, are where focused attention must be paid. How could these models be compromised, eroded, or co-opted for malicious purposes? How could foundational data drive outcomes which, in turn, undermine the security of systems that rely on them? Data integrity, accuracy, cohesion, and relevance are important to GenAI’s effectiveness; they should also be carefully considered for their impact on system-level resilience.
We know that it’s not just internal use of GenAI that represents a threat. Hackers are employing the technology to craft new, or enhance existing, attack vectors, including spear phishing. New risks posed by GenAI-enabled attacks will make it even harder to remain compliant with security policies. Organizations will need to carefully consider the use of GenAI by their employees and implement specific controls on its use.
The good news is that GenAI can also be used to our advantage. It can enhance data security through risk modeling, penetration testing, human behavior analytics, improving the signal-to-noise ratio of monitoring systems, and more. Organizations should consider how their cybersecurity vendors use GenAI to drive continuous performance improvement, while staying aware of the risks it can pose.
In short, GenAI is a multi-faceted technology which must be considered holistically at the outset to build resilience against future erosion and exploitation.
Encryption
Another important technology is encryption, which has been a foundational data security control for decades. It will remain so for the foreseeable future. The ability to scramble data, such that decrypting it is very difficult or nearly impossible, is a useful way to prevent loss or leakage as that data is stored and transmitted.
But with broad use and reliance comes a potentially exploitable dependence. This is especially concerning when advances in quantum computing are considered. Although it will be a while before quantum computers are commercially available and have the number of qubits necessary to crack traditional encryption methods, it is widely accepted that there are critical security considerations that must be addressed today. With “Q-Day” on the horizon (the day quantum computers are able to crack current encryption schemes), security, privacy, and compliance professionals must plan accordingly. NIST’s work to publish Post-Quantum Cryptography (PQC) standards is very important and will extend the shelf life of encryption in many environments.
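One practical way to prepare for PQC migration is “crypto-agility”: designing systems so that the algorithm in use is a configuration choice rather than hard-coded. The sketch below illustrates the pattern with hash functions from Python’s standard library standing in for any cryptographic primitive; the registry structure and names are assumptions for illustration, not a real API.

```python
import hashlib
from typing import Callable

# Crypto-agility sketch: primitives are looked up by name, so moving
# to a new algorithm (e.g., a post-quantum scheme once libraries
# implement NIST's PQC standards) is a configuration change, not a
# code rewrite. Hash functions stand in for any primitive here.
REGISTRY: dict[str, Callable[[bytes], bytes]] = {
    "sha256": lambda data: hashlib.sha256(data).digest(),
    "sha3_256": lambda data: hashlib.sha3_256(data).digest(),
}

CURRENT_ALGORITHM = "sha256"  # single point of change during migration

def fingerprint(data: bytes) -> bytes:
    """Compute a digest using whichever algorithm is configured."""
    return REGISTRY[CURRENT_ALGORITHM](data)

print(fingerprint(b"hello").hex())
```

The design choice is that callers never name an algorithm directly, so a future migration touches one configuration value rather than every call site.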
However, even if such a day never arrives, encryption is not an infallible control. Risks associated with key management (improper or insecure storage of keys, lack of regular rotation, etc.), privileged account management, and human error are already becoming more apparent. Take the recent hacking of a Windows machine protected by BitLocker and a TPM (Trusted Platform Module) as an example.
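The key-management risks above (insecure storage, missing rotation) are addressed in practice by HSMs and managed key services, but the rotation logic itself is simple to illustrate. The toy in-memory store below is a hypothetical sketch; the class and method names are invented for illustration, not a real key-management API.

```python
import secrets
import time

class RotatingKeyStore:
    """Toy in-memory key store illustrating scheduled rotation.
    Real deployments keep keys in an HSM or managed KMS; this sketch
    only shows the age-based rotation check."""

    def __init__(self, rotation_period_s: float):
        self.rotation_period_s = rotation_period_s
        self._rotate()

    def _rotate(self) -> None:
        self.key = secrets.token_bytes(32)   # fresh 256-bit key
        self.created = time.monotonic()

    def current_key(self) -> bytes:
        # Rotate lazily once the key exceeds its allowed age.
        if time.monotonic() - self.created > self.rotation_period_s:
            self._rotate()
        return self.key
```

Regular rotation limits the blast radius of a leaked key: anything encrypted after the next rotation is protected by material the attacker never saw.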
The owners of sensitive data, as well as those who process, transmit, or store such data, must build resilience into their systems and processes in recognition of the fact that, sooner or later, existing controls will erode or be exploited. Mitigations using existing methods include enhanced data governance, segmentation, access controls, and data sanitization when information is no longer needed. Sanitization is one of the few final and irreversible controls: if the data is gone, it can’t be stolen.
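At its simplest, sanitization of a single file can be sketched as overwriting its contents before deletion. The function below is a deliberately naive illustration with an invented name; on SSDs, wear-leveling, snapshots, and backups mean real sanitization requires device-level commands or certified erasure tooling.

```python
import os
import secrets

def overwrite_and_delete(path: str, passes: int = 1) -> None:
    """Naive single-file sanitization sketch: overwrite the file's
    contents with random bytes, sync to disk, then unlink it.
    Illustrative only -- SSD wear-leveling, filesystem copies, and
    backups defeat this in practice."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))
            f.flush()
            os.fsync(f.fileno())  # force the overwrite to disk
    os.remove(path)
```

The point of the sketch is the ordering: destroy the content first, then the name, so that a simple undelete recovers only random bytes.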
A More Secure Cyber Frontier
Just two important technologies – one new (GenAI) and one old (encryption) – illustrate the importance of building resilience into systems. We will never be able to anticipate and outmaneuver all attackers at scale. While we are busy building, they are watching us build and looking for weaknesses. If we can challenge our own assumptions, consider the eventual erosion or exploitation of foundational technologies, and stress-test functionality and security (even as theoretical exercises), we are more likely to deploy systems that are resilient to future, as-yet-unseen threats. If we do this successfully, we may not “get ahead” relative to attackers, but we will already “be ahead” in mitigating novel risks that neither attackers nor defenders have yet identified.