Recently, I’ve heard some bits and pieces about how Information
Security (InfoSec) can be “threat-centric” or “vulnerability-centric”.
This struck me as funny for a number of reasons, mainly because it showed a basic
bias toward what InfoSec *is*. To me, InfoSec is too complex to be
described as “threat-centric” or “vulnerability-centric”, and yet still
simple enough to be described at a high level in a few paragraphs of a
blog post. So I thought I’d write a “primer” post on what InfoSec is, to
create a reference point.
First, InfoSec is a hypothetical construct. It is something that we can all talk about, but it’s not directly
observable, and therefore not measurable the way, say, speed can be
measured in km/hr. “Directly” is to be stressed there, because there are
many hypothetical constructs of subjective value for which we do create
measurements and measurement scales, in order to create a state of
(high) intersubjectivity
between observers (I don’t love that Wikipedia definition; I use it to
mean that you and I can understand roughly the same thing in the same
way).
Because it’s a hypothetical construct, what is “secure enough” is
also subjective to the observer: a subjective assessment that is
then immediately, almost subconsciously, compared in the owner’s mind
to their relative risk tolerance. This presents many challenges in
managing a security program, not the least of which is establishing that
high degree of intersubjectivity mentioned above.
Second, security is not an engineering discipline, per se. Our
industry treats it as such because most of us come from that background,
and because the easiest thing to do to try to become “more secure” is
buy a new engineering solution (security product marketing). But the
bankruptcy of this way of thinking is present in both our budgets and
our standards. A security management approach focused solely on
engineering fails primarily because of the “intelligent” or adaptable
attacker. For example, if security were pure engineering, it would be
like building a bridge or getting an airplane in the air. In these
cases, the forces that are applied to the infrastructure do not adapt or
change tactics to cause failure. At worst, when engineering against
nature, we merely have a difficult time adapting to forces that went
unforeseen due to a combination of factors.
But InfoSec has to deal with the behaviors of attackers. Their
sentience includes creativity and adaptability. The wind does not act
to deceive. Gravity and rust do not go “low and slow” to evade
detection. Rain does not customize its raindrops to bypass umbrellas.
But sentient attackers do change to evade defenses and reach their
goal.
And because InfoSec is not solely a “computer/software engineering”
problem, it requires an understanding of both technology and
non-technology fields. Yes, this includes software engineering,
hardware engineering, and network engineering – but it also means
concepts like management science and behavioral analytics (among many
others) should have their place in understanding all the phenomena that
create a state of “secure”.
For example, the outcome of having more than a “vulnerability-centric”
view of InfoSec (from above) is that “secure” would reasonably be measured
by understanding both the force that the
attackers can apply, and our ability to resist that force (1). In this
way, “threat-centric” security (the study of the ability to apply force) is
useless without “vulnerability-centric” security (the study of the ability
to resist), and vice versa. It’s like trying to measure “distance-centric” speed
without relating it to “time-centric” speed: equally
useless.
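To make that analogy a bit more concrete, here is a minimal sketch of how the two views might be related in a single measurement. It is my own illustration, not anything prescribed in this post: the function name `estimate_vulnerability`, the 0–100 “force” scale, and the normal distributions and their parameters are all assumptions made purely for demonstration. It simply estimates, by simulation, how often the force an attacker can apply exceeds the force we can resist.

```python
import random

def estimate_vulnerability(trials=100_000, seed=42):
    """Estimate P(attacker force > our resistance) by simple simulation.

    The distributions below are illustrative assumptions, not real data:
    both attacker capability and resistive strength are scored 0-100.
    """
    rng = random.Random(seed)
    breaches = 0
    for _ in range(trials):
        # "Threat-centric" view: how much force can the attacker apply?
        attacker_force = rng.gauss(mu=55, sigma=15)
        # "Vulnerability-centric" view: how much force can we resist?
        our_resistance = rng.gauss(mu=60, sigma=10)
        if attacker_force > our_resistance:
            breaches += 1
    return breaches / trials

if __name__ == "__main__":
    p = estimate_vulnerability()
    print(f"Estimated probability attacker force exceeds resistance: {p:.2%}")
```

Neither column of numbers means much on its own; like distance and time, it is only the relationship between them that yields a speed-like measure of “secure”.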
Finally, InfoSec is a subset of Information Risk Management (IRM).
IRM takes what we know about “secure” and adds concepts like probable
impacts and resource allocation strategies. This can be confusing to
many because of the many definitions of the word “risk” in the English
language, but that’s a post for a different day.
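Continuing the sketch above in the same hedged spirit, IRM might then combine that kind of probability with an assumed impact distribution to reason about expected loss and where to spend. Again, the function name, the lognormal dollar figures, and the roughly $60k median impact are my own illustrative assumptions, not a model this post prescribes.

```python
import random

def expected_annual_loss(p_breach, trials=100_000, seed=7):
    """Rough expected annual loss: breach probability times simulated average impact.

    The impact distribution (lognormal dollar figures, ~$60k median) is an
    assumption made purely for illustration.
    """
    rng = random.Random(seed)
    avg_impact = sum(rng.lognormvariate(mu=11, sigma=1.0) for _ in range(trials)) / trials
    return p_breach * avg_impact

if __name__ == "__main__":
    # e.g. feed in the probability estimated by the earlier sketch
    print(f"Illustrative expected annual loss: ${expected_annual_loss(0.35):,.0f}")
```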
THE NEW SCHOOL APPROACH – THE MONEYBALL-ING OF INFORMATION SECURITY?
Moneyball
is a fantastic book about how new approaches to measuring and modeling
the performance of baseball players created a market advantage for
the baseball teams that were better able to use the data they had at
hand. In a sense, the New School of InfoSec seeks to foster the
collection of data and the development of new and better models for
managing security.
But that simplistic analogy belies other important concepts. A call
for the application of the scientific method, the recognition that our
standards are really just hypotheses about “how to secure”, the
requirement that claims of success be backed up with data and not just a
logical argument or isolated anecdotes, data sharing, transparency –
these are all fundamental premises, needs even, of the New School of
Information Security.
Because Security is a hypothetical construct, one that requires a
great deal of intersubjectivity and a broad array of applicable
knowledge to understand, the evidence of history suggests that a New
School approach – a scientific approach – is the most efficient way of making progress.
Alex Hutton is a contributor to the New School of Information Security blog. This essay originally appeared there.