What Does Security Mean?
Human beings understand well what security means in the real world. First
and foremost it means one's physical security from any harm. It further
means security of one's property, so that it does not get stolen or damaged.
In a broader sense, we can enumerate security requirements in the real
world in the following manner:
- One's physical security from any harm (e.g., being hurt, feeling scared)
- Security of one's property from theft, damage, misuse (e.g., what
if someone uses my house to throw water balloons on others) or trespassing
- Verifiable identities of all persons that matter (bank tellers,
police officers, friends)
- Ability to minimize or eliminate unwanted interactions
(e.g., solicitation phone calls, begging, asking for petition signatures)
- Ability to freely move and engage in activities (e.g., visit a
favorite park or restaurant, take a fast road into the city)
Notice that some of these requirements relate to one's safety and
others relate to convenience, i.e., we want both to be safe and to
go about our business with minimal distraction from others.
In computing and networking one can find a similar combination
of security requirements that blend the need for safety with the need
for uninterrupted operation.
Many of you have heard about computer security (e.g., security from
intrusion, viruses, worms, etc.). How does it relate to network
security? Computer security aims to protect a single machine and the
data residing on it. Networking's goal is to enable communication
between any pair of machines, in any scenario. Thus the goal of
network security is to protect this communication and all its
participants. The focus of network security is thus on threats that
require network access to be perpetrated.
Making a parallel with physical security requirements, on the
Internet I may want (this is not a comprehensive list):
- Physical security of my machine (my machine will not be physically
damaged or stolen)
- Security of my data from theft, damage
- My machine will not be misused to harm others (e.g., DDoS, phishing)
- My machine, its programs and any physical devices it may control
will behave as intended
- I will be able to verify identity of remote machines/institutions,
origin of remote data
- I will not waste time nor machine/network resources on unwanted
tasks (spam, phishing, DDoS)
- I will be able to communicate to any chosen server at any time,
barring accidental networking/server failures
Another issue that often arises is whether security means robustness
(e.g., no one can break into my computer), or fault-tolerance (e.g.,
fast detection of intrusions and patching). In the real world, security is
achieved by combining techniques for robustness with techniques for
fault-tolerance. Known and distinct threats should be prevented, while
new and stealthy threats should be quickly detected and handled.
As we discussed before, "security" means many things in many
different contexts. At a high level, one can say that security aims
to protect three main properties of data and systems. Not every
security problem will violate all three properties, and often there
will be variations of the problem that violate different sets of
properties. The three properties are:
- Confidentiality - keeping data, participant identities or systems
accessible only to authorized users.
This is usually achieved through encryption.
- Integrity - making sure/verifying that data has not
undergone improper or unauthorized change. This also includes
verifying the origin of the data. Integrity does not only apply to
data but also to identities and system functionality. Integrity is
often achieved through use of cryptographic primitives, e.g., signatures.
- Availability - keeping some system running and reachable by
its customers, or keeping some data available to authorized
users. This is achieved through a myriad of techniques such as
firewalls, intrusion detection and prevention systems, DoS defenses,
etc. Note that availability also encompasses quality of service. The
system must not only be available but must also provide good service
quality to its users.
Orthogonal aspects to these security properties are the policy and
the security mechanisms. Policy defines what exactly
confidentiality, integrity and availability mean in a given
context. Security mechanisms are the tools that should enforce the
policy. It is often very difficult to ensure that the behavior of
multiple security mechanisms correctly and fully enforces a
policy. Sometimes this is difficult because policies are expressed in
English and sometimes because the behavior of security mechanisms is
complex and they may interact with each other in subtle
ways that are not obvious.
A security mechanism may aim to prevent an attack,
detect an attack, respond to it or recover from
it. Prevention means that it is impossible to perpetrate a given
attack in the presence of a given security mechanism. Detection aims
to quickly and accurately detect the attack, i.e. to reduce the number
of false positives, false negatives and to reduce the
detection delay. A false positive is a case when detection occurs
but there is no attack. A false negative is a case when there is an
attack but detection does not occur. A response to an attack can be to
tighten security mechanisms at the target, lodge a complaint with
someone, start logging the attack, etc.
Recovery aims to remedy attack effects, usually after the fact.
One form of recovery is to
sustain the attack, enabling the system to function correctly in its presence.
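The detection metrics above (false positives, false negatives) can be made concrete with a small sketch; the observation windows and labels below are invented for illustration:

```python
def detection_rates(events):
    """Compute false-positive and false-negative rates from a list of
    (attack_occurred, alarm_raised) pairs.  Illustrative only."""
    fp = sum(1 for attack, alarm in events if alarm and not attack)
    fn = sum(1 for attack, alarm in events if attack and not alarm)
    negatives = sum(1 for attack, _ in events if not attack)  # no attack
    positives = sum(1 for attack, _ in events if attack)      # attack present
    fp_rate = fp / negatives if negatives else 0.0
    fn_rate = fn / positives if positives else 0.0
    return fp_rate, fn_rate

# Four hypothetical observation windows: (attack_occurred, alarm_raised)
events = [(True, True), (True, False), (False, False), (False, True)]
fp_rate, fn_rate = detection_rates(events)
print(fp_rate, fn_rate)  # 0.5 0.5
```

A real detector would also track detection delay, the third metric mentioned above; this sketch only shows the two rate metrics.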
Let us now examine some security threats, just so we get an idea of
the complexity of security problems. As we go let us also try to
"slot" these problems as threats to confidentiality, integrity or
- Breaking into a computer can often be achieved in several ways. One
way is to guess a password that is weak. What
makes a password weak? It is usually based on the user's name, names
of his/her friends or on a dictionary word.
Another way to hack into
a computer is to "exploit a vulnerability" in an application or in the
OS on the computer. A vulnerability is a weakness in the
system, e.g., in its design, implementation or use procedures, that,
when exploited, makes it behave in a way the system's creator did not expect.
Usually such weaknesses involve software that processes
some input received over the network
or from the console, and the bug consists in the creator's failure to
predict all the features that input could have. For example, the input
may be longer than the space allocated by the software's creator to
store it. Or the input may have special characters that, when given to
another program like a DB manager, cause it to behave in unintended
ways. Such behavior could be to crash the application, gain root access to the
machine, slow down the machine, etc. An exploit is the set of steps that
exercises the vulnerability, e.g., input to a buggy program.
Yet another way to break into a computer is using social
engineering - get the user to tell you their password.
For example, I could impersonate someone who should be trusted, e.g.,
an IT staff member. Viruses and worms also break into computers. A
virus is a self-replicating program that requires user action to
activate, such as clicking on an E-mail attachment, downloading an infected file or
inserting an infected USB key, etc. A worm is a
self-replicating program that does not require user action to
activate. It propagates itself over the network, infects any vulnerable
machine it finds and then spreads from it further.
Breaking into a computer violates different combinations of
confidentiality, integrity and availability properties, depending on
how the break in is performed and for what purpose. Breaking into
someone's property violates confidentiality since this is unauthorized
access. If the attack modifies or deletes some data or program this
violates integrity of the system and data, and may also violate
availability (data is not there when needed, program does not run).
- A computer could suffer a denial-of-service (DoS) attack,
which aims to disrupt a service by either exploiting a vulnerability
or by sending a
lot of bogus messages to the target. This is an availability violation.
A computer could also be infected by a virus or a worm. This
definitely violates confidentiality (unauthorized access) but may also
affect integrity and availability if data and programs are modified/deleted.
- User information could be stolen from their computer or from their
communication channel (i.e., network). One could steal it from a user's
computer by breaking in first. One could steal it from a user's network by
eavesdropping on the communications. This is easy to do for routers on
the path between the user and remote destinations, or for hosts that
share a broadcast medium with the user (WiFi, Ethernet). One possible
solution to this problem is to encrypt the data. But there are many
cryptographic protocols and configurations that can be easily
broken. Further, even if someone cannot read an entire message they
may be able to infer partial information that a user wants to keep
private (e.g., who talks to whom and how often). Anonymization
techniques (routing traffic through intermediaries) can help hide these
communication patterns. But anonymization and encryption are often at
odds with other security approaches, e.g. intrusion
detection. Information stealing violates confidentiality.
- One could use somebody's machine for nefarious purposes, i.e., to
attack or harass others. Examples of such threats are
denial-of-service attacks (one's computer is used to bombard the
victim with traffic), worms and Email viruses (one's computer is
infected and then spreads infection to others), spam and phishing
(one's computer is infected and used to send spam or phishing emails
to others). Specifically, a DoS attack can misuse someone's computer
in two ways. First, it can be used to create a lot of bogus traffic
going to the attack's target. Second, if the computer is a public
server (e.g. a DNS server) it can be used to reflect attack
traffic to the target. The attacker sends a service request, e.g., a
DNS request to the server, putting the target's IP address in the
source field in the IP header. The server then replies to the
target. In this case, the server is just doing its job - it is not
compromised. Misuse of someone's resources is an integrity violation.
- One can damage a user's computer or data in several ways. First,
if they break into the computer and gain root privileges they can
modify data at will, or erase it. It is thus important to be able to
detect tampering (cryptography helps here) and to have ways to recover
data (e.g., from a remote backup). Second, denial-of-service attacks
sometimes do not just overwhelm computers but can also inflict
permanent damage, e.g., to their disk drives. Damaging a computer or
data is an integrity violation.
- One can take up a user's resources with irrelevant messages. This is
what happens during DoS attacks (usually CPU or network resource),
worm and virus spread (network resource), or with unsolicited email
(takes time to read it, even if the user does not act on it). This
is an availability violation.
- Lately, many physical devices are becoming networked so they can
be remotely controlled (smart fridge, smart heating, smart
utilities, networked medical devices). This increases the risk of
cyberattacks, as the attacker can now affect the physical world with
their actions and inflict not just financial but also physical harm.
This threat violates integrity and availability of services and data.
- One could impersonate a user or a remote server they are trying to
communicate with. This can be done at network layer (IP spoofing),
application layer (cookie or password stealing) or even link layer
(ARP spoofing). The risk from impersonation ranges from revealing
private data to an unintended recipient to performing activities, such as
money transfers, that only a trusted person/computer can
initiate. This threat violates integrity and may violate confidentiality
if secret data is revealed to strangers as a consequence.
- One could prevent communication between a user and a remote server
in many ways. For example, a DoS attack on the server or on any router
on the path from the user to the server, or on the reverse path, would
interrupt communication. So would a DoS attack on a DNS server that is
authoritative for resolving the remote server's name to an IP address, or DNS
hijacking or IP prefix hijacking. This threat violates availability.
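The weak-password entry point in the first threat above can be illustrated with a toy checker; the word list, the user information, and the flagging rules are all assumptions made for this sketch:

```python
def is_weak(password, user_info, dictionary):
    """Flag passwords based on the user's name, friends' names,
    or a dictionary word -- the classic weak-password patterns."""
    candidate = password.lower()
    # Based on personal information an attacker could easily learn?
    if any(info.lower() in candidate for info in user_info):
        return True
    # A plain dictionary word, with or without trailing digits?
    stripped = candidate.rstrip("0123456789")
    return candidate in dictionary or stripped in dictionary

dictionary = {"password", "dragon", "sunshine"}    # tiny stand-in word list
user_info = ["alice", "bob"]                       # user's and friend's names
print(is_weak("alice99", user_info, dictionary))   # True: based on a name
print(is_weak("dragon7", user_info, dictionary))   # True: dictionary word
print(is_weak("k9#vTq2!x", user_info, dictionary)) # False
```

A guessing attack only has to enumerate these small candidate sets, which is why such passwords fall far faster than random ones.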
We now list some security mechanisms that are used to enforce
confidentiality, integrity and availability properties. Again, this is
not an exhaustive list. These mechanisms are selected just to
illustrate the diversity of security solutions.
- Encryption is often used to enforce confidentiality.
- Checksums are used to enforce data/code integrity. If a checksum
is kept in a trusted place it can be recalculated later and compared
to that trusted copy to detect tampering.
- Key management is used to enable confidentiality and integrity
protections, that often use cryptography. Key management ensures that
keys needed for communication are securely distributed to participants.
- Authentication verifies someone's identity (a person, a machine,
an institution, a piece of code), and enforces integrity.
- Authorization ensures that only authorized users can access
data/systems. It relies on authentication to learn about the user's
identity, and it protects confidentiality.
- Accounting keeps track of user and program actions, and protects
availability of systems.
- Firewalls encode rules about who can access what resources and in
which context. They protect confidentiality and availability.
- VPNs ensure a secure (encrypted) channel over a public network, and
protect confidentiality and integrity of the communication.
- Intrusion Detection examines traffic and system actions to detect
unauthorized access. It protects confidentiality.
- Intrusion Response combines intrusion detection with some
defensive action such as modifying firewall rules, logging some user's
actions, transitioning a system into a restricted access mode etc. It
protects confidentiality, integrity and/or availability depending on
the defensive action.
- Virus scanners aim to detect viruses in programs and data. They
protect confidentiality and integrity.
- Policy managers help people construct policies that do not
conflict with each other, and understand the impact of combining
policy rules. They do not directly enforce CIA rules, but help in
defining what they mean.
- Trusted hardware aims to provide a "root of trust" and a safe haven
for some crypto storage and calculations. Starting from this one can
verify if the software or data residing on that hardware have been
compromised. It ensures integrity.
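The checksum mechanism in the list above can be sketched with Python's standard hashlib; the data and the "trusted place" are simulated here, and a real deployment would keep the reference digest offline or on trusted hardware:

```python
import hashlib

def checksum(data: bytes) -> str:
    # SHA-256 digest of the data, to be stored in a trusted place
    return hashlib.sha256(data).hexdigest()

original = b"transfer $100 to account 12345"
trusted = checksum(original)          # reference copy, stored separately

# Later: recompute and compare to the trusted copy to detect tampering
tampered = b"transfer $900 to account 99999"
print(checksum(original) == trusted)  # True:  data unchanged
print(checksum(tampered) == trusted)  # False: tampering detected
```

Note that a plain hash only helps if the reference copy itself cannot be modified by the attacker; keyed variants (e.g., HMAC) bind the digest to a secret key instead.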
We now discuss the challenges for people working in, and doing
research on, security.
- For some threats a target's
security depends on the security of others. In other words, sometimes
there is little a target can do to fully handle a threat. An example
for this are distributed denial-of-service (DDoS) attacks - a common variant
of DoS attacks that involves many (hundreds or thousands of) hosts
sending excessive traffic to a server. A server can be perfectly
secure and bug-free and still get overwhelmed by traffic. It could
replicate its resources but it is hard to do so on demand and it's a
waste to do so when regular traffic to this server may be many times
smaller than DDoS traffic. Such situations, where many entities share a
common resource, and each can benefit from overusing it but overuse by
many leads to depletion and loss to all entities, are called the Tragedy
of the Commons. An underlying issue here is that networks have
long been governed by market trends and profits, and are highly
decentralized. Say there is a solution to some problem that has a
reasonable, non-negligible cost and must be deployed ubiquitously. How
could one enforce this on the Internet? Today this is impossible
because there is no single body controlling operation of the entire
Internet. In practice something can be done through business
relationships, e.g., influencing the largest networks and letting them
handle their customers and peers.
- There are a myriad of requirements that a security
solution must meet to be practical to deploy (or a good candidate
for deployment). It must solve the
problem it targets to a great extent (e.g., filter 90%, not 30%, of all
spam), it must be able to handle future variations of the problem
(e.g., when attackers modify their behavior to evade it), it must be
relatively cheap (CPU and storage-wise), it must have economic
incentive for the hosts/networks deploying it (e.g., it makes their
life better as opposed to helping others), it must require few
deployment points (proposing an Internet redesign is interesting but not
practical) and these had better be non-specific (e.g., any 10% of
hosts is better than specific, selected 10% of hosts). Not every
solution meets all these criteria but most do.
- In security one fights a live, intelligent enemy, which is
different from many other areas of computer science and
engineering. This makes it unlikely that any problem becomes
completely solved (threats evolve as well as defenses) and forces
researchers to play a double role when developing and testing defenses
(as developers and as potential attackers).
- Attacks evolve. What is a hot topic today may not be one a few years
from now, so by the time your defense is developed it may already be
obsolete.
- There is a lack of publicly accessible data about attacks. This is a
big problem that really hinders security research as it is impossible
to ascertain what threats are typical and likely, what all the variants
look like, etc. Similarly, there is a lack of publicly accessible data
about legitimate traffic, legitimate user access patterns, etc. This
makes it very difficult to ensure that a proposed security solution
will not harm legitimate users/traffic. There are many ways around
these obstacles but at best these are just band-aids on a big
wound. Recently, many organizations have become willing to share
their data if one signs an NDA and agrees to access the data under
specific conditions (e.g., while physically present at the company's
premises). A notable effort to bring security data closer to
researchers is the PREDICT project, funded by the
Department of Homeland Security.
- Since there are many security problems, solving each requires some
success metrics. This is easy for some problems (e.g., number of
attacks discovered for intrusions) but hard for others (e.g., how does
one measure success in defending against a DoS attack?). Consequently
researchers use different metrics and evaluation approaches, which
makes it hard to compare their solutions.
- Some security problems require a lot of resources and details to
be reproduced realistically in simulation and emulation, e.g., worm
spread, botnet organization, DDoS attacks. This
translates into a large time cost for developers of solutions to such
problems and encourages naive evaluation approaches that may lead to
wrong conclusions.
Any practical security solution must balance the cost of using it with the
benefits expected by the users. These benefits depend on risk
analysis in each particular case. The risk analysis determines how
much would be lost if an asset is attacked and the probability of an
attack. Such probability depends very closely on the environment where
the asset resides, which may change. Legislation may also play a role
in risk analysis and management, i.e., it may not be greatly
beneficial to a company to implement some security policy but they may
be required to do so by law.
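The risk analysis described above is, in its simplest form, an expected-loss computation weighed against the cost of a defense; the dollar amounts and attack frequencies below are invented for illustration:

```python
def annual_risk(loss_per_incident, incidents_per_year):
    """Annualized loss expectancy: loss per incident x expected frequency."""
    return loss_per_incident * incidents_per_year

# Hypothetical numbers: a breach costs $200,000 and is expected
# once every 4 years (0.25 incidents per year).
ale = annual_risk(200_000, 0.25)           # $50,000 expected loss per year
# A $20,000/year defense that halves the attack frequency saves
# $25,000/year in expected loss, so it pays for itself.
ale_defended = annual_risk(200_000, 0.125)
print(ale, ale - ale_defended)             # 50000.0 25000.0
```

The probability term is the environment-dependent part the text points out: the same asset in a different setting may warrant a completely different spending decision.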
Any security solution further makes some assumptions about the attack
and use model. For example, a solution may assume that confidentiality
is achieved through cryptography under the assumption that the key
length is such that it would take years to discover it through
brute-force (trying out all possible keys). But this may not be
true 10 years from now when computers become much faster.
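The brute-force assumption above comes down to simple arithmetic over the key space; the attacker's guessing rate below is an assumption chosen for the sketch:

```python
def years_to_brute_force(key_bits, keys_per_second):
    """Expected time to search half the key space, in years."""
    keyspace = 2 ** key_bits
    seconds = (keyspace / 2) / keys_per_second
    return seconds / (365 * 24 * 3600)

# Assume an attacker testing 10^12 keys per second.
rate = 10**12
print(years_to_brute_force(56, rate) < 1)     # True: a 56-bit key falls fast
print(years_to_brute_force(128, rate) > 1e18) # True: 128-bit is out of reach
```

Each extra key bit doubles the search time, which is why the defender's margin against future hardware is set by key length rather than by any fixed time horizon.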
Effectiveness of security solutions depends not only on their design
and implementation but also on how well they align with human needs
and abilities. For example, forcing users to change their password
each day would be very secure, but it would be impossible for human
users to follow this policy. Humans are often the weakest link in a
security system, bypassing policies and making it easy for social
engineering attacks to succeed.
Who Are the Attackers?
Who are the people that attack computers and networks? A long time ago
these were teenage hackers who attacked for bragging rights. While
attacks were disruptive, there was no real malice and many times no large
financial loss. Today, most attacks are perpetrated by organized
criminals. There is a very active underground economy where people
trade in stolen data, compromised machines and malicious code. Attacks
occur mostly for financial reasons - stolen data can be used for
financial gain, denial of service can be used for extortion, spam can
lure people into buying a shady product, ... Some attacks happen for personal or
political reasons. This shift in the attacker mix from hackers to
organized criminals is significant because it means that attackers are
much more motivated than before and capable of more sophisticated
attacks.
It is worth noting that the risk to the attacker from many attacks is
very small. Attackers often use compromised machines that may be in
some remote part of the world, far from the target. Most attacks do
not last long or do not generate a strong enough signal to be detected
for most of their lifetime (e.g., intrusions). Even if one detects the
attack and the end-machine involved, there may be no traces of the
attacker left there, or the machine may be in a foreign country that does
not cooperate with investigation.