Principles of Computer Security
Author: Mark Burgess
Risk analysis and dependency
Fact of the week
Much of bank security relies on “tamper-proof” technologies, either
through physical isolation of systems, or by building systems that
self-destruct if tampered with. Tamper resistance is almost impossible
to achieve in a public arena.
Chapter 20 Bishop: Introduction to Computer Security
Chapter 23 Bishop: Computer Security: Art and Science
Chapters 20,21 in Schneier, Secrets and Lies
Human-computer systems
Last week we talked about possible forms of attack against different kinds of
system. This week, we need to examine a method for analyzing systems, in
order to find their weaknesses and detail our own assumptions about their
security. If we want to talk about security in a more serious way (as more
than a video game), we have to say what we mean by it, in a technical sense.
Saying that “we want security” is not good enough, because it is too vague.
We need to:
• Identify what we are trying to protect.
• Evaluate the main sources of risk and where trust is placed.
• Work out possible counter-measures to attacks.
The kind of system we are most interested in is the human-computer
system, which we define as follows:
A human-computer system is an organized effort involving humans and computers to solve some problem or perform a service.
For instance, if we are running an airport, we could begin by saying something
like this:
• Assets: baggage, human life, aircraft equipment, ticket, money.
• Risks: lost luggage, plane crash, mechanical failure, sabotage, hijack,
robbery.
• Counter-measures: baggage tracking system, pilot redundancy (copilot), maintenance protocols, restriction of unauthorized access to
airport, passenger search (X-ray, metal detector etc), credentials (passport), separation of cash ticket sales from passenger processing.
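It helps to keep such an analysis in a structured form, so that every identified risk is checked against at least one counter-measure. Here is a minimal bookkeeping sketch in Python (not part of the original notes); the entries simply restate the airport example above, and a real analysis would of course be far more detailed.

assets = ["baggage", "human life", "aircraft equipment", "tickets", "money"]

countermeasures = {
    "lost luggage":       ["baggage tracking system"],
    "plane crash":        ["pilot redundancy (copilot)", "maintenance protocols"],
    "mechanical failure": ["maintenance protocols"],
    "sabotage":           ["restriction of unauthorized access to airport"],
    "hijack":             ["passenger search (X-ray, metal detector)",
                           "credentials (passport)"],
    "robbery":            ["separation of cash ticket sales from passenger processing"],
}

# Every identified risk should map to at least one counter-measure.
for risk, measures in countermeasures.items():
    assert measures, "no counter-measure recorded for risk: " + risk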
We can define a secure system as follows:
A secure system is one in which all of the threats have been analyzed and accepted as policy, i.e. where countermeasures are in
place for all of the failure modes.
The problem, of course, is how to know what all of the threats are. This
course can be divided into discussions of two main themes:
• Common threats against information systems.
• Common countermeasures.
One important lesson that we shall learn here is that “computer
security” is not just about computers. It should really be called
“security including the use of computers”. Security is a property
of whole systems (rules and procedures), not of individual parts.
Laws, rules and policy
Any system can be described by:
System = rules + input + output
All of these can fail and thus be security risks. Rules for machines
and rules for people are different only in the sense that humans are less
predictable than machines. Recall that we said, last week, that security is
only possible if systems fail predictably. Part of policy thus lies
in defining what should be allowed; another part lies in defining what should
happen if something disallowed occurs:
• Rules
• Codified responses.
The foundation of security is policy. If we don’t agree on and define what
is valuable and acceptable, we cannot speak of the risk to those assets. If
we don’t have policy, then we don’t care what happens. Policy is something
which social groups decide on – i.e. it’s a case of agreeing with your neighbors.
Policy (sometimes formalized as “law”) is a principle of society. Society
is a system (a set of cooperative rules and procedures). The first countermeasure to the breakdown of this discipline is a deterrent: the threat of
retaliation. “If you do this, we will not like you, and we may punish you!”
Society needs mechanisms, bureaucracy, police forces and sometimes military personnel to enforce its rules, because there are always a few individuals
who do not understand the discipline.
In most cases, especially with computer crime, organizations have few
possibilities for reprimanding those who break policy, except to report them
to law enforcement agencies. Each country has its own national laws, which
override local policy, i.e. local security policy has to obey the law of the
land. This sometimes causes problems for either side. For instance, in some
countries, encryption is forbidden by the government, i.e. citizens do not
have the right to privacy; in others, system administrators are not allowed
to investigate users suspected of having committed a crime, since it would
be a violation of their privacy. These are the opposite ends of the spectrum.
Nowadays, law-enforcement agencies (police forces) take computer crime
more seriously, as computer crime has all the counterparts of major crime,
organized crime, and petty crime. Because the idea of lawful behavior in
a virtual world is still new, computer crime (ignoring local policy rules) is
dominated by petty crime, perpetrated by ignorant or selfish users, who
do not see their behavior as criminal. Recall the principle of communities
from the last lecture.
The principle of communities: What one member of a cooperative community does affects every other member, and vice
versa. Each member of a community therefore has a responsibility to consider the well-being of other members of the community.
This same rule generalizes to any system (=society) of components (=members).
Policy violation
The bases of trust are:
• Predictability
• Reliability
If we believe we know how something will behave, we trust it. Predictability and reliability are closely related to complexity. This gives us a
new principle:
The more complex a system is, the less predictable it is and the
more difficult it is to secure.
Policy rules are the foundation of a human-computer system. We tend
not to question them, so they can be exploited; thus we need some kind of
enforcement (police force) to maintain our faith in our assumptions.
That is one of the main reasons why people can be fooled by criminals.
Example 1: We have become used to using ATM mini-bank terminals for withdrawing money. These are now everywhere, all
different shapes and sizes. We trust these terminals to give us
money when we enter private codes, because they usually do.
One attack used by criminals is to install their own ATM, which
collects PIN codes and card details and then says “An error has
occurred”, so the user gets no money. The criminals have then stolen
the user’s card details and PIN.
This kind of “scam” is common. It aims to exploit your trust. The same
can, in fact, be said about any other kind of crime. Here is another example
of misplaced trust:
Example 2: Airport staff do not trust that passengers will not
carry weapons, so they use metal detectors, because most weapons
are metallic. They trust their metal detectors to find any weapons.
A stone knife could be easily smuggled on board a plane.
Closer to home:
Example 3: A computer user downloads large files of pornographic material, filling up the disk. This violates policy, but
the system manager does not enforce this policy very carefully,
so users can ignore it and this idea spreads (a virus in the policy). Here the trust goes both ways. The system administrator
trusts that most users will not break this rule, and the users trust
that the system administrator will not enforce it.
Motivation
There are many reasons why people will violate policy:
• For greed (resource gain - theft)
• To show off (social gain)
• Bribery or blackmail (of them by others)
• Vandalism (macho nonsense)
• Revenge (personal grievance)
• Sabotage (?)
• Opportunism (instinct - kick him while he’s down, viruses, worms)
• Selfish challenge (climb the mountain because it’s there)
• Open warfare (political differences)
These lead to a payoff for the attacker (using a word from game theory),
i.e. a perceived win. What makes humans interesting is that we perceive
reward or gain in both material and emotional ways. For instance, what is
won by vandalism, or warfare? The answer is that those who commit these
acts feel that they gain from others’ suffering. That is a uniquely human
trait.
Not all security violations are intentional, of course. Lost luggage is
generally caused by human error. Programming errors are always caused by
human error. Human error is thus, either directly or indirectly, responsible
for a very large part of security problems. Here are a few examples of human
errors which can result in problems:
• Forgetfulness
• Misunderstanding/miscommunication
• Misidentification
• Confusion/stress/intoxication
• Ignorance
• Carelessness
• Slowness of response
• Random procedural errors
• Systematic procedural errors
• Inability to deal with complexity
• Inability to cooperate with others
Systems and failure
The American admiral Grace Hopper is reputed to have said: “Life was
simple before World War 2. After that, we had systems.” This hits the nail
on the head for security.
Our increasing use of systems (computer systems, security systems, bureaucratic systems, quality control systems, electrical systems, plumbing
systems) is an embrace of presumed rigor. In other words, systems expect the players to follow rules and exhibit discipline. Without “systems”
we have only efficiency as a gauge of success. With systems, we also need
to have a bureaucratic attention to detail, in order to make the system
work. We may tick off what has been done, and perform quality control.
Thus systems are inherently more vulnerable to failure, because they require
precision (which is something most humans are not good at).
Systems are characterized by components and procedures which fit together to perform a job. Usually the components are designed as modules,
which are analyzed and tested one by one. The analysis of whole systems is
more difficult, and is less well implemented. This means that there are two
kinds of systemic fault:
• Design faults: the system does not meet its specifications.
• Emergent faults (bugs): the system does things which were never
planned or intended.
Such faults can be exploited by attackers, in order to manipulate the
system in undesirable ways. Viruses and parasites are classic examples of
this. A final cause of failure:
• Catastrophes: unexpected, external failure.
An unexpected catastrophe is everyone’s worst nightmare. Fire, earthquakes and bomb blasts can all destroy assets once and for all.
Prevention/correction
Protecting ourselves against threat also involves a limited number of themes:
• Safeguards (shields)
• Access control (selective shields)
• Protocols (specification of and limitation to safe behavior)
• Feedback regulation (continuous assessment)
• Redundancy (parallelism instead of serialism)
Detection and correction:
• Monitoring
• Regulation
We need to apply these to environments that utilize computer systems.
Redundancy - calculating risk
One often hears of systems that have a “single point of failure.” This means
that a system has one point of critical dependency. For example, our Internet
connection to the outside world is a single point on which the functioning
of our communication with the outside world depends. If someone cuts this
line, our communications are dead.
Why don’t we have a backup? Sometimes the reason is a design fault,
and other times it is a calculated risk.
Serial: single point of failure (OR) - the components are chained one after
another, so the whole chain fails if any single component fails.

Parallel: multiple points of failure - redundancy (AND) - the components are
placed in parallel branches, so the system fails only if every branch fails.

You might remember these diagrams from electronics classes: Kirchhoff’s
laws for electric current. We can think of failure rate as being like electrical
resistance: something that stops the flow of work/current.
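To make the analogy concrete, here is a small Python sketch (not part of the original notes) of how failure probabilities combine in the two arrangements, assuming the component failures are independent:

def serial_failure(probs):
    # Serial chain (OR): the system fails if any one component fails.
    p_survive = 1.0
    for p in probs:
        p_survive *= (1.0 - p)
    return 1.0 - p_survive

def parallel_failure(probs):
    # Parallel branches (AND): the system fails only if every branch fails.
    p_fail = 1.0
    for p in probs:
        p_fail *= p
    return p_fail

# Three components, each with a 10% chance of failing:
components = [0.1, 0.1, 0.1]
print(serial_failure(components))    # about 0.271 -- no redundancy, risk grows
print(parallel_failure(components))  # about 0.001 -- redundancy, risk shrinks

With three such components, the serial chain fails about 27% of the time, while the redundant parallel arrangement fails only about 0.1% of the time.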
Fault trees
What is the probability of a security breach or a system failure? Put another
way: how do we evaluate the risks, and figure out the best way to protect
against them? Fault Tree Analysis (FTA) is a systematic method for doing
this. It is a method that is used in critical situations, such as the nuclear
industry and the military. It is a nice way of organizing an overview of
the problem, and a simple way of calculating probabilities for failure. If we
were security consultants, this is how we could impress a customer, with an
in-depth analysis:
• Start at the top, with the result/system we want to protect
• Find the possible ways in which the system is vulnerable.
• Find out how the vulnerabilities combine (AND, OR, XOR etc)
• Continue expanding the tree
• Around the tree, especially at the bottom, are the things we trust, and
the pathways between them are the possible weak links.
By drawing such a tree, we can understand apparently simple problems in
a new light – with actual numbers, not just guesswork. Computer programs
(like fault tree “spreadsheets”) exist to help calculate the probabilities.
Fault trees are made of the following symbols:

Key   Symbol             Combinatoric
(a)   AND gate           P(out) = P(A)P(B)               (independent)
(b)   OR gate            P(out) = P(A) + P(B) - P(A)P(B) (independent)
(c)   XOR gate           P(out) = P(A) + P(B)            (mutually exclusive)
(d)   Incomplete cause   (none)
(e)   Ultimate cause     (none)
These can also be generalized for more than two inputs.
The standard gate symbols give us ways of combining the effects of dependency. The OR gate represents a serial dependency (failure if either one
or the other component fails). The OR gate assumes that events are independent, i.e. the number of possibilities does not change as a result of a
measurement on one of the inputs; the XOR gate describes mutually exclusive (dependent) events, since
a non-zero value on one input assumes a zero value on the rest. The AND
gate requires the failure of parallel branches; it could be either dependent
or independent. For the sake of simplicity, we shall consider only examples
using independent probabilities. This week’s exercises are about using these
gates.
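A rough sketch of these combination rules in Python (the function names are illustrative, not from the notes; two inputs, as in the table above):

def and_gate(p_a, p_b):
    # AND: the output fails only if both inputs fail (independent events).
    return p_a * p_b

def or_gate(p_a, p_b):
    # OR: the output fails if either input fails (independent events).
    return p_a + p_b - p_a * p_b

def xor_gate(p_a, p_b):
    # XOR: the two causes are mutually exclusive, so the probabilities simply add.
    return p_a + p_b

# Two independent causes, each with a 5% chance of occurring:
print(or_gate(0.05, 0.05))   # 0.0975 -- OR pathways make the risk larger
print(and_gate(0.05, 0.05))  # 0.0025 -- AND pathways make the risk smaller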
Immune system example
The OR gate is the most common:
Examination system example
This is not a proper fault tree; it is just a cause tree. How would you fill in
the logic gates?
Combining probabilities
From the properties of the gates, we see that
• In OR gates, probabilities combine to get larger.
• In AND gates, probabilities combine to get smaller.
• XOR gates have no effect on magnitudes.
So if we see many OR pathways, we should be scared. If we see many
AND pathways, we should be pleased. Here is a simple example of how
we work out the total probability of failure, for a simple attack where an
attacker tries the obvious routes of attack: guessing the root password, or
exploiting some known loopholes in services that have not been patched.
We split the tree into two main branches: first try the root password of
the system, OR try to attack any services that might contain bugs.
• The two main branches are “independent” in the probabilistic sense,
because guessing the root password does not change the sample space
for attacking a service and vice versa (it’s not like picking a card from
a deck).
• On the service arm, we split (for convenience) this probability into two
parts and say that hosts are vulnerable if they have a service which
could be exploited AND the hosts have not been patched or configured
to make them invulnerable.
• Note that these two arms of the AND gate are time-dependent. After
a service vulnerability becomes known, the administrator has to try
to patch/reconfigure the system. Attackers therefore have a window
of opportunity.
Since all the events are “independent”, we have:
P(break in) = P(A OR (NOT A AND (B AND C)))
= P(A) + (1-P(A)) x P(B)P(C)
Suppose we have, from experience, that
Chance of guessing root pw             P(A) = 5/1000 = 0.005
Chance of finding a service exploit    P(B) = 50/1000 = 0.05
Chance that hosts are misconfigured    P(C) = 10%     = 0.1

P(T) = 0.005 + 0.995 x 0.05 x 0.1
     = 0.005 + 0.0049
     = 0.01
     = 1%
Notice how, even though the chance of guessing the root password is
small, it becomes an equally likely avenue of attack, because the service route
is cut down by the chance that the host has already been patched. Thus we see
that the chance of a break-in is a competition between an attacker and a defender.
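For completeness, the same calculation as a short Python sketch; the numbers are the illustrative figures assumed above, not measured data:

p_a = 0.005   # P(A): chance of guessing the root password
p_b = 0.05    # P(B): chance of finding a service exploit
p_c = 0.1     # P(C): chance that the host is unpatched/misconfigured

# P(break-in) = P(A) + (1 - P(A)) * P(B) * P(C)
p_break_in = p_a + (1 - p_a) * p_b * p_c
print(p_break_in)   # 0.009975, i.e. roughly 1%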
The problems this week are about taking this idea further.
Game theory
The Theory of Games is a set of methods, mostly worked out in the twentieth
century by the Hungarian-American mathematician John von Neumann, and
later embellished by others. Game Theory is another way of evaluating the
paths through a particular tree, or set of trees. It assumes that we have a
contest between two or more players, each of which has something to win or
lose (e.g. the values we talked about last week, such as money, time, social
standing etc). By setting up a matrix of possibilities, we can find out what
chance one has of winning the contest, or at least maximizing our win.
The simplest kind of game is a two-person game. We construct a “payoff
matrix” for one of the players, like this. All of the possible strategies we
can think of for player 1 are listed down one side of a matrix, and all of
the possible counter-strategies by the other player are listed along the other
side. The elements of the matrix are the “payoff”, or what one can expect
to win if a particular combination of strategy/counter-strategy is used (see
figure 1).
In practice, players do not choose one pure strategy, but they vary over
a number of strategies with a certain probability, represented in figure 2
as a histogram.
Game theory allows us to evaluate which combinations are most likely
to lead to a win. This allows us to analyze an attacker’s chance of success,
and find out ways of minimizing damage.
Figure 1: The payoff matrix
Figure 2: The payoff matrix with histograms
Example
Consider the following example of network-bank security. The bank needs to
identify you with a special key to keep you “secure”. They can use a number
of strategies for this. Similarly an attacker can try to attack in a number of
ways, by various strategies. How can a bank defend its transactions best?
We look for maximum and minimum “payoff” to the different sides.
Let the numbers in this table be the “estimated relative security” of the
different strategies. This is our definition of “payoff”. It is rather primitive,
but it helps to illustrate a formal way of picking the best approach.
ONLINE PAYMENT   Steal card details   Hijack session   Spoof site (trick user)   Trojan horse in browser/server
“Calculator”             6                  6                     1                            1
Certificate              3                  4                     3                            3
Fingerprints             2                  6                     1                            1
Rough and ready justification: A certificate that lives on a PC
is vulnerable to theft by someone who can break into the PC. Many PCs
are easily broken into over the network, but a separate calculator has to be
physically stolen, so it is “more secure” from theft. The problem with the
calculator, on the other hand, is that it authenticates you to the bank, but
it does not authenticate the bank to you, so you are open to spoofing. If
someone could get you to go their site, they could trick you into giving them
data that they could use to steal from your account. Using fingerprints is
somewhere in between. It is less secure from theft, because fingerprints
never change, so they could be stolen somehow; otherwise, they are like a
calculator – a method of one-way authentication.
The numbers tell us that it pays an attacker to try to spoof or use a
Trojan horse, rather than try to steal a key, or hijack a session. For the
defender, things are not so clear-cut.
Now, if we think “min-max” (the attacker’s viewpoint), then we are minimizing
defensive security and maximizing the attacker’s reward. Minimizing
places us in the last two columns, since that is where the security values
are lowest. Now we maximize what an attacker has to gain in these two
columns. The starred entries in the table below show the weakest points, so we
should try to prevent these combinations within a system, since they lead to
the easiest routes of attack.
ONLINE PAYMENT   Steal card details   Hijack session   Spoof site (trick user)   Trojan horse in browser/server
“Calculator”             6                  6                     1*                           1*
Certificate              3                  4                     3                            3
Fingerprints             2                  6                     1*                           1*
Max-min: Now, if we think “max-min”, then we maximize defensive security
first. This places us in the first two “high score” columns. Then we minimize
the attacker’s gain. This is no longer so easy, because the columns are not
comparable. It looks as though the calculator defense strategy is best, but
only against the first two attack strategies. Against the second two, it is
much weaker.
ONLINE PAYMENT   Steal card details   Hijack session   Spoof site (trick user)   Trojan horse in browser/server
“Calculator”             6                  6                     1                            1
Certificate              3                  4                     3                            3
Fingerprints             2                  6                     1                            1
Now we are not quite sure whether the weakness against the last two strategies
outweighs the strength against the first two. It depends on the relative
importance, or likelihood, of the different attack strategies.
Clearly the best possible solution is a mixed strategy, with both calculator and certificate. Probably this is too expensive for a bank.
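As a sketch of the formal min-max / max-min reasoning on this example (in Python, not from the original notes; the scores are the “estimated relative security” values from the table):

defenses = ["Calculator", "Certificate", "Fingerprints"]
attacks  = ["Steal card details", "Hijack session", "Spoof site", "Trojan horse"]

# security[i][j] = estimated relative security of defense i against attack j
security = [
    [6, 6, 1, 1],   # Calculator
    [3, 4, 3, 3],   # Certificate
    [2, 6, 1, 1],   # Fingerprints
]

# Attacker (min-max): for each attack, find the best the defender could do
# against it, then pick the attack for which that best defense is weakest.
best_defense = [max(row[j] for row in security) for j in range(len(attacks))]
print(attacks[best_defense.index(min(best_defense))])    # "Spoof site"

# Defender (max-min): for each defense, find its security against the
# worst-case attack, then pick the defense whose worst case is least bad.
worst_case = [min(row) for row in security]
print(defenses[worst_case.index(max(worst_case))])       # "Certificate"

The attacker’s min-max choice comes out as spoofing (the Trojan horse scores the same), while the formal max-min choice for the defender is the certificate, whose worst case (3) is the least bad of the three. As noted above, this ignores how likely the different attack strategies actually are.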
The trick is to fill in the payoff-expressions, using some kind of fault-tree
analysis, combined with observations about the system and as detailed an
understanding as we can find. We don’t have time to look at a game in detail,
but we can note that one of the predictions of game theory is that a good
way to attack/defend is often to randomize the method of attack/defense.
You might have noticed that the multiple-guess test in last week’s problems
used this method to prevent the most obvious kind of copying attack.
Thought of the week
It has been said that the only difference between competitive
commerce and warfare is politics.