Agent-based Modeling of Counterinsurgency Operations

Jason Martinez (1) and Ben Fitzpatrick (1, 2)

(1) Tempest Technologies
8939 S. Sepulveda Blvd., Suite 506, Los Angeles, CA 90045 USA
[email protected]

(2) Loyola Marymount University
UH 2700, 1 LMU Drive, Los Angeles, CA 90045 USA
[email protected]; [email protected]
Abstract

We construct a computer model that allows us to simulate the effect of counterinsurgency operations on a population of agents. We build a society of agents who are interconnected in an established social network. Each agent in this network engages in political discourse with other agents over the legitimacy of the existing government. Agents may decide to support an insurgency, join an insurgency, side with the existing government, or remain neutral. Using this model we explore the relative importance of social network structure, influence effectiveness, and combat operation effectiveness in minimizing insurgent strength.

Introduction

The complex nature of counterinsurgency (COIN) warfare today requires a much greater understanding of the factors that increase our ability to "win the hearts and minds" of the target population. Insurgent warfare today is not a traditional war in which pitched armies conduct frontal assaults or in which massive air strikes precede ground invasions. The success of COIN operations depends intimately on gaining the support of the population as well as identifying and eliminating insurgents in the field (Petraeus, et al., 2007, 1-161; Van Der Kloet, 2006). Mathematical models and computational simulations of combat have long been used to support military planning (see, e.g., Taylor, 1983; Epstein, 1985). In this paper, we describe an effort to model COIN operations that includes both combat and influence operations.

While the primary goal of the counterinsurgent is to support and reinforce the legitimacy of the existing government of the host nation, an insurgency works to prevent that from happening. In addition to direct adversarial combat, both the insurgent and the counterinsurgent are engaged in "marketing" campaigns in order to "sell" their ideas to the population. In this view, COIN operations take place not only on the battlefield but also in the minds and perceptions of the much larger civilian population. For this reason, COIN operations are intimately tied to protecting the reputation of the state and demonstrating good will toward the people.

We attempt here, with as simple a modeling approach as possible, to capture the essential features of a COIN operation conducted by a force supporting an existing government. We treat influence and combat operations in a heterogeneous population of agents. The communications network within the population is flexibly and dynamically modeled. As agents communicate, influence is transmitted. When agents interact in a combat mode, the possibility of collateral damage exists.

We begin the discussion here with an overview of the influence modeling. From that point, we move to the consideration of combat modeling. Finally, we discuss some in silico experiments to examine various hypotheses about COIN operations.
Underlying Theory of the Influence Model
Our modeling process begins with the assertion that agents
live in a world in which they are subject to influence by a
number of different sources. Agents are influenced by
their peers, by the media, by insurgents, and by COIN
forces. In this sense, we can view each agent as a potential
target to a marketing campaign made by a number of
different influencers. Our modeling is informed by a
number of sociological theories concerning the way in
which individuals are influenced by their interactions with
others in the environment.
For parsimony and simplicity in building an initial
model, we identify four basic mechanisms through which
an individual will decide to join a group or be influenced to
side with a particular group. The mechanisms that we
identify include (1) an individual propensity to favor a
particular group, (2) the influence by propinquity, (3) the
effect of receiving rewards (or positive reinforcement), and
(4) the effect of coercive measures.
Copyright © 2009, Association for the Advancement of Artificial
Intelligence (www.aaai.org). All rights reserved.
•  Individual propensity to favor a particular group: The first mechanism is the idea that individuals develop attitudes that are more favorable toward some groups and less favorable toward others. We call these ideological leanings, which roughly correspond to an individual's natural propensity to favor one group over another. There may be a number of reasons why an individual would favor one group over another; a few examples include race, nationality, religion, ethnicity, or any other category that identifies a common interest.

•  The effect of propinquity: Individuals develop attitudes that are similar to the attitudes of those with whom they interact. For example, research has shown that individuals are likely to become insurgents if they interact with fellow insurgents. This may take place in various forms of social gathering, such as mosques, universities, or even the workplace. In this sense, we tend to think of being influenced by those around you as a sort of 'norming' effect in which normative behaviors and ideologies reflect the common interests of a reference group (McPherson, et al., 2001; Centola, et al., 2005).

•  The effect of receiving a reward: In the influence literature, it has been demonstrated that influence can result from the effect of receiving rewards (Turner, 1991; Mason, 1996; Hechter, 1988). This influence parameter makes particular sense in the way in which Coalition forces can win "the hearts and minds" of a population through a demonstrated ability to provide good will gestures, such as electricity, public transportation, and safe environments.

•  The effect of coercion: Coercion is one method by which an agent exerts influence to prevent the target from interacting with the influencer's enemy. For example, if an insurgent witnessed a civilian interacting with a coalition force member, the insurgent might employ some coercion technique to punish the civilian for interacting with the enemy. In short, there are consequences for supporting an enemy (Mason, 1996; Hechter, 1988).

In a sense, we can think of the first two mechanisms of influence as passive; that is, influence occurs passively through one's embedding in a social environment. The last two mechanisms are active: agents conduct coercive and rewarding operations in order to influence target individuals to shift allegiance.

In addition to the active and passive forms of influence, another distinction needs to be made. In general, the first three forms of influence work to persuade an agent to join one group over another. In other words, agents try to convince other agents that there is a positive return for joining one group over another. The effect of coercion, on the other hand, should work not only to bring about a change of allegiance but also to limit one's interaction with other groups. Findley and Young (2006) delineate two COIN approaches: the "war of attrition" approach and the "hearts and minds" approach. Within the realm of influence operations, reward aligns with "hearts and minds" while coercion is a form of "attrition."

Underlying Theory of the Combat Model

In addition to influence, we also consider combat operations. Combat operations differ from influence operations in several important respects. One key consideration is that influence operations rely primarily on communication, an activity that may or may not require physical proximity. Influence operations can take place across a diverse set of networks. Air- and sea-based missile attacks notwithstanding, physical proximity of forces is required for offensive actions. Also, COIN operators must necessarily take different offensive approaches from those of the insurgents. We begin our examination of combat with the insurgent strategies.
•  ARS – Attack and Retreat Strategy. This attack strategy is the prototypical guerrilla warfare strategy, in which the attackers strike their enemy and quickly retreat, leaving the enemy caught off guard with very little opportunity to return fire.

•  CDS – Collateral Damage Strategy. This strategy is based on the assumption that the enemy will over-respond if attacked. While not a suicide mission, the purpose of employing this strategy is to get the counterinsurgents to inadvertently kill agents not directly involved in combat. Agents using the CDS strategy surround themselves with other agents (presumably civilians) and then attack. In their haste to respond, the COIN agents potentially kill not only their attacker but also innocent civilians in the process.

•  SS – Suicide Strategy. With this strategy, insurgents surround themselves with COIN agents and then commit suicide by blowing themselves up. This strategy kills all individuals within a given radius of the suicide bomber. The suicide bomber may (or may not) abort the mission if he determines that non-COIN agents would die in the explosion.

•  IEDS – Improvised Explosive Device Strategy. With this strategy, insurgents position an IED and detonate it when sufficient COIN agents are within the kill radius.

•  CWS – Conventional Warfare Strategy. Also called the Clausewitzian Warfare Strategy, this strategy attacks any COIN agent that is within sight. If more than one COIN agent is observed within that distance, then the insurgent agent randomly selects one, attacks him, and kills with a given probability.
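To make the strategy descriptions concrete, the following is a minimal Python sketch of how the Red decision rules above might be encoded. The thresholds (e.g., three civilian shields for CDS, two Blue agents in the kill radius for SS and IEDS) and the kill probability are illustrative assumptions, not the paper's values.

```python
import random

def red_attack_decision(strategy, visible_blue, civilian_neighbors, kill_prob=0.5):
    """Sketch of the Red attack strategies (ARS, CDS, SS, IEDS, CWS).
    All thresholds and kill_prob are illustrative assumptions."""
    if not visible_blue:
        return None                                   # no target in sight
    if strategy == "ARS":                             # attack, then retreat immediately
        return ("attack_and_retreat", random.choice(visible_blue))
    if strategy == "CDS":                             # attack only when shielded by civilians
        if len(civilian_neighbors) >= 3:
            return ("attack", random.choice(visible_blue))
        return None
    if strategy == "SS":                              # detonate when surrounded by COIN agents
        if len(visible_blue) >= 2 and not civilian_neighbors:
            return ("detonate", visible_blue)
        return None
    if strategy == "IEDS":                            # trigger IED when enough Blue in kill radius
        if len(visible_blue) >= 2:
            return ("detonate_ied", visible_blue)
        return None
    if strategy == "CWS":                             # engage any visible Blue agent
        target = random.choice(visible_blue)
        return ("attack", target) if random.random() < kill_prob else ("miss", target)
    return None
```

The return value (an action tag plus target) is one possible interface; the full simulation would resolve the action against the grid state.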
We use Red to refer to “Insurgents,” and Blue refers to
“COIN forces.” Red and Blue colored agents are special
agents who are in combat with each other.
We let the following agent groups refer to the civilian
population. Cyans are those who side with the Blues.
Pinks are those who side with the Reds. Whites are
undecided agents who align with neither Red nor Blue.
Civilian agents are not in combat, but do engage in
communication and may persuade neighboring agents to
adopt their “neutral” alignment.
Since Blue agents belong to the COIN force group and are assumed to be dressed in military fatigues, their identities are visible to all other agents within the simulation. For all other agents, however, identities are kept private; Red agents become 'visible' only when they attack other agents. Further, Blue agents are replaced if they are killed in battle: Blue uses a reinforcement rate sufficient to maintain a constant force level. For the purposes of simulations of relatively short duration, we assume that there is an infinite supply of Blue agents, with reinforcements arriving as needed.
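The constant-force reinforcement rule can be sketched as follows. The `spawn_fn` callback (which would create a new Blue agent at a free cell) is an illustrative interface of our own, not the paper's implementation.

```python
def reinforce_blue(blue_agents, target_strength, spawn_fn):
    """Top up the Blue force to its target strength after combat losses,
    so plots of Blue population remain constant. Returns the number of
    reinforcements added (equal to the deaths being replaced)."""
    reinforcements = 0
    while len(blue_agents) < target_strength:
        blue_agents.append(spawn_fn())   # assumed to place a new Blue agent on a free cell
        reinforcements += 1
    return reinforcements
```

The simulation would accumulate the returned counts across time steps to track total Blue deaths, as reported in Table 1.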
Except for the Blues and Reds, agents may change their political views. Since each agent has a predisposition to adopt one view over another, we may think of each agent as having an ordered preference for belonging to each group. This order is calculated from a vector containing each individual's interest in each group. We call this vector L, and we refer to it as an agent's level of alignment with each of the five groups. L contains five elements, one for each of the five possible identities; the kth element refers to the agent's preference for the kth group. An agent determines his identity by selecting the largest score in L (i.e., Lmax).
For the COIN agents, we consider influence as an
offensive strategy and combat as a defensive strategy. The
primary goal is to initiate contact with the population and
engage in influence operations. As a defensive strategy,
we model COIN combat in terms of the agent potentially
returning fire upon being attacked. A primary difficulty
with direct combat is the visibility problem. A COIN
agent can never know with certainty if he is interacting
with a civilian or an insurgent until the insurgent fires a
weapon at him. Wong (2006) suggests that computer
models involving insurgencies should more carefully
consider the role of civilians as victims in combat
scenarios. Over-responding to an insurgent action may kill
many civilians, a situation which would create a negative
perception over the legitimacy of COIN presence.
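A minimal sketch of Blue's defensive rule as described above: upon being attacked, Blue returns fire with some response probability, and because Red is indistinguishable from civilians, the shot may strike a bystander. The `hit_prob` parameter and the bystander-selection rule are illustrative assumptions of ours.

```python
import random

def blue_response(response_prob, attacker, bystanders, hit_prob=0.7):
    """Blue's defensive reaction to an attack. With probability
    `response_prob` (0.0, 0.25, 0.75, or 1.0 in the experiments below),
    Blue returns fire; otherwise it holds fire. hit_prob is illustrative."""
    if random.random() >= response_prob:
        return None                                       # hold fire
    if random.random() < hit_prob:
        return ("kill", attacker)                         # hit the actual attacker
    if bystanders:
        return ("collateral", random.choice(bystanders))  # accidental civilian kill
    return ("miss", None)
```

A collateral-damage outcome would then feed back into the coercion term of the influence model for neighboring agents.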
Agents and the Simulation Environment
The simulation is designed to host a number of different
kinds of influence and combat scenarios. In general, influence between agents can happen in one of two ways. Influence may occur among agents on a simple two-dimensional grid, in which agents interact with their physical neighbors, or agents may interact through a formal social network. A number of graph-theoretic social network structures have been considered, such as Watts' beta graphs and scale-free graphs (Watts, 2003).
The simulation operation involves the following quantities.

•  Grid. The environment where each agent may reside is a two-dimensional (r x c) grid. The grid does not change in size, nor does it wrap. Each cell on the grid can be occupied by at most one agent at a time.

•  Time Step. Agents are selected at random from the population without replacement. Once everyone has been selected from the list, a time step is said to have occurred. Individuals may not be selected more than once per time step.

•  Agent Movement. Agents move in a random walk manner.

•  Agent Identity. Each agent has the capacity to belong to any one of five different group identities. These group numbers are denoted by the index k, which refers to the following group colors:

   {1, 2, 3, 4, 5} = {red, pink, blue, cyan, white}

The alignment level for group k is governed by a logistic model:

   L_k = exp(G_k + β_i I_k + β_b B_k + β_c C_k) / (1 + exp(G_k + β_i I_k + β_b B_k + β_c C_k))

The use of a logistic structure for preference is common in the market research literature (see, e.g., Franses and Paap, 2001). The quantities in the exponential of the logistic govern the response of the individual agent to received influence. In particular, we make the following definitions.
•  The number G_k models an agent's natural propensity to join group k. This number is assigned at birth from a uniform random distribution.

•  The number I_k is the proportion of individuals in the agent's physical neighborhood who belong to group k. This quantity fluctuates as agents move about the space.

•  The number B_k refers to the agent's expected rewards for belonging to group k. This is a composite measure, defined as the proportion of the last ten iterations in which the agent has been rewarded by members of group k.

•  The number C_k refers to the amount of coercion that the agent has received from members of group k. This is defined as the proportion of the last ten iterations in which the agent has been punished by members of group k.

•  The coefficients β_i, β_b, β_c are the relative weights of the social neighborhood impact, the reward impact, and the coercion impact, respectively. We use these coefficients so that the quantities (I, B, C) can be maintained in comparable units. For example, the same amount of money offered by Blue may be worth less in terms of influence than if offered by Red, and the impact of many Blue neighbors may be less than that of the same number of Red neighbors.
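To make the preference computation concrete, here is a minimal Python sketch of the logistic alignment model together with the windowed quantities defined above. The symbol names mirror the paper's notation and the ten-iteration window comes from the definitions; the encoding of neighborhoods and reward/punishment histories as lists of group labels is our own illustrative assumption.

```python
import math
from collections import deque

GROUPS = ["red", "pink", "blue", "cyan", "white"]
WINDOW = 10  # rewards and coercion are averaged over the last ten iterations

def neighborhood_fraction(neighbor_colors, k):
    """I_k: proportion of agents in the physical neighborhood belonging to group k."""
    if not neighbor_colors:
        return 0.0
    return sum(1 for c in neighbor_colors if c == k) / len(neighbor_colors)

def windowed_fraction(events, k):
    """B_k (or C_k): proportion of the last WINDOW reward (or punishment)
    events attributable to members of group k. `events` is a sequence of
    group labels, one per event -- an illustrative encoding."""
    recent = list(events)[-WINDOW:]
    if not recent:
        return 0.0
    return sum(1 for g in recent if g == k) / len(recent)

def preference(G_k, I_k, B_k, C_k, beta_i, beta_b, beta_c):
    """L_k: logistic alignment level for group k, per the equation above."""
    z = G_k + beta_i * I_k + beta_b * B_k + beta_c * C_k
    return math.exp(z) / (1.0 + math.exp(z))

def choose_identity(L):
    """An agent adopts the identity with the largest alignment score (L_max)."""
    return GROUPS[max(range(len(L)), key=lambda k: L[k])]
```

Note that an agent with G_k = 0 and no received influence sits at L_k = 0.5 for every group; the β weights then allow, for instance, a Red reward to carry more influence than the same reward from Blue.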
To simulate a time step, we cycle through the agents as a randomized list. If a civilian agent is selected, that agent looks in its Moore neighborhood and randomly selects a communication partner. In this interaction, the agent presents his political views and attempts to influence his communication partner. The agent then moves to a new location within its physical neighborhood and updates its own current state (i.e., color or political affiliation). The update is performed as follows: the agent computes the quantities L_k for each of the five possible states, and the state producing the maximal L is the state selected. A set of rules for determining color change is then implemented. Our theory states the following:

•  White agents may turn Pink or Cyan.

•  Pink agents may turn into Red or White agents.

•  Cyan agents may turn into White agents, but may not turn Blue.

•  Red and Blue agents do not change color.

In short, the algorithm states that the direction of color change must follow certain stages. For example, we do not expect an agent to change from Cyan directly to Red; it must first turn White, then Pink, and then eventually Red.

If a Red or Blue agent is selected from the randomized list, a slightly different set of computational procedures is implemented. In the first step, the agent evaluates whether or not it is in a combat mode; that is, it evaluates the area around it and determines whether any combatants are visible. If an enemy combatant is present, then the procedure checks to see if the conditions are appropriate for an attack. This depends on the kind of attack strategy implemented in the simulation. If the Red or Blue agent is not in any danger, then the agent performs the same influence task that any civilian would perform on a randomly selected neighbor. In this manner, the simulation moves forward in time. If an attack kills civilians, then the neighboring agents are influenced by coercion. The impact of this influence may be positive (fear) or negative (desire for revenge).

Below we present the results from one simulation. The parameter settings (as suggested in the bullet above on the β coefficients) give Blue and Cyan agents lesser ability to influence White agents than Red and Pink have. We note that as the simulation progresses, Red is capable of generating enough influence to recruit from the population of Pink agents. The time scale of the Pink to Red transition is slow relative to the time scales of the White to Pink and White to Cyan transitions for two reasons. One is that the constraint that White cannot transition directly to Red slows down the process by requiring two stages of transition; the other is the propinquity condition: Pink neighbors tend to keep an individual Pink.

Figure 1. Influence Operations without Combat
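The per-agent update procedure described in this section can be sketched as follows. Influence and combat actions are stubbed out; only the randomized selection without replacement, Moore-neighborhood partner choice, and random-walk move are shown. The `Agent` class and grid encoding (a dict from cell to agent) are illustrative assumptions.

```python
import random

# Moore neighborhood offsets: the eight cells surrounding a grid position
MOORE = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]

class Agent:
    def __init__(self, color, pos):
        self.color, self.pos = color, pos

def time_step(agents, grid, rows, cols):
    """One time step: each agent is selected exactly once, in random order
    (sampling without replacement). `grid` maps position -> Agent."""
    order = agents[:]
    random.shuffle(order)
    for agent in order:
        r, c = agent.pos
        cells = [(r + dr, c + dc) for dr, dc in MOORE
                 if 0 <= r + dr < rows and 0 <= c + dc < cols]
        partners = [grid[p] for p in cells if p in grid]
        if partners:
            partner = random.choice(partners)
            # ... agent presents views; influence(agent, partner) would go here ...
        free = [p for p in cells if p not in grid]  # grid does not wrap; one agent per cell
        if free:                                    # random-walk move to a free adjacent cell
            new = random.choice(free)
            del grid[agent.pos]
            grid[new] = agent
            agent.pos = new
```

The allowed color transitions (White to Pink/Cyan, Pink to Red/White, Cyan to White only) would be enforced after the L_k computation, before committing a state change.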
Software Implementation
The software that we have designed is written in C and accessed from a Matlab GUI front end. Alternatively, the software can be run from the command line as a stand-alone compiled C application. Connecting to Matlab permits not only a very flexible GUI but also straightforward summary and visualization of results.
Hypotheses
One of the advantages of using computer simulation models is the ability to test a number of hypotheses about real-world events that we could not test otherwise (this is particularly the case with warfare models, for which ethical concerns also arise). In addition, we are able to observe scenarios in an environment that we could not otherwise observe.

We have been particularly interested in testing some key assumptions commonly observed in counterinsurgency and irregular warfare manuals (Petraeus, et al., 2007). We test the following hypotheses:

•  Sometimes, the more you protect your force, the less secure you may be (1-149).

•  Sometimes, the more force is used, the less effective it is (1-150).

•  Sometimes doing nothing is the best reaction (1-152).
There are a number of criteria that we can use to define a successful counterinsurgent combat strategy. For the purposes of the current project, we allow success of a counterinsurgent strategy to have multiple meanings. For example, one strategy may result in many dead counterinsurgents but nonetheless produce a population that supports the existing government. Another strategy may result in very few deaths but a population that strongly supports the insurgency. Obviously, a strategy that minimizes the number of dead counterinsurgents and maximizes the number of individuals supporting the state would be ideal.

Results

A series of experiments was conducted to evaluate the effect of various combat strategies. Agents live on a two-dimensional 40 by 40 celled landscape containing 480 agents. At the start of the simulation, 400 agents are colored White; 40 of the remaining agents are colored Red, and the remaining 40 agents are colored Blue. Each condition in the experiment ran for a total of 3000 time steps. We ran each condition of the experiment 50 times in order to generate a reasonably sized Monte Carlo sample for trend extraction. Blue's strategies consist of returning fire upon attack 0%, 25%, 75%, and 100% of the time. The idea here is that Blue has a difficult classification problem of identifying the actual Red actors in the population and may accidentally shoot civilians. Such a situation has occurred in recent counterinsurgency efforts, though Blue firing discipline and identification capability is clearly much better than what has been modeled here. This strategy is clearly oversimplified, a problem we discuss in the Remarks below. The most serious weakness of the implemented Blue strategy is that Blue may accidentally kill civilians in this response, leading to a negative coercion influence response, literally driving civilians into the arms of the insurgents. Red's strategies consist of the strategies CDS, ARS, IEDS, and SS detailed above. Thus, we have 16 distinct configurations.

We begin by noting that the parameter values for these runs were specified for convenience of simulation and testing, without detailed regard for realism. For simplicity we have chosen a relatively small population size and parameters that produce dynamics that converge relatively quickly to steady states. We have, in other simulation efforts, made some attempts to infer reasonable parameter values from field data, but a full discussion of the issues surrounding that estimation is beyond the scope of this paper. A more realistic setting would involve tens of thousands to millions of individuals representing a small to large urban setting, with hundreds to thousands of Blue agents and slower transition rates of civilians into partisan groups. We can say that the qualitative structure of the results (e.g., trends) is consistent with more realistic parameter settings.

Figures 2-5 illustrate the dynamics of the groups averaged over the 50 Monte Carlo realizations for four of the 16 experimental settings. Figures 2 and 3 show the Red attack strategy of Attack and Retreat (ARS) when Blue responds 25% and 75% of the time Red attacks, respectively. We first note that the attack strategies speed up the dynamics relative to those of Figure 1. The impact of killing civilians is an influence computation described above. We next note that the Red population increases much more dramatically than in the influence-only case of Figure 1. The third observation is that the Cyan population decreases faster when Blue adopts a more aggressive defensive strategy. This higher rate is partially due to Cyans killed as collateral damage and partially due to Cyans being negatively influenced by Blue's killing of civilians.

Examining Figures 4 and 5, we consider the 25% and 75% Blue responses to Red's collateral damage optimization strategy, in which Red seeks to surround itself with civilians. In this scenario, the civilian populations suffer from Blue's counterattacks, and the influence of the collateral damage is clear from the high rate of decay of the Cyan population. Were the losses purely cross-fire deaths, the Pinks and Cyans would decay at the same rate.

Figure 2. Blue's Response (25%) to an Attack and Retreat Strategy.

Figure 3. Blue's Response (75%) to an Attack and Retreat Strategy.

Figure 4. Blue's Response (25%) to a Collateral Damage Strategy.

Figure 5. Blue's Response (75%) to a Collateral Damage Strategy.

The complete summary of all 16 attack configurations is given in the box and whisker plots of Figures 6-9. These figures show that low to moderate use of force against Red is more likely to result in a population that sides with Blue. Figure 6 presents the population of Cyans and demonstrates that, in general, Blue is most successful against Red when Blue does not exert force against Red.
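The experimental sweep described above (4 Red strategies x 4 Blue response rates, 50 Monte Carlo runs of 3000 steps each) might be organized as in the sketch below. The `simulate` callback is a stand-in for the full agent simulation and is assumed to return final group counts as a dict; that interface is our own assumption.

```python
import itertools
import statistics

RED_STRATEGIES = ["CDS", "ARS", "IEDS", "SS"]
BLUE_RESPONSE = [0.0, 0.25, 0.75, 1.0]   # probability that Blue returns fire
N_RUNS, N_STEPS = 50, 3000

def run_experiments(simulate):
    """Sweep the 16 configurations, averaging final group counts over
    N_RUNS Monte Carlo realizations per configuration."""
    results = {}
    for red, p in itertools.product(RED_STRATEGIES, BLUE_RESPONSE):
        runs = [simulate(red, p, N_STEPS, seed) for seed in range(N_RUNS)]
        results[(red, p)] = {group: statistics.mean(r[group] for r in runs)
                             for group in runs[0]}
    return results
```

Distinct seeds per run keep the realizations independent while making the whole sweep reproducible.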
Figure 6. Number of cyan agents remaining after 3000 time steps.

Figure 7. Number of red agents remaining after 3000 time steps.

Figure 8. Number of pink agents remaining after 3000 time steps.

Figure 9. Number of civilians remaining after 3000 time steps.
It is important to note that the entire population decays
due to the amount of fighting. The high rate of decay is
partially due to our choice of convenience parameters, set
to exhibit high rates of activity (rather than to exhibit
realistic rates of activity).
Since the Blue population receives reinforcements to maintain constant strength, the plots of the Blue population are all constant. However, the simulation keeps track of the number of Blue deaths. We have observed that Blue incurs the greatest amount of damage when Blue does not exert any force in the combat environment. In Table 1, it is observed that on average Blue may incur over 2000 deaths. However, when Blue exerts an excessive amount of force, Blue is likely to reduce its own deaths while successfully reducing the number of insurgents in the population. This, however, also results in significantly reducing the civilian population.
Judging from these results, it appears that our first
hypothesis (“Sometimes, the more you protect your force,
the less secure you may be”) depends on interpretation.
Blue clearly enjoys fewer deaths with higher rates of
kinetic engagement. On the other hand, the population
suffers more losses, and the collateral influence damage
further weakens Blue’s political position.
Current thinking in the counterinsurgency literature
suggests that excessive force, which kills innocent
civilians, will lead to more support for the insurgency. If
the insurgency increases, then it will be more difficult to
protect your force. While our model allows for civilians to
turn against Blue (or even Red) under indiscriminate
killing, the current parameterization involves too high a
rate of engagement and killing to be a quantitatively
accurate predictor.
Blue's Use of Force | Avg. Civilian Deaths | Avg. Red Deaths* | Avg. Blue Deaths
0%                  | 7.91                 | 42.05            | 2102.84
25%                 | 68.19                | 100.57           | 989.68
75%                 | 99.09                | 129.92           | 672.56
100%                | 107.20               | 137.05           | 611.03

Table 1. Blue's use of force and average number of deaths.
*Deaths in the 0% force condition are the result of suicide attacks alone.

Comparing these results with Figure 7, we observe that the worst strategy for Red is the suicide attack strategy, in which Red removes itself from the population without effort on Blue's part. While this strategy is capable of generating casualties against Blue, Blue's response does not generate enough collateral damage that Red could use to recruit individuals from the Pink population into the Red camp.

Remarks

We have presented a description and some preliminary efforts to simulate counterinsurgency operations with an agent-based model. Our model focuses on influence operations but also uses some simple combat strategies for kinetic operations. In a sense, our initial foray into insurgency modeling generates more questions than it provides answers. Our sincere hope is that we can motivate other researchers to join us in the effort to understand this difficult but important problem.

We acknowledge a number of issues we are continuing to investigate. First, combat strategies have been implemented in a rather simplistic manner. We have undertaken efforts to determine optimal strategies using control- and game-theoretic structures. The complexity and dimensionality of agent-based population dynamics preclude optimization of strategies within the context of the agent model. We have therefore developed a very simple finite-dimensional compartmental model, which takes the form of a five-dimensional differential equation, for the subpopulation dynamics. This simplified model can be approached using optimal control theory (Pontryagin's maximum principle or dynamic programming), and game-theoretic formulations in which Red and Blue compete for the "hearts and minds" of the population can be considered. We have had some preliminary successes with this type of approach, and we hope in future studies to test the dynamic strategies derived with the simple differential equation model by implementing them as feedback controllers in the agent model.

In general, parameter estimation is a very difficult issue for agent-based models. In some application areas, a significant number of parameters can be measured directly, obtained from survey data, or inferred from a fairly simple statistical model. Insurgency modeling is fraught with many practical problems of parameterization and data analysis. Market research uses a number of statistical techniques to determine the weight parameters in the logistic influence model, but the data collection problem in marketing is much simpler: direct interrogation of a population enduring an insurgency is nearly impossible. Certain ecological data analysis tools, such as catch-effort analyses, can be applied to extract information about population sizes. Intermediate models like the compartmental models can also guide the determination of parameters for the agent model. This topic is another of continuing investigation.

Acknowledgements

This research has been supported in part by AFRL contract FA8650-07-C-6759 and AFOSR grant FA9550-09-1-0524. Dr. Richard Albanese of AFRL/HEX has provided a great deal of important advice and some deep insights into influence operations. Our colleagues Dr. Yun Wang and Dr. Li Liu of Tempest Technologies have collaborated on computational implementation, gaming and control, and other modeling issues. We would also like to thank the referees for giving us valuable suggestions and thoughtful comments to help us improve previous drafts of this manuscript. Finally, we would like to thank the conference organizers for creating this interesting venue for interaction on cultural dynamics.
References

Bowman, Tom. 2007. "Petraeus, Our Old New Man in Iraq." http://www.military.com/forums/0,15240,126628,00.html

Centola, Damon, Robb Willer, and Michael Macy. 2005. "The Emperor's Dilemma: A Computational Model of Self-Enforcing Norms." American Journal of Sociology 110: 1009-40.

Epstein, Joshua. 1985. The Calculus of Conventional War: Dynamic Analysis without Lanchester Theory. Brookings Institution, Washington.

Findley, Michael, and Joseph Young. 2006. "Swatting Flies with Pile Drivers? Modeling Insurgency and Counterinsurgency." International Studies Association Annual Meeting, San Diego.

Franses, Philip, and Richard Paap. 2001. Quantitative Models in Marketing Research. Cambridge.

Hechter, Michael. 1988. Principles of Group Solidarity. Oxford University Press, Oxford.

Latané, Bibb. 1981. "The psychology of social impact." American Psychologist 36: 343-356.

Mason, T. David. 1996. "Insurgency, Counterinsurgency, and the Rational Peasant." Public Choice 86: 63-83.

McPherson, Miller, Lynn Smith-Lovin, and James M. Cook. 2001. "Birds of a Feather: Homophily in Social Networks." Annual Review of Sociology 27: 415-44.

Metz, Steven. 2007. Learning from Iraq: Counterinsurgency in American Strategy. Strategic Studies Institute. www.strategicstudiesinstitute.army.mil/pdffiles/PUB752.pdf

Petraeus, David H., James F. Amos, and John A. Nagl. 2007. The U.S. Army and Marine Corps Counterinsurgency Field Manual. The University of Chicago Press, Chicago.

Taylor, James G. 1983. Lanchester Models of Warfare. Operations Research Society of America, Military Applications Section, Arlington.

Turner, Jonathan H. 1991. The Structure of Sociological Theory, 5th ed. Wadsworth, Belmont, California.

Van Der Kloet, Irene. 2006. "Building Trust in the Mission Area: A Weapon against Terrorism?" Small Wars & Insurgencies 17: 421-436.

Watts, Duncan. 2003. Small Worlds: The Dynamics of Networks between Order and Randomness. Princeton.

Wong, Yuna Huh. 2006. Ignoring the Innocent: Non-combatants in Urban Operations and in Military Models and Simulations. Rand Corporation, Santa Monica, California.