Preliminary Draft – April 29, 2009
Sequential versus Simultaneous Law Enforcement
Shmuel Leshem and Avraham Tabbach
ABSTRACT
This paper compares Stackelberg and Nash solutions of an inspection game in which the
enforcement technology exhibits diminishing marginal return. We show that (i) the level
of compliance and the enforcer’s payoff are (weakly) higher if either the enforcer or the
offender is the Stackelberg leader than in a Nash game, and (ii) the offender’s payoff is
(weakly) higher if the offender is the Stackelberg leader than in a Nash game. Moreover,
depending on the enforcement technology and the ratio between the sanction for and
gains from non-compliance, the enforcer’s payoff may be higher if the offender, rather
than the enforcer, is the Stackelberg leader. This suggests that reactive enforcement might
be socially preferable to proactive enforcement.


University of Southern California School of Law.
Tel-Aviv University School of Law.
1. Introduction
Economic models of crime usually consider a sequential game in which an enforcer—the
state or the police—acts as a Stackelberg leader by committing to an enforcement
strategy. A potential offender, responding to the enforcer’s choice of enforcement level,
then decides whether or not to commit an offense. A related strand of literature has
examined an inspection game in which the enforcer and the offender choose their actions
simultaneously so that their equilibrium strategies constitute best responses. This paper
develops a simple model that introduces the possibility that the offender acts as a
Stackelberg leader. Our main result is that the enforcer’s payoff might be higher if the
offender, rather than the enforcer, is the Stackelberg leader. This suggests that reactive
enforcement might be superior to proactive enforcement.
We consider a modified inspection game between an enforcer and an offender. The
enforcer must choose enforcement expenditures; the offender must choose a probability
of compliance. Non-compliance yields gains to the offender, but causes harm to the
enforcer. Detection of non-compliance eliminates the offender’s gain, prevents harm, and
subjects the offender to a sanction. In contrast to standard inspection games, we assume
that the enforcer's strategy is continuous and that the probability of detecting non-compliance is increasing at a decreasing rate in the enforcement level. A salient feature of
this setting is that the enforcer’s optimal enforcement expenditures are increasing in the
offender’s probability of non-compliance.
Consider the case in which the enforcer moves first; that is, the enforcer pre-commits to
an observable level of enforcement. The enforcer’s problem is to choose enforcement
expenditures to maximize its payoff given that the offender responds optimally to the
enforcer’s choice. We show that, given that the offender complies with positive
probability, the offender’s equilibrium probability of non-compliance is lower than in a
Nash game. The enforcer’s equilibrium payoff, in turn, is higher than its Nash
equilibrium payoff. Moreover, since the offender’s probability of non-compliance, but
not the enforcer’s level of enforcement, is lower than in a Nash game, the offender’s
equilibrium payoff is lower than its Nash equilibrium payoff.
Next, consider the case in which the offender moves first; that is, the offender pre-commits to an observable compliance strategy. For example, suppose the offender can
choose an observable frequency of non-compliance in a certain period. The offender’s
problem is to choose a probability of non-compliance to maximize its payoff given that
the enforcer responds optimally to the offender’s choice. We show that, given that the
offender complies with positive probability, the offender’s probability of non-compliance
is lower than in a Nash game. To see why, note that the offender’s Nash equilibrium
strategy is optimal given the enforcer’s Nash equilibrium strategy; that is, without
incorporating the effect of the offender’s choice on the enforcer’s enforcement
expenditures. Since the enforcer’s best response is increasing in the offender’s probability
of non-compliance, the offender’s optimal first-mover strategy is to comply with a higher
probability relative to the Nash equilibrium strategy.
Since the offender’s probability of non-compliance is lower in an offender-leadership
game than in a Nash game, the equilibrium enforcement expenditures are lower in an
offender-leadership game than in a Nash game (since the enforcer’s optimal enforcement
expenditures are increasing in the offender’s non-compliance strategy). Moreover, the
offender’s lower probability of non-compliance in an offender-leadership game implies
that the enforcer’s equilibrium payoff is higher than its Nash equilibrium payoff. Thus,
both the offender’s and the enforcer’s equilibrium payoffs are higher if the offender
moves first than in a Nash game.
Whether the enforcer's equilibrium payoff is higher in an offender- versus enforcer-leadership game depends on the relative probability of non-compliance in the two cases.
Specifically, if the offender's probability of non-compliance is lower in an offender-leadership game, the enforcer's equilibrium payoff is higher if the offender moves first.
But even if the offender's probability of non-compliance is higher in an offender-leadership game, the enforcer's equilibrium payoff might still be higher if the offender
moves first. The reason is that, notwithstanding the higher probability of non-compliance
in an offender-leadership game, the enforcer’s equilibrium enforcement expenditures are
lower in an offender- versus enforcer-leadership game. The enforcer’s saving in
enforcement expenditures in an offender-leadership game might thus outweigh the
greater harm from non-compliance in an enforcer-leadership game.
The paper proceeds as follows. Section 2 reviews relevant literature. Section 3 sets up the
model. Section 4 compares the equilibrium outcomes in simultaneous and sequential
games. Section 5 concludes.
2. Related Literature
The literature on the economics of crime, beginning with Becker’s (1968) seminal article,
has implicitly examined a sequential game between an enforcer and an offender in which
the enforcer pre-commits to a continuous enforcement strategy. Underlying this literature
is the assumption that non-compliance always results in harm; the level of enforcement
merely determines the probability with which non-compliance is detected and punished
ex post. The enforcer’s objective accordingly is to minimize the sum of expected harm
and enforcement costs by choosing an optimal level of deterrence; that is, by directly
affecting the offender’s non-compliance strategy (see, e.g., Garoupa, 1997; Polinsky and
Shavell, 2007). Since non-compliance always results in harm, the enforcer has incentives
to detect non-compliance if and only if the enforcer moves first. In a simultaneous or an
offender-leadership game, by contrast, the enforcer’s best response is to not enforce the
law as the enforcer cannot affect the offender’s probability of non-compliance.
A related strand of literature has considered an inspection game in which the enforcer and
the offender act simultaneously (see, e.g., Graetz et al., 1986; Tsebelis, 1990). In a
simultaneous game, the enforcer’s and the offender’s equilibrium strategies constitute
best responses. Inspection games commonly share the implicit assumption that
enforcement is designed to prevent harm from non-compliance, rather than merely to
detect and punish non-compliance ex post. The enforcer’s objective in inspection games
is thus to minimize the sum of the expected harm from non-compliance and the
enforcement expenditures by choosing an optimal level of harm prevention, taking the
level of non-compliance as given.
This paper studies a synthesized model of law enforcement. Similarly to deterrence
models, we assume that the enforcement technology exhibits diminishing marginal
return; i.e., the probability of detecting non-compliance is increasing at a decreasing rate in the enforcement expenditures. Like prevention models, we assume that the
enforcement technology is preventive in that detection of non-compliance prevents harm,
rather than merely punishes non-compliance after harm has occurred.
In this synthesized model, the players’ objectives depend on the sequence of moves in the
game. In an enforcer-leadership game, the enforcer’s objective is to minimize the sum of
the expected harm from non-compliance and the enforcement expenditures by directly
affecting the level of deterrence—that is, the offender's non-compliance strategy—as well
as by preventing harm resulting from non-compliance. The offender’s strategy, in turn,
constitutes a best response to the enforcer’s strategy. In a simultaneous game, the
enforcer’s objective is to minimize the sum of the expected harm from non-compliance
and the enforcement costs, given the offender’s non-compliance strategy. The offender’s
objective similarly is to maximize its net gain from non-compliance, given the enforcer’s
enforcement strategy. Finally, in an offender-leadership game, the offender’s objective is
to maximize its net gains from non-compliance by choosing an optimal non-compliance
strategy. The offender’s optimal probability of non-compliance, in turn, takes into
account the effect of the offender’s strategy on the enforcer’s enforcement strategy. The
enforcer’s strategy, in turn, constitutes a best response to the offender’s strategy.
This paper is also related to the literature on strategic commitment in contests. Dixit
(1987) shows that a favorite contestant in a Nash game (a contestant who is expected to
win with probability greater than one half) will over-commit resources as a Stackelberg
leader relative to the Nash equilibrium strategy. Consequently, if the favorite contestant
moves first, its equilibrium payoff is higher and the underdog’s equilibrium payoff is
lower relative to their Nash equilibrium payoffs. Baik and Shogren (1992) extended
Dixit’s analysis by endogenizing the sequence of moves. They show that both the
favorite’s and the underdog’s payoffs are higher if the underdog, rather than the favorite,
moves first. Our result is reminiscent of Baik and Shogren’s in that here too both players’
equilibrium payoffs might be higher if one is the Stackelberg leader. In contrast to Baik
and Shogren’s result, however, here the Pareto-optimal sequence of moves depends on
the parameters of the game. In particular, here the enforcer always prefers to move first if
the costs of deterring non-compliance are sufficiently low.
3. Model
Consider two strategic, risk-neutral players: an offender and an enforcer. The offender's strategy is a probability of non-compliance, q ∈ [0,1]. A possible interpretation of the offender's strategy is that it represents the frequency with which the offender does not comply with the law in some time interval, or, alternatively, the offender's degree of non-compliance at one point in time. The enforcer's strategy is a level of monitoring expenditures, k ∈ [0, ∞).
The enforcer's probability of detecting non-compliance is given by p(k) ∈ [0,1), where p(k) ≥ 0, p'(k) > 0, p''(k) < 0, and p'''(k) ≥ 0; that is, (i) the probability of detecting non-compliance is increasing at a decreasing rate in the monitoring expenditures, and (ii) the marginal probability of detecting non-compliance is decreasing and convex in the monitoring expenditures. The assumption that the detection probability is concave in the enforcement expenditures is common in deterrence models of law enforcement. In contrast to these models, however, we assume that the enforcement technology is preventive in that detection of non-compliance results in elimination of the offender's gains and prevention of harm. To ensure an interior solution for the enforcer's choice of monitoring expenditures, we further assume that p'(0) > 1/H and lim_{k→∞} p'(k) = 0.
The players' payoffs are as follows. If non-compliance is undetected, the offender obtains positive gains of G and the enforcer obtains −(H + k), where H represents the harm from non-compliance and k the enforcement expenditures. If non-compliance is detected, the offender pays a sanction S and the enforcer obtains −k. We thus assume that the sanction for non-compliance is costless and is not paid to the enforcer. If the offender complies, the offender obtains 0 and the enforcer obtains −k. The enforcer's and the offender's payoff schedules reflect the notion that the enforcer would rather not monitor if the offender always complies, but will spend a positive amount on monitoring if the offender does not comply with sufficiently high probability.
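In expected terms, the payoff description above amounts to the following (this is only a restatement; the notation anticipates problems (1) and (3) below):

```latex
% Offender's and enforcer's expected payoffs given q and k:
U_o(q,k) = q\big[(1-p(k))\,G - p(k)\,S\big], \qquad
U_e(q,k) = -\big[q\,(1-p(k))\,H + k\big].
```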
We proceed by comparing three game configurations: a simultaneous game (SIM), a sequential game with enforcer leadership (SEQe), and a sequential game with offender leadership (SEQo). In SIM, both the enforcer and the offender choose their strategies independently and simultaneously. The solution concept is Nash equilibrium. In SEQe, the enforcer commits to observable monitoring expenditures. The offender's compliance strategy, in turn, constitutes a best response to the enforcer's monitoring choice. In SEQo, the offender commits to an observable non-compliance strategy. The enforcer's monitoring strategy, in turn, constitutes a best response to the offender's choice of compliance. We assume that the first mover can commit to carry out his first-stage strategy irrespective of the second mover's strategy, but that the second mover's strategy must be a best response to the first mover's strategy. The solution concept in a sequential game is thus subgame perfect Nash equilibrium.
4. Equilibrium under Different Move-Sequences
4.1 Simultaneous-move game
Consider a game in which the enforcer and the offender choose their strategies
simultaneously. We begin by constructing the players’ best response functions. Consider
first the offender’s best response as a function of the enforcer’s choice of monitoring
expenditure, k. The offender chooses q to solve the following problem:
    max_{q∈[0,1]} q[(1 − p(k))G − p(k)S].    (1)

Differentiating the objective function in (1) with respect to q gives G − p(k)(G + S). The offender's best response is thus q = 1 (q = 0) if p(k) < (>) G/(G+S). If p(k) = G/(G+S), the offender is indifferent among all q ∈ [0,1]. Letting k̃ be such that p(k̃) = G/(G+S), the offender's best response correspondence is

    q_br(k) = 1 if 0 ≤ k < k̃;  [0,1] if k = k̃;  0 if k > k̃.    (2)
Next, consider the enforcer's best response as a function of the offender's choice of non-compliance, q. The enforcer chooses k ∈ [0, ∞) to solve the following problem:

    min_{k∈[0,∞)} q(1 − p(k))H + k.    (3)

Differentiating the objective function in (3) with respect to k gives −(qHp'(k) − 1). Letting q̲ be such that q̲ = 1/(p'(0)H), the enforcer's best response function is

    k_br(q) = 0 if 0 ≤ q ≤ q̲;  k : p'(k) = 1/(qH) if q̲ < q < 1;  k : p'(k) = 1/H if q = 1.    (4)

If 0 ≤ q ≤ q̲, then the enforcer's best response is to not monitor. This follows because for 0 ≤ q ≤ q̲, the enforcer's marginal benefit from monitoring, qHp'(0), is lower than the marginal cost, 1. If q̲ < q < 1, then the enforcer's best response equates the marginal benefits and costs of monitoring. In particular, for q̲ < q < 1, the enforcer's optimal monitoring expenditures are increasing in q.[1] This is because the higher is q, the greater is the enforcer's benefit from detecting non-compliance (qH). If q = 1, the enforcer's optimal monitoring expenditures are such that p'(k) = 1/H. We shall denote k_br(1) by k̂.
Lemma 1 presents the equilibrium strategies (marked with a star) in a simultaneous game.

[1] To see that the enforcer's best response is increasing in q, note that dk_br(q)/dq = −1/(p''(k)q²H) > 0. If p'''(k) ≥ 0, then k_br(q) is convex.
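As a concrete illustration, the following minimal Python sketch computes both best responses for the exponential technology p(k) = 1 − e^{−αk} used in Example 1 below; the parameter values are hypothetical and chosen only so that α > 1/H holds.

```python
import numpy as np

# Hypothetical illustrative parameters: gains G, sanction S, harm H, and
# technology parameter a (the sketch requires a > 1/H, as assumed in the text).
G, S, H, a = 1.0, 1.0, 4.0, 1.0

p  = lambda k: 1 - np.exp(-a * k)       # detection probability
dp = lambda k: a * np.exp(-a * k)       # marginal detection probability p'(k)

k_tilde = np.log((G + S) / S) / a       # p(k_tilde) = G/(G+S)
q_bar   = 1 / (dp(0) * H)               # largest q for which k_br(q) = 0

def q_br(k):
    """Offender's best response (2); any q in [0,1] is a best response at k = k_tilde."""
    return 1.0 if k < k_tilde else 0.0

def k_br(q):
    """Enforcer's best response (4): solves p'(k) = 1/(qH) whenever q > q_bar."""
    if q <= q_bar:
        return 0.0
    return np.log(a * q * H) / a        # a*exp(-a*k) = 1/(qH)  =>  k = ln(aqH)/a
```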
Lemma 1 (equilibrium strategies in SIM)
Let k̃ be such that p(k̃) = G/(G+S) and k̂ be such that p'(k̂) = 1/H. Then, if the enforcer and the offender move simultaneously, the following strategy profiles are the unique Nash equilibria:
(a) If k̃ > k̂, then k* = k̂ and q* = 1 (no-compliance equilibrium).
(b) If k̃ ≤ k̂, then k* = k̃ and q* = 1/(p'(k*)H) < 1 (partial-compliance equilibrium). ║
The equilibrium outcome—a no-compliance or a partial-compliance equilibrium—depends on the ratio G/(G+S) ∈ (0,1) and on H. Specifically, given H, there is a cutoff value of G/(G+S) such that if G/(G+S) lies above (below) this cutoff, then the enforcer's and the offender's best response functions intersect at q = 1 (q < 1).[2] Similarly, given G/(G+S), there is a cutoff value of H such that if H lies below (above) this cutoff, then the offender's and the enforcer's best response functions intersect at q = 1 (q < 1).
Note that there does not exist an equilibrium in which k* = 0, since the offender's best response to k* = 0 is q = 1, which in turn induces the enforcer to deviate to k > 0 (by the assumption that p'(0) > 1/H). Likewise, there does not exist an equilibrium in which q* = 0, since the enforcer's best response to q* = 0 is k = 0, which in turn induces the offender to deviate to q = 1.
Figure 1 presents the equilibrium outcomes in SIM:

[2] Note that, as G/(G+S) → 0, q* → q̲ = 1/(p'(0)H) and k* → 0.
[Figure 1 here: the enforcer's best-response curves k_br(q; H) for two harm levels H1, H2 and the offender's best-response curves q_br(k; S) for two sanction levels S1, S2, with the no-compliance and partial-compliance equilibrium regions marked.]

Figure 1
Best-response curves in a simultaneous game

If the enforcer's and the offender's best response curves intersect to the left of (at) k̃, the resulting equilibrium is a no-compliance (partial-compliance) equilibrium.
Remark: In a partial-compliance equilibrium, q* is increasing in G/(G+S) and decreasing in H (dq*/dS < 0, dq*/dG > 0, dq*/dH < 0), whereas k* is increasing in G/(G+S) and invariant to H (dk̃/dS < 0, dk̃/dG > 0, dk̃/dH = 0).[3] This can easily be seen from Figure 1: an increase in S or a decrease in G shifts the offender's best response curve to the left, resulting in a new equilibrium where both q* and k* are lower as compared to the initial equilibrium. An increase in H moves down the enforcer's best response curve, resulting in a new equilibrium where q* is lower, but k* remains unchanged, as compared to the initial equilibrium.

[3] More precisely, the effect of a marginal increase in S or G on q* depends on the elasticity of the marginal detection probability with respect to the detection probability ((dp'/dp)(p/p') = p''p/(p')²). The effect of a marginal increase in S or G on k* depends on the elasticity of the detection probability with respect to the monitoring expenditures ((dp/dk)(k/p) = p'k/p).
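A quick numerical check of these comparative statics, again assuming the hypothetical exponential technology from the sketch above:

```python
import numpy as np

def sim_equilibrium(G, S, H, a):
    # Partial-compliance Nash equilibrium of Lemma 1(b) for p(k) = 1 - exp(-a*k):
    # k* solves p(k*) = G/(G+S); q* = 1/(p'(k*)H).
    k_star = np.log((G + S) / S) / a
    q_star = 1.0 / (a * np.exp(-a * k_star) * H)
    return k_star, q_star

k0, q0 = sim_equilibrium(G=1.0, S=1.0, H=4.0, a=1.0)
k1, q1 = sim_equilibrium(G=1.0, S=2.0, H=4.0, a=1.0)  # higher S: q* and k* fall
k2, q2 = sim_equilibrium(G=1.0, S=1.0, H=8.0, a=1.0)  # higher H: q* falls, k* unchanged
assert k1 < k0 and q1 < q0
assert q2 < q0 and k2 == k0
```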
4.2 Enforcer-Leadership Game
We now turn to the case where the enforcer acts as a Stackelberg leader. In an enforcer-leadership game, the enforcer can pre-commit to enforcement expenditures irrespective of the offender's choice of probability of non-compliance. The offender's choice of non-compliance, in turn, constitutes a best response to the enforcer's strategy. The enforcer thus chooses k ∈ [0, ∞) to solve the following problem:
    min_{k∈[0,∞)} q_br(k)(1 − p(k))H + k,    (5)

where

    q_br(k) = 1 if 0 ≤ k < k̃;  0 if k̃ ≤ k.    (6)
(6) reflects the notion that if the offender is indifferent between compliance and non-compliance (i.e., if k = k̃), the offender always complies. The reason is that, since the offender strictly prefers to comply if k > k̃, there does not exist a subgame perfect equilibrium in which k > k̃, as the enforcer can increase its payoff by deviating to k', where k̃ ≤ k' < k. The only subgame perfect equilibrium with compliance, therefore, is one in which the enforcer spends k = k̃ and the offender always complies.
Incorporating the offender's best response function in (6) into the enforcer's objective function in (5), the enforcer's problem can be rewritten as

    min_k (1 − p(k))H + k if 0 ≤ k < k̃;  k if k̃ ≤ k.    (7)

The enforcer's objective function thus exhibits a downward jump at k̃.[4]
~
Clearly the enforcer never chooses k  k . Differentiating the enforcer’s objective
~
function for 0  k  k with respect to k and equating to zero gives p ' (k ) H  1. Letting k̂
be such that p' (kˆ) H  1, 5 the enforcer’s optimal monitoring strategy, k*, is given by

kˆ
k  ~

k
*
~
if k  [(1  p(kˆ)]H  kˆ.
~
if k  [(1  p(kˆ)]H  kˆ.
(8)
If k̃ > [1 − p(k̂)]H + k̂, then the enforcer's expected payoff if the offender never complies, the enforcer spends k̂, and the expected harm is [1 − p(k̂)]H, is greater than the enforcer's expected payoff if the offender always complies and the enforcer spends k̃. If, by contrast, k̃ ≤ [1 − p(k̂)]H + k̂, then the enforcer's expected payoff if the offender always complies and the enforcer spends k̃ is greater than the enforcer's expected payoff if the offender never complies, the enforcer spends k̂, and the expected harm is [1 − p(k̂)]H.

[4] More precisely, note that lim_{k↑k̃} (1 − p(k))H + k = (S/(G+S))H + k̃ > k̃.
[5] Note that k̂ was previously defined in Section 4.1.
We summarize these results in Lemma 2.
Lemma 2 (equilibrium strategies in SEQe)
Let k̃ be such that p(k̃) = G/(G+S) and k̂ be such that p'(k̂) = 1/H. Then the following strategy profiles constitute the unique subgame perfect equilibria:
(a) If k̃ > k̂ + [1 − p(k̂)]H, then k* = k̂ and q* = 1 (no-compliance equilibrium).
(b) If k̃ ≤ k̂ + [1 − p(k̂)]H, then k* = k̃ and q* = 0 (full-compliance equilibrium). ║
Observe that in the no-compliance equilibrium, the role of enforcement is merely to decrease the harm from non-compliance. Since the enforcer does not aim at deterring non-compliance, the enforcement strategy is strictly preventive. In the full-compliance equilibrium, by contrast, the role of enforcement is to induce compliance. Furthermore, given that the offender always complies, the enforcer does not aim at preventing harm ex post. The enforcement strategy is thus strictly deterrent.
The equilibrium outcome—a no-compliance or a full-compliance equilibrium—depends on the ratio G/(G+S) ∈ (0,1) and on H. Specifically, given H, there is a cutoff value of G/(G+S) such that if G/(G+S) lies above (below) this cutoff, then q* = 1 (q* = 0).[6] Similarly, given G/(G+S), there is a cutoff value of H such that if H lies below (above) this cutoff, then q* = 1 (q* = 0).
The explanation of Lemma 2 is as follows. If G/(G+S) is sufficiently small (large) (for example, if S is sufficiently large (small)), then the enforcer's costs of deterring non-compliance are relatively low (high). In particular, as G/(G+S) → 0, the enforcer's cost of achieving deterrence approaches zero; by contrast, as G/(G+S) → 1, the enforcer's cost of achieving deterrence approaches infinity. Thus, the enforcer fully deters non-compliance ex ante if G/(G+S) is sufficiently low and otherwise prevents harm ex post. Likewise, if H is sufficiently small (large), the enforcer's benefit from inducing compliance is relatively low (high). Thus, the enforcer fully deters non-compliance ex ante if H is sufficiently high and otherwise prevents harm ex post.
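For the exponential technology of Example 1 below, Lemma 2 can be evaluated directly; a minimal sketch, assuming p(k) = 1 − e^{−αk} (written `a` in the code) with α > 1/H:

```python
import numpy as np

def seqe_equilibrium(G, S, H, a):
    # Enforcer-leadership equilibrium of Lemma 2 for p(k) = 1 - exp(-a*k).
    k_tilde = np.log((G + S) / S) / a            # deterrence cost: p(k_tilde) = G/(G+S)
    k_hat = np.log(a * H) / a                    # prevention optimum: p'(k_hat) = 1/H
    prevention = k_hat + np.exp(-a * k_hat) * H  # k_hat + (1 - p(k_hat))*H
    if k_tilde > prevention:
        return k_hat, 1.0                        # (a) no-compliance equilibrium
    return k_tilde, 0.0                          # (b) full-compliance equilibrium
```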
Figure 2 presents the enforcer's optimal choice of monitoring expenditures:

[6] Note that, as G/(G+S) → 0, k̃ → 0; hence q* = 0 and k* → 0.

[Figure 2 here: the enforcer's payoff as a function of k for two harm levels H1, H2, with the no-compliance region (k* = k̂(H)) and the full-compliance region (k* = k̃) marked.]

Figure 2
The enforcer's payoff function in an enforcer-leadership game

The enforcer's strategy in an enforcer-leadership game depends on the relation between the enforcer's payoff when the offender never complies and when the offender always complies.
We turn now to compare the equilibrium outcomes in SIM and SEQe.
Proposition 1 (SIM versus SEQe)
(a) If q* = 1 in SEQe, then q* = 1 in SIM; if q* = 1 in SIM, then either q* = 1 or q* = 0 in SEQe.
(b) If q* = 0 in SEQe, then
(i) k* in SEQe is either higher than or equal to k* in SIM;
(ii) q* is lower in SEQe than in SIM; and
(iii) the enforcer's equilibrium payoff is higher and the offender's equilibrium payoff is lower in SEQe than in SIM. ║
Recall that the enforcer in SEQe can directly affect the offender's non-compliance strategy. Thus the enforcer in SEQe either fully deters non-compliance ex ante or merely prevents harm ex post. The enforcer in SIM, by contrast, takes the offender's non-compliance strategy as given, thereby aiming solely to prevent harm ex post. The enforcer's strategy in SIM therefore either induces a positive probability of compliance ('semi-deterrent' enforcement) or full non-compliance.
When G/(G+S) is sufficiently large or H is sufficiently small, the enforcer's enforcement strategy in both SIM and SEQe is preventive. When G/(G+S) or H is intermediate, the enforcer over-commits enforcement expenditures in SEQe relative to SIM. Accordingly, the enforcement strategy is deterrent in SEQe and preventive in SIM. The enforcer's equilibrium payoff is thus higher in SEQe, whereas the offender's equilibrium payoff is higher in SIM. When G/(G+S) is sufficiently small or H is sufficiently large, the enforcer's monitoring expenditures are identical in SEQe and SIM. The enforcement strategy is thus deterrent in SEQe and semi-deterrent in SIM. Consequently, the enforcer's equilibrium payoff is higher in SEQe than in SIM, whereas the offender's equilibrium payoff is higher in SIM than in SEQe.
4.3 Offender-Leadership Game
Consider now the case where the offender acts as a Stackelberg leader. In an offender-leadership game, we assume, the offender can pre-commit to a compliance strategy irrespective of the enforcer's choice of monitoring expenditures. For example, suppose the offender can choose an observable frequency of non-compliance in a certain period. The higher the frequency of non-compliance, q, the greater the harm and gain from non-compliance as well as the sanction for non-compliance. Having observed the offender's choice, the enforcer's strategy, in turn, constitutes a best response to the offender's strategy. The offender thus chooses q ∈ [0,1] to solve the following problem:

    max_{q∈[0,1]} q[(1 − p(k_br(q)))G − p(k_br(q))S],    (9)
where

    k_br(q) = 0 if 0 ≤ q ≤ q̲;  k : p'(k) = 1/(qH) if q̲ < q < 1;  k : p'(k) = 1/H if q = 1.    (10)

Recall that k_br(q) is the enforcer's best response function (see Section 4.1) and that q̲ = 1/(p'(0)H) is the greatest q for which no enforcement (k = 0) is optimal.
Incorporating the enforcer's best response function in (10) into the offender's objective function in (9), the offender's problem can be rewritten as follows:

    max_{q∈[0,1]} qG if 0 ≤ q ≤ q̲;  q[G − p(k_br(q))(G + S)] if q̲ < q < 1;  G − p(k̂)(G + S) if q = 1.    (11)
Clearly the offender never chooses q < q̲. Differentiating the offender's objective function for q̲ < q < 1 with respect to q gives

    G − p(k_br(q))(G + S) − qp'(k_br(q))(dk_br(q)/dq)(G + S).    (12)

The first two terms represent the offender's marginal benefit from increasing q, resulting from the greater probability with which the offender obtains net gains from non-compliance. Recall that, in a partial-compliance Nash equilibrium, q* is such that the sum of these terms is zero. The last term represents the offender's marginal cost of increasing q. This marginal cost stems from the higher probability with which non-compliance is detected as q increases.
Observing that dk_br(q)/dq = −p'(k)/(qp''(k)) (by implicit differentiation of the enforcer's best response function), the offender's marginal cost in (12) becomes −[(p'(k))²/p''(k)](G + S) > 0. (12) can thus be rewritten as

    G − (G + S)(p(k) − (p'(k))²/p''(k)).    (13)

If (13) is negative for all k ∈ [0, k̂), then q* = q̲; if (13) is positive for all k ∈ [0, k̂], then q* = 1; if there exists k ∈ [0, k̂) such that (13) is equal to zero, then q* ∈ [q̲, 1). To simplify the analysis, we will assume that p'''(k)p'(k)/(p''(k))² ≥ 1,[7] which ensures a unique solution to the offender's problem.
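As a worked instance (developed formally in the Appendix), the bracketed term in (13) collapses to a constant under the exponential technology p(k) = 1 − e^{−αk}:

```latex
% p' = alpha e^{-alpha k},  p'' = -alpha^2 e^{-alpha k}:
p(k) - \frac{p'(k)^2}{p''(k)}
   = \big(1 - e^{-\alpha k}\big) + e^{-\alpha k} = 1,
\qquad \text{so (13)} = G - (G+S) = -S < 0.
```

Hence (13) is negative for every k, and the offender commits to q̲, as Lemma A2 in the Appendix confirms.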
Lemma 3 presents the equilibrium strategies when the offender moves first.
Lemma 3 (equilibrium strategies in SEQo)
Let

    k⁺ = k : p(k) − (p'(k))²/p''(k) = G/(G+S) if −(p'(0))²/p''(0) < G/(G+S);  k⁺ = 0 if −(p'(0))²/p''(0) ≥ G/(G+S),    (14)

and let k̂ be such that p'(k̂) = 1/H. Then, if the offender moves first, the following strategy profiles constitute the unique subgame perfect equilibria:
(a) If k⁺ = 0, then k* = 0 and q* = q̲ (no-enforcement equilibrium). If, in addition, −(p'(0))²/p''(0) ≥ 1, then k⁺ = 0 for all G, S, and H.
(b) If 0 < k⁺ < k̂, then k* = k⁺ and q* = 1/(p'(k⁺)H) ∈ (q̲, 1) (partial-compliance equilibrium).
(c) If k⁺ ≥ k̂, then k* = k̂ and q* = 1 (no-compliance equilibrium). ║

[7] p'''(k)p'(k)/(p''(k))² is the elasticity of p'' with respect to p'; that is, (dp''/dp')(p'/p''). Specifically, if p'''p'/(p'')² ≥ 1, the offender's marginal net benefit from increasing the probability of non-compliance is decreasing.
If −(p'(0))²/p''(0) ≥ 1, then the offender commits to q* = q̲ for all S, G, and H. This is because the offender's marginal benefit from increasing q above q̲ is lower than the marginal cost. Given the offender's strategy, the enforcer's best response is to not monitor. If −(p'(0))²/p''(0) < 1, by contrast, the equilibrium outcome depends on G/(G+S) and H. Specifically, for sufficiently small G/(G+S) or sufficiently large H, the offender commits to q* = q̲ and the enforcer responds with k* = 0. For intermediate values of G/(G+S) or H, the offender commits to q* ∈ (q̲, 1) and the enforcer responds with k* = k⁺. Finally, for sufficiently large G/(G+S) or sufficiently small H, the offender commits to q* = 1 and the enforcer responds with k* = k̂.
Figure 3 presents the equilibrium outcomes in SEQo:

[Figure 3 here: the offender's payoff as a function of the monitoring level induced by its commitment, with the no-enforcement region (payoff qG, up to q̲), the partial-compliance region (optimum at k⁺), and the no-compliance region (at k̂) marked.]

Figure 3
The offender's payoff function in an offender-leadership game

The offender's strategy in an offender-leadership game depends on the relation between the offender's payoff when the offender does not comply with probability q̲ and when the offender does not comply with probability greater than q̲.
We turn now to compare the equilibrium outcomes in SIM and SEQo.
Proposition 2 (SIM versus SEQo)
(a) If q* = 1 in SEQo, then q* = 1 in SIM; if q* = 1 in SIM, then either q* = 1 or q* ∈ [q̲, 1) in SEQo.
(b) If q* ∈ [q̲, 1) in SEQo, then:
(i) both q* and k* are lower in SEQo than in SIM; and
(ii) both the enforcer's and the offender's equilibrium payoffs are higher in SEQo than in SIM. ║
That the offender's equilibrium payoff in SEQo is not lower than in SIM is straightforward: the offender can always obtain his Nash equilibrium payoff in SEQo by committing to his Nash equilibrium strategy. That the offender's equilibrium payoff is higher in SEQo than in SIM stems from the fact that the offender's Nash equilibrium strategy does not take into account the fact that the enforcer's best response is increasing in q. Consequently, the offender's marginal benefit from increasing q at the Nash equilibrium strategy is lower than the marginal cost (see (13)). The offender can thus increase its payoff in SEQo by committing to a lower q.
To see why the enforcer's equilibrium payoff is higher in SEQo than in SIM, note that the enforcer's equilibrium payoff is decreasing in q. Formally, recall that the enforcer's payoff as a function of q is −q(1 − p(k_br(q)))H − k_br(q), where k_br(q) is the enforcer's best response function (see (10)). Differentiating the enforcer's payoff function with respect to q (by the envelope theorem) gives −(1 − p(k_br(q)))H < 0. The enforcer's payoff is thus increasing as q decreases.

Intuitively, if q decreases and k remains unchanged, the enforcer's payoff increases because the detection probability remains unchanged but the probability of non-compliance is lower. It follows that, if the enforcer then chooses lower monitoring expenditures than its Nash strategy, the enforcer's payoff must be higher than if it kept its monitoring expenditures unchanged.
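The following numerical check of Proposition 2(b) uses the hypothetical power technology with θ = 2 and β = 1; the closed forms are those of Lemmas 1 and 3 (derived in Lemmas A3 and A4 of the Appendix):

```python
G, S, H, theta, beta = 3.0, 1.0, 5.0, 2.0, 1.0   # assumed values; H > beta/theta

def p(k):  return 1 - (beta / (k + beta)) ** theta
def dp(k): return theta * beta**theta * (k + beta) ** (-theta - 1)

def enforcer_payoff(q, k): return -(q * (1 - p(k)) * H + k)
def offender_payoff(q, k): return q * ((1 - p(k)) * G - p(k) * S)

# SIM, Lemma 1(b): p(k*) = G/(G+S) and q* = 1/(p'(k*)H).
k_sim = beta * ((G + S) / S) ** (1 / theta) - beta
q_sim = 1 / (dp(k_sim) * H)

# SEQo, Lemma 3(b): k* = k_plus solves (13) = 0; q* = 1/(p'(k_plus)H).
k_seqo = beta * ((G + S) / ((1 + theta) * S)) ** (1 / theta) - beta
q_seqo = 1 / (dp(k_seqo) * H)

assert k_seqo < k_sim and q_seqo < q_sim                      # Prop 2(b)(i)
assert enforcer_payoff(q_seqo, k_seqo) > enforcer_payoff(q_sim, k_sim)
assert offender_payoff(q_seqo, k_seqo) > offender_payoff(q_sim, k_sim)
```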
We now turn to compare the equilibrium outcomes in SEQo and SEQe.
Proposition 3 (SEQe versus SEQo)
Suppose q* < 1 in either SEQo or SEQe. Then
(a) k* is lower in SEQo than in SEQe.
(b) q* is either higher or lower in SEQo than in SEQe.
(c) The offender's equilibrium payoff is higher in SEQo than in SEQe.
(d) The enforcer's equilibrium payoff is either higher or lower in SEQo than in SEQe; in particular:
(i) if q* is lower in SEQo than in SEQe, then the enforcer's equilibrium payoff is higher in SEQo than in SEQe;
(ii) if G/(G+S) is sufficiently small, then the enforcer's equilibrium payoff is higher in SEQe than in SEQo. ║
To see why the enforcer’s equilibrium monitoring expenditures are lower in SEQo than
in SEQe, consider two cases: q *  1 in SIM and q *  1 in SIM.
First, suppose q *  1 in SIM. Recall from Propositions 1 and 2, respectively, that k* in
SEQe is higher than or equal to k* in SIM and that k* is strictly lower in SEQo than in
SIM. It follows that k* is strictly higher in SEQe than in SEQe.
Next, suppose q *  1 in SIM. Consider first the case where q *  0 in SEQe. Then, from
proposition 1, k* is strictly higher in SEQe than in SIM. Since, by Proposition 1, k* in
SIM is higher than or equal to k* in SEQo, it follows that k* is strictly higher in SEQe
than in SEQe. Consider next the case where q *  1 in SEQe. Then k* in SEQe is equal to
k* in SIM. But the assumption that q *  1 in either SEQe or SEQo implies that q *  1 in
SEQo. By Proposition 2, k* is lower in SEQe than in SIM. Thus, k* is strictly higher in
SEQe than in SEQo.
To see why the offender’s equilibrium payoff is higher in SEQo than in SEQe, consider
again two cases: q *  1 in SIM and q *  1 in SIM.
First, suppose q *  1 in SIM. Recall from Propositions 1 and 2, respectively, that the
offender’s equilibrium payoff is higher in SEQo than in SIM, but lower in SEQe than in
SIM. It follows that the offender’s equilibrium payoff is higher in SEQo than in SEQe.
Next, suppose q *  1 in SIM. Consider first the case where q *  0 in SEQe. Then, from
proposition 1, the offender’s equilibrium payoff is strictly lower in SEQe than in SIM.
Since, by Proposition 2, the offender’s equilibrium payoff is strictly higher in SEQo than
in SIM, it follows that the offender’s equilibrium payoff is strictly higher in SEQo than in
SEQe. Consider next the case where q *  1 in SEQe. Then, by Proposition 1, the
offender’s equilibrium payoff in SEQe is equal to that in SIM. But the assumption that
q *  1 in either SEQe or SEQo implies that q *  1 in SEQo. It follows from Proposition
2 that the offender’s equilibrium payoff is higher in SEQo than in SIM. Thus, the
offender’s equilibrium payoff is higher in SEQo than in SEQe.
Now, the enforcer's equilibrium payoff in SEQo may be higher than in SEQe because the enforcer over-commits monitoring expenditures in SEQe relative to SEQo. Thus, if q* in SEQo is sufficiently low relative to q* in SEQe, the enforcer's saving in monitoring expenditures outweighs any greater expected harm, and the enforcer's equilibrium payoff is higher in SEQo than in SEQe.
Example 1. Suppose the probability of detecting non-compliance is given by p(k) = 1 − e^{−αk}, where α > 1/H. Then the enforcer's equilibrium payoff is higher (lower) in SEQe than in SEQo if G/(G+S) < (>) 1 − 1/e. ║
We prove this result in the Appendix. If p(k) = 1 − e^{−αk}, then −(p'(0))²/p''(0) = 1. It follows that, for all G, S, and H, the offender in SEQo commits to q̲. The enforcer's best response, in turn, is to not monitor. Whether the enforcer's equilibrium payoff is higher if it moves first or second depends on G/(G+S). Specifically, if G/(G+S) is such that q* = 1 in SEQe, then the enforcer's equilibrium payoff is higher in SEQo than in SEQe. If G/(G+S) is such that q* = 0 in SEQe, then the enforcer's equilibrium payoff may be either higher or lower in SEQo than in SEQe. In particular, if the equilibrium monitoring expenditures in SEQe are sufficiently high (i.e., G/(G+S) > 1 − 1/e), then the enforcer's equilibrium payoff is higher in SEQo than in SEQe.
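A short numerical check of Example 1, using the closed forms of Lemmas A1 and A2 in the Appendix (the values of H and α are assumptions for illustration):

```python
import numpy as np

def seqe_vs_seqo(G, S, H, a):
    # Enforcer payoffs: SEQe from Lemma A1 (-k_tilde or -(1/a)ln(e*a*H),
    # whichever is larger); SEQo from Lemma A2 (-q_bar*H = -1/a).
    k_tilde = np.log((G + S) / S) / a
    seqe = -min(k_tilde, np.log(np.e * a * H) / a)
    return seqe, -1.0 / a

for ratio in (0.5, 0.7):                 # the cutoff is 1 - 1/e ~ 0.632
    G, S = ratio, 1 - ratio              # normalize G + S = 1 so G/(G+S) = ratio
    seqe, seqo = seqe_vs_seqo(G, S, H=10.0, a=1.0)
    assert (seqe > seqo) == (ratio < 1 - 1 / np.e)
```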
Example 2. Suppose the probability of detecting non-compliance is given by p(k) = 1 − (β/(k+β))^θ, where θ > 0 and H > β/θ. Then, if q* < 1 in either SEQo or SEQe:
(a) If 0 < θ < 1, then the enforcer's equilibrium payoff is higher in SEQe than in SEQo.
(b) If θ = 1, then the enforcer's equilibrium payoff is higher (identical) in SEQe than in SEQo if G/(G+S) < (≥) 1/2; that is, if q* = (>) q̲ in SEQo.
(c) If θ > 1, then the enforcer's equilibrium payoff is higher (lower) in SEQe than in SEQo if G/(G+S) < (>) 1 − (θ/(1+θ))^θ. ║
We prove this result in the Appendix. Example 2 illustrates that whether the enforcer's equilibrium payoff is higher in SEQe than in SEQo depends on the effectiveness of the enforcement technology and on G/(G+S). Specifically, when the enforcement technology is relatively ineffective (θ < 1), the enforcer prefers to move first for any G/(G+S) ∈ (0,1). The intuition is that when the enforcement technology is relatively ineffective, the offender's equilibrium probability of non-compliance in SEQo is relatively high as compared to the offender's Nash equilibrium strategy. The enforcer's payoff from completely deterring non-compliance in SEQe is accordingly higher than its payoff from responding optimally to partial non-compliance in SEQo. Moreover, as we show in the Appendix, there is a range of values of G/(G+S) in which the offender never complies in SEQo, but always complies in SEQe.

When the enforcement technology is relatively effective (θ > 1), by contrast, the enforcer's payoff is higher in SEQe than in SEQo only if G/(G+S) is sufficiently small. More specifically, as we show in the Appendix, there is a range of values of G/(G+S) in which the offender never complies in SEQe, but partially complies in SEQo. For these values of G/(G+S), the enforcer's payoff is higher in SEQo than in SEQe. If the offender always complies in SEQe, then if the offender does not comply with probability greater than q̲ in SEQo, the enforcer's payoff is higher in SEQo than in SEQe; if, by contrast, the offender does not comply with probability q̲ in SEQo, then whether the enforcer's payoff is higher or lower in SEQe relative to SEQo depends on G/(G+S).
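A numerical check of Example 2(c), using the closed forms of Lemmas A3 and A4 in the Appendix (θ, β, and H are assumed values with θ > 1 and H > β/θ):

```python
theta, beta, H = 2.0, 1.0, 50.0               # assumed illustrative values

def enforcer_payoffs(ratio):                   # ratio = G/(G+S)
    G, S = ratio, 1 - ratio
    k_tilde = beta * ((G + S) / S) ** (1 / theta) - beta            # deter: q = 0
    k_hat = (theta * beta**theta * H) ** (1 / (1 + theta)) - beta   # prevent: q = 1
    prevention = k_hat + (beta / (k_hat + beta)) ** theta * H
    seqe = -min(k_tilde, prevention)                                # Lemma A3
    if ratio <= theta / (1 + theta):                                # Lemma A4(a)
        seqo = -beta / theta                                        # = -q_bar * H
    else:
        k_plus = beta * ((G + S) / ((1 + theta) * S)) ** (1 / theta) - beta
        if k_plus >= k_hat:                                         # Lemma A4(c)
            seqo = -prevention
        else:                                                       # Lemma A4(b)
            seqo = -((1 + 1 / theta) * (k_plus + beta) - beta)
    return seqe, seqo

cutoff = 1 - (theta / (1 + theta)) ** theta    # = 5/9 for theta = 2
for ratio in (0.4, 0.6):
    seqe, seqo = enforcer_payoffs(ratio)
    assert (seqe > seqo) == (ratio < cutoff)   # Example 2(c)
```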
5. Conclusion
This paper shows that reactive enforcement might be superior to proactive enforcement.
We studied a modified inspection game in which the enforcement technology exhibits
diminishing marginal return. We showed that the offender enjoys a first-mover advantage
as compared to a Nash game. In particular, in an offender-leadership game, the offender’s
non-compliance decision incorporates the effect of non-compliance on the enforcer’s
enforcement strategy. Consequently, the level of compliance is higher and the level of
enforcement is lower in an offender-leadership game than in a Nash game. This implies
that the enforcer’s payoff is higher in an offender-leadership game than in an enforcerleadership game if the level of compliance is higher in an offender-leadership game. If,
by contrast, the level of compliance is lower in an offender-leadership game, then the
enforcer’s payoff might still be higher in an offender-leadership game if the enforcer’s
costs of achieving deterrence in an enforcer-leadership game are sufficiently high.
Appendix
Example 1. Suppose the probability of detecting non-compliance is given by p(k) = 1 − e^{−αk}, where α > 1/H. Then the enforcer's equilibrium payoff is higher (lower) in SEQe than in SEQo if G/(G+S) < (>) 1 − 1/e. ║
First, note that p(k) ∈ [0,1), p(0) = 0, p'(k) = αe^{−αk} > 0, p'(0) = α > 1/H, and p''(k) = −α²e^{−αk} < 0.

Next, consider the equilibrium outcomes in SEQe. Recall that p(k̃) = G/(G+S) and p'(k̂) = 1/H; hence k̃ = (1/α)ln((G+S)/S), k̂ = (1/α)ln(αH), and p(k̂) = 1 − (αH)^{−1}.

The enforcer's equilibrium payoff if q* = 1 is −[k̂ + (1 − p(k̂))H] = −(1/α)(ln(αH) + 1) = −(1/α)ln(eαH). Thus, the enforcer prefers to fully deter non-compliance iff k̃ < k̂ + (1 − p(k̂))H iff (1/α)ln((G+S)/S) < (1/α)ln(eαH) iff G/(G+S) < 1 − (eαH)^{−1}.
We summarize these results in the following Lemma:
Lemma A1 (equilibrium in SEQe)
Assume the enforcement technology is given by p(k) = 1 − e^{−αk}, where α > 1/H. Then, if the enforcer moves first, the following strategy profiles are the unique subgame perfect equilibria:
(a) If G/(G+S) < 1 − (eαH)^{−1}, then k* = (1/α)ln((G+S)/S) = k̃ and q* = 0.
(b) If G/(G+S) ≥ 1 − (eαH)^{−1}, then k* = (1/α)ln(αH) = k̂ and q* = 1. ║
Next, consider the equilibrium outcomes in SEQo. From (11), the offender's maximization problem for q ∈ (q̲, 1) is max_{q∈(q̲,1)} q[G − p(k_br(q))(G + S)], where q̲ = 1/(p'(0)H) = 1/(αH). Differentiating with respect to q and rearranging (see (13)) gives

    G − (G + S)(1 − e^{−αk} + e^{−αk}) = −S < 0.

The offender thus chooses q* = q̲ for all S, G, and H.
We summarize this result in the following Lemma:
Lemma A2 (equilibrium in SEQo)
Assume the enforcement technology is given by p(k) = 1 − e^{−αk}, where α > 1/H. Then, if the offender moves first, the offender does not comply with probability q̲ = 1/(αH) and the enforcer spends zero on monitoring, for all G, S, and H. ║
To show that the enforcer's equilibrium payoff is higher (lower) in SEQe than in SEQo if G/(G+S) < (>) 1 − 1/e, recall that (i) the enforcer's equilibrium payoff in SEQe is −k̃ = −(1/α)ln((G+S)/S) if q* = 0 and −(1/α)ln(eαH) if q* = 1 (see Lemma A1), and (ii) the enforcer's equilibrium payoff in SEQo is −q̲H = −1/α (see Lemma A2). Since −(1/α)ln((G+S)/S) > −1/α iff (G+S)/S < e iff G/(G+S) < 1 − 1/e, and since −(1/α)ln(eαH) < −1/α whenever αH > 1, the enforcer's equilibrium payoff is higher (lower) in SEQe than in SEQo if G/(G+S) < (>) 1 − 1/e.
Example 2. Suppose the probability of detecting non-compliance is given by p(k) = 1 − (β/(k+β))^θ, where θ > 0 and H > β/θ. Then, if q* < 1 in either SEQo or SEQe:
(a) If θ < 1, then the enforcer's equilibrium payoff is higher in SEQe than in SEQo.
(b) If θ = 1, then the enforcer's equilibrium payoff is higher (identical) in SEQe than in SEQo if q* = (>) q̲ in SEQo.
(c) If θ > 1, then the enforcer's equilibrium payoff is higher (lower) in SEQo than in SEQe if G/(G+S) > (<) 1 − (θ/(1+θ))^θ. ║
First, note that p(k) ∈ [0,1), p(0) = 0, p'(k) = θβ^θ(k+β)^{−(θ+1)} > 0, p'(0) = θ/β > 1/H, and p''(k) = −θ(θ+1)β^θ(k+β)^{−(θ+2)} < 0.

Next, consider the equilibrium outcomes in SEQe. Recall that p(k̃) = G/(G+S) and p'(k̂) = 1/H. Hence, k̃ = β((G+S)/S)^{1/θ} − β, k̂ = (θβ^θH)^{1/(θ+1)} − β, and p(k̂) = 1 − (θH/β)^{−θ/(θ+1)}.

The enforcer's equilibrium payoff if q* = 1 is thus −[k̂ + (1 − p(k̂))H] = −[(θβ^θH)^{1/(θ+1)}(1+θ)/θ − β].

The enforcer fully deters non-compliance iff k̃ < k̂ + (1 − p(k̂))H iff β((G+S)/S)^{1/θ} < (θβ^θH)^{1/(θ+1)}(1+θ)/θ iff G/(G+S) < 1 − θ^{θ²/(θ+1)}(H/β)^{−θ/(θ+1)}(1+θ)^{−θ}.
We summarize these results in the following Lemma:
Lemma A3 (equilibrium in SEQe)
Assume the enforcement technology is given by p(k) = 1 − (β/(k+β))^θ, where θ > 0 and H > β/θ. Then, if the enforcer moves first, the following strategy profiles are the unique subgame perfect equilibria:
(a) If G/(G+S) < 1 − θ^{θ²/(θ+1)}(H/β)^{−θ/(θ+1)}(1+θ)^{−θ}, then k* = β((G+S)/S)^{1/θ} − β = k̃ and q* = 0.
(b) If G/(G+S) ≥ 1 − θ^{θ²/(θ+1)}(H/β)^{−θ/(θ+1)}(1+θ)^{−θ}, then k* = (θβ^θH)^{1/(θ+1)} − β = k̂ and q* = 1. ║
Consider now the equilibrium outcomes in SEQo. From (11), the offender's maximization problem for q ∈ (q̲, 1) is max_{q∈(q̲,1)} q[G − p(k_br(q))(G + S)], where q̲ = 1/(p'(0)H) = β/(θH). Differentiating with respect to q and rearranging (see (13)) gives

    G − (G + S)(1 − (1/(1+θ))(β/(k+β))^θ).

Equating to zero and solving for k yields

    k⁺ = β((G+S)/((1+θ)S))^{1/θ} − β.

The offender in SEQo thus chooses q* ∈ (q̲, 1) iff 0 < k⁺ < k̂ iff 1 − 1/(1+θ) < G/(G+S) < 1 − (1/(1+θ))(θH/β)^{−θ/(θ+1)} (note that H > β/θ implies (θH/β)^{θ/(θ+1)} > 1, and hence this interval is non-empty). It follows that for G/(G+S) ≤ 1 − 1/(1+θ), q* = q̲, and for G/(G+S) ≥ 1 − (1/(1+θ))(θH/β)^{−θ/(θ+1)}, q* = 1.

Suppose 0 < k⁺ < k̂. Then p(k⁺) = 1 − (1+θ)S/(G+S) and p'(k⁺) = (θ/β)((1+θ)S/(G+S))^{(θ+1)/θ}. Recall that q* = 1/(p'(k⁺)H); hence q* = (β/(θH))((G+S)/((1+θ)S))^{(θ+1)/θ}.
We summarize these results in the following Lemma:
Lemma A4 (equilibrium in SEQo)
Assume the enforcement technology is given by p(k) = 1 − (β/(k+β))^θ, where θ > 0 and H > β/θ. Then, if the offender moves first, the following strategy profiles are the unique subgame perfect equilibria:
(a) If G/(G+S) ≤ 1 − 1/(1+θ), then k* = 0 and q* = q̲.
(b) If 1 − 1/(1+θ) < G/(G+S) < 1 − (1/(1+θ))(θH/β)^{−θ/(θ+1)}, then k* = β((G+S)/((1+θ)S))^{1/θ} − β = k⁺ and q* = (β/(θH))((G+S)/((1+θ)S))^{(θ+1)/θ} > q̲.
(c) If G/(G+S) ≥ 1 − (1/(1+θ))(θH/β)^{−θ/(θ+1)}, then k* = (θβ^θH)^{1/(θ+1)} − β = k̂ and q* = 1. ║
Note that, if 0 < k⁺ < k̂, then the expected harm in equilibrium is q*(1 − p(k⁺))H = (β/θ)((G+S)/((1+θ)S))^{1/θ}. Accordingly, the enforcer's equilibrium payoff is −[k⁺ + q*(1 − p(k⁺))H] = −[β(1 + 1/θ)((G+S)/((1+θ)S))^{1/θ} − β].
The next lemmas prove the statements made in Example 2.
Lemma A5
Assume the enforcement technology is given by p(k) = 1 − (β/(k+β))^θ, where θ > 1 and H > β/θ. Then:
(i) If q* = 1 in SEQo, then q* = 1 in SEQe.
(ii) If q* = 1 in SEQe, then either q* = 1 or q* < 1 in SEQo.
(iii) If q* = 0 in SEQe and q* ∈ (q̲, 1) in SEQo, then the enforcer's equilibrium payoff is higher in SEQo than in SEQe.
(iv) If q* = 0 in SEQe and q* = q̲ in SEQo, then the enforcer's equilibrium payoff is higher (lower) in SEQo than in SEQe if G/(G+S) > (<) 1 − (θ/(1+θ))^θ.
Proof.
(i) and (ii). We proceed by showing that, for θ > 1, the no-compliance cutoff in SEQe lies below the no-compliance cutoff in SEQo:

    1 − θ^{θ²/(θ+1)}(H/β)^{−θ/(θ+1)}(1+θ)^{−θ} < 1 − (1/(1+θ))(θH/β)^{−θ/(θ+1)}

iff θ^{(θ²+θ)/(θ+1)}(1+θ)^{1−θ} > 1 iff θ^θ(1+θ)^{1−θ} > 1 iff (1 + 1/θ)^θ < 1 + θ.

Since ln(x) < ln(y) iff x < y, the proof is completed by showing that θ ln(1 + 1/θ) < ln(1 + θ) for θ > 1. Since θ ln(1 + 1/θ) = ln(1 + θ) for θ = 1, the inequality must hold for θ > 1 if the derivative of the LHS, ln(1 + 1/θ) − 1/(1+θ), is smaller than the derivative of the RHS, 1/(1+θ); that is, if ln(1 + 1/θ) < 2/(1+θ) for θ ≥ 1. To see this, note that ln(1 + 1/θ) = ∫_θ^{θ+1}(1/t)dt < 1/θ ≤ 2/(1+θ) for θ ≥ 1, where the equality follows from the definition of the logarithm function and the first inequality follows from the definition of the integral as the area circumscribed between the integrand and the x-axis and the fact that 1/t is decreasing in t.
(iii) Suppose that q* = 0 in SEQe and q* = q⁺ ∈ (q̲, 1) in SEQo. Then the enforcer's equilibrium payoff is greater in SEQo than in SEQe iff k̃ > k⁺ + q⁺(1 − p(k⁺))H; that is, iff

    β((G+S)/S)^{1/θ} − β > β(1 + 1/θ)((G+S)/((1+θ)S))^{1/θ} − β

iff (1+θ)^{1/θ} > 1 + 1/θ iff 1 + θ > (1 + 1/θ)^θ, which, as was shown above, holds for θ > 1.

(iv) Suppose G/(G+S) > 1 − (θ/(1+θ))^θ. Then S/(G+S) < (θ/(1+θ))^θ and therefore ((G+S)/S)^{1/θ} > 1 + 1/θ. Multiplying through by β gives β((G+S)/S)^{1/θ} > β + β/θ, which implies k̃ > q̲H; since the enforcer's payoff is −k̃ in SEQe and −q̲H in SEQo, the enforcer's payoff is higher in SEQo. A similar proof shows that if G/(G+S) < 1 − (θ/(1+θ))^θ, then k̃ < q̲H. ■
Lemma A6
Assume the enforcement technology is given by p(k) = 1 − (β/(k+β))^θ, where 0 < θ < 1 and H > β/θ. Then:
(i) If q* = 1 in SEQe, then q* = 1 in SEQo.
(ii) If q* = 1 in SEQo, then either q* = 1 or q* = 0 in SEQe.
(iii) If q* = 0 in SEQe, then the enforcer's equilibrium payoff is higher in SEQe than in SEQo.
Proof.
(i) and (ii). We proceed by showing that, for 0 < θ < 1, the ordering of the two cutoffs is reversed:

    1 − θ^{θ²/(θ+1)}(H/β)^{−θ/(θ+1)}(1+θ)^{−θ} > 1 − (1/(1+θ))(θH/β)^{−θ/(θ+1)}

iff θ^θ(1+θ)^{1−θ} < 1 iff (1 + 1/θ)^θ > 1 + θ.

Let δ = 1/θ > 1. Then we have to show that (1 + δ)^{1/δ} > 1 + 1/δ for δ > 1. Now, from the proof of parts (i) and (ii) of Lemma A5 we know that (1 + 1/δ)^δ < 1 + δ for δ > 1. Raising both sides to the power of 1/δ gives 1 + 1/δ < (1 + δ)^{1/δ}.

(iii) Suppose first that q* = 0 in SEQe and q* = q⁺ ∈ (q̲, 1) in SEQo. Then the enforcer's equilibrium payoff is higher in SEQe than in SEQo iff k̃ < k⁺ + q⁺(1 − p(k⁺))H; that is, iff

    β((G+S)/S)^{1/θ} − β < β(1 + 1/θ)((G+S)/((1+θ)S))^{1/θ} − β

iff (1+θ)^{1/θ} < 1 + 1/θ iff 1 + θ < (1 + 1/θ)^θ, which, as was shown above, holds for θ < 1.

Next, suppose that q* = 0 in SEQe and q* = q̲ in SEQo. Then, from Lemma A4, G/(G+S) ≤ 1 − 1/(1+θ). Recall that 1 + θ < (1 + 1/θ)^θ for 0 < θ < 1; hence 1/(1+θ) > (θ/(1+θ))^θ. It follows that 1 − 1/(1+θ) < 1 − (θ/(1+θ))^θ and thus G/(G+S) < 1 − (θ/(1+θ))^θ. Rearranging terms gives S/(G+S) > (θ/(1+θ))^θ, which implies ((G+S)/S)^{1/θ} < 1 + 1/θ. Multiplying through by β gives β((G+S)/S)^{1/θ} < β + β/θ, which implies k̃ < q̲H. ■
Lemma A7
Assume the enforcement technology is given by p(k) = 1 − (β/(k+β))^θ, where θ = 1 and H > β. Then:
(i) q* = 1 in SEQe iff q* = 1 in SEQo.
(ii) If q* = 0 in SEQe and q* ∈ (q̲, 1) in SEQo, then the enforcer's equilibrium payoff is identical in SEQo and in SEQe.
(iii) If q* = 0 in SEQe and q* = q̲ in SEQo, then the enforcer's equilibrium payoff is weakly higher in SEQe than in SEQo.
Proof.
Parts (i) and (ii) follow from the proofs of Lemmas A5 and A6, noting that for θ = 1 the relevant inequalities hold with equality.

To prove part (iii), suppose that q* = 0 in SEQe and q* = q̲ in SEQo. Then, from Lemma A4, G/(G+S) ≤ 1 − 1/(1+θ) = 1/2. Rearranging terms gives S/(G+S) ≥ 1/2, which implies (G+S)/S ≤ 2. Multiplying through by β gives β((G+S)/S) − β ≤ β, which implies k̃ ≤ q̲H. ■
References

Baik, K. and J. Shogren (1992). Strategic Behavior in Contests: Comment. American Economic Review 82: 359-362.

Becker, G. (1968). Crime and Punishment: An Economic Approach. Journal of Political Economy 76: 169-217.

Dixit, A. (1987). Strategic Behavior in Contests. American Economic Review 77: 891-898.

Garoupa, N. (1997). The Theory of Optimal Law Enforcement. Journal of Economic Surveys 11: 267-295.

Graetz, M., J. F. Reinganum, and L. L. Wilde (1986). The Tax Compliance Game: Toward an Interactive Theory of Law Enforcement. Journal of Law, Economics, and Organization 2: 1-32.

Polinsky, A. M. and S. Shavell (2007). Public Enforcement of Law. In A. M. Polinsky and S. Shavell (eds.), Handbook of Law and Economics, Vol. 1. Elsevier, 403-454.

Tsebelis, G. (1990). Are Sanctions Effective? A Game Theoretic Analysis. Journal of Conflict Resolution 34: 3-28.