
Killing Conscience:
The Unintended Behavioral Consequences
of “Pay For Performance”
Lynn A. Stout
Distinguished Professor of Corporate and Business Law
Cornell School of Law
March 2012
Abstract
Contemporary lawmakers and reformers often assume that ex ante incentive contracts
providing for large material rewards are the best, and possibly only, way to motivate corporate
executives and other employees to serve their firms’ interests. This article critiques the “pay for
performance” approach. It surveys empirical evidence from behavioral science that
demonstrates that relying on material incentives to motivate performance in incomplete
relational contracts can be counterproductive, because it can suppress desirable unselfish
prosocial behavior (conscience). The article explores how, for a variety of mutually-reinforcing
reasons, workplaces that rely on incentive-based pay can discourage conscientious behavior and
instead encourage opportunism and even illegality.
INTRODUCTION
In 2008, the U.S. Justice Department charged the Swiss bank UBS with orchestrating a
massive scheme to help wealthy Americans evade U.S. tax laws. The UBS case, the largest tax
fraud investigation in history, was eventually settled after UBS agreed to pay a $780,000,000
fine and reveal the names of some 4,450 possible tax cheats.1 But it began with the arrest of a
single individual, Bradley Birkenfeld.
Birkenfeld was one of many UBS bankers who had built careers helping UBS clients
evade U.S. taxes. After his arrest he agreed to cooperate with the Justice Department's
investigation of UBS, and to plead guilty to a single count of conspiracy to defraud the U.S.
government. The judge who heard Birkenfeld's plea asked him why he chose to participate in
the scheme when he knew he was breaking the law. The 43-year-old banker replied, “I was
incentivized to do this business.”2
The UBS scandal is only one of several recent high-profile cases in which incentives
reportedly tempted employees into opportunistic or even illegal behavior.
In February 2010, the state of Georgia announced a statewide investigation into what appears to
be widespread tampering with student exam answer sheets by teachers trying to earn
1. USA Today, UBS Tax Evasion Whistle-Blower Reports to Federal Prison, January 8, 2010.
2. Evan Perez, Guilty Plea By Ex-Banker Likely to Aid Probe of UBS, Wall St. J. (June 20, 2008).
performance bonuses.3 Incentive pay has been blamed for causing the savings and loan crisis in
the late 1980s and the Enron and Worldcom accounting frauds in the late 1990s.4 Incentive pay
has also been identified as a root cause of the 2008 credit crisis, when the prospect of
performance-based bonuses tempted mortgage brokers to approve home loans to unqualified
buyers, and lured executives at financial companies like Bear Stearns and AIG into making risky
derivatives bets that nearly brought down their firms.5
Commentators who favor incentive-based pay would likely argue that in each of these
cases, the problem lay not in the use of incentives per se, but rather in the use of poorly-designed
incentives. If we are sufficiently careful in measuring and rewarding individual performance,
the “optimal contracting” argument goes, pay-for-performance schemes harness the forces of
greed and self-interest to promote greater efficiency and better economic performance.
This article argues, however, that pay-for-performance strategies by their very nature
often prove counterproductive and even disastrous “solutions” to complex and stubborn social
problems like corporate scandals and failing schools. Optimal contracting theory dominates the
ongoing debate over executive compensation, and is seeping into other policy discussions as
well, because would-be reformers believe that even if we can't do much else, we can at least “get
the incentives right.” The underlying assumption seems to be that using incentives might help,
and can't possibly hurt.
This Article argues, however, that pay-for-performance schemes can hurt. Optimal
contracting theory relies on a homo economicus model of purely self-interested behavior that
predicts that ex ante material incentives are the best, and possibly only, tool available to motivate
an agent to do something a principal wants the agent to do. While this behavioral model is
elegant and powerful, it can also be dangerously misleading. Extensive empirical evidence
demonstrates that when employment contracts are incomplete--as all contracts must be, to a
greater or lesser degree--employers may often get better results by emphasizing what might be
called “internal” incentives, especially the internal force laymen call conscience. What's more,
conscience and self-interest are often substitutes, rather than complements. This means that
excessive reliance on ex ante financial incentive arrangements—even well-designed ones—can
create “psychopathogenic” environments in which conscience may be suppressed or snuffed out.
The outcome is not necessarily more efficient agent behavior, but possibly more opportunistic,
unethical, and even illegal agent behavior.
This Article uses the evidence from behavioral science to lay out a theoretical foundation
for the claim that when contracts are incomplete, incentive-based pay schemes can prove
counterproductive. In making the logical case against “pay for performance,” the Article does
not suggest that pay itself (that is, compensation) is unnecessary. Few employees are willing to
work very long or very hard for free. Nor does it argue that incentive-based pay is always
counterproductive. There may be some agency tasks in which explicitly-negotiated ex ante
incentive schemes perform quite well.
The Article does argue that ex ante incentives are not the only means available for
motivating employees. Extensive behavioral evidence demonstrates that, with the right
3. Shaila Dewan, Georgia School Inquiry Finds Signs of Cheating, NY Times (Feb. 12, 2010).
4. “The Disastrous Unexpected Consequences of Private Compensation Reforms,” Testimony of William K. Black before the House Committee on Oversight and Government Reform, October 28, 2009, at p. 2; Margaret M. Blair, (CITE Enron chapter); William Bratton (CITE).
5. CITES
combinations of social cues and discretionary, ex post rewards, many agents can be motivated to
act “prosocially” (conscientiously) by working harder and more honestly than their formal
contracts with their principals can force them to. Moreover, for many of the complex tasks that
principals want agents to perform in the business world and elsewhere, the optimal contracting
approach to agent motivation can prove quite dangerous, as incentives for a variety of reasons
suppress desirable prosocial behavior. Thus, instead of relying on pay-for-performance schemes,
this Article counsels that corporations and other employers often might do better to rely instead
on what this Article calls “trust-based” compensation arrangements, which presume both the
principal's and the agent's capacity to reciprocate prosocial behavior.6
Part I of this Article briefly reviews the optimal contracting approach and the rise of the
ideology of incentives. As Part I demonstrates, when optimal contracting theorists speak of
“incentives,” they are not using the word in a broad sense, as a synonym for “motivations.”
(“My love for my child gives me incentive to take her to the pediatrician.”) Rather, they are
referring to specific financial or material rewards that are formally negotiated and determined ex
ante. This notion that incentives provide the best and possibly only way to reliably channel
human behavior—an idea that implicitly assumes people are opportunistic and selfish--has
exercised increasing influence both in private employment markets and regulatory policy. For
example, optimal contracting theory was cited to support changing the tax code in
1993 to encourage public companies to rely on “high-powered” incentive schemes to compensate
their executives. While American corporations did indeed rely increasingly on performance-based pay, contrary to the predictions of optimal contracting theory, the resulting shift in
executive compensation patterns has not produced measurably better corporate performance. To
the contrary, it has been accompanied by disappointing investor returns, an outbreak of corporate
frauds and scandals, and the near-collapse of the financial sector. Nevertheless, rather than
questioning the efficacy of incentive-based pay schemes, contemporary lawmakers and would-be
reformers continue to insist the solution is simply to design better ones.
Part II explores some flaws in this approach. In particular, Part II provides an
introduction to and survey of what behavioral science in general, and experimental gaming in
particular, has revealed about the behavioral phenomenon of unselfish prosocial behavior.
Overwhelming empirical evidence now demonstrates that--contrary to the assumption of
opportunistic selfishness that underlies the incentive approach--real people frequently act in an
unselfish and prosocial fashion. In lay terms, they act as if they have a conscience that spurs
them, at least sometimes, to sacrifice their own material payoffs in order to help or to avoid
harming others and to follow ethical rules. While different individuals show different
proclivities toward conscientiousness, the data demonstrates that conscience is neither rare nor
quirky. To the contrary, almost anyone other than a psychopath is likely to act unselfishly when
certain social cues support unselfishness and the personal cost of acting unselfishly is not too
high.
Part II uses these findings to propose a simple model of “conscience” that offers at least
four useful lessons for optimal contracting theory. First, conscience (unselfish prosocial
behavior) exists: it is a very real and very common behavioral phenomenon. Second,
conscientious behavior seems to be triggered primarily by certain important social cues,
6. In a recent article in the Columbia Law Review, Ronald Gilson, Charles Sabel, and Robert Scott refer to these sorts of contracts as “braided” contracts. ADD CITE
especially instructions from authority, beliefs about others' prosocial behavior, and perceptions
of benefits to others. Third, even when the social cues support conscience, it can disappear if the
personal sacrifice of acting conscientiously becomes too great. Fourth, although almost anyone
other than a psychopath is capable of acting conscientiously when social context supports
conscientious behavior and the personal cost is not too large, individuals vary in their willingness
and inclinations toward unselfish prosocial behavior.
Part III explores what this model implies about the possible behavioral effects of
employing high-powered incentives. It illustrates how, through at least three different but
mutually-reinforcing mechanisms, incentive-based pay schemes tend to suppress conscience, and
so encourage opportunistic and even illegal behavior that conscience otherwise would keep in
check. First, incentive schemes frame social context in a fashion that encourages people to
conclude purely selfish behavior is both appropriate and expected. As a result, pay-for-performance rules “crowd out” concern for others' welfare and for ethical rules, making the
assumption of selfish opportunism a self-fulfilling prophecy. Second, the possibility of reaping
large personal rewards from incentive schemes tempts people to cut ethical and legal corners,
and for a variety of reasons, once an individual succumbs to temptation, future lapses become
more likely. The result can be a downward spiral into opportunistic and unlawful behavior.
Third, industries and firms that emphasize incentive pay tend to attract individuals who, even if
they are not psychopathic, nevertheless are more inclined to selfish behavior than the average.
Once relatively selfish actors come to dominate a workplace, the model predicts that many less-selfish employees will leave, and even the prosocial executives and employees who remain will
start acting in a more purely self-interested and opportunistic fashion.
Part IV concludes by considering some implications of this behavioral analysis for
contemporary law and policy. The pay-for-performance approach dominates compensation
practices in the executive suite today. But it is also gaining popularity in our nation's schools,
newsrooms, and medical centers. The scientific evidence on prosocial behavior suggests this
may be a dangerous development. Behavioral science teaches that it can be counterproductive to
compensate people primarily through large ex ante financial incentives. Sometimes, perhaps
often, principals get better results by adopting the opposite approach to compensating agents, and
emphasizing rewards that are modest, nonmonetary, and determined ex post. This reality has
important implications not only for the current debate over regulating executive compensation,
but other pressing issues of law and public policy as well.
I. OPTIMAL CONTRACTING AND THE IDEOLOGY OF INCENTIVES
Economists and legal scholars have been studying the problem of how to set executive
compensation for decades.7 From the beginning, however, the academic literature on executive
compensation has shared one common characteristic: it has analyzed the question of
compensation from an “optimal contracting” perspective.8 Optimal contracting theory, in turn,
views the task of setting an executive's compensation (or any employee's or agent's
compensation) as a version of what economists call the agency cost problem.
7. See, e.g., Jensen and Meckling (1976) CITE; Jensen and Murphy, “CEO Incentives: It's Not How Much You Pay, But How,” 68 Harv. Bus. Rev. 138 (1990); Bebchuk and Fried (2004) CITE; Anabtawi (2005) CITE.
8. Anabtawi, supra note __, at 1561 (“The optimal contracting model underlies most scholarship in the area of executive compensation.”)
Economic theory predicts that “agency costs” arise whenever a rational and selfish
principal hires a rational and selfish agent to accomplish something the principal wants done.
Because the agent is selfish, if he is left to his own devices, he might not do what the principal
wants. To use the words of Michael Jensen and William Meckling, two of the earliest and most
influential writers in the executive compensation debate, “if both parties to the relationship are
utility maximizers there is good reason to believe the agent will not always act in the best
interests of the principal.”9 By the same token, if the principal has discretion over whether or not
to pay the agent, she will likely exercise that discretion to minimize or even decline payment to
the agent. The solution for both parties is to draft an “efficient” or “optimal” contract that
obligates the principal to pay the agent specific compensation that is tied ex ante to some
observable measure of the agent's performance.
In the executive compensation debate, the agency cost problem is typically framed as a
problem of getting corporate directors and executives to serve the interests of the firm's
shareholders. As Margaret Blair has put it, “part of the conventional wisdom has been that
directors and managers of companies will always make decisions in ways that serve their own
personal interests unless they are either tightly monitored and constrained (which is costly and
raises the question of who will monitor the monitors) or given very strong incentives…”10 Thus
the problem of executive compensation is viewed as a problem of designing proper incentives to
motivate managers to serve shareholders' interests.
The Meaning of “Incentives”
It is critical to understand that when executive compensation experts talk about “strong
incentives,” they are not employing the word “incentives” broadly, as a synonym for
“motivations.” In particular, they are not talking about subjective motivations like guilt, love,
pride, shame, or any other internal concern that might spur a person to change his or her
behavior. In optimal contracting scholarship, “incentives” refers to punishments or rewards that
share three important characteristics. First, they are monetary or material in nature. Second,
they are of a significant size. Third, they are contractually predetermined, set in advance by some
ex ante algorithm or formula.
Let us consider each of these characteristics in turn, as they are important elements of the
optimal contracting approach that explain many of its limitations. First, although the word
“incentive” can be used broadly to refer to anything that might inspire a change in behavior (my
guilt gives me “incentive” to call my mother), this approach reduces much of economic theory to
a tautology. After all, if economics is based on the principle that “people respond to
incentives,”11 and incentives are then defined as “anything people respond to,” the logic becomes
evidently circular. Moreover, only monetary, or at least material, incentives lend themselves to
formal incentive contracting. It is relatively easy to enforce a contractual promise that takes the
form of “you will get a million stock options at an exercise price of $30 per share.” It is much
9. Jensen and Meckling (1976), supra note __, at 308.
10. Blair, supra note __, at 60.
11. Popular economists do sometimes use the word “incentives” in this broad and tautological sense, but doing so renders the underlying principle of economics—that people respond to incentives—almost meaningless. See, e.g., Steven E. Landsburg, The Armchair Economist: Economics and Everyday Life 3 (1993) (“Most of economics can be summarized in four words: 'people respond to incentives.'”)
harder, and perhaps impossible, to enforce a promise like “you will be loved, honored, and
esteemed.” Thus commentators who advocate the pay-for-performance are really advocating
“pay money, or some other good with market value, for performance.”
Second, as the phrase “high-powered incentives” implies, optimal contracting theory does
not object to, and even embraces, very large incentive payments. After all, the larger the
payment, the more it “incentivizes” the agent to perform.12 Conversely, nominal or token
rewards have no importance in the theory.
Third and perhaps most important, optimal contracting theory is based on the assumption
that the rules for determining exactly what the agent must do to earn his or her pay, and for
deciding the form and magnitude of the agent‟s pay, must be objective and must be agreed upon
in advance. Ex ante agreement to an objective metric is essential because optimal contract
theory, like other theories that rely on the homo economicus model, leaves no room for trust. No
rational and purely selfish agent would be so foolish as to rely on an employment contract that
read, “do a good job and you will be rewarded with a substantial bonus at the end of the year.”
Similarly, no rational and selfish principal would hire an executive based on a contract that said “in
return for a million-dollar salary, I'll work my heart out.” Incentive contracts can control the
behavior of rational and opportunistic agents and principals only if their terms are set out in
advance and are clear, objective, and enforceable.
The Rise of Incentive Ideology
Judged by these standards, the methods that old-fashioned Corporate America used to
compensate its chief executive officers (CEOs) and other employees—“old-fashioned” meaning
common practices before optimal contracting theory attracted widespread support and attention
in the 1980s and 1990s—were hopelessly backward and inefficient. Corporations typically
compensated executives mostly with fixed salaries and the occasional bonus. Moreover, both
salary and bonus were often determined ex post, on the basis of highly subjective criteria. (“You
did a great job last year, we're giving you a bonus and a raise.”) Nonmonetary rewards were
coveted and common. (“You‟ve earned a key to the executive washroom.”) Executive pay,
though hardly stingy, was relatively modest and stable. Compensation experts Michael Jensen
and Kevin Murphy report that from 1982 to 1988, CEOs of major corporations enjoyed an
average salary of $843,000--slightly less, when adjusted for inflation, than CEOs of comparable
companies earned from 1934 through 1938.13
Despite this, it is perhaps fair to say that American executives seemed to do a decent
enough job for investors in the days before “pay for performance.” Companies run by
executives paid with fixed salaries and modest bonuses set ex post provided significant positive
returns to investors. Between 1950 (the year the Standard & Poor's 500 Index was first
published) and 1990, the Index returned gains averaging more than 8 percent each decade.14
During the late 1980s and early 1990s, however, the ideology of incentive pay captured
the hearts and minds of reformers and business leaders alike. This enthusiasm for the optimal
12. Bebchuk & Fried, supra note __, at 6 (“prominent financial economists such as Michael Jensen and Kevin Murphy urged shareholders to be more accepting of large pay packages that would provide high-powered incentives”).
13. Michael Jensen and Kevin Murphy (2004) CITE
14. CITE
contracting approach was part of a broader social trend, the rise of “law and economics.” In his
2008 study of conservative trends in legal thought, Steven Teles concluded that the law and
economics movement “is the most successful intellectual movement in the law of the past thirty
years, having rapidly moved from insurgency to hegemony.”15 Much the same might be said of
the ideology of incentive pay. In the area of corporate governance, for example, the idea that
executives and directors can only be trusted to work hard and honestly if their pay is somehow
tied ex ante to an objective metric of performance has been accepted by a generation of corporate
scholars as a truth so obvious it does not need further examination.16
More important, the ideology of incentives seems to have influenced the law. One of the
clearest examples can be found in the U.S. tax code. In 1990, economists Michael Jensen and
Kevin Murphy published an influential article in the Harvard Business Review calling for
companies to tie their executives' pay to objective metrics. Only a few years later, the U.S.
Congress passed a major revision of the Internal Revenue Code (I.R.C. Section 162(m)), which
encourages public companies to do just that.17 Section 162(m) provides that public corporations
cannot deduct annual compensation in excess of $1 million paid to their top five executives,
unless that compensation is tied to an objective corporate performance metric. Section 162(m)
accordingly requires corporations seeking to minimize their tax burdens to adopt incentive pay
schemes for their most highly-paid executives.
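
To make the Section's mechanics concrete, the deduction limit can be sketched in a few lines of Python. This is an illustration only, not the statutory text; the actual provision contains definitions and exceptions (for example, what counts as qualifying performance-based compensation) that are omitted here.

    DEDUCTION_CAP = 1_000_000  # cap on deductible non-performance-based pay

    def deductible_compensation(salary, performance_pay):
        """Portion of a covered executive's annual pay the company may deduct."""
        # Qualifying performance-based pay is exempt from the cap;
        # fixed salary is deductible only up to $1 million.
        return min(salary, DEDUCTION_CAP) + performance_pay

    print(deductible_compensation(salary=5_000_000, performance_pay=0))
    # 1000000: a $5 million fixed salary leaves $4 million nondeductible
    print(deductible_compensation(salary=1_000_000, performance_pay=4_000_000))
    # 5000000: the same $5 million, paid mostly as qualifying incentive
    # compensation, is fully deductible

As the example suggests, the Section does not forbid large fixed salaries; it simply makes incentive pay the tax-favored way to deliver them.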
To the extent it was intended to rein in the size of executive pay, Section 162(m) has
proved an utter failure.18 In the wake of the Section's adoption, executive pay at public
companies has increased dramatically.19 Section 162(m) appears to have been far more
successful, however, in influencing the way public companies compensate their CEOs and other
executives. The years since 1993 have seen a seismic shift in the compensation practices of
American business corporations, to the point where incentive pay now provides the bulk of
compensation for top executives. For example, in 1994, immediately after the Section's passage,
the percentage of CEO compensation attributable to stock option grants was only 35 percent. By
2001, this figure had risen to over 85 percent.20
Does Incentive Pay Work for Corporate Executives? Considering the Evidence
Thanks to I.R.C. Section 162(m) and other regulatory changes that have encouraged
business firms to adopt the pay for performance approach,21 we have now had nearly two
decades' experience with the enthusiastic embrace of incentive-based pay in business
corporations. It is worth stopping to consider what we have learned so far from this massive
natural experiment in human motivation.
The question of what contributes to high performance in both individual companies and
the broader economy is difficult and complex. Any discussion of how and why Section 162(m)
15. Steven M. Teles, The Rise of the Conservative Legal Movement: The Battle for Control of the Law 216 (2008)
16. Lynn Stout, Cultivating Conscience: How Good Laws Make Good People 42 (2011)
17. Internal Revenue Code Sec. 162(m) (1993) CITE
18. Jeffrey D. Korzenik, The Tax Code Encourages Wall Street Bonuses, Forbes (Feb. 4, 2009) CITE
19. See infra TAN ___.
20. Blair, supra note __, at 61
21. DISCUSS, e.g., SEC rules requiring mutual funds to disclose their proxy voting and subsequent rise of proxy-voting advisor ISS/RiskMetrics, which favors pay for performance.
has had an effect on the larger corporate sector is inevitably highly speculative. Nevertheless, it
is worth recognizing that optimal contracting theory predicts that, other things being equal,
Corporate America's shift toward incentive-based pay in the 1990s and 2000s should have
produced a significant improvement in the performance and profitability of U.S. companies, and
a corresponding increase in investor wealth.
This prediction has not been borne out. Where the S&P 500 Index saw average gains of
more than 8 percent each decade from its inception in 1950 until 1990, from 1993 on the Index
has seen gains of only 6 percent each decade.22 Meanwhile, executive pay has increased dramatically. In
1991, a few years before the adoption of Section 162(m), the average CEO of a large public
company received pay approximately 140 times that of the average employee; by 2003 the ratio
was approximately 500 times.23 The shift to performance-based pay has also been accompanied
by a disturbing outbreak of executive-driven corporate frauds, scandals, and failures, to the point
where Enron, Worldcom, AIG, and Goldman Sachs have become household names synonymous
with corporate misbehavior.
At the micro-level, the evidence in support of pay-for-performance is not much better. A
few studies have found that certain types of incentive compensation schemes seem associated
with slightly better stock performance in some firms, as measured over a relatively short time
period.24 Other studies, however, find little or no effect, or even a negative effect.25 Meanwhile,
incentive pay has been statistically associated with unethical and even illegal executive behavior,
including earnings manipulations, accounting frauds, and increased risk-taking behavior.26
It seems fair to suggest that, at a minimum, there is little or no empirical evidence to
support the claim that pay-for-performance compensation schemes, at least for corporate
executives, have proved the panacea they were supposed to be. Nevertheless, rather than
inspiring observers and policymakers to question the wisdom of emphasizing ex ante incentives,
or at least to consider alternatives, many would-be reformers have decided the solution is simply
to use more and better ones. For example, before the 2008 crisis, prominent legal scholars touted
the need to tie pay to stock performance. Now the same scholars claim the better solution is to
tie pay to “long-term” stock performance,27 or to metrics that measure the cost corporate risk-taking imposes on creditors and other nonshareholder “stakeholders.”28
Meanwhile, the ideology of incentive pay has seeped into other important public debates
as well. Experts urged the state of Georgia to adopt performance-based pay for teachers on the
theory that “to improve outcomes, the state must replicate market incentives.”29 (The result, we
are now learning, may be widespread cheating among Georgia teachers and administrators
seeking to improve their students' test scores through the simple method of erasing and
correcting the students' answers on the tests.)30 The U.S. Department of Health and Human
Services has launched a series of initiatives to explore using pay for performance systems for
22. CITE
23. Bebchuk and Fried, supra note __, at 1.
24. CITES
25. CITES
26. CITES. DISCUSS how evidence on effect of incentive pay on individual firm performance is mixed at best.
27. Lucian Bebchuk and Jesse Fried, Paying for Long-Term Performance, ___ U. Pa. L. Rev. ___ (2010), CITE
28. Lucian Bebchuk and Holger Spamann, Regulating Bankers' Pay, CITE.
29. Noel D. Campbell and Edward J. Lopez, Paying Teachers for Advanced Degrees: Evidence on Student Performance from Georgia, ___ J. Private Enterprise ___ (forthcoming) CITE
30. See supra TAN ___.
hospitals, physicians, and nursing homes.31 When Bloomberg News bought Business Week
magazine in 2009, Bloomberg's chief editor announced that the company would start basing
writers' compensation on objective metrics like whether a story's publication changed stock
market prices.32
As in the case of corporate executives, the ideology of incentives is being embraced in
these areas despite the fact there is little or no empirical evidence to support the claim that it
actually works.33 Perhaps, if we continue to emphasize and explore the pay for performance
approach, we may eventually stumble upon a proven formula for using financial incentives to
motivate optimal performance from business executives--as well as doctors, teachers, and
journalists. This Article argues, however, that for many of our most important jobs and
industries, the quest to tie pay to performance is quixotic at best and destructive at worst.
Optimal contracting theory is deeply flawed because it rests on another flawed theory: the homo
economicus theory of rational and selfish behavior.
II. A PRIMER ON PROSOCIAL BEHAVIOR
Optimal contracting theory, like most economic theory, adopts the homo economicus
assumption that people are rational and selfish actors.34 Of course, even the most ardent
enthusiast of rational choice admits there are times people are neither. Nevertheless, neoclassical
economics presumes departures from the homo economicus model are both relatively rare and
relatively random. Most of the time, the theory goes, the rational selfishness model does a pretty
good job of predicting what people will do.35
In recent years, however, the homo economicus approach has been challenged by the rise
of a “behavioral economics” school that, rather than just assuming people act rationally and
selfishly, looks to behavioral science in general and empirical experiments in particular to see
how real people actually behave. Most contemporary work in behavioral economics tends to
focus on departures from rationality, more than on departures from selfishness.36 Nevertheless,
behavioral science also demonstrates, beyond any reasonable dispute, that just as people often
make choices that appear irrational they also often make choices that seem unselfish and
“conscientious.”
Conscience as Unselfish Prosocial Behavior
31. U.S. Department of Health and Human Services, Centers for Medicare and Medicaid Services, Press Release on Medicare “Pay for Performance” Initiatives (January 31, 2005).
32. An Uneasy Marriage of the Cultish and the Rumpled, NY Times (April 25, 2010).
33. See, e.g., David N. Figlio and Lawrence Kenny, Individual Teacher Incentives and Student Performance CITE (“there is no U.S. evidence of a positive correlation between individual incentive systems for teachers and student achievement”); Meredith B. Rosenthal et al., Early Experience with Pay-for-Performance: From Concept to Practice, 14 J. Am. Med. Assoc. 294 (2005); Meredith B. Rosenthal and Richard G. Frank, What is the Empirical Basis for Paying for Quality in Health Care? 63 Med. Care Research & Rev. 135 (2006).
34. Henrich et al., Foundations of Human Sociality: Economic Experiments and Ethnographic Evidence from Fifteen Small-Scale Societies 8 (homo economicus model rests on a “selfishness axiom” that assumes “individuals seek to maximize their own material gains … and expect others to do the same.”)
35. Stout, Cultivating Conscience, supra note __ at 26-27
36. Id. at 77-78.
It is important to emphasize that the word “unselfish” is being used here to describe
behavior, not emotions. We are talking about acts, not feelings. When I pass up a convenient
opportunity to relieve a kindergartener of her lunch money, it is easy to imagine any number of
“selfish” subjective concerns that might motivate my restraint. Perhaps I want to avoid the
painful stab of guilt; perhaps I desire the warm glow of feeling virtuous; perhaps I want to avoid
the fires of Hell. Alternatively, I might suffer from an inchoate and irrational fear that, no matter
what precautions I take, my misdeed inevitably will be detected and punished. (H.L. Mencken
defined conscience as “the small inner voice that tells us someone is watching”). Whatever my
subjective emotional state,37 my objective behavior remains unselfish, in the sense I have
declined an opportunity to make myself materially better off. One does not need to understand
the internal mechanism behind such unselfish behavior to study (or value) the behavior itself.
Thus this article will describe an action as “unselfish,” “prosocial,”38 or “conscientious”
whenever the actor sacrifices time, money, or some other valuable resource in order to help or to
avoid harming another, or to follow ethical rules. This definition encompasses acts of active
altruism, like diving into a cold lake to save a drowning stranger. But it also applies to the far
more common phenomenon of “passive” altruism: declining to exploit others' trust or
vulnerability (e.g., refraining from shaking down schoolchildren for lunch money). Passive
altruism is omnipresent in American society. Even in anonymous urban environments, most
people put litter into a trash can rather than dropping it on the street, wait patiently in line at the
coffee shop rather than shoving or bribing their way to the front, and leave their neighbors'
newspapers sitting in the driveway rather than helping themselves to the morning news.
Kidnapping for ransom is not a major industry.
Indeed, passive altruism is so deeply woven into contemporary life it often goes
unnoticed. We assume the strangers around us will show us a certain amount of courtesy and
forbearance, just as we take for granted the gravitational force that keeps us from floating out
into space. As I have argued at length elsewhere, while we may pay close attention when others
cheat or misbehave, we usually don't see others' unselfish behavior, even when it happens under
our noses.39 Americans watched their TVs with horror as hundreds of New Orleans residents
began looting in the wake of Hurricane Katrina. Few of us stopped to marvel at the tens of
thousands of New Orleans residents who were not looting.
There are many reasons, including the nature of our language, our psychological quirks
and biases, and the training many of us receive in colleges and universities, why it can be hard
for us to see others‟ unselfishness.40 However, one of the most important reasons is that healthy
societies tend to create extrinsic incentives to reinforce and promote prosocial behaviors. This
makes it hard to find everyday examples that would prove to a skeptic (and as I have just
suggested, we seem prone to be skeptics when it comes to others' unselfishness) that apparently
unselfish behavior is driven at least in part by internal forces (conscience), and not only by fear
of negative external consequences. I have never taken lunch money from a kindergartener. Still,
37. DISCUSS how evolution might favor the development of internal mechanisms to reward prosocial behavior in social species.
38. Of course, not all prosocial behavior is unselfish. The greedy and materialistic neurosurgeon who saves a dozen lives a week in order to pay for her third vacation home is acting prosocially, albeit in a self-serving fashion.
39. Stout, Taking Conscience Seriously, CITE; Stout, Cultivating Conscience, supra note __ at 45-71.
40. Ibid.
it would be difficult for me to prove that it is my conscience, not fear of arrest and prosecution,
that deters me.41
Luckily there is a place where conscience can be seen more clearly. In the experimental
laboratory, researchers can eliminate the complexity of external influences and incentives that
muddy the waters of everyday life. Over the past half-century, behavioral scientists have
developed an ingenious variety of experiments to test what real human subjects do when placed
in situations where their self-interest, as measured by material gains and losses, conflicts with the
interests of others. By studying the enormous body of empirical data that has been generated
from those experiments, we can learn a surprising amount about just how, when, and why
conscience works. Thus the remainder of this Part surveys four basic lessons we can learn from
behavioral science about the nature of conscience.
Experimental Gaming Lesson 1: Conscience Exists, So Does Spite, and Most People Know This
One of the most useful and well-known behavioral experiments used to study prosocial
behavior is the “social dilemma game.” The social dilemma resembles the familiar prisoner's
dilemma of game theory. However, where the archetypal prisoner's dilemma involves two
people, social dilemmas can be played by two or more--sometimes quite a few more--players.
As in a prisoner‟s dilemma, each player in the game is given the option of either choosing a
“cooperative” strategy that helps the other players, or choosing instead to “defect” by adopting a
selfish strategy that maximizes the player's own personal returns. As in the prisoner's dilemma,
each individual player always maximizes her personal payoffs by defecting, no matter what the
other players do. The group, however, gets the greatest aggregate payoff if all its members
cooperate.
Consider an example of a social dilemma called the “contribution game.” A group of n
players (assume four) is assembled and each is given an initial stake of, say, $100. The players
are told they can choose between keeping all their new-found cash, or contributing some or all of
it to a common investment pool. Players also are told that any money contributed to the pool
will be multiplied by some factor greater than 1 but less than n (assume the money will be
tripled), then redistributed equally among all the players--including an equal share for players
who did not contribute.
The best individual strategy in such a game is to keep the $100, while hoping to receive
as well an equal portion of the tripled funds that would result from any of the other players
being foolish enough to donate to the common pool. For example, if you keep your $100 and the
other three players contribute theirs, you end up with $325 (your original $100 plus $225 from
the common pool). As a result, no rational selfish player will cooperate, and selfish players walk
41. Unselfish prosocial behavior is often apparently consistent with legal incentives because a variety of legal rules are designed to promote unselfish prosocial behavior (e.g., criminal law, contract law, tort law). Similarly, many of the acts of altruism and vindictiveness we observe in daily life occur between people who are acquainted with each other and who operate in the same community (the neighborhood, the workplace, the family). Thus it is difficult to exclude the possibility that apparently unselfish prosocial behavior actually is motivated entirely by concern for future consequences in the form of reciprocity or reputational loss. Stout, Cultivating Conscience, supra note __ at 65-66.
away with only $100 each. At the same time, the best group outcome (and the best average
individual outcome) requires universal cooperation. If all unselfishly contributed, each would
get $300 back. Thus the rational pursuit of self-interest in a social dilemma ultimately leaves
both the group, and its individual members, worse off.
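
The arithmetic is easy to verify. The following short sketch in Python computes the payoffs just described (four players, $100 stakes, contributions tripled and shared equally):

    STAKE, MULTIPLIER, N = 100, 3, 4  # the illustrative parameters above

    def payoffs(contributes):
        """Payoff to each player, given who contributes to the common pool."""
        pool = MULTIPLIER * STAKE * sum(contributes)
        share = pool / N  # every player shares in the pool, contributor or not
        return [share + (0 if c else STAKE) for c in contributes]

    print(payoffs([False, True, True, True]))  # lone defector: $325; cooperators: $225
    print(payoffs([True] * 4))                 # universal cooperation: $300 each
    print(payoffs([False] * 4))                # universal defection: $100 each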
Over the past five decades social scientists have performed and published the results of
hundreds of social dilemma experiments, including dilemmas played by people of varying ages and
backgrounds drawn from different cultures around the world.42 Many of these experiments have
been cleverly designed to exclude any possibility the players could rationally expect external
benefits from choosing cooperation over defection. For example, experiments often use subjects
who are strangers to each other, who are told they will play the game only once, and who are
instructed to play under anonymous double-blind conditions that ensure their choice of strategy
(cooperate or defect) will not be revealed to either the other players or the experimenters.
Economic theory predicts a zero percent probability a selfish subject would cooperate in such
games.
Yet real people have a marked propensity to unselfishly cooperate in social dilemmas.43
As a rule of thumb, experimenters observe cooperation rates averaging about 50 percent in most
social dilemma games.44 This remarkable result has endured over nearly a half-century of
testing. Indeed, it was seen in the very first reported prisoner‟s dilemma experiment run at the
RAND Corporation during the 1950s. The subjects were two RAND game theorists who had
devoted their careers to studying rational selfishness. To the consternation of their colleagues,
they showed a hearty willingness to unselfishly cooperate with each other.45 (John Nash, a
RAND game theorist who would go on to win a Nobel prize and become the subject of the
biographical book and film A Beautiful Mind,46 mused in a note to his colleagues, “[o]ne would
have thought them more rational.”)47
Such results demonstrate, most obviously, that prosocial behavior is common, even
endemic. Cooperating subjects in a social dilemma are choosing to serve others' interests rather
than maximize their own.48 Lay terms to describe this type of unselfish behavior include
consideration, generosity, and, more generally, altruism. But altruistic concern for others is not
42. See David Sally, Conversation and Cooperation in Social Dilemmas: A Meta-Analysis of Experiments from 1958 to 1992, 7 Rationality & Soc'y 58 (1995) (summarizing over 100 studies done between 1958 and 1992); Robyn M. Dawes & Richard H. Thaler, Cooperation, 2 J. Econ. Persp. 187 (1988) (summarizing studies); Robyn M. Dawes et al., Cooperation for the Benefit of Us—Not Me, or My Conscience, in Beyond Self-Interest, supra note __, at 97-110 (summarizing studies).
43. Henrich et al., Foundations of Human Sociality: Economic Experiments and Ethnographic Evidence from Fifteen Small-Scale Societies 5 (“there is no society in which experimental behavior is even roughly consistent with the canonical model of purely self-interested actors.”)
44. Sally, supra note __, at __.
45. Sylvia Nasar, A Beautiful Mind: A Biography of John Forbes Nash, Jr., Winner of the Nobel Prize in Economics 1994 (1998), at 119; see also David Sally, Conversation and Cooperation in Social Dilemmas, 7 Rationality & Soc. 58, 60 (1995).
46. Sally, Conversation and Cooperation, supra note __, at 60.
47. Nasar, supra note __, at 119.
48. Again, saying that cooperation does not “serve” a player's interest does not involve making a claim about what subjectively motivates cooperation. See supra TAN __. Possible subjective motivations include guilt, sympathy, or ego, all of which may lead the player to conclude that she is psychologically better off (happier, less conflicted) if she cooperates. But whatever the internal mechanism, subjects who cooperate in a social dilemma can be said to “reveal a preference for” (to act as if they care about) serving others' welfare.
the only type of unselfish behavior we observe in experimental gaming. This can be seen from
the results of a second type of experimental game that has been the subject of numerous studies,
the ultimatum game.
An ultimatum game involves two players. The first player (called the “proposer”) is
given a stake of money, say $100. The proposer is then told that she can offer to give any
portion of it she chooses--all, a lot, a little, or nothing--to the second player. The second player,
called the “responder,” then gets to make a choice of his own. He can accept the proposer's
offer, in which case the $100 will be divided as the proposer suggests. Or, the responder can
reject the offer—in which case both players get nothing.
It is clear what homo economicus would do in an ultimatum game. The proposer would
offer the minimum possible amount of money (say, one dollar) and the responder would accept
this minimal amount. After all, a dollar is better than nothing, and should be accepted. Knowing
this, no selfish proposer would offer more. Yet human subjects don't play ultimatum games this
way. When real people play ultimatum games, the proposer usually offers the responder a
substantial portion of the stake, often half.49 And if the proposer does not do this, the responder
frequently rejects the offer.50
Revenge is sweet, but in an ultimatum game, it is not costless. A responder who rejects
any positive offer has made himself worse off, in material terms, than he needed to be. Why
does he do this? It appears he wants to make the proposer worse off.
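
The payoff structure makes the cost of spite explicit. A minimal sketch of the game as described above ($100 stake; the offers shown are illustrative):

    STAKE = 100

    def ultimatum_payoffs(offer, accept):
        """Return (proposer, responder) payoffs; rejection zeroes out both."""
        return (STAKE - offer, offer) if accept else (0, 0)

    print(ultimatum_payoffs(offer=1, accept=True))   # (99, 1): homo economicus's prediction
    print(ultimatum_payoffs(offer=1, accept=False))  # (0, 0): costly spite
    print(ultimatum_payoffs(offer=50, accept=True))  # (50, 50): the typical observed outcome

A responder who rejects a $1 offer gives up his dollar; what the rejection buys him is the proposer's $99 loss.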
If the concern for others seen in social dilemma games shows the bright side of our
human capacity to act unselfishly, ultimatum games give us a glimpse of a darker side.
Responders who reject offers they think are “too low” are displaying a willingness to sacrifice
not to benefit another, but to harm her.51 Synonyms for this sort of other-regarding behavior
include vindictiveness, vengefulness, and spite.52
Spite is not as appealing a character trait as altruism. Nevertheless, as we shall see below,
one person‟s capacity for spite can motivate another person to act more prosocially. As a result
we will also fit spite--admittedly with a bit of awkward squeezing-- into the category of unselfish
prosocial behavior.
Finally, in addition to proving that unselfish behavior can take the form of a willingness
to sacrifice to harm others, as well as a willingness to sacrifice to help them, ultimatum games
can teach us something else important as well. The lesson is apparent when we compare the
behavior observed in ultimatum games with typical behavior in a similar but slightly different
sort of game called the dictator game.
49. See generally Colin Camerer & Richard H. Thaler, Ultimatums, Dictators and Manners, 9 J. Econ. Persp. 209 (1995) (summarizing studies); Martin A. Nowak et al., Fairness Versus Reason in the Ultimatum Game, 289 Science 1773 (2000) (same)
50. Camerer & Thaler, supra note __, at 210 (“offers of less than 20 percent are frequently rejected”)
51. There may be other forms of other-regarding behavior as well. People may have not just altruistic revealed preferences (willingness to sacrifice to help others) and spiteful revealed preferences (willingness to sacrifice to harm others), but also relative preferences (willingness to sacrifice to ensure that one enjoys a good position relative to others). See Robert H. Frank, Luxury Fever CITE. Although relative preferences are important in explaining human behavior, they lie beyond the scope of this article.
52. Spite involves harming another, but spiteful behavior may benefit third parties if it encourages cooperative behavior within a group. As a result, spite can be described at an evolutionary level as a form of altruism. See generally Stout, Cultivating Conscience, supra note __ at 122-147 (discussing evolution of prosociality).
Like an ultimatum game, a dictator game involves two players. Again, one is given a
stake of money (assume $100) and told to choose a rule for distributing that money between the
two players. However, a dictator game differs from an ultimatum game in one important respect.
In a dictator game, the second player is not given any right to veto the first player's division of
the loot. The second player gets what the dictator offers, no more and no less. This is why the
first player is the “dictator.”
Interestingly, the majority of subjects asked to play the role of the dictator in a dictator
game give the other player at least some portion of their initial stake, despite the fact that they
receive no external reward for making this sacrifice.53 Thus subjects in dictator games show
altruism just as subjects in social dilemmas do.
Yet offers made in dictator games tend to be smaller than offers in ultimatum games.54
Dictators share, but on average they do not share as much as proposers in ultimatum games do.
This suggests that in addition to altruism, proposers in ultimatum games have a second motive
for sharing--fear that the responder might react to a low offer by spitefully rejecting it.
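
This gap is exactly what one would expect if proposers anticipate spite. As a rough illustration (the rejection probabilities below are invented for the example, not drawn from any particular study), even a purely selfish proposer maximizes expected payoff by offering a substantial share once low offers risk rejection:

    STAKE = 100
    # Hypothetical chance (percent) that a responder spitefully rejects each offer
    rejection_pct = {10: 80, 20: 50, 30: 20, 40: 10, 50: 0}

    def expected_proposer_payoff(offer):
        return (100 - rejection_pct[offer]) * (STAKE - offer) / 100

    for offer in sorted(rejection_pct):
        print(offer, expected_proposer_payoff(offer))
    # 10 -> 18.0, 20 -> 40.0, 30 -> 56.0, 40 -> 54.0, 50 -> 50.0

Under these (hypothetical) beliefs the selfish optimum is a $30 offer; in a dictator game, where there is no veto, the selfish optimum is always the minimum.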
Put differently, proposers in ultimatum games behave as if they anticipate the fury of a
responder scorned. This indicates not only that people sometimes act unselfishly (e.g.,
altruistically or spitefully), but also that people know other people act unselfishly. Thus
unselfish behavior influences human choice on at least two levels.
At the first level, people sometimes sacrifice their own material payoffs to either benefit,
or harm, others around them. At the second level, the knowledge that people sometimes act
unselfishly leads other people to act unselfishly in anticipation--even if they, themselves, are
purely selfish. Suppose, for example, that purely selfish Jill lacks both altruism and spite. She
still might deliberately make herself vulnerable to Jack if she believes Jack will unselfishly show
concern for her welfare, a behavior we might call “rational trust in Jack's conscience.”
Similarly, selfish Jill might refrain from taking opportunistic advantage of Jack if she believes
that Jack would react by unselfishly sacrificing to punish her. This might be called “rational fear
of Jack's vengeance.”
Experimental Gaming Lesson 2: Social Context and The Jekyll/Hyde Syndrome
Experimental gaming thus demonstrates the homo economicus model of purely selfish
behavior is often misleading. But if we want to develop a model that is better, in the sense that
it allows us to more accurately predict what people will do, we need to know a bit more. Most
obviously, we need to know when and why people act unselfishly.
To appreciate the nature of the problem, recall the 50 percent cooperation rate typically
observed in social dilemmas.55 This result supports the claim that people often behave
unselfishly. But it also supports the claim that people often behave selfishly. (If people always
showed concern for others, we would observe 100 percent cooperation rates.) What explains
why some people cooperate when others don‟t, or why the same person may cooperate at one
time and not at another?
53. Camerer & Thaler, supra note __, at 213
54. Id.
55. See supra TAN __.
Experimental gaming data again offers insight. Prosocial behavior becomes both
predictable and manipulable if we pay attention to something we might call “social context.”
From an economic perspective, social dilemma, ultimatum, and dictator games are highly
standardized experiments, as each presents its subjects with a fixed payoff function determined
by the nature of the game itself (social dilemma, ultimatum, or dictator game). Nevertheless,
researchers have run the experiments under a wide variety of differing noneconomic conditions.
For example, researchers in some games have actually asked subjects to either cooperate or
defect;56 have grouped subjects according to their tastes for abstract or impressionist art;57
allowed them to exchange surnames;58 and have raised or lowered the payoffs to other members
of the group from choosing cooperation over defection.59 One recent experiment even examined
how subjects behaved when playing social dilemmas in the presence of a dog.60 Of course, none
of these changes in social context change the economic structure of the games; subjects always
maximize personal payoffs by choosing defection over cooperation.
Nevertheless, changes in social context produce dramatic changes in observed behavior.
In a pioneering meta-survey of over 100 reported social dilemma experiments, David Sally found
that researchers were able to elicit cooperation rates ranging from a low of 5 percent to more
than 97 percent.61 To appreciate this astonishing behavioral flexibility, simply recall that
payoffs in a social dilemma are structured so a rationally selfish player would always defect.
Yet social cues can elicit almost universal prosociality in some games, and almost
universal selfishness in others. Although researchers have identified several different social cues
that seem to trigger prosocial behavior, this article focuses on three in particular: (1) instructions
from authority, (2) expectations about others' selfishness or unselfishness, and (3) the magnitude of
the benefits to others from one's own unselfish action. Each deserves special attention, for each
has proven consistently important in triggering prosocial behavior in experimental games, and
each also maps onto a well-studied and fundamental aspect of human psychology (obedience,
imitation, and empathy). Moreover, as we will see in Part III, each carries important
implications for our understanding of the behavioral effects of employing ex ante incentives.
Let us begin with the role of instructions from authority. One of the most consistent
findings in human psychology is that people tend to do what they are told to do. In Stanley
Milgram's infamous obedience experiments, for example, subjects were told to administer a
potentially-lethal electric shock to another human being (in reality an actor pretending to be
shocked). The vast majority did just that. 62 From a rational choice perspective, however, this is
hardly surprising. After all, Milgram's subjects were being paid to follow instructions. What is
far more interesting is that in social dilemma games, subjects obey instructions to cooperate even
56. CITE
57. Sally, supra note __, at 67-68, 78.
58. See, e.g., Gary Charness et al., What's In A Name? Anonymity and Social Distance in Dictator Games (2002) (downloadable at SSRN.com); Gary Charness et al., Social Distance and Reciprocity: The Internet vs. the Laboratory (2002) (downloadable at SSRN.com).
59. Sally, supra note __, at __ CITE
60. “Manager's Best Friend: Dogs Improve Productivity,” Economist at 66 (August 14, 2010) (reporting results of study that found that cooperation rates rose when subjects were asked to play social dilemmas in the presence of a dog).
61. Sally, supra note __, at __ CITE
62. Stanley Milgram, A Behavioral Study of Obedience, 67 J. Abnormal & Soc. Psychol. 371 (1963). Of course, Milgram's results are only surprising and disturbing if we expect subjects to act prosocially.
though this means they get less money. (Defecting always maximizes personal payoffs in a
social dilemma.)
In his meta-survey, for example, Sally found that giving formal instructions to cooperate
raised cooperation rates by 34% to 40% compared to games where no instructions were given.63
Conversely, formal instructions to defect increased defection by 20% to 33%.64 Indeed, people
seem so sensitive to directions from authority that they change their behavior in response to mere
hints about what the experimenter desires. In one social dilemma experiment, experimenters
observed a 60% cooperation rate when subjects were told they were playing the “Community
Game.” Among similar subjects told they were playing the “Wall Street Game,” cooperation
dropped to 30 percent.65
If Stanley Milgram‟s experiments showed us the dark side of obedience, social dilemmas
show us a brighter side. People follow instructions to harm others. But they also follow
instructions to help, or to avoid harming, others—even when this requires some personal
sacrifice.
A second social variable that plays a key role in eliciting unselfish behavior from subjects
in experimental games is beliefs about whether others are acting, or would act, unselfishly. As
any social psychologist could tell you, human beings tend to imitate what others are doing.
We're nice when we think others would act nicely, and nasty when we think they would act
nasty.
For example, in dictator game experiments, dictators share more of their loot when they
are given information indicating that other dictators in other games chose to share.66 Similarly,
numerous social dilemma studies indicate that subjects' beliefs about how others are likely to
behave strongly influence their own choices. Experimenters have found that subjects who
believe that their fellow players in a social dilemma experiment are likely to defect become far
more likely to defect themselves. Conversely, players who are led to believe their fellows will
cooperate, become more likely to choose cooperation.67
This last pattern is an especially striking example of how social considerations
predominate over economic concerns in determining choices in experimental games, because in
a social dilemma game, a belief that one's fellows are likely to cooperate increases the expected
economic returns from defecting. Nevertheless, far from discouraging cooperation, a belief that
other players are going to cooperate produces more cooperation -- exactly the opposite of what
the homo economicus model predicts. Some social scientists call this “generalized reciprocity.”
It is important to recognize, however, that mutual cooperation in a one-shot, anonymous social
63. Sally, supra note __, at 75, 78.
64. Id.
65. See Lee Ross & Andrew Ward, Naive Realism in Everyday Life: Implications for Social Conflict and Misunderstanding, in Values and Knowledge 103, 106-07 (T. Brown et al. eds., 1996). Similar results have been observed in dictator games, where dictators make larger offers when they are instructed to "divide" their stakes than when the experimenters use the "language of exchange". See Camerer & Thaler, supra note __, at 213.
66. Erin Krupka and Roberto Weber, The Focusing and Informational Effects of Norms on Pro-Social Behavior, CITE.
67. Scott T. Allison & Norbert L. Kerr, Group Correspondence Biases and the Provision of Public Goods, 66 J. Personality & Soc. Psych. 688 (1994) ("[n]umerous studies have reported that individuals are more likely to cooperate when they expect other group members to cooperate than when they expect others to defect"); Toshio Yamagishi, The Structural Goal/Expectations Theory of Cooperation in Social Dilemmas, 3 Advances in Group Processes 51, 64-65 (1986) (discussing experimental findings that "expectations about other members' behavior is one of the most important individual factors affecting members' decisions in social dilemmas").
dilemma is not true reciprocity, because there is no rational hope that choosing cooperation could
elicit benefits from others in future games. Nor can the recipient in a dictator game reciprocate
the dictator‟s generosity. Imitation is a better word to describe such behavior.
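A hypothetical continuation of the public-goods sketch above makes the arithmetic explicit. Defecting pays best precisely when one's fellows cooperate, so homo economicus should become more willing to defect, not less, upon learning that others intend to cooperate.

```python
# Continuing the illustrative public-goods game sketched earlier (ten-token
# endowment, 1.6 multiplier, four players; all numbers remain hypothetical).

def payoff(mine: float, others: list[float]) -> float:
    return (10.0 - mine) + (mine + sum(others)) * 1.6 / 4

# A defector's payoff under two different beliefs about the other three:
print(payoff(0.0, [10.0] * 3))  # if the others cooperate: 22.0
print(payoff(0.0, [0.0] * 3))   # if the others defect:    10.0

# Defecting against expected cooperators yields the game's highest payoff,
# yet subjects who expect cooperation cooperate more themselves -- the
# opposite of the homo economicus prediction.
```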
Finally, a third social variable that seems to play a role in triggering unselfish prosocial
behavior in experimental games is perceptions of the magnitude of the payoffs to others from
one's own unselfish behavior. Although this is, in a sense, an "economic" variable--we are
talking about economic returns--it is also a "social" variable because we are talking about
economic returns to others. Homo economicus has no intrinsic interest in helping others, either
a little or a lot. Real people seem more inclined to cooperate when they believe others will
benefit more from their cooperation.
This has been seen in dictator games, where an experimenter‟s promise to double or triple
the amount the dictator chooses to share has made some dictators so generous their partners
ended up with bigger payoffs than the dictators themselves.68 Similarly, David Sally concluded
from his meta-analysis of social dilemma games that “the size of the loss to the group if strictly
self-interested choices are made instead of altruistic ones … is important and positive” in
explaining cooperation rates.69
In lay terms, we might not inconvenience ourselves to provide another with a merely modest benefit.
But when the stakes are high, we tend to show empathy, and rise to the occasion. If I‟m in a
hurry, I might refuse to stop and give directions to a lost stranger. But I would stop to dial 911 if
he collapsed from a heart attack on the street beside me.
Taken as a whole, the experimental gaming data thus offers us a second, potentially very
useful, lesson about prosocial behavior. In brief, most people act as if they have at least two
personalities (or, as an economist might put it, two “revealed preference functions”). One
personality is purely selfish. When this personality dominates, we maximize our personal
payoffs without regard to how our choices affect others. Most people, however, have a second,
more prosocial personality. When our prosocial personality dominates, we take account of
others‟ interests, at least to some extent.
The result resembles the fictional protagonist of Robert Louis Stevenson‟s tale, The
Strange Case of Dr. Jekyll and Mr. Hyde. Sometimes we are caring, considerate, and
conscientious (Dr. Jekyll). Sometimes we act selfishly and asocially (Mr. Hyde). Which
persona dominates in any particular situation? Decisions to behave in a purely selfish or
prosocial fashion seem determined largely by social context. And three of the most important
aspects of social context, at least for present purposes, are instructions from authority;
expectations regarding others' prosociality; and perceived benefits to others.
This is not to say that when the social cues are lined up favorably, people always act
prosocially. As discussed in greater detail later, individuals differ in their proclivities toward
conscientious behavior, and a blessedly small minority of psychopaths seem to lack any
conscience at all. Moreover, as discussed immediately below, even the best (most conscientious)
among us take account of personal cost in choosing between selfish and unselfish behavior.
Experimental Gaming Lesson 3: Prosociality and Personal Cost
68. James Andreoni and John Miller, Giving According to GARP: An Experimental Test of the Consistency of Preferences for Altruism, 70 Econometrica 737 (2002).
69. Sally, supra note __, at 79.
Social context—especially instructions from authority, expectations regarding others‟
prosociality, and beliefs about benefits to others--plays a vital role in determining when people
choose to act in an unselfish prosocial fashion. But saying that social context matters is not the
same as saying economic payoffs don‟t. A third fundamental lesson from experimental gaming
is that prosocial behavior depends not only on social context, but personal payoffs as well. We
are far more prosocial than standard economic theory suggests. But our supply of prosocial
behavior seems (as an economist might put it) “downward-sloping.” When the cost of
unselfishness increases, the quantity supplied declines.
This phenomenon is perhaps most easily observed in social dilemma games. As the
personal cost associated with cooperating in a social dilemma rises (that is, as the expected gains
from defecting increase), the incidence of cooperation drops significantly. Sally‟s meta-survey
found that doubling the reward from defecting appeared to decrease average cooperation rates in
social dilemmas by as much as sixteen percent.70 Similarly, if a proposer offers a relatively
larger share in an ultimatum game, the likelihood that the responder will spitefully reject it
decreases.71 Although this pattern may be driven in part by responders' perceptions that a
proposer who offers a larger share is behaving in a more "fair" (prosocial) fashion and so does
not deserve punishment, it is also consistent with responders' perceptions that as the size of the
proposer‟s offer increases, so does the personal cost of spitefully rejecting the offer.
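The responder's cost calculus is simple enough to put in a line of code. In a hypothetical ten-dollar ultimatum game (the stake is illustrative, not taken from any particular study), the price of spite is simply the offer the responder turns down.

```python
# Hypothetical $10 ultimatum game: the responder either accepts the offer
# or spitefully rejects it, in which case both players get nothing.

def cost_of_spite(offer: float) -> float:
    """What rejecting costs the responder: the offer itself."""
    return offer

print(cost_of_spite(1.0))  # rejecting a stingy $1 offer costs the responder $1
print(cost_of_spite(4.0))  # rejecting a generous $4 offer costs $4 --
                           # spite is four times as expensive
```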
We seem more inclined to unselfishness when unselfishness is cheap. Conversely, when
the cost of conscience is high, we are less inclined to "buy" it. It is important to emphasize this
is not the same as saying people are basically selfish. Any cooperation in a social dilemma, and
any sharing in an ultimatum or dictator game, is inconsistent with the homo economicus model.
But the observation that people are capable of both benevolence and malevolence does not mean
that they are indifferent to the personal cost of these behaviors. When people indulge in
unselfishness, they keep at least one eye on self-interest in doing so.
This does not imply that unselfish behavior is economically unimportant. There are
many situations in modern life where a small act of unselfishness that costs the unselfish actor
relatively little provides much larger benefits to others. Summed up over many different
individuals and many different social interactions, the total gains from many such small acts of
altruism can be enormous. We benefit significantly when we know that, if we send our child to
school with lunch money, she will probably still have the money when lunchtime rolls around.
Even a limited human capacity for unselfish action generates enormous benefits over long
periods of time and large populations.
But the likelihood that self-interest places limits on conscience does imply that if we want
to promote conscientious behavior, we need to give conscience “breathing room” to work.
George Washington supposedly observed that few men have virtue enough to withstand the highest bidder.
Put differently, if we want people to be good, it‟s important not to tempt them to be bad. As we
shall see in Part III, this carries important implications for the modern ideology of incentives.
Experimental Gaming Lesson 4: The Role of “Character”
70. See Sally, supra note __, at 75.
71. See supra TAN __.
Finally, we turn to a fourth important lesson to be learned from behavioral experiments:
although almost everyone is capable of unselfish action, a very small percentage of the
population seems not to be, and the rest of us vary in our inclinations toward unselfishness. In
other words, while unselfish behavior is determined in large part by social context and personal
cost, it is also related to what laymen call “character.”
The most obvious example can be found in psychopaths, relatively rare individuals who,
for reasons of nature or nurture, seem incapable of acting unselfishly or showing empathy for
others. (Not to put too fine a point on it, it can be argued that homo economicus is a
psychopath.) Luckily, the American Psychiatric Association estimates that only about 1 to 3
percent of the population is afflicted with “antisocial personality disorder” (the formal
psychiatric label for psychopathy), and many of those individuals are safely confined in prison. 72
The rest of us are capable of acting unselfishly, at least in the right circumstances.
Experimenters have been able to elicit cooperation rates of over 97% in social dilemmas, and
sharing rates of 100% in some dictator games (presumably dictator games without any
psychopathic subjects).73 When the stars are aligned—when social context supports
unselfishness and the cost of unselfishness is not too high—conscience seems to be a near-universal behavioral phenomenon.
But in real life, the stars are not always aligned. Sometimes social context is ambiguous.
(Most people are confident they can act selfishly when choosing a stock portfolio and prosocially
when attending a bar mitzvah, but how should one act when negotiating a babysitting contract
with a friend's teenager?) Moreover, sometimes large temptations raise their heads. In
ambiguous or tempting circumstances, different individuals show different propensities to act
conscientiously.
What determines one‟s individual tendency toward conscientious behavior? Gender
seems to play a role in some experimental games, as does religion, although both variables have
only modest and quirky relationships to behavior.74 A far more significant demographic
variable may be age. Prosocial behavior in games increases throughout childhood and young
adulthood, and (stereotypes of grumpy old men to the contrary) there is some evidence the
process of becoming prosocial continues with age.75
But in addition to demographic variables, there is intriguing evidence to suggest that
one‟s proclivity toward prosociality--one‟s “character”—may be in large part a product of one‟s
personal experiences. Perhaps the most interesting example is a large study by a consortium of
behavioral scientists who arranged for social dilemma, ultimatum, and dictator games to be
played by subjects from fifteen small, non-Western hunting, herding, fishing, and farming
cultures around the globe.13 The consortium found that people of all ages, genders, and
backgrounds—Machiguenga subsistence farmers from the rainforests of South America,
Torguud nomads in Mongolia, Lamalara whale-hunters in Indonesia—routinely behaved in an
unselfish prosocial fashion. As the researchers put it, “there is no society in which experimental
behavior is even roughly consistent with the canonical model of purely self-interested actors.”14
Nevertheless, there were clear differences between cultures. For example, Machiguenga on
average contributed 22% in social dilemmas, while the more-generous Orma cattle-herders of
72. Stout, Cultivating Conscience, supra note __ at 47-48.
73. Id. at 98.
74. Id. at 100.
75. Henrich et al., supra note __, at 5.
Kenya contributed 58%.76 The researchers also found that individual demographic variables—gender, wealth—did a poor job of predicting behavior. Rather, behavior seemed driven by social
experiences, and especially by whether the culture was one in which people frequently engaged
in market transactions with strangers (like hiring themselves out for wages) and whether
economic production required people to cooperate with non-kin (whale-hunters necessarily
cooperate a lot, while slash-and-burn subsistence farmers need cooperate very little.) The
researchers concluded, “our data suggest that these between-group behavioral differences … are
the product of the patterns of social and economic interaction that frame the everyday lives of
our subjects.”77 In layman‟s terms, character may be largely a product of experience.
But whatever the underlying cause of differences in individuals‟ prosocial inclinations, it
seems clear that while most people will act prosocially when social context supports prosociality
and personal cost is low, substantial individual variations in behavior remain. This last lesson
will prove important as we investigate what behavioral science teaches about the likely
consequences of the ideology of incentives.
III. BEHAVIORAL IMPACTS OF “PAY FOR PERFORMANCE”
According to the optimal contracting literature, people are self-seeking, opportunistic
actors who cannot be trusted to do tasks well (if they are agents) or to pay compensation (if they
are principals) unless constrained by enforceable contracts that create the right ex ante
incentives. As Part II has demonstrated, however, real people often depart from this behavioral
model. Given that business firms, school systems, and hospitals must deal with real people
rather than the fictional homo economicus, this Part explores the question: what can behavioral
science tell us about the possible impact of relying on ex ante incentives?
Relational Contracts and Contractual Incompleteness
The question becomes still more pressing once we recognize, as contract scholars do, that
it is impossible to design a truly “optimal” agency contract that creates perfect incentives for
performance with no risk of undesirable side effects like excessive risk-taking. This is because
employment contracts, even more than most contracts, are “incomplete,” meaning they
invariably fail to address all the potential issues or disputes that might arise between the parties.
For example, an employment contract with a babysitter is likely to address the hourly wage to be
paid and the likely number of hours of work provided, but what if the child wanders outside and
becomes lost because the sitter is too busy Twittering to notice? What if the child ruins the
sitter‟s shoes with indelible markers? What if the parents return home hours late due to some
emergency, like a sudden trip to the emergency room to see if Dad‟s chest pains are indigestion
or a heart attack?
Contracts are incomplete for a number of good reasons. One is that humans aren‟t
omniscient. As Mel Eisenberg has put it, “contracts concern the future, and are therefore always
made under conditions of uncertainty.”78 Problems can arise during performance that neither
76. Id. at 23, Table 2.3, Summary of Public Good Experiments.
77. Id. at 45.
78. Melvin Aron Eisenberg, The Limits of Cognition and the Limits of Contract, 47 Stan. L. Rev. 211, 213 (1995).
party thought of, much less discussed in the contract. For example, a bank might hire a
derivatives trader, only to have the position become obsolete as a result of unexpected financial
reform legislation that limited the bank‟s trading activities.
Complexity also leads to incompleteness, because complexity makes negotiating and
drafting contracts expensive. When a corporation hires a CEO, even if the parties could
anticipate every issue that might arise in the course of managing the business—from a sudden
advance in production technology to a nationwide quarantine due to a flu pandemic—they might
find the attempt to draft a formal contract that addressed each and every possible contingency
prohibitively expensive and time-consuming, and instead settle for a short, incomplete contract
that addresses only the most important and obvious aspects of the employment relationship (e.g.,
responsibilities and salary) and leaves other matters to be dealt with in the future should they
arise.79
Perhaps most important for our purposes, contracts are often incomplete with regard to
matters that, while important to the parties, are difficult to observe or to prove in court. For
example, suppose a teacher‟s contract provides a performance bonus if students achieve certain
test scores, and also explicitly provides that scores cannot rise because the teacher tampered with
the students‟ test answers. Even if (as happened in Georgia) a review and statistical analysis of
student answer sheets shows improved test scores but also a suspiciously high number of
changed and corrected test answers, it would be difficult and prohibitively expensive, and
perhaps impossible, for the school district to determine whether the teacher or the students
changed the answers, much less prove the matter in court.
Because uncertainty, complexity, and unobservability are endemic, incomplete contracts
are everywhere. Even a relatively simple agency contract—say, a contract with a real estate
broker to sell a house—contains gaps. (What if the homeowner thinks the agent is not marketing
the home as enthusiastically as he should?) As Steven Shavell puts it, “[c]ontracts typically omit
all manner of variables and contingencies that are of potential relevance to the contracting
parties.”80 Robert Scott goes further: “[a]ll contracts are incomplete.”81
But some contracts are more incomplete than others. Contracts fall along a spectrum of
completeness. At one end lie “discrete” contracts—simple, nearly-complete contracts for
exchanges between parties who never expect to deal with each other again. A contract to
purchase a laptop computer from an online catalog is an example of a relatively discrete contract.
At the other end of the spectrum lies “relational” contracts that involve complex, long-term,
uncertain exchanges—for example, a contract to employ a teacher, surgeon, or business
executive. Drastic incompleteness is a hallmark of most employment contracts. An empirical
study of Fortune 500 CEOs, for example, found that nearly a third had no written employment
contract at all, and another third had only bare-bones contracts that spelled out their pay and
incentives but few of their duties.82
This observation raises the question of how relational contracts like employment
contracts work. Purely selfish actors would exploit the large gaps in relational contracts, and
79. Similarly, uncertainty and complexity can defeat a court's attempt to provide optimal "implied" contractual terms. See generally Stout, Cultivating Conscience, supra note __ at 179-182.
80. Steven Shavell, Economic Analysis of Law at 63 (2004).
81. Robert E. Scott, A Theory of Self-Enforcing Indefinite Agreements, 103 Colum. L. Rev. 1641 (2003).
82. Stewart J. Schwab & Randall Thomas, An Empirical Analysis of CEO Employment Contracts: What Do Top Executives Bargain For?, 63 Wash. & Lee L. Rev. 240 (Winter 2006).
perform poorly or not at all. Anticipating this, purely selfish actors would avoid relational
exchanges with other purely selfish actors. Yet real people do enter incomplete relational
contracts. In fact, many of our most economically significant exchanges—joint business
ventures, apartment leases, building contracts, and of course employment agreements—are
relational. Somehow, despite the problems of uncertainty, complexity, and unobservability,
relational exchanges take place. How?
Sometimes opportunistic behavior in contracting is deterred by reputational concerns. As
organizational economist Oliver Williamson has put it (with an unfortunately typical academic
style), “reputation effects attenuate incentives to behave opportunistically in interfirm trade—
since the immediate gains from opportunism in a regime where reputation counts must be traded
off against future costs.”83 But as Williamson has also noted, “the efficacy of reputation effects
is easily overstated.”84 There are good reasons to question whether reputation can always, or
even often, motivate purely selfish actors to keep promises in relational exchanges. For
example, reputation becomes an unreliable guarantee as one nears retirement. (This does not
seem to deter corporations from hiring executives and directors in their fifties, sixties, and
seventies.) It can also be hard for outside observers to determine which party was at fault when a
relational deal breaks down, as witness public disagreement over the wisdom or folly of the
Hewlett-Packard board's decision to fire CEO Mark Hurd.
Conscience As A Solution to Contractual Incompleteness
Given the limits of formal contracts and reputation, how can purely selfish actors
participate successfully in relational exchange? Maybe purely selfish actors can‟t--at least, not
with other purely selfish actors. The empirical evidence on prosociality suggests another
possibility. Although conventional economic analysis treats contract law as a vehicle for
allowing self-interested actors to bind themselves to perform their promises, the story of
relational contract may be just the opposite—not a tale of self interest, but a story of prosocial
partners who trust each other and, to at least some extent, look out for each other.
The key to understanding this idea is to understand that, when two people contemplate
entering a relational contract, each wants protection against the possibility the other might
opportunistically exploit the many gaps that necessarily exist in the contract. Neither the parties
nor the courts can reliably fill the gaps because of uncertainty, complexity, and unobservability.
Reputational concerns sometimes can check opportunistic behavior, but reputation alone often is
not enough. So parties entering relational contracts may seek to employ a third possible check
on opportunism—their contracting partner‟s conscience.85
Suppose, for example, some unanticipated problem or opportunity arises while two
parties are performing a contract. Where two purely selfish actors would instantly find
83. Oliver Williamson, The Mechanisms of Governance (1996) at 116.
84. Id.
85. See generally Margaret M. Blair & Lynn A. Stout, Trust, Trustworthiness, and the Behavioral Foundations of Corporate Law, __ U. Penn. L. Rev. __ (2001) at ___ CITE (discussing how trust can fill gaps in incomplete contracts); Stout, Cultivating Conscience, supra note __ at 185-88 (same). For empirical evidence, see, e.g., Brown, Falk and Fehr, Contractual Incompleteness and the Nature of Market Interactions (2001) CITE; _____ Kollock, The Emergence of Exchange Structures: An Experimental Study of Uncertainty, Commitment, and Trust, 100 Am. J. Socio. 313 (1994) CITE.
themselves locked in conflict over who should bear the loss or claim the gain, prosocial partners
could resolve the question far more easily—say, by splitting the unanticipated gain or loss—
because they share, to at least some extent, the common goal of promoting their mutual (not only
individual) welfare. Nor do prosocial partners need to reduce every detail of their bargain to
writing. They trust each other to focus on performing, not on selfishly searching for loopholes.
Finally, even when some element of performance is unobservable or unverifiable, prosocial
partners will try to hold up their end of the deal.
In brief, an implicit “term” of relational contracts seems to be that each party agrees that,
in performing, she will suppress her Mr. Hyde personality and adopt a Jekyll-like attitude toward
her counterparty. As Ian Macneil has put it, a relational contract is just that—a relationship—
characterized by (among other attributes) “role integrity,” “flexibility” and “reciprocity.”86
Using the language of behavioral science, a relational contract creates a social context conducive
to unselfish behavior. The spectrum from simple discrete contracts to complex, incomplete
relational contracts accordingly can be viewed as a spectrum from Hydish behavior toward one‟s
counterparty, to Jekyllish behavior.
This approach offers a number of insights into the questions of how relational exchanges
really work, and how contract law and contract lawyers can make them work better.87 But it
carries especially important implications for “pay for performance” contracting. This is because
the behavioral evidence indicates that employment contracts that rely on material incentives to
motivate performance simultaneously suppress the vital force of conscience—essential for
relational contracting--and so can encourage undesirably selfish, opportunistic, and even illegal
behavior. Incentive-based pay does this through at least three different, but mutually-reinforcing, mechanisms: by changing perceptions of social context in ways that encourage
selfishness; by creating material temptations that can extinguish conscience; and by introducing a
selection bias against individuals with relatively prosocial characters.
Social Context and “Crowding Out”
Let us begin by examining how pay-for-performance schemes frame social context. As
discussed in Part II, in choosing between asocial and prosocial behavior, people pay close
attention to social context. Contract negotiations, however, provide a social context that can be
ambiguous. Is the contract in question a discrete contract, in which case selfish behavior may be
appropriate and expected? Or is it a relational contract calling for trust, cooperation, and mutual
regard for each other‟s interests? In extreme cases (buying a car versus negotiating a prenuptial
agreement) the distinction is clear. But in other cases—and especially in employment
contracts—the contract may have both discrete and relational elements.
In an ambiguous situation, an actor who wants to rely on her contract partner‟s
conscience wants to signal as clearly as possible that performance calls for mutually considerate
rather than arm‟s-length behavior. Yet what signal does an employer send when it uses large ex
ante financial incentives to motivate its employees? The “pay for performance” approach
inevitably signals that the employer in question views the employment relationship as an arm‟s
length exchange in which self-interested behavior is appropriate, expected, and even encouraged.
86. Ian R. Macneil, Relational Contract Theory: Challenges and Queries, 94 Nw. U. L. Rev. 877, 897 (Spring 2000).
87. See generally Stout, Cultivating Conscience, supra note __ at 175-99.
This is likely to induce the behavioral phenomenon social scientists call “crowding out” or
“motivational crowding.” In one classic study of motivational crowding, researchers studied ten
day-care centers where parents occasionally arrived late to pick up their children, forcing the
teachers to stay after closing time. The researchers convinced six of the centers to introduce a
new policy of fining the parents who arrived late. The result? Late arrivals increased
significantly.88
From an economic perspective, this was a bizarre result. How can raising the price of an
activity prompt people to purchase more of it? The answer, according to crowding out theory, is
that by changing the social context to look more like a market, fining parents who arrived late
signaled that lateness was not a selfish faux pas but a market decision parents were free to make
without worrying about teachers' welfare. By emphasizing external material incentives, the day-care centers crowded out “internal” incentives like guilt and empathy.
Similarly, incentive-based pay can be expected to crowd out unselfish employee motives
like trust, loyalty, and commitment. This is because emphasizing material incentives
manipulates each of the three social cues we have focused on—instructions from authority,
beliefs about others‟ prosocial or asocial behavior, and perceptions of benefits to others--in a
fashion that promotes selfishness.89 First, offering a material incentive to induce an employee to
perform a particular act inevitably sends the unspoken signal that the employer expects selfish
behavior, and indeed views it as appropriate to the task at hand. Second, when pay-for-performance schemes are used widely, they support the perception that other employees are
likely to behave selfishly (not to mention signaling the selfishness of the employer, who
proposes to withhold compensation regardless of circumstances unless its performance metrics
are met). Third, pay-for-performance schemes imply that employee selfishness benefits the
employer. Otherwise, why would it be rewarded?
This analysis offers insight into what Bradley Birkenfeld might have been trying to say
when he told the judge that UBS had “incentivized” him to help its clients evade paying taxes.
Birkenfeld was not suggesting he was excused from breaking the law simply because he received
a material benefit from doing so; no judge would be sympathetic to the notion that self-interest
justifies illegality. Rather, Birkenfeld was saying that, by incentivizing him to help its clients
evade taxes, UBS had created a social context that gave him permission to do so.
Material incentives, it turns out, do more than change behavior. At a very deep level,
they change motivations. Emphasizing self-interest turns out to be a self-fulfilling prophecy. By
treating people as if they care only about their own material rewards, we ensure that they do.
Conscience, Temptation, and Cognitive Dissonance
Even if only at an intuitive level, many employers recognize the value of promoting
employee trust, loyalty and commitment, and (incentive-based pay schemes notwithstanding)
attempt to manipulate social context in the workplace to promote unselfish employee behavior.
The strategy can be as simple as posting a sign that reads “Customer Service is Our Priority,” or
88. Uri Gneezy and Aldo Rustichini, A Fine is a Price, 29 J. Legal Stud. 1 (2000).
89. Stout, Cultivating Conscience, supra note __ at 249-252.
as elaborate as a week-long corporate retreat at which executives attend lectures by motivational
speakers and go rock-climbing and white-water rafting.
But the most careful effort to create a social context that supports prosociality can run
aground on the rock of self-interest. This is because, as discussed in Part II, conscience works
best when it does not conflict too directly with self-interest. Unlike Oscar Wilde, most of us can
resist small temptations. It is the big ones that do us in.
And pay-for-performance schemes create very big temptations indeed. This is especially
true in corporate environments, because the hallmark of the American public corporation as a
business form is that it permits the accumulation of enormous assets that are formally owned
only by a fictional entity.90 Pay-for-performance contracts based on metrics subject to
employees‟ control, including metrics that employees can manipulate or falsify, create tempting
opportunities for employees to try to expropriate these enormous assets. Although in theory
executives and employees are subject to indirect supervision by the corporation‟s board of
directors, in reality, once directors agree to ex ante incentive contracts, they have effectively
ceded a great deal of control over the firm‟s assets to the firm‟s executives and employees. And
to the extent the incentive contracts are incomplete—as all incentive contracts must be—they
have also presented executives and employees with opportunities to try to expropriate the
corporation‟s enormous assets through unethical, opportunistic, or illegal behavior.
Thus corporate “principals” that rely primarily on ex ante material incentives to motivate
their employee “agents” are playing a dangerous game. It is almost always possible, and
sometimes far easier, for an executive or other employee to meet a performance metric not by
working hard, but through unethical or illegal behavior. In the case of Enron‟s bankruptcy,
executive stock option grants intended to motivate employees to “maximize shareholder wealth”
in fact motivated them to commit a massive accounting fraud. As Franklin Raines, then the CEO
of Fannie Mae,91 described the causes of the Worldcom and Enron scandals in an interview in
Business Week, “You wave enough money in front of people, and good people will do bad
things.”92 Employees who would never think of shoplifting or other small acts of larceny will
ignore the voice of conscience if the opportunity for a hugely profitable fraud comes along. A
workplace that relies on large material incentives to motivate employees thus is also a workplace
that suppresses the force of conscience.
Moreover, once otherwise-honest individuals succumb to temptation and indulge in
unethical or illegal behavior, they become more likely to cross ethical lines again in the future,
and more easily. It is a truism among those who study business frauds that white-collar
offenders usually start with small violations before escalating into full-blown criminality. The
reason, many psychologists believe, has to do with the phenomenon known as “cognitive
dissonance.”93 Cognitive dissonance theory posits that people desire consistency, including
consistency between their beliefs and their actual behavior. When their actions become
inconsistent with their attitudes, they are remarkably talented at changing their attitudes, and
rationalizing their actions to restore apparent consistency. The result is that when people are
given strong incentives to do things they are otherwise reluctant to do, they respond to the
90. Blair, Capital Lock-In; Hansmann and Kraakman; Stout, Nature of the Corporation CITE.
91. DISCUSS Fannie Mae's subsequent scandal.
92. "The Disastrous Unexpected Consequences of Private Compensation Reforms," Testimony of William K. Black before the House Committee on Oversight and Government Reform, October 28, 2009, at p. 7.
93. See generally Joel Cooper, Cognitive Dissonance: Fifty Years of a Classic Theory (2007).
incentives by performing the act they are averse to, then respond to the inconsistency between
their beliefs (“I should not do this”) and their behavior (“I did this”) by changing their beliefs
(“Since I did this, it must be something I should do.”)
Thus “induced compliance” shifts people‟s views about the appropriateness of their own
conduct because “in the battle between changing one‟s attitude and changing one‟s behavior,
attitudes are the easiest to change.”94 Once incentive-based pay tempts employees into
opportunistic or illegal behavior, they change their beliefs about what is opportunistic or illegal
in order to rationalize their actions so they can continue to think of themselves as fundamentally
ethical and law-abiding. This makes it much easier for them to justify similar unethical or illegal
behavior to themselves in the future.
Pay-for-performance schemes thus create criminogenic environments that first tempt
honest individuals into unethical or illegal behavior, and then invite them to adopt looser views
about what is unethical or illegal in the first place. As they say in business, pressure makes
diamonds. It also makes felons.
Selection Bias and the Question of “Character”
So far we have focused on how incentive-based pay discourages prosocial behavior even
among employees who are easily capable of acting prosocially. As noted earlier, however,
individuals differ substantially in their inclinations toward prosocial behavior. Although few of
us are psychopaths without a conscience, some people are more inclined toward conscientious
behavior than others are. This, too, has implications for the wisdom of emphasizing ex ante
incentives as motivations.
In particular, when employers rely on incentive-based pay schemes that create
opportunities to reap massive personal payoffs through opportunistic behavior, we can expect
those employers to attract more than their share of opportunistic employees. It is no coincidence
that Wall Street‟s executives and employees are widely perceived to lack both empathy and
ethics. (Consider Rolling Stone's now-famous description of investment bank Goldman Sachs as
“a great vampire squid wrapped around the face of humanity, relentlessly jamming its blood
funnel into anything that smells like money.”) 95 Investment banks and other financial
firms are notorious for offering their employees incentive compensation packages that create
opportunities to reap millions of dollars.96 Such incentive schemes naturally attract the
relatively opportunistic, because they perceive opportunities for personal gain that individuals
who are more constrained by personal ethics would discount as out-of-bounds and unavailable.
Moreover, once a workplace begins to attract more than its share of relatively
opportunistic or unethical employees, through a variety of different but mutually-reinforcing
effects it will also begin to repulse the relatively prosocial, and to subvert the prosocial
employees who remain at the firm into committing their own ethical lapses (which, given
cognitive dissonance, diminish their prosociality). This phenomenon has been described in detail
by William Black, an expert on white-collar crime, as a “Gresham‟s dynamic in which bad ethics
94. Id. at 15.
95. Is Goldman Sachs Evil? Rolling Stone; see also Michael Lewis, The Big Short (discussing absence of conscience among Wall Street derivatives traders). CITE
96. CITE examples.
drives out good ethics.”97 Black, who served as deputy staff director for the federal commission
that investigated the causes of widespread fraud in the savings and loan industry in the late
1980s, concludes that incentive-based pay schemes created a similar Gresham‟s dynamic in the
case of the recent subprime mortgage crisis. The practice of paying loan brokers compensation
based in large part on the number of loans they originated led to a rapid deterioration in broker
ethics and quality of mortgages, with subsequent disastrous effects.98
There are several reasons why workplaces that attract more than their share of
opportunists drive out prosocial behavior. First, relatively ethical employees may conclude they
are at a competitive disadvantage, and decamp for greener pastures where their prosocial
proclivities are more valued. Alternatively, the relatively ethical may conclude they can no
longer afford to be so squeamish, and decide to dispense with their ethics. Second, as the
population of a workplace becomes dominated by opportunists, with fewer and fewer
conscientious employees, the risk that an opportunist's misconduct will be revealed by a more-ethical whistleblower declines. Third, as discussed in Part II, as a workplace becomes crowded
with opportunists, this changes social context. When “everybody does it” (whether “it” is
approving low-quality mortgage loans, committing accounting fraud, or cheating on income
taxes), it is easy to conclude that you can do it, too.
Adverse selection pressures accordingly lead workplaces that rely on pay-for-performance schemes to attract a disproportionate share of relatively unethical and opportunistic
employees. Once this occurs, the result can be a self-reinforcing dynamic in which prosocial
individuals and prosocial behaviors are driven out. It may even be possible to reach a tipping
point in which opportunistic behavior becomes so prevalent that prosocial behavior within the
company virtually disappears. Think of Enron, Countrywide Financial, or (in Rolling Stone’s
opinion) Goldman Sachs. The firm becomes, in effect, a criminal enterprise populated primarily
by psychopaths--at least until they get on the elevator and go home.
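The self-reinforcing quality of this dynamic can be illustrated with a toy model. The simulation below is a stylized invention for exposition only; it is not drawn from Black's work or from any empirical study. Each period, if the share of opportunists exceeds what prosocial employees will tolerate, some of them quit or loosen their own ethics.

```python
# Toy model of a Gresham's dynamic in a workforce. Entirely hypothetical:
# the threshold and drift parameters are invented for illustration.

def gresham(opportunist_share: float, threshold: float = 0.3,
            drift: float = 0.15, periods: int = 10) -> list[float]:
    """Track the opportunist share of a workforce over several periods."""
    history = [opportunist_share]
    for _ in range(periods):
        if opportunist_share > threshold:
            # prosocial employees decamp, or stop being prosocial
            opportunist_share = min(1.0, opportunist_share + drift)
        history.append(round(opportunist_share, 2))
    return history

print(gresham(0.20))  # below the threshold: the share holds steady at 0.2
print(gresham(0.35))  # past the tipping point: 0.35, 0.50, 0.65, ... 1.0
```

Below the threshold the firm is stable; past it, each departure makes the workplace less hospitable to those who remain, and the share of opportunists ratchets up toward one.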
Summary
As its name suggests, optimal contracting theory embraces the quest for the complete
employment contract that perfectly aligns the interests of agent and principal so that all “agency
costs” disappear. This quest, contract scholars concede, is a bit like the quest for the Holy Grail.
No perfect contract is possible, and gaps inevitably remain. What fills the gaps? According to
rational choice theory, only reputation can, and any contractual gap that cannot be filled by
reputational concerns will be exploited opportunistically and become a source of agency costs. But
(again according to the theory) this should not discourage us from the quest to design ex ante
incentive contracts that are as complete as possible, for without such contracts, opportunism is
inevitable and uncontrollable.
Behavioral science offers a different perspective. Conscience also can fill the gaps in
incomplete relational contracts, and motivate prosocial contract partners to perform even when
there is no realistic threat, or insufficient threat, of legal or reputational sanction if they don‟t.
This possibility deserves our attention, for behavioral science also teaches that a workplace that
emphasizes ex ante financial incentives will tend to suppress the force of conscience, by shifting
97. Black, supra note __, at 2.
98. Id. at 9.
social context, creating material temptations, and creating selection pressures that favor the less-conscientious.
But if we don‟t use ex ante financial incentives to motivate employee performance, what
shall we use to motivate them? In a capitalist society—perhaps in any society—few people are
willing to work long and hard but receive nothing in return. (When you take from each
according to their ability, and give to each according to their need, you are likely to end up with
lots of needy, incompetent people.) This article concludes by addressing the question: if we
don‟t use high-powered financial incentives to motivate people, what can we use?
IV. CONCLUSION: ALTERNATIVES TO “PAY FOR PERFORMANCE”
Behavioral science teaches that people are far more unselfish and prosocial than rational
choice theory typically presumes. When the social context is right and personal cost is not too
high, the vast majority of individuals are willing to act like Dr. Jekyll, and sacrifice their own
material payoffs to follow ethical rules and to help or to avoid harming other people.
But the caveat “personal cost is not too high” suggests that it is important not to ask
conscience to bear more weight than it is ready to bear. Charities recognize that endless requests
for contributions can eventually lead to “donor exhaustion.” Similarly, wise employers
recognize employee prosociality has its limits. One can only get so much commitment, loyalty
and hard work for nothing. Eventually—perhaps soon—the siren call of self-interest invites
even the most dedicated agent to ask, “what‟s in it for me?”
Something must be. Many Americans volunteer their time for various worthy causes, but
when it comes to full-time employment, most insist on being paid. It is important to recognize,
however, that critiquing incentive-based pay is not the same thing as critiquing the general idea
of paying compensation. There are lots of ways to compensate and reward executives and other
agents for their efforts, beyond using large, material, ex ante contractual incentives.
The Opposite of Pay For Performance?
Indeed, behavioral science suggests that for many highly-incomplete employment
contracts, employers would do well to employ an approach to compensation that is the exact
opposite of relying on incentive-based pay. That is, rather than trying to motivate employees by
promising rewards that are large, material, and determined ex ante based on objective metrics,
behavioral science suggests firms might often do better to emphasize rewards that are modest,
nonmaterial, and determined ex post on the basis of subjective evaluations.
Starting with the advantages of keeping rewards modest, there is reason to suspect that
firms that avoid offering very large financial incentives can benefit from employee selection bias
because they are more likely to attract relatively prosocial individuals, and less likely to attract
selfish opportunists.99 Of course, there are some businesses—used car dealers, private hedge
funds—that may want to attract selfish opportunists, because their employees perform tasks that
are simple and certain and employee performance is relatively easy to observe, making it feasible
to design more-complete employment contracts that leave less room for employees to exploit the
employer. Many types of modern businesses, however (schools, hospitals, public corporations)
99. Robert H. Frank, What Price the Moral High Ground? Ethical Dilemmas in Competitive Environments (2004).
must necessarily use employment contracts that are far more incomplete and leave greater room
for opportunistic behavior. (James Sinegal, the CEO of Costco, works under an employment
contract whose terms supposedly “fit on a cocktail napkin.”)100 In such cases, firms that can
attract conscientious rather than purely self-interested employees—teachers who want students to
learn, doctors who want to help patients, CEOs who want to leave a legacy rather than simply
take as much money as possible—have an advantage. Assuming employees can be motivated by
compensating them another way (more on this below), limiting incentive awards has other
advantages as well. For example, it reduces the risk that a large opportunity for personal gain
might tempt a prosocial person to ignore her conscience. It also avoids sending the sort of social
signals that shift behavior in a Hydish direction by suggesting that employees are expected to
work only for self-serving reasons.
Similarly, behavioral science suggests that for many tasks, emphasizing nonmaterial
rewards—greater job responsibilities, a better parking space, an “Employee of the Month”
plaque—may work as well as, or better than, emphasizing material rewards like cash bonuses or
stock options.101 Most obviously, job titles and award plaques cost firms much less to provide.
But there are psychological as well as economic advantages. Unlike monetary rewards, which
have intrinsic value apart from social context, nonmonetary rewards appeal to employees‟ desire
for status and esteem.102 Such motivations naturally focus attention on social context, rather than
personal financial circumstances—exactly where we want to focus attention to encourage
prosocial behavior. As importantly, nonmonetary rewards seem to do a better job of preserving
intrinsic employee motivations like interest, creativity, and desire for mastery. In his bestseller
Drive, Daniel Pink emphasizes this advantage, surveying the extensive experimental evidence
that demonstrates how the prospect of monetary rewards often reduces individuals‟ performance
on tasks requiring creativity and persistence.103
Of course, man or woman cannot live on “Employee of the Month” awards alone. Even
the most prosocial employee has to pay the rent and buy groceries. Thus employers must pay
employees financial compensation. But behavioral science argues against financial
compensation that is predetermined ex ante on the basis of objective metrics (the pay for
performance approach). Rather, we should set financial compensation ex post, on the basis of the
employer's subjective satisfaction with the employee's performance.
The idea of setting employee compensation subjectively and after-the-fact is in direct
conflict with optimal contracting theory, which predicts that because principals are just as
opportunistic as their agents are, no agent would ever bother to perform simply because a
principal said “do a good job, and I‟ll reward you appropriately.” Ex post compensation requires
employees to trust in their employers‟ trustworthiness. Trust and trustworthiness, in turn, are
prosocial behaviors that play no part in optimal contracting theory. They do, however, play an
important part in real human behavior.
Consider an interesting variation on the social dilemma game called (appropriately
enough) the “trust game.” A trust game is simply a social dilemma in which two players act
sequentially rather than simultaneously. One of the two subjects (the “trustor”) is first given a
sum of money, say $100. Both subjects are told the trustor can choose to contribute some or all
100. Gretchen Morgenson, Two Pay Packages, Two Different Galaxies, N.Y. Times, Sec. 3 p. 1 (April 4, 2004).
101. Ernst Fehr, CITE SSRN paper of awards.
102. Richard McAdams, CITE.
103. Daniel H. Pink, Drive: The Surprising Truth About What Motivates Us (2009).
of the $100 to an investment fund, which the researchers will triple and then give to the second
subject (the “trustee”). The trustee then gets to choose whether she wants to keep the tripled
funds entirely for herself, or return all or some portion back to the trustor. In a well-designed
trust game where subjects play only once and anonymously, no rational and selfish trustee would
ever donate any of the tripled funds back to the trustor. Anticipating this, no rational and selfish
trustor would ever donate any of his initial stake to the investment fund. In real trust games,
however, the trustor typically shares more than half his funds, and the trustee typically repays the
trustor with a slightly larger amount.104
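The mechanics of the game, and the gap between the rational-selfishness prediction and typical play, can be captured in a few lines. The specific amounts sent and returned below are hypothetical stand-ins for the typical experimental pattern, not data from any particular study.

```python
# Sketch of the one-shot trust game described above: a $100 stake, with
# whatever the trustor sends tripled before it reaches the trustee.

STAKE = 100.0
MULTIPLIER = 3.0

def trust_game(amount_sent: float, fraction_returned: float) -> tuple[float, float]:
    """Return (trustor payoff, trustee payoff)."""
    tripled = amount_sent * MULTIPLIER
    returned = tripled * fraction_returned
    return STAKE - amount_sent + returned, tripled - returned

# Homo economicus prediction: the trustee would return nothing, so the
# trustor, anticipating this, sends nothing.
print(trust_game(0.0, 0.0))    # (100.0, 0.0)

# Illustrative "typical" play: the trustor sends more than half the stake,
# and the trustee returns somewhat more than was sent.
print(trust_game(60.0, 0.4))   # (112.0, 108.0) -- both end up better off
```

Note that when trust is extended and reciprocated, both parties finish well ahead of the rational-selfishness outcome, which is exactly what makes the trust-based employment relationship described next economically viable.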
Employment relationships that rely on ex post compensation are directly analogous to the
trust game. The employer first trusts the employee by committing to pay him a salary that is not
contingent on meeting objective metrics. Next, the employee trusts the employer by working
harder and more honestly than the employer could force him to work under the terms of the
formal contract. Then, the employer reciprocates the employee's trust by giving the employee a
raise and more job responsibilities. This process of reciprocal trust and trustworthiness continues
until either the employee retires, or one of the parties fails to reciprocate and the employment
relationship is severed because the employee either quits or is fired.105
Lessons From History
At this point, any seasoned businessperson over the age of 50 should be experiencing
déjà vu. Optimal contracting theory recommends that employers seek to negotiate employment
contracts that are as complete as possible and that emphasize large material rewards that are tied
to objective performance metrics determined ex ante. Behavioral science, however, counsels
that when contracts are seriously incomplete, we might do better to adopt the opposite approach:
use relatively modest rewards, emphasize nonmaterial rewards, and set financial compensation
ex post on the basis of the employer's subjective satisfaction with the employee's performance.
This second approach is exactly what the business world in fact relied on before Congress
passed tax legislation encouraging corporations to tie executive pay to performance.
Before the adoption of IRC Section 162(m) in 1993, stock options and other forms of
incentive pay tied to objective metrics played a far less important role in executive compensation
practices than they do today. (A 1988 paper by George Baker, Michael Jensen, and Kevin
Murphy critiqued prevailing compensation practices as “largely independent of
performance.”)106 CEOs and other executives were typically rewarded with relatively modest
salaries along with a variety of noncash perquisites such as nicer offices, better parking
spaces, and promotions to larger divisions. Cash bonuses were common but relatively modest
and set after-the-fact, on the basis of the employees‟ performance as viewed subjectively by the
company‟s board of directors or senior managers. In other words, the business world followed
exactly the sort of compensation practices behavioral science recommends. Moreover, at least
judging from pre-1993 corporate performance and investor returns, the system worked
reasonably well.
104. Stout, Cultivating Conscience, supra note __ at 9.
105. CITE Gilson, Sabel, Scott, Braiding, __ Colum. L. Rev. __ (2010).
106. George P. Baker et al., Compensation and Incentives: Practice v. Theory, __ J. Fin. 593 (1988) CITE.
History accordingly suggests that when they are left to their own devices, businesspeople
tend to be pretty good intuitive behavioral scientists. Indeed, they seem superior in this regard
to the academics and regulators who argue we must tie pay to performance (many of whom,
despite their dedication and hard work, seem not to have noticed the rather obvious fact that their
own modest pay isn‟t tied to much of anything).
What does this imply about the ideology of incentives? Most important, that it is just
that: merely ideology, and dangerous ideology to boot. America‟s greatest achievements in the
twentieth century—sending humans to the moon, winning World War II, beating polio, building
great global public corporations like IBM, Ford, Xerox, and General Electric—were all
accomplished without the aid of “optimal contracts.” Yet despite the absence of reliable
empirical evidence to support it, a belief that incentive-based pay is essential to good
performance has not only captured many corporate boardrooms, but is spreading to our schools,
hospitals, and newsrooms as well. Behavioral science and history both caution against this
development.