Chapter 11. Failure to Engage
The first ten chapters of this book have made a case that morality is necessarily a personal
relationship between two or more moral agents. But what is wrong, it may be asked, with also
using a system of ethics based on the relationship between single individuals and universal
ethical principles? As it turns out, there are some good reasons to think that abstract ethics
undermines morality. Let’s look at the evidence.
The Oresteia is a trilogy of plays by Aeschylus (1956), first performed in Athens two
years before the author’s death in 456 BC. It can be read as a narrative about alternative moral
framing. In three episodes and ample flashbacks, it recounts three generations of the tragedy in
the House of Atreus. It is a catalogue of moral depravity and a glimpse into how the ancient
Greeks reacted to what we today would call wholesale immorality.
Atreus, Orestes’s grandfather, and his twin brother Thyestes indulged in the usual
usurpations, duplicity, murders, and incest. The biggy was Atreus killing his brother’s sons and
tricking Thyestes into eating them at a banquet. Modern readers find it difficult to understand
that all the odium piled up on Thyestes, and that he was hounded into exile by the Furies, acting
as agents of the gods. (It remains the case even today in some cultures that a woman who has
been raped is expected to feel shame.)
In the second generation, Atreus had two sons – Agamemnon who married Clytemnestra
and Menelaus who married Clytemnestra’s sister Helen (whose illicit favors Aphrodite used to
bribe Paris and win the world’s first beauty contest and thus start all that commotion in Troy).
Thyestes became both the father and the grandfather of a sluggard named Aegisthus. When the
brothers Agamemnon and Menelaus assembled an army to retrieve Helen from Troy, the gods
imposed a test. They blockaded the Greek fleet with a continuous contrary wind and let it be
known that Agamemnon could proceed only if he sacrificed his daughter Iphigenia. He killed
her, and off they went for a ten-year war in Asia Minor. There is very little to admire in this
picture of the moral community or their understanding of the prevailing norms established by
divine decree.
The action of the Oresteia picks up with Agamemnon’s triumphant return home to
Argos. His wife, Clytemnestra, binds Agamemnon in a robe (a royal straitjacket) during a
ceremonial cleansing and stabs him to death. Although all Greece recognized the sacrifice of
Iphigenia as a religious necessity (like Abraham and Isaac) required and approved by the gods as
proven by the victory at Troy, a mother’s hatred was not so easily cooled. Besides, Clytemnestra
had a live-in lover and faux king on hand named Aegisthus the sluggard, and her husband’s
return was awkward.
In the second drama in Aeschylus’s trilogy, The Choephori, Clytemnestra and cousin
Aegisthus are murdered by Electra, the daughter of Agamemnon and Clytemnestra who has been
a Cinderella kind of slave in the household for ten years, and Orestes, her brother who has been
in exile in Athens during his youth.
The third part of the tragedy, the Eumenides – which means the kind or good ones – is
the critical episode. Orestes is now being pursued by the Furies. These are ferocious, persistent,
degrading, grotesque spirits that hound evildoers on behalf of Zeus, the great god of justice. We
may think of them as conscience, or cultural norms, or the judgment of the gods that turn a man
into an outlaw. 1 In the old system, both the rules and the punishments belonged to the gods. The
Furies tortured Thyestes for his unknowing cannibalism. Now they were dogging Orestes for
matricide (the killing of Aegisthus was justifiable collateral damage and not a concern of the
Furies).
Zeus finds the whole affair mildly amusing. But Athena is moved and arranges something
completely unheard of: a jury trial. Twelve citizens of Athens are empaneled to hear the
accusations of the Furies and the response of Orestes and his defense team headed by the senior
partner of the firm Apollo and Associates. Athena is the judge and has the deciding vote in case
of a tie. It is a hung jury, but Athena casts her lot for Orestes and against the justice of the “old
gods.”
This trilogy is a turning point in western civilization and the history of morality. More is
at stake than rearranging the catalogue of good and evil acts. Humankind has been granted a role
in determining the good and the right, rather than merely “receiving” it. The bedrock framework
for modern ethics – communities determining what constitutes the good and the right and
individuals choosing to adhere to or work around these norms – has only been in practice for about
2,500 years.
It is beyond belief today that Thyestes was morally depraved when tricked into eating bits
of his children while Atreus was in the clear for having killed his brother’s children and deceiving
their father into an affront to the gods. That is not just strange; it is perverse logic to us. In the
view of the old gods, Clytemnestra was justified in murdering her husband, Agamemnon,
because he was not a blood relative. The Furies argued that Orestes was outside the bounds of
civil society because he had killed his mother, a blood relation. The tribal norms of earlier times,
and to some extent of parts of the world today, embrace two sets of ethical standards – one
within the tribe and another for strangers. (Curiously, the altruistic norm of hospitality has
always extended to friends and strangers. That is why Clytemnestra was compelled to bring the
unrecognized Orestes into her house, making possible her own murder.)
It was standard form to end Greek tragedies with a deus ex machina. Literally, someone
representing one of the folks from Olympus was lowered onto the stage in a contrivance of ropes
and pulleys. The little speech at the end explained that the actions just witnessed may have
looked like agents determining their own fates by what they did, but the gods had already
arranged matters among themselves. The concept of free agency did not exist until a few
thousand years ago. Eumenides is different. The play ends with a long dialogue between Athena
and the chorus of Furies where it is reported that Zeus Enterprises is downsizing and the Furies
need to go on food stamps.
The revolution in fifth century BCE Greece and in subsequent Western civilization was
nothing less than shifting morality from an externally given and managed system to a community
engagement, affecting and affected by humans. 2 Under the old god system, supernatural forces
pronounced what is good and bad, as well as judging and punishing it. The prestige roles for
humans were as priests and interpreters. The Furies were gods-given punishment. Ostracism,
confiscation, and hounding were appropriate, even required for individuals, and mass
punishments like earthquakes, floods, and defeat in battle represented justice for communities.
Killing of one who had offended the gods was a duty, especially revenge by the family. Killing
of one who had not offended the gods was unacceptable. There was no police force, and no one
was locked up for years at a time. Remnants of the system can be found today in tribal
cultures, in totalitarian regimes, in voodoo, and in the throwback to trial by ordeal to smoke out
witches or Communists.
It is not part of human nature to leave events unexplained and happenings that disturb us
unblamed. When a man died or was injured by falling from a tree or cutting himself with a sickle
in ancient Athens, the tree or sickle was placed on trial. In the Iron Age, motives were given to
inanimate objects, including general forces such as earthquakes, when that fulfilled a need to
make the world understandable. Today, we typically regard supernatural causes as superstition
and allow agency only to persons or animals under specific conditions.
During Old Testament times, morality was God-to-man, not man-to-man, and setting
matters straight was left to revenge killing, famine, plague, and the Assyrians and then the
Babylonians. A representative proof text is found in the fifteenth chapter of First Samuel. The
prophet Samuel announces to the Hebrews’ first king, Saul, that he will be driven out by David
and the land divided into two kingdoms because Saul was disobedient to the commands of God.
The issue concerned the Amalekites, who a few hundred years previously had refused hospitality
to the Hebrews as they fled Egypt. Saul used a large army to defeat the Amalekites, killing the
people but sparing the best of the oxen and sheep to offer to God. But that was not sufficient.
God had commanded: “Utterly destroy all that they have, and spare them not; but slay both men
and women, infant and suckling, ox and sheep, camel and ass.” Saul only slaughtered the men,
women, and children, while sparing the animals. By any standard of reason or fellow feeling
current today, this behavior would be unethical, even politically indefensible. There
has been an evolution, not only in which acts are moral and which are not, but in the yardstick
used to measure what is good and right.
Philosophy was born out of the massive shift in perspective that began 2,500 years ago
and gave humankind responsibility for our actions in community. We have been struggling with
how best to manage this power ever since, and we probably do not have it quite right yet. 3 Many
philosophers still play the priest role, offering various new brands of ethical principles. Mankind
stole moral agency from the gods, but the owner’s manual was badly out of date. That much we
owe to Aeschylus and to the biblical creation myth about the knowledge of good and evil – such
knowledge being the very definition of ethics. But we had fair warning in our religious
traditions, and Aeschylus tipped us off about hung juries.
A continuous theme in this book has been that morality is better grounded in the
relationship among individuals than in the relationship of individuals to transcendent principles.
It has been demonstrated that we can in fact make the world better by granting moral agency to
others and by picking mutual courses of action that neither has any reason to alter rather than by
exhortations to approximate conformity to gods-given standards or norms derived from
philosophy. We make the rules together and we play by the rules together, and that makes us
who we are.
In this chapter I look at a curiosity. It turns out that orienting toward the good and the
right in general or on principle alienates us from doing the good and the right in particular
situations. There is a positive antagonism. Humankind is social by nature, and almost every
approach to making mutual moral decisions is better than doing nothing. Failure to engage
undermines morality, and ethical principles often disengage us from others.
Undermining the Moral Community
Here is a seeming contradiction. An easy way to dodge the need for treating others as equal
agents in moral engagements is to increase the number of others in the community. As ethical
imperatives become more universal in nature, their authority over our particular behavior fades.
Where we must deal with a few other agents, we recognize that each might influence our
behavior and that our choices matter to at least some of them in a personal way. Where there are
many, we respond to the average or a subset of the whole, and we become part of the general
background. Large groups dilute individual agency. Our actions lose their moral flavor as they
become impersonal and no longer ours (Buchanan 1965). We tell ourselves that we are not really
hurting anyone in particular if we act expediently. We are just temporarily transgressing on
some vague principle whose boundary and interpretation are fuzzy anyway. That allows us to
cheat a little. After all, others are likely to have noticed this opportunity to game the system, and
they might already have a head start. If we can find a justificatory principle, others are just part
of the background.
It was shown in Chapters 3 and 4 how any moral engagement could be approached in
several ways. Many common alternatives such as ego-centrism, caring, and even “cheating”
decision rules are all somewhat effective, but not as good overall as RECIPROCAL MORAL
AGENCY.
Everything, except CONTEMPT, is better than failure to engage. We are the most social
of all creatures. Being an outcast, solitary confinement, and capital punishment are such heavy
penalties because they strike at the essence of what it means to be human – connectedness. The
connection that matters is the one between individuals, not the one between an individual and a
principle. 4
Kitty Genovese has become a byword for this obvious secret regarding human nature
(Ross and Nisbett 1991). In 1964, in a section of Queens, New York, Genovese was assaulted
and stabbed repeatedly over a 30-minute period. Subsequent police investigations found that 38
people had seen or heard the incident. No one intervened; no one even phoned the police. It is
likely that all witnesses would describe themselves as good, ethical citizens and that they would
be regarded as such by their neighbors. We can even imagine that these individuals would check
the box that said “yes” when asked whether they have an ethical obligation to help those in need.
This is the case that defines “spectator ethics”! The point of this story is not to embarrass
anyone or even to rehearse the gap between ethics and morality. The fact that intact principles
sometimes fail to get the moral job done is what needs explaining. Genovese’s story has
prompted research that teaches us something about the costs of failure to engage morally.
The 2 x 2 moral engagement matrix is often collapsed into a 1 x 2 parody where the cells
represent only the two strategies open to a single agent, not a pair of agents. The other disappears
into an undifferentiated background. This is an engagement of the individual against the world –
some against the rest. In such a caricature, the only rational agent left to serve as judge is the solo
ego.
John Darley, Bibb Latané, and other social scientists have studied this effect under
controlled conditions. 5 A typical protocol involves having research subjects complete a
questionnaire (which really has nothing to do with the study) while being interrupted during this
process with an “emergency” that would normally reframe the situation so that a civil response is
expected. In one study, smoke was piped into the ventilation panel in the room where the
questionnaire was being completed. In another, a female research assistant left the room and a
loud noise was heard as though she had been hit or taken a fall. There was also a variation where
an epileptic seizure was simulated in the hall just outside the door. The question was whether the
subject would stop answering the questionnaire and take helpful action.
The answer is generally yes. In the three conditions described, the moral response of
interrupting the routine and offering help was taken by 75%, 70%, and 85% of the subjects. But
here is the twist. If there were three research subjects in the room instead of one, the probability
that any of them would intervene in the first case (smoke) dropped to 38%. When there were two
subjects in the case of a simulated fall, only 40% responded. When there were five subjects who
witnessed the simulated epileptic attack, only 40% responded. When subjects were paired with
confederates who were instructed to ignore the moral call, the proportion of responsive subjects
dwindled to the 10% range.
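A quick back-of-the-envelope check makes the size of this effect vivid. If each of three bystanders acted independently at the 75% rate observed for solitary subjects, at least one of them would intervene nearly every time; the observed group rate of 38% shows that the presence of others suppresses each individual’s willingness to act. The sketch below works out that arithmetic; the assumption of independent actors is mine, introduced only for contrast, and is not part of the original studies.

```python
# Back-of-the-envelope contrast (the independence assumption is illustrative,
# not part of the Darley and Latane protocols).

def p_at_least_one(p_solo: float, n: int) -> float:
    """Probability that at least one of n independent bystanders intervenes."""
    return 1 - (1 - p_solo) ** n

solo_rate = 0.75            # smoke study, subject alone (from the text)
observed_group_rate = 0.38  # smoke study, three subjects present (from the text)

print(f"Predicted if independent: {p_at_least_one(solo_rate, 3):.0%}")  # about 98%
print(f"Observed with three present: {observed_group_rate:.0%}")        # 38%
# The gap between the two numbers is the diffusion of responsibility.
```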
The lesson is that we respond to moral situations differently when we believe the
engagement involves only ourselves and another (the victim in this case) than when we frame the
engagement in general terms. Framing based on principle gives us cover. It is not the anonymity
that matters. It is whether we define ourselves as individuals or as members of a mass. It is too
easy in the one-of-a-group situation to endorse the ethical principle that “somebody should do
something about this terrible situation” without drawing the obvious corollary conclusion that we
are “somebody.” By framing the problem ethically as everybody’s responsibility it becomes no
longer our personal responsibility.
Darley and Batson (1973) conducted a now famous variation on the problem of
willingness to pick up a moral task. Their subjects were graduate students who were told to
complete a questionnaire and then report to another building where they would be videotaped
making a short presentation on the importance of helping others. Half of the subjects were told
that they were expected immediately and half were told that they had a little slack time before
they were needed. On the way to the taping, all students encountered a rough looking man
hunched in a doorway, groaning and complaining of pain. He called out for help. Some graduate
students stopped to at least ask whether they could do anything; many did not. The questionnaire
completed immediately before the incident had covered topics such as the ethical obligation to
assist others. There was no relationship between the attitudes expressed on the questionnaire and
actual willingness to assist the man in apparent distress. Ethics and morality were seen as living
parallel lives. Less than two-thirds of those students who had time before giving their speeches
tried to help. For those who thought they were late and prioritized punctuality above charity in
their framing, the helping rate was 10%.
The students were enrolled at Princeton Theological Seminary, and the title of
the talk they were to give was “The Good Samaritan.” Although this may seem unexpected from
the perspective of ethical principles, it can easily be mapped using the engagement frame where
helping behavior has a reasonably high perceived value and the additional factor (available time)
enters into the framing differently for the two groups of subjects. The external view of the
principle of helping others is part of our (the outside readers’) framing, not the seminary
students’.
Price Taking
The concept that is being unlocked here is best known in economics as “price taking.”
Markets can be divided into those where the number of buyers is so small that the reaction of any
affects prices and those that are so large that no one or even a few individuals make a difference. 6
If three construction firms are bidding to build a school or two consortia of entrepreneurs are
considering purchasing a professional athletic team franchise, any hint of a change in one offer
will prompt an immediate response from others in the market. If there is only one known buyer
of Japanese prints from a particular era, there will be plenty of negotiations over the price. But
when there are lots of buyers or sellers, each becomes a “price taker” and none is a “price
maker.” Whether I want one can of stewed tomatoes at the supermarket on Saturday or several
makes no difference to the price the store charges other customers.
The active ingredient in this distinction is a perception that our actions influence others in
the engagement. In the principles approach to ethics, such differences are not supposed to matter,
by definition. The very principle of “blind justice” – that everyone should be treated the same – is an argument that ethics should be governed by the logic of price taking.
We can frame the trial of Orestes in either the “old god” way as choosing from a pro and
con list of actions and their consequences or as a full, two agent matrix. On the price-taking
view, Orestes would choose, without consulting the interests of others, between adhering to
social norms by accepting tyranny, forgoing revenge, and living the rest of his life a coward or
defying standards by avenging his father, removing a tyrant, and living in exile and social
ignominy. All other outcomes were off the table. And we argue over which principle gets to call
the tune.
In the price-making structure of reciprocal moral agency, both Orestes and society have
choices to make. Killing his mother and being exonerated would have been Orestes’s first choice
[4]. Self-exile compounded by being a social outcast and suffering the derision of his mother and
Aegisthus would be intolerable [1]. Of the other possibilities, committing the murders and
suffering the consequences deemed just by society [3] would seem preferable to cowardly hiding
in society [2]. When the moral engagement is small enough so that the actions of one agent affect
the actions of others, we must also work out the framing from both points of view. Athena and
the jury matter. As Aeschylus tells the story, the best option was removing a tyrant, accepting a
hero, and risking the wrath of the old gods [4]. Sending a hero into exile would have been very
unattractive [1]. Of the remaining outcomes, punishing Orestes mildly to maintain order in the
community [3] would probably stand higher than quibbling over the case by punishing an
individual using principles that seem unfair or at least do not well suit the case [2].
If we were to work out the full moral engagement matrix and solve it using the RMA
decision rule, we would come to the Win-Win [4 4] outcome of acquitting Orestes, which is
exactly what Aeschylus gave us – but just barely. It is unlikely that such a RMA outcome would
prevail today. At civil law, the interests of one agent are weighed against the interests of another
individual, and that is the model a progressive fifth century BCE Greek would have had in mind.
Today murder is a criminal offense – an affront of an individual against society as a community,
or of nested communities. The Balanced Compromise solution of Orestes killing Mom and her
lover and getting manslaughter in the second degree [3 3] is more likely. It should also
be noted that six of the jurors on the hung jury framed the engagement so that “punish” was the
BEST-OUTCOME resolution.
People have been making collapsed, price-taking choices for eons and there is no
evidence that we are getting better at it. There is, on the other hand, compelling evidence that the
justifications offered for such choices slide around. The right way to treat those with diseases or
disabilities or to treat foster children changes across history. The priority of honor and integrity,
of autonomy and patriotism, of the sanctity of life and larger callings has flipped a bit in a kind
of large-scale floating ethical relativism. But from within normative systems the decisions are
fixed. One can either honor the norms, flout them, or try to sneak around them. When norms are
given ex cathedra, being ethical means being a norm taker, not a norm maker.
If the normative view of what is best in the world were an accurate description of human
history and had been perfectly enforced, there would be no moral progress and honor killing
would still be valued. At various times, high ethical standards called for suppression of women
and slaves, persecution and killing of deviants, and being born into life-defining castes. Our only
choice was to “take it” or leave it; we were not allowed until recently to “make it.”
Free Riding
In the social psychology and government fields, the price taking problem is often
associated with an effect called free-riding (Comes 1986). A division is made in people’s minds
between personal resources and community resources. Personal ones are accounted for in a
straightforward way as individual property or rights and obligations. 7 But the books are run
differently for community goods and responsibilities. There is a personal cost for contributing to
common resources. Being a volunteer firefighter in one’s home town involves sacrifice for the
public good. The likelihood of ever saving one’s own home is tiny. The likelihood that one will
lose a home to fire because nobody has volunteered is also small. The cost of volunteering,
although not great, is certain and noticeable; the benefit is uncertain and probably negligible.
Free riding means taking more from the common resources than one’s fair share.
Ducking the personal costs of intervening in the Kitty Genovese incident can be understood as
free riding.
There is also an opportunity cost associated with community benefits. If there is a pool of
common goods that will be wasted, or worse, go to others if not taken, each individual is making
a needless sacrifice by not getting what one can. We eat differently when ordering from the
menu and when going through the buffet line. Companies locate plants in areas with fine
schools, low housing prices, and good health and social welfare benefits. This improves the
profits for the shareholders at the expense of the tax revenues of the citizens. Firms likewise
take advantage of good transportation systems such as the interstate highway
system, the courts, and other common benefits from national sources. Only recently has there
been indignation expressed over large firms paying low wages and dodging the obligation to pay
health benefits because they realize that the public safety net will pick up the tab. Similarly,
corporations sometimes shield themselves from claims on their potential contributions to the
common good by moving taxable assets offshore while making most of their sales (where the value
is added) in jurisdictions that cannot impose taxes. The healthcare and educational benefits in prisons are
often more attractive than those on the street. Everybody manages his or her personal account
with an eye on the available subsidies from the common good.
Literal free riding is common in many places. Bus drivers pick up their friends – and bus
drivers seem to have many friends – and give them a lift for a few blocks without charging a
fare. After all, the bus is going that way and one more passenger adds virtually no cost. The bus
driver gets a little social credit or perhaps a free drink now and then at the shared expense of the
paying customers. No one is damaged enough for it to be worth making a fuss. This can be
framed by noting a small ding on the community’s framing matrix, but the impact is too small to
tip the balance to an alternative strategy because the burden is spread across “the public.” Giving
friends a free ride would be unthinkable in a private livery service with a single paying customer
in the car.
Free riding is a form of transfer from the account of the community to that of individuals
for their own benefit at the expense of the public. That makes it a moral issue. Free riding only
occurs in the case of price taking. The free rider must be able to hide as an undifferentiated
member of a class. The most natural defense against accusations of free riding is that “everyone”
does it. The free rider ceases being a proper noun.
The Swiss economists Ernst Fehr and Simon Gächter (2000) and their collaborators (Fehr
and Schmidt 1999) have performed many experiments where participants play games that
depend on cooperation. These are known as Common Good Games, and they work like this.
Each member of a small group is given a stash of real cash to manage. Agents decide how much
to invest per round and receive a payout based on the total fund invested by all players,
augmented by a bonus from “the bank” that is proportional to the total invested. The enhanced
fund – perhaps 120% of what the “investors” contribute collectively -- is divided equally among
all players. This is a munificent engagement where the best payoff for individuals is achieved
when everyone else invests heavily and the stingy player gets the same big payout that others do,
despite having put up nothing. But if everyone refuses to invest, everyone foregoes an
opportunity to participate in any of the reward from the common pool. This is a Stag Hunt
engagement where agents generally improve their situations by investing, but free riding
defectors may do even better still. In typical experiments under these circumstances, agents
converge quickly on free riding – within a round or two about 80% contribute nothing – and the game degenerates into
everyone sitting on what they have and not working together to increase the shared public good.
It is not an attractive game -- DISENGAGEMENT.
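The mechanics of such a game are simple enough to spell out. The sketch below is my own minimal rendering of one investment round as described above; the group size, endowment, and 120% multiplier are illustrative stand-ins rather than Fehr and Gächter’s exact protocol.

```python
# Minimal sketch of one round of a Common Good (public goods) game. Group
# size, endowment, and the 120% "bank" multiplier are illustrative assumptions.

def play_round(contributions, endowment=20.0, multiplier=1.2):
    """Each player keeps whatever she did not invest and receives an equal
    share of the multiplied common pool."""
    pool = sum(contributions) * multiplier
    share = pool / len(contributions)
    return [endowment - c + share for c in contributions]

# Four players: three invest everything, one free rides.
print(play_round([20, 20, 20, 0]))   # [18.0, 18.0, 18.0, 38.0]
# The free rider does best, yet universal investment (24.0 each) beats
# universal refusal (20.0 each) -- the Stag Hunt structure described above.
```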
This research appears to contradict the deep message of this book that RMA decision rules
raise the tide for all boats more than ego-centric rules do. In the Fehr games there is the expected
small increase over DISENGAGEMENT due to a few playing a RENEGING rule. But the free riders
are hurting the community in two ways. First, they violate the abstract principle of justice or the
fair distribution of benefits and burdens. But more to the point, they also dampen the total
positive pool of resources available to the community. They suppress the value of the
engagement. This is what we would expect based on the early chapters in this book.
The problem can easily be fixed. Allowing agents the option of spending some of their
own resources to punish those they regard as free riding boosts the community’s flourishing.
People are willing to “buy” a chance to reframe the engagement to make it more munificent.
Effective communities all have some such mechanism where individuals pay into a pool to
enforce the rules in the community needed to make the community as a whole work better. Taxes
are necessary for courts, public assistance, and agencies such as the Food and Drug
Administration. It is inconvenient, but helpful, to ask others to refrain from smoking, and we do
so. Whistle blowers risk being socially stigmatized or losing their jobs, but there are laws in the
United States giving legal protection and financial rewards to those who are willing to take the
risk.
Is it worth it? Fehr shows that it is. When penalizing free riders is permitted, the net
return of all participants (including former free riders and those who spend personal resources to
punish them) typically increases about four-fold over engagements where the framing does
not allow for punishment. Engagement pays, and RMA engagement pays the most. Chapter 8
showed how reframing makes new alternatives possible. Chapter 9 showed that communities can
act as moral agents. Chapter 5 demonstrated that RENEGING is a weak response, and communities
that threaten sanctions against those who ride free but fail to follow through are RENEGING. Now
we see that communities as well as individuals can negotiate for moral engagements that
optimize the value of the engagement for the community.
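The punishment variant can be grafted onto the same sketch. In the version below, any player may spend part of her payoff to fine another; the three-to-one fine-to-cost ratio is a common convention in this experimental literature, but it should be read as an illustrative assumption rather than Fehr’s exact figure.

```python
# Sketch of the costly-punishment option described above. The 3:1
# fine-to-cost ratio is an illustrative assumption.

def punish(payoffs, punishments, fine_ratio=3.0):
    """punishments[i][j] is what player i spends to fine player j: the
    punisher pays the amount, and the target loses fine_ratio times it."""
    adjusted = list(payoffs)
    for i, spent_by_i in enumerate(punishments):
        for j, amount in enumerate(spent_by_i):
            adjusted[i] -= amount               # cost borne by the punisher
            adjusted[j] -= fine_ratio * amount  # fine borne by the target
    return adjusted

# Continuing the earlier round: the three investors each spend 2 to fine
# the free rider, whose advantage shrinks from +20 to +4.
payoffs = [18.0, 18.0, 18.0, 38.0]
punishments = [[0, 0, 0, 2],
               [0, 0, 0, 2],
               [0, 0, 0, 2],
               [0, 0, 0, 0]]
print(punish(payoffs, punishments))   # [16.0, 16.0, 16.0, 20.0]
# In repeated rounds, the credible threat of such fines is what keeps
# contributions -- and the value of the engagement -- high.
```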
But free riders can only be reined in when they can be engaged. Playing the game of life
with real people is quite different from playing the game against undifferentiated masses. When
we know that our actions will affect the behavior of specific others we often assess the effort as
being worthwhile. This is why laws get passed after we see on television particular individuals damaged by
tragedies, yet those same laws remain weakly enforced in general. When the group gets too large we switch
from price makers to price takers. It is one thing to work with others for a common better future
and something else entirely to follow the rules in hopes that others will do the same. In my own
research, I have attempted to replicate Fehr’s results. I was frustrated by contrary outcomes until
I realized I had made a trivial but significant change. Instead of one agent playing against a few
others, I converted the game so one agent played against a computer (actually against a table of
normed responses averaged across previous agents). The beneficial effect of community-enhancing
punishment largely vanished. I had converted price makers to price takers and thus
invited agents to disengage.
An alternative arrangement of this type is called the Ultimatum Game. 8 One of two
individuals is given a stake – say $100. He or she can give any portion of it to the other agent.
There is only one rule: if the receiving agent accepts the offer both agents leave with the money
per the offer; if the offer is rejected, both agents leave with $0 each. Both agents are price
makers. The game has been played under experimental conditions around the world for
generations and under many circumstances. Generally, offers in the 35% to 40% range work
best, when all is said and done.
The related Dictator Game is the same except that the receivers are not agents. They must
accept what is offered because they are defined as price takers. Typically the offer in the Dictator
Game is closer to $15 out of $100.
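In code, the only difference between the two games is whether the receiver’s decision matters. The sketch below is my own rendering of the rules just described, with the stake and offers taken from the figures in the text.

```python
# Minimal sketch of the Ultimatum and Dictator games described above.

def ultimatum(stake: float, offer: float, accept: bool):
    """Both players are price makers: a rejected offer wipes out the stake."""
    return (stake - offer, offer) if accept else (0.0, 0.0)

def dictator(stake: float, offer: float):
    """The receiver is a price taker: whatever is offered is what she gets."""
    return stake - offer, offer

print(ultimatum(100, 40, accept=True))    # (60, 40) -- offers near 40% tend to be accepted
print(ultimatum(100, 10, accept=False))   # (0.0, 0.0) -- lowball offers get rejected
print(dictator(100, 15))                  # (85, 15) -- the typical dictator offer
```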
One wild variation that is especially intriguing involves spritzing a small dose of the
chemical oxytocin (a neuropeptide hormone associated with social bonding) in the nostrils of the agent who has to decide how
much to offer (Zak et al. 2007). This normally produces about a 30% increase in the offer.
Fundraising professionals use food, alcohol, and other stimulants for the same purposes. But the
effect is not universal. A whiff of oxytocin boosts willingness to contribute to the community
when sitting face to face with another agent but not when engaging a computer. Building moral
communities seems to require that we recognize ourselves as one agent among others who are
seen as capable of both being affected by us and having an effect on us. This is the basic
requirement for RMA. It goes away when we become price takers. We are not as kind on principle
as we are among others.
Pop behavioral economist Daniel Ariely has developed a similar idea. He calls it the
“fudge factor,” and the notion is that individuals deal themselves a personal, limited-use, free
riding pass. 9 Having contributed to the public good, which everyone does by simple virtue of
general good citizenship behavior, we feel entitled to a little slack in the accounting of
withdrawals from the pool of public goodies in other areas. This margin of moral self-dealing
varies from one individual to another but almost always favors the person at the expense of the
public. We are entitled to help ourselves a bit, especially if it is not clear that any named
individual actually owns the public good. The government, the large (but not the small) company
we work for, insurance systems, and big stores all invite wide moral fudging margins. Ariely
hypothesizes that each of us can maintain a self-image as an ethical person as long as our free riding
does not cross our personally chosen moral threshold.
Moral Hazard
Communities face a challenge in the equitable distribution of common benefits and
burdens. When participation is voluntary, it is human nature to want in when that is beneficial
and to opt out when it is not. We play where we think we can win. The friction of adjusting to
individual and momentary advantage presents a challenge in administering programs to benefit
the community. The larger problem comes because individuals favor themselves too much when
making the decisions about personal, interpersonal, and public values. Systems such as insurance
attract people who stand to gain from cost sharing because they are very sick, and they drive
away people who perceive that they do not stand to gain more than they contribute. Volunteer
organizations face a similar problem because it is easier to count the costs than the intangible
benefits of service. When the fair weather helpers leave, there is an increase in the costs to the
remaining volunteers, often without a corresponding benefit. Still more leave as a result, and the
few remaining try to hold out under less attractive conditions. This cycle of optimizing short-term personal benefit at the expense of the long-term strength of the community is called moral
hazard. In earlier chapters this was discussed under the RENEGING moral decision rule.
Moral hazard occurs when some members of a community add a framing element that
says “I could do better on my own” and at the same time that change damages the framing of
those who remain in community. 10 Moral hazard is less about the failings of individuals than
about the structure of moral engagements where it is natural to defect. It is most often seen in the
two cases where Stag Hunt (Engagements # 61 and # 65) works against RMA. This is a good
example of how an understanding of the structure of a moral engagement may lead to more
progress than will exhortations that individuals should “just do the right thing” in the abstract.
The coastal villages of England, during the sixteenth through nineteenth centuries, had
lifeboats and crews to respond to shipwrecks in storms. Usually eight men were required for the
boats and it was dangerous to put out to sea in a storm with fewer men. On a particular night, a
solid citizen heard the alarm, but he thought it over and decided not to answer the call. Here is
how he reasoned. He knew that three of the 12 able-bodied men of the village had gone to
Bristol. He also had overheard his wife mention that a family friend, a potential crewmate, and
his son were sick in bed. At the tavern that night, a grumbler had started a rumor that two
brothers, also members of the crew, were no longer on speaking terms with most in the
community over a cow knocking down a fence. Our hero probably went down to the boats just in
case, but neither he nor others would have expected him to go out under the
circumstances. Sometimes nature both denies us a guaranteed win and mocks our attempts.
There are two RMA solutions for this rescue boat engagement, but Rule 4a from Chapter 5
or any of the other approaches to finding the best way forward suggest that the joint strategy of
everyone staying ashore should be chosen. It is just the reasonable thing to do. Everyone using
the BEST STRATEGY rule would have the protagonist going to sea while the villagers waffled.
BEST OUTCOME and ALTRUISM would have him out in the storm in an undermanned boat, and all
three cheating decision rules would put others at risk while the individual had another pint. It
should be apparent from this analysis that the arrangements for manning rescue boats are what
needs to be addressed, not the ethical choices of individuals against principles.
There are two approaches to curbing moral hazard. The most typical one is mandatory
participation. This is the basis for the insurance exchanges under the Affordable Care Act. This
is justified in cases where heavy costs are inescapable to the community at large. Most states
have laws requiring that motorcyclists wear helmets. This is resented by some who just do not
like to be told what to do, by a few who think they are so skilled that they will never have an
accident, and by those who hold they have a right to risk head injuries as a matter of personal
choice. Mandatory helmets are defended on the grounds that the inevitable
excess medical costs of treating head injuries for uninsured cyclists are borne by all citizens
in the form of subsidized care for the underinsured.
The other approach to blunting moral hazard is by direct moral engagement. Where it is
perceived that others are cherry picking their engagements, the relationship must be reframed as
a bargaining situation. The free-riders and the community each surrender hoped-for benefits until
equilibrium is reached or one sees no point in remaining part of the group or there is no benefit
in retaining selfish members in the group.
Some against the Rest
In 1883, 130 individuals were lynched in the United States (Curriden and Phillips 1999).
Lynching means killing on the authority of a crowd that substitutes its will for legitimate civil
authority. It is a counterfeit ethical standard; it is mass price taking. One man looking into the
face of another as he suspends him by the neck until suffocated is virtually unheard of. In 1901
the incidence of lynching was also 130, and it remained high in this country until the 1930s.
Virtually all cases of lynching were unpunished in the jurisdictions where they occurred. The
clash was between states’ rights and local standards of justice on one hand and, on the other, the morality
of larger communities in which they were nested. This moral hazard was not put down by invoking
ethical principles. It was a matter of the evolution of values caused by interactions between part
of a community and the more inclusive community as a whole.
This is a form of moral challenge seldom spoken of in traditional ethics. The abuse of a
few in the group against the rest is usually waved off as a few bad actors not following the rules. 11
If we honor Plato’s misstep from Chapter 1, we would say we have too many rotten apples in
the barrel. But what if the problem is systemic; what if the rules are set up to favor the privileged
at the expense of other members of the community? 12 Such abuses may even be written into law
or secured by custom or may be the result of selective enforcement. The discussion of the moral
community in Chapter 10 suggests instead that we have too many rotten barrels. The examples I
have in mind are the elites and oligarchs, tribal strife in Africa that has been institutionalized by
foreign aid and externally imposed forms of government, terrorists who make war on civilian
populations, party bureaucrats in Asia, kidnap gangs in South America, and the 1% in America.
Sexism, racism, tribalism, and most other forms of “ism” fall into this category as well. George
Bernard Shaw (1911/1946) gave the practice a humorous, but more general, twist in his play The
Doctor’s Dilemma with the quip that “all professions are conspiracies against the laity.” I have
already quoted, in Chapter 3, Adam Smith’s famous remark (1776/1991) that tradesmen naturally
look first to their own benefit even when flying the flag of community service.
The difficulty in analyzing the moral nature of part-whole relationships comes from the
fact that elites or other special groups are both actors and context, while the remainder of the
group is also actor and context. At the same time, each part has a preferred ordering of norms,
rules, laws, or “operating arrangements” that gives it an advantage. We can easily imagine that
such situations get caught in cycles where communities endorse those norms that give them
maximal relative advantage. Much of the problem comes from arguing over which principle
should dominate. Looking for the “take-it-or-leave-it” lever is nasty business.
There is something of a way forward if we switch from seeking guidance in the norms
and look instead at the relationship among agents. This is emergent thinking that has only been
developed in the past few decades. Computer simulations of complex adaptive systems are
among the most useful means of understanding these situations where two or more agents affect
each other simultaneously. 13 Without going into the details, computer simulations provided the
foundation for much of this book. 14 The general superiority of RMA and the interplay of various
decision rules in various contexts were arrived at by inspecting many runs of computer models
under various conditions. They were also used to establish the fact that engagement can be
nested inside more general engagements to create moral communities. Computer simulations
were even used to verify that nesting produces emergent relationships in which moral growth –
new rules springing from repeated interactions – opens the way for moral progress.
Because of the complexity of such simulations, I have not presented the details here. A
representative discussion is available at www.davidwchambers.com/current-work. One example
will be presented in brief form to illustrate the power of this technique for identifying how parts
of groups can cannibalize the entire group – an analysis I do not believe is possible using
traditional analytical methods.
License to perform dentistry as a general practitioner or as a specialist in the United
States is governed by individual state practice acts administered through the various Departments
of Consumer Affairs or comparable bodies. Legislation is now in place in a few jurisdictions that
extends the model, allowing trained therapists who are not dentists to practice first-level care in
certain settings, provided that they operate under the general supervision of a dentist who may
monitor them by telephone or computer connection. The best developed and tested of these
programs is the Alaska Dental Health Aide Therapists program (DHAT) which has been in place
in the Alaska Native Tribal Health System for several years. The American Dental Association
has taken an official position in opposition to this program, saying that only dentists are qualified
to provide appropriate oral health care.
The framing matrix for this case is shown in Figure 11.1 where there are two delivery
systems: “DDS” or “DDS + DHAT” and two care seeking behaviors by those with oral
problems: “Seek” or “Ignore.” Care provided by DHAT therapists is limited to prevention and
education and first-line needs, and it is delivered locally. Care provided by dentists is
comprehensive but available only to those natives who can be flown to large population centers
and is thus more typically major repair of neglect or episodic care from dentists who visit remote
locations on an itinerant basis. There are some dentists who provide first-line care on a charity
basis by means of short visits to remote villages. Studies conducted by independent organizations
have not identified any difference between the quality of care provided by the DHATs and care
provided by dentists for those services DHATs provide (Wetterhall 2011). These practices are legal
because they are provided on sovereign native peoples’ land and thus are not subject to the
control of the Alaska dental practice act.
One potential framing matrix would look like this from the perspective of Alaska
Natives. The combination of care from dentists for serious matters plus DHATs for prevention
would produce the best outcomes if natives took advantage of these services [4]. It would be a
very poor outcome if such expensive opportunities were available and underused [1] – the
government would withdraw funding for the DHATs and dentists operating on a fee-for-service
basis would not be able to justify economically establishing practices under these conditions. Of
the remaining outcomes, many natives seeking care from remote dentists [3] would be preferable
to a dentist-only system that was underused [2]. The situation looks rather different to the
dental profession because many remain unconvinced that DHATs can provide first-line services
at an acceptable level, or fear that, if DHATs can, dentists’ incomes will suffer. The issue is
not whether the dentists’ motives are good or bad but what effect their values have on themselves
and others in the community. Organized dentistry is on record as preferring limited, high value
care provided by dentists [4]. If all natives who need it sought care, the delivery system would be
overwhelmed by low paying (Medicaid) patients and that would be the worst of possible worlds
[1]. If DHATs were to be deployed by tribal associations, dentists would most likely prefer to see
them used on such a limited basis that the system would revert back to the status quo [3]. This
framing is depicted in Figure 11.1.
                         Oral Health Delivery Options
                         “DDS + DHAT”      “DDS”
Natives     “Seek”          [4 2]           [3 1]
            “Ignore”        [1 3]           [2 4]
Figure 11.1: Moral matrix for engagement involving two systems for delivering oral health care
to a population and two patterns of using these services.
As framed, this is a Wide Imbalance moral engagement (Engagement #44, mirror). The
RMA solution that neither group of agents should want to deviate from would have a
combination of dentists and therapists providing an increased level of care, with emphasis on
prevention. This is in fact, what is currently in place. Other preference orderings could be
defended, but none would have higher values of the engagement, and balanced solutions are very
unlikely. Munificence, the maximum possible value of the engagement under any conceivable
pair of strategies in such a case, is 6. Although there are two ways of arriving at a value for the
engagement of 6, only one of them, the one that is actually in existence, is a stable RMA solution.
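The claim that only one of the two value-6 cells is stable can be checked mechanically. The sketch below encodes the Figure 11.1 payoffs and tests each cell for whether either party could improve its own outcome by unilaterally switching strategies; treating the RMA solution as the cell neither agent has any reason to leave is my shorthand for the decision rule developed in earlier chapters.

```python
# Figure 11.1 payoffs as (natives, dentistry); rows are the natives' strategies,
# columns are the delivery systems.
payoffs = {
    ("Seek",   "DDS + DHAT"): (4, 2),
    ("Seek",   "DDS"):        (3, 1),
    ("Ignore", "DDS + DHAT"): (1, 3),
    ("Ignore", "DDS"):        (2, 4),
}
rows, cols = ("Seek", "Ignore"), ("DDS + DHAT", "DDS")

def stable(r, c):
    """Neither party can do better by changing only its own strategy."""
    n, d = payoffs[(r, c)]
    return (all(payoffs[(r2, c)][0] <= n for r2 in rows)
            and all(payoffs[(r, c2)][1] <= d for c2 in cols))

for (r, c), (n, d) in payoffs.items():
    print(f"{r:6} / {c:10}  [{n} {d}]  value={n + d}  {'stable' if stable(r, c) else ''}")
# Only ("Seek", "DDS + DHAT"), with value 6, is stable. ("Ignore", "DDS") also
# sums to 6, but the natives would switch to "Seek" and collect 3 instead of 2.
```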
But that is not the only way the engagement might play out if decision rules other than
RMA are used. As demonstrated in Chapters 3 and 4, if both agent groups used a BEST STRATEGY
decision rule where the only consideration is what is in the best interests of each party separately,
dentists would be divided. They are in fact divided, since some dentists work in the DHAT networks
and others advocate for this model in other states. This would result in a hybrid outcome with an
expected value of [3.5 1.5] and a value of the engagement of 5. Engagements such as the one
depicted here lend themselves to “impression management” (DECEPTION). The goal of such a
decision approach is to fool the other party or public opinion generally into thinking their best
interests are served by switching to a strategy that favors the deceiver. There has in fact been a
campaign on the part of organized dentistry to cast doubt on the quality of care provided by
DHATs. It would not be inconceivable that rumors circulate among the native population
casting aspersions on the controlling motives of “big city doctors.” When DECEPTION goes up
against RMA or BEST STRATEGY, we find [2 4] outcomes, with the party telling half-truths
getting the sweet deal. Surprisingly, when both groups twist the facts, no change is anticipated,
and we stay with [4 2]. The low-value outcomes [3 1] or [1 3] only show up if one agent
group uses a CONTEMPT decision rule.
With only three of the dozen or so moral decision rules being practical in this case, there
are still nine possibilities to consider (RMA and RMA, RMA and BEST STRATEGY, BEST STRATEGY
and RMA, RMA and DECEPTION, etc.) That could get rather difficult to analyze conceptually, even
before considering combinations of interactions. Such complexity often leads to surface-level
hypothetical analyses or flights to slogans. Computer simulations are a better alternative.
Here is a sketch of what can be learned from computer modelling, using the NetLogo
programing language. A computer environment is populated with a community of agents, each
required to use one of the moral decision rules. Only characteristics thought to be critical to the
moral interaction are specified. Row or Column matters, but as far as anyone knows, age does
not. Agents using different rules for making moral decisions are given colors so they can be
tracked. I have arbitrarily chosen RMA = white, BEST STRATEGY = blue / dark grey, and
DECEPTION = orange / light grey. Agents are assigned decision rules at random and continue
using these rules until they are no longer workable. Context varies randomly at each iteration and
includes the type of engagement encountered (Engagement # 44 in this case), the decision rule
other agents use (one of the three studied here), and whether one plays Row or Column (selected
at random). When two agents meet over an engagement, the outcome for each is determined
using the rules unique to that engagement. This is just the logic so familiar now from analyzing
many engagements in the book. Another of the advantages of defining moral engagements in
operational terms is now apparent.
When the decision rule is disadvantageous under the circumstances, the agent using it
suffers a small hit in fitness (represented visually by the size of the circle). If the decision rule is
advantageous for the agent (given the engagement and the other agent’s decision rule), the agent
gets a small bump in fitness. Depending on the mix, both agents may benefit, both may suffer, or
one may gain and the other lose. An arbitrary maximum has been imposed on the size or fitness
of agents – at a point, they just stop getting any bigger. There is also a minimum fitness. As
agents approach a vanishingly small fitness because of repeatedly using a disadvantageous
decision rule, they go into a pool and are assigned a new life with average fitness and they are
given one of the three decision rules chosen at random.
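For readers who want to see the bones of such a model without running NetLogo, here is a minimal Python sketch of the machinery just described. The pairing, fitness adjustment, ceiling, and recycling rules follow the description above; the payoff function is my own stand-in, loosely informed by the outcomes mentioned earlier for this engagement, so the sketch illustrates the method rather than reproducing the results in Figure 11.2.

```python
import random

# Skeletal analogue of the NetLogo simulation described above. The population
# mechanics (random pairings and roles, small fitness bumps and hits, a fitness
# ceiling and floor, recycling of failed agents with a fresh random rule) follow
# the text. The payoff function is a rough stand-in, NOT the author's
# Engagement #44 logic, and will not reproduce Figure 11.2 exactly.

RULES = ("RMA", "BEST_STRATEGY", "DECEPTION")
MAX_FIT, MIN_FIT, START_FIT, STEP = 4.0, 1.25, 2.5, 0.02

def engagement_44(row_rule, col_rule):
    """Return (row outcome, column outcome) on the book's 1-4 scale."""
    if row_rule == "DECEPTION" and col_rule != "DECEPTION":
        return 4, 2                      # the half-truth teller gets the sweet deal
    if col_rule == "DECEPTION" and row_rule != "DECEPTION":
        return 2, 4
    if row_rule == col_rule == "BEST_STRATEGY":
        return 3.5, 1.5                  # hybrid outcome cited in the text
    return 4, 2                          # assumed default: the stable [4 2] cell

agents = [{"rule": random.choice(RULES), "fit": START_FIT} for _ in range(200)]

for _ in range(100_000):
    row, col = random.sample(agents, 2)            # random pairing, random roles
    out_row, out_col = engagement_44(row["rule"], col["rule"])
    for agent, out in ((row, out_row), (col, out_col)):
        agent["fit"] = min(MAX_FIT, agent["fit"] + STEP * (out - 2.5))
        if agent["fit"] <= MIN_FIT:                # exhausted agents are recycled
            agent["rule"] = random.choice(RULES)
            agent["fit"] = START_FIT

for rule in RULES:
    fits = [a["fit"] for a in agents if a["rule"] == rule]
    if fits:
        print(f"{rule}: n={len(fits)}, mean fitness={sum(fits) / len(fits):.2f}")
```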
Notice that the model is not a two-party contest between dentists and therapists. This is
not about who is right or wrong. It is a comparison of three methods for bringing about a better
world under conditions such as this. Is it better to work together, to press forward with one’s own
interests, or to manipulate the situation strategically?
The computer is programed to repeat this process of random engagements a large number
of times, most typically 100,000 iterations per simulation run in my work. Usually, a steady state
among the agents is reached by this point, and we can draw conclusions about the wisdom of
each decision rule under the circumstances of the moral engagements being studied. The
outcomes of interest are the proportion of agent types (some decision rules come to dominate and
others approach extinction), the average and standard deviation of fitness (size) for each agent type,
and the overall standard deviation and average fitness of the population. The latter is the best
single measure of the moral strength of various combinations of agent types. It is literally a
quantitative index for building better moral communities.
Figure 11.2: NetLogo computer simulation of a moral engagement with three types of agents,
one each using RMA (white), BEST STRATEGY (dark grey / blue), and DECEPTION (light grey /
orange) decision rules.
The screen shot in Figure 11.2 is of a typical NetLogo run showing what happens in a
population with RMA, BEST STRATEGY, and DECEPTIVE agents when the engagement is a Wide
Imbalance one that fits the framing in this example. Almost all of the “White” agents, using the
RMA decision rule, have reached the maximum size allowed (4.0). The proportion of agents using
this approach is seen, in the top line of the running graph in the upper right, to rise over time. This
type of agent (those using RMA) flourishes under these circumstances. Operationally, this means
they were less apt to be exchanged for another type of agent as a result of suffering relatively
poor outcomes in engagements. The average fitness of agents using RMA was 3.6 of a possible
4.0. Working it out together to optimize joint self-interests is the morally right thing to do in this
case.
The trajectories of the other two moral decision rules (BEST STRATEGY and DECEPTION)
were similar to each other. Both perform poorly compared to RMA in terms of number and
fitness. Thus we would say, when confronted with circumstances like those in Engagement # 44,
self-interest or devious dealings are, on average, unhealthy strategies, regardless of what kind of
agent one is facing.
Overall, the fitness of such a system is very positive -- above 3.5 when averaged across
all agents. It should be noted in passing that every simulation run is unique – the movement of
the agents and the engagement is completely random at each iteration. The eventual stability of
the system shows that every community will produce the level of morality its mix of agents and
engagements is “designed” to produce. Figure 11.2 is just one example showing that RMA is a
means for bringing about future worlds that are preferred by agents given the circumstances and
are the best means for building the moral community under these circumstances. This conclusion
is based on having run thousands of such simulations, taking the averages across various
meaningful categories.
Although the example for Figure 11.2 is objectively accurate and optimistic, caution is
still required. As presented here, the differences between decision rules are slightly exaggerated.
First, a very large number of uniform iterations is used. Second, only one of the possible 78
engagements has been analyzed. Had the analysis been performed in a Win-Win context, any
decision rule other than CONTEMPT or AVOID THE WORST would have been equally effective. It
happens that this Wide Imbalance engagement demonstrates a clear advantage for RMA. Most,
but not all of the other 77 do. Generally, RMA is slightly better than alternatives or there is no
difference. Only a few – Prisoners’ Dilemma, two cases of Stag Hunt, and sometimes
Mixed Equilibria, and these only under some conditions – run counter to the direction of the
example worked here. The best summary is that RMA is superior in general, but perhaps not as
conspicuously so as this example suggests.
Still it would be prudent to wonder, if RMA is so good, why has it not driven selfishness
and other decision rules out of our world? Why is the problem of ethics so often construed as
finding ways for caring, charity, or cooperation to place some curb on self-interest? There are
probably several reasons, but agent-based modeling suggests a very straightforward answer.
Since Plato set us off on this path, philosophers have tended to define ethics as finding a better
principle rather than finding a more moral world. We keep starting from a position that selfinterest is not the answer rather than asking under what conditions it seems to prosper.
Somehow we need to account for the staying power of selfishness in moral contexts.
Here is one possibility. The screen shot in Figure 11.3 is from a computer simulation run that is
identical in all essential respects to the one that produced the optimistic picture in Figure 11.2.
But the system has gone to pot. Notice that the dominant decision rule, in terms of the number of
agents, is now the selfish approach of BEST STRATEGY represented by a few large dark grey
circles and a lot of small ones. Fitness in the overall system has fallen from about 3.5 to almost
2.0 (where 1.25 is as low as the scale goes). 15 The standard deviation has increased, meaning
there is a large gap between the agents that are best off and all the rest. Although not the most
numerous, the most fit agent type is now the practitioner of DECEPTION. Unless one is an elite
player of BEST STRATEGY or a practitioner of DECEPTION, this is an unhappy community. This
looks more like our common understanding of the way things are in the world today.
Figure 11.3: NetLogo computer simulation of a moral engagement with three types of agents,
one each using RMA (white), BEST STRATEGY (dark grey / blue), and DECEPTION (light grey /
orange) decision rules. Same as Figure 11.2, except that self-interested agents are restricted from
engaging unless they enjoy a high level of fitness.
The code for the simulations that produced these very different outcomes is the same as
the previous code -- except for a single line. The change was the insertion of a line of code saying that
only selfish agents that are very fit are allowed to participate in moral engagements with others.
That is all. There was no restriction placed on very fit selfish agents or on RMA agents (restricting
them would be contrary to the definition of their approach), and DECEPTIVE agents could take
advantage of anyone (as is their wont). Otherwise, the moral engagements function exactly as they
did in the first simulation.
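To make the scale of that change concrete, here is how such a restriction might look in the Python sketch given earlier. The threshold value and the name may_engage are invented for illustration; this is not the author’s NetLogo line.

ENGAGE_THRESHOLD = 3.0  # hypothetical cutoff for counting as "very fit"

def may_engage(agent):
    # The single added rule: selfish (BEST_STRATEGY) agents sit out of moral
    # engagements unless they already enjoy a high level of fitness.
    # RMA and DECEPTION agents are not restricted in any way.
    return agent["rule"] != "BEST_STRATEGY" or agent["fitness"] >= ENGAGE_THRESHOLD

In the pairing step, partners would then be drawn only from agents for which may_engage returns True; everything else stays exactly as in the first simulation.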
The effect of this tiny change is to populate the community with underperforming self-interested
agents that feed the privileged self-centered agents. Selfish agents become more numerous, but
fewer of them prosper; the “income gap” between the top and the bottom of the BEST STRATEGY
group grows wider (as seen in the continuously increasing standard deviation in the lower left);
and the overall fitness for both selfish agents and the entire community declines (as seen in the
graph in the lower right). The “fatal feature” is allowing an isolated underperforming class to
exist. This is a stable, dysfunctional arrangement, kept in place by part of a community working
against another part of the same community. 16
Examples of this type of restricted moral engagement include gated communities, tax
havens for the well-off, apartheid, anticompetitive legislation, restrictions on Jews in the
professions in Nazi Germany, fences along the borders, the 1821 legislation in South Carolina
making it a crime to educate slaves, current-day limitations on voting rights, unequal pay or
access to executive positions for women, and large Gini Indexes. 17 These are not the effects of
some mysterious moral disease or flaw in human nature. They are the nefarious conditions that
part of the group smuggles into the framing matrix for the whole community to serve its own ends.
There has been no society in history where self-interest has led to the thriving of a
privileged group without its having created a structural underclass. This is a well-known
phenomenon in economics called the Matthew Effect. The name comes from a misquotation of
Matthew 13:12 in the Bible to the effect that the “rich get richer.” More formally it is also known
as the Pareto curve in honor of the Italian economist who made so much of the very pronounced
positive skew in wealth. 18 The effect is produced by compounding of successive outcomes. The
gains from the first round are carried over as a new baseline for the second round. Alternatively,
those who do well initially by chance alone often have the opportunity to change the rules so
they can continue to benefit. The analysis of moral communities in terms of universal principles
has no way of identifying, explaining, or rectifying this phenomenon.
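The arithmetic of the compounding is easy to see in a toy calculation (the growth rates are invented for illustration, not taken from any data set):

# Gains from each round are carried over as the new baseline for the next round.
early_winner, everyone_else = 100.0, 100.0
for round_number in range(30):
    early_winner *= 1.05   # reinvests at a higher rate, or plays by friendlier rules
    everyone_else *= 1.01  # compounds from a lower return
print(f"after 30 rounds: {early_winner:.0f} vs {everyone_else:.0f}")  # roughly 432 vs 135

Even with modest and constant differences per round, the gap widens without limit; that is the pronounced positive skew Pareto observed.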
It is generally known but not always part of common practice that labeling this or that
behavior or type of individual “unethical” is insufficient to bring about the changes we desire. In
this example of the part against the rest, it emerges that one and the same pattern of moral
behavior can lead both to the kinds of worlds we favor and to those that cause revulsion. Egocentrism
is held in check in one community and runs wild in another. But that cannot be
explained entirely by the characteristics of self-centeredness. Too much attention to what we do not
like, and the assumption that “it would be best if others just changed,” draws us away
from the true levers of change. Morality is systemic in the sense of being a community affair. It
matters when individuals act selfishly or cheat; the effect is compounded into the future when
selfish cheaters get their hands on the rules for how future moral engagements should be
managed. When running as intended, the entire community determines what is moral. When a
part works against the whole, the system becomes misaligned. The motives of the part that wants
to set the rules or the principles do not matter; the moral wisdom of a community emerges
from its self-corrective whole nature, not from its parts.
Point of View: Are “Good” and “Right” Predicates, Propositions, Sentences, or None of the
Above?
History is our present understanding of events that occurred in the past. All philosophy is history.
Predicates are the parts of sentences that can be either affirmed or denied in a claim. “Six
is more than four,” “Blücher was the real hero at Waterloo,” and “Rude people are unlikable” all
contain predicates (“more than four,” “hero,” and “unlikable”). They pivot on the nouns that
follow the copula and make a claim about the subject of the sentence. Sentences with predicates
are either true or false. In the sentence “Saint Louis is the capital of Missouri,” the predicate “the
capital of Missouri” is a clunker that makes the sentence false; in the sentence “Jefferson City is
the capital of Missouri,” the same predicate fits its subject and makes the sentence true. Predicates
out of context do not amount to much.
Right away we know from this example that “good” and “right” and all the principles that
have secure places in ethical theory are going to be problematic. They are predicates. The
general reader may be starting to think this note is intended for a very small circle of academic
philosophers. More’s the pity to have to confess that is so.
When we embed a predicate in a sentence and further take a position about it we are
making a proposition. “I know that six is more than four,” “Some historians argue that it was
Blücher’s arrival at the decisive point that turned the tide at Waterloo,” and “Everyone knows
that it is justifiable to dislike rude people” are all propositions. They signal where we stand on
matters. They are our point of view. Just as predicates are embedded in propositions,
propositions can be embedded as predicates: “I think it is an oversimplification for academics to
credit Blücher as the hero of Waterloo.” We can keep up this compounding process until
sentences become so long people lose interest in them before they get to the end.
In 1903 the English philosopher G. E. Moore (1903/2004) set us off on a merry chase
with his “open question argument,” or what is more commonly known as the Naturalistic
Fallacy. What Moore wanted to show was that “the good” cannot be equated to or defined by any
set of natural properties. “A man is a featherless biped.” That is true because the physical
property of being two-legged while lacking feathers picks out exactly the same creatures
that we call men and women. We can give a naturalistic definition that is equivalent to
mankind (more or less). Moore doubted that we could do the same for “the good.” Is telling a lie
always bad? What about doing things your mother would disapprove of? Torturing babies for
personal pleasure probably counts as full-strength wrong, but the category we are interested in is
larger.
Moore did not consider every possible material case of good or not good. Instead he used
a move in logic. He wondered whether it makes sense to ask whether typical statements that
describe things people do could be called good or right. It seems so; I have just given examples
where the question seems meaningful -- it would be interesting to know what others think the
answer might be. With regard to a particular case of sharp business dealing, some might say
“yes,” some “no,” and many will say “it depends.” But very few indeed would retort, “What a
silly thing to ask.” If it is an open or meaningful question whether the good has this or that
natural property, that property cannot be the definition of the good. Moore is taken as having
argued that it is a fallacy to equate “the good” with any particular natural condition or to define it
by a set of physical features. There are many other properties that are
irreducible to natural terms. Moore’s model was the color yellow. My favorite is a weed. What is
the biological property of a plant that is classified as a weed? As far as I know, biologists have
chosen not to go there.
A hundred years later we are still misinterpreting Moore’s argument and bellyaching
about it. This book takes a position that morality is naturalistic to the core. But I see no particular
reason to reject Moore’s argument. Moore certainly did not make anything like a claim that “the
good” and “the right” are normative universal principles. He only said that they are not identical
with any set of physical properties that can be understood independent of human judgment about
what is desirable. That is completely compatible with naturalism, or at least with most flavors of
naturalism that regard human judgment as having natural properties.
To untangle this, we have to decide whether the good and the right are predicates,
propositions, or statements. I take the position that they are none of the above. It cannot be said
of the two words “the good” or the two words “the right” that they are true or false. They have
no meaning until used in some specific fashion. “Good is a positive term” and “What is right is
what should be done” are sentences with predicates. They have truth value in some sense. We
could think of conditions associated with the predicates that would make people either affirm or
deny the sentences. Some sentences appear simply true based on the meanings of the words.
(“He is not the queen” is an example.) As written here, they completely escape Moore’s
criticism. If we modify them slightly to substitute natural terms, we get “cheating on your
income taxes is not good, under certain generally agreed conditions” or “Farlydork was wrong to
cut in at the front of the line like that.” We can think of standardized conditions to serve as test
beds to determine whether these sentences are true or not. Even if we recognize that there may be
a few people who disagree with us, it is reasonable that some form of test would be helpful. I
take it that this was all that Moore was attempting to show – or at least that is all he could
defend.
This will not be a completely satisfactory way to leave the problem for a true naturalist.
Saying that there are tests that could be useful in determining the truth or falsity of morality as
sentences with ethical predicates is “too open a question.” One way to shore this up would be the
appeal to principles that are not, in themselves, natural. “Cutting in front of the line is wrong
because it is unethical.” Of course, that just will not do. First, it gets circular pretty quickly and
in uninteresting ways. Second, it lacks naturalistic grounding. We could say that all this move
accomplishes is to redefine our ignorance. And it is still appropriate to want to know who says it
is unethical and why. Normativists often act as though ethical principles exist in some
disembodied form. IT is unethical. No one has to actually hold any position regarding moral
sentences, they often say. 19 The principles are the judges and we are the messengers.
It seems natural to place moral predicates in propositions and propositions in sentences.
Recall that the proposition relates a predicate to a disposition that some real people hold. “I
believe it is wrong to cut in the front of the line” or “The event organizers take a dim view of it”
or “Those in line swallowed their discontent.” When being sloppy about this, we wave in a
general direction, “Most people would agree that . . .” The point of a moral proposition is to link
a predicate (the description of an action) to actual individuals. There are, in my way of looking at
it, no disembodied moral points of view. I can think of no reason whatsoever to be concerned
with a predicate that no one knows or cares about. We can imagine that they exist, but in that
capacity they serve only as abstract illustrations of what it means to talk nonsense. (My saying
that ethical realists are making meaningless claims is not meaningless because it is a position I
hold and express in propositions. Ethical realists defending their view is not meaningless either;
that is their proposition. As soon as we walk away from the argument, however, the issue
vanishes and is of no value.)
So far, I have linked moral predicates to people and said that unlinked ones can safely be
ignored. We are only going to talk about what is spoken of. Moral predicates are necessarily
embedded in propositions. And propositions are embedded in sentences. This much normativists
and naturalists agree on.
The dividing line is the naturalist’s insistence that everything that is meaningful has to
exist in time and space. Sentences, because they are universal abstractions, do not measure up to
this standard. There is an important difference between saying that “X is true regardless of the
circumstances under which the claim is made” and “X is true when voiced or acted upon” and
“X is true even if no one has ever thought of X.” There is a koan to the effect: “If a man says
something and there is no woman to hear him, is he still wrong?” “I believe that killing is wrong,
regardless . . .” still sounds a lot like hugging a norm rather than performing a natural act.
The problem can be solved by honoring the traditional philosophical distinction between
a sentence and a statement. The sentence “I believe that killing is wrong” is an abstraction. Its
truth value can be thought of as being decidable, one way or another, across a vast range of
particular circumstances. The “I” is indeterminate and washes out across all the cases we care
about. Philosophers have a strong tradition of analyzing such sentences from the point of view of
nobody in particular. Such sentences are complex nouns and can be treated as any other noun –
parsed, denied, ignored, embraced. They are non-natural.
The technical term for a sentence that has a particular person, specifiable circumstances,
and a given time attached to it is “statement.” 20 Statements are natural verbs, actions. Only one
person can make a statement at a time, except, of course, for committees, juries, and other communities that
speak in the plural. A repeated sentence must be either true or false every time it is uttered
because it is about something that is either true or false regardless of the circumstances of
claiming it. A repeated statement may be true on one occasion and false on the next. For
example, “I was mistaken about what I thought her motives were” can vary in truth value
depending on the occasion. More technically, statements are neither true nor false at all. They
simply do something in the natural world. We sometimes get a bit confused and hide a statement
in a sentence. For example, “Frege was wrong to endorse logicism,” which looks like a sentence,
is really an ellipsis for the statement “I believe the evidence now shows that Frege was wrong to
have endorsed logicism.” 21
Terms such as “expression” and “claim” only add to the confusion. They have meaning
as both nouns and verbs. Naturally, I focus on the usage that connects them with actual people
and their particular behavior.
Now we have gotten what we were looking for. All statements are 100% natural. And it
will prove workable to expand the sense of statements to include any publicly observable actions
in time and space. They can be uniquely and adequately defined in material terms. “I apologize
to you for having said ‘your intentions were questionable’” is a moral statement that can be
verified in the context we share. We can measure it on surveys, confirm it by observation, and
potentially capture it with brain imagery. It functions appropriately in the moral engagement
analysis developed in this book. Others respond to it, so that is all that is needed to carry through
the analysis based on RECIPROCAL MORAL AGENCY. No additional type of predicate or
proposition is required. No recourse need be made to sentences.
Sentences containing moral propositions can be embedded within each other. They can
also be embedded in statements, particularly when we take statements in a generous sense of
including acts of commission and omission as well as verbal expressions. But as Wilfrid Sellars
has taught us,22 the outermost and final level in such embedding is always a particular,
natural statement. Much confusion in morality (a statements-based discipline) comes from
treating principles as if they were free-floating ethical entities (sentences).
1. Jean-Paul Sartre’s Les Mouches (1943/1989) is a twentieth century retelling of part of the
Oresteiana. The Furies are replaced by more naturalistic flies. Not every conversion of world-shaping
forces to natural phenomena is equally effective. But Sartre at the time was writing for
the theater of the absurd.
2. The flowering of Greek philosophy and tragedy intended to “explain” how things are was not
an isolated geographic phenomenon at the time. Much the same shift in understanding morality as
an individual response in community rather than a divine overlay on peoples was taking place at
the same time in the Old Testament, in India with Buddhism, in the Persian Empire with the
Magi, and in China with the development of Taoism and Confucianism. This is known as the
Axial Period, from the Greek term for values. See Eisenstadt (1982) for an academic treatment
and Gore Vidal’s novel Creation (1981) for a delightful fictitious account.
3. It is more than coincidence that certain philosophers started saying that the gods are dead about
150 years ago. What they were eulogizing was the hoped-for passing of an ideology that
excluded humankind from engagement in the supernatural and regarded us principally as
spectators. Gods in the image of man were not working out as expected. The new science started
working with phenomena beyond the reach of human determinism, and universal, human-centered
rules seemed less certain. Among the most prominent were Arthur Schopenhauer,
Friedrich Nietzsche, André Gide, and Jean-Paul Sartre, all of whom had a thorough grounding in
Greek drama.
4. Particularism is a rigorously defended antipode to universalism. See Dancy (2004) and Hooker
and Little (2000).
5. For the literature on decreased willingness to help those in need as the size of the available
group of helpers increases, see Latané and Darley (1968), Darley and Latané (1968), Latané and
Rodin (1969), and Latané and Nida (1981).
6. Mancur Olson (1965) is the father of research on the behavior of markets of various sizes. His
protégé, Elinor Ostrom (1990), received a Nobel Prize for her work in this field.
7. Champions of self-interest often quote John Locke’s famous remark (1689/1924 II, V) that
“For this ‘labour’ being the unquestionable property of the labourer [as he extracts resources
from the common good], no man but he can have a right to what that is once joined to.” They
seldom finish the sentence, which concludes “. . . at least where there is enough, and as good left
in common for others.” This is simply a statement of Pareto optimality as presented in Chapter 9.
8. Research on the Ultimatum Game and the Dictator Game can be found in Colin Camerer
(2003) and in Camerer, Loewenstein, and Rabin’s (2004) anthology Advances in Behavioral
Economics.
9. The recipe for “ethical fudge” is presented in Mazar, Amir, and Ariely (2007).
10. For moral hazard see Dembe and Boden (2000) and Stan Haski’s (2005) polemic.
11. In Chapter 10 of his Breaking the Spell, Daniel Dennett (2006) presents a good discussion of
the abuse of groups by their most fundamental members.
12. Some issues centering on the abuse of the general whole by a privileged part are considered
by Tim Harford (2005) and Robert Frank (2011).
13. Computer programming of complex adaptive systems, especially by means of agent-based
models, has become a standard tool in environmental sciences, economics, physical sciences
such as weather forecasting, and political science. See John Maynard Smith (1982), Brian
Skyrms (2004), and Jason McKenzie Alexander (2007). Agent-based modeling techniques are
described by Nigel Gilbert (2008).
14. There are two advantages of using computer simulations to model complex human
interactions. First, they help isolate the important moving parts. In the game of modeling moral
engagements, all that must be specified are: (a) the rank order of preferred outcomes for two
agents with two strategies, (b) the type of engagement, and (c) the decision rule used by each
agent. The second advantage is the ability of simulations to consider complex interactions over
time.
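Concretely, those three inputs can be captured in a small data structure like the following. This is a sketch only; the field names are invented for illustration and are not taken from the author’s model.

# One engagement specification for a simulated pairing of agents A and B.
engagement_spec = {
    "outcome_ranks": {"A": [4, 3, 2, 1], "B": [4, 1, 3, 2]},  # (a) each agent's rank order of the four joint outcomes
    "engagement_type": "Wide Imbalance",                      # (b) one of the 78 engagement types
    "decision_rules": {"A": "RMA", "B": "DECEPTION"},         # (c) the rule each agent uses
}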
15. “Fitness” is the “value of the engagement” concept introduced earlier, scaled for use in
computer simulations. It is similar to Darwin’s use of the term as the capacity to replicate given
one’s environment. It also resembles Nietzsche’s (1882/1974) understanding of “health” as the
capacity to withstand environmental insults. As I use fitness, it differs from the Nussbaum and
Sen (1996) idea of flourishing. The latter is self-referentially the capacity to achieve one’s own
goals rather than being viable in one’s community.
16. The general case that ideologies are memes, or manufactured constructs, that maintain unjust
social conditions is developed by J. M. Balkin (1998).
17. Jonathan Kozol presents a detailed narrative analysis of the damage caused to societies that
block off some members from participation in his (1991) Savage Inequalities. Much the same story
is told with dollar signs and big statistics by Nathan Kelly (2008), Timothy Noah (2012), Niall
Ferguson (2013), and Nobel laureate Joseph Stiglitz (2015).
The Gini Index is a measure of income inequality within countries. It is tracked by the
CIA because it is regarded as a measure of political instability. Numbers in the low 30s represent
high equality and are typical of European democracies and of the United States following the
Second World War. Numbers in the 60s are found in a few small dictatorships and in South and
Central America. The United States now has a score in the mid- to upper-40s which is similar to
Iran and Mexico.
18. The Pareto curve is not the same as the Pareto Principle introduced in Chapter 9.
19. Kant’s (1785/1948) Groundwork is a well-known example of the problems that can result from
ambiguity over scope in logical predicates (the ambiguity of this sentence is intended). In
Chapters 1 and 2 of his book he works out the power of the categorical imperative in the realm of
pure reason. Chapter 3 is titled “Passage from a metaphysic of morals to a critique of pure
practical reason.” But something is lost in the “passage.” Kant would like it to be the case that
the truth value of the imperative in the pure context, where no one actually acts on the rule,
remains intact when brought over to serve as a guide for what people do in their daily lives. The
logic of “‘X is good’ is a universally true principle” differs from the logic of “I am doing X
because I believe that ‘X is good’ is a universally true principle.”
20. The difference between a sentence and a statement is well explained in Jaakko Hintikka’s little
(1962) classic Knowledge and Belief: An Introduction to the Logic of the Two Notions.
21. The view that the truth of a statement is not found in its relationship to a noumenal world is, of
course, that of John Dewey (1929) in Experience and Nature, W. V. O. Quine (1981) in Theories
and Things, and Richard Rorty (1982) in Consequences of Pragmatism, and it is sometimes seconded by
Donald Davidson (2006) and Hilary Putnam (2004).
22. Sellars makes his point -- that when sentences and propositions are embedded in one another, a
statement always bats last -- in his chapter on “Grammar and Existence” in Science, Perception, and
Reality (1963).