Investment Committee Decisions:
Potential Benefits, Pitfalls, and Suggestions for Improvement
John W. Payne
Fuqua School of Business
Duke University
To appear in Perspectives on Behavioral Finance, A. Wood (Ed.)
Abstract
Investment committees (teams) are responsible for many billions of dollars in
investments worldwide. And, as the world of financial decision making becomes more
complex and dynamic, the potential value of committee-based decisions will increase.
This chapter asks, and answers, three questions regarding the quality of investment
committee (team) decision making. 1) When and why do collections of individuals
perform better than the average individual judge? 2) When and why do groups or teams
perform worse than the average decision maker? In particular, when might groups
(committees) amplify, not mitigate, individual decision biases? 3) How might the
processes of investment committee decision making be improved? It is argued that the
quality of investment committee decisions is likely to matter most where individual
judgmental biases also matter most, and therefore, if committees are going to be used to
manage investments, then the processes of good committee decision making must also be
managed, and managed continually.
Introduction
Investment committees (teams) are responsible for many billions of dollars in
investments worldwide. Included in the decisions made by investment committees are
asset allocations, judgments about future market conditions, choices regarding specific investments, and the hiring (and firing) of money managers. Most committees or teams meet face-to-face and use a “strength in numbers” decision rule (e.g., majority or consensus-type scheme).1 Committee or team-based decision making is also common in
other areas of financial decision making. For example, it has been estimated that almost
60% of the actively managed equity mutual funds are managed by teams, up substantially
from just 30% or so in 1992 (Bliss, Potter, & Schwarz, 2008).
There are a number of reasons why committees or teams might be used to make
financial decisions. For instance, in the area of nonprofits, it has been suggested that
having a number of important people on an investment committee helps fund raising
because such people will be more likely to give, and to get others to give. There is related evidence that the more the members of a group are involved in the making of a decision, the greater the commitment to the implementation of that decision. In the area of mutual fund
management, it has been suggested that teams provide a more stable management
structure than an individual (Kovaleski, 2000). That is, funds managed by committee are
less subject to the impact of a single manager leaving the company. Another reason that
has been offered for group or committee decision making is the diffusion of
responsibility for a poor decision.
1 A consensus decision rule is often a two-thirds majority rule in practice. That is, once two-thirds of a group decide on a judgment or choice, that tends to become the “consensus” decision.
The primary reason for the popularity of committee decision making, however, is
the view that committees or teams will make better financial decisions. With teams,
committees, or groups, it is felt that there will be more knowledge or information to be
shared. This information argument for committee decision making appears even more
compelling as the world of financial decision making becomes more complex and
dynamic. Consequently, it becomes less likely that any single individual will have
sufficient information and skills for good decision making. Gary Becker, the 1992 Nobel
economics laureate, acknowledged this problem when he stated that “the main problem
with the modern financial system based on the widespread use of derivatives and
securitization is that while financial specialists understand how individual assets function, even they have limited understanding of the aggregate risks created by the
system” (Wall Street Journal, 10/7/08). The potential of having more information, and
the sharing of that information, is perhaps the strongest argument for group decision
making.
Teams also can provide “error” checking of the reasoning being presented during
decision making. That is, simply having more of the relevant information is not enough;
information must also be processed in the right way. Groups are seen as helping to avoid
poor reasoning. For example, individuals frequently neglect or do not properly use base-
rate or distributional information when making probability judgments. The hope is that a
member of a committee might “correct” such an error in reasoning by another member of
the committee. In addition, humans are subject to random “slips” and “mistakes” in
judgments. If the judgments of individual members of a group include random error or “noise”, then a statistical combination of those individual judgments will cancel out those
errors and lead to the “wisdom of crowds” (Surowiecki, 2004). Finally, committees
provide a way to incorporate different values, such as risk attitudes or a concern with
social responsibility, into a decision. All of the reasons above suggest that investment
committees have the potential of being a good way to manage investments. And, there is
some empirical support for the superiority of group decision making. For example,
Blinder and Morgan (2005) find that group decisions about monetary policy questions are,
on average, better than individual decisions.
On the other hand, some view investment committees much more negatively –
“they don’t meet often, they act slowly, and when they act, they tend to make the wrong
decisions” (Robert Jaeger, quoted in the October, 2004, issue of Foundation and
Endowment Money Management ). Manzoni, Strebel, and Barsoux (2010) have also
recently argued that diversity in thinking on corporate boards can “backfire” and lead to
poorer performance. Substantial empirical support for this more negative view of group
decision making exists. For example, Barber and Odean (2000) report that investment
clubs performed worse than individual investors. To make the empirical results more
unclear, Bliss, Potter, and Schwarz (2008) recently reported “no statistically or
economically significant differences in performance between individually-managed and
team-managed mutual funds” (p. 115). Thus, the empirical evidence on the performance
of team-based versus individual financial decision making is mixed, although the overall pattern of results does suggest limits to group decision making. For a recent review of the
quality of group decision making in general, see Kerr and Tindale (2004).
The quality of investment committee decisions will matter most when the
investment decisions involve more complex valuations and subjective forecasts; i.e.,
where individual judgmental biases may also matter most. Lo and Mueller (2010)
recently provided a characterization of levels of uncertainty in markets. Committee
decisions may matter more when the levels of uncertainty are higher (their levels 4 and 5), that is, when there are limits on what we can deduce or “know” about the underlying phenomena generating whatever data are available. Also, the quality of investment
committee decisions will matter most when there are significant constraints on arbitrage
possibilities. Examples of such decisions include investing in newer companies, extreme
growth stocks, hedge funds, and new types of financial instruments. Jeremy Siegel in the
Wall Street Journal (9/16/08) suggests, for instance, that “Group think prevailed” when
the risks and values of new types of subprime real-estate loans and credit instruments
were assessed.
This chapter asks, and answers, three questions regarding the quality of team or
investment committee decision making. 1) When and why do collections of individuals
perform better than the average individual judge? 2) When and why do groups perform
worse than the average decision maker? In particular, when might groups amplify, not
mitigate, decision biases? 3) How might the processes of investment committee decision
making be improved?
The rest of this chapter is organized as follows: First, a simple model of
individual judgment is presented, followed by a classic model of group judgment. Next,
an illustration of how groups can outperform the average individual judge is presented.
Third, a number of examples are presented where groups amplify, not mitigate, decision
biases. Reasons why poor group decision making happens are discussed in terms of
cognitive, motivational, and social interaction factors. A key point is that poor group
decision making occurs, to a large extent, because the members of the group are human,
not because the members of the group are bad or individually not smart. The chapter
concludes with suggestions for improving committee or team financial decision making.
The focus of those suggestions is on the process of group decision making, and the need
for continued monitoring of group processes.
Truth, noise, and bias
Individual judgments. Individual judgment can be thought of in terms of three
components. The first is “truth”. That is, the part of a judgment that reflects the true state
of the world or coherent preferences. Judgments also frequently include an element of
random error or “noise”. That is, unpredictable deviations from truth. The third
component is bias or predictable deviations from truth. Over the past four decades a
growing body of research has shown that human judgments and choices frequently
exhibit predictable biases. Ariely (2008) refers to such biases as “predictably irrational”
behaviors. An example of a “bias” that is of great relevance to financial decision making
is the overconfidence that people often show in the precision or accuracy of their
judgments. Another “bias”, one of the most important in human judgment, is the
tendency to search for confirming rather than disconfirming evidence of an initial
hypothesis. A third decision “bias”, one that is related to preferences, is the highly context-dependent nature of choice behavior. For instance, sometimes A is preferred to B when C is the third option in a choice set, but B is preferred to A when D is the third option, even though neither C nor D is ever chosen.
To the extent that a judgment only includes a truth component plus random error
(or the bias component is relatively small), then a group judgment is typically better than the average of the individual judgments. This is because the group judgment will “cancel out” the noise component; the larger the group, the better. This is the fundamental principle behind the “wisdom of crowds”. It turns out, though, that going from a single judge to even a small group (n = 3 or more) can improve judgment substantially.2 See Hogarth (1978) for more on why aggregating even a few people’s estimates usually suffices to boost accuracy. This positive feature of “group” or combined decision making
does not require that the collection of individuals defining a group even meet. One could
take a collection of individual judgments and just combine those judgments statistically
(a simple average works well). An example of the wisdom of even small groups of judges
is given below.
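As a minimal illustration of this statistical point (the numbers below are arbitrary assumptions, not data from the chapter), the following sketch simulates judges whose estimates are truth plus independent noise and shows how the error of a simple average shrinks as the group grows:

```python
import numpy as np

rng = np.random.default_rng(0)
truth = 21.55        # an illustrative "true" value (e.g., dollars in a jar)
noise_sd = 8.0       # an illustrative spread of individual estimates

def mean_abs_error(group_size, trials=100_000):
    # Each judge reports truth plus independent noise; the "group" reports the simple average.
    estimates = truth + noise_sd * rng.standard_normal((trials, group_size))
    return np.abs(estimates.mean(axis=1) - truth).mean()

for n in (1, 3, 9, 177):
    print(f"group of {n:3d}: mean absolute error = {mean_abs_error(n):.2f}")
# With independent judges, the error falls roughly with the square root of group size,
# so even n = 3 captures a large share of the available gain.
```

Note that, under these assumptions, the gain from one judge to three is proportionally larger than the gain from nine judges to 177, which is the point made above.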
On the other hand, to the extent that a decision is subject to substantial amounts of
individual biases in judgment or choice, making that decision in a group often does not
mitigate the biases seen in decisions. Instead, as reviewed below, group decision making
can actually amplify biases in judgment and choice. That is, N heads can be worse than
one head.
Group judgment. The classic model of group performance (Steiner, 1972) sees the
actual performance of a group as a function of the potential productivity of the group
members minus “process loss” components that cause a group to perform at a level below
its potential. One might also include the possibility of “process gain” due to the positive
or synergistic aspects of a group of individuals working together. Group potential is a
function of the number of “independent” judges with task-relevant information and skills
making up a group. Group potential is often defined in terms of a statistical combination
2 A survey of investment committees by Arnold Wood (Martingale Asset Management) found that investment committees ranged from 3 to 12 in size, with the median being 7 members.
of individual judgments. The concept of independent judges is stressed above because
simply adding more and more members to an investment committee is not going to help
if the added members share a common background (knowledge, beliefs, and values) with existing committee members. Generally, diversity in knowledge and thought processes is
more important in group composition than the size of the group. An example of a
“process loss”, to be discussed later, is when a member of a group does not fully share
the information that he or she might have that is relevant to a decision. Error checking of
the facts or the reasoning being presented by one person by another member of the group
is an example of a potential “process gain”.
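Steiner’s relation can be summarized as a simple accounting identity (the wording of the terms follows the text; the notation is added here only for exposition):

```latex
\text{Actual productivity} \;=\; \text{Potential productivity} \;-\; \text{Process losses} \;+\; \text{Process gains}
```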
Unfortunately, the ubiquitous finding across decades of research is that groups
usually fall short of their potential productivity – they exhibit process loss (Kerr & Tindale, 2004). While process loss may be a common result of group decision making, it does not occur in all cases. There are problem solving tasks in which groups come close to their apparent potential. Further, process interventions in group decision making can mitigate process losses. Finally, Kerr and Tindale (2004) report that “process gains”,
where group performance is better than any individual or a statistical combination of
individual member efforts, have proven elusive or modest at best.
While “process gains” have proven elusive, as discussed below, there are a
number of studies showing that group judgments are frequently better than the average
individual judgment. The best individual judgment is frequently better still, but it is often
hard, unfortunately, to identify the best judge before the decision is made.
Example of Group Accuracy in Judgment: N heads can be better than one.
Figure 1 shows a picture of a container full of nickels. How much money, in total,
do you think is in that container? Now imagine that three individuals were asked that
question as individuals. How well do you think the average individual (one of those three
judges picked at random) would do on that estimation task? How well in terms of
accuracy might the group of all three individuals do if asked to reach a consensus
estimate? How well might that consensus group judgment do in comparison to a simple
averaging of the three individual estimates?
Recently three colleagues (Jack Soll, Al Mannes, and Lehman Benson) and I
asked 177 undergraduates to estimate the amount of money in the container (shown in
Figure 1), first as individuals and then as part of groups of three persons. The correct
answer is $21.55. A measure of how close an estimate is to the correct answer is provided
by the absolute deviation from the true answer. At one extreme, a randomly selected
individual from the “crowd” of 177 people was found to deviate (+ or -) from the truth by
a value of $8.33. Most people underestimated the amount of money in the container. At
the other extreme the combined (average) estimate of all 177 respondents deviated from
the truth by $4.55 (average estimate of $17.05). That is, there is a 45% improvement in
the estimate by moving to the “wisdom of crowd” provided by the average of all 177
respondents. Picking just three judges at random, averaging their estimates, would lead to
a 28% improvement in the estimate as compared to the randomly selected individual. The
mean deviation for the average of three randomly selected individuals (a small group)
was $6.02. Interestingly, the actual performance of the interacting groups of size three
was worse, with an average of $6.77 (a 19% improvement rather than a 28%
improvement). In part, this decrease in performance seems to have been the result of the
behavior of groups where there were two people with similar estimates (referred to as the
middle and near judge) and one person with a substantially different estimate (referred to
as the outlier judge). Generally, the opinion of the third judge whose estimate was further away (the outlier) was overly discounted. That is, the process used for reaching a group
decision effectively reduced the size of the group from 3 to closer to 2. Interestingly, the
discounting of the “outlier” opinion increased the more the outlier’s opinion deviated
from the opinions of the other two members of the group. While this makes sense if
deviation of opinion is a sign of less knowledge by the outlier, it also has the effect of
reducing the likelihood that the individual opinions to be combined will “bracket” truth.
That is, one judge will produce an estimate that is below “truth” while another judge will
produce an estimate that is above “truth.” Similar results were found in a study where the task was to forecast changes in monthly sales on the basis of cues like the level of website
traffic. While outliers were generally less accurate than the two judges that were more in
agreement, including the estimate of the outlier was important in collections of opinions
doing better than any individual judgment.
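As a quick check, the percentage improvements quoted above follow directly from the reported mean absolute deviations; a minimal sketch using only the numbers given in the text:

```python
# Mean absolute deviations from the true value ($21.55), as reported above.
random_individual = 8.33
crowd_of_177 = 4.55
average_of_3_random = 6.02
interacting_group_of_3 = 6.77

def improvement(mad, baseline=random_individual):
    # Percentage reduction in error relative to a randomly selected individual judge.
    return 100 * (baseline - mad) / baseline

print(f"crowd of 177:           {improvement(crowd_of_177):.0f}% better")           # ~45%
print(f"average of 3 judges:    {improvement(average_of_3_random):.0f}% better")    # ~28%
print(f"interacting group of 3: {improvement(interacting_group_of_3):.0f}% better") # ~19%
```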
The results described above are common in the group judgment literature. Groups
generally outperform a randomly selected individual. However, interacting groups
frequently do worse than a simple statistical average of the individual judgments. Groups are thus better than the average individual, while also showing evidence of “process loss”.
Why a committee might perform poorly: Sources of process loss
There are many sources of “process loss” in group or committee decision making.
Some of the sources are more cognitive and include poor information sharing and the
amplification of biases in judgment through the search for confirming information. Other
sources are more motivational in nature and include lower levels of effort when one is
part of a group (“social loafing”), and the introduction of goals, such as the desire to conform to a group opinion, that can lower accuracy. Also, influence in a group may be a function of factors that are not related to knowledge or skills. For instance, people who talk a lot may have more influence on a group judgment than is actually warranted. Or, as suggested above, one may treat opinions that are most similar to one’s own as necessarily
being more valid. Finally, there is an illusion of group effectiveness that often exists.
That is, people believe groups do much better than individual judges even though the
evidence is mixed, at best.
Do committees mitigate cognitive biases?
As noted above, research has identified many biases in individual judgments and
choices. See Weber and Johnson (2009) for a recent review. These individual thinking
biases clearly impact financial decision making. A number of studies have investigated whether group decision making mitigates, amplifies, or has no effect on the existence of biases in consensus decisions. Below are a few illustrations of this research.
Overconfidence in judgment. Overconfidence in the accuracy and precision of
one’s knowledge is sometimes viewed as possibly the greatest deterrent to rational
investment (J. Clements, Wall Street Journal, 2/27/2001). To illustrate one type of
question asked in overconfidence studies, consider the following: Over the next year, what do you expect that the average S & P 500 return will be (+ or -) _________%? Obviously the return could be lower (worse). Please give a lower bound to your estimate such that there is a 1-in-10 chance that the return will be less than ____%. On the upside, please give an upper bound to your estimate such that there is a 1-in-10 chance that the return will be greater than ____%. Another form of an overconfidence question takes the following form: Next year (12 months from now), the competitor (in your industry) with the largest market share will have: a) less than or equal to 30% of the market, or b) greater than 30% of the market? What is the probability (.5 to 1.0) that your answer a) or b) is correct? A
frequently used measure of the quality of such subjective judgments is to assess the
proportion of times events said to have a certain probability (e.g., .8) of occurrence (or of
being correct) do in fact occur. A “well calibrated” probability assessor would be one
where the frequency of occurrence was, in fact, 80%. Similarly, a “well-calibrated” 80%
confidence interval around an estimate, such as the percent return of the S & P 500,
should contain the actual return about 80% of the time.
In general, people are not well-calibrated. For example, in a study of 7000
forecasts of the S & P 500 returns by top corporate executives (Ben-David, Graham, and Harvey, 2007), only about 38% of the forecasts were within the 80% confidence interval.
Further, it appears that the gap between confidence and accuracy grows wider as the
amount of information available to the judge increases (Tsai, Klayman, & Hastie, 2008).
That is, confidence goes up quickly with more and more information but accuracy
increases at a much slower rate.
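A minimal sketch of how such calibration is typically scored; the forecast triples below are hypothetical placeholders, not the Ben-David, Graham, and Harvey data:

```python
def interval_hit_rate(forecasts):
    """Fraction of (low, high, actual) triples in which the realized outcome
    fell inside the stated confidence interval."""
    hits = sum(low <= actual <= high for low, high, actual in forecasts)
    return hits / len(forecasts)

# Hypothetical 80% intervals for S & P 500 returns: (lower bound %, upper bound %, realized %).
sample = [(-2.0, 12.0, 4.5), (0.0, 8.0, 15.1), (1.0, 10.0, -8.3), (-5.0, 9.0, 3.2)]
print(f"hit rate = {interval_hit_rate(sample):.0%}")
# A well-calibrated 80% interval should contain the outcome about 80% of the time;
# the executives' intervals reportedly did so only about 38% of the time.
```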
Do groups mitigate the overconfidence effect? Several studies have investigated
this question. One by Plous (1995) nicely illustrates the findings. Individuals and groups
of those individuals were asked to assess 90% confidence intervals for ten questions.
Assuming perfect calibration, one would expect that for 9 out of the 10 questions the true answer would fall within the 90% confidence interval. This did not happen for individuals or groups. Instead, for only about 3.1 of the 10 items did the true answer lie within the confidence intervals assessed by individuals. Interacting groups’ assessments were only slightly better (4.2 out of 10). Had the individual judgments been combined statistically, the performance would have been much better (7.4 out of 10), indicating
“process loss”.
Interestingly, the groups studied by Plous thought they would be much better, on
average, than the individuals. That is, people thought that groups would be much better
than they actually were. This “illusion” of group effectiveness partially explains why
group decision making is so popular.
Sunstein and Hastie (2008) point out that deliberation among group members
tends to increase confidence in the group’s judgment or decision, whether or not such deliberation actually increases judgmental accuracy. In part, this is because deliberation usually reduces the variance in members’ judgments over time as people converge toward each other’s views. And consensus is often taken as a sign of correct thinking. Interestingly,
this same deliberation process can increase the variance in judgments on the same
problem between groups.
Planning fallacy. A common judgmental bias that is related to overconfidence is the planning fallacy (Buehler, Griffin, & Ross, 1994), the tendency to underestimate the time needed to complete a project. One striking result is that when people
are asked to give optimistic and pessimistic estimates of the time needed to complete a
project, more than 50% of the projects took longer than the pessimistic estimate. Another
result was that people who were 70% confident in their best estimates of the time needed
to complete a project were only correct about 40% of the time.
One reason for the planning fallacy seems to be that thoughts are focused more
on the path to likely success of a project than on potential impediments. In addition, the
distribution of actual outcomes experienced in the past for similar projects was given
little or no weight. This discounting of past knowledge is a form of “base-rate”
information neglect, a common judgmental bias itself.
The “planning fallacy” seems slightly stronger when groups make the estimates.
For example, Buehler, Messervey, and Griffin (2005) found that individuals tended to
underestimate the time needed to complete a project (estimate = 45.16 days, actual =
59.31 days). Groups were even more optimistic (estimate = 42.25 days). Similarly, in
another study the actual task completion time was 2.30 days, on average, and the
individual estimates were 1.87 days and the group estimates were 1.07 days.
To summarize, groups do not seem to mitigate common judgmental biases. There
are even times when groups seem to amplify the biases. In the next section this
amplification effect is illustrated with preference tasks.
Sunk costs. One of the most common decision errors is the “sunk cost”
phenomenon. This phenomenon is also known as “escalation of commitment”.
Essentially the phenomenon refers to the observation that people frequently persist in a
project more than might be justified on the basis of projections regarding future costs and
benefits, when there are prior costs or investments in that project. It has been suggested that one reason for the “sunk cost” effect is the reluctance of people to admit mistakes. In other words, one way to hide a little mistake is to bury it under a potentially bigger one.
Whyte (1993) has studied the frequency of sunk cost related decisions made by
individuals and by groups. In a control condition, a project was presented to both
individuals and groups without any reference to a prior investment (sunk cost). Based on
the description of the project, 29% of the individuals and 26% of the groups were in favor
of going ahead with the project. However, when sunk cost information was presented to a different set of individuals, about 69% of the individuals decided to go ahead with the project, showing the “sunk cost” effect. For groups, the sunk cost effect
was even stronger. In that case, 86% of the groups voted to go ahead with a project when
sunk cost information was presented.
Further, in groups where only a minority of the members of the group initially
favored escalation, about 50% of the decisions ended up in favor of continuing the
project. In contrast, groups with an initial minority in favor of abandonment of the project
only had about 2% of the final group decisions in the direction of abandonment. This
suggests that an incorrect form of economic reasoning was actually more persuasive in
group discussion than the more correct form of economic reasoning.
Context effects in preference. Context-independence refers to the assumption that
the relative ranking of any two options should not vary with the addition or deletion of
other options. This is sometimes called the principle of “independence of irrelevant
alternatives”. It is often viewed as a fundamental property of rational choice behavior.
Despite its logical appeal, decision makers do not always satisfy context-independence in
their choices. For example, Simonson and Tversky (1992) report a study in which given a
choice between $6 and a Cross pen, only about 36 percent of the people selected the
Cross pen. However, when a clearly inferior pen (a decoy) was added to the choice set (a
pen selected only about 2% of the time), the percentage of people selecting the Cross pen
over the $6 rose to 46%.
Slaughter, Bagger, and Li (2006) explored whether or not this context effect
would occur with group decision making. The task was to select an employee based on
ratings of sales and service potential to consumers. (This task is clearly related to
investment committee decisions regarding the hiring of money managers.) One candidate
for the marketing position (named Johnson) had an average rating in terms of sales but a
high rating in terms of service. Another candidate (named Smith) had a high rating in
terms of sales but only an average rating in terms of service. For half of the individual
decision makers and half the groups, the choice problem was to select between Johnson,
Smith, and a third candidate (named O’Brien - J) who was clearly inferior to Johnson.
O’Brien - J had a similar high rating on service as Johnson but only a low rating on the
sales dimension. For the other half of the individual decision makers and groups, the
choice problem was to select between Johnson, Smith, and a third candidate (named
O’Brien – S) who was clearly inferior to Smith. O’Brien – S had a similar high rating on
sales as Smith but only a low rating on the service dimension. The percentage of
individuals choosing Johnson over Smith when the third option was dominated by
Johnson (O’Brien – J) was 83% compared to 59% when the third option was dominated
by Smith (O’Brien –S). This effect was slightly stronger with group decisions. Ninety
percent of the groups selected Johnson when that was the dominating candidate compared
to only 49% when Smith was the asymmetrically dominating candidate. That is, groups
may exhibit more context-sensitive preference than individuals.
Polarization of attitudes and risk-taking. One potential plus of group decision
making is the ability to bring to bear on a decision a diversity of attitudes and values. An
implicit assumption is that the diversity of attitudes and values will cause less extreme
positions to be adopted by a group. The evidence is clear, however, that group discussion
often leads to the polarization of attitudes, not the mitigation or compromising of
attitudes. An excellent example of this effect is provided by Schkade, Sunstein, and
Hastie (2007). In that research, groups of citizens from Boulder, Colorado (a predominantly liberal city) and groups of citizens from Colorado Springs, Colorado (a predominantly conservative city) were asked to deliberate and reach consensus about three policy issues – global warming, affirmative action, and civil unions for same-sex couples. The major effect of deliberation as part of a group was to make group members more extreme in their views than before they started to talk. Another related, and important, finding was that both liberal and conservative groups became more homogeneous; deliberation reduced internal diversity (see the earlier discussion of how this can also lead to greater confidence in decisions). The Schkade et al. (2007) results also suggest that initial diversity in a group’s beliefs and values will not persist over time.
In an investment context, this polarization effect suggests that if you have a
committee made up of a majority of slightly risk-taking people, the committee decisions
will often exhibit even more risk-taking. On the other hand, if you have a committee
made up of a majority of slightly risk-averse people, the committee decisions will often exhibit even more risk-aversion. Clearly this result is important for investment
committees when dealing with the issue of asset allocation, among other issues.
Myopic loss aversion refers to the suggestion that people are myopic (short-sighted) in evaluating outcomes over time and are more sensitive to losses than to gains. Benartzi and Thaler (1995) suggest the concept of myopic loss aversion as an explanation of the equity premium puzzle. Sutter (2007) has shown that groups as well as individuals are prone to myopic loss aversion in that they are more risk averse when considering investments over shorter evaluation and commitment periods. He does suggest, however, that group decision making attenuates the degree of loss aversion. This attenuation is due to the fact that in his study three-person teams invested more in a risky asset than individuals regardless of time period.
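A rough simulation sketch in the spirit of the Benartzi and Thaler argument; the return distribution and the loss-aversion coefficient are illustrative assumptions, not values taken from the studies cited:

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 2.25                 # assumed loss-aversion coefficient (illustrative)
mu, sigma = 0.01, 0.05     # assumed mean and s.d. of monthly returns (illustrative)

def loss_averse_value(returns):
    # Piecewise-linear value function: losses weigh lam times as much as gains.
    return np.where(returns >= 0, returns, lam * returns).mean()

monthly = rng.normal(mu, sigma, size=(100_000, 12))
annual = monthly.sum(axis=1)

print("evaluate every month (annualized):", round(12 * loss_averse_value(monthly), 3))
print("evaluate once per year:           ", round(loss_averse_value(annual), 3))
# The same asset looks worse to the myopic (monthly) evaluator because short evaluation
# periods expose more losses; the yearly evaluator sees fewer losses and values it higher.
```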
To summarize, N heads are sometimes better than one and sometimes worse. In
particular, groups do not always mitigate judgmental biases and risk attitudes. Why might
groups amplify decision biases and attitudes? More generally, what are some of the
reasons for “process loss” in group decision making? We next turn to a brief discussion
of a few of the reasons for poor group decisions. The reasons are divided into cognitive
factors and motivational factors, although actual committee decisions will typically
include a variety of both types of factors.
What causes process losses?
Poor information sharing. For groups to be most effective, different information needs to be held by the different members of a group, and that different information must also be shared among the group members. Different information held by different members of a group reflects the group’s potential. Sharing the unique and different information held by different members of a group is an important part of the decision process. Consequently, over the past two decades there has been much research devoted to investigating the role of shared versus unshared information on decisions. Much of this work was pioneered by Stasser and Titus (1985) and is referred to as “hidden profile” decision problems.
To illustrate this type of research, imagine that Kate, Ken, and Keith are members
of a three person investment committee who have the task of selecting between one of
three candidates (A, B, and C) for a position as a money manager. There are eight items of
information (x1 to x8) about the candidates on job dimensions considered relevant to the
selection task, e.g., a rating of technical skill. To make things easier, assume that each
item of information takes on just two possible values, 1) positive or 2) negative/unknown.
Now consider two information conditions – shared and unshared. In the shared
information condition, all three members of the committee, Kate, Ken, and Keith, know
all the information about each candidate. It is known by all members that candidate A has
positive ratings on all eight job dimensions. It is known by all members that candidate B
has positive ratings on dimensions x1 to x5 and negative ratings on dimensions x6 to x8.
Similarly, it is known by all committee members that candidate C has negative ratings on
dimensions x1 to x4 and positive ratings on the remaining four dimensions. Knowing all
the information, it is clear that candidate A is the superior choice. Often, however, information is not fully known to all members of a committee. Instead, some members know more about some things and other members know more about other things. This is called the “unshared” condition. For example, imagine that Kate knows the following
information: Candidate A has positive ratings on three dimensions x1, x2, and x3.
However, Kate knows nothing about the ratings for candidate A on the other dimensions.
Kate does know that candidate B has positive ratings on five dimensions x1, x2, x3, x4,
and x5. Kate also knows that candidate C has positive ratings on four dimensions, x5, x6,
x7, and x8. Committee member Ken, on the other hand, knows the same information
about candidates B and C as everyone else but only knows that candidate A has positive
ratings on three dimensions: x4, x5, and x6. Finally, committee member Keith shares the
information on candidates B and C but knows only that candidate A has positive ratings
on the three dimensions: x6, x7, and x8.
If the committee members in the unshared condition described above were to fully
share all their information, then the group would be back in a shared information condition. Again, the choice of candidate A would be obvious. Unfortunately, that is often not what happens. The choice of candidate A becomes much less likely in the unshared condition; instead, candidate B, and sometimes candidate C, becomes more likely to be selected by the group. One reason is that people tend to first share the information that is already known to everyone. Shared information is also discussed more often. Another reason is that people seem to adopt a position of advocating the candidate they
think was best initially, i.e., candidate B, and present the information in support of that
position rather than sharing all their information. The bottom line is that all the
information known to the members of the group is frequently not shared during the
decision making process. This failure to completely share unique information is an
important source of “process loss”.
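A small sketch of the hidden-profile structure just described, using the ratings given in the text; the point is simply that pooling the members' unique knowledge restores candidate A's full profile:

```python
# Each member's partial knowledge: the dimensions they know to be positive for candidate A.
# Information about candidates B and C is fully shared by everyone.
knowledge_about_A = {
    "Kate":  {"x1", "x2", "x3"},
    "Ken":   {"x4", "x5", "x6"},
    "Keith": {"x6", "x7", "x8"},
}
shared = {
    "B": {"x1", "x2", "x3", "x4", "x5"},   # 5 known positives
    "C": {"x5", "x6", "x7", "x8"},         # 4 known positives
}

for candidate, positives in shared.items():
    print(f"candidate {candidate}: {len(positives)} known positives (fully shared)")
for member, known in knowledge_about_A.items():
    print(f"{member} sees only {len(known)} positives for candidate A")

# If all unique information were pooled, A's full profile of 8 positives would emerge.
pooled_A = set().union(*knowledge_about_A.values())
print(f"pooled view of candidate A: {len(pooled_A)} positives")  # 8 -- A dominates B and C
```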
The search for confirming information. One of the important, and most common,
“biases” in human judgment is the tendency of people to search out information that
tends to confirm (rather than disconfirm) previously held beliefs. As Francis Bacon
noted,
“The human understanding when it has once adopted an opinion (either as being the
received opinion or as being agreeable to itself) draws all things else to support and
agree with it. And though there be a greater number and weight of instances to be found
on the other side, yet these it either neglects and despises, or else by some distinction sets
aside and rejects, in order that by this great and pernicious predetermination the
authority of its former conclusions may remain inviolate.”
Information that supports a previously held position tends to be subject to a “can I
believe it” test. Information that does not support a position tends to be subject to a much
more stringent “must I believe it” test. Further, an item of information that is ambiguous
in its meaning tends to be interpreted as supporting the already held opinion. Confirmation bias is a form of path-dependence in judgment that characterizes behavioral
models of human judgment and choice.
Unfortunately, this tendency towards a confirmation bias seems to be amplified
during group decisions. Schulz-Hardt, Frey, Luthgens, and Moscovici (2000), for
instance, found that the confirmation bias in the search for supporting rather than
conflicting information was significantly stronger for groups than for individuals.
Individuals did show the confirmation bias but to a lesser extent. Further, the larger the
majority in a group that was in favor of the initially preferred option, the stronger the
confirmation bias.
More recently, Kerschreiter, Schulz-Hardt, Mojzisch, and Frey (2008)
demonstrated that one factor contributing to a confirmation bias in group information
seeking is the high confidence that groups can develop in the correctness of their
decision. Highly confident groups show a strong confirmation bias. Further, groups that
are more homogeneous in their initial preferences are more confident. Remember, that a
group’s high confidence in the correctness of their decision may reflect overconfidence
not accuracy. Consequently, the impact of confidence on confirmatory information
seeking is likely to amplify poor decision making.
At an even more general level, Benabou (2009) argues that groups, organizations, and markets can reinforce individual errors in beliefs, such that contagious exuberance can take hold and collective delusions occur. One factor that can contribute to groups
reinforcing rather than dampening errors in beliefs is social conformity, discussed next.
Social conformity. Poor group decision making can result from social conformity
pressures and other factors related to social interactions. Often, for example, people will
modify their opinions in the direction perceived to be consistent with opinions held by
others in a group. The classic demonstration of a social conformity effect is by Asch
(1956). In that work, Asch showed that even a straightforward perceptual judgment (the length of a line) can be influenced by the wrong opinions expressed by other people in a
group. There is also some evidence that social conformity pressures can lead group
members to suppress divergent opinions, decide quickly in order to avoid unpleasant
tensions within a group, and defer to a respected leader’s position. McCauley (1998) has
argued in a similar vein that poor group decision making results from seeking to preserve
friendly relations in a group based on the personal attractiveness of group members.
Sunstein and Hastie (2008) refer to this as “reputational cascades” in that information that
is expressed earlier in deliberations tends to get reinforced over time because people are
reluctant to express a contrary opinion in order to avoid losing the good opinion of the
person who spoke earlier. Poor decision making can also result from members of a group
seeking to support an attractive leader. Often criticism of ideas is seen as criticism of the
individuals behind those ideas, and thus something to avoid. This social interaction effect can also be seen as leading to a confirmation bias in information
processing. More generally, see the special issue of Organizational Behavior and Human Decision Processes, Vol. 73, 1998, on the topic of Groupthink (Janis, 1972) and the social interaction determinants of good and bad group decision making. Benabou (2009) offers a formal model of when collective denial of reality is more or less likely in groups.
Summary of Quality of Group Judgments
This chapter has provided a few examples of when groups do better than
individuals in judgment and choice tasks, and when groups do poorer. For other examples
see Kerr and Tindale (2004) and De Dreu, Nijstad, and van Knippenberg (2008). A
summary of the results from the extensive literature on group performance follows. First,
when groups are faced with tasks where once a correct solution is proposed the answer is
clear, groups do better than individuals. That is, groups do better in solving “Eureka”
tasks. The major danger in such tasks is when the shared conceptual model of what is correct is flawed. Further, sometimes people treat tasks as more intellectual than they
really are, forgetting the critical role of assumptions. During the late 1990s, for instance,
many people shared the view that a “new” internet economy had emerged in which old
rules no longer applied. Today we are dealing with the results of a shared view that
housing prices could only go up. We are also dealing with the shared belief that the
wealth of information now available to financial managers meant that risk could be
understood (modeled) and priced much better than before (see Breeden, 2009, for a
discussion of the use and misuse of models in investment management).
Second, on more judgmental tasks such as estimation, where the primary source
of error is random noise, groups tend to be better than the average individual.
However, for estimation and forecast tasks you are often better off by simply taking a
statistical average of a collection of individual forecasts or estimates. Having the
members of a group meet and reach a “consensus” forecast or estimate may be worse
than statistical averaging. Interacting groups, compared to statistical groups, tend to add
noise, which lowers the overall validity (truthfulness) of judgment. That is, interaction
among group members can produce more unpredictability in judgments (see, for example,
Schkade, Sunstein, and Kahneman, 2000).
An exception to the recommendation for statistical averaging is when you have
individuals with very different levels of forecasting ability, and you are able to identify the better forecasters. In that case, adding the opinions of the poorer forecasters through statistical averaging will “hurt”. However, evidence suggests that people are not that good at identifying the better versus poorer forecasters. People who are the most overconfident in their forecasts may also be seen as better. In addition, we often tend to view people whose forecasts are more in agreement with our own as “better”. People also tend to view themselves as more accurate than the average judgment, and consequently tend to overweight their own opinion when revising opinions in light of the opinion of an advisor. For more on how, and how well, people identify and use the opinions of others, see Soll and Larrick (2009).
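A small simulation sketch of why overweighting one's own opinion is costly, assuming (purely for illustration) that the judge and an advisor are equally accurate and independent:

```python
import numpy as np

rng = np.random.default_rng(2)
truth, noise_sd, trials = 100.0, 10.0, 200_000

# Two equally accurate, independent estimates of the same quantity (an assumption).
own = truth + noise_sd * rng.standard_normal(trials)
advisor = truth + noise_sd * rng.standard_normal(trials)

for w_own in (1.0, 0.7, 0.5):
    combined = w_own * own + (1 - w_own) * advisor
    print(f"weight on own opinion = {w_own:.1f}: "
          f"mean absolute error = {np.abs(combined - truth).mean():.2f}")
# Under these assumptions, equal weighting minimizes error; putting 70% on one's own
# opinion gives up part of the averaging benefit, and ignoring the advisor gives up all of it.
```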
Suggestions for Improvement
While there are clearly reasons to be skeptical regarding the promise of investment committee decision making, it is unlikely that investment committees will
disappear. One recommendation for improving investment committee decision making is
simply to become aware of the pitfalls (traps) that can affect group decisions. Awareness
of decision traps can help (Russo & Schoemaker, 2002). In addition, the literature on
group decision making suggests a number of other ways in which groups might make
better judgments and choices. In this section of the chapter a few of the best established
suggestions for improving committee or group decision making are discussed.
Selection of committee members. Arnold Wood and I conducted a survey of investment committees a few years ago and found that investment committees were very homogeneous in membership. Over 90% of the members were white males, with most of those 60 years of age or older. An interesting question, for which we do not have data, is how homogeneous the committee members were in educational backgrounds and other experiences. However, given that group formation tends to be guided by the principle of similarity among potential group members, a high degree of shared backgrounds (e.g.,
same graduate schools), experiences, and viewpoints is likely to be the case. The more
the backgrounds and experiences of a committee’s members are shared, the more likely it
is that their judgments will be correlated. That is, they will exhibit shared biases in their
judgments. This has implications for how to go about selecting the members of an
investment committee. For instance, there are some modeling results regarding group
judgments suggesting that a smaller committee (n = 4) with less correlation in judgments
(e.g., r = 0) would perform better than a much larger committee (n=16) with even modest
levels of correlations (e.g., r = .30). Thus, one suggestion is to put resources into trying to
find a small number of relatively independent judges rather than spend resources on
getting a lot of people on a committee. This suggestion is also consistent with the old idea
that an increased size of a group can be a curse in terms of coordination costs. Note, this
suggestion is in terms of the accuracy of judgments that are made. One might want a
larger committee if that is related to fund raising, not the quality of the decisions to be
made.
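The intuition behind those modeling results can be checked with the textbook formula for the variance of an average of equally correlated judgments (a general statistical sketch, not the specific model being cited):

```python
def error_variance_of_average(n, rho, sigma2=1.0):
    # Variance of the mean of n judgments that each have variance sigma2
    # and share a common pairwise correlation rho.
    return (sigma2 / n) * (1 + (n - 1) * rho)

print(error_variance_of_average(n=4, rho=0.0))    # 0.25   -- small, independent committee
print(error_variance_of_average(n=16, rho=0.3))   # ~0.34  -- larger but correlated committee
# Four independent judges yield a less noisy average than sixteen modestly correlated ones.
```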
Because deliberations tend to reduce the internal diversity of groups over time,
another suggestion is to systematically change the membership of an investment
committee over time. This procedure will incur some greater management costs but will
increase the diversity of thought that is one of the reasons for using committee decision
making.
In the section above it is argued that diversity in group membership (information,
perspectives, skills, and values) enhances the quality of decisions. Research supports the
benefits of diversity in group membership in terms of knowledge, perspectives, and skills. And, as argued above, investment committees may not be diverse enough for good group decision making. Nonetheless, recent research does suggest that preference diversity across group
members may come at a decision cost. Nijstad and Kaps (2008) have shown that
preference diversity, particularly in terms of disliking different options, can lead groups
to become more “decision averse”. That is, increased preference diversity can lead groups
to be more likely to refuse to decide.
Training of committee members. If the majority of the members of a group
exhibit a judgmental bias, then group decisions tend to amplify that bias. This is
particularly true if the group uses some form of majority rule decision scheme.
Consequently, individuals need to be trained to avoid decision biases, rather than
counting on groups to overcome those biases. That is, do not count on committees to
correct for systematic biases in judgment.
An idea that has been suggested for improving group behavior is to have each
individual member of a group think independently about a problem before meeting in a
group. To the extent that such a procedure is adopted it also suggests the importance of
the training of each individual committee member. This suggestion also relates to the
need to actively manage the exchange of information, as discussed next.
Herzog and Hertwig (2009) have suggested that individuals can be trained to
capture some of the benefits of collective wisdom by using an approach they call
“dialectical bootstrapping”. The idea is to reduce an individual’s error in quantitative
estimates by averaging a person’s first estimate with a second one that harks back to
somewhat different knowledge. One way to encourage accessing different knowledge is
to present people with the first estimates and then ask them to “assume that your first
estimate is off the mark…think about a few reasons why that could be. Which
assumptions and considerations could have been wrong? …what do these new
considerations imply? Was the first estimate rather too high or too low?” (p.234). Herzog
and Hertwig report that this procedure boosts accuracy but does not quite emulate the
wisdom of many independent judges.
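A minimal sketch of the dialectical bootstrapping procedure as described above; the two estimates shown are hypothetical:

```python
def dialectical_bootstrap(first_estimate, second_estimate):
    # Average a first estimate with a second one made after deliberately questioning
    # the assumptions behind the first (Herzog & Hertwig's "dialectical bootstrapping").
    return (first_estimate + second_estimate) / 2

# Hypothetical example: a first guess at the dollars in the nickel jar, then a revised
# guess made after asking "why might my first estimate be too low?"
print(dialectical_bootstrap(15.00, 24.00))  # 19.50 -- when the two estimates bracket the
                                            # truth ($21.55), the average beats the first guess
```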
Management of information sharing. Poor information sharing is one of the key
reasons for poor group decisions. An important role for any group leader is to actively
manage the information sharing process. Manzoni, Strebel, and Barsoux (2010), for instance, argue that a key factor in making diversity of membership on boards a “good”
thing is the role of the leader as a facilitator of discussion. The leader needs to make it
clear that people were selected for membership in the group, at least in part, because of
the unique information they were expected to bring to various decisions. The sharing of
unique information is to be encouraged. Also, leaders need to actively work at balancing
the participation of the group members in the sharing of information.
One important way to encourage more information sharing is to separate the
discussion of the pros and cons of a proposed solution to a problem from the person who
first proposed the solution. Critical thought about proposed solutions should be
encouraged by the leader of a group. Criticism of a person behind a solution should be
discouraged. Leaders should also emphasize the importance of information exchange
between the members of a committee rather than persuading others of the correctness of one’s position. Finally, leaders should try to prevent a premature consensus
on decision refusal (Nijstad & Kaps, 2008).
Two other leadership suggestions by Russo and Schoemaker (2002) for avoiding
poorer group decisions are 1) the leader should make sure that the group listens to
minority views, and 2) the leader should withhold his or her ideas at first. They also
mention a custom in some Japanese companies that the member of a group with the
lowest status should speak first, followed by the next lowest, etc. The idea is that this will
mitigate a tendency for the lower ranked person to withhold his or her opinion from a fear
of contradicting a higher ranked person.
Whether the causes of poor group decision making are cognitive, motivational, or
due to social interaction effects, there is a common thread of a confirmatory bias in
judgments and choices. As noted above, one of the most significant individual biases in
judgment is the search for and interpretation of information in a way that confirms a prior
or favored belief. Many of the process losses of group decision making result from an
amplification of this confirmatory bias. That is, groups may be even more susceptible to a
confirmation bias in judgment than individual judges.
Accountability for process. Groups should agree upon what constitutes “good”
decision processing early in deliberations. For example, one should get agreement that
the group will include “base-rate” information in probabilistic forecasts, that the group
will seek “disconfirming” as well as confirming information, and that “sunk-costs” are
not relevant to future decisions. Unfortunately, too often agreement on process is viewed as a luxury item to be minimized when groups meet under time pressure. This is a mistake. Time spent on getting agreement on the processes of committee decision making is seldom wasted. Further, the group should take joint responsibility for monitoring the group’s decision processing. Every member of the group should value the contribution of
unique, and sometimes conflicting, information and beliefs, and seek such information
during deliberations. That is, the group should be accountable for the process, not just the
outcome, of a decision.
A simple procedure for increasing process accountability that has been used in
experimental studies is to set up (or pretend to) a meeting in which the decision-making
processes used in a prior decision would be reviewed and discussed. Evidence that
process accountability can alleviate the poor sharing of information is provided by
Scholten, van Knippenberg, Nijstad, and De Dreu (2007). Russo and Schoemaker (2002)
suggest an audit of group decision making processes.
Finally, the focus on good decision reasoning, information sharing, etc., must be a
continuing activity. Most of the pitfalls of group or investment committee decision
making are “human” errors that occur very naturally. Even very smart people who are
well motivated can be part of a poor group or committee decision making process. While
a group might start off with the best of intentions, and with good practices, it is easy to
fall back into more natural human tendencies that can hurt group decisions. Group, and
leader, accountability for good decision processing must be a continuous activity.
Conclusion
There are good reasons why investment committees (teams) are responsible for
many billions of dollars in investments worldwide. As the world of financial decision
making becomes more complex and dynamic, the potential value of committee-based
decisions will increase. Making decisions as a group also increases feelings of
participation (ownership) of the decision, confidence in the decision, and consequently
willingness to implement the decision. Feelings of decision “ownership” may be
particularly important in an increasingly uncertain world where even “good” investments
can be expected to have bad outcomes from time to time.
The quality of the decisions can also be higher when the major source of error in judgments is “noise”. Collective decision making does tend to cancel out random judgmental errors, particularly when individuals are more diverse in their knowledge and thought processes and therefore their individual judgments are likely to “bracket” truth.
Unfortunately, the potential pitfalls of poor group decision making are also likely
to increase in size and frequency as an investment committee deals with more complex,
uncertain, and dynamic judgmental tasks. The purpose of this chapter is to increase
awareness of some of the potential pitfalls (traps) of group decision making. Hopefully,
an awareness of “traps” will help people avoid those traps. Awareness of a few of the
many ways in which the processes of group decision making can be enhanced should also
help to improve investment committee decisions. There are methods available to increase
the potential benefits of investment committee decisions while mitigating the potential
pitfalls of group decisions. To close, if committees are going to be used to manage
investments, the processes of good committee decision making must also be managed,
and managed continually.
References
Ariely, D. (2008). Predictably irrational: The hidden forces that shape our decisions. Harper Collins, New York.
Asch, S. E. (1956). Studies of independence and submission to group pressure. Psychological Monographs, 70.
Barber, B., & Odean, T. (2000). Too many cooks spoil the profits: The performance of investment clubs. Financial Analysts Journal, 17-25.
Benabou, R. (2009). Groupthink: Collective delusions in organizations and markets.
Working paper 14764, National Bureau of Economic Research.
Benartzi, S., & Thaler, R. (1995). Myopic loss aversion and the equity premium puzzle. Quarterly Journal of Economics, 110, 73-92.
Ben-David, I., Graham, J. R., & Harvey, C. R. (2007). Managerial overconfidence and
corporate policies. Working paper. Duke University.
Blinder, A. S., & Morgan, J. (2005). Are two heads better than one? Monetary policy by committee. Journal of Money, Credit, and Banking, 37, 789-811.
Bliss, R. T., Potter, M. E., & Schwarz, C. (2008). Performance characteristics of individually-managed versus team-managed mutual funds. The Journal of Portfolio Management, 110-119.
Breeden, D. T. (2009). The use and misuse of models in investment management. CFA
Institute Conference Proceedings Quarterly, 36-44.
Buehler, R., Griffin, D., & Ross, M. (1994). Exploring the "planning fallacy": Why
people underestimate their task completion times. Journal of Personality and Social Psychology, 67, 366-381.
Buehler, R., Messervey, D., & Griffin, D. (2005). Collaborative planning and prediction:
Does group discussion affect optimistic biases in time estimation? Organizational
Behavior and Human Decision Processes, 97, 47-63.
De Dreu, C. K. W., Nijstad, B. A., & van Knippenberg, D. (2008). Motivated information
processing in group judgment and decision making. Personality and Social Psychology Review, 12, 22-49.
Herzog, S. M., & Hertwig, R. (2009). The wisdom of many in one mind: Improving individual judgments with dialectical bootstrapping. Psychological Science, 20, 231-237.
Hogarth, R. M. (1978). A note on aggregating opinions. Organizational Behavior and
Human Performance, 21, 40-46.
Kerr, N. L., & Tindale, R. S. (2004). Group performance and decision making. Annual
Review of Psychology, 55, 623-655.
Kerschreiter, R., Schulz-Hardt, S., Mojzisch, A., & Frey, D. (2008). Biased information
search in homogeneous groups: Confidence as a moderator for the effect of anticipated task requirements. Personality and Social Psychology Bulletin, 34, 679-691.
Kovaleski, D. (2000). More mutual fund companies take a team approach. Pensions and Investments, 32.
Lo, A. W., & Mueller, M. T. (2010). WARNING: Physics envy may be hazardous to
your wealth. Working paper, March 19, 2010, MIT.
Manzoni, J., Strebel, P., & Barsoux, J. (2010). Why diversity can backfire on company
boards. Wall Street Journal, R3.
McCauley, C. (1998). Group dynamics in Janis’s theory of groupthink: Backward and
forward. Organizational Behavior and Human Decision Processes, 73, 142-162.
Nijstad, B. A., & Kaps, S. C. (2008). Taking the easy way out: Preference diversity, decision strategies, and decision refusal in groups. Journal of Personality and Social Psychology, 94, 860-870.
Plous, S. (1995). A comparison of strategies for reducing interval overconfidence in group judgments. Journal of Applied Psychology, 80, 443-454.
Russo, J. E., & Schoemaker, P. J. H. (2002). Winning decisions. Doubleday, New York.
Schkade, D. A., Sunstein, C. R., & Kahneman, D. (2000). Deliberating about dollars: The
severity shift. Columbia Law Review, 100.
Schkade, D., Sunstein, C. R., & Hastie, R. (2007). What happened on deliberation day?
California Law Review, 95, 915-940.
Scholten, L., van Knippenberg, D., Nijstad, B. A., & De Dreu, C. K. W. (2007).
Motivated information processing and group decision making: Effects of process
accountability on information processing and decision quality. Journal of Experimental
Social Psychology, 43, 539-552.
Schulz-Hardt, S., Frey, D., Luthgens, C., & Moscovici, S. (2000). Biased information search in group decision making. Journal of Personality and Social Psychology, 78, 655-669.
Simonson, I., & Tversky, A. (1992). Choice in context: Tradeoff contrast and extremeness
aversion. Journal of Marketing Research, 29, 281-295.
Slaughter, J. E., Bagger, J., & Li, A. (2006). Context effects on group-based employee selection decisions. Organizational Behavior and Human Decision Processes, 100, 47-59.
Soll, J. B., & Larrick, R. P. (2009). Strategies for revising judgment: How (and how well)
people use others’ opinions. Journal of Experimental Psychology: Learning, Memory,
and Cognition, 35, 780-805.
Stasser, G., & Titus, W. (1985). Pooling of unshared information in group decision
making: Biased information sampling during discussion. Journal of Personality and
Social Psychology, 48, 1467-1478.
Steiner, I. D. (1972). Group process and productivity. Academic Press, New York.
Sunstein, C. R., & Hastie, R. (2008). Four failures of deliberating groups. Working Paper
No. 401, University of Chicago Law School.
Surowiecki, J. (2004). The wisdom of crowds. Random House, New York.
Sutter, M. (2007). Are teams prone to myopic loss aversion? An experimental study of individual versus team investment behavior. Economics Letters, 97, 128-132.
Tsai, C. I., Klayman, J., & Hastie, R. (2008). Effects of amount of information on
judgment accuracy and confidence. Organizational Behavior and Human Decision
Processes, 107, 97-105.
Weber, E. U., & Johnson, E. J. (2009). Mindful judgment and decision making. Annual Review of Psychology, 60.
Whyte, G. (1993). Escalating commitment in individual and group decision making. Organizational Behavior and Human Decision Processes, 73, 430-455.
Figure 1. [Photograph of the container of nickels used in the estimation task described in the text.]