
Running Head: Illusions of Knowledge
Creating Illusions of Knowledge:
Learning Errors that Contradict Prior Knowledge
Lisa K. Fazio
Carnegie Mellon University
Sarah J. Barber
University of Southern California
Suparna Rajaram
Stony Brook University
Peter A. Ornstein
University of North Carolina at Chapel Hill
Elizabeth J. Marsh
Duke University
Address correspondence to:
Lisa K. Fazio
Psychology Department
Carnegie Mellon University
5000 Forbes Ave
Pittsburgh, PA 15213
[email protected]
Phone: (412) 268-4109
Fax: (412) 268-2798
Word Count: 2754
Abstract
Most people know that the Pacific is the largest ocean on Earth and that
Edison invented the light bulb. Our question is whether this knowledge is stable or
whether people will incorporate errors into their knowledge bases, even when they have the
correct knowledge stored in memory. To test this, subjects answered general
knowledge questions two weeks before reading stories that contained errors (e.g.,
“Franklin invented the light bulb”). On a later general knowledge test, subjects
reproduced story errors despite previously answering the questions correctly. This
misinformation effect was found even for questions that were answered correctly
on the initial test with the highest level of confidence. Furthermore, prior knowledge
offered no protection against errors entering the knowledge base; the
misinformation effect was equivalent for previously known and unknown facts.
Errors can enter the knowledge base even when learners have the knowledge
necessary to catch the errors.
Keywords: Fiction, False Memory, Suggestibility, Knowledge
Introduction
Most people know that the Pacific is the largest ocean on Earth, that Edison
invented the light bulb and that the cheetah is the fastest land animal. Is this
knowledge stable, or can it be easily altered? The literature on false memory
creation contains many demonstrations of people misremembering details of events
(e.g., Loftus, 1975). People can even falsely recall entire events that never occurred
(e.g., Loftus & Pickrell, 1995; Wade, Garry, Read, & Lindsay, 2002), with
consequences for later behavior (e.g., Geraerts et al., 2008). Although much is known
about modifying memory for one’s personal past (e.g., Garry & Gerrie, 2005; Loftus,
2004; Roediger & McDermott, 2000), less is understood about modifying one’s
knowledge base. Our question is whether general knowledge about the world can
be easily changed, with people coming to believe that Franklin invented the light
bulb or that the Atlantic is the largest ocean.
One’s knowledge base may be more resistant to change than personal
memories. These two types of memories depend upon different brain areas (Prince,
Tsukiura, & Cabeza, 2007), and can be dissociated behaviorally (Tulving, 1985). In
addition, while most episodic memories refer to unique events, most basic facts
have been encountered repeatedly, increasing their strength in memory. Facts are
also associated with numerous other facts; in addition to knowing that Edison
invented the light bulb, people know that he was an American inventor who lived in
the 1800s. This supporting information helps to stabilize one’s knowledge about
Edison and his invention (Myers, O'Brien, Balota, & Toyofuku, 1984). In fact,
research on naïve science beliefs suggests that prior knowledge can be extremely
resistant to change; even college-level physics courses fail to correct many students’
naïve misconceptions (Carey, 1986; Clement, 1982; Posner, Strike, Hewson, &
Gertzog, 1982).
It is unsurprising when people learn factual errors in novel domains.
Without prior knowledge, this simply involves encoding new memories according to
the principles of episodic memory. More interesting is whether people learn errors
despite having the correct knowledge stored in memory. Several studies hint at this
possibility. For example, reading factual inaccuracies in stories (e.g., Marsh, Meade,
& Roediger, 2003) or questions (Bottoms, Eslick, & Marsh, 2010) increases the
likelihood that people will answer later general knowledge questions with those
errors. However, in these studies prior knowledge was not measured directly in each
individual; it was inferred from norms (Nelson & Narens, 1980) or a post-experiment
knowledge check. This study breaks new ground by directly measuring individuals’
knowledge two weeks before they read stories containing errors.
To establish what individuals knew, our subjects first answered a series of
general knowledge questions as part of an online survey purportedly about tip-of-the-tongue states. Then, at least two weeks later, the subjects completed a
seemingly unrelated experiment in the lab. During this session, subjects read two
short fictional stories that contained incorrect information (e.g., “Newton proposed
the Theory of Relativity”). The stories were clearly labeled as fictional, and readers
were warned that the stories might contain errors. Soon after reading the stories,
subjects took a final general knowledge test on which they were warned against
guessing; a subset of these questions referred to the facts mentioned in the stories
(e.g., “Who proposed the Theory of Relativity?”). Of interest was whether
suggestibility depended upon individuals’ previously demonstrated knowledge. We
expected that subjects would answer general knowledge questions with story errors
if they were unable to answer the questions correctly on the survey; of primary
interest was whether subjects would answer with story errors even after
demonstrating that they had known the correct answers on the earlier survey.
Method
Subjects. Twenty-four undergraduates from the University of North Carolina
at Chapel Hill participated for course credit. Two subjects were eliminated because
they did not answer at least two questions correctly in each set of questions on the
online survey, leaving 22 subjects in the analyses.
Materials. To measure prior knowledge, we used an online survey containing
64 general knowledge short answer questions (from Nelson & Narens, 1980).
Sample questions included “What is the largest ocean in the world?” and “What is the
last name of the man who invented the telegraph?”. The questions were selected to
have a range of difficulty. Thirty-two questions measured knowledge of critical facts
that were later referenced in the stories and the rest were filler questions. The
thirty-two critical questions were split into two sets, which were matched for
difficulty according to the Nelson and Narens norms. One set appeared later in the
stories as misleading items, and the other set appeared as neutral references. The
sets were rotated across subjects, so that items that appeared as misleading for some
subjects appeared as neutral for others.
Two fictional stories (modified from Marsh, 2004) were used, each of which
contained characters, dialogue, and a plot. The stories were approximately 1400
words in length. Critically, each story made eight references to incorrect but
plausible answers (misleading items) and eight references to critical concepts
without suggesting a specific answer (neutral references). For example, a
misleading reference referred to “paddling around the largest ocean, the Atlantic”,
whereas the neutral reference mentioned “paddling around the largest ocean”
without specifying the name of the ocean. Across the two stories there were 32
critical facts, which corresponded to the 32 critical questions on the online survey.
Each story also included eight correct filler facts (which differed from the fillers
from the survey), so that accurate information was also presented. These correct
fillers were the same for all subjects; they did not overlap with the 32 critical facts
and were not considered part of the experimental design. The final general
knowledge test contained 64 short answer questions: the 32 critical questions from
the online survey (also referred to in the stories) along with 32 new fillers.
Procedure. Subjects completed an online survey, which was described as
investigating tip-of-the-tongue states. They were warned that some of the
questions would be difficult, and told to respond with “I don’t know” instead of
guessing. Subjects rated their confidence in each answer using a 5-point scale.
Later, subjects were invited to participate in a seemingly unrelated experiment that
occurred at least two weeks later in the lab (M = 19 days, SD = 4).
The laboratory session began with a story-reading phase. Subjects were
presented with a booklet that contained both stories and were told to read the
stories carefully, as they would later be asked questions about them. Subjects
were warned that the stories were fictional and that some of the information
presented might be incorrect. Following each story, subjects answered four
comprehension questions. After reading both stories at their own pace, subjects
solved puzzles for five minutes before taking the final general knowledge test, with
instructions warning against guessing. Finally, subjects read the correct version of
each fact and were debriefed about the experiment.
Results
Online Survey. Subjects correctly answered 38% of the critical questions.
Questions that later appeared as neutral facts (M = .38) and misleading facts (M =
.39) were equally difficult, t < 1.
Final Test. Subjects’ responses on the final test were coded as correct,
misinformation, other wrong, and “don’t know”. For the question about the largest
ocean, “Pacific” would be scored as correct, “Atlantic” as misinformation, and
“Indian” as another wrong answer. Overall, subjects answered “don’t know” to 42%
of questions, indicating that they followed the warning against guessing on the final
test.
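For concreteness, the scoring scheme can be expressed as a short function. The sketch below (in Python) is illustrative only; the function and variable names are our own, but the four response categories and the largest-ocean example come directly from the description above.

def code_response(response, correct_answer, misinformation_lure):
    # Categorize a final-test answer as correct, misinformation,
    # other wrong, or don't know, following the scheme described in the text.
    answer = response.strip().lower()
    if answer in ("i don't know", "don't know"):
        return "dont_know"
    if answer == correct_answer.strip().lower():
        return "correct"
    if answer == misinformation_lure.strip().lower():
        return "misinformation"
    return "other_wrong"

# Largest-ocean example from the text: "Pacific" is correct and
# "Atlantic" is the misinformation presented in the story.
print(code_response("Atlantic", "Pacific", "Atlantic"))  # misinformation
print(code_response("Indian", "Pacific", "Atlantic"))    # other_wrong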
Of primary interest were the effects of reading errors that contradicted
previously demonstrated knowledge. Thus, we first analyzed only items that were
answered correctly on the online survey.
The critical result involved ability to answer general knowledge questions
after reading neutral versus misleading references in the stories. As shown in
Figure 1, panel A, subjects answered fewer questions correctly after reading the
misinformation (.82 vs. .71, t(21) = 2.23, SEM = .05, d = .47), even though they had
answered all of these questions correctly two weeks earlier. Numerically, a similar
pattern occurred when the analysis was restricted to the strongest prior beliefs,
which were defined as correct survey answers produced with the highest level of
confidence (.95 vs. .87, t(16) = 1.22, SEM = .05, p = .24, d = .46).
Most importantly, subjects answered more general knowledge questions
with misinformation after reading it in the stories (.01 vs. .20, t(21) = 3.52, SEM =
.05, d = 1.12), as shown in panel B. Reading the misinformation was harmful, even
when the analysis was restricted to the strongest prior beliefs (.00 vs. .11, t(16) =
2.21, SEM = .05, d = .76).
Figure 1. Panel A shows the proportion of final general knowledge questions
answered correctly after knowledge was demonstrated on the online survey. Panel
B shows the proportion of final general knowledge questions answered with
misinformation after knowledge was demonstrated on the online survey.
We also examined the effects of story reading when subjects did not have
accurate prior knowledge. The analysis below includes only questions that were
answered incorrectly or with “I don’t know” on the online survey (the results were
very similar when incorrect and “don’t know” responses were considered separately).
As shown in Table 1, subjects were unlikely to correctly answer final general
knowledge questions if they did not know the answers two weeks earlier (M = .06),
and this was unaffected by reading neutral versus misleading information, t < 1.
This finding is not surprising since subjects never saw these correct answers in the
experiment. In contrast, when subjects did not have accurate prior knowledge,
there was a large misinformation effect. Reading story errors greatly increased the
likelihood of responding with misinformation on the general knowledge test, as
compared to having read neutral references, t(21) = 5.17, SEM = .04, d = 1.36.
Table 1: Questions on the final general knowledge test answered correctly or with
misinformation given that subjects failed to demonstrate the requisite knowledge
on the online survey (and instead answered the relevant questions incorrectly or
with “don’t know”).
                              Neutral Frame     Misleading Frame
Proportion Correct                 .06                .06
Proportion Misinformation          .08                .26
In the final analysis, we examined whether suggestibility differed for facts for
which subjects had previously demonstrated knowledge vs. ignorance. In order to
make this comparison, we needed to adjust for the different baserates, since
subjects were more likely to respond with misinformation after reading neutral
references for unknown facts (.08, as shown in Table 1), than for known facts (.01,
as shown in Figure 1). To do this, we took the proportion of questions answered
with misinformation after reading the misleading frame and subtracted the
proportion answered with misinformation following the neutral frame.
Interestingly, the misinformation effect was no larger when subjects did not have
accurate prior knowledge (M = .18), than when they had been able to answer the
survey questions correctly (M = .19), t < 1. Prior knowledge offered subjects no
protection from learning the errors presented in the stories.
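To make the base-rate adjustment concrete, the difference scores reported above can be recomputed directly from the proportions in Figure 1 (panel B) and Table 1. The short Python sketch below is illustrative only; the variable names are ours.

# Misinformation effect = P(respond with misinformation | misleading frame)
#                       - P(respond with misinformation | neutral frame)

known_effect = 0.20 - 0.01    # facts answered correctly on the survey (Figure 1, panel B)
unknown_effect = 0.26 - 0.08  # facts answered incorrectly or "don't know" (Table 1)

print(round(known_effect, 2), round(unknown_effect, 2))  # .19 and .18; roughly equal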
Discussion
Reading stories containing misinformation led subjects to reproduce factual
inaccuracies that contradicted their previously demonstrated knowledge. These
errors entered the knowledge base after a single exposure, even though subjects
were warned that the stories were fictional and contained errors, and were further
warned not to guess on the final general knowledge test. Given that they had
produced the correct answers two weeks earlier, the errors should have been
blatant to readers. Furthermore, readers only needed to recognize the errors to
avoid them, as opposed to recalling the correct answers. Even so, reading the errors
increased their later production, in contrast to the typical episodic memory finding
where blatant errors act as warnings and reduce suggestibility (e.g., Loftus, 1979).
This misinformation effect was just as large as the one observed when subjects did not
have any relevant prior knowledge. Having the knowledge necessary to catch or
correct the errors did not prevent them from entering the knowledge base.
These results provide further support for the claim that episodic false
memories and illusions of knowledge are different from one another. More
generally, many manipulations known to reduce memory errors when remembering
specific events either have no effect or even increase illusions of knowledge. For
example, in eyewitness and other episodic memory paradigms, pre-encoding
warnings, slowed presentation, source monitoring instructions, and blatant errors
all reduce suggestibility (Gallo, McDermott, Percer, & Roediger, 2001; Greene, Flynn,
& Loftus, 1982; Lindsay & Johnson, 1989; Loftus, 1979; Tousignant, Hall, & Loftus,
1986), whereas these either have no effect (Marsh & Fazio, 2006; Marsh et al., 2003)
or increase learning of errors that contradict general knowledge (Fazio & Marsh,
2008b). In addition, populations like older adults and children typically make more
episodic memory errors, but are less likely to show illusions of knowledge (Fazio &
Marsh, 2008a; Marsh, Balota, & Roediger, 2005).
Our results support the idea that illusions of knowledge are at least partly
driven by knowledge neglect: subjects have the requisite knowledge stored in
memory, but fail to bring it to bear when it should be used. Consistent with this
idea, when readers are explicitly asked to mark errors in stories, they miss most of
them (Marsh & Fazio, 2006). This problem is not limited to the story-reading
paradigm used here; students often fail to notice inconsistencies in general
knowledge questions (Bottoms et al., 2010) and readers sometimes fail to benefit
from activating related world knowledge prior to reading historical discrepancies
(Rapp, 2008). Undetected errors accrue fluency and later come to mind at test, and
this retrieval fluency is interpreted as truth (Kelley & Lindsay, 1993; Schwarz,
Sanna, Skurnik, & Yoon, 2007).
Note that this account does not involve overwriting of the original memories;
we do not believe that the error replaces the correct knowledge. Rather,
the two representations co-exist in memory. The error was encountered much
more recently, however, and thus is more accessible at test. Consistent with this
account, manipulations that increase attention to the errors (and thus their later
accessibility) increase suggestibility; slowing presentation and highlighting errors
both increase the chance that readers will encode them, increasing the likelihood
that they will later come to mind fluently (Eslick, Fazio, & Marsh, 2011; Fazio &
Marsh, 2008b; Marsh et al., 2003). As this activation fades (e.g., over time), so does
the likelihood that subjects will produce the errors on later general knowledge tests
(Barber, Rajaram, & Marsh, 2008; Marsh et al., 2003). The fading of activation is not
surprising; what is surprising is that a single recent exposure to an error increases
its accessibility above that of a strongly held prior response. Anything that further
strengthens that activation (e.g., retrieval practice) will make it more likely that the
error will continue to be produced after a delay (Barber et al., 2008).
Practically, our results show that educators need to be careful when teaching
with fiction or other sources that might contain errors (e.g., interviews of politicians,
feature films, television shows, and even some documentaries). Such approaches
are common practice since students generally find non-traditional sources more
interesting (e.g., Fehim Kennedy, Senses, & Ayan, 2011; Fernald, 1987), and yet doing
so may lead students to learn errors, even if they already possess the correct
knowledge. Educators need to take extra steps and actually point out the exact
errors to reduce the harmful effects of such exposure (Butler, Zaromb, Lyle, &
Roediger, 2009). Similar problems exist in advertising and politics, where
consumers and constituents are frequently exposed to false claims that may affect
their beliefs, even if they contradict prior knowledge. The more general point is that
people’s knowledge about the world is malleable and errors can easily enter the
knowledge base, regardless of what is already stored in memory.
Acknowledgements
A collaborative activity award from the James S. McDonnell Foundation
supported this work. The opinions expressed are those of the authors and do not
represent the views of the Foundation.
References
Barber, S. J., Rajaram, S., & Marsh, E. J. (2008). Fact learning: How information
accuracy, delay and repeated testing change retention and retrieval
experience. Memory, 16, 934-946.
Bottoms, H. C., Eslick, A. N., & Marsh, E. J. (2010). Memory and the Moses Illusion:
Failures to detect contradictions with stored knowledge yield negative
memorial consequences. Memory, 18, 670-678.
Butler, A. C., Zaromb, F. M., Lyle, K. B., & Roediger, H. L. (2009). Using popular films
to enhance classroom learning: The good, the bad, and the interesting.
Psychological Science, 20(9), 1161-1168.
Carey, S. (1986). Cognitive science and science education. American Psychologist,
41(10), 1123-1130.
Clement, J. (1982). Students' preconceptions in introductory mechanics. American
Journal of Physics, 50, 66-71.
Eslick, A. N., Fazio, L. K., & Marsh, E. J. (2011). Ironic effects of drawing attention to
story errors. Memory, 19, 184-191.
Fazio, L. K., & Marsh, E. J. (2008a). Older, not younger, children learn more false facts
from stories. Cognition, 106, 1081-1089.
Fazio, L. K., & Marsh, E. J. (2008b). Slowing presentation speed increases, rather than
decreases, errors learned from fictional stories. Psychonomic Bulletin &
Review, 15, 180-185.
Fehim Kennedy, N., Senses, N., & Ayan, P. (2011). Grasping the social through
movies. Teaching in Higher Education, 16(1), 1-14.
Fernald, L. D. (1987). Of windmills and rope dancing: The instructional value of
narrative structures. Teaching of Psychology, 14(4), 214-216.
Gallo, D. A., McDermott, K. B., Percer, J. M., & Roediger, H. L., III. (2001). Modality
effects in false recall and false recognition. Journal of Experimental
Psychology: Learning, Memory, & Cognition, 27, 339-353.
Garry, M., & Gerrie, M. P. (2005). When photographs create false memories. Current
Directions in Psychological Science, 14, 321-325.
Geraerts, E., Bernstein, D. M., Merckelbach, H., Linders, C., Raymaekers, L., & Loftus, E.
F. (2008). Lasting false beliefs and their behavioral consequences.
Psychological Science, 19, 749-753.
Greene, E., Flynn, M. S., & Loftus, E. F. (1982). Inducing resistance to misleading
information. Journal of Verbal Learning & Verbal Behavior, 21, 207-219.
Kelley, C. M., & Lindsay, D. S. (1993). Remembering mistaken for knowing: Ease of
retrieval as a basis for confidence in answers to general knowledge
questions. Journal of Memory & Language, 32, 1-24.
Lindsay, D. S., & Johnson, M. K. (1989). The eyewitness suggestibility effect and
memory for source. Memory & Cognition, 17, 349-358.
Loftus, E. F. (1975). Leading questions and the eyewitness report. Cognitive
Psychology, 7, 560-572.
Loftus, E. F. (1979). Reactions to blatantly contradictory information. Memory &
Cognition, 7, 368-374.
Loftus, E. F. (2004). Memories of things unseen. Current Directions in Psychological
Science, 13, 145-147.
Loftus, E. F., & Pickrell, J. E. (1995). The formation of false memories. Psychiatric
Annals, 25, 720-725.
Marsh, E. J. (2004). Story stimuli for creating false beliefs about the world. Behavior
Research Methods, Instruments, and Computers, 36, 650-655.
Marsh, E. J., Balota, D. A., & Roediger, H. L., III. (2005). Learning facts from fiction:
Effects of healthy aging and dementia of the Alzheimer type.
Neuropsychology, 19, 115-129.
Marsh, E. J., & Fazio, L. K. (2006). Learning errors from fiction: Difficulties in
reducing reliance on fictional stories. Memory & Cognition, 34, 1140-1149.
Marsh, E. J., Meade, M. L., & Roediger, H. L. (2003). Learning facts from fiction.
Journal of Memory & Language, 49, 519-536.
Myers, J. L., O'Brien, E. J., Balota, D. A., & Toyofuku, M. L. (1984). Memory search
without interference: The role of integration. Cognitive Psychology, 16, 217-242.
Nelson, T. O., & Narens, L. (1980). Norms of 300 general-information questions:
Accuracy of recall, latency of recall, and feeling-of-knowledge ratings. Journal
of Verbal Learning & Verbal Behavior, 19, 338-368.
Posner, G. J., Strike, K. A., Hewson, P. W., & Gertzog, W. A. (1982). Accommodation of
a scientific conception: Toward a theory of conceptual change. Science
Education, 66, 211-227.
Prince, S. E., Tsukiura, T., & Cabeza, R. (2007). Distinguishing the neural correlates
of episodic memory encoding and semantic memory retrieval. Psychological
Science, 18, 144-151.
Rapp, D. N. (2008). How do readers handle incorrect information during reading?
Memory & Cognition, 36(3), 688-701.
Roediger, H. L., & McDermott, K. B. (2000). Tricks of memory. Current Directions in
Psychological Science, 9, 123-127.
Schwarz, N., Sanna, L. J., Skurnik, I., & Yoon, C. (2007). Metacognitive experiences and
the intricacies of setting people straight: Implications for debiasing and
public information campaigns. Advances in Experimental Social Psychology, 39,
127-161.
Tousignant, J. P., Hall, D., & Loftus, E. F. (1986). Discrepancy detection and
vulnerability to misleading postevent information. Memory & Cognition,
14(4), 329-338.
Tulving, E. (1985). How many memory systems are there? American Psychologist, 40,
385-398.
Wade, K. A., Garry, M., Read, J. D., & Lindsay, D. S. (2002). A picture is worth a
thousand lies: Using false photographs to create false childhood memories.
Psychonomic Bulletin & Review, 9, 597-603.