Concept Revision Tool

Supporting Written Knowledge Communication 1
Supporting Experts’ Written Knowledge Communication
Through Reflective Prompts on the Use of Specialist Concepts
Regina Jucks, Petra Schulte-Löbbert, & Rainer Bromme
University of Münster
Correspondence concerning this article should be addressed to
Regina Jucks
Psychological Institute III
Fliednerstr. 21, 48149 Muenster, Germany.
phone: +49 251 83 31337, fax +49 251 83 39105
E-mail: [email protected].
Abstract
Communicating expert knowledge to a lay addressee in writing is a demanding task that
requires a great deal of mental effort. This article reports on a study in which experts were
prompted to reflect either on a text they had produced (content focus condition) or on its
comprehensibility to a layperson (recipient focus condition). A software tool highlighted the
specialist terms or concepts used by the expert writers and guided the reflection process.
Subsequent to this reflection phase, writers had the opportunity to revise their texts. Experts
in the recipient focus condition significantly expanded their texts and made more meaningful
revisions. For example, they were more likely than experts in the content focus condition to
explain central concepts in their revision. Results are discussed from the perspective of
writing theories and in terms of their practical implications for written knowledge
communication.
Keywords: Knowledge communication, writing, text revision, audience awareness, text
comprehensibility, expert–layperson communication
Introduction
Mrs. Meyer, a consumer protection expert specializing in nutrition, receives a query
via e-mail. Paul, a humanities student, has some questions about genetically modified food. He asks how
genes and DNA can be studied and how methods like “gel electrophoresis” and “PCR
analysis” that he has read about in the newspaper work. Mrs. Meyer’s problem is that she
does not know Paul in person and has to guess at his level of understanding. As she starts
writing her e-mail, she realizes how difficult it is to adapt her response to Paul’s needs
without being able to ask him whether or not he follows. She wonders whether Paul really
knows very much about the specialist terms he has used (e.g., DNA and genetics—terms that
are also common in everyday communication). She thus has to draft her response without
knowing what needs to be explained in more detail or whether her answer is sufficient.
This is just one example of how written communication is becoming more important
with the increasing use of new media such as e-mail, chat, and instant messaging (Goldberg,
Russell, & Cook, 2003; MacArthur, 2006). The example also highlights the task demands
and cognitive demands faced by writers interacting with less knowledgeable others. The
activities of the “sender” in written interactions such as this have generally been studied
from the perspective of theories of oral communication (see, e.g., Clark, 1996). What has been
neglected, however, is that such written communication is also an act of writing (and reading).
Successful writing means conveying information to the reader through a written
artifact such that there is a high degree of overlap between the intended message and the
acquired message (Traxler & Gernsbacher, 1992). According to Traxler and Gernsbacher,
writers have to construct three different representations during the writing process. First,
they have to consider the intention of their text message (e.g., plan what they want to
communicate). Second, they need to bear the audience in mind (e.g., formulate their message
in a manner appropriate to their audience, and that allows their audience to grasp the
intended message). And third, they have to construct a representation of the text produced so
far and identify the discrepancies between it and their intended message to control the further
process of writing and revise the text produced so far. In the following, we briefly sketch the
task demands faced by experts interacting with laypersons by means of written
communication. We draw on two distinct strands of research: research on writing and
research on expert–layperson communication. Writing research has identified the specific
task demands and difficulties faced by writers seeking to convey information through written
texts; expertise research has addressed why it is difficult to communicate expert knowledge
to laypersons in any mediated setting. We conclude this introductory section by describing
methods to improve written knowledge communication by bringing together these two
strands of research.
The Writing Perspective on Written Knowledge Communication
Writing, independent of genre, is commonly seen as a complex process involving the
coordination of several skills (Rijlaarsdam et al., 2005). Writers also have to orchestrate
several task demands. The writing task guides the writer: it defines the topic, the audience,
and the genre (Alamargot & Chanquoy, 2001). In the example of Mrs. Meyer, for instance, it
is clear that the written response has to cover the topic of genetically modified food, that it should be in
the style of an e-mail, and that the reader is a layperson. Mrs. Meyer has to adapt her writing
to the demands of this specific writing task. In coordinating this information, she is engaged
in various cognitive activities. In their prominent model of writing, Hayes and Flower (1980)
distinguish three cognitive processes in the activity of writing: 1. planning, which means
constructing a mental representation of the message to be composed, 2. transforming, which
refers to the actual process of converting the intended message into text, and 3. revising,
which “involves identifying the discrepancies between intended text and instantiated text”
(Fitzgerald, 1987, p. 484). Although the processes of planning, transforming, and revising
are more recursive than linear (i.e., all of these strategies may occur at any time during the
writing process) and are regulated by higher-order monitoring processes, more planning
occurs at the beginning of the writing process and more revising at the end (Kellogg, 1994).
Bereiter and Scardamalia (1987) drew an important distinction between two writing
strategies depending on the expertise level of the writer: knowledge telling and knowledge
transforming. In a nutshell, knowledge telling means writing everything down as it comes to
the writer’s mind. It does not involve any elaborate monitoring or evaluating of text with
respect to coherence, for instance. Knowledge transforming is a more elaborate strategy that
also involves monitoring activities and the restructuring of knowledge through reflection on
the written text. There is an important distinction between content (what to say) and the
rhetorical problem space (how to say it). The interaction between these two spaces—the
reflective component of writing—is where knowledge transformation occurs; evaluating and
monitoring tend to be part of the revision process (Hayes, Flower, Schriver, Stratmann, &
Carey, 1987).
Although revision is considered a “natural” part of the writing process, many writers
find it difficult to revise their texts successfully. The difficulties are manifold, but two main
problems can be identified. First, writers often fail to make necessary meaningful changes to
their texts. For example, they may take a certain paragraph as given, although it is tangential
to their line of argument. They avoid making substantial changes, and instead revise their
text on the surface level by doing a kind of proofreading (van Gelderen & Oostdam, 2003).
Second, writers have difficulty in identifying errors and inconsistencies in their own texts
(Hacker, Plumb, Butterfield, Quarthamer, & Heineken, 1994). Based on their inaccurate
mental representations of their texts, they unconsciously correct inconsistencies between the
intended and the instantiated text during reading, and are unable to identify these
inconsistencies until they are pointed out explicitly (Kaup & Foss, 2005). This phenomenon,
which affects both experienced and inexperienced writers, is highly related to the writer’s
content knowledge (Kellogg, 2006).
Graham and Perin (2007) recently presented a meta-analysis examining the effects of
writing instruction for adolescents, distinguishing between treatments such as strategy
instruction, summarization, and peer assistance. Their results, together with those of Graham
(2006), suggest that strategy instruction, especially instruction designed to foster writers’
self-regulatory skills (e.g., procedural facilitation through prompts or feedback), can lead to
substantial improvement in writing, with large effect sizes being found in most cases. Before
outlining possible methods for enhancing the evaluation and revision of written texts, we
consider a matter of particular interest for the present study: the difficulties faced by experts
seeking to formulate lay-oriented explanations.
The Expert–Layperson Communication Perspective on Written Knowledge Exchange
It is essential for experts in various fields to be able to communicate their knowledge
to laypersons. However, the very expert–layperson knowledge gap that necessitates these
exchanges restricts understanding and causes communication problems (Bromme, Jucks, &
Runde, 2005). Not only do experts know more than laypersons, their domain-specific
knowledge is also structured differently (Sternberg & Horvath, 1999), which influences their
categorical perception from the outset (Feltovich, Prietula, & Ericsson, 2006). Training and
experience mold people’s perceptions of what is important for their work, leading them to
view it in a specific way (Boshuizen, Bromme, & Gruber, 2004; Smith, 1992). In sum,
experts and non-experts differ greatly in the way they perceive and represent information.
Professional knowledge establishes a certain perspective on a domain that might—in
interactions with a lay audience—conflict with the layperson’s perspective.
It can be assumed that experts’ extensive and highly integrated knowledge of their
field makes it difficult for them to appreciate a layperson’s rather different perspective, and
to overcome their immersion in their own knowledge. Keysar, Barr, and Horton (1998)
demonstrated that, even in everyday interactions, people erroneously use “privileged
information,” that is, information that is available only to them and not to their interlocutor.
When experts instruct non-experts, their expert knowledge can be regarded as privileged
information (which is why they are being asked to give advice in the first place). When
responding to laypersons’ queries, they have to draw on this privileged information. They
cannot simply communicate that information in the way it is represented in their minds,
however, but have to formulate new, addressee-oriented, and comprehensible responses.
Yet specialists are experts only in their own field; they often lack the skills necessary
to communicate their knowledge. Laypersons needing expert advice or explanations often
find that experts fail to express themselves in comprehensible terms, whether in face-to-face
interactions (e.g., in hospitals; Lerner, Jehle, Janicke, & Moscati, 2000) or in written
communication (e.g., in e-mail responses to queries submitted to health-related Internet sites;
Bromme et al., 2005; Jucks & Bromme, 2007). The use of technical terms and concepts is a
particular barrier to understandability (Jucks, Becker, & Bromme, in press). How can the
quality of experts’ responses to laypersons be improved?
Integrating the Two Strands of Research: Implications for Supporting Written Knowledge
Communication
Increasing writers’ awareness of the metacognitive and task-related activities
involved in the writing process would seem a promising way of improving written
knowledge communication (Graham & Perin, 2007). Experts seeking to communicate their
knowledge to laypersons must fulfill two task demands. First, they have to activate their
expert knowledge and use it to produce an appropriate response; second, they have to
compose their answers in a way that is comprehensible to a lay audience. In the terminology
of writing research, they need to match the instantiated text with both the intended message
and the specific needs and competencies of the intended audience. In the words of Flower
(1979), writers have to avoid writer-based writing, and instead produce reader-based text.
That is, they have to approach their writing with an open mind, seeing it from the reader’s
perspective, and choosing language that is suitable for the reader. Given these two task
demands, the writer’s awareness may be directed toward the intended content of the text
(i.e., the discrepancies between the intended text and the instantiated text) or toward the
intended audience.
Research has repeatedly demonstrated the effectiveness of making the writer aware
of the audience’s needs (Traxler & Gernsbacher, 1993). There is evidence that writers tend
to overestimate the comprehensibility of their texts, and that they find it difficult to take the
reader’s perspective (Hayes et al., 1987). A text might be coherent and comprehensible to an
expert reader, but incoherent to a layperson (see Vidal-Abarca, Gilabert, Gil, & Martínez,
2005). The latter point is particularly relevant in communication settings where the
knowledge gap is wide.
In the Traxler and Gernsbacher (1992) study, writers were given the task of
describing tangram figures. The quality of the descriptions written by participants provided
with minimal reader feedback was higher than that of participants in a control group, who
were prompted to reflect on the content of their descriptions. In a more recent study,
Holliway and McCutchen (2004) replicated Traxler and Gernsbacher’s findings for young
writers. The authors distinguished three kinds of feedback and interventions. All groups
received feedback about the accuracy of their descriptions. Additionally, participants in
group 2 rated others’ texts, and participants in group 3 were instructed to read their own texts
from the perspective of a potential reader (i.e., to take their audience’s perspective).
Significant improvements in text quality were observed only in group 3. Sato and
Matsushima (2006) reported similar results for texts describing geometrical figures. Thus,
enhancing writers’ awareness of what their readers may or may not understand facilitates the
production of audience-oriented texts and may be more effective than a focus on the content
as such. In most studies seeking to foster audience awareness in text production, revision is
prompted by real or fictitious reader feedback provided after the text is transmitted to the
reader (Schriver, 1992). In contrast, the present study tests an intervention that does not
involve reader feedback. Most communication in Internet-based settings such as that
sketched in the introduction entails only one turn (a layperson’s query and a single e-mail
response). More subtle ways are thus needed to enhance writers’ awareness of the content
and the audience when revising their written responses.
Research on writing and research on expert–layperson communication both indicate
that tailoring a message to a specific audience is often more difficult than the primary
demand of any writing process: putting a certain content (as the writer has it in mind) into
text form. To draw on Bereiter and Scardamalia's (1987) concepts of knowledge telling and
knowledge transforming, the message has to be adapted to a lay audience by a process of
knowledge transformation. Because a writing task also requires the activation of domain
knowledge, the focus on the audience can best be integrated into the revision rather than the
planning process (Roen & Willey, 1988).
Although Graham’s meta-analysis (2006) was restricted to long-term interventions
(of at least 3 days), there is evidence that minimal interventions can also enhance the
revision process (Wallace & Hayes, 1991; Wallace et al., 1996). In the following, we present
a software tool designed to enhance writers’ reflection on their text and thereby to prompt
revision with minimal intervention. The tool was inspired by previous studies demonstrating
the potential of computerized prompting (Reynolds & Bonk, 1996).
Rationale for the Present Study
The aim of the present study was to investigate whether domain experts (i.e., people
knowledgeable about a certain content area but not adept in explaining it to others) improve
their texts when prompted to reflect on them. We compared two types of prompts, both
asking experts to reflect on the terms and concepts used in their responses to a layperson’s
inquiry. One type of prompt asks writers to reflect on the content of their message (content
focus). The other asks writers to reflect on the needs of a layperson reader (recipient focus).
Because this intervention targets the revision phase of writing rather than the initial planning
phase, we expected the recipient focus to have a stronger impact than the content focus.
Moreover, because considering the recipient is the more demanding task, we expected the
recipient focus to be associated with longer revision times. Furthermore, we expected
participants in the recipient focus condition to produce longer texts and to make more
revisions that affected the meaning of the text than participants in the content focus
condition.
The Concept Revision Tool (CRT) is a Microsoft Access®-based application
designed to support written expert communication with laypersons. It is an adaptive tool that
analyzes the text produced with regard to occurrences of specialist concepts stored in a
database. Because the hotline scenarios we are interested in tend to be confined to well-defined
domains of knowledge (e.g., certain types of illnesses), such databases can be
compiled with reasonable effort. The software highlights these terms one at a time, and
writers are asked to answer questions (e.g., “Is the term XYZ familiar to your reader?”) or
evaluate statements (e.g., “The term XYZ is used correctly”) about each term on a 7-point
scale ranging from “totally false” to “totally true.” Figure 1 shows a screenshot of the rating
procedure. In this example, the writer has to rate three statements with respect to the concept
“genetic.” Concepts that have already been covered remain colored. When all concepts
stored in the database have been evaluated, the writer has the opportunity to revise the text.
-- INSERT FIGURE 1 ABOUT HERE --
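The CRT itself is a Microsoft Access application, but its core matching-and-prompting logic can be sketched in a few lines. In the sketch below, the concept list and statement wordings are illustrative stand-ins for the study's actual 133-entry database, not its real contents:

```python
import re

# Hypothetical miniature concept database; the real CRT dictionary held 133 entries.
CONCEPTS = ["polymerase chain reaction", "gel electrophoresis", "gene", "DNA"]

# Example prompts modeled on those quoted in the text.
STATEMENTS = [
    "Is the term {term} familiar to your reader?",   # recipient focus example
    "The term {term} is used correctly.",            # content focus example
]

def find_concepts(text, concepts=CONCEPTS):
    """Return the dictionary concepts that occur in the text, longest first,
    so that multi-word terms are matched before their components."""
    found = []
    for term in sorted(concepts, key=len, reverse=True):
        if re.search(r"\b" + re.escape(term) + r"\b", text, re.IGNORECASE):
            found.append(term)
    return found

def prompt_ratings(text):
    """For each highlighted concept, collect 7-point ratings
    (1 = totally false ... 7 = totally true) for every statement."""
    ratings = {}
    for term in find_concepts(text):
        ratings[term] = {s.format(term=term): None for s in STATEMENTS}
    return ratings
```

Word-boundary matching ensures that an entry such as "gene" is not highlighted inside "genetic", mirroring the tool's term-by-term presentation.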
The CRT focuses on concepts that can be represented by words (single nouns or
combinations of nouns). Concepts are an appropriate unit for reflecting on the content as
well as on the wording of a text, because they have to be represented by certain terms.
Concepts can be conceived as the building blocks of knowledge; they function not only as
means for communication but also as explanations, and are used for reasoning and
classification (Medin & Rips, 2005). Most of the concepts listed in the CRT database are
technical terms, but many important concepts in science and medicine can be expressed in
nontechnical language. Therefore, the database used in this study encompassed both
technical terms and everyday language (Bromme, Jucks, & Wagner, 2005).
Method
Participants
Thirty-two biology students participated in this experiment. On average, participants
were in their 3rd year of university education (M = 6.33 semesters, SD = 1.98) and between
21 and 26 years of age (M = 25.47, SD = 3.16); 53% were female. Participants reported
using computers for a mean of 11.77 hours per week (SD = 8.42), and most (86.7%) had
Internet access at home. With respect to advisory experience, 65.5% of
participants stated that they at least sometimes explained biological content to friends or
family members.
An assessment of biological and genetic knowledge was administered to ensure that
the participants had sufficient knowledge of genetic analysis, the topic to be explained to a
layperson in the experiment. The 23-item multiple-choice questionnaire was developed with
the assistance of a biotechnologist. The pass rate was set at 50%. Two participants who
failed to meet this criterion were excluded from the analyses. On average, the other
participants answered 80% of the items correctly (M = 17.90 out of 23 items; SD = 2.56). All
of the remaining 30 participants were native German speakers, with the exception of one participant
who had spoken German since early childhood. One participant did not indicate his or her
home language. Students were paid 8 euros for participation.
Material
The experiment was conducted within the CRT, a software tool developed to support
written expert communication with laypersons (Bromme & Jucks, 2002). The participants’
main task was to respond to a written inquiry from a fictitious layperson. With the help of a
biotechnologist, we constructed an inquiry about genetic analysis in the context of criminal
investigations (see Appendix) and compiled a CRT database of 133 specialist concepts. This
CRT dictionary contained terms likely to be used in responses to the layperson’s inquiry, all
of which were listed in the keywords index of a standard biotechnology textbook (Alberts et
al., 1999). Some of the terms are highly technical (e.g., “polymerase chain reaction”), others
are also used in everyday language (e.g., “gene”). The CRT automatically highlights the
words listed in the dictionary that are used in the expert’s response.
Design and Procedure
Experts were randomly assigned to one of two groups that differed only with regard
to the reflection phase (see below). The groups did not differ with respect to age, scores on
the knowledge assessment, number of semesters, all F(1,28) < 2.42, ns, or gender, χ²(1) =
.84, ns.
First, participants were given a brief introduction to the task. They were told that the
experimental setting simulated an “ask the expert” setting in an online forum on biological
topics and that their task was to answer a layperson’s e-mail inquiry. They were then asked
to produce a written response to the query without being informed that they would later have
the opportunity to revise their text. Once they had finished composing their response, experts
had to rate their use of terms listed in the CRT dictionary, with a focus either on the content
or on the recipient of the text. Participants in the content focus (CF) condition evaluated
three statements relating to the content of their text (e.g., “The term genetic is central to this
topic.”). Participants in the recipient focus (RF) condition evaluated three statements relating
to the recipient (e.g., “It is important that the reader knows what the term genetic means.”).
Table 1 lists the statements rated in each of the experimental conditions.
-- INSERT TABLE 1 ABOUT HERE --
Participants were then given the opportunity to revise their texts. Again, they were
given as much time as they needed. After finishing the writing task, participants were asked
about their Internet use, advisory experience, and domain knowledge. Finally, demographic
data were obtained.
Dependent Measures
Experts’ answers were analyzed with respect to the following variables.
Length of experts’ responses. Microsoft Word® was used to count the total number of
words in the experts’ original and revised responses.
Time spent on text revision. The time an expert spent revising his or her initial text
after the reflection phase was logged using time stamps within the CRT.
Revisions made. The experimental manipulation in the reflection phase was expected
to result in differences in the way participants revised their texts. The responses were thus
analyzed with regard to the revisions made. Rather than analyzing online revisions (cf.
Lindgren & Sullivan, 2006), we examined experts’ final texts to assess revisions prompted
by the CRT. We developed a multidimensional coding scheme based on established schemes
commonly used in revision research. Specifically, we integrated coding schemes focusing on
revisions at the linguistic level (Sommers, 1980) with schemes integrating the semantic level
of text modification (Allal, 2000, 2004; Faigley & Witte, 1984; Rouiller, 2004).
In a first step, the “compare documents” function in Microsoft Word® was used to
identify the revisions. The principal coder then categorized the revisions. She first decided
whether or not the revisions affected the meaning of the text. She then classified the revision
action (i.e., identified what the writer had done; e.g., correcting grammar or spelling errors).
Finally, she classified the range of the revision (i.e., whether the revision affected a single
word, a phrase, or a sentence). Each revision was thus characterized with respect to its type,
action, and range. Any change to a single unit (e.g., word or sentence level) was counted
only once. For the present purposes, we report only analyses examining the type of revision
(surface vs. meaningful changes and their subcategories) and do not present more detailed
analyses (e.g., of revision actions). These further differentiations were necessary to improve
rater accuracy and are not related to any of our hypotheses.
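The mechanical first step, locating revisions by comparing the original and revised texts, can be approximated with a standard word-level diff. The sketch below uses Python's difflib in place of Word's "compare documents" function; as in the study, classifying each revision as surface vs. meaningful remains a human coding decision:

```python
import difflib

def identify_revisions(original, revised):
    """Mechanical step only: list inserted, deleted, or replaced word spans.
    The surface-vs.-meaningful classification is left to human coders."""
    orig_words = original.split()
    rev_words = revised.split()
    revisions = []
    matcher = difflib.SequenceMatcher(a=orig_words, b=rev_words)
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op == "equal":
            continue
        revisions.append({
            "action": op,                          # 'insert', 'delete', or 'replace'
            "deleted": " ".join(orig_words[i1:i2]),
            "inserted": " ".join(rev_words[j1:j2]),
        })
    return revisions
```

Each diff hunk then corresponds to one candidate revision unit for the coders to categorize by type, action, and range.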
In the following, we describe the two main categories of revisions and their
subcategories in more detail.
Surface changes. A revision was classified as a surface change if it did not affect the
message of the passage (word, sentence, or section). Surface changes were subdivided into
formal changes and meaning-preserving changes. Formal changes pertain to spelling,
grammar, punctuation, and formatting. Meaning-preserving changes are modifications that
may influence readability but do not affect the meaning of the text (e.g., rearrangement of
words within a paragraph).
Meaningful changes. Revisions that affected the message of the text were classified
as meaningful changes (e.g., adding examples, providing explanations, or deleting passages).
We differentiated between meaningful changes involving rated concepts from the CRT
dictionary and other meaningful changes. Concept-related meaningful revisions might
involve providing further information about the concept, or changes like deleting or
substituting a term. Table 2 lists some examples of the revisions made by the experts in the
present study.
-- INSERT TABLE 2 ABOUT HERE --
Two additional coders analyzed the text corpus with respect to the different revision
categories. The coders were given detailed training and had a written guide at hand.
Interrater reliability was assessed by comparing the ratings of the principal coder with those
of one other coder; intraclass correlations (ICC) were calculated (Shrout & Fleiss, 1979).
The interrater reliability for identifying text modifications was good, with an overall
ICC(unjust, random) of .96 (with .91 for surface and .98 for meaningful changes).
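The coefficient follows Shrout and Fleiss (1979). Assuming the "unjust, random" label denotes the single-rater, two-way random-effects model with absolute agreement, i.e., ICC(2,1), the computation can be sketched from the ANOVA mean squares:

```python
def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single rater
    (Shrout & Fleiss, 1979). `scores` is a list of rows, one per coded
    unit, each row holding the two coders' ratings for that unit."""
    n = len(scores)          # number of coded units (targets)
    k = len(scores[0])       # number of raters
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(row[j] for row in scores) / n for j in range(k)]

    ss_total = sum((x - grand) ** 2 for row in scores for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))

    return (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
    )
```

With perfectly agreeing coders the coefficient is 1; rater disagreement in either column or error variance pulls it toward 0.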
Further Measures
Time spent composing the original response. The time an expert took to compose his
or her original response was logged using time stamps within the CRT.
Number of specialist concepts used. The number of specialist concepts listed in the
CRT dictionary that were used in the original response was counted (these concepts were
highlighted and assessed in the reflection phase).
Time spent reflecting on the highlighted concepts in the reflection phase. Again, time
stamps within the CRT were used to determine how long an expert took to reflect on the
specialist concepts that were highlighted in the reflection phase.
Results
Unless specified otherwise, all tests reported were one-tailed with an alpha set at .05.
The first step in our analysis was to ensure that the two groups did not differ significantly in
terms of their domain-specific knowledge (as reflected by their scores on the knowledge
assessment), Internet usage, and advisory experience. A multivariate analysis of variance (MANOVA)
with the independent variable experimental group and these three dependent variables
revealed no differences, F(3,25) = .65, ns.
Furthermore, we examined whether the two groups differed in terms of the original
responses produced. Because the original text was written before our intervention, no
differences were expected. We examined the original texts with respect to their length and
the time spent composing them. On average, the experts spent 17.56 minutes (SD = 8.79)
composing their original text, which was on average 271.70 words long (SD = 141.11; range:
69–608). A MANOVA with the independent variable revision focus and these two
dependent variables revealed no differences, F(2,27) = .28, ns.
Additionally, we analyzed whether the two groups differed in two variables related to
the reflection phase: (1) the number of concepts from the CRT dictionary used and (2) the
time spent reflecting on and rating the highlighted concepts. On average, experts used 14.60
concepts (SD = 7.34; range: 5–36) and spent 4.15 minutes (SD = 1.82) reflecting on their use
of those concepts and evaluating them on the three statements. A MANOVA with the
independent variable revision focus and these two dependent variables revealed no
differences, F(2,27) = .36, ns.
Thus, no differences were found between the two experimental conditions prior to the
text revision phase.
Length of experts’ responses. Experts varied widely in the amount of detail provided
in their revised texts. The revised responses ranged from 92 to 669 words in length. A
repeated measures ANOVA with the variable experimental group revealed a significant
main effect of time (i.e., original vs. revised response), F(1, 28) = 27.34, p < .01, η² = .49,
and a significant interaction effect of condition and time, F(1, 28) = 5.13, p < .05, η² = .16.
Both groups extended their answers during the revision phase but participants in the RF
condition wrote significantly longer revised versions (M = 330.10 words; SD = 146.93) than
participants in the CF condition (M = 271.93 words; SD = 132.04).
Time spent on text revision. Experts tended to spend only a few minutes revising their
original text (M = 4.71 minutes; SD = 4.97). However, we found a substantial difference
between the two conditions. Whereas experts in the CF condition spent an average of 3.17
minutes (SD = 3.24) on revision, experts in the RF condition took nearly twice as long (M =
6.25, SD = 5.96). A t-test for independent samples revealed a significant difference, t(28) = 1.76, p < .05, d = .64. This difference persisted when initial composition time was included
as a covariate, F(2,27) = 2.50, p < .05, η² = .10.
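As a consistency check, the reported t and d values can be reproduced from the group summaries, assuming n = 15 per condition (consistent with t(28)) and equal-variance pooling; a sketch:

```python
import math

def pooled_t_and_d(m1, sd1, n1, m2, sd2, n2):
    """Student's t for independent samples (df = n1 + n2 - 2) plus Cohen's d,
    computed from group summary statistics with a pooled standard deviation."""
    sp2 = ((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2)
    sp = math.sqrt(sp2)
    t = (m2 - m1) / (sp * math.sqrt(1 / n1 + 1 / n2))
    d = (m2 - m1) / sp
    return t, d
```

Plugging in the CF (M = 3.17, SD = 3.24) and RF (M = 6.25, SD = 5.96) revision times recovers t(28) = 1.76 and d = .64 as reported.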
Revisions made. Content analyses showed that experts differed to a great extent in the
number of text revisions made, ranging from 0 to 23 single revisions (M = 5.63, SD = 5.80).
Four participants did not make any changes to their original text (17% of the sample).
A repeated measures ANOVA revealed a significant interaction effect of the
between-subjects variable experimental group and the within-subjects variable type of
revision (surface vs. meaningful), F(1, 28) = 5.70, p < .05, η² = .17. Neither main effect
yielded significant differences, both F(1, 28) < 1.80, ns. Experts in the RF condition made
significantly more meaningful changes (M = 4.13, SD = 3.02) than surface changes (M =
3.00, SD = 2.65) to their texts, t(14) = 1.97, p < .05, d = .40. The same did not apply to the
CF condition, t(14) = -1,50, ns. Hence, experts in the RF condition made more meaningful
changes than experts in the CF condition, although the two groups did not differ in terms of
the absolute number of changes made. This finding indicates that the recipient focus
manipulation led to qualitative and not quantitative differences in text revision. Table 3 lists
all means and standard deviations of all text revisions.
-- INSERT TABLE 3 ABOUT HERE --
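The partial eta squared values reported in this section can likewise be recovered from the F statistics via the standard relation η²p = (F · df1) / (F · df1 + df2); a minimal illustrative sketch, not part of the original analysis:

```python
def partial_eta_squared(f: float, df1: int, df2: int) -> float:
    # Standard recovery of partial eta squared from an F statistic:
    # eta^2_p = (F * df1) / (F * df1 + df2)
    return (f * df1) / (f * df1 + df2)

# Reported above: interaction F(1, 28) = 5.70, eta^2 = .17.
eta = partial_eta_squared(5.70, 1, 28)
print(round(eta, 2))  # 0.17
```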
Are these qualitative differences between the conditions directly related to the experts’
reflection on specialist concepts? A repeated-measures ANOVA with the between-subjects
variable experimental group and the within-subjects variable type of meaningful
revision (rated CRT concept vs. other) revealed a significant main effect for experimental
group, F(1, 28) = 7.27, p < .01, η² = .21. No further differences were found, both F(1, 28) <
.30, ns. This finding, which is illustrated in Figure 2, persisted when the time spent on text
revision was included as a covariate, F(1, 27) = 3.78, p < .05, η² = .12.
-- INSERT FIGURE 2 ABOUT HERE --
Compared with experts in the content focus condition, experts who reflected on the recipient
orientation of their original response made more meaningful changes of both kinds: those
involving rated concepts from the CRT dictionary and other meaningful changes.
Discussion
Drawing on two strands of research—research on writing and research on expert–
layperson communication—we argued that experts must fulfill two main task demands when
communicating their knowledge to laypersons. First, they have to activate their expert
knowledge and use it to provide an appropriate response. Second, they have to compose their
answers in a way that is comprehensible to a lay audience. In the terminology of writing
research, they need to match the instantiated text with both the intended message and the
specific needs and competencies of the intended audience. Research has shown that the
layperson often “fades” into the background in written, Internet-based settings, and that
experts formulate their response in more technical terms than would be appropriate from the
reader’s point of view (Jucks, Bromme, & Runde, 2007).
In the present study, we manipulated whether experts were prompted to focus on the
content itself or on the recipient when reflecting on their use of specialist terms and
concepts. There was then an unspecific invitation to revise their original text; most of the
experts took this opportunity (only 4 of 30 experts made no revisions at all).
Our results showed that experts spent more time on revisions when prompted to
reflect on the audience orientation of their language than when prompted to focus on the
content of the text. Furthermore, experts in the recipient focus condition significantly
extended their texts during this time. Of course, longer texts are not necessarily better texts,
but we found that the revisions made by the two experimental groups differed in terms of
quality rather than quantity (i.e., the total number of revisions made). With regard to
meaningful revisions, which are of particular interest in this context, experts in the recipient
focus condition made more revisions than experts in the content focus condition. This
applied both to changes involving rated concepts from the CRT dictionary and to other
meaningful changes. In other words, experts in the recipient focus condition elaborated the
concepts in more detail and made more meaningful changes to the text than did experts in
the content focus condition.
The differences found are notable for several reasons. Given the relatively small sample
size (N = 30) and the considerable variation in participants’ text production and text revision
activity, the effects that emerged are of practical relevance. Additionally, the intervention can be considered
subtle: both groups worked in the same writing environment, and the experimental
manipulation influenced only the focus of reflection. Note that participants were given the
opportunity to revise their original text, but were not required to revise it. Exactly what they
were expected to change was not specified. The differences found between conditions can
therefore not be explained by different instructions on how or what to revise.
The present experiment provides useful insights into how the writing and revision
process can be supported in the context of written knowledge communication. A focus on
the recipient has a greater impact on text revision than reflection on the content itself.
Besides the specific effects that the recipient focus intervention had on the text revision
process, our findings provide some indication of how the CRT supports the writing activity.
Although it should be noted that revisions do not necessarily lead to qualitatively better texts
(Breetvelt, van den Bergh, & Rijlaarsdam, 1994), reflecting on one’s use of specialist
concepts with the support of the CRT seems to be a promising method to support written
knowledge communication. Regardless of the specific orientation during reflection, the
revision activity itself seems to be qualitatively improved by reflection. Whereas most
previous revision research has found that authors tend to remain at the surface level of error
detection and correction (e.g., Faigley & Witte, 1984), our findings showed a balanced
relationship between surface and meaningful text revisions—presumably because the focus
on specialist terms stimulates deeper reflection in both conditions. Of course, this
assumption remains speculative because our study did not include a control condition
without the CRT.
The results demonstrate the positive effects of reflective prompts on text revisions.
Further research is needed to determine whether such interventions also have long-term
effects on writing and revision strategies. The finding that meaningful changes were also
made to pieces of information that were not targeted in the reflection phase (concepts not
listed in the CRT dictionary) might be taken as an indicator for more general positive effects
of the Concept Revision Tool. Furthermore, research is needed to determine to what extent
the outcomes described impact laypersons’ understanding of experts’ texts. Bromme et al.
(2005) describe methods for assessing laypersons’ understanding of such texts using the
cloze procedure or questionnaires.
In conclusion, both Bereiter and Scardamalia (1987) and Clark and Brennan
(1991) identified the particular demands facing writers and recipients communicating in
written form. Scaffolds present in oral, face-to-face settings (e.g., nonverbal gestures; the
opportunity to interrupt and request clarification) are absent from written settings. Writers
often receive no feedback or external cues, and therefore have to self-regulate their writing,
as described by Bereiter and Scardamalia in their knowledge transforming model. Clearly,
written Internet-based communication is a demanding task that can benefit from external
support. As demonstrated in this paper, it might be fruitful to integrate a writing perspective
into future analyses of text-based, computer-mediated communication.
References
Alamargot, D., & Chanquoy, L. (2001). Studies in writing: Vol. 9. Through the models of
writing. Dordrecht, Netherlands: Kluwer Academic Publishers.
Alberts, B., Bray, D., Johnson, A., Lewis, J., Raff, M., Roberts, K., & Walter, P. (1999).
Lehrbuch der molekularen Zellbiologie [Essential cell biology: An introduction to the
molecular biology of the cell]. Weinheim: Wiley-VCH.
Allal, L. (2000). Metacognitive regulation of writing in the classroom. In A. Camps & M.
Millan (Eds.), Studies in writing: Vol. 6. Metalinguistic activity in learning to write (pp.
145-167). Amsterdam: Amsterdam University Press.
Allal, L. (2004). Integrated writing instruction and the development of revision skills. In L.
Allal, L. Chanquoy, & P. Largy (Eds.), Revision: Cognitive and instructional processes
(Vol. 13, pp. 139-156). Boston, MA: Kluwer Academic Publishers.
Bereiter, C., & Scardamalia, M. (1987). The psychology of written composition. Hillsdale,
NJ: Lawrence Erlbaum.
Boshuizen, H. P. A., Bromme, R., & Gruber, H. (2004). Professional learning: Gaps and
transitions on the way from novice to expert. Dordrecht: Kluwer Academic Publishers.
Breetvelt, I., Van den Bergh, H., & Rijlaarsdam, G. (1994). Relations between writing
processes and text quality: When and how. Cognition and Instruction, 12(2), 103-123.
Bromme, R., & Jucks, R. (2002). Rezipientenorientierung in der netzgestützten, schriftlichen
Kommunikation zwischen Experten und Laien [Recipient orientation in web-based,
written communication between experts and laypersons]. Unpublished DFG grant
application, Westfälische Wilhelms-Universität Münster.
Bromme, R., Jucks, R., & Runde, A. (2005). Barriers and biases in computer-mediated
expert-layperson communication: An overview and insights into the field of medical
advice. In R. Bromme, F.W. Hesse, & H. Spada (Eds.), Barriers and biases in
computer-mediated knowledge communication—and how they may be overcome (pp.
89-119). New York: Springer.
Bromme, R., Jucks, R., & Wagner, T. (2005). How to refer to “diabetes”? Language in
online health advice. Applied Cognitive Psychology, 19, 569-586.
Clark, H. H. (1996). Using language. Cambridge, UK: Cambridge University Press.
Clark, H. H., & Brennan, S. E. (1991). Grounding in communication. In L. B. Resnick, J. M.
Levine, & S. D. Teasley (Eds.), Perspectives on socially shared cognition (pp. 127-149). Washington, DC: APA Books.
Faigley, L., & Witte, S. P. (1984). Measuring the effects of revisions on text structure. In R.
Beach & L. Bridwell (Eds.), New directions in composition research (pp. 99-108). New
York: Guilford Press.
Feltovich, P. J., Prietula, M. J., & Ericsson, K. A. (2006). Studies of expertise from
psychological perspectives. In K. A. Ericsson, N. Charness, P. J. Feltovich, & R. R.
Hoffmann (Eds.), The Cambridge Handbook of Expertise and Expert Performance (pp.
41-68). New York: Cambridge University Press.
Fitzgerald, J. (1987). Research on revision in writing. Review of Educational Research, 57,
481-506.
Flower, L. (1979). Writer-based prose: A cognitive basis for problems in writing. College
English, 41(1), 19-37.
Goldberg, A., Russell, M., & Cook, A. (2003). The effect of computers on student writing: A
meta-analysis of studies from 1992 to 2002. Journal of Technology, Learning, and
Assessment, 2(1), 52.
Graham, S. (2006). Strategy instruction and the teaching of writing: A meta-analysis. In C.
A. MacArthur, S. Graham, & J. Fitzgerald (Eds.), Handbook of writing research (pp.
187-207). New York: The Guilford Press.
Graham, S., & Perin, D. (2007). A meta-analysis of writing instruction for adolescent
students. Journal of Educational Psychology, 99(3), 445-476.
Hacker, D. J., Plumb, C., Butterfield, E. C., Quathamer, D., & Heineken, E. (1994). Text
revision: Detection and correction of errors. Journal of Educational Psychology, 86, 65-78.
Hayes, J. R., & Flower, L. (1980). Identifying the organization of writing processes. In L. W.
Gregg & E. R. Steinberg (Eds.), Cognitive processes in writing (pp. 3-30). Hillsdale,
NJ: Lawrence Erlbaum.
Hayes, J. R., Flower, L., Schriver, K. A., Stratman, J. F., & Carey, L. (1987). Cognitive
processes in revision. In S. Rosenberg (Ed.), Advances in applied psycholinguistics,
Volume 2: Reading, writing, and language learning (pp. 176-240). Cambridge, UK:
Cambridge University Press.
Holliway, D. R., & McCutchen, D. (2004). Audience perspective in young writers’
composing and revising. In L. Allal, L. Chanquoy, & P. Largy (Eds.), Revision:
Cognitive and Instructional Processes (pp. 87-102). Dordrecht, Netherlands: Kluwer
Academic Publishers.
Jucks, R., Bromme, R., & Becker, B.-M. (in press). Lexical entrainment in written
discourse: Is experts' word use adapted to the addressee? Discourse Processes.
Jucks, R., & Bromme, R. (2007). Choice of words in doctor-patient communication: An
analysis of health-related Internet sites. Health Communication, 21, 289-297.
Jucks, R., Bromme, R., & Runde, A. (2007). Explaining with non-shared illustrations: How
they constrain explanations. Learning and Instruction, 17, 204-218.
Kaup, B., & Foss, D. J. (2005). Detecting and processing inconsistencies in narrative
comprehension. In D. T. Rosen (Ed.), Progress in Experimental Psychology Research
(pp. 67-90). Hauppauge, NY: Nova Science Publishers.
Kellogg, R. T. (1994). The psychology of writing. New York: Oxford University Press.
Kellogg, R. T. (2006). Professional writing expertise. In K. A. Ericsson, N. Charness, P.
Feltovich, & R. R. Hoffmann (Eds.), The Cambridge handbook of expertise and expert
performance (pp. 389-402). New York: Cambridge University Press.
Keysar, B., Barr, D. J., & Horton, W. S. (1998). The egocentric bias of language use:
Insights from a processing approach. Current Directions in Psychological Science, 7,
46-50.
Lerner, E. B., Jehle, D. V., Janicke, D. M., & Moscati, R. M. (2000). Medical
communication: Do our patients understand? American Journal of Emergency
Medicine, 18(7), 764-766.
Lindgren, E., & Sullivan, K. P. H. (2006). Analysing online revision. In K. P. H. Sullivan &
E. Lindgren (Eds.), Computer keystroke logging: Methods and applications (Vol. 18,
pp. 157-188). Oxford: Elsevier.
MacArthur, C. A. (2006). The effects of new technologies on writing and writing processes.
In C. A. MacArthur, S. Graham, & J. Fitzgerald (Eds.), Handbook of writing research
(pp. 248-274). New York: The Guilford Press.
Medin, D. L., & Rips, L. J. (2005). Concepts and categories: Memory, meaning and
metaphysics. In K. J. Holyoak & R. G. Morrison (Eds.), The Cambridge handbook of
thinking and reasoning (pp. 37-72). Cambridge, UK: Cambridge University Press.
Reynolds, T. H., & Bonk, C. J. (1996). Facilitating college writers’ revisions within a
generative-evaluative computerized prompting framework. Computers and
Composition, 13, 93-108.
Rijlaarsdam, G., Braaksma, M., Couzijn, M., Janssen, T., Kieft, M., Broekkamp, H., & van
den Bergh, H. (2005). Psychology and the teaching of writing in 8000 and some words.
Pedagogy: Learning for Teaching, BJEP Monograph Series II, 3, 127-153.
Roen, D. H., & Willey, R. J. (1988). The effects of audience awareness on drafting and
revising. Research in the Teaching of English, 22(1), 75-88.
Rouiller, Y. (2004). Metacognitive regulations, peer interactions and revision of narratives
by sixth-graders. In G. Rijlaarsdam, H. van den Bergh, & M. Conzijn (Eds.), Studies in
writing: Vol. 14. Effective teaching and learning writing: Current trends in research
(pp. 77-89). Amsterdam: Amsterdam University Press.
Sato, K., & Matsushima, K. (2006). Effects of audience awareness on procedural text
writing. Psychological Reports, 99, 51-73.
Schriver, K. A. (1992). Teaching writers to anticipate readers’ needs. Written
Communication, 9, 179-208.
Shrout, P. E., & Fleiss, J. L. (1979). Intraclass correlations: Uses in assessing rater
reliability. Psychological Bulletin, 86(2), 420-428.
Smith, M. U. (1992). Expertise and the organization of knowledge: Unexpected differences
among genetic counselors, faculty, and students on problem categorization tasks.
Journal of Research in Science Teaching, 29, 179-206.
Sommers, N. (1980). Revision strategies of student writers and experienced adult writers.
College Composition and Communication, 31, 378-388.
Sternberg, R. J., & Horvath, J. A. (Eds.) (1999). Tacit knowledge in professional practice.
Mahwah, NJ: Erlbaum.
Traxler, M. J., & Gernsbacher, M. A. (1992). Improving written communication through
minimal feedback. Language and Cognitive Processes, 7, 1-22.
Traxler, M. J., & Gernsbacher, M. A. (1993). Improving written communication through
perspective-taking. Language and Cognitive Processes, 8(3), 311-334.
van Gelderen, A., & Oostdam, R. (2003). Revision of form and meaning in learning to write
comprehensible text. In L. Allal, L. Chanquoy, & P. Largy (Eds.), Studies in writing:
Vol. 13. Revision: Cognitive and instructional processes (pp. 103-123). Amsterdam:
Kluwer Academic Publishers.
Vidal-Abarca, E., Gilabert, R., Gil, L., & Martínez, T. (2005). How to make good texts for
learning: Reviewing text revision research. In A. V. Mitel (Ed.), Focus on educational
psychology (pp. 277-306). New York: Nova Science Publishers.
Wallace, D. L., & Hayes, J. R. (1991). Redefining revision for freshmen. Research in the
Teaching of English, 25(1), 54-66.
Wallace, D. L., Hayes, J. R., Hatch, J. A., Miller, W., Moser, G., & Murphy Silk, C. (1996).
Better revision in eight minutes? Prompting first-year college writers to revise globally.
Journal of Educational Psychology, 88, 682-688.
Author Note
The authors are thankful to Benjamin Niedergassel for professional advice on genetics and to
Susannah Goss for native speaker advice. This research was funded by the German Research
Foundation (DFG).
Table 1:
Manipulated Reflection Items (Translation and German Original) for the Two Experimental
Groups
Content focus condition:
1. The term XYZ is central to this topic.
   (Der Begriff XYZ ist zentral für dieses Thema.)
2. My use of the term XYZ is technically correct.
   (Ich habe den Begriff XYZ fachlich korrekt verwendet.)
3. I give enough detail on the term XYZ.
   (Ich bin auf den Begriff XYZ ausführlich genug eingegangen.)

Recipient focus condition:
1. It is important that the reader knows what the term XYZ means.
   (Es ist wichtig, dass die Person weiß, was der Begriff XYZ bedeutet.)
2. For this person, the term XYZ is explained in comprehensible enough terms.
   (Für diese Person ist der Begriff XYZ verständlich genug erklärt.)
3. For this person, the term XYZ is explained in enough detail.
   (Für diese Person ist der Begriff XYZ ausführlich genug erklärt.)
Table 2:
Examples of Revisions of the Different Categories (Translation and German Original)
Surface changes (formal):
  Original: The hereditary information is stored in a molecule, the DNA.
  [Die Erbinformation ist in ein Molekül, der DNS, gespeichert.]
  Revised: This hereditary information is stored in a molecule, the DNA.
  [Diese Erbinformation ist in ein Molekül, der DNS, gespeichert.]

Surface changes (meaning-preserving):
  Original: That is why genes are called the blueprint for the organism.
  [Die Gene werden deswegen auch als Bauplan des Organismus bezeichnet.]
  Revised: That is why genes are called the blueprint for life.
  [Die Gene werden deswegen auch als Bauplan des Lebens bezeichnet.]

Meaningful changes (concepts from the CRT dictionary):
  Original: The proteins and especially the DNA are extracted from these mucosal cells.
  [Aus diesen Schleimhautzellen werden die Proteine, und vor allem die DNA extrahiert.]
  Revised: The proteins and especially the DNA are extracted from these mucosal cells.
  (The genetic information is located on the DNA. It is a kind of “ID card.”)
  [Aus diesen Schleimhautzellen werden die Proteine, und vor allem die DNA extrahiert.
  (Auf der DNA findet sich das Erbgut eines Menschen, es ist also eine Art "Personalausweis".)]

Meaningful changes (other):
  Original: If this procedure is repeated, then …
  [Wird dieser Vorgang wiederholt, wird...]
  Revised: If this procedure is repeated (in a chain reaction, which is why it is called a chain reaction), then …
  [Wird dieser Vorgang wiederholt (in einer Kettenreaktion, daher Chain Reaction!), wird...]
Table 3:
Descriptive Statistics for the Revisions of Different Categories

                                     Content focus      Recipient focus
                                     condition          condition
Variable                             M (SD)             M (SD)
Surface changes in total             2.73 (4.37)        3.20 (2.91)
  Formal                             1.47 (2.77)        1.13 (1.13)
  Meaning-preserving                 1.27 (2.02)        2.07 (2.40)
Meaningful changes in total          1.60 (2.03)        3.80 (3.08)
  Rated concepts from the
  CRT dictionary                      .73 (.96)         2.27 (1.67)
  Other                               .87 (1.55)        1.80 (2.86)
Figure Captions
Figure 1: Screenshot of the reflection phase – recipient focus condition. The specialist
concepts used in the expert’s answer are highlighted. The expert is asked to rate three
statements (on the right) for each specialist concept. Specialist concepts that have already
been rated (here: “PCR”, “DNA”, and “hereditary”) remain colored; the current concept (in
its different grammatical forms, here “genetic”) is highlighted throughout the text.
Figure 2: Meaningful revisions of the original texts.
Figure 1
[Text of the expert’s answer as displayed in the screenshot:]
PCR is commonly used in medical and biological research labs for a variety
of tasks, such as the detection of hereditary diseases, the identification of
genetic fingerprints, the diagnosis of infectious diseases, the cloning of
genes, paternity testing, and DNA computing. PCR is used to amplify
specific regions of a DNA strand. This can be a single gene, just a part of a
gene, or a non-coded sequence. The DNA is a nucleic acid that contains
the genetic instructions for the development and function of living
organisms. All known cellular life and some viruses contain DNA. The main
role of DNA in the cell is the long term storage of information. It is often
compared to a blueprint, since it contains the instructions to construct other
components of the cell, such as proteins and RNA molecules.
Genetic fingerprinting is a forensic technique used to identify a person by
comparing his or her DNA with a given sample. An example is blood from a
crime scene being genetically compared to blood from a suspect. The
sample may contain only a tiny amount of DNA (obtained from a source
such as blood, semen, saliva, hair, or other organic material). Theoretically,
just a single strand is needed. First, one breaks the DNA sample into
fragments; then amplifies them using PCR. The amplified fragments are
then separated using gel electrophoresis. The overall layout of the DNA
fragments is called a DNA fingerprint. Since there is a very tiny possibility
that two individuals may have the same sequences (one in several million),
Figure 2
[Bar chart: mean number of meaningful revisions (y-axis, 0.00 to 2.50) by experimental
condition (content focus vs. recipient focus) for two revision categories: rated concepts
from the CRT dictionary and other meaningful changes.]
Appendix: English Translation of the Layperson’s Inquiry
There has been a lot in the papers lately about offenders being convicted on the basis of
genetic tests. Unfortunately, it is never specified how exactly this works. It is usually only
mentioned very briefly. That is why I have some questions. For example, I have read about
“mass genetic tests,” where saliva samples are taken from a bunch of people and tested. I am
interested in how exactly the genes or DNA are analyzed. I have heard of “PCR analysis”
and “gel electrophoresis,” for example.
What I’m wondering is how do you get hold of the genes or the hereditary information in the
first place? What exactly are these methods of analysis and how do they work?