Science Communication
http://scx.sagepub.com
Improving the Clarity of Journal Abstracts in Psychology: The Case for Structure
James Hartley
Science Communication 2003; 24; 366
DOI: 10.1177/1075547002250301
The online version of this article can be found at:
http://scx.sagepub.com/cgi/content/abstract/24/3/366
Downloaded from http://scx.sagepub.com at PENNSYLVANIA STATE UNIV on February 6, 2008
© 2003 SAGE Publications. All rights reserved. Not for commercial use or unauthorized distribution.
Improving the Clarity of Journal
Abstracts in Psychology
The Case for Structure
JAMES HARTLEY
Keele University
Background. Previous research with structured abstracts has taken place in mainly medical
contexts. This research indicated that such abstracts are more informative, more readable, and
more appreciated by readers than are traditional abstracts.
Aim. The aim of this study was to test the hypothesis that structured abstracts might also be
appropriate for a particular psychology journal.
Method. Twenty-four traditional abstracts from the Journal of Educational Psychology were
rewritten in a structured form. Measures of word length, information content, and readability
were made for both sets of abstracts, and forty-eight authors rated their clarity.
Results. The structured abstracts were significantly longer than the original ones, but they were
also significantly more informative and readable and judged significantly clearer by these academic authors.
Conclusions. These findings support the notion that structured abstracts could be profitably
introduced into psychology journals.
Keywords: abstracts; structured writing; information clarity; readability
Author’s Note: I am grateful to Geoff Luck for scoring the abstract checklist, James Pennebaker
for the Linguistic Inquiry and Word Count data, and authors from the Journal of Educational
Psychology who either gave permission for me to use their abstracts or took part in this inquiry.
Address correspondence to Professor James Hartley, Department of Psychology, Keele University, Staffordshire, ST5 5BG, UK; phone: 011 44 1782 583383; fax: 011 44 1782 583387;
e-mail: [email protected]; Web site: http://www.keele.ac.uk/depts/ps/jhabiog.htm.
Science Communication, Vol. 24 No. 3, March 2003 366-379
DOI: 10.1177/1075547002250301
© 2003 Sage Publications
Readers of this article will have already noted that the abstract that precedes
it is set in a different way from that normally used in Science Communication
(and, indeed, in many other journals in the social sciences). The abstract for
this article is written in what is called a structured format. Such structured
abstracts typically contain subheadings, such as Background, Aim(s),
Method(s), Results, and Conclusions, and provide more detail than traditional ones. It is the contention of this article that structured abstracts represent an improvement over traditional ones: not only do they present more information, but their format also requires authors to organize and present that information in a systematic way, one that aids rapid search
and information retrieval when looking through abstract databases (Hartley,
Sydes, and Blurton 1996).
The growth of structured abstracts in the medical sciences has been phenomenal (Harbourt, Knecht, and Humphries 1995), and they are now commonplace in almost all medical research journals. Furthermore, their use is
growing in other scientific areas and, indeed, in psychology itself. In January
1997, for instance, the British Psychological Society (BPS) introduced structured abstracts into four of its eight journals (the British Journal of Clinical
Psychology, the British Journal of Educational Psychology, the British Journal of Health Psychology, and Legal and Criminological Psychology). In
addition, since January 2000, the BPS has required authors to send conference submissions in this structured format, and it has dispensed with the need
for the three- to four-page summaries previously required. These structured
abstracts are published in the Conference Proceedings (e.g., see BPS 2001,
2002).
The case for using structured abstracts in scientific journals has been bolstered by research, most of which has taken place in a medical or a psychological context. The main findings suggest that compared with traditional ones,
structured abstracts
• contain more information (Hartley 1999a; Hartley and Benjamin 1998; Haynes
1993; McIntosh 1995; McIntosh, Duc, and Sedin 1999; Mulrow, Thacker, and
Pugh 1988; Taddio et al. 1994; Trakas et al. 1997);
• are easier to read (Hartley and Benjamin 1998; Hartley and Sydes 1997) and to
search (Hartley, Sydes, and Blurton 1996), although some authors have questioned this (Booth and O’Rourke 1997; O’Rourke 1997);
• are possibly easier to recall (Hartley and Sydes 1995);
• facilitate peer review for conference proceedings (Haynes et al. 1990;
McIntosh 1995; McIntosh, Duc, and Sedin 1999); and
• are generally welcomed by readers and by authors (Hartley and Benjamin
1998; Haynes 1993; Haynes et al. 1990; Taddio et al. 1994).
However, there have been some qualifications. Structured abstracts
• take up more space (Harbourt, Knecht, and Humphries 1995; Hartley 2002),
• sometimes have confusing typographic layouts (Hartley 2000b), and
• may be prone to the same sorts of omission and distortion as are traditional
abstracts (Froom and Froom 1993; Hartley 2000a; Pitkin and Branagan 1998;
Pitkin, Branagan, and Burmeister 1999; Siebers 2000, 2001).
Some authors, and editors too, complain that the formats for structured abstracts are too rigid and present authors with a straitjacket that is not appropriate for every journal article. Undoubtedly, this may be true in some
circumstances, but it is in fact remarkable how the subheadings used in the
abstract for this article can cover a variety of research styles. Most articles—
even theoretical and review ones—can be summarized under these five subheadings. Furthermore, if readers care to examine current practice in the BPS
journals and in their Conference Proceedings, and elsewhere, they will find
that although the subheadings used in this present article are typical, they are
not rigidly adhered to. Editors normally allow their authors some leeway in
the headings that they wish to use.
In this article, I report the results of a study designed to see whether it
might be helpful to use structured abstracts in one particular social science
journal, namely the Journal of Educational Psychology (JEP). Here, the
abstracts are typically longer and more informative than those presented in
Science Communication, and the authors are told that the abstracts for empirical articles should describe the problem under investigation; the participants
or subjects, specifying pertinent characteristics such as number, type, and
age; the experimental method, including the data-gathering procedures and
test names; the findings, including statistical significance levels; and the conclusions and implications or applications (American Psychological Association [APA] 2001, 14). And all of this is to be done in 120 words!
Method
Choosing and Creating the Abstracts
Twenty-four traditional abstracts were chosen (with permission of the
authors) from volume 92 (2000) of JEP by selecting every fourth one available. Twenty-two of these abstracts reported the results from typical empirical studies, and two reported the findings from research reviews. Three of the
empirical abstracts contained the results from two or more separate studies.
Structured versions of these twenty-four abstracts were then prepared by
the present author. This entailed reformatting the originals and including the
necessary additional information obtained from the article to complete the
text for five subheadings—Background, Aim(s), Method(s), Results, and
Conclusions. And because structured abstracts are typically longer than traditional ones, a word limit of 200 words was imposed (as opposed to the 120
words specified by APA’s Publication Manual, 5th ed.). Table 1 provides an
example of the effects of applying these procedures to the abstract of a review
article.
Measures
Two sets of objective computer-based measures and two different subjective reader-based measures were then made using these two sets of abstracts.
The two sets of computer-based measures were derived from (1) Microsoft’s
package, Office 97, and (2) Pennebaker’s Linguistic Inquiry and Word Count
(LIWC) (Pennebaker, Francis, and Booth 2001). Office 97 provides a number
of statistics on various aspects of written text. LIWC counts the percentage of
words in seventy-one different categories (e.g., cognitive, social, and personal words). (Note: when making these computer-based measures, the subheadings
were removed from structured versions of the abstracts.)
The two reader-based measures were (1) the average scores on ratings of
the presence or absence of information in the abstracts and (2) the average
scores on ratings of the clarity of the abstracts given by authors of other articles in JEP. The items used for rating the information content are shown in the
appendix. It can be seen that respondents have to record a “yes” response (or
not) to each of fourteen questions. Each abstract was awarded a total score
based on the number of “yes” decisions recorded. In this study, two raters
independently made these ratings for the traditional abstracts and then met to
agree their scores. The ratings for the structured abstracts were then made by
adding in points for the extra information used in their creation.
The ratings of abstract clarity were made independently by forty-six
authors of articles in JEP from the year 2000 (and by two more authors of
articles in other educational journals). Each author was asked (by letter or
e-mail) to rate one traditional and one structured abstract for clarity (on a
scale of 0 to 10, where 10 was the highest score possible). To avoid bias, none
of these authors were personally known to the investigator, and none were the
authors of the abstracts used in this inquiry.
Forty-eight separate pairs of abstracts were created, each with a traditional
version of one abstract and a structured version of a different one.
TABLE 1
A Traditional Abstract and a Structured
Abstract for a Review Article
Traditional abstract
Incidental and informal methods of learning to spell should replace more traditional and
direct instructional procedures, according to advocates of the natural learning approach.
This proposition is based on two assumptions: (1) spelling competence can be acquired
without instruction, and (2) reading and writing are the primary vehicles for learning to spell.
There is only partial support for these assumptions. First, very young children who receive
little or no spelling instruction do as well as their counterparts in more traditional spelling
programs, but the continued effects of no instruction beyond first grade are unknown. Second,
reading and writing contribute to spelling development, but their overall impact is relatively
modest. Consequently, there is little support for replacing traditional spelling instruction
with the natural learning approach.
Structured abstract
Background. Advocates of the “natural learning” approach propose that incidental
and informal methods of learning to spell should replace more traditional and direct
instructional procedures.
Aim. The aim of this article is to review the evidence for and against this proposition,
which is based on two assumptions: (1) spelling competence can be acquired without
instruction, and (2) reading and writing are the primary vehicles for learning to spell.
Method. A narrative literature review was carried out of more than fifty studies related to
these topics with school students, students with special needs, and older students.
Results. The data suggest that there is only partial support for these assumptions.
First, very young children who receive little or no spelling instruction do as well as their
counterparts in more traditional spelling programs, but the continued effects of no
instruction beyond the first grade are unknown. Second, reading and writing contribute to
spelling development, but their overall impact is relatively modest.
Conclusions. There is little support for replacing traditional spelling instruction with the
natural learning approach.
NOTE: The traditional abstract is reproduced with permission of the author and the American
Psychological Association.
Twenty-four of these pairs had the traditional abstracts first and twenty-four the structured ones first. The fact that the abstracts in each pair were on different topics was deliberate. This was done to ensure that no order effects would arise
from reading different versions of the same abstract (as has been reported in
previous studies, e.g., Hartley and Ganier 2000). The forty-eight pairs of
abstracts were created by pairing each one in turn with the next one in the list,
with the exception of the ones for the two research reviews that were paired
together.
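The pairing procedure described above can be sketched in Python. The exact scheme is not fully specified in the text (in particular, how the two review abstracts were handled), so this is one plausible reading, and the function name is hypothetical: each abstract is paired with the next one in the list, and each pairing is presented in both orders, yielding forty-eight pairs from twenty-four abstracts.

```python
def make_abstract_pairs(abstracts):
    """Pair each abstract with the next one in the list (wrapping at
    the end), one in its traditional form and one in its structured
    form, and present each pairing in both orders.

    With 24 abstracts this yields 48 pairs: 24 with the traditional
    abstract first and 24 with the structured one first.
    """
    n = len(abstracts)
    pairs = []
    for i in range(n):
        a, b = abstracts[i], abstracts[(i + 1) % n]
        # Traditional version of one abstract with the structured
        # version of a *different* one, in both presentation orders.
        pairs.append((("traditional", a), ("structured", b)))
        pairs.append((("structured", b), ("traditional", a)))
    return pairs
```

Pairing different topics, rather than two versions of the same abstract, avoids the order effects reported in earlier studies.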
TABLE 2
The Average Scores and Standard Deviations for the Traditional and
the Structured Abstracts on the Main Measures Used in This Study

                                     Traditional        Structured
                                     Format (n = 24)    Format (n = 24)
                                     M       SD         M       SD       Paired t   p value
Data from Microsoft's Office 97
  Abstract length (in words)         133     22         186     15       17.10      < .001
  Average sentence lengths           24.6    8.3        20.8    3.0      2.48       < .02
  Percentage of passives             32.7    22.8       23.7    17.3     1.58       ns
  Flesch Reading Ease score          21.1    13.7       31.1    12.1     5.23       < .001
Data from Pennebaker's Linguistic
Inquiry and Word Count
  Use of longer words                40.0    5.3        35.8    4.6      4.69       < .001
  Use of common words                57.7    8.6        61.1    6.3      3.43       < .01
  Use of present tense               2.7     2.8        4.1     1.9      2.90       < .01
Reader-based measures
  Information checklist score        5.5     1.0        9.7     1.4      13.72      < .001
  Clarity ratings                    6.2     2.0        7.4     2.0      3.22       < .01
Results
Table 2 shows the main results of this inquiry. It can be seen that, except for the percentage of passives used, the structured abstracts differed significantly from the traditional ones on all of the measures reported here.
Discussion
To some extent, these results speak for themselves and, in terms of this
article, provide strong support for structured abstracts. But there are some
qualifications to consider.
Abstract Length
The structured abstracts were, as expected, longer than the traditional
ones. Indeed, they were approximately 30 percent longer, which is 10 percent
more than the average 20 percent increase in length reported by Hartley
(2002) for nine studies. It is interesting to note, however, that the average
length of the traditional abstracts was also longer than the 120 words specified by the APA. Eighteen (i.e., 75 percent) of the twenty-four authors of the
traditional abstracts exceeded the stipulated length.
Hartley (2002) argued that the extra space required by introducing structured abstracts was a trivial amount for most journals, amounting at the most
to three or four lines of text. In many journals, new articles begin on right-hand pages, and few articles finish exactly at the bottom of the previous left-hand one. In other journals, such as Science Communication, new articles
begin on the first left- or right-hand page available, but even here articles
rarely finish at the bottom of the previous page. (Indeed, inspecting the pages
in this issue of this journal will probably show that the few extra lines
required by structured abstracts can be easily accommodated.) Such concerns, of course, do not arise for electronic journals and databases.
More important, in this section, we need to consider cost-effectiveness
rather than just cost. With the extra lines comes extra information. It may be
that more informative abstracts might encourage wider readership, greater
citation rates, and higher journal impact factors—all of which authors and
editors might think desirable. Interestingly enough, McIntosh, Duc, and
Sedin (1999) suggested that both the information content and the clarity of
structured abstracts can still be higher than that obtained in traditional
abstracts, even if they are restricted to the length of traditional ones.
Abstract Readability
Table 2 shows the Flesch Reading Ease scores for the traditional and the
structured abstracts obtained in this inquiry. Readers unfamiliar with Flesch
scores might like to note that they range from 0 to 100 and are subdivided as
follows: 0 to 29, college graduate level; 30 to 49, grade 13 to 16 (i.e., eighteen
years and older); 50 to 59, grade 10 to 12 (i.e., fifteen to seventeen years); and
so forth—and that they are based on a formula that combines, with a constant, measures of average sentence length and the number of syllables per word (Flesch 1948;
Klare 1963). Of course, it is possible that the finding of a significant difference in favor of the Flesch scores for the structured abstracts in this study
reflects the fact that the present author wrote all of the structured abstracts.
However, since this finding has also occurred in other studies in which the
abstracts have been written by different authors (e.g., see Hartley and
Benjamin 1998; Hartley and Sydes 1997), this finding is a relatively stable
one.
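For readers who wish to compute the measure themselves, the Flesch (1948) formula is RE = 206.835 − 1.015 × (words per sentence) − 84.6 × (syllables per word). A minimal Python sketch follows; note that the syllable counter here is a rough vowel-group heuristic for illustration, not the more careful count used by commercial packages such as Office 97, so scores may differ slightly from those reported in Table 2.

```python
import re

def count_syllables(word):
    # Rough heuristic: one syllable per run of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    """Flesch (1948) Reading Ease; higher scores mean easier text.

    RE = 206.835 - 1.015 * (words/sentences) - 84.6 * (syllables/words)
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))
```

Short sentences of short words score high; long sentences of polysyllabic words can score at or below zero, which is why abstracts dense with technical vocabulary often fall in the "college graduate" band.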
The Flesch Reading Ease score is of course a crude—as well as dated—
measure, and it ignores factors affecting readability such as type size, type
face, line length, and the effects of subheadings and paragraphs, as well as
readers’ prior knowledge. Nonetheless, it is a useful measure for comparing
different versions of the same texts, and Flesch scores have been quite widely
used—along with other measures—for assessing the readability of journal
abstracts (e.g., see Dronberger and Kowitz 1975; Hartley 1994; Hartley and
Benjamin 1998; Roberts, Fletcher, and Fletcher 1994; Tenopir and Jacso
1993).
The gain in readability scores found for the structured abstracts in this
study came, no doubt, from the fact that the abstracts had significantly shorter
sentences and, as the LIWC data showed, made a greater use of shorter words.
The LIWC data also showed that the structured abstracts contained significantly more common words and made a significantly greater use of the present tense. These findings seem to suggest that it is easier to provide information when writing under subheadings than it is when writing in a continuous
paragraph.
Such gains in readability should not be dismissed lightly, for a number of
studies have shown that traditional abstracts are difficult to read. Tenopir and
Jacso (1993), for instance, reported a mean Flesch score of 19 for more than
three hundred abstracts published in APA journals. (The abstract to this article has a Flesch score of 26 when the subheadings are excluded.)
Interestingly enough, there were no significant differences in the percentage of passives used in the two forms of abstracts studied in this article. This
finding is similar to one that we found when looking at the readability of well-known and less well-known articles in psychology (Hartley, Sotto, and
Pennebaker 2002). The view that scientific writing involves a greater use of
passives, the third person, and the past tense is perhaps more of a myth than
many people suspect (see, e.g., Kirkman 2001; Riggle 1998; Swales and
Feak 1994). Indeed, the APA (2001) Publication Manual states, “Verbs are
vigorous, direct communicators. Use the active rather than the passive voice,
and select tense or mood carefully” (p. 41).
Information Content
The scores on the information checklist showed that the structured
abstracts contained significantly more information than did the traditional
ones. This is hardly surprising, given the nature of structured abstracts, but it
is important. Analyses of the information gains showed that most of the
increases occurred on questions 1 (50 percent), 3 (83 percent), 5 (63 percent),
and 12 (63 percent). Thus, it appears that in these abstracts, more information
was given on the reasons for making the study, where the participants came
from, the sex distributions of these participants, and the final conclusions
drawn.
These findings reflect the fact that few authors in American journals seem
to realize that not all of their readers will be American and that all readers
need to know the general context in which a study takes place to assess its relevance for their needs. Stating the actual age group of participants is also
helpful because different countries use different conventions for describing
people of different ages. The word student, for instance, usually refers to
someone studying in tertiary education in the United Kingdom, whereas the
same word is used for very young children in the United States.
Although the checklist is a simple measure (it gives equal weight to each item and is not suitable for review papers), it is nonetheless clear from
the results that the structured abstracts contained significantly more
information than the original ones and that this can be regarded as an advantage for such abstracts. Advances in “text mining,” “research profiling,” and
computer-based document retrieval will be assisted by the use of such more
informative abstracts (Blair and Kimbrough 2002; Pinto and Lancaster 1999;
Porter, Kongthorn, and Lu 2002; Wilczynski et al. 1995).
Abstract Clarity
In previous studies of the clarity of abstracts (e.g., Hartley 1999a; Hartley
and Ganier 2000), the word clarity was not defined, and respondents were
allowed to respond as they thought fit. In this present study, the participants
were asked to “rate each of these abstracts out of 10 for clarity (with a higher
score meaning greater clarity).” This was followed by the explanation, “If
you have difficulty with what I mean by ‘clarity,’ the kinds of words I have in
mind are: ‘readable,’ ‘well-organized,’ ‘clear,’ and ‘informative.’ ” (This
phraseology was based on wording used by a respondent in a previous study
who had explained what she had meant by “clarity” in her ratings.) Also in
this present study, as noted above, the participants were asked to rate different
abstracts rather than the same abstract in the different formats. However, the
mean ratings obtained here of 6.2 and 7.4 for the traditional abstracts and the
structured ones, respectively, closely match the results of 6.0 and 8.0
obtained in the previous studies. Nonetheless, because the current results are
based on abstracts in general rather than on different versions of the same
abstract, these findings offer more convincing evidence for the superiority of
structured abstracts in this respect.
Finally, in this section, we should note that several of the respondents took
the opportunity to comment on the abstracts that they were asked to judge.
Table 3 contains a selection from these remarks.
TABLE 3
Some Comments Made by Judges on the Clarity of the
Pairs of Abstracts That They Were Asked to Judge
Preferences for the traditional abstracts
My ratings are 2 for the structured abstract and 1 for the traditional one. Very poor abstracts.
I have read the two abstracts that you sent for my judgment. I found the first one (traditional)
clearer than the second (structured) one. I would give the first about 9 and the second
about 8. Please note, however, that I believe that my response is affected more by the
writing style and content of the abstracts than by their organization. I would have felt
more comfortable comparing the two abstracts if they were on the same topic.
The first (structured) one was well organized, and the reader can go to the section of interest,
but the meaning of the abstract is broken up (I give it 8). The second (traditional) abstract
flowed more clearly and was more conceptual (I give it 10).
I rate the first (structured) abstract as a 7 and the second (traditional) one as an 8. I prefer the
second as it flows better and entices the reader to read the article more than the first, although I understand the purpose of the first to “mimic” the structure of an article, and
hence this should add to clarity.
No clear preference for either format
Both abstracts were clear and well organized. The format was different, but both told me the
information I wanted to know. I gave them both 8.
I found each of the abstracts in this pair to be very clear and without ambiguity. The structured abstract gives the explicit purposes and conclusions, whereas the traditional one
does not, but I believe that those are unrelated to “clarity” as you are defining and intending it—for me they represent a different dimension. I would give both abstracts a rating
of 9.
I did what you wanted me to do, and I did not come up with a clear preference. My rating for
the structured abstract was 9 compared to a rating of 8 for the traditional one.
Preferences for the structured abstracts
Overall, I thought that the structured abstract was more explicit and clearer than the traditional one. I would give 7 to the structured one and 5 to the traditional one.
I would rate the second (structured) abstract with a higher clarity (perhaps 9) and the first
(traditional) one with a lower score (perhaps 4) but not necessarily due to the structured/
unstructured nature of the two paragraphs. The structured abstract was longer and more
detailed (with information on sample size, etc.). If the unstructured abstract were of
equal length and had sample information to the same degree as the structured abstract, it
may have been equally clear.
My preference for the structured abstract (10) is strongly influenced by the fact that I could
easily reproduce the content of the abstract with a high degree of accuracy, compared to
the traditional abstract (which I give 6). I was actually quite impressed by the different
“feel” of the two formats.
I would give the traditional one 4 and the structured one 8. You inspired me to look up my
own recent Journal of Educational Psychology (JEP) article’s abstract. I would give it
5—of course an unbiased opinion!
I rated the traditional abstract 3 for clarity and the structured abstract 7. In general, the traditional abstract sacrificed clarity for brevity and the structured one was a touch verbose.
Both abstracts were too general.
In general, I prefer the structured layout. I have read many articles in health journals that use
this type of format, and I find the insertion of the organizer words a very simple, yet powerful way to organize the information.
The bold-faced headings for the structured abstract do serve an organizational function and
would probably be appreciated by students.
Overall, I think that the structured format is good, and I hope that JEP will seriously consider
adopting it.
Concluding Remarks
Abstracts in journal articles are an intriguing genre. They encapsulate, in a
brief text, the essence of the article that follows. And according to the APA
(2001) Publication Manual, “A well-prepared abstract can be the most
important paragraph in your article. . . . The abstract needs to be dense with
information but also readable, well organized, brief and self-contained”
(p. 12).
In point of fact, the nature of abstracts in scientific journals has been
changing over the years as more and more research articles compete for their
readers’ attention. Berkenkotter and Huckin (1995) have described how the
physical format of journal papers has changed to facilitate searching and
reading and how abstracts in scientific journal articles have been getting both
longer and more informative (pp. 34-35).
The current move toward adopting structured abstracts might thus be seen
as part of a more general move toward the use of more clearly defined structures in academic writing. Indeed, while preparing this article, I have come
across references to structured content pages (as in Contemporary Psychology and the Journal of Social Psychology and Personality), structured literature reviews (Ottenbacher 1983; Sugarman, McCrory, and Hubal 1998),
structured articles (Goldmann 1997; Hartley 1999b; Kircz 1998), and even
structured book reviews (in Medical Education Review).
These wider issues, however, are beyond the scope of this particular article. Here, I have merely reported the findings from comparing traditional
abstracts with their equivalent structured versions in one particular context.
My aim, however, has been to illustrate in general how structured abstracts
might make a positive contribution to scientific communication.
APPENDIX
The Abstract Evaluation Checklist Used in the Present Study
Abstract No. ________
1. Is anything said about previous research or research findings on the topic?
2. Is there an indication of what the aims/purposes of this study were?
3. Is there information on where the participants came from?
4. Is there information on the numbers of participants?
5. Is there information on the sex distribution of the participants?
6. Is there information on the ages of the participants?
7. Is there information on how the participants were placed in different groups (if
appropriate)?
8. Is there information on the measures used in the study?
9. Are the main results presented in prose in the abstract?
10. Are the results said to be (or not to be) statistically significant, or is a p value
given?
11. Are actual numbers (e.g. means, correlation coefficients, t-values) given in the
abstract?
12. Are any conclusions/implications drawn?
13. Are any limitations of the study mentioned?
14. Are suggestions for further research mentioned?
This checklist is not suitable for theoretical or review papers but could be
adapted for such papers. It would also be interesting to ask for an overall
evaluation score (say, out of 10) that could be related to the individual items.
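For computationally minded readers, the kind of tally such a checklist yields can be sketched in a few lines of Python. The function name, the use of None for inapplicable items, and the proportion-based score below are my own illustration, not part of the original study's procedure:

```python
# Hypothetical tally for a yes/no abstract evaluation checklist.
# Each answer is True ("yes") or False ("no"); items judged not
# applicable (e.g., item 7 for single-group studies) are recorded
# as None and excluded from the denominator.

def checklist_score(answers):
    """Return the proportion of applicable items answered 'yes'."""
    applicable = [a for a in answers if a is not None]
    if not applicable:
        raise ValueError("no applicable items")
    return sum(applicable) / len(applicable)

# An abstract that reports prior research, aims, participants,
# measures, and results, but omits limitations and further research:
answers = [True, True, True, True, False, False, None,
           True, True, True, False, True, False, False]
print(round(checklist_score(answers), 2))  # → 0.62
```

A proportion (rather than a raw count) keeps scores comparable across abstracts for which different numbers of items apply; an overall rating out of 10 could then be correlated with these item-level scores.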
References
American Psychological Association. 2001. Publication manual of the American Psychological
Association. 5th ed. Washington, DC: American Psychological Association.
Berkenkotter, C., and T. N. Huckin. 1995. Genre knowledge in disciplinary communication.
Mahwah, NJ: Lawrence Erlbaum.
Blair, D. C., and S. O. Kimbrough. 2002. Exemplary documents: A foundation for information
retrieval design. Information Processing and Management 38:363-79.
Booth, A., and A. J. O’Rourke. 1997. The value of structured abstracts in information retrieval
from MEDLINE. Health Library Review 14:157-66.
British Psychological Society. 2001. Centenary annual conference: Final programme and
abstracts. Leicester: British Psychological Society.
———. 2002. Proceedings of the British Psychological Society 10 (2): 37-131.
Dronberger, G. B., and G. T. Kowitz. 1975. Abstract readability as a factor in information systems. Journal of the American Society for Information Science 26:108-11.
Flesch, R. 1948. A new readability yardstick. Journal of Applied Psychology 32:221-23.
Froom, P., and J. Froom. 1993. Deficiencies in structured medical abstracts. Journal of Clinical
Epidemiology 46:591-94.
Goldmann, D. R. 1997. Medical writing: A new section for annals. Annals of Internal Medicine
126:83.
Harbourt, A. M., L. S. Knecht, and B. L. Humphreys. 1995. Structured abstracts in MEDLINE
1989-91. Bulletin of the Medical Library Association 83:190-95.
Hartley, J. 1994. Three ways to improve the clarity of journal abstracts. British Journal of Educational Psychology 64:331-43.
———. 1999a. Applying ergonomics to applied ergonomics. Applied Ergonomics 30:535-41.
———. 1999b. From structured abstracts to structured articles: A modest proposal. Journal of
Technical Writing and Communication 29:255-70.
———. 2000a. Are structured abstracts more or less accurate than traditional ones? A study in
the psychological literature. Journal of Information Science 26:273-77.
———. 2000b. Typographic settings for structured abstracts. Journal of Technical Writing and
Communication 30:355-65.
———. 2002. Do structured abstracts take up more space? And does it matter? Journal of Information Science 28:437-42.
Hartley, J., and M. Benjamin. 1998. An evaluation of structured abstracts in journals published
by the British Psychological Society. British Journal of Educational Psychology 68:443-56.
Hartley, J., and F. Ganier. 2000. Which do you prefer? Some observations on preference measures in studies of structured abstracts. European Science Editing 26:4-7.
Hartley, J., E. Sotto, and J. W. Pennebaker. 2002. Style and substance in psychology articles: Are
influential articles more readable than less influential ones? Social Studies of Science
32:321-34.
Hartley, J., and M. Sydes. 1995. Structured abstracts in the social sciences: Presentation, readability and recall. R&D report no. 6211. London: British Library.
———. 1997. Are structured abstracts easier to read than traditional ones? Journal of Research
in Reading 20:122-36.
Hartley, J., M. Sydes, and A. Blurton. 1996. Obtaining information accurately and quickly: Are
structured abstracts more efficient? Journal of Information Science 22:349-56.
Haynes, R. B. 1993. More informative abstracts: Current status and evaluation. Journal of Clinical Epidemiology 46:595-97.
Haynes, R. B., C. D. Mulrow, E. J. Huth, D. G. Altman, and M. J. Gardner. 1990. More informative abstracts revisited. Annals of Internal Medicine 113:69-76.
Kircz, J. G. 1998. Modularity: The next form of scientific information presentation. Journal of
Documentation 54:210-35.
Kirkman, J. 2001. Third person, past tense, passive voice for scientific writing. Who says? European Science Editing 27:4-5.
Klare, G. R. 1963. The measurement of readability. Ames: Iowa State University Press.
McIntosh, N. 1995. Structured abstracts and information transfer. R&D report no. 6142. London: British Library.
McIntosh, N., G. Duc, and G. Sedin. 1999. Structure improves content and peer review of
abstracts submitted to scientific meetings. European Science Editing 25:43-47.
Mulrow, C. D., S. B. Thacker, and J. A. Pugh. 1988. A proposal for more informative abstracts of
review articles. Annals of Internal Medicine 108:613-15.
O’Rourke, A. J. 1997. Structured abstracts in information retrieval from biomedical databases:
A literature survey. Health Informatics Journal 3:17-20.
Ottenbacher, K. 1983. Quantitative reviewing: The literature review as scientific inquiry. American Journal of Occupational Therapy 37:313-19.
Pennebaker, J. W., M. E. Francis, and R. J. Booth. 2001. Linguistic inquiry and word count:
LIWC. Mahwah, NJ: Lawrence Erlbaum.
Pinto, M., and F. W. Lancaster. 1999. Abstracts and abstracting in knowledge discovery. Library
Trends 48:234-48.
Pitkin, R. M., and M. A. Branagan. 1998. Can the accuracy of abstracts be improved by providing specific instructions? A randomized controlled trial. Journal of the American Medical
Association 280:267-69.
Pitkin, R. M., M. A. Branagan, and L. Burmeister. 1999. Accuracy of data in abstracts of published research articles. Journal of the American Medical Association 281:1110-11.
Porter, A. L., A. Kongthorn, and J. C. Lu. 2002. Research profiling: Improving the literature
review. Scientometrics 53:351-70.
Riggle, K. B. 1998. Using the active and passive voice appropriately in on-the-job writing. Journal of Technical Writing and Communication 28:85-117.
Roberts, J. C., C. H. Fletcher, and S. W. Fletcher. 1994. Effects of peer reviewing and editing on
the readability of articles published in Annals of Internal Medicine. Journal of the American
Medical Association 272:119-21.
Siebers, R. 2000. How accurate is data in abstracts of research articles? New Zealand Journal of
Medical Laboratory Science 54:22-23.
———. 2001. Data inconsistencies in abstracts of articles in Clinical Chemistry. Clinical Chemistry 47:1.
Sugarman, J., D. C. McCrory, and R. C. Hubal. 1998. Getting meaningful informed consent from
older adults: A structured literature review of empirical research. Journal of the American
Geriatrics Society 46:517-24.
Swales, J. M., and C. B. Feak. 1994. Academic writing for graduate students: A course for nonnative speakers of English. Ann Arbor: University of Michigan Press.
Taddio, A., T. Pain, F. F. Fassos, H. Boon, A. L. Ilersich, and T. R. Einarson. 1994. Quality of
nonstructured and structured abstracts of original research articles in the British Medical
Journal, the Canadian Medical Association Journal and the Journal of the American Medical Association. Canadian Medical Association Journal 150:1611-15.
Tenopir, C., and P. Jacso. 1993. Quality of abstracts. Online 17:44-55.
Trakas, K., A. Addis, D. Kruk, Y. Buczek, M. Iskedjian, and T. R. Einarson. 1997. Quality
assessment of pharmacoeconomic abstracts of original research articles in selected journals.
Annals of Pharmacotherapy 31:423-38.
Wilczynski, N. L., C. J. Walker, K. A. McKibbon, and R. B. Haynes. 1995. Preliminary assessment of the effect of more informative (structured) abstracts on citation retrieval from
MEDLINE. Medinfo 8:1457-61.
JAMES HARTLEY is a research professor in the Department of Psychology at the University of Keele in Staffordshire, England. His main interests lie in written communication and in teaching and learning in higher education. He is the author of Designing
Instructional Text (3rd ed., 1994) and Learning and Studying: A Research Perspective
(1998).