
Evaluation of Fatigue Scales in Stroke Patients
Gillian Mead, MD; Joanna Lynch, MA; Carolyn Greig, PhD; Archie Young, MD;
Susan Lewis, PhD; Michael Sharpe, MD
Background and Purpose—There is little information on how to best measure poststroke fatigue. Our aim was to identify
which currently available fatigue scale is most valid, feasible, and reliable in stroke patients.
Methods—Fatigue scales were identified by systematic search, and the 5 with the best face validity were identified by
expert consensus. Feasibility (ie, did patients provide answers?) and internal consistency (an aspect of reliability) of
these scales were evaluated by interviewing 55 stroke patients. Test-retest reliability was assessed by reinterviewing 51
patients, interrater reliability was assessed by rerating audio recordings, and convergent validity was assessed by
measuring the correlation between scale scores.
Results—Of the 52 scales identified, the SF-36v2 (vitality component), the fatigue subscale of the Profile of Mood States,
the Fatigue Assessment Scale, the general subscale of the Multidimensional Fatigue Symptom Inventory, and the Brief
Fatigue Inventory had the best face validity. The Brief Fatigue Inventory was unfeasible to administer and was omitted.
Of the remaining 4 scales, the Fatigue Assessment Scale had the poorest internal consistency. Test-retest reliability for
individual scale questions ranged from fair to good; the Fatigue Assessment Scale had the narrowest limits of agreement
for the total score, indicating the best test-retest reliability. Interrater reliability for individual questions ranged from
good to very good, and there was no significant mean difference in total scores for any scale. Convergent validity was
moderate to high for the total scores of the 4 scales.
Conclusions—All four scales were valid and feasible to administer to stroke patients. The Fatigue Assessment Scale had
the best test-retest reliability but the poorest internal consistency. (Stroke. 2007;38:2090-2095.)
Key Words: complications ■ quality of life scales ■ stroke recovery ■ fatigue
Fatigue is defined as a feeling of lack of energy, weariness,
and aversion to effort. Although it has been well studied
in neurological diseases such as multiple sclerosis and Parkinson’s disease,1 emerging evidence indicates that fatigue is
also common after stroke but to date has been relatively
neglected.2–8 Previous cross-sectional studies3–8 have all used
scales devised for use in conditions other than stroke, and
consequently their validity in stroke patients is unknown. For
example, the question “Do you feel weak?” does not have
face validity for stroke because patients’ muscle weakness is
generally due to the local weakness of hemiparesis rather than
to generalized fatigue. Two scales9,10 ask patients to rate the
extent to which fatigue interferes with physical functioning,
even though some stroke patients cannot disentangle the
effect of their fatigue from that of their neurological deficit.8
The reliability (ie, does the measure as a whole, and do the
items included in the measure, give repeatable results when
used under the same circumstances11?) of existing fatigue
scales when used in stroke patients has not previously been
determined. Our aims were to determine the best scales for
measuring the severity of fatigue after stroke by determining
face validity, feasibility, reliability (internal consistency,
test-retest, and interrater), and convergent construct validity
of currently available scales.
Methods
Identification of Currently Available
Fatigue Scales
One researcher (G.M.) performed a search of MEDLINE (1966 to
February 2004) in July 2004 by using the search terms “fatigue” (and
related words such as “tiredness”), “instrument,” “assessment,”
“scale,” and “measurement.” Abstracts were screened, potentially
relevant publications were obtained, and reference lists of these
articles were screened. A second search was performed in October
2006 (after completion of the patient interviews) to check whether
the literature had changed significantly.
Face Validity of Fatigue Scales
Full-text articles were then independently scrutinized by 4 observers
to determine the face validity of each scale for stroke patients. One
of these observers (G.M.) was a practicing stroke physician with
research experience of fatigue in stroke patients; the other 3 all had
clinical or research experience in assessing patients with stroke
and/or fatigue. Face validity was assessed on the criteria that the
scale (1) captured the phenomenon of poststroke fatigue and (2) was
free from items indistinguishable from the effects of the stroke, eg,
“My limbs feel weak.” Each observer independently listed their 5
preferred scales and tabled their preferences at a consensus meeting.
Any differences were resolved by discussion. As a further check of
face validity, the 5 chosen scales were pilot tested in 13 stroke
inpatients.
Patient Interviews for Feasibility, Reliability, and
Convergent Construct Validity
Ethics approval was granted by the local ethics committee. All
participating patients gave written consent. Patients were recruited
from hospital stroke wards (at least 1 week after stroke onset) and
from the community via stroke clinics or community nurses. Patients
with dysphasia or confusion severe enough to prevent them from
understanding the rationale for the study or giving informed consent
were excluded, as were those who were medically unstable because
of another condition. The side of the brain lesion, stroke subtype
(Oxfordshire Community Stroke Project classification), and brain computed tomography results were recorded from patients' records.
After we had verified that the patient understood the meaning of the word
“fatigue,” the scales were administered verbally and the interview
was audio recorded.
Feasibility
As a measurement of feasibility, we recorded whether the patient
provided an appropriate response to each item of each scale.
Reliability
Responses to individual scale items were used to determine the
internal consistency of each scale. For test-retest reliability, the
scales were readministered verbally by the same interviewer 3 days
later. Responses from both interviews were used to determine the
test-retest reliability of individual scale items and total scale scores.
For interrater reliability, the audio recordings of the first interview
were rerated by a second rater who was blinded to the initial rating
to determine interrater reliability of individual scale items and total
scale scores.
Internal consistency was calculated by Cronbach's α. Test-retest
and interrater agreements between the individual items of each scale
were analyzed by percent agreement, the weighted kappa statistic
(for items with >2 response levels, by Stat-Xact), and Cohen's
kappa (for items with 2 response levels). Interpretation of kappa was
based on published guidelines.12 Test-retest and interrater agreement
for total test scores was analyzed by the Bland-Altman method13 and
intraclass correlation coefficients.
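For readers who wish to reproduce these reliability statistics, the sketch below shows how they can be computed in Python with NumPy, SciPy, and scikit-learn. This is a minimal illustration, not the analysis code used in the study (which relied on SPSS and Stat-Xact); the array names (`items`, `ratings_a`, `ratings_b`, `total_t1`, `total_t2`, `scores`) are hypothetical, and the linear kappa weighting and the ICC(2,1) model are assumptions, because the paper does not state which weighting scheme or ICC form was used.

```python
# Minimal, illustrative reliability statistics (not the authors' SPSS/Stat-Xact code).
import numpy as np
from scipy import stats
from sklearn.metrics import cohen_kappa_score


def cronbach_alpha(items: np.ndarray) -> float:
    """Internal consistency; `items` is (n_patients, n_items) of numeric responses."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()       # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of total scores
    return k / (k - 1) * (1 - item_var / total_var)


def weighted_kappa(ratings_a, ratings_b) -> float:
    """Agreement for an ordinal item with >2 levels (linear weights assumed)."""
    return cohen_kappa_score(ratings_a, ratings_b, weights="linear")


def bland_altman(total_t1: np.ndarray, total_t2: np.ndarray):
    """Mean difference (with 95% CI) and 95% limits of agreement for total scores."""
    diff = total_t1 - total_t2
    n = len(diff)
    mean_diff = diff.mean()
    sd_diff = diff.std(ddof=1)
    ci_half = stats.t.ppf(0.975, n - 1) * sd_diff / np.sqrt(n)
    limits = (mean_diff - 1.96 * sd_diff, mean_diff + 1.96 * sd_diff)
    return mean_diff, (mean_diff - ci_half, mean_diff + ci_half), limits


def icc_2_1(scores: np.ndarray) -> float:
    """Two-way random-effects, absolute-agreement, single-rating ICC(2,1);
    `scores` is (n_patients, n_ratings), e.g. time 1 vs time 2 totals."""
    n, k = scores.shape
    grand = scores.mean()
    msr = k * ((scores.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # between patients
    msc = n * ((scores.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # between ratings
    sse = ((scores - grand) ** 2).sum() - (n - 1) * msr - (k - 1) * msc
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```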
Convergent Construct Validity
Convergent construct validity was assessed by correlating the total score for each scale obtained at the first interview with the total score for each of the other scales, using the Spearman correlation coefficient.
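A corresponding sketch for these pairwise correlations is shown below; it is illustrative only, and the mapping `totals` (scale name to an array of first-interview total scores, with NaN for missing totals) is hypothetical.

```python
# Illustrative convergent-validity check (not the study's SPSS code).
from itertools import combinations

import numpy as np
from scipy.stats import spearmanr


def convergent_validity(totals: dict) -> None:
    """Print pairwise Spearman rho between total scale scores at the first interview."""
    for (name_a, a), (name_b, b) in combinations(totals.items(), 2):
        keep = ~(np.isnan(a) | np.isnan(b))            # pairwise-complete cases
        rho, p = spearmanr(a[keep], b[keep])
        print(f"{name_a} vs {name_b}: rho={rho:.2f}, P={p:.3g}, n={keep.sum()}")
```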
Statistical Analysis
All analyses were done with SPSS version 11.0 unless stated
otherwise.
Results
Identification of Fatigue Scales
Pertinent citations (N=6122) were identified. From these
citations, 52 fatigue scales were identified, none of which had
been designed for use in stroke patients. A second search in
October 2006 identified 3 additional scales devised for
conditions other than stroke.
Face Validity
Five scales were deemed to have adequate face validity: the
vitality subscale of the SF-36v2,14 the fatigue subscale of the
Profile of Mood States (POMS-fatigue),15 the Fatigue Assessment Scale (FAS),16 the general subscale of the Multidimensional Fatigue Symptom Inventory (MFSI-general),17 and the
Brief Fatigue Inventory (BFI).18 During pilot testing in 13 stroke inpatients, one question on the MFSI-general ("I feel pooped") was found to be poorly understood and was therefore changed to "I feel exhausted."
Patient Interviews for Feasibility, Reliability, and
Convergent Construct Validity
Sixty-four patients were invited to participate, of whom 55
consented. Their median age was 73 years (interquartile
range, 66 to 81 years), 31 (56%) were male, 40 were
inpatients, and 15 were resident in the community. Twenty-seven (49%) had a right hemisphere stroke. Three (6%) had a
hemorrhagic stroke. Eleven (20%) had total anterior circulation syndromes, 22 (40%) had partial anterior circulation
syndromes, 16 (29%) had lacunar syndromes, and 6 (11%)
had posterior circulation syndromes. Forty-three (78%) had a
relevant stroke lesion on brain computed tomography scans.
The median time between stroke and first assessment was 23
days (interquartile range, 10 to 53 days) for inpatients and
137 days (interquartile range, 93 to 217 days) for community
patients.
All 55 patients were successfully interviewed at time 1, and
51 of these were interviewed at time 2. Reasons for not
having a second interview were discharge before it was due
(n=1), deterioration in medical condition (n=2), and refusal (n=1). The mean duration between interviews (test-retest
reliability) was 3.9 days (range, 3 to 7 days).
Feasibility
At the first interview, every patient answered every item of
the SF-36v2 (vitality component), POMS-fatigue, and MFSI-general. Three patients could not complete the BFI, an
additional 8 patients could not answer the question on the
interference of fatigue with “walking ability,” and one was
unable to answer the question on the interference of fatigue
with “normal work” (both items in the BFI). Three patients
each could not answer one of three different items on the
FAS.
Of those attempting the second interview (n=51), all items
on the SF-36v2 (vitality component) and FAS were answered
by every patient. One patient was unable to answer “In the
past week do you feel sluggish?” (POMS-fatigue) and “In the
past week I feel sluggish” (MFSI-general). Three patients did
not attempt the BFI at the second interview, and an additional
three could not answer the BFI question on the interference of
fatigue with "walking ability." These data demonstrated that the BFI was the least feasible scale to administer, so it was dropped from further analysis.
Internal Consistency
With respect to internal consistency, Cronbach's α for the first and second interviews was 0.91 and 0.93, respectively, for the MFSI-general; 0.89 and 0.88 for the POMS-fatigue; 0.76 and 0.78 for the SF-36v2 vitality score; and 0.58 and 0.62 for the FAS.
TABLE 1. Reliability (Test-Retest and Interrater) for Individual Scale Items

| Scale | Item | Test-Retest: % Agreement | Test-Retest: Kappa (95% CI),* n | Test-Retest: Level of Agreement | Interrater: % Agreement | Interrater: Kappa (95% CI),* n | Interrater: Level of Agreement |
|---|---|---|---|---|---|---|---|
| SF-36v2 (vitality) | In the past 4 weeks, did you feel full of life? | 43% | 0.35 (0.07–0.63), 51 | Fair | 90% | 0.89 (0.77–1.00), 49 | Very good |
| | In the past 4 weeks, did you have a lot of energy? | 59% | 0.46 (0.14–0.77), 51 | Moderate | 89% | 0.87 (0.74–0.99), 47 | Very good |
| | In the past 4 weeks, did you feel worn out? | 43% | 0.47 (0.25–0.70), 51 | Moderate | 86% | 0.72 (0.45–0.99), 47 | Good |
| | In the past 4 weeks, did you feel tired? | 43% | 0.43 (0.22–0.63), 51 | Moderate | 87% | 0.88 (0.75–1.00), 48 | Very good |
| POMS (fatigue) | In the past week, do you feel tired? | 65% | 0.51 (0.29–0.73), 51 | Moderate | 92% | 0.89 (0.75–1.00), 48 | Very good |
| | In the past week, do you feel fatigued? | 57% | 0.52 (0.32–0.72), 51 | Moderate | 85% | 0.82 (0.66–0.98), 46 | Very good |
| | In the past week, do you feel worn out? | 55% | 0.59 (0.41–0.76), 51 | Moderate | 87% | 0.71 (0.45–0.97), 45 | Good |
| | In the past week, do you feel sluggish? | 56% | 0.61 (0.42–0.80), 50 | Good | 90% | 0.87 (0.74–1.00), 48 | Very good |
| | In the past week, do you feel weary? | 57% | 0.52 (0.30–0.73), 51 | Moderate | 91% | 0.85 (0.68–1.00), 46 | Very good |
| | In the past week, do you feel sleepy? | 55% | 0.45 (0.19–0.72), 51 | Moderate | 88% | 0.74 (0.46–1.00), 48 | Good |
| FAS | I am bothered by fatigue | 37% | 0.37 (0.17–0.56), 51 | Fair | 91% | 0.78 (0.51–1.00), 46 | Good |
| | I get tired very quickly | 47% | 0.49 (0.27–0.71), 51 | Moderate | 84% | 0.67 (0.37–0.97), 48 | Good |
| | I don't do much during the day | 44% | 0.50 (0.30–0.71), 50 | Moderate | 89% | 0.87 (0.75–1.00), 43 | Very good |
| | I have enough energy for everyday life | 46% | 0.44 (0.21–0.66), 50 | Moderate | 91% | 0.82 (0.58–1.00), 44 | Very good |
| | Physically I feel exhausted | 51% | 0.64 (0.48–0.81), 51 | Good | 94% | 0.98 (0.95–1.00), 48 | Very good |
| | I have problems starting things | 52% | 0.53 (0.31–0.75), 50 | Moderate | 87% | 0.84 (0.68–1.00), 47 | Very good |
| | I have problems thinking clearly | 71% | 0.77 (0.61–0.92), 51 | Good | 90% | 0.80 (0.61–0.99), 48 | Very good |
| | I feel no desire to do anything | 49% | 0.33 (0.05–0.61), 51 | Fair | 89% | 0.90 (0.78–1.00), 44 | Very good |
| | Mentally I feel exhausted | 51% | 0.65 (0.46–0.84), 51 | Good | 91% | 0.91 (0.81–1.00), 44 | Very good |
| | When I am doing something, I can concentrate quite well | 47% | 0.48 (0.25–0.71), 51 | Moderate | 91% | 0.95 (0.90–1.00), 44 | Very good |
| MFSI (general) | In the past week, I feel exhausted | 41% | 0.58 (0.41–0.74), 51 | Moderate | 93% | 0.94 (0.87–1.00), 45 | Very good |
| | In the past week, I feel run down | 49% | 0.64 (0.47–0.81), 51 | Good | 87% | 0.81 (0.60–1.00), 46 | Very good |
| | In the past week, I feel worn out | 45% | 0.48 (0.27–0.69), 51 | Moderate | 89% | 0.90 (0.81–1.00), 44 | Very good |
| | In the past week, I feel fatigued | 39% | 0.56 (0.37–0.74), 51 | Moderate | 84% | 0.87 (0.75–0.97), 44 | Very good |
| | In the past week, I feel sluggish | 60% | 0.66 (0.47–0.84), 50 | Good | 89% | 0.82 (0.63–1.00), 45 | Very good |
| | In the past week, I feel tired | 59% | 0.69 (0.53–0.85), 51 | Good | 89% | 0.92 (0.83–1.00), 45 | Very good |

No. of possible responses: for SF-36v2 (vitality), 5; for POMS (fatigue), 4; for FAS, 5; for MFSI (general), 5.
*Weighted kappa: all P values <0.01.
Test-Retest Reliability
The kappa values for the individual items of all 4 scales
ranged from fair to good (Table 1). The agreement between
total scores for each scale is shown in Table 2 and as
Bland-Altman plots (the Figure). The horizontal axes of the
Bland-Altman plots span the entire range of the scale,
whereas the vertical axes cover the range of possible differences. The scale with the narrowest scatter of differences (ie,
limits of agreement) was the FAS (Figure, c). The 95% CIs
for the mean difference between the first interview and the
second for the total scores of the POMS-fatigue and MFSI-general did not include zero, demonstrating a significant mean difference between the 2 interviews, whereas there was no significant mean difference between interviews for the FAS and SF-36v2-vitality (Table 2).
TABLE 2. Reliability (Test-Retest* and Interrater†) of Total Scale Scores (*Time 1 − Time 2; †Time 1 − Rater 2)

| Scale | Rating | Mean (SD) Total Score | n | Mean Difference (95% CI) | No. of Pairs | Limits of Agreement | Intraclass Correlation Coefficient (95% CI) |
|---|---|---|---|---|---|---|---|
| SF-36v2 (vitality) | Time 1 | 11.5 (3.45) | 55 | | | | |
| | Time 2 | 11.6 (3.07) | 51 | −0.14 (−1.07 to 0.79) | 51 | −6.76 to 6.48 | 0.51 (0.27 to 0.69) |
| | Rater 2 | 11.3 (3.62) | 44 | −0.16 (−0.60 to 0.28) | 44 | −3.05 to 2.73 | 0.92 (0.86 to 0.96) |
| POMS (fatigue) | Time 1 | 8.25 (4.25) | 55 | | | | |
| | Time 2 | 7.18 (3.65) | 50 | 1.02 (0.24 to 1.80) | 50 | −4.44 to 6.48 | 0.74 (0.56 to 0.85) |
| | Rater 2 | 8.20 (3.99) | 41 | 0.61 (−0.08 to 1.30) | 41 | −3.79 to 5.01 | 0.84 (0.72 to 0.91) |
| FAS | Time 1 | 24.8 (5.47) | 52 | | | | |
| | Time 2 | 24.9 (5.36) | 51 | 0.31 (−0.76 to 1.38) | 48 | −7.06 to 7.68 | 0.77 (0.62 to 0.86) |
| | Rater 2 | 25.3 (5.05) | 35 | −0.06 (−0.93 to 0.82) | 35 | −5.15 to 5.03 | 0.88 (0.77 to 0.94) |
| MFSI (general) | Time 1 | 10.5 (6.46) | 55 | | | | |
| | Time 2 | 8.54 (6.14) | 50 | 2.02 (0.86 to 3.18) | 50 | −6.16 to 10.2 | 0.76 (0.55 to 0.87) |
| | Rater 2 | 10.8 (6.44) | 41 | 0.12 (−0.89 to 1.13) | 41 | −6.23 to 6.51 | 0.88 (0.78 to 0.93) |
The intraclass correlation coefficient for total test-retest FAS scores was higher than that for the SF-36v2-vitality (0.77 vs 0.51; Table 2).
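Bland-Altman panels of the kind shown in the Figure can be drawn along the following lines. This is an illustrative matplotlib sketch, not the authors' plotting code; `t1` and `t2` are hypothetical arrays of first- and second-interview total scores, and plotting the difference against the pairwise mean is the conventional Bland-Altman choice (the paper states only that the horizontal axis spans the scale's range).

```python
# Illustrative Bland-Altman plot for one scale's total scores.
import numpy as np
import matplotlib.pyplot as plt


def bland_altman_plot(t1: np.ndarray, t2: np.ndarray, scale_range=None, title=""):
    diff = t1 - t2                        # time 1 minus time 2
    mean_pair = (t1 + t2) / 2.0           # x axis: mean of the two measurements
    mean_diff = diff.mean()
    sd_diff = diff.std(ddof=1)
    upper = mean_diff + 1.96 * sd_diff    # upper 95% limit of agreement
    lower = mean_diff - 1.96 * sd_diff    # lower 95% limit of agreement

    fig, ax = plt.subplots()
    ax.scatter(mean_pair, diff)
    ax.axhline(mean_diff)                 # reference line: mean difference
    ax.axhline(upper, linestyle="--")     # reference lines: limits of agreement
    ax.axhline(lower, linestyle="--")
    if scale_range is not None:
        ax.set_xlim(*scale_range)         # span the scale's full range
    ax.set_xlabel("Mean of time 1 and time 2 total score")
    ax.set_ylabel("Time 1 minus time 2 total score")
    ax.set_title(title)
    return fig, ax
```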
Interrater Reliability
Forty-three interviews were available for rerating. Interrater
reliability for the individual items of each scale is shown in
Table 1. The kappa values indicated good to very good
interrater reliability. For total test scores, there was no
significant mean difference between raters (Table 2).
Convergent Construct Validity
Convergent construct validity of the total scale scores for the
POMS-fatigue, FAS, and MFSI-general scales was moderate to high (Table 3). The correlation between the FAS and the MFSI-general was higher than that between the SF-36v2-vitality and the MFSI-general (0.71 vs 0.47).
Discussion
This is the first evaluation of available fatigue scales to determine which is the most valid, most feasible to administer, and most reliable for use in stroke patients. Of the 52 fatigue scales identified
by our literature search, many contained items that could be
confused with the neurological effect of stroke. We selected
5 with the best face validity in stroke and tested their
psychometric properties.
We deliberately recruited both hospital and community
patients to improve the generalizability of our results. When
we applied our 5 chosen scales by interviewing stroke
patients, the least feasible was the BFI, perhaps because
patients had to grade their fatigue on a numeric scale of 0 to
10 and quantify the extent to which fatigue interfered with
different activities. Patients frequently had difficulty distinguishing the effect of their neurological impairment from the
effect of fatigue. The BFI was therefore dropped from further
analysis.
We assessed 3 aspects of reliability. First, internal consistency was lowest for the FAS, possibly because the FAS asks
about different facets of fatigue, whereas the other scales ask
several similar questions with only slight differences in
vocabulary. Although an “ideal” scale may have high internal
consistency, a scale measuring slightly different facets of a
symptom as complex as fatigue is arguably more useful in
practice. Second, test-retest reliability for individual items (as
determined by kappa values) was similar for all 4 scales
tested, indicating that no scale outperformed another in this
respect (Table 1). However, for total test scores, the FAS
performed the best, as it had the narrowest limits of agreement (Figure, c), there was no significant mean difference
between the first and second interviews (Table 2), and the
intraclass correlation coefficient was high, at 0.77 (Table 2).
The SF-36v2-vitality had the widest limits of agreement,
perhaps reflecting differences in the questions: the SF-36v2-vitality asks about the previous 4 weeks, whereas the POMS-fatigue and MFSI-general ask about the "last week" and the
FAS asks about the present time. Third, analysis of interrater
reliability indicated that no single scale outperformed another: the
proportion of kappa scores in each category was similar for
all scales (Table 1), and there was no significant mean
difference between observers for total test scores (Table 2).
Construct validity of the total scale scores was moderate to
high (Table 3). Because some of the scales contained the
same (or very similar) questions, this is not unexpected. We
noted that scores in our patients indicated higher levels of fatigue than those reported for nonstroke subjects, eg, on the SF-36v2-vitality and the FAS.16,19
There are some weaknesses to the study. First, our patients
were not consecutive and may not be representative of stroke
patients as a whole. Second, although a larger sample size
would have given more precise estimates, a sample size of 50
is usually considered sufficient for studies of agreement.12
Third, although we assessed face validity and convergent
construct validity, the absence of any “gold standard” for
fatigue after stroke means we could not assess criterion
validity. Fourth, when assessing test-retest reliability, the
interviewer may have remembered the results of the first
interview when performing the second, thereby artificially
increasing apparent reliability. Fifth, when assessing interrater reliability, we used audio recordings rather than repeat interviews, again potentially increasing apparent reliability.
Figure. Bland-Altman plots for test-retest reliability of total scale scores. a, Test-retest agreement for the SF-36v2 vitality component raw total score. b, Test-retest agreement for the POMS-fatigue total score. c, Test-retest agreement for the FAS total score. d, Test-retest agreement for the MFSI-general total score. Reference lines show the mean difference between time 1 and time 2 and the 95% limits of agreement.
Finally, not all of the interviews could be analyzed for
interrater reliability because of the poor quality of some
recordings, mainly due to background noise on hospital
wards.
The four scales are all usable. Our “best buy” depends to
some extent on the intended use, but on the basis of our data,
we would recommend the use of the FAS to measure fatigue
after stroke because it had face validity, it was feasible for
most patients, and it had the best test-retest reliability and
high construct validity. However, if high internal consistency
is a priority, one of the other three scales should be
considered.
Acknowledgments
We are grateful to the patients who participated and to the staff who
assisted us in identifying patients.
TABLE 3. Convergent Construct Validity* of Total Scale Scores

| Scale | SF-36v2 (vitality) | POMS (fatigue) | FAS |
|---|---|---|---|
| POMS (fatigue) | −0.58 (<0.001), n=55 | | |
| FAS | −0.41 (0.003), n=52 | 0.59 (<0.001), n=52 | |
| MFSI (general) | −0.47 (<0.001), n=55 | 0.75 (<0.001), n=55 | 0.71 (<0.001), n=52 |

*Spearman correlation coefficient (ρ) for time 1 (first rater).

Source of Funding
The study received funding from the Chief Scientist Office of the Scottish Executive Health Department (reference CZG/2/161).
Disclosures
None.
References
1. Wessely S, Hotopf M, Sharpe M. Chronic Fatigue and Its Syndromes.
New York: Oxford University Press; 1998.
2. Staub F, Bogousslavsky J. Fatigue after stroke: a major but neglected issue. Cerebrovasc Dis. 2001;12:75–81.
3. Leegaard OF. Diffuse cerebral symptoms in convalescents from cerebral infarction and myocardial infarction. Acta Neurol Scand. 1983;67:348–355.
4. Ingles JL, Eskes GA, Phillips SJ. Fatigue after stroke. Arch Phys Med Rehabil. 1999;80:173–178.
5. Van der Werf SP, Van den Broek HL, Anten HW, Bleijenberg G. Experience of severe fatigue long after stroke and its relation to depressive symptoms and disease characteristics. Eur Neurol. 2001;45:28–33.
6. Glader E-L, Stegmayr B, Asplund K. Poststroke fatigue: a 2-year follow-up study of stroke patients in Sweden. Stroke. 2002;33:1327–1333.
7. Carlsson GE, Moller A, Blomstrand C. Consequences of mild stroke in persons <75 years—a 1-year follow-up. Cerebrovasc Dis. 2003;16:383–388.
8. Morley W, Jackson K, Mead G. Fatigue after stroke: neglected but important. Age Ageing. 2005;34:313.
9. Krupp LB, LaRocca NG, Muir-Nash J, Steinberg AD. The fatigue severity scale. Arch Neurol. 1989;46:1121–1123.
10. Fisk JD, Pontefract A, Ritvo PG, Archibald CJ, Murray TJ. The impact of fatigue on patients with multiple sclerosis. Can J Neurol Sci. 1994;21:9–14.
11. Peck DF, Shapiro CM. Guidelines for the Construction, Selection and Interpretation of Measurement Devices. In: Bowling A, ed. Measuring Health: A Review of Quality of Life Scales and Measurement Scales. Milton Keynes, Philadelphia: Open University Press; 1990:1–12.
12. Altman DG. Practical Statistics for Medical Research. London: Chapman and Hall; 1991.
13. Bland JM, Altman DG. Statistical methods for assessing agreement between two methods of clinical measurement. Lancet. 1986;1:307–310.
14. Medical Outcomes Trust. SF-36 Health Survey: Scoring Manual for English Language Adaptations. Boston: Medical Outcomes Trust; 1994.
15. McNair DM, Lorr M. An analysis of mood in neurotics. J Abnorm Soc Psychol. 1964;69:620–627.
16. Michielsen HJ, De Vries J, Van Heck GL. Psychometric qualities of a brief self-rated fatigue measure: the Fatigue Assessment Scale. J Psychosom Res. 2003;54:345–353.
17. Stein KD, Martin SC, Hann DM, Jacobsen PB. A multidimensional measure of fatigue for use with cancer patients. Cancer Pract. 1998;6:143–152.
18. Mendoza TR, Wang XS, Cleeland CS, Morrissey M, Johnson BA, Wendt JK, Huber SL. The rapid assessment of fatigue severity in cancer patients: use of the Brief Fatigue Inventory. Cancer. 1999;85:1186–1196.
19. Walters SJ, Munro JF, Brazier JE. Using the SF-36 with older adults: a cross-sectional community based survey. Age Ageing. 2001;30:337–343.