Test Accuracy of Informant-Based Cognitive Screening
Tests for Diagnosis of Dementia and Multidomain Cognitive
Impairment in Stroke
Aine McGovern, MBChB; Sarah T. Pendlebury, DPhil; Nishant K. Mishra, MBBS, PhD;
Yuhua Fan, PhD; Terence J. Quinn, MD
Background and Purpose—Poststroke cognitive assessment can be performed using standardized questionnaires designed
for family or caregivers. We sought to describe the test accuracy of such informant-based assessments for diagnosis of
dementia/multidomain cognitive impairment in stroke.
Methods—We performed a systematic review using a sensitive search strategy across multidisciplinary electronic databases.
We created summary test accuracy metrics and described reporting and quality using STARDdem and Quality Assessment
of Diagnostic Accuracy Studies (QUADAS) tools, respectively.
Results—From 1432 titles, we included 11 studies. Ten papers used the Informant Questionnaire on Cognitive Decline in the
Elderly (IQCODE). Four studies described IQCODE for diagnosis of poststroke dementia (n=1197); summary sensitivity:
0.81 (95% confidence interval, 0.60–0.93); summary specificity: 0.83 (95% confidence interval, 0.64–0.93). Five studies
described IQCODE as a tool for predicting future dementia (n=837); summary sensitivity: 0.60 (95% confidence interval,
0.32–0.83); summary specificity: 0.97 (95% confidence interval, 0.70–1.00). All papers had issues with at least 1 aspect
of study reporting or quality.
Conclusions—There is a limited literature on informant cognitive assessments in stroke. IQCODE as a diagnostic tool
has test properties similar to other screening tools; IQCODE as a prognostic tool is specific but insensitive. We found
no papers describing the test accuracy of informant tests for diagnosis of prestroke cognitive decline, few papers on
poststroke dementia, and all included papers had issues with potential bias. (Stroke. 2016;47:329-335. DOI: 10.1161/
STROKEAHA.115.011218.)
Key Words: cognition ◼ dementia ◼ sensitivity ◼ specificity ◼ stroke
International guidelines recommend that we assess all stroke survivors for cognitive disorders; however, there is no consensus on the optimal method of assessment.1,2 A usual first
step is to assess with a direct cognitive screening tool. These
tools are useful, but their diagnostic accuracy is imperfect.3 In the
context of stroke, completion and assessment of such tools
is complicated by stroke-related impairments and physical
illness.4,5
An alternative or complementary approach is to seek a history suggestive of cognitive problems from family or caregivers.6 Using collateral information sources to describe medium
to longer term cognitive change is particularly attractive for
stroke settings because the informant view should be less subject to variation from stroke-related impairments; should not
be biased by educational level or cultural factors and offers
potential for describing prestroke cognition in the acute phase
of stroke.7 Informant assessment can be operationalized using
a validated questionnaire, such as the Informant Questionnaire
on Cognitive Decline in the Elderly (IQCODE).8
Taking IQCODE as an exemplar, in the original validated
versions of the scale, it asks a series of questions about how
cognition and functioning have changed over a 10-year period.
Individual item scores are collated to give a summary score.
Various IQCODE cut-offs have been described to determine
clinically important cognitive decline.7,8 Some have suggested
that a high threshold, for example IQCODE >3.6, should be
used to determine dementia, whereas a more inclusive threshold should be chosen to define any cognitive impairment. This
Received August 15, 2015; final revision received October 27, 2015; accepted November 16, 2015.
From the Department of Academic Geriatric Medicine, NHS Greater Glasgow and Clyde, Glasgow, United Kingdom (A.M.G.); Oxford and Stroke
Prevention Research Unit, Nuffield Department of Clinical Neurosciences, Oxford NIHR Biomedical Research Centre, John Radcliffe Hospital, University
of Oxford, Oxford, United Kingdom (S.T.P.); Department of Academic Geriatric Medicine, Institute of Cardiovascular and Medical Sciences, University
of Glasgow, Glasgow, United Kingdom (N.K.M., T.J.Q.); and Department of Neurology, First Affiliated Hospital of Sun Yat-Sen University, Guangdong,
China (Y.F.).
Presented in part at the European Stroke Organisation Conference 2015, Glasgow, United Kingdom, April 17–19, 2015.
The online-only Data Supplement is available with this article at http://stroke.ahajournals.org/lookup/suppl/doi:10.1161/STROKEAHA.
115.011218/-/DC1.
Correspondence to Terry Quinn, MD, Institute of Cardiovascular and Medical Sciences, University of Glasgow, Room 2.44, New Lister Bldg, Glasgow
Royal Infirmary, 10–16 Alexandra Parade, Glasgow G31 2ER, United Kingdom. E-mail [email protected]
© 2015 American Heart Association, Inc.
Stroke is available at http://stroke.ahajournals.org
DOI: 10.1161/STROKEAHA.115.011218
approach has been used in several important stroke cognition
epidemiological studies.9–11
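As a concrete illustration of the scoring approach described above: each IQCODE item is rated by the informant from 1 ("much improved") to 5 ("much worse"), and studies report either the mean item score (range 1–5) or the raw sum before applying a cut-off. The short sketch below is our own illustration, not taken from the review or from the IQCODE documentation; the ratings and the 3.6 threshold are used only as an example.

```python
# Illustrative sketch of IQCODE-style scoring (not from the review itself).
# Each item is rated 1 ("much improved") to 5 ("much worse"); 3 means "no change".
# Studies report either the mean item score (range 1-5) or the raw sum, and then
# apply a cut-off such as >3.6 for dementia; the threshold here is illustrative.

def score_iqcode(item_ratings, threshold=3.6, use_mean=True):
    """Return (score, positive_screen) for a list of 1-5 informant ratings."""
    if not all(1 <= r <= 5 for r in item_ratings):
        raise ValueError("IQCODE items must be rated 1-5")
    score = sum(item_ratings) / len(item_ratings) if use_mean else sum(item_ratings)
    return score, score > threshold

# Hypothetical 16-item short-form responses, mostly "a bit worse" (4)
ratings = [4, 4, 3, 4, 5, 4, 3, 4, 4, 4, 3, 5, 4, 4, 3, 4]
score, positive = score_iqcode(ratings)
print(f"mean IQCODE score = {score:.2f}, screen positive = {positive}")
```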
Other informant assessments include the 8-item interview to differentiate aging from dementia (AD-8).12 This
short questionnaire seems to have favorable properties compared with IQCODE.13 The Blessed Dementia Scale (BDS)
is a multidomain informant assessment that describes change
in functional ability, activities of daily living, and personality.14 Some assessments include an informant component.
For example, the GP-Cog is a screening test designed for
use in primary care, comprising a direct to patient assessment complemented by 6 informant questions.15 None of
these informant-based tests have been specifically designed
for use in stroke.
Informant-based assessments can be used for 3 broad purposes in stroke care: (1) tools can be used to assess for prestroke cognitive decline; (2) tools can be used to assist in the
process of diagnosing poststroke cognitive impairment; and
(3) tools can be used as a prognostic aid, identifying a period
of cognitive decline that may predict future dementia. In situation (3), the IQCODE is being used to detect early cognitive
change that is not sufficient to warrant a label of dementia but
that may predict risk of future dementia states (Figure 1).
We sought to collate the published evidence describing test properties of informant-based cognitive assessments
when used in stroke settings.
Methods
We performed a systematic review after best practice guidance
in conduct and reporting.16,17 All aspects of searching, data extraction, and quality assessment were performed by 2 independent
researchers (A.M. and N.M.) with access to a third arbitrator
(T.Q.) as needed. We created a protocol describing the search strategy and registered this with the PROSPERO database: PROSPERO 2014:CRD42014014554 (http://www.crd.york.ac.uk/PROSPERO/display_record.asp?ID=CRD42014014554).
Our primary aim was to describe the accuracy of informant-based
questionnaires for diagnosis of dementia or multidomain cognitive
impairment in stroke survivors.
Our index test was any standardized informant-based cognitive
screening assessment. We did not specify how the assessment was
performed. We prespecified subgroup analyses based on possible
uses of informant assessment: (1) informant assessment (usually
performed in the acute stroke period) for prestroke cognitive issues;
(2) informant assessment for contemporaneous assessment of poststroke dementia; and (3) informant assessment (usually performed
in the acute period) for predicting future cognitive issues (delayed
verification).18
Our target condition was dementia or multidomain cognitive
impairment. This broad classification recognizes that a diagnosis of
clinically important cognitive issues can be made without necessarily
assigning a dementia label. We accepted clinical diagnosis of dementia made using any recognized classification system and accepted a
diagnosis of cognitive impairment based on multidomain neuropsychological assessment.
Our population of interest was stroke survivors. We operated no
exclusions based on time since stroke or healthcare setting. In studies with mixed populations, we included those with >70% stroke
survivors.
We developed search terms using a concept-based approach,
combined with a validated stroke filter. We used Medical Subject
Headings and other controlled vocabulary. We included search terms
relating to commonly used informant-based cognitive assessments
(IQCODE, AD-8, GP-Cog, and BDS) as well as generic terms relating to cognitive testing. We searched across multiple, cross-disciplinary electronic databases from inception to April 2015. A collaborator
(Y.F.) performed a focused search of Chinese literature databases. We
also searched conference proceedings from international stroke meetings. The full search strategy is detailed in Table I in the online-only
Data Supplement.
We screened all titles generated by initial searches for relevance.
Abstracts were assessed and potentially eligible studies were reviewed as full manuscripts against inclusion criteria. We checked reference lists of relevant studies and reviews for further titles, repeating
the process until no new titles were found.
We extracted data to a study-specific proforma. For informant
scales with various cut points, we extracted data for all thresholds
available; for studies with assessment at varying times poststroke,
we extracted data for each time point. We created tables describing
characteristics of included studies, characteristics of informant assessments used, and characteristics of patients included. Where possible, for test accuracy data, we constructed 2 by 2 contingency tables
to allow calculation of sensitivity, specificity, and predictive values
against a dichotomous outcome of cognitive impairment/no cognitive impairment. Where data were not immediately accessible from the
paper, we contacted lead authors.
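As an illustration of this calculation step, the minimal sketch below derives sensitivity, specificity, and predictive values from a 2 by 2 table; the counts are hypothetical and the Wilson interval is our choice of 95% confidence interval, not necessarily the method applied by the review software.

```python
# Minimal sketch: test-accuracy metrics from a 2x2 table (hypothetical counts).
from math import sqrt

def wilson_ci(k, n, z=1.96):
    """Approximate 95% Wilson score interval for a proportion k/n."""
    p = k / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return centre - half, centre + half

def accuracy_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV with 95% CIs from 2x2 counts."""
    return {
        "sensitivity": (tp / (tp + fn), wilson_ci(tp, tp + fn)),
        "specificity": (tn / (tn + fp), wilson_ci(tn, tn + fp)),
        "PPV": (tp / (tp + fp), wilson_ci(tp, tp + fp)),
        "NPV": (tn / (tn + fn), wilson_ci(tn, tn + fn)),
    }

# Hypothetical counts only -- not data from any included study
for name, (est, (lo, hi)) in accuracy_metrics(tp=40, fp=20, fn=10, tn=130).items():
    print(f"{name}: {est:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```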
Where data allowed, we calculated sensitivity, specificity, and
corresponding 95% confidence intervals (95% CI) on test accuracy
forest plots (RevMan 5.1, Cochrane Collaboration) and pooled test
accuracy data using the bivariate approach with a bespoke macro.19
We did not formally assess publication bias because standard tests such as funnel plots are not appropriate and there is no consensus on the optimal measure of such bias in test accuracy reviews.
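The bivariate model itself requires specialist software (the MetaDAS macro cited above). Purely as an illustration of what pooling on the logit scale involves, the sketch below applies a univariate DerSimonian-Laird random-effects estimate to per-study sensitivities; this is a simplification of, not a substitute for, the bivariate approach used in the review, and the counts are hypothetical.

```python
# Simplified illustration of random-effects pooling of proportions on the logit
# scale (DerSimonian-Laird). The review used the bivariate model via MetaDAS;
# this univariate sketch is for intuition only. Counts are hypothetical.
from math import exp, log, sqrt

def pool_logit(events, totals):
    """Pooled proportion and 95% CI from per-study event/total counts."""
    y, v = [], []
    for k, n in zip(events, totals):
        p = (k + 0.5) / (n + 1.0)        # continuity correction keeps logits finite
        y.append(log(p / (1 - p)))
        v.append(1.0 / (k + 0.5) + 1.0 / (n - k + 0.5))
    w = [1.0 / vi for vi in v]
    fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, y))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)  # between-study variance
    w_re = [1.0 / (vi + tau2) for vi in v]
    mu = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    se = sqrt(1.0 / sum(w_re))
    expit = lambda x: 1.0 / (1.0 + exp(-x))
    return expit(mu), (expit(mu - 1.96 * se), expit(mu + 1.96 * se))

# Hypothetical per-study counts: true positives / all participants with dementia
sens, (lo, hi) = pool_logit(events=[30, 12, 45, 7], totals=[40, 20, 55, 12])
print(f"summary sensitivity {sens:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```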
We assessed risk of bias and generalizability using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool.20
QUADAS-2 assesses domains of patient selection, application of index test, application of reference standard, and patient flow/timing.
We previously created and validated a series of anchoring statements
for QUADAS-2 for test accuracy work with a dementia reference
standard.21 We assessed quality of reporting using the dementia-specific extension to the Standards for Reporting of Diagnostic Accuracy (STARDdem) tool.16 STARDdem assesses reporting of key items for
dementia test accuracy work across the introduction, methods, results, and discussion sections of a manuscript. Full details of scoring
criteria are given in the online-only Data Supplement.
Results
After deduplication, we screened 1432 titles (Figure 2). After
assessment of abstract or full paper, we included 11 studies in
our review.22–32 Ten studies detailed the IQCODE, including
n=1994 participants (n=465 [23%] with dementia).22–31 One study detailed the BDS, including n=220 patients with data (n=93 [42%] with cognitive impairment).32

Figure 1. Describing 3 differing approaches to informant assessment in stroke care. The figure describes potential uses of informant questionnaires for cognitive assessment in stroke. The Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE) can be used in the acute stroke period to assess for prestroke cognitive decline or at any period poststroke to assess for cognitive impairment at that time. IQCODE has also been used to predict those likely to have a later dementia diagnosis.

Figure 2. Flow diagram detailing search strategy and results.
There was substantial study heterogeneity. Across the
included studies, informant assessments were used for a
variety of purposes across various settings, with varying cut
points (Table 1; Table II in the online-only Data Supplement).
The reporting around application of IQCODE was poor for several studies: it was not clear whether a validated form of the questionnaire was used (n=3 studies), the time period over which IQCODE retrospective review was performed (n=4 studies), or which informants were used (n=3 studies; Table III in the online-only Data Supplement). The generalizability of the patient populations used to assess IQCODE was variable. Where data were reported, there were substantial baseline exclusions, and included patients were young with relatively mild stroke (Table II in the online-only Data Supplement).
Prestroke Dementia
No study described test properties of assessments for the diagnosis of prestroke cognitive decline versus a reference standard for clinical diagnosis (Table 1).
Contemporaneous Diagnosis of Poststroke
Dementia
For IQCODE (4 papers, n=1197 participants),23–26 summary
test metrics were sensitivity: 0.81 (95% CI, 0.60–0.93; range,
0.33–0.88) and specificity: 0.82 (95% CI, 0.64–0.92; range,
0.63–0.98; Figure 3; Table 2).
The BDS had sensitivity 60% and specificity 76% (positive predictive value: 0.73; negative predictive value: 0.64) for
diagnosis of any memory-related impairment.32
Delayed Verification/Prognosis
For IQCODE (5 papers, n=837 participants),26–30 summary
test metrics were sensitivity: 0.60 (95% CI, 0.32–0.83; range,
0.25–0.93) and specificity: 0.97 (95% CI, 0.70–1.00; range,
0.66–1.00; Figure 3; Table 2). Where the approach to analysis was described, most papers assessed cognitive decline in
the immediate period poststroke and described the association
with longer term dementia diagnosis (Figure I in the online-only Data Supplement).
No papers scored low risk of bias on all QUADAS-2 items.
Areas of concern were around patient flow/loss to follow-up
(with substantial loss of IQCODE data, where reported) and
the application of the IQCODE (risk of incorporation bias
and using IQCODE in a nonvalidated way; Figure 4; Table
IV in the online-only Data Supplement). There were issues
with reporting; no papers reported all details as recommended
in STARDdem; particular problems were around reporting of
results and details of statistical analyses. For example, only 2
papers described blinding between those applying the informant test and those performing clinical assessment; 1 paper
described how missing or indeterminate results were handled
and no papers described test reliability or reproducibility
(Table V and Figure II in the online-only Data Supplement).
Discussion
Although there are many informant-based cognitive assessment tools available,6 few have been validated in stroke. Only
the IQCODE had >1 paper describing stroke test properties and there was substantial heterogeneity in test accuracy
reported. Across the included studies, there were issues with
study quality and reporting and we need to be cautious in the
interpretation of these data.
Table 1. Summary of Included Studies

Study | Setting (Recruitment) | Timing of Diagnosis (Poststroke) | Index Test(s) | Diagnostic Test(s) | Diagnostic Test Rater

Informant scale for contemporaneous diagnosis of dementia
Jung et al23* | Community | 3 mo | IQCODE (Korean) | VCI-HS | Neuropsychologist
Tang et al24 | ASU | 3 mo | IQCODE (Chinese) | DSM-IV | Stroke neurologist
Starr et al22 | Community | 4.3 y | IQCODE | NPB | Neuropsychologist
Srikanth et al25 | Community | 3 mo | IQCODE16 | DSM-IV | Neuropsychologist
Narasimhalu et al26 | N/A | N/A | IQCODE16 (Chinese) | DSM-IV | Adjudication panel

Informant scale for diagnosis of prestroke dementia
No relevant papers found on systematic literature review

Informant scale for prospective (delayed verification) diagnosis of dementia
Hénon et al27 | ASU | 6, 12, 24, and 36 mo | IQCODE (French) | ICD-10 | Adjudication panel
Klimkowicz et al28 | ASU | 3 mo | IQCODE (Polish) | DSM-IV | Stroke neurologist
Serrano et al30 | ASU | 3, 12, and 24 mo | IQCODE16 (Spanish) | DSM, NPB | Neurologist
Wagle et al31 | Rehab | 13 mo | IQCODE (Danish) | RBANS | N/A
Selim et al29 | ASU | <18 mo | IQCODE (Egyptian) | ICD-10 | Neurologist
ASU indicates acute stroke unit; DSM, diagnostic and statistical manual; GCF, general cognitive factor; ICD, International Classification of Disease; IQCODE, Informant
Questionnaire on Cognitive Decline in the Elderly; N/A, not available(unknown/not reported); NPB, neuropsychological battery; RBANS, repeatable battery for the
assessment of neuropsychological status; Rehab, rehabilitation setting; and VCI-HS, vascular cognitive impairment harmonisation standards.
*Data from extended abstract and parent studies.
Accepting these caveats, we can draw some conclusions on the properties of IQCODE in stroke. Across all studies, IQCODE showed a pattern of high specificity but a high false-negative rate. IQCODE for assessing poststroke dementia
had reasonable test accuracy. In practice, IQCODE or similar would often be used along with another direct to patient
cognitive test; however, we found no studies validating this
approach.
IQCODE early after stroke has been used as a tool to
predict future dementia. We accept that the use of retrospective assessment to predict future dementia is counterintuitive, and it is interesting that a number of papers used this
approach. When used for this purpose, a positive IQCODE is
likely to be associated with dementia but many that develop
dementia will have an initial IQCODE assessment below the
threshold set for this purpose (specific but lacks sensitivity).
This pattern of trade-off between sensitivity and specificity
changed when IQCODE was used at longer periods after
the stroke event. The interpretation of these data is complicated by limited reporting around how the IQCODE was
applied and this was apparent in our assessments of quality
and reporting.
Informant assessments such as IQCODE are often used in
practice and in research to assess prestroke cognition. There
are many excellent examples of describing prestroke cognition using IQCODE.33,34 Although this approach makes intuitive sense, we found no published reports that validate the test
accuracy of this use of IQCODE.
Our data add to the literature on test properties of brief
cognitive assessments in stroke. The accuracy of IQCODE
for diagnosis of poststroke dementia was similar to summary metrics for other direct to patient cognitive assessments in stroke.1 There is no ideal cognitive test for the use
with stroke populations. Choice of test needs to consider
accuracy but also the purpose of the testing and issues
such as feasibility.35 Although IQCODE is based on a self-completion questionnaire, we should not assume its feasibility and acceptability. We note the high noncompletion rates
of IQCODE in many of these studies, reminding us that
IQCODE requires a suitable informant and that the informant has to understand and complete the questionnaire.
There are some data in nonstroke settings to suggest that
availability of an informant when accessing health care is
associated with abnormal cognition.36 IQCODE and other
informant assessments may be complicated where a spouse
or caregiver also has cognitive issues. In some centres,
informants have a brief cognitive screen to assess their suitability to complete questionnaires, such as IQCODE. It is
unfortunate that no assessment of informants was made in
any of the articles included in this review.
Interpreting our data on IQCODE, we should remember that IQCODE was developed for assessing dementia in
community dwelling older adults and there are many reasons
why the questionnaire may not work well in stroke. Certain
IQCODE items may be difficult to score in the context of
stroke, for example the IQCODE question on using gadgets
may give false positives for stroke survivors with physical
disability but no cognitive deficit. The memory-based focus
of IQCODE may be less appropriate in vascular cognitive
impairment syndromes, where executive function deficits
may predominate. Finally, early physical recovery that can
occur after stroke may be wrongly scored as positive cognitive change on IQCODE, a situation that would not be seen in
Alzheimer disease.
A particular issue in our quality assessment was around the
application of the IQCODE. An issue with IQCODE is that it
may be used in practice to inform the diagnostic formulation.
There were included papers that risked such incorporation bias, comparing IQCODE with a reference standard of clinical diagnosis where the clinical diagnosis included IQCODE data.

Figure 3. Forest plots of sensitivity/specificity in relation to Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE) for diagnosis of dementia. Forest plots detail IQCODE for contemporaneous diagnosis (top) and for prognosis (bottom). For delayed verification studies (prognosis), where data were presented at varying time points, we used 1-year data or closest. CI indicates confidence interval.

The generalizability of included populations was limited and certain studies excluded exactly those patients where
IQCODE may be useful (aphasia and coma). Problems of
selection bias in studies of poststroke cognition are not unique
to IQCODE.4
It is interesting to note how the use of IQCODE in
stroke deviates from that described in the original IQCODE
derivation and validation work. The original IQCODE was
designed to describe cognitive decline over a 10-year period. This
approach is better suited to assess a progressive neurodegenerative condition such as Alzheimer disease, rather than an
acute event, such as stroke. In practice, assessors may use
the IQCODE questions to explore cognitive change since a
stroke event, which may not necessarily be 10 years past.
Some papers also use repeated IQCODE assessments at end
points. Altering the IQCODE in this way is not validated
and so many papers were scored as potential risk of bias/
poor external validity because of this issue. Where IQCODE
covers a 10-year period that includes a stroke event, the
IQCODE result gives information on general cognitive
decline but does not tell us that this decline relates purely
to stroke. These concerns should not dissuade clinicians and
researchers from using informant-based assessments but
suggest that more work is needed around tailoring informant
assessments to fit poststroke populations. The data from our
review do not allow us to give definitive guidance on when
IQCODE should be applied. To avoid recall biases, we would
suggest that informant assessment for prestroke dementia be
performed close to the stroke event but with sufficient time
to allow the informant to adjust to the diagnosis of stroke in
their relative or friend.
Strengths of our study were the comprehensive literature search, including access to non-English language
electronic databases. We acknowledge the between study
heterogeneity and issues with risk of bias, generalizability
and reporting. To allow for data synthesis, we accepted any
clinical diagnosis of a cognitive syndrome as our reference
standard. As a result of this, the summary estimates we
offer are illustrative and should not be interpreted as definitive test accuracy metrics. The number of included papers was
too small to allow meaningful sensitivity analyses around
these issues.
Our data do not allow us to make a definitive statement
on the best available informant assessment.

Table 2. Results of Included Studies

Study | "n" Included | "n" (%) With Dementia | "n" (%) Not Completing IQCODE | Index Test (Threshold) | Summary Results

Informant scale for contemporaneous diagnosis of dementia
Jung et al23 | 353 | 45 | ? | IQCODE ≥3.6 | Sens: 45%; Spec: 96%; PPV: 0.63; NPV: 0.92
Tang et al24 | 189 | 24 | 108 (36%) | IQCODE ≥3.40 | Sens: 33%; Spec: 98%; PPV: 0.34; NPV: 0.98
Starr et al22 | 35 | N/A | 14 (29%) | N/A | Correlation with GCF (r=−0.42, P=0.016)
Srikanth et al25 | 79 | 8 | 20 (20%) | IQCODE ≥3.30 | Sens: 88%; Spec: 63%; PPV: 0.21; NPV: 0.98
Narasimhalu et al26 | 576 | 169 | ? | IQCODE >3.38 | Sens: 86%; Spec: 78%; PPV: 0.62; NPV: 0.93

Informant scale for diagnosis of prestroke dementia
No relevant papers found on systematic literature review

Informant scale for prospective (delayed verification) diagnosis of dementia
Hénon et al27 | 127 at 6/12; 104 at 36/12 | 36 at 36/12 | 56 (22%) | IQCODE ≥78 | Sens: 30%; Spec: 100% (6/12)*; PPV: 1.0; NPV: 0.74
Klimkowicz et al28 | 142 | 26 | ? | IQCODE ≥104 | Sens: 66%; Spec: 100%; PPV: 1.0; NPV: 0.92
Serrano et al30 | 167 at 12/12; 142 at 24/12 | 44 at 12/12; 33 at 24/12 | 33 (10%) | IQCODE >3.35 | Sens: 93%; Spec: 81% (12/12); PPV: 0.55; NPV: 0.98
Wagle et al31 | 104 | 52 | 8 (8%) | IQCODE >3.44 | Sens: 25%; Spec: 92%; PPV: 0.76; NPV: 0.55
Selim et al29 | 66 | 28 | 8 (9%) | IQCODE ≥78 | Sens: 69%; Spec: 66%; PPV: 0.39; NPV: 0.87

Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE) thresholds are given as presented in the primary paper (some are average score and some are raw score); where >1 IQCODE threshold was employed, the primary cutpoint is described. PPV/NPV indicates positive/negative predictive value; Sens, sensitivity; and Spec, specificity.
*Data from extended abstract and parent studies.

Although there are major limitations to the use of IQCODE, using this tool
is probably better than no informant assessment at all. An
important finding is the clinical-research gap around this
form of cognitive testing. Future studies should consider
the test properties of IQCODE and the other available informant assessments. A focus on the use of informant assessments for prestroke cognitive states and on combining
informant questionnaires with other direct to patient tests
is needed.
Acknowledgments
We are grateful to authors who responded to queries or offered to
share data and to Doctor Rachael MacIsaac, Stroke Trials Statistician,
for running analyses.
Sources of Funding
Dr Quinn is supported by a Joint Stroke Association/Chief Scientist
Office Senior Clinical Lectureship. S.T. Pendlebury is supported
by the Oxford National Institute of Health Research Biomedical
Research Centre.
Disclosures
None.
Figure 4. Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2)–based assessment of bias and applicability. Explanations of QUADAS-2 domains are offered in the online-only Data Supplement.

References
1. Hachinski V, Iadecola C, Petersen RC, Breteler MM, Nyenhuis DL,
Black SE, et al. National Institute of Neurological Disorders and
Stroke-Canadian Stroke Network vascular cognitive impairment harmonization standards. Stroke. 2006;37:2220–2241. doi: 10.1161/01.
STR.0000237236.88823.47.
2. Lees R, Fearon P, Harrison JK, Broomfield NM, Quinn TJ. Cognitive and
mood assessment in stroke research: focused review of contemporary studies.
Stroke. 2012;43:1678–1680. doi: 10.1161/STROKEAHA.112.653303.
3. Lees R, Selvarajah J, Fenton C, Pendlebury ST, Langhorne P, Stott DJ,
et al. Test accuracy of cognitive screening tests for diagnosis of dementia
and multidomain cognitive impairment in stroke. Stroke. 2014;45:3008–
3018. doi: 10.1161/STROKEAHA.114.005842.
4. Pendlebury ST, Chen PJ, Bull L, Silver L, Mehta Z, Rothwell PM; Oxford
Vascular Study. Methodological factors in determining rates of dementia in
transient ischemic attack and stroke: (I) impact of baseline selection bias.
Stroke. 2015;46:641–646. doi: 10.1161/STROKEAHA.114.008043.
5. Pendlebury ST, Klaus SP, Thomson RJ, Mehta Z, Wharton RM, Rothwell PM; Oxford Vascular Study. Methodological factors in determining risk of dementia after transient ischemic attack and stroke: (III) applicability of cognitive tests. Stroke. 2015;46:3067–3073. doi:
10.1161/STROKEAHA.115.010290.
6. Cherbuin N, Anstey KJ, Lipnicki DM. Screening for dementia: a review
of self- and informant-assessment instruments. Int Psychogeriatr.
2008;20:431–458. doi: 10.1017/S104161020800673X.
7. Jorm AF. The Informant Questionnaire on Cognitive Decline in the
Elderly (IQCODE): a review. Int Psychogeriatr. 2004;3:275–293.
8. Harrison JK, Fearon P, Noel-Storr AH, McShane R, Stott DJ, Quinn TJ.
Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE)
for the diagnosis of dementia within a secondary care setting. Cochrane
Database Syst Rev. 2015;3:CD010772. doi: 10.1002/14651858.
CD010772.pub2.
9. Appelros P, Nydevik I, Seiger A, Terént A. Predictors of severe stroke:
influence of preexisting dementia and cardiac disorders. Stroke.
2002;33:2357–2362.
10. Barba R, Morin MD, Cemillán C, Delgado C, Domingo J, Del Ser T.
Previous and incident dementia as risk factors for mortality in stroke
patients. Stroke. 2002;33:1993–1998.
11. Hénon H, Pasquier F, Durieu I, Godefroy O, Lucas C, Lebert F, et al.
Preexisting dementia in stroke patients. Baseline frequency, associated
factors, and outcome. Stroke. 1997;28:2429–2436.
12. Galvin JE, Roe CM, Powlishta KK, Coats MA, Muich SJ, Grant E, et
al. The AD8: a brief informant interview to detect dementia. Neurology.
2005;65:559–564. doi: 10.1212/01.wnl.0000172958.95282.2a.
13. Razavi M, Tolea MI, Margrett J, Martin P, Oakland A, Tscholl DW,
et al. Comparison of 2 informant questionnaire screening tools
for dementia and mild cognitive impairment: AD8 and IQCODE.
Alzheimer Dis Assoc Disord. 2014;28:156–161. doi: 10.1097/
WAD.0000000000000008.
14. Blessed G, Tomlinson BE, Roth M. The association between quantitative
measures of dementia and of senile change in the cerebral grey matter of
elderly subjects. Br J Psychiatry. 1968;114:797–811.
15. Brodaty H, Kemp NM, Low LF. Characteristics of the GPCOG, a screening tool for cognitive impairment. Int J Geriatr Psychiatry. 2004;19:870–
874. doi: 10.1002/gps.1167.
16. Noel-Storr AH, McCleery JM, Richard E, Ritchie CW, Flicker L, Cullum
SJ, et al. Reporting standards for studies of diagnostic test accuracy in
dementia: The STARDdem Initiative. Neurology. 2014;83:364–373. doi:
10.1212/WNL.0000000000000621.
17. Stroup DF, Berlin JA, Morton SC, Olkin I, Williamson GD, Rennie D,
et al. Meta-analysis of observational studies in epidemiology: a proposal
for reporting. Meta-analysis Of Observational Studies in Epidemiology
(MOOSE) group. JAMA. 2000;283:2008–2012.
18. Lees RA, Stott DJ, McShane R, Noel-Storr AH, Quinn TJ. Informant
Questionnaire on Cognitive Decline in the Elderly (IQCODE) for the
early diagnosis of dementia across a variety of healthcare settings
(delayed verification). Cochrane Database Syst Rev. 2014;10:CD011333.
19. Takwoingi Y, Deeks JJ. MetaDAS: A SAS Macro for Meta-Analysis
of Diagnostic Accuracy Studies. User Guide Version 1.3. http://srdta.
cochrane.org/. Accessed June 2015.
20. Whiting PF, Rutjes AW, Westwood ME, Mallett S, Deeks JJ, Reitsma
JB, et al; QUADAS-2 Group. QUADAS-2: a revised tool for the
quality assessment of diagnostic accuracy studies. Ann Intern Med.
2011;155:529–536. doi: 10.7326/0003-4819-155-8-201110180-00009.
21. Davis DHJ, Creavin ST, Noel-Storr A, Quinn TJ, Smailagic N, Hyde C, et al. Neuropsychological tests for the diagnosis of Alzheimer’s disease dementia and other dementias: a generic protocol for cross-sectional and delayed-verification studies. Cochrane Database Syst Rev. 2013;9:CD0104608.
22. Starr JM, Nicolson C, Anderson K, Dennis MS, Deary IJ. Correlates
of informant-rated cognitive decline after stroke. Cerebrovasc Dis.
2000;10:214–220.
23. Jung S, Sun Oh M, Yu K-H, Lee J-H, Kim C-H, Kang Y, et al. Is informant questionnaire on cognitive decline in the elderly (IQCODE) applicable for screening post-stroke dementia. Korean Stroke Society. http://
www.stroke.or.kr/journal/abstract/view.php?number=1384&page_
type=. Accessed October 2015.
24. Tang WK, Chan SS, Chiu HF, Wong KS, Kwok TC, Mok V, et al.
Can IQCODE detect poststroke dementia? Int J Geriatr Psychiatry.
2003;18:706–710. doi: 10.1002/gps.908.
25. Srikanth V, Thrift AG, Fryer JL, Saling MM, Dewey HM, Sturm JW,
et al. The validity of brief screening cognitive instruments in the diagnosis of cognitive impairment and dementia after first-ever stroke. Int
Psychogeriatr. 2006;18:295–305. doi: 10.1017/S1041610205002711.
26. Narasimhalu K, Lee J, Auchus AP, Chen CP. Improving detection of
dementia in Asian patients with low education: combining the Mini-Mental State Examination and the Informant Questionnaire on Cognitive
Decline in the Elderly. Dement Geriatr Cogn Disord. 2008;25:17–22.
doi: 10.1159/000111128.
27.Hénon H, Durieu I, Guerouaou D, Lebert F, Pasquier F, Leys D.
Poststroke dementia: incidence and relationship to prestroke cognitive
decline. Neurology. 2001;57:1216–1222.
28.Klimkowicz A, Słowik A, Dziedzic T, Polczyk R, Szczudlik A.
Post-stroke dementia is associated with alpha(1)-antichymotrypsin polymorphism. J Neurol Sci. 2005;234:31–36. doi: 10.1016/j.
jns.2005.02.012.
29. Selim HA. Prestroke cognitive decline and early epileptic seizures after
stroke as predictors of new onset dementia. Egyptian J Neurol, Psychiatr
Neurosurg. 2009;46:579–587.
30. Serrano S, Domingo J, Rodríguez-Garcia E, Castro MD, del Ser T.
Frequency of cognitive impairment without dementia in patients with
stroke: a two-year follow-up study. Stroke. 2007;38:105–110. doi:
10.1161/01.STR.0000251804.13102.c0.
31. Wagle J, Farner L, Flekkøy K, Wyller TB, Sandvik L, Eiklid KL, et
al. Cognitive impairment and the role of the ApoE epsilon4-allele
after stroke–a 13 months follow-up study. Int J Geriatr Psychiatry.
2010;25:833–842. doi: 10.1002/gps.2425.
32. Madureira S, Guerreiro M, Ferro JM. Dementia and cognitive impairment three months after stroke. Eur J Neurol. 2001;8:621–627.
33. Pendlebury ST, Rothwell PM. Prevalence, incidence, and factors associated with pre-stroke and post-stroke dementia: a systematic review
and meta-analysis. Lancet Neurol. 2009;8:1006–1018. doi: 10.1016/
S1474-4422(09)70236-4.
34. Tang WK, Chan SS, Chiu HF, Ungvari GS, Wong KS, Kwok TC, et al.
Frequency and determinants of poststroke dementia in Chinese. Stroke.
2004;35:930–935. doi: 10.1161/01.STR.0000119752.74880.5B.
35. Ritchie CW, Terrera GM, Quinn TJ. Dementia trials and dementia
tribulations: methodological and analytical challenges in dementia
research. Alzheimers Res Ther. 2015;7:31. doi: 10.1186/s13195-015-0113-6.
36. Larner AJ. “Who came with you?” A diagnostic observation in patients
with memory problems? J Neurol Neurosurg Psychiatry. 2005;76:1739.
doi: 10.1136/jnnp.2005.068023.
Supplementary Materials
Supplemental Tables
I. Detailed summary of search strategy
II. Characteristics of included participants
III. Characteristics of informant assessment
IV. QUADAS-2 based assessment anchoring statements
V. STARDdem descriptors of reporting and results
Supplemental Figures
I. Test properties of IQCODE when applied at various time post stroke
II. STARDdem assessment of papers included in this review
Supplemental Table I.
Detailed summary of search strategy
We created sensitive search terms based on concepts of “informant assessment” and “dementia/cognition” with a validated “stroke” filter.
We searched multiple cross-disciplinary, international electronic databases.
ALOIS (Cochrane Dementia and Cognitive Improvement Group)
ARIF (University of Birmingham)
CINAHL (EBSCO)
Embase (OVID)
Lilacs (Bireme)
Medline (OVID)
MEDION Diagnostics database (http://www.mediondatabase.nl/)
Psychinfo (EBSCO)
Web of Science Core Collection (Thomson Reuters)
For Chinese literature we searched the VIP and Wanfang databases
We hand searched conference proceedings for International Stroke Conference; European Stroke Conference; World Stroke Conference and UK
Stroke forum.
We outline the search as performed on Medline (OVID)
Ovid MEDLINE(R) 1946 to Present with Daily Update

1. cerebrovascular disorders/ or exp basal ganglia cerebrovascular disease/ or exp brain ischemia/ or exp carotid artery diseases/ or exp intracranial arterial diseases/ or exp intracranial arteriovenous malformations/ or exp "intracranial embolism and thrombosis"/ or exp intracranial hemorrhages/ or stroke/ or exp brain infarction/ or vasospasm, intracranial/ or vertebral artery dissection/ (932982)
2. (stroke or poststroke or post-stroke or cerebrovasc$ or brain vasc$ or cerebral vasc$ or cva$ or apoplex$ or SAH).tw. (804232)
3. ((brain$ or cerebr$ or cerebell$ or intracran$ or intracerebral) adj5 (isch?emi$ or infarct$ or thrombo$ or emboli$ or occlus$)).tw. (295341)
4. ((brain$ or cerebr$ or cerebell$ or intracerebral or intracranial or subarachnoid) adj5 (haemorrhage$ or hemorrhage$ or haematoma$ or hematoma$ or bleed$)).tw. (183868)
5. hemiplegia/ or exp paresis/ (36146)
6. (hemipleg$ or hemipar$ or paresis or paretic).tw. (104132)
7. IQCODE.ti,ab. (430)
8. "informant questionnaire on cognitive decline in the elderly".ti,ab. (406)
9. "short IQCODE".ti,ab. (11)
10. "IQ code".ti,ab. (25)
11. ("informant* questionnair*" adj3 (dement* or screening)).ti,ab. (35)
12. ("screening test*" adj2 (dement* or alzheimer*)).ti,ab. (422)
13. 7 or 8 or 9 or 10 or 11 or 12 (993)
14. 1 or 2 or 3 or 4 or 5 or 6 (1651596)
15. 13 and 14 (260)
16. "general practitioner assessment of cognition".ti,ab. (23)
17. Blessed Dementia Scale ti.ab. (0)
18. Revised Blessed Dementia Scale ti.ab. (0)
19. "dementia questionnaire".ti,ab. (142)
20. ("informant* questionnair*" adj3 (dement* or screening)).ti,ab. (35)
21. *Neuropsychological Tests/ (18152)
22. *Questionnaires/ (49990)
23. Neuropsychological Tests/mt, st (1888)
24. "neuropsychological test*".ti,ab. (27770)
25. (neuropsychological adj (assess* or evaluat* or test*)).ti,ab. (39049)
26. (neuropsychological adj (assess* or evaluat* or test* or exam* or battery)).ti,ab. (44466)
27. 13 or 16 or 17 or 18 or 19 or 20 or 21 or 22 or 23 or 24 or 25 or 26 (107900)
28. informant.mp. [mp=ti, ab, ot, nm, hw, kf, px, rx, ui, tx, bt, ct, sh, tn, dm, mf, dv, kw] (22005)
29. interview.mp. [mp=ti, ab, ot, nm, hw, kf, px, rx, ui, tx, bt, ct, sh, tn, dm, mf, dv, kw] (531611)
30. interview*.mp. [mp=ti, ab, ot, nm, hw, kf, px, rx, ui, tx, bt, ct, sh, tn, dm, mf, dv, kw] (891856)
31. 28 or 29 or 30 (898901)
32. 14 and 27 (7975)
33. 31 and 32 (1339)
Data were limited to human studies and de-duplicated using Endnote reference management software (version X7, Thomson Reuters Scientific,
PA, USA)
Supplemental Table II. Characteristics of included participants

Study | Number baseline/included | Mean age (years) | TACS n (%) | Stroke severity | Informant inclusion | Other baseline exclusions

Informant scale for contemporaneous diagnosis of dementia
Jung 20151 | 899/363 | 63.9 (SD:12.4) | N/A | NIHSS:3 | Yes | N/A
Tang 20032 | 484/189 | 67.8 (SD:11.8) | N/A | NIHSS:6 (SD:5) | Yes | Nil
Starr 20003 | 49/35 | 74 (SD:7.9) | N/A | BI:17 (range:7-20) | No | Nil
Srikanth 20064 | 99/79 | N/A | 2 (3%) | N/A | No | Aphasia
Narasimhalu 20085 | 843/576 | N/A | N/A | N/A | No | Dementia
Informant scale for diagnosis of pre-stroke dementia
Informant scale for prospective (delayed verification) diagnosis of dementia
Henon 20016 | 258/127 | N/A | N/A | N/A | Yes | IQCODE >104 excluded
Klimkowicz 20057 | 163/142 | 68.7 (SD:11.5) | N/A | N/A | No | Ischaemic stroke only
Serrano 20078 | 327/321 | 70.9 (20-98) | N/A | BI:69.1 (SD:34) | No | Nil
Wagle 20109 | 152/104 | 75.9 (SD:10.9) | 16 (15%) | BI:16 (SD:9) | No | Hearing/visual impairment
Selim 200910 | 91/66 | 63 (40-86) | N/A | N/A | Yes | IQCODE >104 excluded
TACS=Total Anterior Circulation Stroke; N/A=not available; informant exclusion=study only included participants who had an informant available;
NIHSS=National Institutes of Health Stroke Scale; BI=Barthel Index; SD=standard deviation.
Supplemental Table III. Characteristics of informant assessment

Study | No. of items | Language | Time period assessed | Time IQCODE administered | Cutpoint(s) | Informant used

Informant scale for contemporaneous diagnosis of dementia
Jung 20151 | 26 | Korean (valid) | N/A | 3/12 | 3.07-3.26; 3.6 | N/A
Tang 20032 | 26 | Chinese (valid) | 10 year | 3/12 | >3.4 | Family
Starr 20003 | 26 | English | 10 year | 4.3 years | N/A | Carer
Srikanth 20064 | 16 | English | 10 year | 1 year | ≥3.30 | Reliable informant
Narasimhalu 20085 | 16 | Chinese (valid) | N/A | N/A | 3.38 | N/A

Informant scale for diagnosis of pre-stroke dementia

Informant scale for prospective (delayed verification) diagnosis of dementia
Henon 20016 | 26 | French (valid) | 10 year | 2/7 | 104 (dementia); 78 (VCI) | Family or friend
Klimkowicz 20057 | 26 | Polish | N/A | 2/7 | 104 | N/A
Serrano 20078 | 16 | Spanish (valid) | 5 year | 3, 12, 24/12 | >3.35 | Relative
Wagle 20109 | 26 | Danish | 10 year | 18/7 | >3.44 | Proxy
Selim 200910 | 26 | Egyptian | N/A | 2/7 | 104 (dementia); 78 (VCI) | Closest relative
Time period assessed=retrospective time over which change in cognition was assessed using IQCODE questions; N/A=not available; valid=a
reference to a validation study of the translated IQCODE is available; VCI=vascular cognitive impairment.
Supplemental Table IV. QUADAS-2 based assessment anchoring statements

DOMAIN 1: PATIENT SELECTION
Description: Describe methods of patient selection; describe included patients (prior testing, presentation, intended use of index test and setting).
Signalling questions (yes/no/unclear): Was a consecutive or random sample of patients enrolled? Was a case-control design avoided? Did the study avoid inappropriate exclusions?
Risk of bias (high/low/unclear): Could the selection of patients have introduced bias?
Concerns regarding applicability (high/low/unclear): Are there concerns that the included patients do not match the review question?

DOMAIN 2: INDEX TEST
Description: Describe the index test and how it was conducted and interpreted.
Signalling questions (yes/no/unclear): Were the index test results interpreted without knowledge of the results of the reference standard? If a threshold was used, was it pre-specified?
Risk of bias (high/low/unclear): Could the conduct or interpretation of the index test have introduced bias?
Concerns regarding applicability (high/low/unclear): Are there concerns that the index test, its conduct, or interpretation differ from the review question?

DOMAIN 3: REFERENCE STANDARD
Description: Describe the reference standard and how it was conducted and interpreted.
Signalling questions (yes/no/unclear): Is the reference standard likely to correctly classify the target condition? Were the reference standard results interpreted without knowledge of the results of the index test?
Risk of bias (high/low/unclear): Could the reference standard, its conduct, or its interpretation have introduced bias?
Concerns regarding applicability (high/low/unclear): Are there concerns that the target condition as defined by the reference standard does not match the review question?

DOMAIN 4: FLOW AND TIMING
Description: Describe any patients who did not receive the index test(s) and/or reference standard or who were excluded from the 2x2 table (refer to flow diagram); describe the time interval and any interventions between index test(s) and reference standard.
Signalling questions (yes/no/unclear): Was there an appropriate interval between index test(s) and reference standard? Did all patients receive a reference standard? Did all patients receive the same reference standard? Were all patients included in the analysis?
Risk of bias (high/low/unclear): Could the patient flow have introduced bias?
We provide some core anchoring statements for quality assessment of diagnostic test accuracy reviews of IQCODE in dementia. These statements are
designed for use with the QUADAS-2 tool and were derived during a two-day, multidisciplinary focus group.
During the focus group and the piloting/validation of this guidance, it was clear that certain issues were key to assessing quality, while other issues were
important to record but less important for assessing overall quality. To assist, we describe a system wherein certain items can dominate. For these dominant
items, if scored “high risk” then that section of the QUADAS-2 results table is likely to be scored as high risk of bias regardless of other scores. For example, in
dementia diagnostic test accuracy studies, ensuring that clinicians performing dementia assessment are blinded to results of index test is fundamental. If this
blinding was not present then the item on reference standard should be scored “high risk of bias”, regardless of the other contributory elements.
We have detailed how QUADAS-2 has been operationalised for use with dementia reference standard studies below. In these descriptors dominant items are
labelled as "high risk of bias for total section regardless of other items".
In assessing individual items, the score of unclear should only be given if there is genuine uncertainty. In these situations review authors will contact the
relevant study teams for additional information.
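A minimal sketch of how this "dominant item" rule could be operationalised is given below; the shortened item names and the code representation are ours, chosen only to illustrate the scoring logic described above, and do not form part of the validated anchoring statements.

```python
# Illustrative sketch of the "dominant item" rule: if any dominant signalling item
# in a domain is judged high risk, the whole domain is rated high risk of bias
# regardless of the other answers. Item names are shortened paraphrases.

DOMINANT = {
    "patient_selection": {"case_control_design_used", "post_hoc_exclusions"},
    "index_test": {"no_blinding_to_dementia_diagnosis"},
    "reference_standard": {"iqcode_used_within_dementia_diagnosis"},
    "flow_and_timing": {"over_50_percent_iqcode_data_missing"},
}

def domain_risk(domain, judgements):
    """judgements maps item name -> 'high', 'low' or 'unclear'."""
    if any(judgements.get(item) == "high" for item in DOMINANT.get(domain, ())):
        return "high"                      # dominant item forces the domain rating
    if all(level == "low" for level in judgements.values()):
        return "low"
    return "reviewer judgement required"   # non-dominant concerns weighed case by case

print(domain_risk("index_test", {
    "no_blinding_to_dementia_diagnosis": "high",
    "threshold_prespecified": "low",
}))  # -> high
```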
Anchoring statements to assist with assessment for risk of bias
Selection
Was a case-control or similar design avoided?
Designs similar to case-control that may introduce bias are those designs where the study team deliberately increase or decrease the proportion with the
target condition. For example, a population study may be enriched with extra dementia patients from a secondary care setting. Such studies will be
automatically labelled high risk of bias and this will be assessed as a potential source of heterogeneity.
If case-control used then grading will be high risk of bias for total section regardless of other items (in fact case-control studies will not be included in this
review)
Was the sampling method appropriate?
Where sampling is used, the designs least likely to cause bias are consecutive sampling or random sampling. Sampling that is based on volunteers or
selecting participants from a clinic or research resource is prone to bias.
Are exclusion criteria described and appropriate?
The study will be automatically graded as unclear if exclusions are not detailed (pending contact with study authors). Where exclusions are detailed, the study
will be graded as low risk of bias if exclusions are felt to be appropriate by the review authors. Certain exclusions common to many studies of dementia are:
medical instability; terminal disease; alcohol/substance misuse; concomitant psychiatric diagnosis; other neurodegenerative condition.
Post hoc exclusions will be labelled high risk of bias for total section regardless of other items.
Index Test
Was IQCODE assessment performed without knowledge of clinical dementia diagnosis?
Terms such as “blinded” or “independently and without knowledge of” are sufficient and full details of the blinding procedure are not required. This item may
be scored as low risk of bias if explicitly described or if there is a clear temporal pattern to order of testing that precludes the need for formal blinding i.e. all
IQCODE assessments performed before dementia assessment.
If there is no attempt at blinding grading will be high risk of bias for total section regardless of other items.
Were IQCODE thresholds prespecified?
For scales there is often a reference point (in units or categories) above which participants are classified as “test positive”; this may be referred to as
threshold; clinical cut-off or dichotomisation point. A study is classified high risk of bias if the authors define the optimal cut-off post-hoc based on their own
study data. Certain papers may use an alternative methodology for analysis that does not use thresholds and these papers should be classified as low risk of
bias.
Were sufficient data on IQCODE application given for the test to be repeated in an independent study?
Particular points of interest for IQCODE include method of administration (for example, self-completed questionnaire versus direct questioning interview);
nature of informant; language of assessment. If a novel form of IQCODE is used, details of the scale should be included or a reference given to an appropriate
descriptive text. Where IQCODE is used in a novel manner, for example, a translated questionnaire, there should be evidence of validation work.
Reference Standard
Is the assessment used for clinical diagnosis of dementia acceptable?
Commonly used international criteria to assist with clinical diagnosis of dementia include those detailed in DSM-IV and ICD-10. Criteria specific to dementia
subtypes include but are not limited to NINCDS-ADRDA criteria for Alzheimer’s dementia; McKeith criteria for Lewy Body dementia; Lund criteria for
frontotemporal dementias; and the NINDS-AIREN criteria for vascular dementia. Where the criteria used for assessment are not familiar to the review authors
or the Cochrane Dementia and Cognitive Improvement Group this item should be classified as high risk of bias.
Was clinical assessment for dementia performed without knowledge of IQCODE?
Terms such as “blinded” or “independent” are sufficient and full details of the blinding procedure are not required. This may be scored as low risk of bias if
explicitly described or if there is a clear temporal pattern to order of testing, i.e. all dementia assessments performed before IQCODE testing.
Informant rating scales and direct cognitive tests present certain problems. It is accepted that informant interview and cognitive testing is a usual component of
clinical assessment for dementia, however, specific use of the scale under review in the clinical dementia assessment should be scored as high risk of bias.
We have prespecified that dementia diagnosis that explicitly uses IQCODE will be classified as high risk of bias for total section regardless of other items.
Were sufficient data on dementia assessment method given for the assessment to be repeated in an independent study?
The criteria used for clinical assessment are discussed in another item. Particular points of interest for dementia assessment include the background of the
assessor, training/expertise of the assessor; additional information available to inform diagnosis (neuroimaging; neuropsychological testing).
Flow
Was there an appropriate interval between IQCODE and clinical dementia assessment?
For a cross-sectional study design, there is potential for change between assessments. The ideal would be same day assessment but this is not always
feasible. We have set an arbitrary maximum interval of one month between tests, although this may be revised depending on the test and the stability of the
condition of interest.
Did all get the same assessment for dementia regardless of IQCODE result?
There may be scenarios where only those who score “test positive” on IQCODE have a more detailed assessment. Where dementia assessment (or other
reference standard) differs depending on the IQCODE result this should be classified as high risk of bias.
Were all who received IQCODE assessment included in the final analysis?
If the study has drop outs these should be accounted for; a maximum proportion of drop outs to remain low risk of bias has been specified as 20%.
Were missing IQCODE results or un-interpretable IQCODE results reported?
Where missing results are reported if there is substantial attrition (we have set an arbitrary value of 50% missing data) this should be scored as high risk of
bias for total section regardless of other items.
Applicability
Were those included representative of the general population of interest?
Those included should match the intended population as described in the review question. If not already specified in the review inclusion criteria, setting will
be particularly important – the review authors should consider population in terms of symptoms; pre-testing; potential disease prevalence. Studies that use
very selected groups or subgroups will be classified as poor applicability.
Was IQCODE performed consistently and in a manner similar to its use in clinical practice?
IQCODE studies will be judged against the original description of its use.
Was clinical diagnosis of dementia (or other reference standard) made in a manner similar to current clinical practice?
For many reviews, inclusion criteria and assessment for risk of bias will already have assessed the dementia diagnosis. For certain reviews an applicability
statement relating to reference standard may not be applicable. There is the possibility that a form of dementia assessment, although valid, may diagnose a
far larger proportion with disease than would be seen in usual clinical practice. In this instance the item should be rated poor applicability.
Supplemental Table V. STARDdem descriptors of reporting
For each numbered item, the STARD checklist wording is given first; points of particular relevance to dementia, where provided, follow as "Relevance to dementia".

Title/Abstract/Keywords
1. Identify the article as a study of diagnostic accuracy (recommend MeSH heading 'sensitivity and specificity').
Relevance to dementia: Studies reporting a sensitivity/specificity, or from which 2x2 data are derivable, fall within the scope of STARDdem and should be indexed accordingly.

Introduction
2. State the research questions or study aims, such as estimating diagnostic accuracy or comparing accuracy between tests or across participant groups.
Relevance to dementia: Some studies describing aims related to 'prognosis' or 'prediction' may also fall within the remit of STARDdem. Report the test purpose: 'stand-alone' test or an addition to other tests or criteria.

Methods: Participants
3. The study population: the inclusion and exclusion criteria, setting and locations where data were collected. See also Item 4 on recruitment and Item 5 on sampling.
Relevance to dementia: Key inclusion criteria: (a) demographic, especially age; (b) cognition- or disease-related criteria. Accurate description of the target sample is required, including reporting the criteria used to define the study population. Report referral pathways, precise locations of patient recruitment, and where the index test and reference standard were performed. For secondary/tertiary settings it is helpful to report the medical subspecialty or hospital department (e.g. psychiatry, neurology). Diagnostic accuracy studies in dementia are often nested within larger cohort studies; if this is the case, the targeted population for the cohort study and the method of selection into the cohort should be described and/or the parent study cited.
4. Participant recruitment: was recruitment based on presenting symptoms, results from previous tests, or the fact that the participants had received the index tests or the reference standard? See also Item 5 on sampling and Item 16 on participant loss at each stage of the study.
Relevance to dementia: Report whether those in intermediate categories (e.g. possible AD or possible DLB) were excluded.
5. Participant sampling: was the study population a consecutive series of participants defined by the selection criteria in Items 3 and 4? If not, specify how participants were further selected. See also Item 4 on recruitment and Item 16 on participant loss.
Relevance to dementia: Planned analyses showing how the characteristics of the subgroup entering the study differ from the eligible population are strongly recommended (e.g. if a convenience sample has been used because of the invasive nature of the test/s).
6. Data collection: was data collection planned before the index test and reference standard were performed (prospective study) or after (retrospective study)?
Relevance to dementia: Authors should report the timing of the analysis plan with respect to data collection: was the analysis plan set out in a protocol before the index test and reference standard were performed? If not, when was the analysis plan created?

Methods: Test methods
7. The reference standard and its rationale.
Relevance to dementia: For neuropathological and clinical reference standards the diagnostic criteria used should be specified; where relevant, reference should be made to studies validating the criteria. Report if standard consensus clinical criteria incorporate the index test (incorporation bias, rendering blinding of the index test impossible).
8. Technical specifications of material and methods involved, including how and when measurements were taken, and/or cite references for index tests and reference standard. See also Item 10 concerning the person(s) executing the tests.
Relevance to dementia: Use of scales: specify details of administration and which version was used. Clinical diagnostic criteria: state what information was available to inform the diagnoses and how the criteria were applied (e.g. by individual clinicians, by consensus conference, by semi-automated algorithm). Imaging and laboratory tests: specify materials and instruments, including sample handling and concordance with any harmonisation criteria; in new assays describe all steps in detail. Any particular preparation of participants should be described.
9. Definition of and rationale for the units, cut-offs and/or categories of the results of the index tests and the reference standard.
Relevance to dementia: Justify any cut-offs used, as these may vary with clinical context.
10. The number, training and expertise of the persons executing and reading the index tests and the reference standard. See also Item 8.
Relevance to dementia: Especially important where subjective judgments are involved, e.g. the interpretation of neuroimaging results. Report inter- and intra-rater agreement. Reference or describe the content of training materials used. Reference or describe details of laboratory certification and harmonised biomarker assays.
11. Whether or not the readers of the index tests and reference standard were blind (masked) to the results of the other test, and describe any other clinical information available to the readers. See also Item 7.
Relevance to dementia: The index test may form part of the reference standard; this is often referred to as incorporation bias and renders blinding of the index test impossible.

Methods: Statistical methods
12. Methods for calculating or comparing measures of diagnostic accuracy, and the statistical methods used to quantify uncertainty (e.g. 95% confidence intervals). (A worked sketch of these calculations is given after this table.)
13. Methods for calculating test reproducibility, if done.
Relevance to dementia: Applies to the reference standard as well as to the index test; both should be reported or adequately referenced. Report the inter-rater and test-retest reliability of the reference standard as established in the study being reported, rather than simply referring to other studies where reproducibility has been established. The training that image readers receive should be carefully described. Studies in which the accuracy of 'majority' judgements is reported should also report data for the minority judgements. Reports of the impact of training should clearly describe the characteristics of the sample used for training and whether it is representative of the group to which the test will be applied.

Results: Participants
14. When the study was performed, including beginning and end dates of recruitment.
Relevance to dementia: Particularly pertinent to longitudinal (delayed verification) studies; authors should report the recruitment dates of the study (not to be confused with the recruitment dates of the wider cohort study from which it might be drawn) and the beginning (first participant) and end (last participant) dates of the periods during which the index test/s and reference standard were performed. Report the period for the index test and the period for the reference standard separately if this is not otherwise clear.
15. Clinical and demographic characteristics of the study population (at least information on age, gender, spectrum of presenting symptoms). See also Item 18.
Relevance to dementia: Report key demographic variables: age, sex and education. Report the age distribution of the sample in detail. Ethnicity and genetic factors (e.g. APOE genotype) may also be particularly important. The cognitive characteristics are covered in Item 18.
16. The number of participants satisfying the criteria for inclusion who did or did not undergo the index tests and/or the reference standard; describe why participants failed to undergo either test (a flow diagram is strongly recommended). See also Item 3, Item 4 and Item 5.

Results: Test results
17. Time interval between the index tests and the reference standard, and any treatment administered in between.
Relevance to dementia: Specify the follow-up period for all subjects in relation to their outcomes. It should be specified whether or not participants had received any treatments that might affect disease progression.
18. Distribution of severity of disease (define criteria) in those with the target condition; other diagnoses in participants without the target condition.
Relevance to dementia: Include a description of the severity of the target condition at the time the index test is performed, usually captured by a cognitive score and/or duration of symptoms. For delayed verification studies, report the distribution of severity of disease and the degree of certainty about the diagnosis (such as probable/possible) at the time of case ascertainment. Report other diagnoses (not the target condition) and the relationship of the test to these other diagnoses.
19. A cross tabulation of the results of the index tests (including indeterminate and missing results) by the results of the reference standard; for continuous results, the distribution of the test results by the results of the reference standard.
20. Any adverse events from performing the index tests or the reference standard.
Relevance to dementia: Report all adverse events, even if unlikely to be related to the diagnostic test performed.

Results: Estimates
21. Estimates of diagnostic accuracy and measures of statistical uncertainty (e.g. 95% confidence intervals). See also Item 12.
22. How indeterminate results, missing data and outliers of the index tests were handled.
23. Estimates of variability of diagnostic accuracy between subgroups of participants, readers or centers, if done.
24. Estimates of test reproducibility, if done. See also Item 13.

Discussion
25. Discuss the clinical applicability of the study findings.
Relevance to dementia: Discuss differences in age and comorbidity between the study population and the patients typically seen in clinical practice. Discuss whether the reported data demonstrate 'added' or 'incremental' value of the index test over and above other routine diagnostic tests. Identify the stage of development of the test (e.g. proof of concept; defining accuracy in a typical spectrum of patients). Discuss the further research needed to make the test applicable to the population in whom it is likely to be applied in practice.
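The 2x2 cross tabulation requested in Item 19 is the raw material for the accuracy estimates and confidence intervals covered by Items 12 and 21. A minimal worked sketch is given below; the cell counts are hypothetical and are not drawn from any included study, a simple normal-approximation (Wald) interval is used for brevity, and the summary estimates reported in the main paper were instead pooled across studies by meta-analysis.

\[
\text{Sensitivity}=\frac{TP}{TP+FN},\qquad
\text{Specificity}=\frac{TN}{TN+FP},\qquad
95\%\ \text{CI}\approx \hat{p}\pm 1.96\sqrt{\frac{\hat{p}(1-\hat{p})}{n}}
\]

With hypothetical counts TP = 40, FN = 10, FP = 20, TN = 90:

\[
\text{Sensitivity}=\frac{40}{50}=0.80,\qquad
0.80\pm 1.96\sqrt{\frac{0.80\times 0.20}{50}}\approx 0.80\pm 0.11\ \ (0.69\ \text{to}\ 0.91)
\]
\[
\text{Specificity}=\frac{90}{110}\approx 0.82,\qquad
0.82\pm 1.96\sqrt{\frac{0.82\times 0.18}{110}}\approx 0.82\pm 0.07\ \ (0.75\ \text{to}\ 0.89)
\]

Exact (Clopper-Pearson) or Wilson score intervals are generally preferable when cell counts are small; the Wald form is shown only to illustrate the reporting requirement.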
Supplemental Figure I. Test properties of IQCODE when applied at various times poststroke
Supplemental Figure II. STARDdem assessment of papers included in this review

Columns (studies, in order): Henon 2001; Jung 2015; Klimkowicz 2005; Narasimhalu 2008; Selim 2009; Serrano 2007; Srikanth 2006; Starr 2000; Tang 2003; Wagle 2010.

STARDdem item | Henon | Jung | Klimkowicz | Narasimhalu | Selim | Serrano | Srikanth | Starr | Tang | Wagle

Selection and topic
1 (Title) | N | Y | N | Y | N | N | Y | N | Y | N
2 (Introduction) | N | Y | Y | Y | N/A | N/A | Y | Y | Y | N/A
25 (Discussion) | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y

Methods: Participants
3 | N | Y | N | Y | N | Y | Y | N | N | Y
4 | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y
5 | Y | N | Y | N | Y | Y | Y | N | Y | Y
6 | Y | Y | Y | N | Y | Y | Y | N | Y | Y

Methods: Test methods
7 | Y | Y | Y | Y | Y | Y | Y | N | Y | Y
8 | Y | Y | Y | N | Y | Y | Y | N | Y | N
9 | N | Y | N | N | N | N | N | Y | N | N
10 | N | Y | N | N | N | Y | Y | N | Y | N
11 | N | Y | N | N | N | N | Y | N | N | N

Methods: Statistics
12 | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y
13 | N | N | N | N | N | N | N | N | N | N

Results: Participants
14 | Y | N | Y | N | Y | Y | N | N | N | Y
15 | Y | Y | N | N | Y | Y | N | N | Y | Y
16 | Y | N | N | N | Y | N | Y | N | Y | N

Results: Test results
17 | N | N | N | N | Y | Y | N | Y | N | N
18 | N | N | N | N | N | Y | N | N | N | N
19 | N | N | N | N | N | Y | N | N | N | N
20 | N | N | N | N | N | N | N | N | N | N

Results: Estimates
21 | Y | Y | Y | Y | Y | Y | Y | N | Y | Y
22 | Y | N | N | N | N | N | N | N | N | N
23 | N | N | N | N | N | N | N | N | N | N
24 | N | N | N | N | N | N | N | N | N | N

Y (green) = this item reported; N (red) = this item not reported; N/A = not applicable to this paper.
Supplemental References
1. Jung S, Sun Oh M, Yu K-H, Lee J-H, Kim C-H, Kang Y, et al. Is informant questionnaire on cognitive decline in the elderly (IQCODE) applicable for screening post-stroke dementia? Korean Stroke Society. www.stroke.or.kr/journal/abstract/view.php?number=1384&page_type= Accessed October 2015.
2. Tang WK, Chan SSM, Chiu HFK, Wong KS, Kwok TCY, Mok V, et al. Can IQCODE detect poststroke dementia? International Journal of Geriatric Psychiatry. 2003;18:706-710.
3. Starr JM, Nicolson C, Anderson K, Dennis MS, Deary IJ. Correlates of informant-rated cognitive decline after stroke. Cerebrovascular Diseases. 2000;10:214-220.
4. Srikanth V, Thrift AG, Fryer JL, Saling MM, Dewey HM, Sturm JW, et al. The validity of brief screening cognitive instruments in the diagnosis of cognitive impairment and dementia after first-ever stroke. International Psychogeriatrics. 2006;18:295-305.
5. Narasimhalu K, Lee J, Auchus AP, Chen CPLH. Improving detection of dementia in Asian patients with low education: combining the Mini-Mental State Examination and the Informant Questionnaire on Cognitive Decline in the Elderly. Dementia and Geriatric Cognitive Disorders. 2008;25:17-22.
6. Henon H, Pasquier F, Durieu I. Pre-existing dementia in stroke patients: baseline frequency, associated factors, and outcome. Stroke. 1997;28:2429-2436.
7. Klimkowicz A, Slowik A, Dziedzic T, Polczyk R, Szczudlik A. Post-stroke dementia is associated with alpha1-antichymotrypsin polymorphism. Journal of the Neurological Sciences. 2005;234:31-36.
8. Serrano S, Domingo J, Rodríguez-Garcia E, Castro M, del Ser T. Frequency of cognitive impairment without dementia in patients with stroke: a two-year follow-up study. Stroke. 2007;38:105-110.
9. Wagle J, Farner L, Flekkoy K, Wyller TB, Sandvik L, Eiklid KL, et al. Cognitive impairment and the role of the ApoE epsilon4-allele after stroke: a 13 months follow-up study. International Journal of Geriatric Psychiatry. 2010;25:833-842.
10. Selim HA. Prestroke cognitive decline and early epileptic seizures after stroke as predictors of new onset dementia. Egyptian Journal of Neurology, Psychiatry and Neurosurgery. 2009;46:579-587.