A structured collateral interview regarding cognition and function (the IQCODE) for assessment of possible dementia

Numbers of people with dementia and other cognitive problems are increasing globally. Early diagnosis of dementia is recommended, but there is no agreement on the best approach or on how non-memory specialists should assess patients. A potential strategy is to interview friends or family of the subject to assess for change in function or cognition. Various methods of this collateral interview are available, and the most commonly used is the Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE). We searched multiple databases of published research for all papers relating to the accuracy of the IQCODE for identifying those with dementia. We found eleven studies that tested the diagnostic accuracy of the IQCODE in community dwelling individuals, and we were able to combine their findings to give a summary result. We compared two forms of the IQCODE questionnaire and found that a short form with fewer questions was just as accurate as the original longer questionnaire. The overall accuracy of the IQCODE was reasonable although not perfect. If the IQCODE were to be used on its own for assessing large populations of older adults, it would label many people as having dementia who do not have the disease and would also miss the diagnosis in a substantial proportion.

Authors' conclusions: 

Published data suggest that if using the IQCODE for community dwelling older adults, the 16-item IQCODE may be preferable to the traditional scale due to its lower test burden and no obvious difference in accuracy. Although IQCODE test accuracy is in a range that many would consider 'reasonable', in the context of community or population settings the use of the IQCODE alone would result in substantial misdiagnosis and false reassurance. Across the included studies there were issues with heterogeneity, several potential biases and suboptimal reporting quality.

Background: 

Various tools exist for initial assessment of possible dementia with no consensus on the optimal assessment method. Instruments that use collateral sources to assess change in cognitive function over time may have particular utility. The most commonly used informant dementia assessment is the Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE).

A synthesis of the available data regarding IQCODE accuracy will help inform cognitive assessment strategies for clinical practice, research and policy.

Objectives: 

Our primary objective was to determine the accuracy of the informant-based questionnaire IQCODE for detection of dementia within community dwelling populations.

Our secondary objective was to describe the effect of heterogeneity on the summary estimates. We were particularly interested in the traditional 26-item scale versus the 16-item short form, and in the language of administration. We explored the effect of varying the threshold IQCODE score used to define 'test positivity'.

Search strategy: 

We searched the following sources on 28 January 2013: ALOIS (Cochrane Dementia and Cognitive Improvement Group), MEDLINE (OvidSP), EMBASE (OvidSP), PsycINFO (OvidSP), BIOSIS Previews (ISI Web of Knowledge), Web of Science with Conference Proceedings (ISI Web of Knowledge), and LILACS (BIREME). We also searched sources relevant or specific to diagnostic test accuracy: MEDION (Universities of Maastricht and Leuven); DARE (York University); ARIF (Birmingham University). We used sensitive search terms based on MeSH terms and other controlled vocabulary.

Selection criteria: 

We selected those studies performed in community settings that used (not necessarily exclusively) the IQCODE to assess for the presence of dementia, and where the dementia diagnosis was confirmed with clinical assessment. Our intention in limiting the search to a 'community' setting was to include those studies closest to population level assessment. Within our predefined community inclusion criteria, there were relevant papers that fulfilled our definition of community dwelling but represented a selected population, for example stroke survivors. We included these studies but performed sensitivity analyses to assess the effects of these less representative populations on the summary results.

Data collection and analysis: 

We screened all titles generated by the electronic database searches and reviewed the abstracts of all potentially relevant studies. Full papers were assessed for eligibility and data were extracted by two independent assessors. For quality assessment (risk of bias and applicability) we used the QUADAS-2 tool. We included test accuracy data on the IQCODE used at predefined diagnostic thresholds. Where data allowed, we performed meta-analyses to calculate summary values of sensitivity and specificity with corresponding 95% confidence intervals (CIs). We pre-specified analyses to describe the effect of IQCODE format (traditional or short form) and language of administration.

Main results: 

From 16,144 citations, 71 papers described IQCODE test accuracy. We included 10 papers (11 independent datasets) representing data from 2644 individuals (n = 379 (14%) with dementia). Using IQCODE cut-offs commonly employed in clinical practice (3.3, 3.4, 3.5, 3.6), the sensitivity and specificity of the IQCODE for diagnosis of dementia across the studies were generally above 75%.

Taking an IQCODE threshold of 3.3 (or the closest available), the sensitivity was 0.80 (95% CI 0.75 to 0.85); specificity was 0.84 (95% CI 0.78 to 0.90); the positive likelihood ratio was 5.2 (95% CI 3.7 to 7.5); and the negative likelihood ratio was 0.23 (95% CI 0.19 to 0.29).
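For readers less familiar with these metrics, likelihood ratios follow directly from sensitivity and specificity, and applying Bayes' theorem at the roughly 14% dementia prevalence observed across the included datasets shows how the pooled accuracy translates into predictive values. The sketch below is an illustrative back-of-envelope calculation, not part of the review's analysis; point estimates computed this way will differ slightly from the review's pooled meta-analytic likelihood ratios (5.2 and 0.23).

```python
# Illustrative only: translating the pooled sensitivity/specificity at
# the 3.3 IQCODE cut-off into likelihood ratios and predictive values.
# Values are from the review; the prevalence is 379/2644 across datasets.

sens = 0.80            # pooled sensitivity at threshold 3.3
spec = 0.84            # pooled specificity at threshold 3.3
prev = 379 / 2644      # ~14% dementia prevalence in the included data

# Likelihood ratios from the standard definitions
lr_pos = sens / (1 - spec)        # positive likelihood ratio
lr_neg = (1 - sens) / spec        # negative likelihood ratio

# Predictive values at this prevalence (Bayes' theorem)
ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)

print(f"LR+ = {lr_pos:.1f}, LR- = {lr_neg:.2f}")
print(f"PPV = {ppv:.0%}, NPV = {npv:.0%}")
# → LR+ = 5.0, LR- = 0.24
# → PPV = 46%, NPV = 96%
```

Under these assumptions a positive predictive value of roughly 46% means that more than half of positive IQCODE results would be false positives, which is the "substantial misdiagnosis" the authors warn about when the test is used alone at population level.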

Comparative analysis suggested no significant difference in the test accuracy of the 16- and 26-item IQCODE and no significant difference in test accuracy by language of administration. There was little difference in sensitivity across our predefined diagnostic cut-points.

There was substantial heterogeneity in the included studies. Sensitivity analyses removing potentially unrepresentative populations in these studies made little difference to the pooled data estimates.

The majority of included papers had potential for bias, particularly around participant selection and sampling. The quality of reporting was suboptimal, particularly regarding the timing of assessments and descriptors of reproducibility and inter-observer variability.