Why is this question important?
Dementia is a chronic and progressive condition that affects people's memory and ability to function day-to-day. A clinical diagnosis of dementia usually involves brain scans, physical examinations and history taking. As a first step, we often use memory and thinking tests to identify people who need further assessment. Traditionally these tests are performed in-person, but modifications of the tests allow them to be used over the telephone or via video calls – sometimes called 'remote assessment'.
The need for remote assessment has become particularly urgent due to COVID-19. However, there are potential benefits of remote assessment beyond the COVID-19 pandemic. Physically attending appointments can be difficult for some people and remote assessments offer greater convenience. Remote assessments are also useful in research, as a large number of people can be reached in a fairly short amount of time.
A test delivered by telephone may not be as good as the in-person equivalent, and getting these tests right is important. On the one hand, if a test suggests someone has dementia when they do not (called a false positive), this can have an emotional impact on the person and their family. On the other hand, not identifying memory and thinking problems when they are present (called a false negative) means that the person does not get the treatment and support that they need.
What was the aim of this review?
We aimed to assess whether memory and thinking tests carried out by telephone or video call can detect dementia.
What was studied in this review?
We looked at various memory and thinking tests. Many tests have been developed over time and they differ in their content and application, but most are based on a modification of a traditional in-person test.
What were the main results of this review?
The review included 31 studies, using 19 different memory tests, with a total of 3075 participants.
Only seven tests were relevant to our question regarding the accuracy of remote testing. With the limited number of studies, estimates of the accuracy of these tests are imprecise. Our review suggests that remote tests could correctly identify people with dementia between 26% and 100% of the time, and could correctly rule out dementia 65% to 100% of the time.
The remaining 24 studies compared a remote test with the face-to-face equivalent. These studies suggested that remote test scores usually agreed with in-person testing, but this was not perfect.
How reliable are the results of the studies in this review?
In these studies, a clinical diagnosis of dementia was used as the reference (gold) standard. We identified a number of issues in the design, conduct and reporting of the studies. A particular issue was the selection of participants: studies often excluded people with hearing or language impairments, even though such impairments may complicate remote testing in practice.
Who do the results of this study apply to?
Most studies investigated older adults (over 65 years). The findings may not be representative of all older adults with dementia, as some studies only examined specific groups of people, for example, after stroke. The studies were usually performed in specialist centres by experts. So, we do not know how well these tests identify dementia in routine community practice.
What are the implications of this review?
The review highlights the lack of high-quality research describing the accuracy of telephone- and video call-based memory and thinking tests. There were many differences between the studies included in this review, such as the type of test used, the participants included, the setting in which the study was carried out and the language of testing. This made comparisons between studies difficult. Our review suggests that remote assessments and in-person assessments are not always equivalent. In situations where access to in-person assessment is difficult, remote testing could be used as a useful first step. Ideally, this should be followed up with an in-person assessment before a diagnosis is made. Due to the limited number of studies, and differences in the way studies were carried out, we cannot recommend one particular remote test for the assessment of dementia.
How up to date is this review?
This search was performed in June 2021.
Despite the common and increasing use of remote cognitive assessment, supporting evidence on test accuracy is limited. Available data do not allow us to suggest a preferred test. Remote testing is complex, and this is reflected in the heterogeneity seen in tests used, their application, and their analysis. More research is needed to describe accuracy of contemporary approaches to remote cognitive assessment. While data comparing remote and in-person use of a test were reassuring, thresholds and scoring rules derived from in-person testing may not be applicable when the equivalent test is adapted for remote use.
Remote cognitive assessments are increasingly needed to assist in the detection of cognitive disorders, but the diagnostic accuracy of telephone- and video-based cognitive screening remains unclear.
To assess the test accuracy of any multidomain cognitive test delivered remotely for the diagnosis of any form of dementia.
To assess for potential differences in cognitive test scoring when using a remote platform, and where a remote screener was compared to the equivalent face-to-face test.
We searched ALOIS, the Cochrane Dementia and Cognitive Improvement Group Specialized Register, CENTRAL, MEDLINE, Embase, PsycINFO, CINAHL, Web of Science, LILACS, and ClinicalTrials.gov (www.clinicaltrials.gov/) databases on 2 June 2021. We performed forward and backward searching of included citations.
We included cross-sectional studies, where a remote, multidomain assessment was administered alongside a clinical diagnosis of dementia or equivalent face-to-face test.
Two review authors independently assessed risk of bias and extracted data; a third review author moderated disagreements. Our primary analysis was the accuracy of remote assessments against a clinical diagnosis of dementia. Where data were available, we reported test accuracy as sensitivity and specificity. We did not perform quantitative meta-analysis as there were too few studies at individual test level.
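For readers less familiar with these measures, sensitivity and specificity have the standard definitions below (these are general formulas, not specific to this review), written in terms of true positives (TP), false negatives (FN), true negatives (TN) and false positives (FP):

```latex
% Standard diagnostic accuracy measures:
% TP = true positives, FN = false negatives,
% TN = true negatives, FP = false positives.
\text{Sensitivity} = \frac{TP}{TP + FN}, \qquad
\text{Specificity} = \frac{TN}{TN + FP}
```

For example, a sensitivity of 26% would mean that a test correctly identified only 26 of every 100 people who truly had dementia, while a specificity of 65% would mean it correctly ruled out dementia in 65 of every 100 people without the condition.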
For those studies comparing remote versus in-person use of an equivalent screening test, if data allowed, we described correlations, reliability, differences in scores and the proportion classified as having cognitive impairment for each test.
The review contains 31 studies (19 differing tests, 3075 participants), of which seven studies (six telephone, one video call, 756 participants) were relevant to our primary objective of describing test accuracy against a clinical diagnosis of dementia. All studies were at unclear or high risk of bias in at least one domain, but were low risk in applicability to the review question. Overall, sensitivity of remote tools varied with values between 26% and 100%, and specificity between 65% and 100%, with no clearly superior test.
Across the 24 papers comparing equivalent remote and in-person tests (14 telephone, 10 video call), agreement between tests was good, but rarely perfect (correlation coefficient range: 0.48 to 0.98).