Most trials follow people up after recruitment to collect data through personal contact. Some trials obtain data from other sources, such as routinely collected data or disease registers. There are many ways to collect data from people in trials, including letters, the internet, telephone calls, text messages, face-to-face meetings or the return of medical test kits. Most trials have missing data, for example because people are too busy to reply, are unable to attend a clinic, have moved, or no longer want to take part. Sometimes data have not been recorded at study sites, or have not been sent to the trial co-ordinating centre. Researchers call this 'loss to follow-up', 'drop out' or 'attrition', and it can affect a trial's results. For example, if the people with the most or least severe symptoms do not return questionnaires or attend a follow-up visit, the trial's findings will be biased. Researchers use many methods to keep people in trials. These encourage people to send back data by questionnaire, return to a clinic or hospital for trial-related tests, or be seen by a health or community care worker.
This review identified methods that encouraged people to stay in trials. We searched scientific databases for randomised studies (where people are allocated to one of two or more possible treatments in a random manner) or quasi-randomised studies (where allocation is not truly random, e.g. based on date of birth or the order in which people attended clinic) that compared methods of increasing retention in trials. We included trials with participants of any age, gender, ethnic, cultural, language or geographic group.
The methods that appeared to work were offering or giving a small amount of money for the return of a completed questionnaire, and enclosing a small amount of money with a questionnaire together with the promise of a further small amount on its return. The effect of other ways to keep people in trials is still unclear, and more research is needed to see whether they really work. These methods include shorter questionnaires; sending questionnaires by recorded delivery; using a trial design in which people know which treatment they will receive; sending specially designed letters with a self-addressed stamped return envelope followed by a number of reminders; offering a donation to charity or entry into a prize draw; sending a reminder to the study site about participants due for follow-up; sending questionnaires close to the time the person was last followed up; case management of people's follow-up; conducting follow-up by telephone; and changing the order of questionnaire questions.
Quality of evidence
The methods that we identified were tested in trials run across many different disease areas and settings, and some were tested in only one trial. Therefore, more studies are needed to help decide whether our findings could be applied in other research fields.
Most of the retention trials that we identified evaluated questionnaire response. There were few evaluations of ways to improve participants' return to trial sites for follow-up. Monetary incentives and offers of monetary incentives increased postal and electronic questionnaire response. Some other strategies evaluated in single trials looked promising but need further evaluation. Application of the findings of this review would depend on trial setting, population, disease area, data collection and follow-up procedures.
Background
Loss to follow-up from randomised trials can introduce bias and reduce study power, affecting the generalisability, validity and reliability of results. Many strategies are used to reduce loss to follow-up and improve retention but few have been formally evaluated.
Objectives
To quantify the effect of strategies to improve retention on the proportion of participants retained in randomised trials and to investigate if the effect varied by trial strategy and trial setting.
Search methods
We searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, PreMEDLINE, EMBASE, PsycINFO, DARE, CINAHL, Campbell Collaboration’s Social, Psychological, Educational and Criminological Trials Register, and ERIC. We handsearched conference proceedings and publication reference lists for eligible retention trials. We also surveyed all UK Clinical Trials Units to identify further studies.
Selection criteria
We included eligible retention trials of randomised or quasi-randomised evaluations of strategies to increase retention that were embedded in 'host' randomised trials from all disease areas and healthcare settings. We excluded studies aiming to increase treatment compliance.
Data collection and analysis
We contacted authors to supplement or confirm data that we had extracted. For retention trials, we recorded data on the method of randomisation, type of strategy evaluated, comparator, primary outcome, planned sample size, numbers randomised and numbers retained. We used risk ratios (RR) to evaluate the effectiveness of the addition of strategies to improve retention. We assessed heterogeneity between trials using the Chi2 and I2 statistics. For main trials that hosted retention trials, we extracted data on disease area, intervention, population, healthcare setting, sequence generation and allocation concealment.
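To illustrate the risk ratio analysis described above, here is a minimal sketch of how an RR and its large-sample 95% confidence interval can be computed for a single two-arm retention trial. The function name and the retention counts used in the example are hypothetical, not taken from the review:

```python
import math

def risk_ratio(events_int, n_int, events_ctl, n_ctl):
    """Risk ratio with a 95% CI for one retention comparison.

    events_*: participants retained (e.g. questionnaires returned)
    n_*: participants randomised to that arm
    Uses the standard large-sample formula: SE of log(RR) is
    sqrt(1/a - 1/n1 + 1/b - 1/n2).
    """
    rr = (events_int / n_int) / (events_ctl / n_ctl)
    se_log_rr = math.sqrt(
        1 / events_int - 1 / n_int + 1 / events_ctl - 1 / n_ctl
    )
    lower = math.exp(math.log(rr) - 1.96 * se_log_rr)
    upper = math.exp(math.log(rr) + 1.96 * se_log_rr)
    return rr, lower, upper

# Hypothetical example: 160/200 retained with an incentive
# versus 140/200 without one.
rr, lower, upper = risk_ratio(160, 200, 140, 200)
print(f"RR {rr:.2f}; 95% CI {lower:.2f} to {upper:.2f}")
```

An RR above 1 with a confidence interval excluding 1 (as reported for the monetary incentive comparisons) indicates that the strategy retained a higher proportion of participants than its comparator.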
Main results
We identified 38 eligible retention trials. Included trials evaluated six broad types of strategies to improve retention: incentives, communication strategies, new questionnaire formats, participant case management, and behavioural and methodological interventions. For 34 of the included trials, retention was response to postal and electronic questionnaires with or without medical test kits. For four trials, retention was the number of participants remaining in the trial. Included trials were conducted across a spectrum of disease areas, countries, healthcare and community settings. Strategies that improved trial retention were the addition of a monetary incentive compared with no incentive for return of trial-related postal questionnaires (RR 1.18; 95% CI 1.09 to 1.28, P value < 0.0001), the addition of an offer of a monetary incentive compared with no offer for return of electronic questionnaires (RR 1.25; 95% CI 1.14 to 1.38, P value < 0.00001) and an offer of a GBP20 voucher compared with GBP10 for return of postal questionnaires and biomedical test kits (RR 1.12; 95% CI 1.04 to 1.22, P value < 0.005). The evidence that shorter questionnaires are better than longer questionnaires was unclear (RR 1.04; 95% CI 1.00 to 1.08, P value = 0.07), as was the evidence for questionnaires relevant to the disease/condition (RR 1.07; 95% CI 1.01 to 1.14). Although each was based on the results of a single trial, recorded delivery of questionnaires seemed to be more effective than telephone reminders (RR 2.08; 95% CI 1.11 to 3.87, P value = 0.02) and a 'package' of postal communication strategies with reminder letters appeared to be better than standard procedures (RR 1.43; 95% CI 1.22 to 1.67, P value < 0.0001). An open trial design also appeared more effective than a blind trial design for return of questionnaires in one fracture prevention trial (RR 1.37; 95% CI 1.16 to 1.63, P value = 0.0003).
There was no good evidence that the addition of a non-monetary incentive, an offer of a non-monetary incentive, 'enhanced' letters, letters delivered by priority post, additional reminders, or questionnaire question order either increased or decreased trial questionnaire response/retention. There was also no evidence that a telephone survey was either more or less effective than a monetary incentive plus a questionnaire. As our analyses are based on single trials, the effects on questionnaire response of offering charity donations, sending reminders to trial sites and the timing of questionnaire delivery may need further evaluation. Case management and behavioural strategies used for trial retention may also warrant further evaluation.