Healthcare decision-makers face a large volume of evidence and may lack the capacity to use it in decision-making. Systematic reviews can help by bringing together the relevant evidence using explicit methods that seek to minimise bias. Interventions have been developed to improve the uptake of systematic review evidence. We identified eight studies evaluating the effectiveness of such interventions. There was some evidence that a clear message, based on systematic review evidence, targeted and disseminated to the relevant healthcare professionals may improve evidence-based practice. Interventions aiming to increase decision-makers' awareness and knowledge of systematic review evidence and of evidence-based practice have been evaluated but have shown little effect on practice.
Mass mailing a printed bulletin that summarises systematic review evidence may improve evidence-based practice when there is a single clear message, the change is relatively simple to accomplish, and users are increasingly aware that a change in practice is required. If the intention is to develop awareness and knowledge of systematic review evidence, as well as the skills for implementing it, a multifaceted intervention addressing each of these aims may be required, although there is insufficient evidence to support this approach.
Systematic reviews provide a transparent and robust summary of existing research. However, health system managers, national and local policy makers and healthcare professionals can face several obstacles when attempting to use this evidence, including constraints operating within the health system, the large volume of research evidence and difficulties in adapting systematic review evidence so that it is locally relevant. In an attempt to increase the use of systematic review evidence in decision-making, a number of interventions have been developed. These include summaries of systematic review evidence designed to improve the accessibility of review findings (often referred to as information products) and changes to organisational structures, such as employing specialist groups to synthesise the evidence to inform local decision-making.
To identify and assess the effects of information products based on systematic review evidence, and of organisational supports and processes, designed to support the uptake of systematic review evidence by health system managers, policy makers and healthcare professionals.
We searched The Cochrane Library, MEDLINE, EMBASE, CINAHL, Web of Science and the Health Economic Evaluations Database. We also handsearched two journals (Implementation Science and Evidence & Policy), Cochrane Colloquium abstracts, websites of key organisations and the reference lists of studies considered for inclusion. Searches were run from 1992 to March 2011 on all databases; an update search to March 2012 was run on MEDLINE only.
Randomised controlled trials (RCTs), interrupted time series (ITS) studies and controlled before-after (CBA) studies of interventions designed to aid the use of systematic reviews in healthcare decision-making were considered.
Two review authors independently extracted the data and assessed study quality. For each study we extracted the median value across similar outcomes and reported the range of values underlying each median. When an even number of outcomes was reported, we calculated the median as the midpoint of the two middlemost values.
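The median calculation described above can be sketched as follows. This is a minimal illustration of the stated method, not code from the review; the effect-size values are hypothetical.

```python
def median_effect(values):
    """Median across a study's similar outcomes: the middlemost value,
    or the midpoint of the two middlemost values when the number of
    outcomes is even (as in the review's method)."""
    ordered = sorted(values)
    n = len(ordered)
    mid = n // 2
    if n % 2 == 1:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

# Hypothetical effect sizes (percentage points) for four outcomes in one study
effects = [4.2, -11.2, 18.2, 6.0]
print(median_effect(effects))   # midpoint of the two middlemost values, 4.2 and 6.0
print(min(effects), max(effects))  # the reported range around the median
```

The range reported alongside each median is simply the minimum and maximum of the same set of outcome values.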
We included eight studies evaluating the effectiveness of different interventions designed to support the uptake of systematic review evidence. The overall quality of the evidence was very low to moderate.
Two cluster RCTs evaluated the effectiveness of multifaceted interventions, which included access to systematic reviews relevant to reproductive health, to change obstetric care; high baseline performance on some key clinical indicators limited the findings of these studies. In the first trial, only one of the clinical indicators showed a statistically significant effect on clinical practice in selected obstetric units in Thailand (median effect size 4.2%, range -11.2% to 18.2%), and none did in Mexico (median effect size 3.5%, range 0.1% to 19.0%). In the second cluster RCT there were no statistically significant differences in selected obstetric units in the UK (median effect RR 0.92; range RR 0.57 to RR 1.10). One RCT evaluated the perceived understanding and ease of use of summary of findings tables in Cochrane Reviews; the median difference in responses for the acceptability of including summary of findings tables versus not including them was 16%, range 1% to 28%. One RCT evaluated the effect of an analgesic league table derived from systematic review evidence, and found no statistically significant effect on self-reported pain. Only one RCT evaluated an organisational intervention (which included a knowledge broker, access to a repository of systematic reviews and provision of tailored messages), and it reported no statistically significant difference in evidence-informed programme planning.
Three interrupted time series studies evaluated the dissemination of printed bulletins based on systematic review evidence. A statistically significant reduction was reported in rates of surgery for glue ear in children under 10 years (mean annual decline of -10.1%; 95% CI -7.9 to -12.3) and in children under 15 years (quarterly reduction -0.044; 95% CI -0.080 to -0.011). Distribution to general practitioners of a bulletin on the treatment of depression was associated with a statistically significantly lower quarterly prescribing rate than that predicted from prescribing rates observed before distribution (8.2%; P = 0.005).