Systematic reviews: Catalyst for change

Valid and reliable research is becoming increasingly important in shaping best practice in healthcare, and an understanding of systematic reviews and how they can be implemented in practice is becoming essential for all healthcare professionals. Systematic reviews can be defined as “explicitly formulated, reproducible and up-to-date summaries of the effects of healthcare interventions”.1


Systematic reviews are regarded as a cornerstone of evidence-based medicine. Mark Petticrew notes that this has led to misconceptions about their purpose and methods, including the belief that they apply only to randomised controlled trials (RCTs) and cannot cope with other forms of evidence, such as that offered by qualitative research.2 This article considers the use of systematic reviews as a methodology for forming guidelines for practice in public health and occupational health (OH). Here, a systematic review is defined as a “scientific tool that can be used to summarise, appraise, and communicate the results and implications of otherwise unmanageable quantities of research”.3


Practitioners and decision-makers are encouraged to make use of the latest research and information about best practice, and to ensure that decisions are demonstrably rooted in this knowledge.4 For the purposes of this article, the National Institute for Health and Clinical Excellence (Nice) guidance on managing long-term sickness absence and incapacity for work will be used as an exemplar of a systematic review in public health.5 The use of systematic reviews in public health is scant, which may be due to the daunting array of theoretical and practical problems involved.6 Public health promotes not simply the absence of disease but mental, physical and emotional wellbeing; this is akin to the definition of OH, and the two are intrinsically linked.



Recommendations for the use of systematic reviews


Some consider that systematic reviews are needed to inform policy and decision-making within an organisation and in relation to the delivery of effective health and social care. More importantly, they are deemed to be of particular use when there is uncertainty about the potential benefits or harms of an intervention.7


The Nice review was conducted at the request of the Department of Health to produce empirical public health guidance for both employers and primary care services, including OH professionals.5 There is a recognition that being employed can help improve a person’s health and wellbeing, and can help reduce inequalities.7 An estimated 175 million working days are lost in Britain each year because of sickness absence,8 and Dame Carol Black’s review of the health of Britain’s working-age population estimated that the annual cost of sickness absence and worklessness associated with working-age ill health was more than £100bn.8


Reading some reviews, it seems that the sponsor of a particular piece of work can influence the outcome, so we must be careful to ensure that the elements of a systematic review are incorporated to improve the validity and reliability of outcomes. Oakley has suggested that in the US, RCTs of social programmes were funded until they began to show repeatedly negative results.9


The Centre for Reviews and Dissemination says there are seven steps in preparing and maintaining a systematic review.10 In the Nice study, the problem, relating to the impact of worklessness, was identified and key questions were formulated. The location and selection criteria are noted; there can be limitations as a result of certain criteria being added or omitted.1 More than 25,000 articles were assessed, with about 60 meeting the inclusion criteria.5 The included studies were critically appraised for their data collection, analysis and methodological rigour and quality, using the Nice methodology checklist to interpret results.


Although systematic reviews are seen as a robust methodology, a review of more than 300 studies found that not all such reviews were equally reliable, and that their reporting could be improved by a universally agreed set of standards.11 The same study identified that of 100 guidelines reviewed, 4% required updating within a year, and in a similar review 7% of systematic reviews needed updating at the time of publication.12


As a phenomenal amount of time and resources is needed to carry out a systematic review, the possibility that the findings might need revisiting at the time of publication raises the question of whether this methodology should be a researcher’s first choice, despite its claims of validity.2 The fast-changing paradigm within health and wellness could make this method problematic.


In a systematic review similar to the Nice review,5 conducted by the Canadian Institute for Work and Health in 2007, the question considered was whether it is worthwhile investing in health and safety programmes from an economic perspective. The investigation identified 12,903 articles, and after further review, 67 of these were selected.


Concerns were raised about the information and the varying quality of the documents. Some of the literature made assumptions about the size of the health and financial effects of particular programmes without sufficient statistical analysis to validate them. Despite these concerns, the review reached several conclusions, including the view that there is a need to invest in health and safety programmes for the benefit of employees and employers alike.13


The key finding from its use of a systematic review was that economic analysis was clearly needed and is a vital part of evaluating health interventions. This view was incorporated in the Nice study, and economic analysis has been identified as an integral component by others.14


Limitations of systematic reviews


It has been argued that there are important aspects of evidence relating to public health interventions that are not covered by the established criteria for evaluating medical evidence, and therefore the use of a quality assessment tool must be factored into a study. Some would argue that qualitative and quantitative research are very different, and that it is not possible to judge qualitative research by general criteria such as reliability and validity.15


But if we are to consider systematic reviews as a means to guide policy in public and occupational health, then we must look at ways of measuring this qualitative data. As with all methodologies, systematic reviews are not all produced to the same standard. Many focus on the cost implications of the intervention, and this can bias their findings. Grey literature is not always considered, and it may be an untapped pool of data.6 Systematic reviews can be a demanding activity, and adequate time and resources are needed.


While one school of thought holds that qualitative reviews have less credibility, there are ways of enhancing the validity of qualitative research. These include triangulation, whereby the results from different data collection methods are compared to look for patterns of convergence.


Other methods of improving validity are respondent validation, or member checking, and some would advise that, as in quantitative research, there is a need for a systematic, self-conscious research design, with careful data collection, interpretation and communication.16


Systematic reviews in public and occupational health


The current literature shows that the systematic review method is often used to evaluate RCTs. However, there are few reviews on health and wellbeing, which may be due to several factors, including the view that it is harder to review such diverse subject matter. In support of their use in public and occupational health research, one study advocates bringing together what was thought of as ‘soft’ data with quantitative data, to gain a greater breadth of perspectives and a deeper understanding of public and occupational health issues.17


Influence on current practice


Systematic reviews can help promote best practice, and they offer more than a single case study outcome. The recent Boorman report18 is an example of a document published and disseminated as best practice in OH in the NHS, yet it is founded on an analysis of a staff perception report, case studies and focus groups. Although these may have a place in the spectrum of research, can we accept that the findings are truly reflective and, as the report alleges, expect to see sickness absence cut by a third with an estimated annual direct cost saving of £555m?18 Boorman hopes the report will be a catalyst for change.


Perhaps the use of a systematic review, as in the Nice example on the management of long-term sickness absence,5 would eliminate the use of anecdotal evidence as the main driver in shaping best practice.


Conclusions


Despite these negatives, there is a role for systematic reviews in forming guidelines for practice in occupational health. Incorporating both qualitative and quantitative studies can aid our understanding and demonstrate the collation of the best available evidence. Furthermore, the synthesis of qualitative research adds value by providing a more insightful understanding of phenomena.


There are systems in place that can reduce the potential for bias and subjectivity17 and tools to assist in quality improvements in systematic reviews.16 This methodology lends itself to large-scale data review and analysis, although the complexity of public health may require a process that is much more iterative. There is a need for critical appraisal with the use of meta-analysis (the QUOROM and PRISMA checklists)19 and consideration of heterogeneity influences where applicable, underpinned by a transparent method.
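As an illustration of the heterogeneity consideration mentioned above: when a review pools study results in a meta-analysis, heterogeneity is commonly quantified with Cochran’s Q and the I² statistic. The sketch below shows the standard inverse-variance calculation; the study effect sizes and standard errors are hypothetical, purely for illustration.

```python
# Fixed-effect (inverse-variance) meta-analysis with Cochran's Q and I².
# Study data below are hypothetical, for illustration only.

def pool_fixed_effect(effects, std_errors):
    """Return the pooled estimate plus Q and I2 heterogeneity measures."""
    weights = [1 / se ** 2 for se in std_errors]          # inverse-variance weights
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, q, i_squared

# Hypothetical log risk ratios and standard errors from five studies
effects = [-0.20, -0.35, -0.10, -0.25, -0.30]
std_errors = [0.10, 0.15, 0.12, 0.08, 0.20]

pooled, q, i2 = pool_fixed_effect(effects, std_errors)
print(f"Pooled effect: {pooled:.3f}, Q: {q:.2f}, I2: {i2:.1f}%")
```

A high I² (conventionally above about 50%) would signal that the studies disagree more than chance allows, prompting a reviewer to investigate rather than simply pool the results.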


Systematic reviews are an efficient scientific technique.20 This is reflected in the views of the Cochrane Collaboration, cited in a report on the evaluation of the effectiveness of public health interventions, which aims to increase the quality and quantity of public health systematic reviews through a range of activities.21


This article was produced for Diane Haddock’s MSc Advanced Professional Development at Sheffield Hallam University.


References




  1. Shea B, Grimshaw J, Wells G, Boers M, Andersson N, Hamel C, Porter A, Tugwell P, Moher D, Bouter L (2007). “Development of AMSTAR: a measurement tool to assess the methodological quality of systematic reviews”. BMC Medical Research Methodology; 7:10


  2. Petticrew M (2001). Systematic reviews from astronomy to zoology: myths and misconceptions. BMJ; 322:98-101


  3. NHS Centre for Reviews and Dissemination (1996). Undertaking Systematic Reviews of Research on Effectiveness: CRD Guidelines for those carrying out or commissioning reviews (CRD Report No. 4). York: NHS CRD


  4. Feinstein AR (2002). Misguided efforts and future challenges for research on “diagnostic tests”. J Epidemiol Community Health; 56:330-2


  5. NICE (2009). Managing long-term sickness absence and incapacity for work. http://guidance.nice.org.uk/PH19/Guidance/pdf/English


  6. Petticrew M (2003). Why certain systematic reviews reach uncertain conclusions. BMJ; 326:756-8


  7. Waddell G, Burton A (2006). Is work good for your health and wellbeing? London: The Stationery Office


  8. Health, Work and Wellbeing Programme (2008). Dame Carol Black’s review of the health of Britain’s working age population: Working for a healthier tomorrow. London: The Stationery Office


  9. Oakley A (2000). Experiments in knowing: gender and method in the social sciences. Cambridge: Polity Press


  10. Centre for Reviews and Dissemination (CRD) (2009). Systematic reviews: CRD’s guidance for undertaking reviews in health care. York: University of York. ISBN 978-1-900640-47-3


  11. Moher D, Tetzlaff J, Tricco AC, Sampson M, Altman DG (2007). “Epidemiology and reporting characteristics of systematic reviews”. PLoS Med; 4(3):e78


  12. Shojania KG, Sampson M, Ansari MT, Ji J, Doucette S, Moher D (2007). “How quickly do systematic reviews go out of date? A survival analysis”. Ann Intern Med; 147(4):224-33


  13. Institute for Work and Health (2007). Sharing best practice – is it worthwhile investing in health and safety programs? Institute for Work and Health. www.iwh.on.ca


  14. Nixon J, Khan KS, Kleijnen J(2001). Summarising economic evaluations in systematic reviews: a new approach. BMJ; 322:1596-8


  15. Tones K, Tilford S (2001). Health Education: Effectiveness, Efficiency and Equity. London: Chapman & Hall


  16. Mays N, Pope C (2000). Qualitative research in health care: Assessing quality in qualitative research. BMJ; 320:50-52


  17. Harden A, Garcia J, Oliver S, Rees R, Shepherd J, Brunton G, Oakley A (2004). Applying systematic review methods to studies of people’s views: an example from public health research. J Epidemiol Community Health; 58:794-800


  18. Boorman S (2009). NHS Health and Well-being: Final report, November 2009. www.dh.gov.uk/publications


  19. Moher D (2008). Producing clear, accurate and transparent reports of systematic reviews: an attainable goal. BMJ Clinical Evidence; 2008:1-4


  20. Mulrow CD (1994). Rationale for systematic reviews. BMJ; 309:597-599


  21. Waters E, Doyle J, Jackson N, Howes F, Brunton G, Oakley A (2006). Evaluating the effectiveness of public health interventions: the role and activities of the Cochrane Collaboration. J Epidemiol Community Health; 60:285-289
