Evidence-based practice is self-evidently the ideal that occupational health practitioners should be striving for day to day. But, especially in our current internet age, being able to find, sift and critically evaluate evidence is not as straightforward as you may think, as Jo Rhodes and Professor Anne Harriss explain in the first of a two-part series.
Evidence-based practice (EBP) is essential if nurses are to practise both safely and effectively. Indeed, EBP is the foundation of effective healthcare (Nursing and Midwifery Council, 2018 a; Nursing and Midwifery Council, 2018 b; British Medical Association, 2014).
EBP is at the core of the NMC’s standards of proficiency for registered nurses (NMC 2018 a). Occupational health nurses (OHNs), as NMC registrants, require a fundamental understanding of the importance of research in assessing, evaluating and improving practice as well as having the ability to interpret research evidence.
Evidence-based practice
Indeed, the recently published future nurse and midwifery standards (NMC, 2019) include a commitment to ensure that practice standards respond to changing models of care. Accelerating the translation of research and evidence into practice must be viewed as a priority, as it improves the care and advice given to clients and to managers.
EBP involves “doing the right things right”, using the best available evidence to complement clinical expertise. The ethical and legal responsibilities of OHNs to deliver EBP workplace interventions are not only an NMC requirement but are also stipulated by the International Commission on Occupational Health (2014).
In order to add value to their employing organisations, and not simply be seen as an unnecessary cost, OH professionals must provide cost-effective initiatives. They achieve this by using EBP to inform best practice, improve client experiences and map clinical delivery against healthcare governance.
Additionally, EBP embraces a patient-centred approach. Evidence-based OH nursing practice must be based on appropriate research-based material. To undertake EBP, practitioners must access and interpret relevant, high-quality evidence from multiple sources, then apply critical thinking skills to judge the quality of that evidence, a skill crucial for professional accountability.
This article, the first of two covering facets of evidence-based practice, highlights the process that underpins effective literature-searching skills. The second in the series will consider the assessment of practice guidelines for clinical implementation and the manipulation of evidence.
Addressing clinical problems
Addressing clinical problems underpins the practice of OHNs. This requires the practitioner to consider, interrogate and evaluate the available research evidence. Systematic reviews aim to synthesise all the high-quality evidence relating to a given question. They are therefore generally appropriate evidence to address clinical questions.
As PICO questions (an acronym detailed in figure 1 below) are used in the development of EBP, it is important to understand what constitutes an appropriate PICO research question.
Clarifying the key elements of the question is a critical first step towards providing an answer to inform a decision, and for a researcher to frame the research to be done. PICO is a mnemonic that represents Population, Intervention, Comparator and Outcomes. It captures the key elements and is a good strategy to provide answerable questions.
Letter | Stands for | Explanation
P | Patient, problem or population | Identifies the most important features of the patient (client in OH), problem or population
I | Intervention | What is the intervention under consideration?
C | Comparison | Is the aim to identify the most effective of two different interventions? For example, comparing the use of counselling or cognitive behavioural therapy (CBT) in the support of a person with a mental health condition.
O | Outcome | What are you intending to measure, achieve or improve for the client?
Figure 1. The PICO acronym. This is the first stage that underpins effective evidence-based practice.
Using this approach aids the development of well-focused clinical questions and assists in the identification of key terms, which in turn supports the formulation of a searchable clinical question, the first stage of EBP.
Integral to OH practice is the promotion, improvement and maintenance of employee health, safety and wellbeing. To be effective, OHNs require a significant breadth of knowledge regarding the effect of work on health and health on work.
OHNs frequently provide advice on managing musculoskeletal problems in order to support employees to stay in the workplace, or return to the workplace following a period of sickness absence.
Interest in sit/stand and treadmill desks is increasing (Allwork, 2017), perhaps in response to media reports and research linking prolonged sitting with musculoskeletal disorders.
Workers may request that their employer provide them with such a workstation. The employer may in turn seek advice from OH before investing in such workstations. This will require the OH professional to consider the evidence before recommending such a change.
Rees (2010, p.150) acknowledges challenges locating applicable evidence and recommends using PICO questions to identify relevant research and inform practice. A PICO question utilised to instigate a quality search on which to base advice regarding such desks could be: “For sedentary workers, do sit/stand and treadmill desks have measurable physiological/psychological or productivity benefits compared to traditional office desks?”.
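For readers who find it helpful to see the mapping made explicit, the short sketch below (Python, purely illustrative; the class and field names are our own and not part of any searching tool) breaks the sit/stand desk question into its four PICO elements before they are turned into search keywords.

```python
from dataclasses import dataclass

@dataclass
class PICOQuestion:
    """Holds the four PICO elements of a clinical question."""
    population: str
    intervention: str
    comparison: str
    outcome: str

    def key_terms(self):
        """Return the elements in order, ready to be turned into search keywords."""
        return [self.population, self.intervention, self.comparison, self.outcome]


# The sit/stand desk question above, broken into its PICO elements.
desk_question = PICOQuestion(
    population="sedentary workers",
    intervention="sit/stand and treadmill desks",
    comparison="traditional office desks",
    outcome="physiological, psychological or productivity benefits",
)

for term in desk_question.key_terms():
    print(term)
```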
Effective use of internet search engines
However, internet search engines cannot interpret PICO sentences (Aslam and Emmanuel, 2010), so key words are entered into databases such as CINAHL. Additionally, using a thesaurus provides synonyms to broaden the results.
De Brun (2013 a) recommends the application of the Boolean logic operators "AND/NOT/OR" to expand and then narrow the search parameters, enhancing communication with the database search programme.
So, to stick with our sit/stand desk question, "standing desk OR treadmill" might be entered as "full text" terms; in this case the search returned 9,627 hits.
Application of truncation (an asterisk, *) captures variations or "roots" of similar words (Craig and Smyth, 2012, p.367). "Stand*" and "sit*" then find "standing" and "sitting" regardless of authors' terminology preferences.
Further Boolean logic was applied using "AND work* AND sedentary AND office". In order to focus on working adults, "NOT child*", "NOT school*" and "NOT class*" were used to dismiss childhood education literature.
Further refinement of the results was achieved using the advanced search option to limit age to "all adults" (18-65 years). Application of the "wildcard", whereby a question mark is added at the point where spelling varies, captures idiosyncrasies and Americanisms (Rees, 2010, p.159).
When applied as "NOT p?diatric", irrelevant paediatric/pediatric studies are excluded. Although recognising the importance of older seminal studies, Rees (2010, p.160) recommends focusing on contemporary texts, so date parameters of 2009-2019 were applied.
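The refinements described above amount to the gradual assembly of a single search string. The sketch below (Python, illustrative only; exact syntax differs between databases such as CINAHL, Medline and the Cochrane Library, so the operators shown are an approximation rather than a query to paste in verbatim) pulls together the Boolean operators, truncation, wildcard and exclusions discussed in this section.

```python
# Illustrative assembly of the search string built up in this section.
include_terms = ["(stand* OR sit* OR treadmill) AND desk*",  # truncation captures standing/sitting
                 "work*", "sedentary", "office"]
exclude_terms = ["child*", "school*", "class*",  # dismiss childhood education literature
                 "p?diatric"]                    # the ? wildcard covers paediatric/pediatric

query = " AND ".join(include_terms)
query += "".join(f" NOT {term}" for term in exclude_terms)
print(query)
# (stand* OR sit* OR treadmill) AND desk* AND work* AND sedentary AND office
# NOT child* NOT school* NOT class* NOT p?diatric

# Age (all adults, 18-65 years) and date (2009-2019) limits are normally set
# through the database's advanced search options rather than the query itself.
limits = {"age_group": "all adults (18-65 years)", "publication_years": (2009, 2019)}
```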
A search using such terms will source a number of articles, including a randomised controlled trial (RCT) promoting sit-stand office desks (Graves et al, 2015) and a systematic review (SR) of standing/treadmill desks finding inconclusive evidence to support their use (MacEwen et al, 2015). Beaven and Craig (2012, p.90) recommend searching more than one database, with De Brun (2013 b) endorsing Medline, PubMed and the Cochrane Library.
The same search strategy applied to the Cochrane Library, which critically appraises SRs (Cochrane UK, 2019 a), located two papers encompassing 15 RCTs and 34 other studies (total populations of 2,165 and 3,397 respectively).
The first SR concludes that the evidence for promoting sit-stand desks remains weak because of the limited availability of good-quality research (Shrestha et al, 2018). The second provides related information regarding the effectiveness of additional ergonomic interventions (Hoe et al, 2018).
Research methods are critically assessed against "evidence levels" (Centre for Evidence-Based Medicine, 2009), ranging from "high" (grade 1) to "poor" (grade 5) quality, with aligned "levels of trust" relating to methodological rigour and bias reduction (Rees, 2010, p.28). The Joanna Briggs Institute (2013), as cited by Flinders University (2019), represents this in pyramid form (as shown in the main image accompanying this article).
SRs and meta-data evaluations (meta-analyses) of comparable (homogeneous) studies provide "gold-standard" evidence (Ingham-Broomfield, undated).
Challenges arise from increasing evidence availability, with potential reviewer bias influencing outcomes. Additionally, long timelines for SR completion may result in evidence becoming outdated (University of Canberra, 2018).
Borgerson (2009) questions evidence hierarchies, arguing that SRs can ignore powerful external economic forces, such as pharmaceutical companies seeking positive outcomes, that influence research results. McDonagh et al (2013) suggest that comparable SRs may contain conflicting recommendations, which can result in variations in best practice.
Nevertheless, Smyth and Jones (2012) promote SRs as "shortcuts" that identify consistencies through expert summaries of research for application in practice. Studies rely on suitable design and implementation for credibility (Petrisor and Bhandari, 2007).
Quality indicators include appropriate participant numbers, defined aims, tools and methodology (for data capture), well-presented findings, conclusions and recommendations (Rees, 2010 p.178).
When SRs are unavailable, RCTs are superior to non-randomised trials (De Brun, 2013 a). Bias is reduced because individuals are randomly assigned to control or experimental groups (Rees, 2010, p.188).
Double-blinding, whereby neither participant nor researcher knows who receives the intervention, also increases validity (Lewis and Warlow, 2004). Researchers may compromise studies through conflicts of interest, including pharmaceutical company sponsorship, poor data collection, biased analysis and the interpretation of results to confirm hypotheses.
Avoiding bias
Researchers have ethical responsibilities to avoid bias. Ethics committee approval, proficient study protocols, peer review and the application of assessment tools, such as those of the Critical Appraisal Skills Programme (University of Sheffield, 2015), increase research credibility (Craig and Smyth, 2012, p.147). Song et al (2013) describe publication bias, whereby positive findings are favoured, or results are "spun" to present negatives as positives (Goldacre, 2013; Healthcare Triage, 2018). Bias reduction is achieved by disclosing all study outcomes.
Case-control and cross-sectional studies, although lower in the hierarchy, provide valuable information on disease prevalence and risk factors in similar populations (British Medical Journal, 2019). Non-experimental qualitative studies, relying on observation, understanding and connections, are open to individual interpretation (Carter and Goodacre, 2012, p.107).
Careful consideration of question format (open/closed) and interview techniques (in-person/questionnaire) reduces discrepancies (Barrett and Twycross, 2018).
Discrepancies in both quantitative and qualitative studies may arise from participants' idiosyncrasies (health, expectations, culture, cognition and motives), and challenges occur in securing comparable research populations (Deaton, 2017).
Carter and Goodacre (2012) also question how many participants are needed to establish reliable conclusions. Important information may also be lost through "drop-outs", with the reasons for leaving necessitating follow-up, as their implications may directly affect outcomes.
Petrisor and Bhandari (2007) argue that systems for validating evidence quality vary, and that classifications demote qualitative studies for offering descriptive viewpoints rather than the numerical and statistical outcomes of quantitative research.
Klose et al (2016) advise that individuals' thoughts, preferences and feelings are central to healthcare decision-making and that quantitative research cannot be interpreted in isolation.
Case studies and "expert" opinions are useful for interpreting research findings for guideline development. However, such evidence is largely anecdotal, relying on "unscientific reports", and is therefore relegated to the lowest level of the pyramid.
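Pulling the levels discussed in this section together, the short sketch below lists them from strongest to weakest. It is an illustrative summary of this article's discussion only; published hierarchies such as the OCEBM levels or the JBI pyramid differ in their exact labels and sub-levels.

```python
# Approximate ordering of the evidence levels discussed above, from strongest
# (grade 1) to weakest (grade 5). This is a summary of the discussion in this
# article, not a formal grading scheme.
evidence_hierarchy = [
    "Systematic reviews and meta-analyses of homogeneous studies",
    "Randomised controlled trials",
    "Non-randomised trials",
    "Case-control, cross-sectional and qualitative studies",
    "Case studies and expert opinion",
]

for grade, level in enumerate(evidence_hierarchy, start=1):
    print(f"Grade {grade}: {level}")
```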
Practice guidelines
Practice guidelines can be used to underpin OHN practice and are available from a range of sources including the National Institute for Health and Care Excellence (NICE) and medical royal colleges.
Back pain, for example, accounts for a significant amount of absence from work annually and reduces individuals' quality of life. OHNs must consider effective patient management to reduce both personal and industrial costs; back pain practice guidelines (PGs) assist with this.
Robust PGs enhance care delivery, utilising high-quality evidence to standardise clinical options, reducing treatment variations and providing frameworks for health governance assurance. Additionally, patient harm and cost inefficiencies are reduced.
Conflicting or poorly compiled guidelines should be subject to scrutiny. Despite some consistency issues noted by Siering et al (2013), guideline assessment tools assist the critical evaluation of PGs. Brouwers et al (2010) recommend the AGREE II (2010) tool, comprising six domains and 23 characteristics. This will be developed further in the second of this two-part series.
Conclusions
This article has considered the importance of evidence-based practice and how to start the process of collecting the materials to underpin this.
As Ruth May observes, “creating an evidence-informed profession involves a number of roles, from researchers and clinical academics, and all nurses and midwives each embedding evidence in everyday practice in whatever role they undertake, in every area of practice” (NHS England and NHS Improvement, 2020 p.2).
In support of the promotion of evidence-based practice, the National Institute for Health Research (NIHR) was established in 2006 and is the largest funder of health and care research in the United Kingdom. It focuses on issues relevant to nursing generally and the funding of high-quality studies.
Once research projects have been completed, the NIHR works with practitioners to make sense of the evidence and promote its use in practice through evidence summaries and themed reviews.
One of the four principles of the Nursing and Midwifery Code (NMC, 2018 b) is to practise effectively, requiring registrants to practise in line with the best available evidence.
References
AGREE (2010) AGREE II. Available at: https://www.agreetrust.org/agree-ii/
Allwork.Space (2017). “Why Google searches for ‘standing desks’ have more than tripled in the last 12 months.” Available at: https://allwork.space/2017/05/why-google-searches-for-standing-desks-have-more-than-tripled-in-the-last-12-months/
Aslam S and Emmanuel P (2010). “Formulating a researchable question: A critical step for facilitating good clinical research.” Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3140151/
Barrett D and Twycross A. (2018). “Data collection in qualitative research”. Evidence Based Nursing, 21(3), pp.63-64.
Beaven O and Craig J V (2012). “Searching the literature”, in Craig J V and Smyth R L (eds). “The Evidence-Based Practice Manual for Nurses”. Third edition. China: Churchill Livingstone, p.90.
Borgerson K (2009). “Valuing Evidence: bias and the evidence hierarchy of evidence-based medicine”. Perspectives in Biology and Medicine, 52(2): pp.21-33.
British Medical Journal (2019). “Best Practice: What is GRADE?”. Available at: https://bestpractice.bmj.com/info/toolkit/learn-ebm/what-is-grade
British Medical Journal (2019). “Epidemiology for the uninitiated.” Chapter 8. Case-control and cross sectional studies.” Available at: https://www.bmj.com/about-bmj/resources-readers/publications/epidemiology-uninitiated/8-case-control-and-cross-sectional
Brouwers M, Kho M, Browman G, Burgers J, Cluzeau F, Feder G, and Fervers B (2010). “Development of the AGREE II, part 2: Assessment of validity of items and tools to support application”. Canadian Medical Association Journal, 182(10), pp.472-478.
Centre for Evidence-Based Medicine (2009). “Oxford Centre for Evidence-based Medicine – Levels of Evidence” (March 2009). Available at: https://www.cebm.net/2009/06/oxford-centre-evidence-based-medicine-levels-evidence-march-2009/
Cochrane UK (2019 a). “About us.” Available at: https://uk.cochrane.org/about-us; (2019 b) “Resources.” Available at: https://uk.cochrane.org/resources
Craig J and Smyth R (2012). “Glossary – Truncation” in The Evidence-Based Practice Manual For Nurses. Third edition. China: Churchill Livingstone, p.367.
De Brun C (2013 a). “Finding the Evidence. A key step in information production process”. The Information Standard Guide. NHS England.
De Brun C (2013 b). “Useful databases”. Available at: https://www.england.nhs.uk/wp-content/uploads/2017/02/tis-evidence-wrkshp-caroline-de-brun.pdf
Deaton A (2017). “The trouble with randomised-control trials”. Available at: https://www.youtube.com/watch?v=UB1A62u9fBE
Flinders University (2019). “Evidence Based Medicine”. Available at: http://flinders.libguides.com/c.php?g=203799&p=1719021
Goldacre B (2013). “Marketing” in Bad Pharma. London: Fourth Estate. p.341. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3635613/
Graves L et al (2015). “Evaluation of sit-stand workstations in an office setting: a randomised controlled trial”. Available at: https://pubmed.ncbi.nlm.nih.gov/26584856/
Healthcare Triage (2018). “Only Tell Me the Good News: Bias in Research Publication.” Available at: https://www.youtube.com/watch?v=9fm6wHTYjAA
Hoe V et al (2018). “Ergonomic interventions for preventing work-related musculoskeletal disorders of the upper limb and neck among office workers.” Available at: https://www.cochranelibrary.com/cdsr/doi/10.1002/14651858.CD008570.pub3/full?highlightAbstract=evaluation%7Cwithdrawn%7Cstand%7Cof%7Csit%7Cevalu
International Commission on Occupational Health (ICOH) (2014). “International Code of Ethics for Occupational Health Professionals”. Third edition. Available at: http://www.icohweb.org/site/multimedia/code_of_ethics/code-of-ethics-en.pdf
Joanna Briggs Institute (2013). “New JBI Levels of Evidence.” Available at: https://joannabriggs.org/sites/default/files/2019-05/JBI-Levels-of-evidence_2014_0.pdf
Klose K et al (2016). “Patient- and person-reports on healthcare: preferences, outcomes, experiences and satisfaction – an essay”. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4875930/
Lewis S and Warlow C (2004). “How to spot bias and other potential problems in randomised controlled trials.” Available at: https://jnnp.bmj.com/content/75/2/181
MacEwen B T, MacDonald D J, and Burr J F (2015). “A systematic review of standing and treadmill desks in the workplace”. Preventative Medicine vol. 70. pp 50-58. Available at: https://pubmed.ncbi.nlm.nih.gov/25448843/
McDonagh M et al (2013). “Avoiding bias in selective studies.” Available at: https://www.ncbi.nlm.nih.gov/books/NBK126701/
NHS England and NHS Improvement (2020). “Leading the acceleration of evidence into practice: a guide for executive nurses”. London: NHS England and NHS Improvement. Available at: https://www.england.nhs.uk/wp-content/uploads/2020/03/leading-the-acceleration-of-evidence-into-practice-guide.pdf
Nursing and Midwifery Council (2018 a). “Standards of proficiency for specialist community public health nurses.” Available at: https://www.nmc.org.uk/globalassets/sitedocuments/standards/nmc-standards-of-proficiency-for-specialist-community-public-health-nurses.pdf
Nursing and Midwifery Council (2018 b). “The Code: Professional standards of practice and behaviour for nurses, midwives and nursing associates.”
Nursing and Midwifery Council (2019). “Standards framework for nursing and midwifery education.” Available at: https://www.nmc.org.uk/standards-for-education-and-training/standards-framework-for-nursing-and-midwifery-education/
Petrisor B A and Bhandari M (2007). “The hierarchy of evidence: Levels and grades of recommendation”. Indian Journal of Orthopaedics 41(1), pp.11-15. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2981887/
Rees C (2010). “Searching and retrieving evidence to underpin nursing practice”, in Holland K (ed.) and Rees C. Nursing: Evidence-Based Practice Skills. Oxford: Oxford University Press, pp.150-154, 159-160.
Shrestha N et al (2018). “Workplace interventions for reducing sitting at work.” Available at: https://www.cochranelibrary.com/cdsr/doi/10.1002/14651858.CD010912.pub5/full?highlightAbstract=withdrawn%7Cstanding%7Cstand%7Cdesk
Siering, U et al (2013). “Appraisal Tools for Clinical Practice Guidelines: A Systematic Review.” Available at: https://pubmed.ncbi.nlm.nih.gov/24349397/
Smith R (2018). “The hypocrisy of medical journals over transparency.” Available at: https://blogs.bmj.com/bmj/2018/01/24/richard-smith-the-hypocrisy-of-medical-journals-over-transparency/
Smyth R and Jones L (2012). “Using evidence from systematic reviews”, in Craig J V and Smyth R L (eds) (2012). “The Evidence-Based Practice Manual For Nurses”. Third edition. China: Churchill Livingstone, p188.
Song F, Hooper L, and Yoke Y (2013). “Publication bias: what is it? How do we measure it? How do we avoid it?”. Available at: https://www.dovepress.com/publication-bias-what-is-it-how-do-we-measure-it-how-do-we-avoid-it-peer-reviewed-article-OAJCT
University of Canberra (2018). “Evidence-Based Practice in Health.” Available at: https://canberra.libguides.com/c.php?g=599346&p=4149721
University of Sheffield (2015). “Critical Appraisal with CASP part 1.” Available at: https://www.youtube.com/watch?v=CsM44oSPzlM