Now that you’ve identified evidence to address your client’s problem or situation, the next step in the EBP process is to assess the internal and external evidence. When assessing the evidence, keep in mind that each type of evidence serves a unique purpose for your clinical decision making.
Internal evidence, the data and observations you collect on an individual client, serves two purposes: documenting your sessions for accountability and tracking your client’s performance. When assessing the internal evidence, you are determining whether an intervention has had an impact on your client. You may analyze your data to address the following questions (adapted from Higginbotham & Satchidanand, 2019):
External evidence, found in the scientific research literature, answers clinical questions such as whether an assessment measures what it is intended to measure or whether a treatment approach is effective in producing change in individuals. Because the quality of external evidence varies, this step of assessing the evidence is crucial; it includes determining the reliability, importance, and applicability of the relevant scientific research to your client’s condition and needs.
Critically appraising the external evidence can help you determine whether the conclusions from one or more studies can guide your clinical decision. To assess the external evidence, you should evaluate its relevance, its validity, and its results.
Relevance refers to how closely connected the study's elements (e.g., study aim, participants, method, results) are to your clinical question and how well the external evidence fits your needs. Relevant research literature increases the likelihood that you can generalize the results and outcomes to your client.
Ask yourself:
Use your clinical judgment to decide whether the study's elements are comparable and/or generalizable to the population, intervention, comparison, and/or outcome in your PICO question.
Example: You are providing cognitive intervention to a teenager with traumatic brain injury, and most studies you’ve found examine cognitive treatments for veterans with blast injuries. You will need to decide whether these studies are clinically relevant and applicable to your client, despite their focus on a somewhat different population.
Quick Tip:
If there's no relevant research available, you may need to reconsider your PICO question and return to your search, or continue to Step 4 of the EBP process.
Appraising the validity of the external evidence means considering whether the study effectively investigates its aim. The study should be transparent about its methodology: the research procedures, the data collection methods, and the analysis of data and outcomes. This transparency helps you decide whether the research evidence is trustworthy and whether you can have confidence in its results.
Ask yourself:
To appraise the validity of the external evidence for a clinical question, consider both the study design and the methodological quality of the study. Because certain research designs offer better controls against bias, many EBP hierarchies rank study quality based solely on study design. However, these hierarchies often fall short because research design alone does not guarantee good external evidence. Moreover, as noted in Step 2, no single study design can answer all types of PICO questions. The table below details the study designs that are best suited to various types of clinical questions.
Type of Question | Example | Preferred Study Design(s) | Other Relevant Study Design(s) |
---|---|---|---|
Screening/Diagnosis | Is an auditory brainstem response screening more accurate than an otoacoustic emissions screening in identifying newborns with hearing loss? | Prospective, blind comparison to reference standard | Cross-sectional |
Treatment/Service Delivery (efficacy of an intervention) | What is the most effective treatment to improve cognition in adults with traumatic brain injury? | Randomized, controlled trial | Controlled trial; single-subject/single-case experimental design |
Etiology (identifying causes or risk factors of a condition) | What are the risk factors for speech and language disorders? | Cohort | Case control; case series |
Quality of Life/Perspective (understanding the opinions, experiences, and perspectives of clients, caregivers, and other relevant individuals) | How do parents feel about implementing parent-mediated interventions? | Qualitative studies (e.g., case study, case series) | Ethnographic interviews or surveys of the opinions, perspectives, and experiences of clients, their caregivers, and other relevant individuals |
In addition to considering research design, you should also consider study methodology to identify any limitations of the external evidence. Limitations are shortcomings or external influences that the investigators of a study could not, or did not, control. Because study limitations can influence the outcomes of an investigation, it is crucial to identify any sources of bias or systematic error in the methodology.
To help determine what limitations exist, you can appraise the methodological quality of each study using one of many available research design–specific checklists. Depending on the checklist, you can appraise some or all of the following features:
Other sources of bias exist that these checklists do not typically assess, including conflicts of interest and publication bias; these are also worth considering.
When an investigator takes steps to minimize bias, clinicians can have greater confidence in the study findings.
Quick Tip:
Information is abundant and easy to find, but it may not always be trustworthy or valid. Save time by using resources that have reviewed the included studies for quality and bias:
Once you determine that the research is applicable and valid, you are ready to examine the findings. The results can tell you whether the desired outcome of the study was achieved (i.e., “Was there a benefit from the intervention or assessment, or was there no effect?”) and whether any adverse events (i.e., harm) occurred. Knowing the extent of the effects ultimately determines whether the results of a study are clinically meaningful and important.
Ask yourself:
When examining the results and conclusions, consider the magnitude and significance of the study’s findings as well as their consistency with the broader body of evidence.
Review the data and/or the statistical outcomes reported in the study to determine the magnitude of the results (i.e., “How large is the treatment effect?”) and whether the results are both statistically significant and clinically important; that is, whether the results are likely due to chance and, if not, whether they are meaningful enough to consider in clinical practice. Information such as the sample size, confidence interval, and effect size allows you to judge how large and how precise the intervention effect is. A p value can help you determine whether the results of a study are statistically significant (in other words, unlikely to have occurred by chance), but it cannot tell you whether the results are clinically significant or clinically important. For example, a study may find a statistically significant difference between the outcomes of two groups, yet the real-life impact for the individuals in each group could be similar. Researchers can use measures such as relative risk and the minimal clinically important difference (also referred to as the minimally important difference) to report clinical significance.
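To make the distinction between statistical and clinical significance concrete, here is a minimal sketch in Python using invented numbers and a hypothetical 10-point minimal clinically important difference (none of these values come from an actual study). It shows how, with a large sample, a small between-group difference can produce a small p value yet still fall short of a clinically meaningful threshold.

```python
# Hypothetical illustration: statistical vs. clinical significance.
# All numbers below are invented for demonstration purposes only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated outcome scores (e.g., a 0-100 cognitive measure) for two groups.
treatment = rng.normal(loc=62, scale=8, size=200)  # mean around 62
control = rng.normal(loc=60, scale=8, size=200)    # mean around 60

# Statistical significance: is the group difference larger than chance alone would explain?
t_stat, p_value = stats.ttest_ind(treatment, control)

# Magnitude of the effect: mean difference and Cohen's d (pooled SD, equal group sizes).
mean_diff = treatment.mean() - control.mean()
pooled_sd = np.sqrt((treatment.var(ddof=1) + control.var(ddof=1)) / 2)
cohens_d = mean_diff / pooled_sd

# Clinical significance: compare the observed difference to a hypothetical
# minimal clinically important difference of 10 points.
MCID = 10.0

print(f"p value: {p_value:.4f}")                   # likely < .05 with 200 per group
print(f"Mean difference: {mean_diff:.2f} points")
print(f"Cohen's d: {cohens_d:.2f}")                # small effect
print(f"Clinically important (>= {MCID} points)? {mean_diff >= MCID}")
```

In this sketch, the roughly 2-point group difference would likely yield p < .05 simply because the sample is large, but it remains well below the assumed 10-point threshold, so it may not be meaningful enough to change clinical practice.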
Consider the results from individual studies and determine whether the overall conclusions across studies are similar. For example, taken together, are the results from the body of external evidence similarly positive or negative? Do the direction and consistency of the evidence support a change in clinical practice?
Be sure to factor in any details (e.g., participant sample size and heterogeneity of participants) that you identified in the individual studies that may limit the applicability of the results.
Although studies reporting definitive outcomes are ideal, sometimes the results from individual studies or the body of external evidence are inconclusive. In other cases, there may be very little to no scientific evidence available. In these instances, it may be valuable to consider research evidence from similar populations or interventions and to determine whether the results are generalizable to your client or clinical situation. In this circumstance, it is even more critical to collect and consider data taken from your client’s performance to determine whether the approach you are taking is having the intended effect.
Quick Tip:
Research results and conclusions require careful consideration to determine whether they could be clinically meaningful to your client.
Higginbotham, J., & Satchidanand, A. (2019, April). From triangle to diamond: Recognizing and using data to inform our evidence-based practice. Academics and Research in Context. https://academy.pubs.asha.org/2019/04/from-triangle-to-diamond-recognizing-and-using-data-to-inform-our-evidence-based-practice/