Bias can be introduced at any point in the research process, including study design, implementation, data analysis, and even publication. Any time you undertake research, there is a risk that bias (systematic error) will affect the study's results and lead to conclusions that tell an incomplete or inaccurate story.
Understanding how to assess and critically appraise published research to identify potential sources of bias is an essential skill for clinicians. Through critical appraisal, clinicians can determine the methodological quality of the research, that is, the extent to which the authors designed, conducted, and reported their research to prevent systematic errors or bias. When you appraise the research you read, you can identify rigorously designed, high-quality studies that are more likely to yield results that are closer to the "truth."
"Critical appraisal is the course of action for watchfully and systematically examining research to assess its reliability, value, and relevance in order to direct professionals in their vital clinical decision making."
The boxes below provide an overview of the most common types of bias that can occur in communication sciences and disorders (CSD) research and their potential impact on a study's findings. In addition, the boxes highlight questions you can consider to determine whether the authors took steps to mitigate bias in their research. For more information, see the Oxford University Centre for Evidence-Based Medicine (CEBM)'s Catalogue of Bias.
Types of bias include selection, assignment, performance, detection, attrition, reporting, and publication.
Selection bias
Occurs when the individuals examined differ from the group of people the investigators aim to study.
What impact can it have?
Participants may not represent the population that the study seeks to examine—making the results less generalizable.
Did the authors...
...select participants using clearly defined criteria from the same general population they wish to study?
Assignment bias
Occurs when experimental groups have significantly different characteristics due to a faulty assignment process.
What impact can it have?
Outcomes may be skewed due to inherent differences between the groups—not due to the treatment.
Did the authors...
...use random assignment to ensure an even distribution of participant characteristics across the experimental and control groups (randomization)? (See the sketch after this box.)
...conceal the randomization process from the investigators and participants (allocation concealment)?
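These safeguards are procedural, but a minimal sketch can make the randomization step concrete. The Python snippet below (standard library only) shows one simple way a study team might generate an allocation list before enrollment begins; the group labels, seed, and participant count are hypothetical and not drawn from any particular study.

```python
import random

def make_allocation_list(n_participants, seed=2024):
    """Generate a randomized allocation list (treatment vs. control).

    In practice, the list would be produced by someone independent of
    recruitment and kept concealed (e.g., sealed envelopes or a central
    service) until each participant has been enrolled.
    """
    rng = random.Random(seed)  # hypothetical seed, used only for reproducibility
    # Assumes an even number of participants for a balanced 1:1 allocation.
    groups = ["treatment", "control"] * (n_participants // 2)
    rng.shuffle(groups)        # random assignment: every ordering is equally likely
    return groups

# Hypothetical example: allocate 20 participants.
allocation = make_allocation_list(20)
print(allocation[:5])  # enrollment staff would see only one assignment at a time
```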
Performance bias
Occurs when study participants know whether they’ve been assigned to the experimental or control group.
What impact can it have?
Participants can change their responses or behavior if they know which group they are in.
Did the authors...
...ensure that participants did not know whether they were receiving the treatment under investigation or a placebo (blinded)?
Detection bias
Occurs when assessors know the participant’s group assignment.
What impact can it have?
Assessors may rate participants in one group differently than those in the other group.
Did the authors...
...ensure that the clinicians assessing the study outcomes were blinded as to who received treatment or a placebo?
Attrition bias
Occurs when participants leave a study prior to its completion, leading to incomplete outcomes data.
What impact can it have?
The apparent treatment effect may be due to one treatment being more burdensome than the other; participants who drop out may have different outcomes than those who complete the study.
Did the authors...
...account for all participants who entered the study?
...analyze participants based on the groups to which they were originally assigned, even if they didn’t complete the study (intention-to-treat analysis)? (See the sketch after this box.)
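To illustrate why intention-to-treat analysis matters, here is a deliberately simplified sketch in Python. The participant records and scores are invented for this example, and a real intention-to-treat analysis would also need a principled strategy for missing outcome data.

```python
# Hypothetical participant records: assigned group, whether they completed
# the study, and an outcome score (None if the outcome was never measured).
participants = [
    {"group": "treatment", "completed": True,  "score": 12},
    {"group": "treatment", "completed": False, "score": 4},   # dropped out early
    {"group": "control",   "completed": True,  "score": 6},
    {"group": "control",   "completed": True,  "score": 5},
]

def mean_score(records):
    scores = [r["score"] for r in records if r["score"] is not None]
    return sum(scores) / len(scores) if scores else float("nan")

# Intention-to-treat: analyze everyone by original assignment,
# keeping whatever outcome data exist for those who dropped out.
itt_treatment = mean_score([r for r in participants if r["group"] == "treatment"])

# Completers-only (per-protocol): silently drops the participant who left,
# which can make the treatment look better than it really is.
completers_treatment = mean_score(
    [r for r in participants if r["group"] == "treatment" and r["completed"]]
)

print(itt_treatment, completers_treatment)  # 8.0 vs. 12.0 in this toy example
```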
Reporting bias
Occurs when reporting of the study findings is influenced by the direction of the results; for example, authors are more likely to report outcomes that show statistically significant, positive effects.
What impact can it have?
The study doesn’t tell the whole story and presents only part of the available evidence.
Did the authors...
...report on all pre-specified outcomes, regardless of whether the findings were positive, negative, or neutral?
Publication bias
Occurs when the outcomes of a study influence the decision to disseminate the results; positive findings are more likely to be published than negative findings.
What impact can it have?
Underreporting of negative results can skew meta-analyses and lead to overestimates of the true effect size (see the sketch after this box).
Did the authors...
...do a comprehensive search, including unpublished research (trial registries, regulatory documents, and contacting researchers of known or suspected unpublished work)?
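A small numerical sketch can show how missing negative studies inflate a pooled effect. The effect sizes and sample sizes below are fabricated for illustration, and the simple sample-size weighting stands in for the inverse-variance weighting a real meta-analysis would use.

```python
# Hypothetical studies: (effect size, sample size). Positive values favor treatment.
published   = [(0.60, 30), (0.45, 50), (0.55, 40)]  # significant, positive findings
unpublished = [(0.05, 60), (-0.10, 45)]             # null/negative findings left unpublished

def weighted_mean_effect(studies):
    """Sample-size-weighted mean effect size (a simplification of inverse-variance weighting)."""
    total_n = sum(n for _, n in studies)
    return sum(effect * n for effect, n in studies) / total_n

print(weighted_mean_effect(published))                # ~0.52: looks like a solid effect
print(weighted_mean_effect(published + unpublished))  # ~0.27: closer to the full picture
```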
Critical appraisal can help you identify sources of bias in research. Some sources of bias (e.g., conflicts of interest, poor reproducibility) can occur regardless of study design, but certain study types are more susceptible to particular biases because of how they are designed. It is important to know what questions to ask when assessing different types of research, and several critical appraisal checklists and tools have been developed for specific study designs.
Quick Tip:
In addition, you can save time by using resources that have already been reviewed for quality and bias, such as ASHA's Evidence Maps.

Levels of evidence refers to a framework for classifying research according to criteria such as study design, validity, and/or methodological quality. Several organizations have developed their own hierarchies depicting levels of evidence; one example comes from the Center for Evidence-Based Management (CEBMa).
In general, well-designed, synthesized evidence (e.g., systematic reviews, meta-analyses) sits at the top of the hierarchy because of the methodological safeguards built into those designs. Expert opinion and uncontrolled case series are often at the bottom of the hierarchy because those designs include few methodological protections against bias or systematic error. Ideally, when deciding whether evidence is strong and trustworthy, you should consider both the study’s design AND its appraised methodological quality.