What is “the best available evidence”?

We are taught that evidence-based research provides the basis for sound clinical practice guidelines and recommendations. Practicing Evidence-Based Dentistry (EBD) requires blending the best available knowledge derived from the literature with our best experience as providers, while also acknowledging our patients’ expectations.

Bruce Donoff,1 the former Dean of the Harvard School of Dental Medicine, noted: “It is unrealistic and unreasonable assuming that everything in dentistry and medicine can be based upon evidence only.” He asks: “Why do patients get second and third opinions?” And he answers: “This is both a reflection of human nature and the uncertainty of the decision making by clinicians, not only disagreeing about a diagnosis, but also in recommending a treatment.”

The hierarchy of evidence is a core principle of EBD and was created to address this question. The evidence pyramid takes a top-down approach to locating the best evidence: at the top sits a well-conducted, unbiased systematic review or meta-analysis. If such evidence is not available, one searches the next tier down, and if that too is unavailable, one continues stepping down the levels of evidence until the question can be answered. See Figure 1.

Figure 1. The evidence pyramid is a hierarchy of carefully obtained evidence; however, disagreement sometimes arises at certain levels, complicating the process.

Note: Recreated from Ackley, B. J., Swan, B. A., Ladwig, G., & Tucker, S. (2008). Evidence-based nursing care guidelines: Medical-surgical interventions (p. 7). St. Louis, MO: Mosby Elsevier.

One limitation of systematic reviews and meta-analyses is the presence of human bias in selecting or rejecting the studies that are reviewed. When reading a systematic review, or any scientific paper, it is important that the reader scrutinize it thoroughly. This is no easy task, and the one factor the reader may have the most difficulty recognizing is bias.

Bias is defined as an attitude that prevents unprejudiced consideration of a question. In research, bias can occur when a systematic error is introduced into sampling or testing. It can also occur when the author of a systematic review or meta-analysis limits the selection of studies to those supporting one outcome or answer while ignoring opposing conclusions. When the studies chosen for inclusion are biased in this way, the resulting review leads to misleading conclusions.

Another significant issue with systematic reviews and meta-analyses is that they often compare good-quality but not the most recent data; by the time a review is published, the data analyzed have typically been available for some time. As a result, knowledge about new trends, insights, or treatments is often absent.

In his book Where Good Ideas Come From: The Natural History of Innovation, Steven Johnson notes2: “When one looks at innovation in nature and in culture, environments that build walls around good ideas tend to be less innovative in the long run than more open-ended environments.” Our duty to our patients is to look past some of these older data and seek out and evaluate new information.

M. Hassan Murad, the director of the Mayo Clinic evidence-based practice research program, suggests removing systematic reviews from the top of the pyramid. Instead, he proposes using them as a lens through which other types of studies should be viewed, i.e., appraised and applied.3 He suggests that any clinician who practices EBD should treat the systematic review and meta-analysis simply as tools for evaluating and applying the evidence; when poorly conducted, they should be discarded altogether.

To bring new treatment modalities to light, we need to publish the discoveries from our own practices that significantly improve our patients’ quality of life. Only by removing systematic reviews from the top of the evidence-based pyramid and focusing on the critical evaluation of well-documented cohort and case studies can we decide for ourselves whether to pursue these tested new treatments for our patient population. To do this effectively, we need to recognize bias and improve our critical appraisal skills.

If we continue to accept unquestioningly that the best evidence for patient care decisions is limited to the current hierarchy of evidence, we may deprive patients of their right to make their own treatment decisions based on the most contemporary diagnostic and treatment modalities.