Title: Practitioner's Guide to Using Research for Evidence-Informed Practice
Author: Allen Rubin
Publisher: John Wiley & Sons Limited
Genre: Psychotherapy and Counseling
ISBN: 9781119858584
Now that you understand the importance and nature of the evidence-informed practice (EIP) process, it's time to examine in more detail how to critically appraise the quality of the evidence you'll encounter when engaged in that process. We take a look at that in this chapter and in several chapters that follow. As we do that, you should keep in mind that our aim is not to learn how to find the perfect study. No such study exists. Every study has some limitations. Instead, we examine how to distinguish between evidence that, despite its relatively minor limitations, merits guiding our practice versus more seriously flawed evidence that should be viewed more cautiously.
Chapter 2 alludes to differentiating between studies with reasonable limitations versus studies with fatal flaws. If you appraise many studies, however, you'll soon realize that things aren't black and white. That is, the universe of practice-relevant studies contains not only exemplary studies and fatally flawed ones; there are many shades of gray. You'll want to exercise some degree of caution in being guided by any evidence you find, and the various types of evidence you find will reside along a continuum with regard to how much caution is warranted. Moreover, it will not always be easy to conclude that one intervention has the best evidence. You might encounter some ties for the best. Also, as is discussed in Chapter 2, your practice expertise and client attributes and preferences often will influence your course of action – sometimes even swaying you toward a course based on evidence that is less than best.
3.1 More than One Type of Hierarchy for More than One Type of EIP Question
If you've read much of the literature about EIP, or have discussed it with many colleagues, you probably have encountered some misconceptions about EIP. One misconception is that EIP implies an overly restrictive hierarchy of evidence – one that values evidence produced only by tightly controlled quantitative studies employing experimental designs. In those designs, clients are assigned randomly to different treatment conditions. In one treatment condition, some clients receive the intervention being tested and other clients are assigned to a no-treatment or routine treatment (sometimes called “services as usual” or treatment as usual) control condition. Treatment effectiveness is supported if the intervention group's outcome is significantly better than the no-treatment or routine treatment's outcome.
It is understandable that some perceive EIP as valuing evidence only if it is produced by experimental studies. That's because tightly controlled experiments actually do reside at the top of one of the research hierarchies implicit in EIP. That is, when our EIP question asks about whether a particular intervention really is effective in causing a particular outcome, the most conclusive way to rule out alternative plausible explanations for the outcome is through tightly controlled experiments. The chapters in Part II of this book examine those alternative plausible explanations and how various experimental and quasi-experimental designs attempt to control for them.
When thinking about research hierarchies in EIP, however, we should distinguish the term research hierarchy from the term evidentiary hierarchy. Both types of hierarchies imply a pecking order in which certain types of studies are ranked as more valuable or less valuable than others. In an evidentiary hierarchy, the relative value of the various types of studies depends on the rigor and logic of the research design and the consequent validity and conclusiveness of the inferences – or evidence – that it is likely to produce.
In contrast, the pecking order of different types of studies in a research hierarchy may or may not be connected to the validity or conclusiveness of the evidence associated with a particular type of study. When the order does depend on the likely validity or conclusiveness of the evidence, the research hierarchy can also be considered to be an evidentiary hierarchy. However, when the pecking order depends on the relevance or applicability of the type of research to the type of EIP question being asked, the research hierarchy would not be considered an evidentiary hierarchy. In other words, different research hierarchies are needed for different types of EIP questions because the degree to which a particular research design attribute is a strength or a weakness varies depending on the type of EIP question being asked and because some EIP questions render some designs irrelevant or infeasible.
Experiments get a lot of attention in the EIP literature because so much of that literature pertains to questions about the effectiveness of interventions, programs, or policies. However, not all EIP questions imply the need to make causal inferences about effectiveness. Some other types of questions are more descriptive or exploratory in nature and thus imply research hierarchies in which experiments have a lower status because they are less applicable. Although nonexperimental studies might offer less conclusive evidence about cause and effect, they can reside above experiments on a research hierarchy for some types of EIP questions. For example, Chapter 1 discusses how some questions that child welfare administrators might have could best be answered by nonexperimental studies. It also discusses how a homeless shelter administrator who is curious about the reasons for service refusal might seek answers in qualitative studies. Moreover, even when we seek to make causal inferences about interventions, EIP does not imply a black-and-white evidentiary standard in which evidence has no value unless it is based on experiments. For example, as interventions and programs are developed and refined, there is a general progression of research from conceptual work to pilot testing for feasibility and acceptability, toward larger and more rigorous efficacy and effectiveness studies. Oftentimes smaller, less tightly controlled intervention studies are conducted when interventions, programs, and policies are in development. These designs don't reflect poor-quality research, but rather a common progression across the development of new policies, programs, and interventions. Again, there are various shades of gray, and thus various levels on a hierarchy of evidence regarding the effects of interventions, as you will see throughout this book.
3.2 Qualitative and Quantitative Studies
Qualitative studies tend to employ flexible designs and subjective methods – often with small samples of research participants – in seeking to generate tentative new insights, deep understandings, and theoretically rich observations. In contrast, quantitative studies put more emphasis on producing precise and objective statistical findings that can be generalized to populations or on designs with …