Practitioner's Guide to Using Research for Evidence-Informed Practice. Allen Rubin


3.3 Which Types of Research Designs Apply to Which Types of EIP Questions?
  3.3.1 What Factors Best Predict Desirable and Undesirable Outcomes?
  3.3.2 What Can I Learn about Clients, Service Delivery, and Targets of Intervention from the Experiences of Others?
  3.3.3 What Assessment Tool Should Be Used?
  3.3.4 What Intervention, Program, or Policy Has the Best Effects?
  3.3.5 Matrix of Research Designs by Research Questions
  3.3.6 Philosophical Objections to the Foregoing Hierarchy: Fashionable Nonsense
Key Chapter Concepts
Review Exercises
Additional Readings

      Now that you understand the importance and nature of the evidence-informed practice (EIP) process, it's time to examine in more detail how to critically appraise the quality of the evidence you'll encounter when engaged in that process. We take a look at that in this chapter and in several chapters that follow. As we do so, you should keep in mind that our aim is not to learn how to find the perfect study. No such study exists. Every study has some limitations. Instead, we examine how to distinguish evidence that, despite its relatively minor limitations, merits guiding our practice from more seriously flawed evidence that should be viewed more cautiously.

      If you've read much of the literature about EIP, or have discussed it with many colleagues, you probably have encountered some misconceptions about EIP. One misconception is that EIP implies an overly restrictive hierarchy of evidence – one that values only evidence produced by tightly controlled quantitative studies employing experimental designs. In those designs, clients are assigned randomly to different treatment conditions: some clients receive the intervention being tested, while other clients are assigned to a no-treatment or routine treatment (sometimes called “services as usual” or “treatment as usual”) control condition. Treatment effectiveness is supported if the intervention group's outcome is significantly better than the no-treatment or routine treatment group's outcome.

      It is understandable that some perceive EIP as valuing evidence only if it is produced by experimental studies. That's because tightly controlled experiments actually do reside at the top of one of the research hierarchies implicit in EIP. That is, when our EIP question asks about whether a particular intervention really is effective in causing a particular outcome, the most conclusive way to rule out alternative plausible explanations for the outcome is through tightly controlled experiments. The chapters in Part II of this book examine those alternative plausible explanations and how various experimental and quasi-experimental designs attempt to control for them.

      When thinking about research hierarchies in EIP, however, we should distinguish the term research hierarchy from the term evidentiary hierarchy. Both types of hierarchies imply a pecking order in which certain types of studies are ranked as more valuable or less valuable than others. In an evidentiary hierarchy, the relative value of the various types of studies depends on the rigor and logic of the research design and the consequent validity and conclusiveness of the inferences – or evidence – that it is likely to produce.

      In contrast, the pecking order of different types of studies in a research hierarchy may or may not be connected to the validity or conclusiveness of the evidence associated with a particular type of study. When the order does depend on the likely validity or conclusiveness of the evidence, the research hierarchy can also be considered to be an evidentiary hierarchy. However, when the pecking order depends on the relevance or applicability of the type of research to the type of EIP question being asked, the research hierarchy would not be considered an evidentiary hierarchy. In other words, different research hierarchies are needed for different types of EIP questions because the degree to which a particular research design attribute is a strength or a weakness varies depending on the type of EIP question being asked and because some EIP questions render some designs irrelevant or infeasible.

      Qualitative studies tend to employ flexible designs and subjective methods – often with small samples of research participants – in seeking to generate tentative new insights, deep understandings, and theoretically rich observations. In contrast, quantitative studies put more emphasis on producing precise and objective statistical findings that can be generalized to populations or on designs with