Practitioner's Guide to Using Research for Evidence-Informed Practice. Allen Rubin

… – not just clients – to inform your practice decisions. For example, gaining insight into practitioners' experiences using a new caregiver support intervention, or family members' experiences caring for an elderly client, can help inform your decisions about implementing a caregiver support intervention in your own practice.

      When we seek to describe and understand people's experiences – particularly when we want to develop a deep empathic understanding of what it's like to walk in their shoes or to learn about their experiences from their point of view – qualitative studies reside at the top of the research hierarchy. Qualitative research can provide rich and detailed information that is difficult, or even impossible, to capture accurately or fully in a quantitative study. Gambrill (2006) illustrated the superiority of qualitative studies for this EIP purpose via a study by Bourgois et al. (2003), which examined the kinds of risks taken by street addicts. Bourgois immersed himself in the “shooting galleries and homeless encampments of a network of heroin addicts living in the bushes of a public park in downtown San Francisco” (p. 260). Virtually all of the addicts reported that, when surveyed with questionnaires, they distort their accounts of their risky behavior. Often they underreport it so that completing the questionnaire takes less time. They may also deceive themselves about the risks they take because they don't want to think about them. Consequently, quantitative methods like surveys would rank lower on a hierarchy for this type of EIP question.

      As we've already noted, tightly controlled experimental designs are the gold standard when we are seeking evidence about whether a particular intervention – and not some alternative explanation – is the real cause of a particular outcome. Suppose, for example, we are employing an innovative new therapy for treating survivors of a very recent traumatic event such as a natural disaster or a crime. Our aim would be to alleviate their acute trauma symptoms or to prevent the development of posttraumatic stress disorder (PTSD).

      If all we know is that their symptoms improve after our treatment, we cannot rule out plausible alternative explanations for that improvement. Maybe our treatment had little or nothing to do with it. Instead, perhaps most of the improvement can be attributed to the support they received from relatives or other service providers. Perhaps the mere passage of time helped. We can determine whether we can rule out the plausibility of such alternative explanations by randomly assigning survivors to an experimental group that receives our innovative new therapy versus a control group that receives routine treatment as usual. If our treatment group has a significantly better outcome on average than the control group, we can rule out contemporaneous events or the passage of time as plausible explanations, since both groups had an equal opportunity to have been affected by such extraneous factors.
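
      To make that logic concrete, here is a minimal sketch in Python of the reasoning above. Everything in it is hypothetical: the improvement scores, the assumed three-point treatment effect, and the five-point “passage of time” improvement are invented for illustration, not drawn from any study.

```python
import random

random.seed(0)  # reproducible illustration

def simulate_survivor(treated: bool) -> float:
    """Hypothetical symptom-improvement score for one survivor.

    Every survivor improves somewhat because of the passage of time and
    outside support (the extraneous factors); treated survivors get an
    extra boost only if the therapy itself works. All numbers are invented.
    """
    passage_of_time = random.gauss(5.0, 2.0)    # improvement everyone tends to show
    treatment_effect = 3.0 if treated else 0.0  # assumed true effect of the new therapy
    return passage_of_time + treatment_effect

# Random assignment: a coin flip decides each survivor's condition, so both
# groups are equally exposed to time and contemporaneous events.
treatment_group, control_group = [], []
for _ in range(1000):
    if random.random() < 0.5:
        treatment_group.append(simulate_survivor(treated=True))
    else:
        control_group.append(simulate_survivor(treated=False))

mean = lambda scores: sum(scores) / len(scores)
print(f"treatment group mean improvement: {mean(treatment_group):.2f}")
print(f"control group mean improvement:   {mean(control_group):.2f}")
# The gap between the two means (about 3 points) reflects the therapy itself,
# because the time-and-support component averages out to the same value in
# both groups.
```

      Because both groups share the same average improvement from time and support, the difference between their means isolates the treatment effect; that is exactly what random assignment buys us.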

      Suppose we did not randomly assign survivors to the two groups. Suppose instead we treated those survivors who were exhibiting the worst trauma symptoms in the immediate aftermath of the traumatic event and compared their outcomes to the outcomes of the survivors whom we did not treat. Even if the ones we treated had significantly better outcomes, our evidence would be more flawed than with random assignment. That's because the difference in outcome might have had more to do with preexisting differences between the two groups than with the treatment itself. Maybe our treatment group improved more simply because their immediate reaction to the trauma was so much more extreme that, even without treatment, their symptoms would have improved more than the less extreme symptoms of the other group.

      As another alternative to random assignment, suppose we simply compared the outcomes of the survivors we treated to the outcomes of the ones who declined our services. If the ones we treated had on average better outcomes, that result very plausibly could be due to the fact that the ones who declined our treatment had less motivation or fewer support resources than those who wanted to and were able to utilize our treatment.
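
      The selectivity-bias problem in these two non-randomized comparisons can also be sketched with a brief, purely illustrative simulation. In it the therapy is assumed to have no effect at all, and a made-up “support” score both drives outcomes and makes survivors more likely to accept treatment, so any difference between the groups is produced entirely by who chose to accept.

```python
import random

random.seed(1)  # reproducible illustration

def outcome(support: float) -> float:
    """Hypothetical outcome driven only by support resources plus noise.

    The true effect of the therapy is deliberately zero in this sketch.
    """
    return support * 4.0 + random.gauss(0.0, 1.0)

accepters, decliners = [], []
for _ in range(1000):
    support = random.random()            # motivation / support resources, 0 to 1
    accepts = random.random() < support  # better-supported survivors accept more often
    (accepters if accepts else decliners).append(outcome(support))

mean = lambda scores: sum(scores) / len(scores)
print(f"treated (self-selected) mean outcome: {mean(accepters):.2f}")
print(f"decliners mean outcome:               {mean(decliners):.2f}")
# The treated group looks clearly better even though the therapy does nothing
# here: the gap comes entirely from who was willing and able to accept it.
```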

      Not all quasi-experimental designs are equally vulnerable to selectivity biases. A design that compares treatment recipients to treatment decliners, for example, would be much more vulnerable to a selectivity bias than a design that assigns clients to the new treatment or to routine treatment depending solely on whether the new treatment therapists have caseload openings at the time new clients are referred. (The latter type of quasi-experimental design is called an overflow design.)

      So far we have developed a pecking order of four types of designs for answering EIP questions about effectiveness. Experiments are at the top, followed by quasi-experiments with relatively low vulnerabilities to selectivity biases. Next come quasi-experiments whose selectivity bias vulnerability represents a severe and perhaps fatal flaw. At the bottom are designs that assess client change without using any control or comparison group whatsoever.

      But our hierarchy is not yet complete. Various other types of studies are used to assess effectiveness. One alternative is called single-case designs. You may have seen similar labels, such as single-subject designs, single-system experiments, and so on. All these terms mean the same thing: a design in which a single client or group is assessed repeatedly at regular intervals before and after treatment commences. With enough repeated measurements in each phase, it can be possible to infer which explanation for any improvement in trauma symptoms is more plausible: treatment effects versus contemporaneous events or the passage of time. We examine this logic in more detail later in this book. For now, it is enough to understand that, when well executed, these designs can offer some useful, albeit tentative, evidence about whether an intervention really is the cause of a particular outcome. Therefore, these designs merit a sort of medium status on the evidentiary hierarchy for answering EIP questions about effectiveness.
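
      The single-case logic can be illustrated with another small, hypothetical sketch. The weekly trauma-symptom scores below are invented (higher means worse); the point is only the pattern a practitioner would look for across the repeated measurements.

```python
# Invented weekly trauma-symptom scores (higher = worse), for illustration only.
baseline_phase  = [22, 23, 21, 22, 23, 22]   # repeated measures before treatment begins
treatment_phase = [20, 17, 15, 12, 10, 9]    # repeated measures after treatment begins

def mean(scores):
    return sum(scores) / len(scores)

def weekly_trend(scores):
    """Average week-to-week change across a phase."""
    changes = [later - earlier for earlier, later in zip(scores, scores[1:])]
    return sum(changes) / len(changes)

print(f"baseline:  mean={mean(baseline_phase):.1f}, trend={weekly_trend(baseline_phase):+.2f} per week")
print(f"treatment: mean={mean(treatment_phase):.1f}, trend={weekly_trend(treatment_phase):+.2f} per week")
# A stable, flat baseline followed by a marked downward shift that begins only
# when treatment starts makes "mere passage of time" a less plausible
# explanation than the treatment itself, which is the inference these designs
# are built to support.
```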

      Next on the hierarchy come correlational studies. Instead of manipulating logistical arrangements to assess intervention effectiveness, correlational studies attempt to rely on statistical associations that can yield preliminary, but not conclusive, …