Title: Practitioner's Guide to Using Research for Evidence-Informed Practice
Author: Allen Rubin
Publisher: John Wiley & Sons Limited
Genre: Psychotherapy and Counseling
ISBN: 9781119858584
When we seek to describe and understand people's experiences – particularly when we want to develop a deep empathic understanding of what it's like to walk in their shoes or to learn about their experiences from their point of view – qualitative studies reside at the top of the research hierarchy. Qualitative research can provide rich and detailed information that is difficult, or even impossible, to capture accurately or fully in a quantitative study. Gambrill (2006) illustrated the superiority of qualitative studies for this EIP purpose via a study by Bourgois et al. (2003), which examined the kinds of risks taken by street addicts. Bourgois immersed himself in the “shooting galleries and homeless encampments of a network of heroin addicts living in the bushes of a public park in downtown San Francisco” (p. 260). Virtually all of the addicts reported that when they are surveyed with questionnaires, they distort their risky behavior. Often, they underreport it so that it takes less time to complete the questionnaire. Also, they may deceive themselves about the risks they take because they don't want to think about the risks. Consequently, quantitative methods like surveys would rank lower on a hierarchy for this type of EIP question.
3.3.3 What Assessment Tool Should Be Used?
As discussed in Chapter 1, common questions to ask in selecting the best assessment instrument pertain to whether the instrument is reliable, valid, sensitive to small changes, feasible to administer, and culturally sensitive. Most of the studies that assess reliability, validity, and cultural sensitivity use correlational designs. For reliability, they might administer a scale twice in a short period to a large sample of people and assess test-retest reliability in terms of whether the two sets of scale scores are highly correlated. Or they might administer the scale once and see if subscale scores on subsets of similar items correlate with each other. For validity, they might administer the scale to two groups of people known to be markedly different regarding the concept being measured and then see if the average scores of the two groups differ significantly. For sensitivity, they might use a pretest-posttest design with no control group and administer the scale before and after treatment to see if the scale can detect small improvements. Although experiments and quasi-experiments are rarely the basis for assessing a scale's validity or sensitivity, it is not unheard of for an experiment or a quasi-experiment to provide new or additional evidence about those features of a scale. That is, if a treatment group's average scores improve significantly more than the control group's, that provides evidence that the scale is measuring what the treatment intends to affect and that the scale is sensitive enough to detect improvements. We return to these issues and cover them in greater depth in Chapter 11. That entire chapter is devoted to critically appraising and selecting assessment instruments.
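To make those correlational procedures concrete, here is a minimal sketch in Python, using hypothetical data and variable names of our own (not from the book), of a test-retest reliability check and a known-groups validity check:

```python
# Illustrative sketch (hypothetical data): the correlational analyses
# described above for appraising an assessment scale.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Test-retest reliability: the same 200 respondents complete the scale twice
# a few weeks apart; a high Pearson correlation suggests stable scores.
time1 = rng.normal(50, 10, 200)
time2 = time1 + rng.normal(0, 3, 200)   # hypothetical second administration
r_retest, _ = stats.pearsonr(time1, time2)

# Known-groups validity: two groups known to differ on the concept being
# measured (e.g., a clinical and a nonclinical sample) should differ
# significantly in their average scale scores.
clinical = rng.normal(60, 10, 100)
nonclinical = rng.normal(45, 10, 100)
t_stat, p_value = stats.ttest_ind(clinical, nonclinical)

print(f"Test-retest r = {r_retest:.2f}")
print(f"Known-groups t = {t_stat:.2f}, p = {p_value:.4f}")
```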
3.3.4 What Intervention, Program, or Policy Has the Best Effects?
As we've already noted, tightly controlled experimental designs are the gold standard when we are seeking evidence about whether a particular intervention – and not some alternative explanation – is the real cause of a particular outcome. Suppose, for example, we are employing an innovative new therapy for treating survivors of a very recent traumatic event such as a natural disaster or a crime. Our aim would be to alleviate their acute trauma symptoms or to prevent the development of posttraumatic stress disorder (PTSD).
If all we know is that their symptoms improve after our treatment, we cannot rule out plausible alternative explanations for that improvement. Maybe our treatment had little or nothing to do with it. Instead, perhaps most of the improvement can be attributed to the support they received from relatives or other service providers. Perhaps the mere passage of time helped. We can determine whether we can rule out the plausibility of such alternative explanations by randomly assigning survivors to an experimental group that receives our innovative new therapy versus a control group that receives routine treatment as usual. If our treatment group has a significantly better outcome on average than the control group, we can rule out contemporaneous events or the passage of time as plausible explanations, since both groups had an equal opportunity to have been affected by such extraneous factors.
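As a rough illustration of that logic, the following sketch (again with simulated, hypothetical numbers rather than anything from the book) randomly assigns simulated survivors to the experimental and control conditions and then compares average improvement, which is the comparison an experiment ultimately rests on:

```python
# Illustrative sketch (hypothetical data): random assignment and the
# experimental-versus-control comparison described above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated symptom-improvement scores for 120 survivors, reflecting
# extraneous influences such as support from relatives or the passage of time.
improvement = rng.normal(10, 5, 120)

# Random assignment gives every survivor the same chance of landing in either
# group, so those extraneous influences are spread evenly across the groups.
assignment = rng.permutation(np.repeat(["experimental", "control"], 60))
experimental = improvement[assignment == "experimental"] + 4  # hypothetical added treatment effect
control = improvement[assignment == "control"]

t_stat, p_value = stats.ttest_ind(experimental, control)
print(f"Mean difference = {experimental.mean() - control.mean():.2f}, p = {p_value:.4f}")
```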
Suppose we did not randomly assign survivors to the two groups. Suppose instead we treated those survivors who were exhibiting the worst trauma symptoms in the immediate aftermath of the traumatic event and compared their outcomes to the outcomes of the survivors whom we did not treat. Even if the ones we treated had significantly better outcomes, our evidence would be more flawed than with random assignment. That's because the difference in outcome might have had more to do with differences between the two groups to begin with. Maybe our treatment group improved more simply because their immediate reaction to the trauma was so much more extreme that even without treatment their symptoms would have improved more than the less extreme symptoms of the other group.
As another alternative to random assignment, suppose we simply compared the outcomes of the survivors we treated to the outcomes of the ones who declined our services. If the ones we treated had on average better outcomes, that result very plausibly could be due to the fact that the ones who declined our treatment had less motivation or fewer support resources than those who wanted to and were able to utilize our treatment.
In each of the previous two examples, the issue is whether the two groups being compared were really comparable. To the extent that doubt exists as to their comparability, the research design is said to have a selectivity bias. Consequently, when evaluations of outcome compare different treatment groups that have not been assigned randomly, they are called quasi-experiments. Quasi-experiments have the features of experimental designs, but without the random assignment.
Not all quasi-experimental designs are equally vulnerable to selectivity biases. A design that compares treatment recipients to treatment decliners, for example, would be much more vulnerable to a selectivity bias than a design that provides the new treatment versus the routine treatment depending solely on whether the new treatment therapists have caseload openings at the time of referral of new clients. (The latter type of quasi-experimental design is called an overflow design.)
So far we have developed a pecking order of four types of designs for answering EIP questions about effectiveness. Experiments are at the top, followed by quasi-experiments with relatively low vulnerabilities to selectivity biases. Next come quasi-experiments whose selectivity bias vulnerability represents a severe and perhaps fatal flaw. At the bottom are designs that assess client change without using any control or comparison group whatsoever.
But our hierarchy is not yet complete. Various other types of studies are used to assess effectiveness. One alternative is called single-case designs. You may have seen similar labels, such as single-subject designs, single-system experiments, and so on. All these terms mean the same thing: a design in which a single client or group is assessed repeatedly at regular intervals before and after treatment commences. With enough repeated measurements in each phase, it can be possible to infer which explanation for any improvement in trauma symptoms is more plausible: treatment effects versus contemporaneous events or the passage of time. We examine this logic in more depth later in this book. For now, it is enough to understand that when well executed, these designs can offer some useful, albeit tentative, evidence about whether an intervention really is the cause of a particular outcome. Therefore, these designs merit a sort of medium status on the evidentiary hierarchy for answering EIP questions about effectiveness.
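For a flavor of how that inference works, here is a minimal sketch (hypothetical numbers, not the book's data) of a single-case design with eight weekly baseline measurements followed by eight measurements after treatment begins:

```python
# Illustrative sketch (hypothetical data): repeated measurements of one
# client's trauma-symptom score before and after treatment commences.
import numpy as np

rng = np.random.default_rng(7)

baseline = 20 + rng.normal(0, 1.5, 8)                            # 8 weekly baseline scores
treatment = 20 - np.arange(1, 9) * 1.2 + rng.normal(0, 1.5, 8)   # 8 scores after treatment starts

# A stable baseline followed by a shift that begins only when treatment begins
# makes the treatment a more plausible explanation for the improvement than
# contemporaneous events or the mere passage of time. In practice the scores
# in each phase would also be graphed and inspected visually.
print(f"Baseline mean: {baseline.mean():.1f}")
print(f"Treatment-phase mean: {treatment.mean():.1f}")
```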
Next on the hierarchy come correlational studies. Instead of manipulating logistical arrangements to assess intervention effectiveness, correlational studies attempt to rely on statistical associations that can yield preliminary, but not conclusive, evidence about intervention effects.