Title: Practitioner's Guide to Using Research for Evidence-Informed Practice
Author: Allen Rubin
Publisher: John Wiley & Sons Limited
Genre: Psychotherapy and counseling
ISBN: 9781119858584
Some scholars who favor qualitative inquiry misperceive EIP as devaluing qualitative research. Again, that misperception is understandable in light of the predominant attention given to causal questions about intervention effectiveness in the EIP literature, and the preeminence of experiments as the “gold standard” for sorting out whether an intervention or some other explanation is really the cause of a particular outcome. That misperception is also understandable because when the EIP literature does use the term evidentiary hierarchy or research hierarchy it is almost always in connection with EIP questions concerned with verifying whether it is really an intervention – and not something else – that is the most plausible cause of a particular outcome. Although the leading texts and articles on the EIP process clearly acknowledge the value of qualitative studies, when they use the term hierarchy it always seems to be in connection with causal questions for which experiments provide the best evidence.
A little later in this chapter, we examine why experiments reside so high on the evidentiary hierarchy for answering questions about intervention effectiveness. Right now, however, we reiterate the proposition that more than one research hierarchy is implicit in the EIP process. For some questions – like the earlier one about understanding homeless shelter experiences, for example – we'd put qualitative studies at the top of a research hierarchy and experiments at the bottom.
Countless specific kinds of EIP questions would be applicable to a hierarchy where qualitative studies might reside at the top. We'll just mention two more examples: Are patient-care staff members in nursing homes or state hospitals insensitive, neglectful, or abusive – and if so, in what ways? To answer this question, a qualitative inquiry might involve posing as a resident in such a facility.
A second example might be: How do parents of mentally ill children perceive the way they (the parents) are treated by mental health professionals involved with their child? For example, do they feel blamed for causing or exacerbating the illness (and thus feel more guilt)? Open-ended and in-depth qualitative interviews might be the best way to answer this question. (Administering a questionnaire in a quantitative survey with a large sample of such parents might also help.) We cannot imagine devising an experiment for such a question, and therefore again would envision experiments at the bottom of a hierarchy in which qualitative interviewing (or quantitative surveys) would be at or near the top.
3.3 Which Types of Research Designs Apply to Which Types of EIP Questions?
Chapter 1 identifies and discusses six types of EIP questions. If research hierarchies were to be developed for each of these types of questions, experimental designs would rank high on the ones about effectiveness, but would either be infeasible or of little value for the others. Qualitative studies would rank low on the ones about effectiveness, but high on the one about understanding client experiences.
Let's now look further at some types of research studies that would rank high and low for some types of EIP questions. In doing so, let's save the question about effectiveness for last. Because the fifth and sixth types of EIP questions – about costs and potential harmful effects – tend to pertain to the same types of designs as do questions of effectiveness, we'll skip those two so as to avoid redundancy. The point in this discussion is not to exhaustively cover every possible type of design for every possible type of EIP question. Instead, it is just to illustrate how different types of EIP questions imply different types of research designs and that the research hierarchy for questions about effectiveness does not apply to other types of EIP questions. Let's begin with the question: What factors best predict desirable and undesirable outcomes?
3.3.1 What Factors Best Predict Desirable and Undesirable Outcomes?
Later in this chapter, we'll see that correlational studies rank relatively low on a research hierarchy for questions about effectiveness. We'll see that although they can have value in informing practice decisions about the selection of an intervention with the best chances of effectiveness, other designs rank higher. Experimental outcome studies, for example, rank much higher. But for questions about circumstances or attributes that best predict prognosis or risk, correlational studies are the most useful. With these studies, multivariate statistical procedures (statistics that account for multiple factors at once) can be employed to identify factors that best predict things we'd like to avoid or see happen.
Returning to the foster-care example discussed earlier, suppose you are a child welfare administrator or caseworker and want to minimize the odds of unsuccessful foster-care placements. One type of correlational study that you might find to be particularly useful would employ the case-control design. A study using this design to identify the factors that best predict whether foster-care placements will be successful or unsuccessful might proceed as follows:
1 It would define what case record information distinguishes successful from unsuccessful placements.
2 It would obtain a large and representative sample of foster-care placements depicted in case records.
3 It would then divide those cases into two groups: those in which the foster-care placement was successful and those in which it was unsuccessful.
4 It would enter all of the placement characteristics into a multivariate statistical analysis, seeking to identify which characteristics differed the most between the successful and unsuccessful placements (when all other factors are controlled) and thus best predicted success or failure.
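To make step 4 of this design more concrete, the following is a minimal sketch, in Python, of one way such a multivariate analysis could be carried out. The data file name, the column names, and the choice of logistic regression are hypothetical illustrations, not details taken from the book or from any actual study.

```python
# Illustrative sketch of a case-control style multivariate analysis.
# The data file, column names, and model choice are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

# One row per foster-care placement drawn from case records (hypothetical file).
placements = pd.read_csv("placements.csv")

# Outcome: 1 = successful placement, 0 = unsuccessful (hypothetical coding).
outcome = placements["successful"]

# Candidate predictor characteristics recorded in the case records
# (illustrative variable names only).
predictors = placements[["child_age", "prior_placements",
                         "kinship_placement", "siblings_placed_together"]]

# A logistic regression enters all characteristics at once, so each
# coefficient reflects a characteristic's association with placement
# success while the other characteristics are statistically controlled.
result = sm.Logit(outcome, sm.add_constant(predictors)).fit()

print(result.summary())               # coefficients and p-values
print(np.exp(result.params).round(2)) # odds ratios, easier to interpret
```

In an actual study the predictors would be whatever placement characteristics the case records contain, and the researchers might prefer a different multivariate procedure; the point is simply that all the characteristics are analyzed together, so that each one's contribution to predicting success or failure is assessed with the others controlled.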
If your previous research courses extolled the wonders of experiments, at this point you might exclaim, “Wait a minute! Why rank correlational studies above experiments here?” It's a good question, and we'll answer it with three others: Can you imagine the staff members of any child welfare agency permitting children to be assigned randomly to different types of foster placements? What would they say about the ethics and pragmatics of such an idea? What might they think of someone for even asking?
Correlational studies are not the only ones that can be useful in identifying factors that predict desirable or undesirable outcomes. Qualitative studies can be useful, too. For example, let's return to the question of why so many homeless people refuse to use shelter services. As is mentioned in Chapter 1, studies that employ in-depth, open-ended interviews of homeless people – or in which researchers themselves live on the streets among the homeless and experience what it's like to sleep in a shelter – can provide valuable insights into how practitioners might design a shelter program that alleviates the resistance homeless people have to utilizing the shelter.
3.3.2 What Can I Learn about Clients, Service Delivery, and Targets of Intervention from the Experiences of Others?
In Chapters 1 and 2, we saw that some studies suggest that one of the most important factors influencing service effectiveness is the quality of the practitioner-client relationship, and that factor might have more influence on treatment outcome than the choices practitioners make about what particular interventions to employ. We also know that one of the most important aspects of a practitioner's relationship skills is empathy. It seems reasonable to suppose that the better the practitioner's understanding of what it's like to have had the client's experiences – what it's like to have walked in the client's shoes, so to speak – the more empathy the practitioner is likely to convey in relating to the client. In other instances you may want to learn about the experiences …