Practitioner's Guide to Using Research for Evidence-Informed Practice. Allen Rubin

the most effective medication, for example), neither would you want to be treated by a physician who does a thorough diagnosis and takes a thorough health history, and then provides a treatment based solely on her own comfort level and experience in providing that treatment, ignorant of or dismissing research on its effectiveness on the basis of her own predilections.

      One of the thornier issues in making your intervention decision concerns the number of strong studies needed to determine which intervention has the best evidence. For example, will 10 relatively weak, but not fatally flawed, studies with positive results supporting Intervention A outweigh one very strong study with positive results supporting Intervention B? Will one strong study suggesting that Intervention C has moderate effects outweigh one or two relatively weak studies suggesting that Intervention D has powerful effects? Although we lack an irrefutable answer to these questions, many EIP experts would argue that a study that is very strong from a scientific standpoint, such as one that has only a few minor flaws, should outweigh a large number of weaker studies containing serious (albeit perhaps not fatal) flaws. Supporting this viewpoint is research that suggests that studies with relatively weaker methodological designs can overestimate the degree of effectiveness of interventions (e.g., Cuijpers et al., 2010; Wykes et al., 2008). If you find that Intervention A is supported by one or two very strong studies and you find no studies that are equally strong from a scientific standpoint in supporting any alternative interventions, then your findings would provide ample grounds for considering Intervention A to have the best evidence.

      However, determining that Intervention A has the best evidence is not the end of the story. Future studies might refute the current ones or might show newer interventions to be more effective than Intervention A. Although Intervention A might have the best evidence for the time being, you should remember that EIP is an ongoing process. If you continue to provide Intervention A for the next 10 or more years, your decision to do so should rest on occasionally repeating the EIP process and confirming that it still has the best supportive evidence.

      There may be reasons why Intervention A – despite having the best evidence – is not the best choice for your client. As discussed, your client's characteristics or your practice context might contraindicate Intervention A and thus influence you to select an alternative intervention with the next best evidence base. And even if you conclude that Intervention A is the best choice for your client, you should inform the client about the evidence and involve the client in making decisions about which interventions to use. We are not suggesting that you overwhelm clients with lengthy, detailed descriptions of the evidence. You might just tell them that, based on the research so far, Intervention A appears to have the best chance of helping them. Be sure to inform them of what their participation in the intervention would require of them (e.g., time commitment, modality, and homework), any undesirable side effects or discomfort they might experience with that intervention, and the possibility that the treatment may not work for them. With this information, the client might not consent to the treatment, in which case you'll need to consider an alternative intervention with the next best evidence base. A side benefit of engaging the client in making an informed decision is that doing so might improve the client's commitment to the treatment process, which, in turn, might enhance the prospects for a successful treatment outcome. Recall from our discussion in Chapter 1 that some of the most important factors influencing service effectiveness are related to the quality of the client-practitioner relationship.

      Before you begin to provide the chosen intervention, you and the client should identify some measurable treatment goals that can be monitored to see if the intervention is really helping the client. This phase is important for several reasons. One reason, as noted previously, is that even our most effective interventions don't help everybody. Your client may be one of the folks who don't benefit from it. Another reason is that even if your client could benefit from the intervention, perhaps there is something about the way you are providing it – or something about your practice context – that is making it less effective than it was in the research studies. When interventions are implemented in usual practice, they may not be implemented with fidelity. In other words, interventions are often changed by practitioners in response to the particulars of their practice context, client characteristics, or their own preferences. Unfortunately, these changes can compromise the effectiveness of the intervention. We discuss issues related to intervention fidelity in more detail in Chapter 12.

      By monitoring client progress, you'll also be better equipped to determine whether you need to continue or alter the intervention in light of goal attainment or lack thereof. Monitoring client progress might also enable you to share with clients, on an ongoing basis, charted graphs or dashboards displaying their treatment progress. This sharing might further enhance client commitment to treatment. It also gives clients more chances to inform you of things they might have experienced outside of treatment at points that coincide with blips up or down on the graphs. Learning these things might enhance your ability to help the client. Chapters 7 and 12 of this book pertain to this phase of the EIP process.

      Having research evidence inform your practice decisions is a lot easier said than done. For example, searching for and finding the best scientific evidence to inform practice decisions can be difficult and time consuming. Your caseload demands may leave little time to search for evidence, appraise it, and then acquire the skills needed to deliver the intervention you'd like to provide. Moreover, in some areas of practice there may be very little rigorous research evidence available. This can be especially true outside of the health and mental health fields of practice. As you engage in the EIP process, you might identify important gaps in the research.

      Another problem is that even when you find the best evidence, it might not easily guide your practice decisions. Perhaps, for example, equally strong studies reach conflicting conclusions. In the vast literature evaluating the effectiveness of exposure therapy versus EMDR therapy in treating PTSD, for example, Rubin (2003) found approximately equal numbers of rigorous clinical outcome experiments favoring the effectiveness of exposure therapy over EMDR and favoring EMDR over exposure therapy.

      Some searches will fail to find any rigorous studies that clearly supply strong evidence supporting the effectiveness of a particular intervention approach. Perhaps, instead, you might find many seriously flawed studies, each of which supports the effectiveness of a different intervention approach. Some searches might reveal only which interventions are ineffective (at least those searches might help you decide what not to do).

      Likewise, some interventions with the best evidence might never have