Practitioner's Guide to Using Research for Evidence-Informed Practice. Allen Rubin

… and suggest strategies that you might choose to try to improve your success.

      Practitioners often must select an assessment tool in their practice. Many times the purpose is to diagnose clients, to assess their chances of achieving a goal, or to gauge their level of risk for an undesirable outcome. Other purposes might be to survey community residents about their service needs, to survey agency clients about their satisfaction with services, or to monitor client progress during treatment. Thus, another type of EIP question pertains to selecting the assessment tool that best fits one's practice setting and clientele.

      Common questions to ask in selecting the best assessment instrument are:

       Is the instrument reliable? An instrument is reliable to the extent that it yields consistent information. If you ask eight-year-olds whether their parent is overly protective of them, they might answer “yes” one week and “no” the next – not because the parent changed, but because the children have no idea what the term overly protective means and are therefore giving a haphazard answer just because they feel they must give some answer. If you get different answers from the same client to the same question at roughly the same point in time, it probably means there is something wrong with the question. Likewise, if an instrument's total score indicates severe depression on October 7 and mild depression on October 14, chances are the instrument as a whole is unreliable. (The first sketch following this list shows how this kind of consistency is commonly checked.)

       Is the instrument valid? An instrument is valid if it really measures what it is intended to measure. If youths who smoke marijuana every day consistently deny doing so on a particular instrument, then the instrument is not a valid measure of marijuana use. (Note that the instrument would be reliable because the answers, though untrue, would be consistent. Reliability is a necessary but not a sufficient condition for validity – the second sketch following this list makes that distinction concrete.)

       Is the instrument sensitive to relatively small but important changes? If you are monitoring client changes every week during a 10-week treatment period, an instrument that asks about the frequency of behaviors during the past six months won't be sensitive to the changes you hope to detect. Likewise, if you are treating a child with extremely low self-esteem, meaningful improvement can occur without the child achieving high self-esteem. An instrument that can only distinguish among youths with high, medium, and low self-esteem might not be sensitive enough to detect a client's movement from extremely low self-esteem to a less severe level of low self-esteem. (The third sketch following this list illustrates how coarse categories can mask real change.)

       Is the instrument feasible? If you are monitoring a child's progress from week to week regarding behavioral and emotional problems, a 100-item checklist probably will be too lengthy. Parents and teachers may not want to take the time to complete it every week, and if you are asking the child to complete it during office visits, there go your 45 minutes. If your clients can't read, then a written self-report scale won't work.

       Is the instrument culturally sensitive? The issue of an instrument's cultural sensitivity overlaps with the issue of feasibility. If your written self-report scale is in English, but your clients are recent immigrants who don't speak English, the scale will be culturally insensitive and infeasible for you to use. But cultural insensitivity can be a problem even if your scale is translated into another language. Something might go awry in the translation. Even if the translation is fine, certain phrases may have different meanings in different cultures. If most English-speaking Americans are asked whether they feel blue, they'll probably know that blue means sad. Translate that question into Spanish and ask “¿Está azul?” of a Spanish-speaking person who just crossed the border from Mexico, and you might get a very strange look. Cultural sensitivity also overlaps with reliability and validity. If clients don't understand your language, you might get a different answer every time you ask the same question. If clients think you are asking whether their skin is blue, they'll almost certainly say “no” even if they are in a very sad mood and willing to admit it.
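
      To make the notion of consistency concrete, here is a minimal Python sketch of how test-retest reliability is commonly estimated: administer the same instrument to the same clients twice, about a week apart, and correlate the two sets of total scores. The scores and the .80 benchmark are illustrative assumptions, not data from any actual instrument.

    # Minimal sketch of test-retest reliability (all scores are hypothetical).
    from scipy.stats import pearsonr

    # Total depression scores for the same eight clients, one week apart.
    week1 = [42, 35, 50, 28, 61, 47, 33, 55]
    week2 = [44, 33, 52, 30, 58, 49, 31, 57]

    r, _ = pearsonr(week1, week2)
    print(f"Test-retest reliability: r = {r:.2f}")

    # A common rule of thumb treats correlations of about .80 or higher as
    # adequate for clinical use, though conventions vary by instrument and purpose.
    print("Adequate for clinical use?", r >= 0.80)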
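
      The marijuana example can be made concrete the same way. In the invented data below, youths' answers are perfectly consistent from one administration to the next – the hallmark of reliability – yet they bear little relationship to the youths' actual behavior, so the instrument has no validity as a measure of marijuana use. All values are hypothetical.

    # Reliable but not valid: hypothetical self-reports of daily marijuana use.
    actual_use = [1, 1, 1, 0, 1, 0, 1, 0]  # 1 = really uses daily, 0 = does not
    report_t1  = [0, 0, 0, 0, 0, 0, 0, 0]  # first administration: everyone denies use
    report_t2  = [0, 0, 0, 0, 0, 0, 0, 0]  # one week later: identical answers

    # Reliability: the two administrations agree perfectly.
    print("Answers identical across administrations:", report_t1 == report_t2)

    # Validity: the reports match actual behavior only for the non-users.
    matches = sum(r == a for r, a in zip(report_t1, actual_use))
    print(f"Reports match actual use for {matches} of {len(actual_use)} youths.")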
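
      Finally, the self-esteem example shows why sensitivity matters. Suppose, hypothetically, that a child's true self-esteem rises from 12 to 25 on a 0-to-100 continuum – a meaningful gain – while the instrument reports only low, medium, and high categories. The cutoff values below are invented for illustration.

    # Hypothetical 0-100 self-esteem scores before and after treatment.
    before, after = 12, 25

    def coarse_category(score):
        # Collapse a 0-100 score into the three levels a coarse instrument reports.
        if score < 34:
            return "low"
        if score < 67:
            return "medium"
        return "high"

    print(f"Continuous scale: {before} -> {after} (a {after - before}-point gain)")
    print(f"Coarse instrument: {coarse_category(before)} -> {coarse_category(after)}")
    # Both scores fall in the "low" category, so the coarse instrument shows no change.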

      Many studies can be found that assess the reliability and validity of various assessment tools. Some also assess sensitivity. Although there are fewer studies that measure cultural sensitivity, the number is growing in response to the current increased emphasis on cultural responsivity and attention to diversity in the human services professions.

      Perhaps the most commonly posed type of EIP question pertains to selecting the most effective intervention, program, or policy. As noted previously, some managed care companies or government agencies define EBP (or EIP) narrowly, focusing only on this effectiveness question. They will call your practice evidence-informed only if you are providing a specific intervention that appears on their list of preferred interventions – one whose effectiveness has been supported by enough rigorous experimental outcome evaluations to merit their “seal of approval” as an evidence-informed intervention. As noted earlier, that narrow definition fails to allow for the incorporation of practitioner expertise and patient values. The EIP process, however, allows practitioners to choose a different intervention if the “approved” one appears to be contraindicated in light of client characteristics and preferences or the realities of the practice context.

      The process definition of EIP is more consistent with the scientific method, which holds that all knowledge is provisional and subject to refutation. In science, knowledge is constantly evolving. Indeed, at any moment a new study might appear that debunks current perceptions that a particular intervention has the best empirical support. For example, new studies may test interventions that were previously untested and therefore of unknown efficacy, or demonstrate unintended side effects or consequences that reduce the attractiveness of existing “evidence-informed” interventions when they are disseminated more broadly in different communities. Sometimes the published evidence can be contradictory or unclear. Rather than feel compelled to adhere to a list of approved interventions that predates such new studies, practitioners should be free to engage in an EIP process that enables them to critically appraise and be informed by existing and emerging scientific evidence. Based on practitioner expertise and client characteristics, practitioners engaging in the EIP process may choose to implement an intervention that has a promising yet less rigorous evidence base. Whatever the strength of the evidence supporting the chosen intervention, practitioners must assess whether it works for each individual client. Even the most effective treatments will not work for everyone. Sometimes the first-choice intervention doesn't work, and a second or even third approach (one that may have less research evidence) is needed.

      Thus, when the EIP question pertains to decisions about what intervention program or policy to provide, practitioners will attempt to maximize the likelihood that their clients will receive the best intervention possible in light of the following:

       The most rigorous scientific evidence available.

       Practitioner expertise.

       Client attributes, values, preferences, and circumstances.

       Assessing for each case whether the chosen intervention is achieving the desired outcome.

       If the intervention is not achieving the desired outcome, repeating the process of choosing and evaluating alternative interventions.

Schematic illustration of the original EIP model.