Saturday, September 28, 2002

This presentation is part of: Methodological Issues in Intervention Research

Methodological issues and safeguards in effectiveness research

Maureen McClatchey, PhD, senior biostatistician1, Patricia Moritz, RN, PhD, FAAN, associate professor and director1, Souraya Sidani, RN, PhD, associate professor2, and Dana Epstein, RN, PhD, associate chief nurse for research3. (1) University of Colorado Health Sciences Center, National Center for Children, Families and Communities, Denver, CO, USA, (2) Faculty of Nursing, University of Toronto, Toronto, ON, Canada, (3) Department of Veterans Affairs, Carl T. Hayden VA Medical Center, Phoenix, AZ, USA

Objective: Evaluating the effectiveness of nursing interventions in producing desired outcomes is critical for developing the knowledge base that guides clinical practice. In many instances, interventions are delivered under the conditions of everyday practice, which are characterized by a complex system of causes and effects. Multiple factors related to the characteristics of the clients, the clinician, the setting and its environment, and the intervention influence outcome achievement. Controlling for these factors, as is often recommended, is difficult to accomplish, does not take into account the complexity of clinical reality, and produces results of limited utility for guiding practice. Alternative strategies are needed to deal meaningfully with the influence of these factors. These strategies fall under the rubric of the theory-driven approach to intervention evaluation and consist of accounting for, rather than controlling, the factors that affect outcomes. The alternative strategies have several implications for the design and conduct of intervention evaluation research. In this paper, strategies for dealing with four interrelated aspects of a research design are discussed and illustrated with examples: client selection, fidelity of intervention implementation, setting-related contextual factors, and variability in the response to treatment.

Design and Method: This paper reviews methodological issues related to the four aspects and provides examples from ongoing evaluation research on the implementation of a public health nursing intervention in multiple agencies across the nation.

Issues and Strategies: Careful client selection, based on a set of inclusion criteria from original clinical trials, is advocated to control for extraneous factors that influence outcome achievement. Sample homogeneity increases the likelihood of detecting significant intervention effects; however, generalizability to all segments of the target population is then limited. Alternatively, a less restrictive set of selection criteria enhances the representativeness of the sample but requires special data analysis that examines how different subgroups of the sample responded to the intervention (see the first sketch below). Random assignment of clients to study groups is considered the gold standard for minimizing selection bias and achieving initial group equivalence. This equivalence is necessary for validly attributing the observed outcomes to the intervention. Despite its advantages, random assignment has often proven impossible to implement during everyday health care delivery. The alternative strategy is to account for eligible clients' preferences and choices by documenting them and including these factors when examining fidelity of intervention implementation (see the second sketch below). Analyses are conducted using process data for the implementation that consider the treatment plan, the content of the intervention, continued participation versus attrition, and comparisons across sites and with clinical trial data. Variability in the response to treatment is conventionally viewed as a source of error that decreases the power to detect significant intervention effects. Yet this variability is of interest to clinicians, as health care is designed to meet individual needs. The alternative strategy calls for assessing this variability rather than treating it as error, which can be accomplished through various data collection and statistical techniques (see the third sketch below).
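To make the subgroup strategy concrete, the following is a minimal sketch in Python using pandas and statsmodels; the data file and the variable names (outcome, treated, subgroup) are illustrative assumptions, not drawn from the study described above. A significant treatment-by-subgroup interaction indicates that subgroups responded differently to the intervention.

    # Minimal sketch: subgroup analysis with a broad, representative sample.
    # File and column names are hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("evaluation_data.csv")  # hypothetical data set

    # Treatment-by-subgroup interaction: the interaction coefficients
    # estimate how the intervention effect differs across subgroups,
    # rather than averaging the effect over a heterogeneous sample.
    subgroup_model = smf.ols("outcome ~ treated * C(subgroup)", data=df).fit()
    print(subgroup_model.summary())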
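Documented preferences can likewise enter the fidelity analyses directly. The second sketch, again under assumed variable names (completed, preferred_treatment, treated, site), models continued participation versus attrition as a function of assignment, stated preference, and site, so that preference is accounted for rather than ignored; this is one plausible operationalization, not the study's own analysis plan.

    # Minimal sketch: accounting for documented client preference when
    # examining implementation fidelity. Names are hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("evaluation_data.csv")  # hypothetical data set

    # Logistic regression of retention on assignment, documented
    # preference, and their interaction; site enters as a categorical
    # term to support comparisons of implementation across sites.
    attrition_model = smf.logit(
        "completed ~ treated * preferred_treatment + C(site)", data=df
    ).fit()
    print(attrition_model.summary())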
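One such statistical technique is a linear mixed-effects model, which estimates between-client variability as substantive variance components instead of pooling it into the error term. The third sketch assumes repeated outcome measurements per client in long format; the variable names (client_id, time) are again illustrative.

    # Minimal sketch: modeling inter-individual variability in response
    # to treatment rather than treating it as error. Names are hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("evaluation_repeated.csv")  # hypothetical long-format data

    # Random intercept and random time slope per client: the estimated
    # variance components quantify how much clients differ in their
    # trajectories of response to the intervention.
    mixed_model = smf.mixedlm(
        "outcome ~ treated * time",
        data=df,
        groups=df["client_id"],
        re_formula="~time",
    ).fit()
    print(mixed_model.summary())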
Conclusions and Implications: Careful client selection, random assignment, and treating individual variability in the response to treatment as error have traditionally been advocated as means of minimizing threats to internal validity in intervention evaluation research. Their use, however, limits the generalizability and applicability of the findings to everyday practice. The strategies examined here attempt to represent and understand inter-individual variability in treatment implementation and outcomes, which is consistent with nursing and clinical reality. They reflect a shift in the paradigm guiding scientific clinical inquiry. The goal is to conduct intervention evaluation research in a way that answers clinically relevant questions and links daily clinical practice with outcomes. Such a knowledge base has the potential to reduce the research-practice gap.

Supported by the R. W. Johnson Foundation, the Oklahoma State Department of Health, and the Doris Duke Foundation.
