QED evaluation designs compare a treatment group to a control group, but researchers cannot control who is assigned to which group. These studies therefore cannot establish with certainty that the treatment directly produced the positive outcomes, nor can they claim that the comparison groups were truly equivalent prior to treatment. This leaves open the possibility that factors other than the intervention influenced outcomes. Providers have a stronger argument for a program's effectiveness if its evaluation uses RCT methods, because these better account for differences between groups, thereby increasing confidence that the outcome is attributable to the program. Under an RCT design, participants are randomly assigned to the treatment group, allowing researchers to balance out pre-existing characteristics, such as motivation to seek treatment. RCTs have limitations, however. It may be problematic to assume that randomization creates unbiased treatment and control groups; researchers must know who composes the sample and whether differences persist between the control and experimental groups despite randomization. Similarly, randomization does not correct for volunteer bias in the sample as a whole. Thus, no matter which research method is used, researchers must explain the reasons for their design decisions, be aware of any limitations that arise from that method, and understand how these limitations influence the results and their interpretation. Ways are suggested for improving evaluation research designs for IPV programs. 1 table and 10 references