This post is part of a series on beliefs about social experimentation; if you missed the first post, start at the beginning of the series here.
Social experiments sacrifice external validity (the degree to which findings generalize beyond the study to other interventions and settings) for internal validity (confidence that, within the study, the evaluation or research is robust and correctly executed).
Yes, social scientists within many organizations are very concerned with “getting it right” and being able to defend their methods (i.e., internal validity). That is a good thing. They also need to maximize external validity by evaluating the effects of the intervention in a realistic setting (on the job, not in a training room) and through replication (evaluating multiple times under the same circumstances).
Most L&D evaluations employ a quasi-experimental design, assessing the impact only on those trained (the experimental group), because we cannot, or by design do not, randomly assign learners to treatment groups; we simply evaluate the intervention the way it was designed to roll out. We can build more organizational trust and produce more robust evidence by conducting randomized controlled trials (RCTs), the research gold standard, in which those trained (the experimental group) are compared with a comparison group that was not trained. RCTs are our most powerful tool for establishing cause and effect, and they have the virtues of reliability and transparency. All we need to do is take advantage of incidental experiments: after the intervention is designed, stagger the rollout and randomly assign learners to the staggered waves, and voilà, you have an RCT.
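The staggered-rollout assignment described above can be sketched in a few lines. This is a minimal illustration, not a prescribed implementation; the function name, wave count, and learner labels are my own assumptions, not from the post. The key idea is that learners in later waves serve as the untrained comparison group for earlier waves until their own training begins.

```python
import random

def assign_staggered_waves(learners, n_waves, seed=42):
    """Randomly assign learners to staggered training waves.

    Hypothetical helper for illustration: until a later wave is trained,
    it serves as the untrained comparison group for earlier waves,
    turning a staggered rollout into an incidental RCT.
    """
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible and auditable
    shuffled = learners[:]
    rng.shuffle(shuffled)
    # Deal shuffled learners round-robin into waves so wave sizes differ by at most one
    return [shuffled[i::n_waves] for i in range(n_waves)]

# Example: 100 learners randomly split across 4 rollout waves
learners = [f"learner_{i:03d}" for i in range(1, 101)]
waves = assign_staggered_waves(learners, n_waves=4)
for w, cohort in enumerate(waves, start=1):
    print(f"Wave {w}: {len(cohort)} learners")
```

Because assignment to waves is random, comparing outcomes for Wave 1 (trained) against Wave 4 (not yet trained) estimates the training effect without changing the planned rollout.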
As evaluators and social scientists, we must adhere to the American Evaluation Association’s Guiding Principles, both to uphold industry standards and to fulfill our moral duty of care.