Having just completed a Program Evaluation course at Walden University, I reflect here on the experience.
I didn’t realize that the process of creating and presenting my program evaluation plan was central to my learning until it was done. It was in pulling the disparate pieces of the plan together into a cohesive whole that I found meaning. Only then did I realize that my evaluation plan, which may have held together well enough to bring me to the last week of the course, would ultimately have failed the final analysis had I not thoroughly explored the program context; stretched myself to identify primary and secondary stakeholders and their interests, needs, and biases; named my values and committed myself to them; reflected on my own biases; and considered the impact of my report.
Program evaluation can inform what needs to change and form the basis of a change management plan, but first, it must fit the program context. It is incumbent on the evaluator to consider how contextual factors may inform the selection of an evaluation model. Program evaluation is inherently a political process, and an evaluator who ignores, avoids, or mismanages the political realities of evaluation limits the effectiveness and usefulness of the process (Fitzpatrick, Sanders, & Worthen, 2010).
This experience has shown me that working with stakeholders is one of the most challenging aspects of program evaluation. The important work of planning a program evaluation can be upset by stakeholder conflict, politics, bias, and unexpected manifestations of organizational culture. And yet, as an evaluator, I have a professional obligation to find ways to promote meaningful evaluation and the application of evaluation results by stakeholders (Fitzpatrick et al., 2010).
Technology can facilitate communication with stakeholders and simplify the processes of data collection, data management, and research (Laureate Education, Inc., n.d.), but it is only a tool; the evaluator must provide the raw material and craft the work. In every phase of evaluation it is incumbent on the evaluator to uphold the priority of justice (Schweigert, 2007); to mine the program context for cultural cues, gaps in understanding, potential bias, and feasibility; and to demonstrate and promote respect for stakeholders and the evaluation process.
Bias is the weed that pervades the evaluation process, from the evaluator’s preference for a particular approach or data collection design, to an overt or covert liking of some stakeholders more than others, to finding some steps of the evaluation process more interesting, more compelling, or more exhausting than others. The presence of bias is a given. Evaluators must be frankly self-reflective about their role in the evaluation process and circumspect about client requests, so as to minimize the potential for bias and ethical compromise (Fitzpatrick et al., 2010).
At some point, situational circumstance requires evaluators to make interpretations and best guesses (Schweigert, 2007), which are subject to bias and ethical compromise. I carry with me from this course Sieber’s (1980) conclusion that “being ethical in program evaluation is a process of growth in understanding, perception, and creative problem-solving ability that respects the interests of individuals and of society” (p. 53).
American Evaluation Association. (2004). Guiding principles. Retrieved from
Fetterman, D. (2001). The transformation of evaluation into a collaboration: A vision of evaluation in the 21st century. American Journal of Evaluation, 22(3), 381–384. Retrieved from the Education Research Complete database.
Fitzpatrick, J., Sanders, J., & Worthen, B. (2010). Program evaluation: Alternative approaches and practical guidelines (4th ed.). Boston, MA: Pearson.
Laureate Education, Inc. (Producer). (2009). Formative and summative evaluation [DVD].
Schweigert, F. J. (2007). The priority of justice: A framework approach to ethics in program evaluation. Evaluation and Program Planning, 30(4), 394–399.
Sieber, J. E. (1980). Being ethical: Professional and personal decisions in program evaluation. In R. E. Perloff & E. Perloff (Eds.), Values, ethics, and standards in evaluation. New Directions for Program Evaluation, No. 7, 51–61. San Francisco, CA: Jossey-Bass.
Worthen, B. (2001). Whither evaluation? That all depends. American Journal of Evaluation, 22(3), 409–416. Retrieved from the Education Research Complete database.