Access Type

Open Access Dissertation

Date of Award

January 2013

Degree Type

Dissertation

Degree Name

Ph.D.

Department

Instructional Technology

First Advisor

James L. Moseley

Abstract

This dissertation empirically examines the Guerra-Lopez (2007a) Impact Evaluation Process (IEP), a prescriptive program evaluation model. Because there is no generally accepted process for arriving at final judgments about the usefulness, appropriateness, effectiveness, reliability, and validity of evaluation models, this study used a combination of approaches to begin building a body of evidence about the effectiveness of the IEP. Primarily, the study used Stufflebeam's (2011) recently revised Program Evaluations Metaevaluation Checklist to examine the model. The Checklist is based on the Joint Committee on Standards for Educational Evaluation's (2010) Program Evaluation Standards. Fitzpatrick, Sanders, and Worthen (2011) recommend selecting a subset of these standards to use when evaluating a design. Additionally, the study used Miller's (2010) framework for empirically evaluating how evaluation theory informs practice. First, through a study of three evaluation theory classification schemes, the researcher identified where the IEP fits among other common evaluation models. Next, in order to reach a judgment based on the model's application in the real world, the researcher used the model to conduct an impact evaluation of a 1:1 technology program at a secondary school. The process used to conduct the evaluation is discussed in detail. As part of the process, the researcher developed an operationalized version of the model. Based on these standards and Stufflebeam's (2011) Checklist scoring method, the evaluator and a professional metaevaluator rated the Impact Evaluation Process as "Very Good."
