Access Type

WSU Access

Date of Award

December 2011

Degree Type

Dissertation

Degree Name

Ph.D.

Department

Education Evaluation and Research

First Advisor

Shlomo Sawilowsky

Abstract

Statistical theory and its application provide the foundation for modern systematic inquiry in the behavioral, physical, and social science disciplines (Fisher, 1958; Wilcox, 1996). It gives scholars and researchers the tools to operationalize constructs, describe populations, and measure and interpret the relations between populations and variables (Weinbach & Grinnell, 1997; Wilcox, 1996). Because most real data in the behavioral and social sciences are non-normally distributed, researchers should be aware of how non-normal distributions affect the probability of detecting equivalence between populations.

The present study examined the effects and management of non-normally distributed data on equivalency tests under varied conditions for a two-sample design, and compared the statistical properties of three equivalency tests in demonstrating equivalence between populations at the smallest effect sizes. The findings indicated that when sample sets were non-normally distributed, the differences in the statistical properties of the three equivalency tests became most pronounced at the lowest nominal α = .001 for small to medium sample sizes. Optimal performance in detecting equivalence occurred at the nominal α = .001 for small sample sizes n1, n2 = (10, 10; 10, 20) under the Smooth Symmetric and Extreme Asymmetry distributions. Overall, all three tests demonstrated low power, owing to the relatively small sample size combinations paired with small effect sizes, and failed to control Type I error. Based on the findings of this study, none of the three tests was recommended as superior to the others.
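The abstract does not name the three equivalency tests studied, but a common two-sample equivalence procedure is the two one-sided tests (TOST) approach: equivalence is declared only if the mean difference is shown to lie inside a pre-specified margin (−δ, +δ) by two one-sided t-tests. The sketch below, assuming an equal-variance pooled t-statistic and SciPy's t distribution, is illustrative only and is not the dissertation's method; the margin `delta` and significance level `alpha` are hypothetical inputs.

```python
import numpy as np
from scipy import stats

def tost_two_sample(x, y, delta, alpha=0.05):
    """Two one-sided tests (TOST) for equivalence of two means.

    Rejects both H0a: mu_x - mu_y <= -delta and H0b: mu_x - mu_y >= +delta
    at level alpha to conclude the means are equivalent within (-delta, delta).
    Returns (p_value, equivalent), where p_value is the larger of the two
    one-sided p-values.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    nx, ny = len(x), len(y)
    diff = x.mean() - y.mean()
    # Pooled variance and standard error (equal-variance two-sample t)
    sp2 = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    se = np.sqrt(sp2 * (1.0 / nx + 1.0 / ny))
    df = nx + ny - 2
    # One-sided tests against each edge of the equivalence margin
    t_lower = (diff + delta) / se          # H0a: diff <= -delta
    t_upper = (diff - delta) / se          # H0b: diff >= +delta
    p_lower = 1.0 - stats.t.cdf(t_lower, df)
    p_upper = stats.t.cdf(t_upper, df)
    p_value = max(p_lower, p_upper)
    return p_value, p_value < alpha

# Example: nearly identical samples, generous margin -> equivalent
p, eq = tost_two_sample([1, 2, 3, 4, 5], [1.1, 2.1, 3.1, 4.1, 5.1], delta=3.0)
```

With a tiny margin (e.g. `delta=0.05`) the same data fail to show equivalence, which mirrors the abstract's point: small samples paired with small effect sizes (tight margins) yield low power for equivalency testing.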