Abstract
The purpose of the present paper was to evaluate the internal consistency reliability of the General Teacher Test under clustered and non-clustered data assumptions using commercial software (Mplus). Participants were 2,000 examinees selected by random sampling from a larger pool of more than 65,000. The measure involved four factors, namely (a) planning for learning, (b) promoting learning, (c) supporting learning, and (d) professional responsibilities, and was hypothesized to function as a unidimensional instrument assessing generalized skills and competencies. Intraclass correlation coefficients and variance ratio statistics suggested the need to incorporate a clustering variable (i.e., university) when evaluating the factor structure of the measure. Results indicated that single-level reliability estimation significantly overestimated the reliability observed across persons and underestimated the reliability at the level of the clustering variable (university). Single-level reliability was also, at times, below minimally acceptable levels, leading to a conclusion of unreliability, whereas multilevel reliability was low at the between-person level but excellent at the between-university level. It is concluded that ignoring nesting is associated with distorted and erroneous estimates of the internal consistency reliability of an ability measure, and that the use of multilevel confirmatory factor analysis (MCFA) is imperative to account for dependencies across levels of analysis.
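For orientation, the intraclass correlation coefficient (ICC) invoked above is conventionally defined as the proportion of total variance attributable to the clustering variable. A standard formulation, stated here as background rather than reproduced from the paper, is

\[ \mathrm{ICC} \;=\; \frac{\sigma^{2}_{\text{between}}}{\sigma^{2}_{\text{between}} + \sigma^{2}_{\text{within}}} \]

where \(\sigma^{2}_{\text{between}}\) is the variance across clusters (universities) and \(\sigma^{2}_{\text{within}}\) the variance among persons within clusters; values meaningfully above zero indicate that observations are not independent and that a multilevel specification is warranted.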
DOI
10.22237/jmasm/1530027194
Appendix, including Mplus programs used
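The programs actually used appear in the Appendix. Purely as an illustrative sketch of a two-level CFA of the kind the abstract describes, an Mplus specification might read as follows; the data file, variable names, and cluster identifier (gtt.dat, y1-y4, univ) are hypothetical placeholders, not taken from the study:

    TITLE:    Two-level CFA sketch (hypothetical file and variable names);
    DATA:     FILE = gtt.dat;
    VARIABLE: NAMES = univ y1-y4;
              USEVARIABLES = y1-y4;
              CLUSTER = univ;        ! university as the clustering variable
    ANALYSIS: TYPE = TWOLEVEL;
    MODEL:    %WITHIN%
              fw BY y1-y4;           ! within-university (between-person) factor
              %BETWEEN%
              fb BY y1-y4;           ! between-university factor
    OUTPUT:   STDYX;

Reliability at each level can then be derived from the standardized within- and between-level loadings and residual variances that such a model produces.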