Although estimating substantive importance (in the form of reporting effect sizes) has recently received widespread endorsement, its use has not been subjected to the same degree of scrutiny as has statistical hypothesis testing. As a result, many researchers do not seem to be aware that some of the same criticisms leveled against the latter can also be aimed at the former. Our purpose here is to highlight major concerns about effect sizes and their estimation. In so doing, we argue that effect size measures per se are not the hoped-for panacea for interpreting empirical research findings. Further, we contend that if effect sizes were the only basis for interpreting statistical data, social-science research would be in no better a position than it would if statistical hypothesis testing were the only basis. We recommend that hypothesis testing and effect-size estimation be used in tandem to establish a reported outcome’s believability and magnitude, respectively, with hypothesis testing (or some other inferential statistical procedure) retained as a “gatekeeper” for determining whether or not effect sizes should be interpreted. Other methods for addressing statistical and substantive significance are also advocated, particularly confidence intervals and independent replications.
Onwuegbuzie, Anthony J. and Levin, Joel R. "Without Supporting Statistical Evidence, Where Would Reported Measures of Substantive Importance Lead? To No Good Effect," Journal of Modern Applied Statistical Methods: Vol. 2: Iss. 1, Article 12.
Available at: http://digitalcommons.wayne.edu/jmasm/vol2/iss1/12