The robustness of parametric analyses is rarely questioned or qualified. Robustness, as generally understood, means that the exact and approximate p-values will lie on the same side of alpha for any reasonable data set, with the implicit assumptions that (1) any data set qualifies as reasonable and (2) robustness holds universally, for all alpha levels and all approximations. For this to be true, the approximation would need to be perfect all of the time: any discrepancy between the approximate and exact p-values, for any combination of alpha level and data set, would constitute a violation. Clearly this is not the case, and when confronted with this reality, defenders often invoke the "No True Scotsman" fallacy, declaring that the offending data set must have been pathological, as if this obviated the responsibility to select an appropriate research method. Ideally, a method would be selected because it is optimal, or at least appropriate, without any need for special pleading; yet judging by how often approximations are used when the exact values they are meant to approximate are readily available, current practice falls far short of this ideal. One possible explanation is that little information is available on data sets for which the approximations fail badly. Examples are presented in an effort to clarify the need for exact analyses.
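The central claim above — that an approximate p-value can land on the opposite side of alpha from the exact p-value it is meant to approximate — can be illustrated with a small 2x2 table. The following sketch is not from the paper; it assumes alpha = 0.05, uses illustrative helper names, and compares a two-sided Fisher exact p-value against the uncorrected chi-square approximation, computed from first principles:

```python
from math import comb, erfc, sqrt

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of all tables with the same
    margins whose point probability does not exceed that of the observed
    table (the standard two-sided convention).
    """
    row1, row2 = a + b, c + d
    col1 = a + c
    n = row1 + row2

    def prob(x):
        # Hypergeometric point probability for cell (1,1) equal to x.
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = prob(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs + 1e-12)

def chi2_approx_p(a, b, c, d):
    """Asymptotic chi-square p-value (1 df, no continuity correction)."""
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    return erfc(sqrt(stat / 2))  # survival function of chi-square with 1 df

# A tiny balanced table where the two p-values straddle alpha = 0.05:
# exact p = 0.10 (not significant), approximate p is well under 0.05.
p_exact = fisher_exact_2x2(3, 0, 0, 3)
p_approx = chi2_approx_p(3, 0, 0, 3)
print(p_exact, p_approx)
```

For this table the approximation would reject at alpha = 0.05 while the exact test would not — precisely the kind of sign-of-alpha disagreement the abstract describes, arising here at a sample size of only six.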
Berger, V. W. (2017). An empirical demonstration of the need for exact tests. Journal of Modern Applied Statistical Methods, 16(1), 34-50. doi: 10.22237/jmasm/1493596920