Abstract

The performance of an autocovariance-based estimator (ABE) for GARCH models was compared with that of the maximum likelihood estimator (MLE) when the error distribution is wrongly specified as normal. This was accomplished by simulating time series data from a GARCH model with log-normal innovations and with t-distributed innovations having 5, 10 and 15 degrees of freedom. The simulated series were generated from these true distributions, but normality was assumed when the parameters were estimated. To track consistency, sample sizes of 200, 500, 1,000 and 1,200 were employed. Both methods were then used to analyze the series under the normality assumption. The results show that the ABE method appears to be competitive in the situations considered.
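The following is a minimal sketch, not the authors' code, of the kind of misspecification experiment the abstract describes: a GARCH(1,1) series is simulated with standardized Student-t innovations (one of the true distributions used in the study) and the parameters are then estimated by Gaussian quasi-maximum likelihood, i.e. under the wrong normality assumption. The GARCH order, the parameter values, and the helper names simulate_garch_t and gaussian_neg_loglik are illustrative assumptions rather than details taken from the paper, and the autocovariance-based estimator itself is not reproduced here.

# Sketch of the simulation-and-misspecified-estimation setup (assumed details).
import numpy as np
from scipy.optimize import minimize

def simulate_garch_t(n, omega, alpha, beta, df, burn=500, seed=0):
    """Simulate a GARCH(1,1) series with standardized t(df) innovations."""
    rng = np.random.default_rng(seed)
    # Scale the t draws to unit variance (requires df > 2).
    z = rng.standard_t(df, size=n + burn) * np.sqrt((df - 2) / df)
    y = np.empty(n + burn)
    h = np.empty(n + burn)
    h[0] = omega / (1.0 - alpha - beta)        # unconditional variance
    y[0] = np.sqrt(h[0]) * z[0]
    for t in range(1, n + burn):
        h[t] = omega + alpha * y[t - 1] ** 2 + beta * h[t - 1]
        y[t] = np.sqrt(h[t]) * z[t]
    return y[burn:]

def gaussian_neg_loglik(params, y):
    """Negative Gaussian log-likelihood of GARCH(1,1): normality assumed."""
    omega, alpha, beta = params
    if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
        return np.inf
    h = np.empty_like(y)
    h[0] = np.var(y)
    for t in range(1, len(y)):
        h[t] = omega + alpha * y[t - 1] ** 2 + beta * h[t - 1]
    return 0.5 * np.sum(np.log(h) + y ** 2 / h)

# Example run with n = 1,000, one of the sample sizes used in the study;
# the true parameters (0.1, 0.1, 0.8) are purely illustrative.
y = simulate_garch_t(n=1000, omega=0.1, alpha=0.1, beta=0.8, df=5)
fit = minimize(gaussian_neg_loglik, x0=np.array([0.05, 0.05, 0.9]),
               args=(y,), method="Nelder-Mead")
print("Gaussian QMLE of (omega, alpha, beta):", fit.x)

Repeating such a run over many replications and over the different innovation distributions and sample sizes would give the comparison of estimator performance that the study reports.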

DOI

10.22237/jmasm/1257034740
