A Monte Carlo study of several regression estimators in the context of second-order autocorrelation

Date

1988

Publisher

Montana State University - Bozeman, College of Agriculture

Abstract

The purpose of this study is to compare the small-sample properties of several estimators in the context of second-order autocorrelation by using Monte Carlo methods. Each estimator is applied to data sets generated under several experimental design assumptions. The design assumptions consist of four structures on the independent variable, iterated over two types of error structures, with each error structure in turn iterated over twenty-five combinations of the autoregressive parameters evenly distributed within the second-order autoregressive stability triangle. The four independent variable structures are: (1) white noise; (2) dampened trend; (3) growth trend; and (4) dampened cyclical trend. The error structures are generated first under the assumption that the process is stationary, and second under the assumption that the process is initially nonstationary. Each permutation of the experimental design uses a sample size of T=25 and is replicated 2500 times. Several statistics relevant to selecting an appropriate estimator on the basis of efficiency, computational complexity, and inference validity are reported. From the overall perspective of choosing an estimator that is uniformly robust across the various independent variable types, error structure assumptions, and autoregressive parameter values, a general ranking of the estimators emerges: (1) full maximum likelihood; (2) initially nonstationary; (3) Prais-Winsten; and (4) Cochrane-Orcutt. Ordinary least squares, however, is the superior estimator when the point defined by the autoregressive parameter values lies close to the center of the second-order autoregressive stability triangle.
The performance of the initially nonstationary estimator is not substantially below that of the full maximum likelihood estimator when viewed from the perspective of overall performance across the various subexperimental designs. This result, coupled with the ease of implementing the initially nonstationary estimator for an autoregressive process of any order, suggests strong potential for this estimator to efficiently estimate linear regression models with autoregressive errors.
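The experimental design described above can be illustrated with a minimal sketch, not drawn from the thesis itself: it generates stationary AR(2) disturbances by discarding a burn-in, checks the stability-triangle conditions on the autoregressive parameters (phi1 + phi2 < 1, phi2 - phi1 < 1, |phi2| < 1), and runs one replication of an ordinary least squares fit with a white-noise regressor at T=25. The parameter values and true coefficients are illustrative assumptions, not values used in the study.

```python
import numpy as np

def in_stability_triangle(phi1, phi2):
    # AR(2) stationarity region: phi1 + phi2 < 1, phi2 - phi1 < 1, |phi2| < 1
    return (phi1 + phi2 < 1) and (phi2 - phi1 < 1) and (abs(phi2) < 1)

def simulate_ar2_errors(T, phi1, phi2, sigma=1.0, burn=200, rng=None):
    # Generate T approximately stationary AR(2) disturbances:
    # u_t = phi1*u_{t-1} + phi2*u_{t-2} + eps_t, discarding a burn-in
    rng = np.random.default_rng() if rng is None else rng
    eps = rng.normal(0.0, sigma, size=T + burn)
    u = np.zeros(T + burn)
    for t in range(2, T + burn):
        u[t] = phi1 * u[t - 1] + phi2 * u[t - 2] + eps[t]
    return u[burn:]

# One replication: white-noise regressor, T = 25 (as in the study's design);
# phi1 = 0.5, phi2 = -0.3 and beta = (1, 2) are hypothetical choices
rng = np.random.default_rng(0)
T = 25
x = rng.normal(size=T)                      # structure (1): white noise
X = np.column_stack([np.ones(T), x])
u = simulate_ar2_errors(T, 0.5, -0.3, rng=rng)
y = X @ np.array([1.0, 2.0]) + u
b = np.linalg.lstsq(X, y, rcond=None)[0]    # OLS estimate of (intercept, slope)
```

A full Monte Carlo run would repeat the replication 2500 times per parameter pair and compare OLS against feasible GLS variants such as Prais-Winsten and Cochrane-Orcutt across the triangle.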
