TY - JOUR
T1 - Bootstrap confidence regions computed from autoregressions of arbitrary order
AU - Choi, Edwin
AU - Hall, Peter
PY - 2000
Y1 - 2000
N2 - Given a linear time series, e.g. an autoregression of infinite order, we may construct a finite-order approximation and use that as the basis for bootstrap confidence regions. The sieve or autoregressive bootstrap, as this method is often called, is generally seen as a competitor with the better-understood block bootstrap approach. However, in the present paper we argue that, for linear time series, the sieve bootstrap has significantly better performance than blocking methods and offers a wider range of opportunities. In particular, since it does not corrupt second-order properties, it may be used in a double-bootstrap form, with the second bootstrap application employed to calibrate a basic percentile-method confidence interval. This approach confers second-order accuracy without the need to estimate variance. That offers substantial benefits, since variances of statistics based on time series can be difficult to estimate reliably and, partly because of the relatively small amount of information contained in a dependent process, are notorious for causing problems when used to Studentize. Other advantages of the sieve bootstrap include considerably greater robustness against variations in the choice of the tuning parameter, here equal to the autoregressive order, and the fact that, in contrast to the block bootstrap, the percentile t version of the sieve bootstrap may be based on the 'raw' estimator of standard error. In the process of establishing these properties we show that the sieve bootstrap is second-order correct.
AB - Given a linear time series, e.g. an autoregression of infinite order, we may construct a finite-order approximation and use that as the basis for bootstrap confidence regions. The sieve or autoregressive bootstrap, as this method is often called, is generally seen as a competitor with the better-understood block bootstrap approach. However, in the present paper we argue that, for linear time series, the sieve bootstrap has significantly better performance than blocking methods and offers a wider range of opportunities. In particular, since it does not corrupt second-order properties, it may be used in a double-bootstrap form, with the second bootstrap application employed to calibrate a basic percentile-method confidence interval. This approach confers second-order accuracy without the need to estimate variance. That offers substantial benefits, since variances of statistics based on time series can be difficult to estimate reliably and, partly because of the relatively small amount of information contained in a dependent process, are notorious for causing problems when used to Studentize. Other advantages of the sieve bootstrap include considerably greater robustness against variations in the choice of the tuning parameter, here equal to the autoregressive order, and the fact that, in contrast to the block bootstrap, the percentile t version of the sieve bootstrap may be based on the 'raw' estimator of standard error. In the process of establishing these properties we show that the sieve bootstrap is second-order correct.
KW - Akaike's information criterion
KW - Autoregressive bootstrap
KW - Block bootstrap
KW - Calibration
KW - Coverage accuracy
KW - Double bootstrap
KW - Edgeworth expansion
KW - Moving average
KW - Percentile method
KW - Percentile t method
KW - Second-order accuracy
KW - Sieve bootstrap
KW - Studentization
UR - http://www.scopus.com/inward/record.url?scp=0034366252&partnerID=8YFLogxK
U2 - 10.1111/1467-9868.00244
DO - 10.1111/1467-9868.00244
M3 - Article
SN - 1369-7412
VL - 62
SP - 461
EP - 477
JO - Journal of the Royal Statistical Society. Series B: Statistical Methodology
JF - Journal of the Royal Statistical Society. Series B: Statistical Methodology
IS - 3
ER -
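
The abstract describes the sieve (autoregressive) bootstrap: fit a finite-order AR approximation, resample the centred residuals, regenerate series recursively, and form percentile-type confidence intervals. Below is a minimal illustrative sketch only, assuming NumPy and a fixed AR order p; the paper additionally selects the order data-adaptively (e.g. by Akaike's information criterion) and calibrates the interval with a second bootstrap, both omitted here. Function names and defaults are hypothetical and are not the authors' implementation.

# Minimal sketch of a sieve (AR) bootstrap percentile interval for the mean.
# Uses only NumPy; order selection and double-bootstrap calibration omitted.
import numpy as np

def fit_ar(x, p):
    # Least-squares fit of an AR(p) model to a centred series; returns the
    # coefficient vector (lag 1 first) and the in-sample residuals.
    y = x[p:]
    X = np.column_stack([x[p - k:len(x) - k] for k in range(1, p + 1)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef, y - X @ coef

def sieve_bootstrap_mean_ci(x, p=4, B=999, alpha=0.05, burn=100, seed=None):
    # Percentile-method confidence interval for the mean, obtained by
    # resampling centred AR residuals and regenerating series recursively.
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    n = len(x)
    xbar = x.mean()
    coef, resid = fit_ar(x - xbar, p)
    resid = resid - resid.mean()              # centre residuals before resampling
    stats = np.empty(B)
    for b in range(B):
        eps = rng.choice(resid, size=n + burn, replace=True)
        z = np.zeros(n + burn)
        for t in range(p, n + burn):          # recursive AR(p) simulation
            z[t] = coef @ z[t - p:t][::-1] + eps[t]
        stats[b] = (xbar + z[burn:]).mean()   # drop burn-in, restore the mean
    return tuple(np.quantile(stats, [alpha / 2, 1 - alpha / 2]))

# Illustrative use on a simulated AR(1) series with mean 1.0.
rng = np.random.default_rng(0)
x = np.empty(400)
x[0] = 1.0
for t in range(1, 400):
    x[t] = 1.0 + 0.6 * (x[t - 1] - 1.0) + rng.normal()
print(sieve_bootstrap_mean_ci(x, p=4, B=499, seed=1))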