Intentionally biased bootstrap methods

Peter Hall*, Brett Presnell

*Corresponding author for this work

    Research output: Contribution to journal › Article › peer-review

    105 Citations (Scopus)

    Abstract

    A class of weighted bootstrap techniques, called biased bootstrap or b-bootstrap methods, is introduced. It is motivated by the need to adjust empirical methods, such as the "uniform" bootstrap, in a surgical way: to alter some of their features while leaving others unchanged. Depending on the nature of the adjustment, the b-bootstrap can be used to reduce bias, to reduce variance, or to render some characteristic equal to a predetermined quantity. Examples of the last application include a b-bootstrap approach to hypothesis testing in nonparametric contexts, where the b-bootstrap enables simulation "under the null hypothesis", even when the hypothesis is false, and a b-bootstrap competitor to Tibshirani's variance stabilization method. An example of the bias reduction application is adjustment of Nadaraya-Watson kernel estimators to make them competitive with local linear smoothing. Other applications include density estimation under constraints, outlier trimming, sensitivity analysis, skewness or kurtosis reduction and shrinkage.
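    The constraint-satisfying use of the b-bootstrap can be illustrated with a minimal sketch: instead of resampling with uniform weights 1/n, choose nonuniform weights that force a sample characteristic (here, the mean) to equal a predetermined value, then resample under those weights. The exponential-tilting choice of weights below is one standard way to satisfy such a constraint and is an illustrative stand-in, not the paper's exact construction; the function names and the bisection solver are this sketch's own.

    ```python
    import math
    import random

    def tilted_weights(xs, mu0):
        """Exponentially tilted weights p_i proportional to exp(t*x_i),
        with t chosen so the weighted mean equals mu0 (illustrative
        stand-in for the b-bootstrap's constrained reweighting).
        mu0 must lie strictly between min(xs) and max(xs)."""
        def weighted_mean(t):
            w = [math.exp(t * x) for x in xs]
            s = sum(w)
            return sum(wi * x for wi, x in zip(w, xs)) / s

        # The weighted mean is strictly increasing in t (its derivative
        # is the weighted variance), so bisection on t converges.
        lo, hi = -50.0, 50.0
        for _ in range(200):
            mid = (lo + hi) / 2.0
            if weighted_mean(mid) < mu0:
                lo = mid
            else:
                hi = mid
        t = (lo + hi) / 2.0
        w = [math.exp(t * x) for x in xs]
        s = sum(w)
        return [wi / s for wi in w]

    def b_bootstrap_sample(xs, weights, rng):
        """Draw a resample of size n using the biased weights
        instead of the uniform bootstrap's weights 1/n."""
        return rng.choices(xs, weights=weights, k=len(xs))
    ```

    With weights tilted so the mean equals a null value mu0, resamples drawn this way mimic sampling "under the null hypothesis" even when the observed data do not satisfy it.
    
    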

    Original language: English
    Pages (from-to): 143-158
    Number of pages: 16
    Journal: Journal of the Royal Statistical Society. Series B: Statistical Methodology
    Volume: 61
    Issue number: 1
    DOIs
    Publication status: Published - 1999
