A Review of the Linear Sufficiency and Linear Prediction Sufficiency in the Linear Model with New Observations

Stephen Haslett, Jarkko Isotalo, Radosław Kala, Augustyn Markiewicz, Simo Puntanen

    Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review

    Abstract

    We consider the general linear model y = Xβ + ε, denoted as M = {y, Xβ, V}, supplemented with the new unobservable random vector y∗, coming from y∗ = X∗β + ε∗, where the covariance matrix of y∗ is known, as is the cross-covariance matrix between y∗ and y. A linear statistic Fy is called linearly sufficient for X∗β if there exists a matrix A such that AFy is the best linear unbiased estimator, BLUE, of X∗β. The concept of linear sufficiency with respect to a predictable random vector is defined in the corresponding way, but considering the best linear unbiased predictor, BLUP, instead of the BLUE. In this paper, we consider the linear sufficiency of Fy with respect to y∗, X∗β, and ε∗. We also apply our results to the linear mixed model. The concept of linear sufficiency was essentially introduced in the early 1980s by Baksalary, Kala, and Drygas. Recently, several papers providing further properties of linear sufficiency have been published by the present authors. Our aim is to provide an easy-to-read review of recent results and, while doing so, to go through some basic concepts related to linear sufficiency. As this is a review paper, we do not provide many proofs; instead, our goal is to explain and clarify the central results.
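    The setting described in the abstract can be sketched as follows; the partitioned notation here is illustrative (the chapter itself fixes the precise block names):

    ```latex
    % Linear model supplemented with new observations (illustrative notation):
    % y and y_* share the parameter \beta; V_{12}, V_{21}, V_{22} are assumed known.
    \operatorname{E}\begin{pmatrix} y \\ y_{*} \end{pmatrix}
      = \begin{pmatrix} X \\ X_{*} \end{pmatrix}\beta, \qquad
    \operatorname{cov}\begin{pmatrix} y \\ y_{*} \end{pmatrix}
      = \begin{pmatrix} V & V_{12} \\ V_{21} & V_{22} \end{pmatrix}.
    % Linear sufficiency: Fy is linearly sufficient for X_{*}\beta if
    \exists\, A : \; AFy = \operatorname{BLUE}(X_{*}\beta);
    % linear prediction sufficiency replaces the BLUE by the BLUP:
    \exists\, B : \; BFy = \operatorname{BLUP}(y_{*}).
    ```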
    Original language: English
    Title of host publication: Multivariate, Multilinear and Mixed Linear Models
    Editors: Katarzyna Filipiak, Augustyn Markiewicz, Dietrich von Rosen
    Place of publication: Switzerland
    Publisher: Springer, Cham
    Pages: 265-318
    Volume: 1
    Edition: 1
    ISBN (Print): 978-3-030-75493-8
    DOIs
    Publication status: Published - 2021
