Description: |
In a linear errors-in-variables regression model, one observes a dependent scalar variable Y_i and an error-prone measurement X_i of an r-dimensional latent predictor x_i, i = 1,…,n, where it is assumed that E[Y_i | x_i] = a + Bx_i. It is well known that the naive least squares estimator (LSE) of B obtained by regressing Y_i on X_i is biased and inconsistent. A natural alternative is to estimate B by maximum likelihood under normality assumptions. These normality assumptions, however, create identifiability problems that must be resolved by parametric restrictions. Gleser (1992) argues that an appropriate parametric restriction is to assume that the reliability matrix Λ of the measured predictors X_i is known. The problem of estimating B then reduces to estimating the slopes in a standard linear model with random regressors ΛX_i, but with a known bound on the scaled magnitude (signal-to-noise ratio) of B. The slope b of the regression of Y_i on ΛX_i is the best unbiased estimator of B, and is asymptotically equivalent to the maximum likelihood estimator (MLE) of B. In the present paper, it is shown that b is dominated in matrix (and thus total mean-squared-error) risk by a linear ‘shrinkage’ of b. The naive LSE is also a linear shrinkage of b; both the naive LSE and b are shown to be linearly inadmissible under total mean-squared-error risk unless Λ = cI_r.
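As a rough illustration of the attenuation that the reliability correction removes, the Python sketch below simulates the model above with hypothetical values of B, the latent covariance, and the measurement-error covariance (none of these numbers come from the paper), and compares the naive LSE with the slope b obtained by regressing Y_i on ΛX_i. The measurement-error covariance is taken proportional to the identity so that Λ is symmetric and the correction is a plain matrix product.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- hypothetical simulation settings (not taken from the paper) ---
n, r = 20000, 2
a, beta = 0.3, np.array([1.0, -0.5])         # intercept and true slopes B
Sigma_xx = np.array([[2.0, 0.6],
                     [0.6, 1.0]])            # covariance of the latent x_i
Sigma_uu = np.eye(r)                          # measurement-error covariance

# Reliability matrix of the observed X_i = x_i + u_i.
# With Sigma_uu proportional to I, Lam is symmetric, which keeps the
# "regress Y_i on Lam @ X_i" step free of transpose bookkeeping.
Lam = np.linalg.solve(Sigma_xx + Sigma_uu, Sigma_xx)

# Simulate the errors-in-variables model.
x = rng.multivariate_normal(np.zeros(r), Sigma_xx, size=n)
X = x + rng.multivariate_normal(np.zeros(r), Sigma_uu, size=n)
Y = a + x @ beta + rng.normal(scale=0.5, size=n)

def ols_slopes(design, y):
    """Least-squares slopes of y on `design`, with an intercept column."""
    D = np.column_stack([np.ones(len(y)), design])
    coef, *_ = np.linalg.lstsq(D, y, rcond=None)
    return coef[1:]

b_naive = ols_slopes(X, Y)        # inconsistent: converges to Lam @ beta
b = ols_slopes(X @ Lam, Y)        # slope b of the regression of Y_i on Lam X_i

print("true B               :", beta)
print("naive LSE            :", b_naive, "(limit:", Lam @ beta, ")")
print("b from Y_i on Lam X_i:", b)
```

In this sketch the naive slopes sit near Λβ, the usual attenuation limit, while the reliability-corrected regression recovers the true slopes up to sampling noise; the shrinkage dominance result discussed in the abstract concerns further risk improvements over this unbiased b and is not reproduced here.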