Big Chemical Encyclopedia

Specification Analysis and Model Selection

The result cited is E[b1] = β1 + P1.2β2, where P1.2 = (X1′X1)⁻¹X1′X2, so the coefficient estimator is biased. If the conditional mean function E[X2|X1] is a linear function of X1, then the sample estimator P1.2 actually is an unbiased estimator of the slopes of that function. (That result is Theorem B.3, equation (B-68), in another form.) Now, write the model in the form

y = X1β1 + E[X2|X1]β2 + (X2 − E[X2|X1])β2 + ε.

When we regress y on X1 alone and compute the predictions, we are computing an estimator of X1(β1 + P1.2β2) = X1β1 + E[X2|X1]β2. Both parts of the compound disturbance in this regression, ε and (X2 − E[X2|X1])β2, have mean zero and are uncorrelated with X1 and E[X2|X1], so the prediction error has mean zero. The implication is that the forecast is unbiased. Note that this is not true if E[X2|X1] is nonlinear, since P1.2 does not estimate the slopes of the conditional mean in that instance. The generality is that leaving out variables will bias the coefficients, but need not bias the forecasts; it depends on the relationship between the conditional mean function E[X2|X1] and X1P1.2.
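The two claims above can be checked numerically. The following sketch (synthetic data and variable names are illustrative, not from the text) simulates the short regression of y on X1 when X2 is omitted and E[X2|X1] is linear: the slope estimate centers on β1 + P1.2β2 rather than β1, yet the average prediction error is zero.

```python
# Simulation sketch: omitted-variable bias in the coefficient, but unbiased
# forecasts, when E[X2|X1] is linear in X1. All names here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, reps = 200, 2000
beta1, beta2 = 1.0, 0.5

x1 = rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(scale=0.5, size=n)   # E[X2|X1] = 0.8*X1 (linear)
X1 = np.column_stack([np.ones(n), x1])          # short regressor matrix
P12 = np.linalg.solve(X1.T @ X1, X1.T @ x2)     # (X1'X1)^(-1) X1'X2

b1_draws, fit_err = [], []
for _ in range(reps):
    y = beta1 * x1 + beta2 * x2 + rng.normal(size=n)
    b = np.linalg.lstsq(X1, y, rcond=None)[0]   # short regression: y on X1 only
    b1_draws.append(b[1])
    # average prediction error against the true conditional mean of y
    fit_err.append(np.mean(X1 @ b - (beta1 * x1 + beta2 * x2)))

print(np.mean(b1_draws))   # close to beta1 + P12[1]*beta2, not to beta1
print(np.mean(fit_err))    # close to 0: the forecast is unbiased
```

Replacing the linear rule for x2 with a nonlinear one (say 0.8*x1**2) breaks the second property, matching the caveat in the text.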

Compare the mean squared errors of b1 and b1.2 in Section 8.2.2. (Hint: the comparison depends on the data and the model parameters, but you can devise a compact expression for the two quantities.)

The long estimator, b1.2, is unbiased, so its mean squared error equals its variance, σ²(X1′M2X1)⁻¹.

The short estimator, b1, is biased: E[b1] = β1 + P1.2β2. Its variance is σ²(X1′X1)⁻¹. It is easy to show that this latter variance is smaller. You can do that by comparing the inverses of the two variance matrices: the inverse of the first equals the inverse of the second minus a positive definite matrix (X1′M2X1 = X1′X1 − X1′X2(X2′X2)⁻¹X2′X1), which makes the inverse smaller and hence the original matrix larger, so Var[b1.2] ≥ Var[b1]. But, since b1 is biased, its variance is not its mean squared error. The mean squared error of b1 is Var[b1] + bias × bias′, and the second term is P1.2β2β2′P1.2′. When this is added to the variance, the sum may be larger or smaller than Var[b1.2]; it depends on the data and on the parameter β2. The important point is that the mean squared error of the biased estimator may be smaller than that of the unbiased estimator.
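The variance ranking and the ambiguous MSE ranking can both be seen in a small numerical sketch (the design below is an assumed illustration, not data from the text): with strongly correlated regressors, a small β2 lets the biased short estimator win on MSE, while a large β2 reverses the ranking.

```python
# Sketch: exact variance of the long estimator, sigma^2 (X1'M2X1)^(-1),
# versus variance and MSE of the short estimator. Setup is illustrative.
import numpy as np

rng = np.random.default_rng(1)
n, sigma2 = 50, 1.0
x1 = rng.normal(size=n)
x2 = 0.9 * x1 + rng.normal(scale=0.3, size=n)     # X1, X2 highly correlated
X1, X2 = x1[:, None], x2[:, None]

M2 = np.eye(n) - X2 @ np.linalg.inv(X2.T @ X2) @ X2.T
var_long = sigma2 * np.linalg.inv(X1.T @ M2 @ X1)   # Var[b1.2]
var_short = sigma2 * np.linalg.inv(X1.T @ X1)       # Var[b1], always smaller
P12 = np.linalg.solve(X1.T @ X1, X1.T @ X2)         # (X1'X1)^(-1) X1'X2

def mse_short(beta2):
    bias = P12 * beta2                              # P1.2 * beta2
    return var_short + bias @ bias.T                # variance + bias x bias'

print(var_short.item() < var_long.item())           # True: variance ranking
print(mse_short(0.05).item() < var_long.item())     # small beta2: short wins
print(mse_short(5.0).item() > var_long.item())      # large beta2: long wins
```

The design choice that makes the contrast sharp is the high correlation between X1 and X2: it inflates (X1′M2X1)⁻¹, so the long estimator pays a large variance price for its unbiasedness.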

The J test in Example 8.2 is carried out using over 50 years of data. It is optimistic to hope that the underlying structure of the economy did not change in 50 years. Does the result of the test carried out in Example 8.2 persist if it is based on data only from 1980 to 2000? Repeat the computation with this subset of the data.
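The series from Example 8.2 are not reproduced here, but the mechanics of the Davidson-MacKinnon J test, including rerunning it on a subsample, can be sketched on synthetic data. To test model A, y = Xβ, against model B, y = Zγ: fit B, add its fitted values as a regressor in A, and examine the t ratio on that added regressor. All data below are simulated and purely illustrative.

```python
# Davidson-MacKinnon J test mechanics on synthetic data (not the series
# from Example 8.2). Slicing the arrays mimics restricting the sample.
import numpy as np

def j_test_t(y, X, Z):
    """t ratio on model B's fitted values when added to model A."""
    yhat_B = Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    W = np.column_stack([X, yhat_B])                # augmented model A
    b = np.linalg.lstsq(W, y, rcond=None)[0]
    e = y - W @ b
    s2 = e @ e / (len(y) - W.shape[1])
    cov = s2 * np.linalg.inv(W.T @ W)
    return b[-1] / np.sqrt(cov[-1, -1])

rng = np.random.default_rng(2)
n = 50                                              # ~50 "years" of data
x, z = rng.normal(size=n), rng.normal(size=n)
X = np.column_stack([np.ones(n), x])                # model A regressors
Z = np.column_stack([np.ones(n), z])                # model B regressors
y = 1.0 + 0.5 * z + rng.normal(scale=0.2, size=n)   # model B is the truth

t_full = j_test_t(y, X, Z)                          # full sample
t_sub = j_test_t(y[-20:], X[-20:], Z[-20:])         # last-20 "subsample"
print(t_full, t_sub)                                # t ratios on added regressor
```

With actual data, one would compare each t ratio with a standard normal critical value; a conclusion that holds on the full sample can weaken or disappear on the shorter subsample simply because of the reduced sample size, quite apart from structural change.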

