
Observations from Normal Linear Regression Model

The posterior mean vector is a weighted average of the prior mean vector and the sample mean vector, where the weights are the proportions of their respective precision matrices to the posterior precision matrix. It is given by [Pg.87]
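The displayed formula did not survive extraction; a plausible reconstruction, writing $b_0, V_0$ for the prior mean and covariance and $b_{LS}, V_{LS}$ for the least-squares estimate and its covariance (notation assumed from the passages below), is

$$ b_1 = V_1 V_0^{-1}\, b_0 + V_1 V_{LS}^{-1}\, b_{LS}, $$

so the prior and sample contributions are weighted by their shares of the posterior precision $V_1^{-1} = V_0^{-1} + V_{LS}^{-1}$.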


In the normal linear regression model, we have n independent observations $y_1, \ldots, y_n$, where each observation $y_i$ has its own mean $\mu_i$ and all observations have the same variance $\sigma^2$. The means are unknown linear functions of the p predictor variables $x_1, \ldots, x_p$. The values of the predictor variables are known for each observation. Hence we can write the mean as [Pg.87]
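The displayed equation is missing; presumably it is the usual linear form

$$ \mu_i = \beta_1 x_{i1} + \beta_2 x_{i2} + \cdots + \beta_p x_{ip}, \qquad i = 1, \ldots, n, $$

or, in matrix notation, $\boldsymbol{\mu} = X\beta$ with $X$ the $n \times p$ matrix of known predictor values.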

The last term does not contain $\beta$, so it does not enter the likelihood and can be absorbed into the proportionality constant. We let $V_1^{-1} = V_0^{-1} + V_{LS}^{-1}$. The posterior becomes [Pg.88]
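At this point the posterior kernel presumably has the (not yet completed) quadratic form

$$ g(\beta \mid y) \propto \exp\!\left[-\tfrac{1}{2}\Bigl(\beta' V_1^{-1} \beta - \beta'\bigl(V_0^{-1}b_0 + V_{LS}^{-1}b_{LS}\bigr) - \bigl(b_0'V_0^{-1} + b_{LS}'V_{LS}^{-1}\bigr)\beta\Bigr)\right], $$

a reconstruction inferred from the completing-the-square step that follows.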

Let $U'U = V_1^{-1}$, where $U$ is an orthogonal matrix. We are assuming $V_1^{-1}$ is of full rank, so both $U$ and $U'$ are also of full rank and their inverses exist. We complete the square by adding $(b_{LS}'V_{LS}^{-1} + b_0'V_0^{-1})\,U^{-1}(U')^{-1}\,(V_{LS}^{-1}b_{LS} + V_0^{-1}b_0)$. We subtract it as well, but since that term does not contain the parameter $\beta$, it gets absorbed into the constant. The posterior becomes [Pg.88]
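After completing the square, the posterior kernel should reduce to (again a reconstruction consistent with the definitions above)

$$ g(\beta \mid y) \propto \exp\!\left[-\tfrac{1}{2}(\beta - b_1)'\, V_1^{-1}\, (\beta - b_1)\right], $$

which is recognizable as the kernel of a multivariate normal$(b_1, V_1)$ density.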


When we have n independent observations from the normal linear regression model where the observations all have the same known variance, the conjugate prior distribution for the regression coefficient vector $\beta$ is multivariate normal$(b_0, V_0)$. The posterior distribution of $\beta$ will be multivariate normal$(b_1, V_1)$, where... [Pg.91]
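The updating formulas introduced by the trailing "where..." are, in the standard conjugate analysis (a reconstruction, not a verbatim quote of the source),

$$ V_1^{-1} = V_0^{-1} + \frac{1}{\sigma^2} X'X, \qquad b_1 = V_1\!\left(V_0^{-1} b_0 + \frac{1}{\sigma^2} X'y\right). $$

Since $V_{LS} = \sigma^2 (X'X)^{-1}$ and $b_{LS} = (X'X)^{-1}X'y$, these agree with the precision-weighted average given at the start of this section.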

Two datasets are first simulated. The first contains only normal samples, whereas the second contains 3 outliers; they are shown in Plots A and B of Figure 2, respectively. For each dataset, 70% of the samples are randomly selected to build a linear regression model, whose slope and intercept are recorded. Repeating this procedure 1000 times yields 1000 values of both the slope and the intercept. For each dataset, the intercept is plotted against the slope, as displayed in Plots C and D, respectively. The joint distribution of the intercept and slope for the normal dataset appears to be multivariate normal. In contrast, the same distribution for the dataset with outliers looks quite different, far from normal. The distributions of the slopes for the two datasets are shown in Plots E and F. These results show that the presence of outliers can greatly influence a regression model, which is reflected in the odd distributions of both slopes and intercepts. Conversely, a distribution of a model parameter that is far from normal most likely indicates some abnormality in the data. [Pg.5]
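A minimal Python sketch of this resampling experiment; the true line, noise level, outlier magnitudes, and all names here are illustrative assumptions, not values from the source:

```python
import numpy as np

rng = np.random.default_rng(0)

# Dataset 1: clean samples from y = 2x + 1 plus Gaussian noise
n = 100
x = rng.uniform(0, 10, n)
y_clean = 2.0 * x + 1.0 + rng.normal(0, 1, n)

# Dataset 2: identical, except 3 gross outliers are injected
y_outlier = y_clean.copy()
y_outlier[:3] += 25.0

def resample_fits(x, y, frac=0.7, reps=1000):
    """Fit a straight line to a random 70% subsample, reps times;
    return the recorded slopes and intercepts."""
    m = int(frac * len(x))
    slopes = np.empty(reps)
    intercepts = np.empty(reps)
    for i in range(reps):
        idx = rng.choice(len(x), size=m, replace=False)
        slopes[i], intercepts[i] = np.polyfit(x[idx], y[idx], deg=1)
    return slopes, intercepts

s_clean, i_clean = resample_fits(x, y_clean)
s_out, i_out = resample_fits(x, y_outlier)

# With clean data the (slope, intercept) cloud looks bivariate normal;
# with outliers present it becomes visibly non-normal.
print("clean   slope: mean %.3f  sd %.3f" % (s_clean.mean(), s_clean.std()))
print("outlier slope: mean %.3f  sd %.3f" % (s_out.mean(), s_out.std()))
```

Plotting `i_clean` against `s_clean` (and likewise for the outlier dataset) reproduces the qualitative contrast described for Plots C and D.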

Nelder and Wedderburn (1972) extended the general linear model in two ways. First, they relaxed the assumption that the observations have the normal distribution to allow the observations to come from some one-dimensional exponential family, not necessarily normal. Second, instead of requiring the mean of the observations to equal a linear function of the predictor, they allowed a function of the mean to be linked to (set equal to) the linear predictor. They named this the generalized linear model and called the function set equal to the linear predictor the link function. The logistic regression model satisfies the assumptions of the generalized linear model. They are ... [Pg.182]
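To make the link-function idea concrete, here is a short sketch fitting a logistic regression as a generalized linear model with statsmodels; the synthetic data and coefficient values are assumptions for illustration only:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Synthetic binary responses with P(y = 1) = logistic(0.8*x - 1)
x = rng.normal(0, 2, 200)
p = 1.0 / (1.0 + np.exp(-(0.8 * x - 1.0)))
y = rng.binomial(1, p)

# Random component: Binomial (a one-dimensional exponential family).
# Link function: logit, the Binomial default, so logit(mu) = X @ beta.
X = sm.add_constant(x)
result = sm.GLM(y, X, family=sm.families.Binomial()).fit()
print(result.params)  # fitted intercept and slope on the logit scale
```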

A mainstay of the classical partition model is the experimental observation that the major thermodynamic driving force for sorption is the hydrophobic effect. The hydrophobic effect results from the gain in free energy when non- or weakly-polar molecular surface is transferred out of the polar medium of water (2-4). The hydrophobic effect is manifested by a linear free energy relationship (LFER) between the NOM-normalized partition coefficient ($K_{NOM}$) and the n-octanol-water partition coefficient ($K_{ow}$) [i.e., $\ln K_{NOM} = a \ln K_{ow} + b$, where a and b are regression constants], or the inverse of the compound's liquid (or theoretical subcooled liquid) saturated water solubility ($C_w$) [i.e., $\ln K_{NOM} = -c \ln C_w + d$]. ... [Pg.206]
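The regression constants a and b in the first LFER are typically estimated by ordinary least squares on log-transformed data; a minimal sketch with purely hypothetical numbers standing in for measured partition coefficients:

```python
import numpy as np

# Hypothetical (ln Kow, ln Knom) pairs for a series of sorbates;
# real values would come from partitioning experiments.
ln_kow = np.array([2.1, 3.0, 3.8, 4.5, 5.2])
ln_knom = np.array([1.6, 2.4, 3.1, 3.7, 4.3])

# LFER: ln Knom = a * ln Kow + b
a, b = np.polyfit(ln_kow, ln_knom, deg=1)
print(f"a = {a:.3f}, b = {b:.3f}")
```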


See other pages where Observations from Normal Linear Regression Model is mentioned: [Pg.145], [Pg.45], [Pg.433], [Pg.293], [Pg.360], [Pg.203], [Pg.386], [Pg.11]


