
Linear predictor

It is also possible to use the information which has been stored to write programs for other tasks. A useful one, for example, keeps track of the stoichiometry (i.e., total atom counts) of the system. For a closed system, stoichiometry should be automatically maintained by linear predictor-corrector solvers, so the stoichiometry program provides a diagnostic of numerical (and other) errors which have accumulated. For systems that are not closed, it gives an independent check on the sources and sinks being modeled. [Pg.123]
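As a hedged illustration of such a diagnostic (a sketch, not code from the source; the composition matrix and species amounts below are made-up example data), the total count of each element can be recomputed at each step and compared to the initial values:

```python
import numpy as np

# Stoichiometry check for a closed system: the element totals, composition.T @ amounts,
# should stay constant as the solver advances; any drift flags accumulated error.
# composition[i, j] = number of atoms of element j (here C, H) in species i.
composition = np.array([[1, 4],    # CH4
                        [1, 0],    # C
                        [0, 2]])   # H2
amounts0 = np.array([1.0, 0.0, 0.0])   # initial moles of each species
amounts  = np.array([0.7, 0.3, 0.6])   # moles after some integration steps

totals0 = composition.T @ amounts0     # initial atom totals per element
totals  = composition.T @ amounts      # current atom totals per element
print("atom-count drift per element:", totals - totals0)
```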

The best linear predictor will be provided by that set of coefficients ak (k = 1, 2, ..., N) which minimizes the expected square error... [Pg.335]

Following the random-function model (1), consider the prediction of Y(x) by Ŷ(x) = a(x)'y, that is, a linear combination of the n values of the output variable observed in the experiment. The best linear unbiased predictor is obtained by minimizing the mean squared error of the linear predictor or approximator, Ŷ(x). The mean squared error, MSE[Ŷ(x)], is... [Pg.313]
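For reference, the quantity being minimized can be written in generic notation (not quoted from the source) as

```latex
\operatorname{MSE}\!\left[\hat{Y}(x)\right]
  = E\!\left[\bigl(a(x)^{\mathsf T} y - Y(x)\bigr)^{2}\right],
```

minimized over a(x) subject to the unbiasedness constraint E[a(x)'y] = E[Y(x)].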

FIGURE 7.17 Plotted are the observations versus the predictions obtained with a model that includes creatinine clearance as a linear predictor of clearance. Since individuals 16 and 42 dominated the graph, they were excluded from this display. [Pg.204]

Since the best linear predictor in the mean square sense is obtained by conditioning the estimate on all available past information, we get that the r-step ahead forecast,... [Pg.255]

Theorem 6.1 (m-step ahead linear predictor) The m-step ahead linear predictor,... [Pg.288]
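In generic notation (a reference formulation, not quoted from the source), the m-step ahead linear predictor is the linear function of the past observations minimizing the mean squared forecast error; under the conditions discussed above it coincides with the conditional expectation given the available past:

```latex
\hat{y}_{t+m \mid t} = E\!\left[\, y_{t+m} \mid y_t, y_{t-1}, \ldots \,\right],
\qquad
\hat{y}_{t+m \mid t} \ \text{minimizes}\ E\!\left[ \bigl( y_{t+m} - \hat{y} \bigr)^{2} \right].
```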

The purpose of the multivariate calibration here is to predict an analyte's concentrations yi in objects i = 1, 2, ..., I from a set of optical densities xik at wavelength channels k = 1, 2, ..., K via a linear predictor equation shown as follows ... [Pg.190]
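A minimal sketch of fitting and applying such a linear predictor (ordinary least squares is used here purely for illustration; the source may employ a different calibration method, and all data below are fabricated):

```python
import numpy as np

# X[i, k] = optical density of object i at wavelength channel k (fabricated data),
# y[i]    = analyte concentration in object i.  Fit b0, b so that y_hat = b0 + X @ b.
rng = np.random.default_rng(1)
X = rng.random((20, 5))                                     # 20 objects, 5 channels
y = X @ np.array([0.5, 1.0, 0.0, 2.0, 0.3]) + 0.1 * rng.normal(size=20)

X1 = np.column_stack([np.ones(len(X)), X])                  # prepend an intercept column
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)               # least-squares coefficients
b0, b = coef[0], coef[1:]
y_hat = b0 + X @ b                                          # linear predictor of concentrations
```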

The logit is linked to the linear predictor, an unknown linear function of the predictor variables. [Pg.181]
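In the usual notation (given for reference, not quoted from the source), the logit link sets

```latex
\operatorname{logit}(\pi_i)
  = \log\frac{\pi_i}{1-\pi_i}
  = \eta_i
  = \beta_0 + \beta_1 x_{i1} + \cdots + \beta_p x_{ip},
```

where ηi is the linear predictor and πi is the success probability of observation i.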

Nelder and Wedderburn (1972) extended the general linear model in two ways. First, they relaxed the assumption that the observations have the normal distribution to allow the observations to come from some one-dimensional exponential family, not necessarily normal. Second, instead of requiring the mean of the observations to equal a linear function of the predictor, they allowed a function of the mean to be linked to (set equal to) the linear predictor. They named this the generalized linear model and called the function set equal to the linear predictor the link function. The logistic regression model satisfies the assumptions of the generalized linear model. They are ... [Pg.182]

If we have a random sample from the posterior instead of the exact posterior distribution, we draw a random sample from the predictive distribution instead of evaluating it exactly. For each draw from the posterior sample we calculate the value of the linear predictor using those particular values of the predictor variables. Then we calculate the success probability for that draw. We draw a binomial(1, πi) random variable for each of the success probabilities πi. This gives us a random sample from the predictive distribution. [Pg.198]
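A minimal sketch of this procedure (not code from the source; beta_draws and x_new are hypothetical names for the posterior draws of the coefficients and the new case's predictor values):

```python
import numpy as np

rng = np.random.default_rng(0)

def predictive_draws(beta_draws, x_new, rng=rng):
    """One predictive draw per posterior draw of the coefficients."""
    eta = beta_draws @ x_new                 # linear predictor for each posterior draw
    pi = 1.0 / (1.0 + np.exp(-eta))          # inverse logit -> success probability
    return rng.binomial(1, pi)               # binomial(1, pi) draw for each probability

# toy usage with fabricated posterior draws (2000 draws, 3 coefficients)
beta_draws = rng.normal(size=(2000, 3))
x_new = np.array([1.0, 60.0, 1.0])           # e.g. intercept, age, male indicator
y_pred = predictive_draws(beta_draws, x_new)
print(y_pred.mean(), y_pred.std())
```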

Example 12 (continued) Suppose we want to find the predictive distribution for 30-day survival of a 60-year-old male patient who does not have shock or renal failure and is given a stent. Let the draws of the intercept, the slope coefficient for age, the slope coefficient for male, and the slope coefficient for stent be in columns c1, c2, c3, and c6, respectively. We let column c8 equal the draws of the linear predictor for that male. Then we let column c9 equal the exponential of c8 divided by one plus the exponential of c8 for that male. Then we let c10 be a random draw from a binomial(1, c9); each observation has its own success probability, namely its value in c9. The summary statistics for the 2000 draws from the predictive distribution are mean .9615 and standard deviation .19245. [Pg.199]

In the logistic regression model, the observations are independent binomial observations where each observation has its own probability of success that is related to a set of predictor variables by setting the logarithm of the odds ratio equal to an unknown linear function of the predictor variables. This is known as the logit link function. The coefficients of the linear predictor are the unknown parameters that we are interested in. [Pg.199]

The logistic regression model is an example of a generalized linear model. The observations come from a member of a one-dimensional exponential family, in this case the binomial. Each observation has its own parameter value that is linked to the linear predictor by a link function, in this case the logit link. The observations are all independent. [Pg.199]

The maximum likelihood estimates of the coefficients of the linear predictor can be found by iteratively reweighted least squares. This method also finds a "covariance matrix."... [Pg.200]
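A generic sketch of iteratively reweighted least squares for the logistic model (an illustration under stated assumptions, not the source's code; X is assumed to be a design matrix with an intercept column of ones and y the 0/1 response):

```python
import numpy as np

def irls_logistic(X, y, n_iter=25, tol=1e-8):
    """Fisher-scoring / Newton steps for the logistic-regression MLE."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        pi = 1.0 / (1.0 + np.exp(-eta))       # fitted success probabilities
        w = pi * (1.0 - pi)                   # IRLS weights
        XtWX = X.T @ (w[:, None] * X)         # weighted normal-equations matrix
        delta = np.linalg.solve(XtWX, X.T @ (y - pi))
        beta = beta + delta
        if np.max(np.abs(delta)) < tol:
            break
    cov = np.linalg.inv(XtWX)                 # matched-curvature "covariance matrix"
    return beta, cov
```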

This is called the log link function and it relates the linear predictor to the parameter. Other link functions could be used, but the log link is the most commonly used for Poisson observations. We use the link function to rewrite the likelihood function as a function of the parameters β0, ..., βp. The likelihood becomes... [Pg.205]
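For reference (a standard formulation in generic notation, not quoted from the source), the log link sets

```latex
\log(\mu_i) = \eta_i = \beta_0 + \beta_1 x_{i1} + \cdots + \beta_p x_{ip},
\qquad \mu_i = e^{\eta_i},
```

so that the Poisson likelihood, viewed as a function of β0, ..., βp, is proportional to

```latex
\prod_{i=1}^{n} e^{-\mu_i}\, \mu_i^{\,y_i}.
```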

Note that we can absorb the underlying constant hazard rate λ into the linear predictor by including the intercept term β0 = loge λ. At the end of the study, for each individual we have recorded ti, which is either the time of death or the time at the end of the study, and an indicator variable wi which indicates whether ti is the time of death or the time at the end of the study. In the latter case, we don't know Ti, the time of death of the individual, only that Ti > ti. We say that observation Ti has been censored, and wi is the censoring variable... [Pg.215]

We have seen this before in Equation 9.1, where it is the likelihood for a random sample of n independent Poisson random variables with parameters μi. This means that, given λ, we can treat the censoring variables wi as an independent random sample of Poisson random variables with respective parameters μi. Suppose we let ηi be the linear predictor. Taking the logarithm of the parameter μi we... [Pg.216]

so the linear predictor is linked to the parameter via the log link function. The term loge(λ ti) is called an offset. The coefficient of the offset is assumed to be one and will not be estimated. We can absorb the underlying hazard rate λ by including an intercept β0 in the linear predictor. In terms of the parameters β0, ..., βp the likelihood becomes... [Pg.217]
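A standard way to write this relationship (given for reference; the notation mirrors the discussion above rather than reproducing the source's equations):

```latex
\log(\mu_i)
  = \log_e(\lambda t_i) + \beta_1 x_{i1} + \cdots + \beta_p x_{ip}
  = \underbrace{\log_e(t_i)}_{\text{offset}} + \beta_0 + \beta_1 x_{i1} + \cdots + \beta_p x_{ip},
\qquad \beta_0 = \log_e \lambda .
```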

The observations of the censoring variable wi come from the Poisson distribution, a member of the exponential family. The logarithm of the parameter μi is linked to the linear predictor ηi. The observations are independent. Clearly the proportional hazards model is a generalized linear model and can be analyzed the same way as the Poisson regression model. [Pg.217]

Including extra predictor variables that do not affect the linear predictor will lead to poor predictions. In other words, if the true coefficient βj = 0, then we should not include variable xj as a predictor, since the parameter will not be related to it. We will get better predictions for new data if we leave out the unnecessary predictors. This... [Pg.224]

The link function relates the linear predictor to a function of the parameter. The log link function is commonly used for the Poisson regression model. [Pg.228]

A constant underlying hazard rate λ can be estimated by including an intercept β0 in the linear predictor. [Pg.229]

When we have censored survival time data and we relate the linear predictor to the hazard function, we have the proportional hazards model. The function BayesCPH draws a random sample from the posterior distribution for the proportional hazards model. First, the function finds an approximate normal likelihood function for the proportional hazards model. The (multivariate) normal likelihood matches the mean to the maximum likelihood estimator found using iteratively reweighted least squares. Details of this are found in Myers et al. (2002) and Jennrich (1995). The covariance matrix is found that matches the curvature of the likelihood function at its maximum. The approximate normal posterior is found by applying the usual normal updating formulas with a normal conjugate prior. If we used this as the candidate distribution, it may be that the tails of the true posterior are heavier than those of the candidate distribution. This would mean that the accepted values would not be a sample from the true posterior because the tails would not be adequately represented. Assuming that y is the Poisson censored response vector, time is the vector of observation times, and x is a vector of covariates, then... [Pg.302]
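The heavy-tails concern above is commonly handled by running an independence Metropolis-Hastings chain whose candidate density has heavier tails than the matched-curvature normal. The sketch below is a generic illustration of that idea using a multivariate-t candidate; it is not the BayesCPH implementation, and log_post, m, and V (the unnormalized log posterior, approximate posterior mean, and covariance) are assumed inputs:

```python
import numpy as np
from scipy import stats

def independence_mh(log_post, m, V, n_draws=5000, df=4, seed=0):
    """Independence Metropolis-Hastings with a heavy-tailed multivariate-t candidate."""
    rng = np.random.default_rng(seed)
    cand = stats.multivariate_t(loc=m, shape=V, df=df)   # heavier tails than the normal
    beta = np.asarray(m, dtype=float)
    log_p, log_q = log_post(beta), cand.logpdf(beta)
    draws = np.empty((n_draws, len(m)))
    for i in range(n_draws):
        prop = np.reshape(cand.rvs(random_state=rng), -1)
        log_p_prop, log_q_prop = log_post(prop), cand.logpdf(prop)
        # acceptance ratio for an independence chain
        if np.log(rng.uniform()) < (log_p_prop - log_p) - (log_q_prop - log_q):
            beta, log_p, log_q = prop, log_p_prop, log_q_prop
        draws[i] = beta
    return draws
```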


See other pages where Linear predictor is mentioned: [Pg.334]    [Pg.253]    [Pg.203]    [Pg.395]    [Pg.328]    [Pg.346]    [Pg.51]    [Pg.88]    [Pg.464]    [Pg.464]    [Pg.288]    [Pg.456]    [Pg.22]    [Pg.199]    [Pg.203]    [Pg.204]    [Pg.205]    [Pg.227]    [Pg.283]   
See also in source #XX -- [Pg.313]







Best linear unbiased predictors

Empirical best linear unbiased predictor

Predictors
