Big Chemical Encyclopedia


Iteratively reweighted least squares

A useful method of weighting is the use of an iteratively reweighted least squares algorithm. The first step in this process is to fit the data to an unweighted model. Table 11.7 shows a set of responses to a range of concentrations of an agonist in a functional assay. The data are fit to a three-parameter model of the form... [Pg.237]
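The two-pass scheme described above can be sketched as follows. The Hill-type model, the synthetic data, and the response-proportional variance model are illustrative assumptions; the original Table 11.7 and the model equation are not reproduced in the excerpt.

```python
# Sketch: unweighted fit first, then refit with weights derived from
# the fitted values, iterating until the estimates stabilize.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, emax, ec50, n):
    """Three-parameter concentration-response model (assumed form)."""
    return emax * conc**n / (ec50**n + conc**n)

# Synthetic agonist concentration-response data (illustrative, not Table 11.7).
conc = np.array([1e-9, 1e-8, 1e-7, 1e-6, 1e-5, 1e-4])
resp = np.array([1.2, 9.5, 48.0, 92.0, 98.5, 100.2])

# Step 1: unweighted fit.
popt, _ = curve_fit(hill, conc, resp, p0=[100.0, 1e-7, 1.0])

# Step 2: derive weights from the fitted values (assuming variance
# proportional to the predicted response) and refit; repeat.
for _ in range(5):
    sigma = np.sqrt(hill(conc, *popt))   # assumed variance model
    popt, _ = curve_fit(hill, conc, resp, p0=popt, sigma=sigma)

emax, ec50, n = popt
```

The key point is that the weights are not known in advance: they are recomputed from each fit's predictions, which is what makes the scheme iterative.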

Note that there is a strong similarity to LDA (Section 5.2.1), because it can be shown that also for LDA the log-ratio of the posterior probabilities is modeled by a linear function of the x-variables. However, for LR, we make no assumption about the data distribution, and the parameters are estimated differently. The estimation of the coefficients b0, b1, ..., bm is done by the maximum likelihood method, which leads to an iteratively reweighted least squares (IRLS) algorithm (Hastie et al. 2001). [Pg.222]
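That IRLS iteration for the logistic-regression coefficients can be sketched as below; the toy data are an assumption for illustration.

```python
# IRLS for logistic regression: at each pass, recompute the fitted
# probabilities, the weights, and the adjusted (working) response,
# then solve a weighted least-squares problem.
import numpy as np

def logistic_irls(X, y, n_iter=25):
    """Estimate logistic-regression coefficients b0, b1, ..., bm by IRLS."""
    X1 = np.column_stack([np.ones(len(y)), X])   # add intercept column
    b = np.zeros(X1.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(X1 @ b)))      # fitted probabilities
        W = p * (1.0 - p)                        # IRLS weights
        z = X1 @ b + (y - p) / W                 # adjusted (working) response
        # Weighted least squares on the working response:
        b = np.linalg.solve(X1.T @ (W[:, None] * X1), X1.T @ (W * z))
    return b

# Toy, non-separable data: probability of response rises with x.
x = np.array([-2.0, -1.5, -1.0, -0.5, 0.5, 1.0, 1.5, 2.0])
y = np.array([0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0])
b = logistic_irls(x[:, None], y)
```

Because each pass is an ordinary weighted least-squares solve, the method converges quickly when the classes overlap; with perfectly separated classes the MLE does not exist and the coefficients diverge.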


The new estimate of θ is a more efficient estimator since it makes use of the variability in the data. A modification of this process is to combine Steps 2 and 3 into a single step and iterate until the GLS parameter estimates stabilize. This modification is referred to as iteratively reweighted least squares (IRWLS) and is an option available in both WinNonlin and SAS. Early versions of WinNonlin were limited in that g(.) was restricted to the form in Eq. (4.6), where φ is specified by the user. For example, specifying φ = 0.5 forces weights... [Pg.133]
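The combined-step IRWLS loop might look as follows, assuming a power-of-the-mean variance model in which the weights are 1/f(x, θ)^(2φ) for a user-specified φ; the exponential model, the data, and this reading of Eq. (4.6) are assumptions for illustration.

```python
# Combined-step IRWLS: refit with weights computed from the current
# parameter estimates, and iterate until the estimates stabilize.
import numpy as np
from scipy.optimize import curve_fit

def model(t, a, k):
    """Illustrative monoexponential model f(t; a, k)."""
    return a * np.exp(-k * t)

t = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 12.0])
y = np.array([90.0, 78.0, 62.0, 38.0, 15.0, 6.0])
phi = 0.5                              # user-specified exponent (assumed role)

theta = np.array([100.0, 0.2])         # starting estimates
for _ in range(10):
    yhat = model(t, *theta)
    sigma = yhat**phi                  # sd ~ mean**phi, i.e. weight 1/mean**(2*phi)
    theta_new, _ = curve_fit(model, t, y, p0=theta, sigma=sigma)
    if np.max(np.abs(theta_new - theta)) < 1e-8:
        theta = theta_new
        break                          # GLS estimates have stabilized
    theta = theta_new

a, k = theta
```

The stopping rule on the change in θ is what distinguishes the iterated scheme from a single GLS pass.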

In general, it may be messy to find the simultaneous solution of these equations algebraically. Nelder and Wedderburn (1972) showed that in the generalized linear model, these maximum likelihood estimators could also be found by iteratively reweighted least squares. Let the observation vector and parameter vector be... [Pg.182]

The maximum likelihood estimate of the coefficient of the linear predictors can be found by iteratively reweighted least squares. This method also finds a "covariance matrix."... [Pg.200]

We recognize the equation in step 2 as the weighted least squares estimate on the adjusted observations. Iterating through these two steps until convergence finds the iteratively reweighted least squares estimates. [Pg.207]
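The two steps above can be sketched for a Poisson generalized linear model with log link; the data are illustrative.

```python
# Two-step IRLS for a Poisson GLM: form the adjusted observations,
# then solve a weighted least-squares problem on them, and repeat.
import numpy as np

def poisson_irls(X, y, n_iter=25):
    """Poisson regression with log link, fit by the two-step IRLS loop."""
    X1 = np.column_stack([np.ones(len(y)), X])
    mu = y + 0.5                       # standard GLM starting values
    eta = np.log(mu)
    b = np.zeros(X1.shape[1])
    for _ in range(n_iter):
        W = mu                         # GLM weights for Poisson / log link
        z = eta + (y - mu) / mu        # step 1: adjusted observations
        # step 2: weighted least squares on the adjusted observations
        b = np.linalg.solve(X1.T @ (W[:, None] * X1), X1.T @ (W * z))
        eta = X1 @ b
        mu = np.exp(eta)
    return b

# Counts that double with each unit of x, so the log-link slope
# should come out near log(2).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
b = poisson_irls(x[:, None], y)
```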

We approximate the likelihood function by a multivariate normal centered at θ_ML, where θ_ML is the MLE and the covariance is the matched curvature covariance matrix output by the iteratively reweighted least squares. We use a multivariate normal [b0, V0] prior, or we can use a "flat" prior if we have no prior information. The approximate posterior will be... [Pg.207]
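A numerical sketch of this update, assuming an MLE and matched-curvature covariance already produced by IRLS; all numbers are illustrative, and the conjugate combination uses the usual normal updating formulas.

```python
# Combine a normal [b0, V0] prior with the matched-curvature normal
# likelihood N(theta_ML, V_ML): posterior precision is the sum of the
# precisions, and the posterior mean is the precision-weighted average.
import numpy as np

theta_ml = np.array([0.8, -1.2])          # MLE from IRLS (assumed values)
V_ml = np.array([[0.04, 0.01],            # matched curvature covariance
                 [0.01, 0.09]])
b0 = np.zeros(2)                          # prior mean
V0 = np.eye(2) * 10.0                     # vague prior covariance

P1 = np.linalg.inv(V0) + np.linalg.inv(V_ml)          # posterior precision
V1 = np.linalg.inv(P1)                                # posterior covariance
b1 = V1 @ (np.linalg.inv(V0) @ b0 + np.linalg.inv(V_ml) @ theta_ml)
```

With a vague prior the posterior mean lands very close to the MLE, which is why the flat-prior option in the text behaves like the likelihood approximation alone.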

When we have censored survival times data, and we relate the linear predictor to the hazard function, we have the proportional hazards model. The function BayesCPH draws a random sample from the posterior distribution for the proportional hazards model. First, the function finds an approximate normal likelihood function for the proportional hazards model. The (multivariate) normal likelihood matches the mean to the maximum likelihood estimator found using iteratively reweighted least squares. Details of this are found in Myers et al. (2002) and Jennrich (1995). The covariance matrix is found that matches the curvature of the likelihood function at its maximum. The approximate normal posterior is found by applying the usual normal updating formulas with a normal conjugate prior. If we used this as the candidate distribution, it may be that the tails of the true posterior are heavier than the candidate distribution. This would mean that the accepted values would not be a sample from the true posterior because the tails would not be adequately represented. Assuming that y is the Poisson censored response vector, time is the time vector, and x is a vector of covariates, then... [Pg.302]

MINITAB (and other software) was used to estimate the parameters of interest via several more refined statistical methods, including iteratively reweighted least squares (IRLS), maximum likelihood estimation (MLE), probit analysis, and ordinal regression. The exact procedures are discussed below. [Pg.265]







