Big Chemical Encyclopedia


Maximum-likelihood regression

VLE data are correlated by any one of thirteen equations representing the excess Gibbs energy in the liquid phase. These equations contain from two to five adjustable binary parameters; these are estimated by a nonlinear regression method based on the maximum-likelihood principle (Anderson et al., 1978). [Pg.211]

Weighted regression of ²³⁸U-²³⁴U-²³⁰Th-²³²Th isotope data on three or more coeval samples provides robust estimates of the isotopic information required for age calculation. Ludwig (2003) details the use of maximum likelihood estimation of the regression parameters in either coupled XY-XZ isochrons or a single three-dimensional XYZ isochron, where X, Y and Z correspond to either (1) ²³⁸U/²³²Th, ²³⁰Th/²³²Th and... [Pg.414]

When experimental data are to be fit with a mathematical model, it is necessary to allow for the fact that the data contain errors. The engineer is interested in finding the parameters in the model as well as the uncertainty in their determination. In the simplest case, the model is a linear equation with only two parameters, and they are found by a least-squares minimization of the errors in fitting the data. Multiple regression is just linear least squares applied with more terms. Nonlinear regression allows the parameters of the model to enter in a nonlinear fashion. See Press et al. (1986) for a description of maximum likelihood as it applies to both linear and nonlinear least squares. [Pg.84]
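As a minimal sketch of the simplest case above, the following Python snippet fits a two-parameter straight line by least squares and reads the parameter uncertainties off the covariance matrix (the data and noise level are synthetic, chosen only for illustration):

```python
import numpy as np

# Synthetic data with Gaussian noise (illustrative values, not from the text)
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 25)
y = 2.0 + 0.5 * x + rng.normal(scale=0.3, size=x.size)

# Least-squares fit of y = a + b*x; cov holds the parameter covariance matrix
coeffs, cov = np.polyfit(x, y, deg=1, cov=True)
b, a = coeffs                          # polyfit returns highest degree first
sigma_b, sigma_a = np.sqrt(np.diag(cov))

print(f"a = {a:.3f} +/- {sigma_a:.3f}")
print(f"b = {b:.3f} +/- {sigma_b:.3f}")
```

The square roots of the diagonal of the covariance matrix give the standard errors of the intercept and slope, which is the "uncertainty in their determination" referred to above.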

In a well-behaved calibration model, residuals will have a Normal (i.e., Gaussian) distribution. In fact, as we have previously discussed, least-squares regression analysis is also a Maximum Likelihood method, but only when the errors are Normally distributed. If the data do not follow the straight-line model, then there will be an excessive number of residuals with values that are too large, and the residuals will then not follow the Normal distribution. It follows, then, that a test for Normality of residuals will also detect nonlinearity. [Pg.437]
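A small sketch of that idea: fit a straight line to data that are actually quadratic, then apply a Normality test (here Shapiro-Wilk, one possible choice) to the residuals. The misfit shows up as non-Normal residuals. The curvature and noise level are invented for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = np.linspace(0.0, 5.0, 100)

# Truly quadratic data, deliberately fitted with a straight line
y = 1.0 + 0.2 * x + 0.3 * x**2 + rng.normal(scale=0.05, size=x.size)
slope, intercept = np.polyfit(x, y, 1)
residuals = y - (intercept + slope * x)

# Shapiro-Wilk: a small p-value means the residuals are not Normal,
# flagging the lack of fit even though we never plotted the data
stat, p_value = stats.shapiro(residuals)
print(f"Shapiro-Wilk p = {p_value:.2e}")
```

Because the systematic curvature dominates the random noise here, the test rejects Normality decisively.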

Ordinary least squares regression requires constant variance across the range of data. This has typically not been satisfied with chromatographic data ( 4,9,10 ). Some have adjusted data to constant variance by a weighted least squares method ( ). The other general adjustment method has been by transformation of data. The log-log transformation is commonly used ( 9,10 ). One author compares the robustness of nonweighted, weighted linear, and maximum likelihood estimation methods ( ). Another has... [Pg.134]
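The weighted least squares adjustment mentioned above can be sketched as follows, assuming (as is common for chromatographic responses) that the standard deviation grows in proportion to the signal, so that weights of 1/y² are appropriate. The model, data, and weight choice are illustrative assumptions, not taken from the cited studies:

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(1.0, 100.0, 50)

# Heteroscedastic data: noise grows in proportion to the signal
y_true = 0.8 * x
y = y_true + rng.normal(scale=0.05 * y_true)

# Weighted least squares through the origin with weights w = 1/variance ~ 1/y^2
w = 1.0 / y**2
slope_wls = np.sum(w * x * y) / np.sum(w * x**2)

# Unweighted (ordinary) least squares for comparison
slope_ols = np.sum(x * y) / np.sum(x**2)
print(slope_wls, slope_ols)
```

With proportional noise, the unweighted fit lets the large-signal points dominate, whereas the weighted fit gives every point comparable influence; both recover the slope here, but the weighted estimate has the smaller variance when the assumed error model holds.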

To compute the limited information maximum likelihood estimator, we require the matrix of sums of squares and cross products of residuals of the regressions of y₁ and y₂ on x₁ and on x₁, x₂, and x₃. These are... [Pg.76]

For random sampling from the classical regression model in (17-3), reparameterize the likelihood function in terms of η = 1/σ and δ = (1/σ)β. Find the maximum likelihood estimators of η and δ and obtain the asymptotic covariance matrix of the estimators of these parameters. [Pg.90]

Maximum likelihood estimates of the Poisson regression parameters are given below. [Pg.110]
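Poisson regression parameters are typically obtained by maximizing the Poisson log-likelihood numerically. The sketch below, with invented data and a log link, minimizes the negative log-likelihood directly; the model and sample size are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic Poisson-distributed counts with a log-linear mean
rng = np.random.default_rng(3)
x = rng.uniform(0.0, 2.0, size=200)
beta_true = np.array([0.5, 1.0])
mu = np.exp(beta_true[0] + beta_true[1] * x)
y = rng.poisson(mu)

def neg_log_likelihood(beta):
    eta = beta[0] + beta[1] * x
    # Poisson log-likelihood up to an additive constant: sum(y*eta - exp(eta))
    return -np.sum(y * eta - np.exp(eta))

res = minimize(neg_log_likelihood, x0=np.zeros(2), method="BFGS")
print(res.x)
```

The term log(y!) is dropped because it does not depend on beta, so the optimizer finds the same maximum-likelihood estimates as the full likelihood would.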

A nonlinear multiresponse regression program (9) was used to search for the parameters which yield statistically the best agreement (maximum likelihood (10)) between the twenty interpolated and calculated responses. [Pg.23]

Mallet, A., A maximum likelihood estimation method for random coefficient regression models, Biometrika, Vol. 73, No. 3, 1986, pp. 645-656. [Pg.420]

An independent method to identify the stochastic errors of impedance data is described in Chapter 21. An alternative approach has been to use the method of maximum likelihood, in which the regression procedure is used to obtain a joint estimate for the parameter vector P and the error structure of the data. The maximum likelihood method is recommended under conditions where the error structure is unknown, but the error structure obtained by simultaneous regression is severely constrained by the assumed form of the error-variance model. In addition, the assumption that the error variance model can be obtained by minimizing the objective function ignores the differences among the contributions to the residual errors shown in Chapter 21. Finally, the use of the regression procedure to estimate the standard deviation of the data precludes use of the statistic... [Pg.382]
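The joint estimation idea described above can be sketched generically: fold an error-variance parameter into the likelihood and optimize it together with the model parameters. The exponential decay model, constant-variance error model, and all numerical values here are illustrative assumptions, not the impedance models of the source:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
x = np.linspace(0.0, 1.0, 80)
y = 3.0 * np.exp(-2.0 * x) + rng.normal(scale=0.05, size=x.size)

# Joint ML estimate of model parameters (a, k) and error parameter log(sigma).
# Negative Gaussian log-likelihood up to a constant:
#   0.5 * sum((r/sigma)^2) + n * log(sigma)
def neg_log_lik(theta):
    a, k, log_sigma = theta
    sigma = np.exp(log_sigma)
    r = y - a * np.exp(-k * x)
    return 0.5 * np.sum((r / sigma) ** 2) + x.size * log_sigma

res = minimize(neg_log_lik, x0=[1.0, 1.0, 0.0], method="Nelder-Mead")
a_hat, k_hat, sigma_hat = res.x[0], res.x[1], np.exp(res.x[2])
print(a_hat, k_hat, sigma_hat)
```

Note how the recovered sigma is tied to the assumed constant-variance form: this is exactly the constraint the source text warns about, since a richer error structure would be invisible to this objective function.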

Haines et al. (47) suggested including the Bayesian D-optimality criterion, which maximizes a concave function of the information matrix and in essence minimizes the generalized variance of the maximum likelihood estimators of the two parameters of the logistic regression. The authors underline that toxicity is recorded as an ordinal variable and not a simple binary variable, and that the present design needs to be extended to proportional odds models. [Pg.792]

Once a PBPK model is developed and implemented, it should be tested for mass balance consistency, as well as through simulated test cases that can highlight potential errors. These test cases often include software boundary conditions, such as zero dose and high initial tissue concentrations. Some parameters in the PBPK model may have to be estimated through available in vivo data via standard techniques such as nonlinear regression or maximum likelihood estimation (30). Furthermore, in vivo data can be used to update existing (or prior) PBPK model parameter estimates in a Bayesian framework, and thus help in the refinement of the PBPK model. The Markov chain Monte Carlo (MCMC) (31-34) is one of the... [Pg.1077]

Also under OLS assumptions, the regression parameter estimates have a number of optimal properties. First, θ̂ is an unbiased estimator for θ. Second, the standard errors of the estimates are at a minimum, i.e., the standard errors of estimates obtained under any other assumptions will be larger than those of the OLS estimates. Third, assuming the errors to be normally distributed, the OLS estimates are also the maximum likelihood (ML) estimates for θ (see below). It is often stated that the OLS parameter estimates are BLUE (Best Linear Unbiased Estimators) in the sense that best means minimum variance. Fourth, OLS estimates are consistent, which in simple terms means that as the sample size increases the standard error of the estimate decreases and the bias of the parameter estimates themselves decreases. [Pg.59]
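The third property is easy to verify numerically: with Normally distributed errors, maximizing the Gaussian likelihood over the regression coefficients is equivalent to minimizing the sum of squared residuals, so the ML and closed-form OLS estimates coincide. A sketch with synthetic data (all values illustrative):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
X = np.column_stack([np.ones(60), rng.uniform(0.0, 1.0, 60)])
beta_true = np.array([1.0, 2.0])
y = X @ beta_true + rng.normal(scale=0.1, size=60)

# Closed-form OLS solution
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# ML under Normal errors: sigma factors out of the argmax over beta,
# leaving the sum of squared residuals as the objective
def neg_log_lik(beta):
    r = y - X @ beta
    return 0.5 * np.sum(r**2)

beta_ml = minimize(neg_log_lik, np.zeros(2)).x
print(beta_ols, beta_ml)
```

The two estimates agree to numerical precision, which is the sense in which least squares "is" maximum likelihood under Normal errors.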

A parametric method for handling missing data is maximum likelihood. Recall that in linear regression maximum likelihood maximizes the likelihood function L(·)... [Pg.88]

Missing and censored data should be handled exactly as in the case of linear regression. The analyst can use complete case analysis, naive substitution, conditional mean substitution, maximum likelihood, or multiple imputation. The same advantages and disadvantages for these techniques that were present with linear regression apply to nonlinear regression. [Pg.121]

The regression of parameters from experimental data can follow two statistical techniques: least squares (LSQ) and maximum likelihood (ML). [Pg.204]

In a second attempt, we treat the same problem by the maximum likelihood method by using the Data Regression System available in Aspen Plus. The results are B₁₂ = 0.8449 and B₂₁ = 0.5274. [Pg.208]

Figure 6.16 shows a comparison between calculated and experimental data, with Antoine parameters from the database, and the Maximum Likelihood objective function. The accuracy is sufficient for technical computations, but it can be improved by regressing the Antoine parameters simultaneously. [Pg.218]

