Big Chemical Encyclopedia


Zero-parameter models, optimization

Perhaps the simplest comparison that we can make is for zero-parameter models, and this is where we start. Zero-parameter models have nothing to optimize. They suggest an immediate and convenient framework for comparison, independent of successful (or unsuccessful) optimization of parameters. [Pg.90]

We obtain an r.m.s. deviation of 0.84 kcal/mol with an optimal α of 0.181. One can also note the similarity between the α value of this model and that of the two-parameter model with a free α and β. This suggests that the model is robust in the sense that the actual polar and non-polar free energy contributions are more or less invariant, as long as deviations from linear response are taken into account in a proper way. The FEP-derived model could be considered preferable to the two-parameter model since it contains only one free parameter, viz. α. The result of adding a constant γ to the new model was also investigated. Remarkably, the optimal value for such a γ was found to be -0.02 kcal/mol, i.e. virtually zero. [Pg.180]
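The excerpt appears to describe a linear-interaction-energy-style binding free energy model with scaling parameters α and β and an optional constant γ. As a purely illustrative sketch (synthetic data, an assumed functional form ΔG ≈ α⟨ΔV_vdW⟩ + β⟨ΔV_el⟩, and β held fixed, none of which are taken from the cited work), the following shows how a single free α and the resulting r.m.s. deviation might be obtained by least squares:

```python
import numpy as np

# Hypothetical example: fitting a single scaling parameter alpha in a
# LIE-style binding free energy model of the assumed form
#   dG_calc = alpha * dV_vdw + beta_fixed * dV_el
# against "experimental" binding free energies, then reporting the r.m.s. deviation.

rng = np.random.default_rng(0)
n = 20
dV_vdw = rng.uniform(-30, -5, n)     # hypothetical averaged van der Waals energy differences (kcal/mol)
dV_el = rng.uniform(-15, -1, n)      # hypothetical averaged electrostatic energy differences (kcal/mol)
beta_fixed = 0.5                     # beta held fixed in this sketch
dG_exp = 0.18 * dV_vdw + beta_fixed * dV_el + rng.normal(0, 0.8, n)  # synthetic "experimental" data

# Least-squares estimate of alpha with beta fixed:
# minimize sum_i (dG_exp_i - alpha*dV_vdw_i - beta_fixed*dV_el_i)^2
residual_target = dG_exp - beta_fixed * dV_el
alpha = np.dot(dV_vdw, residual_target) / np.dot(dV_vdw, dV_vdw)

dG_calc = alpha * dV_vdw + beta_fixed * dV_el
rmsd = np.sqrt(np.mean((dG_calc - dG_exp) ** 2))
print(f"optimal alpha = {alpha:.3f}, r.m.s. deviation = {rmsd:.2f} kcal/mol")
```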

The proper dose of ketoprofen for an optimized zero-order model to obtain the desired drug-level pattern, i.e. to remain in the therapeutic range for 12 h (twice-a-day formulation), was estimated from the drug's pharmacokinetic parameters [6] by conventional equations [3] on the basis of a one-compartment open model and was found to be 110 mg. [Pg.73]
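As a rough illustration of such "conventional equations", a zero-order (constant-rate) input into a one-compartment open model gives a steady-state concentration Css = k0/CL, so the dose released over a dosing interval τ is approximately Css·CL·τ. The sketch below uses placeholder numbers, not the ketoprofen parameters of the cited study:

```python
# Hypothetical sketch of the conventional dose estimate for a zero-order
# (constant-rate) delivery system based on a one-compartment open model.
# The target concentration and clearance below are placeholders,
# NOT the ketoprofen parameters used in the cited study.

target_css = 2.0      # desired steady-state concentration (mg/L), hypothetical
clearance = 5.0       # total body clearance (L/h), hypothetical
tau = 12.0            # dosing interval for a twice-a-day formulation (h)

k0 = target_css * clearance        # required zero-order input rate (mg/h)
maintenance_dose = k0 * tau        # drug released over the 12 h interval (mg)

print(f"zero-order release rate: {k0:.1f} mg/h")
print(f"dose per 12 h unit:      {maintenance_dose:.0f} mg")
```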

The optimization can be carried out by several methods of linear and nonlinear regression. The mathematical method must be chosen with criteria suited to the calculation of the applied objective function. The most widely applied methods of nonlinear regression fall into two categories: those that use partial derivatives of the objective function with respect to the model parameters and those that do not. The most widely employed non-derivative methods are zero order, such as direct search and the Simplex method (Himmelblau, 1972). The most widely used derivative methods are first order, such as indirect search, the Gauss-Seidel or Newton methods, the gradient method, and the Marquardt method. [Pg.212]
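As an illustration of the two categories, the sketch below fits the same hypothetical exponential model once with a zero-order (derivative-free) simplex method and once with a first-order (derivative-based) Marquardt method, here via SciPy; the model and data are invented for the example:

```python
import numpy as np
from scipy.optimize import minimize, least_squares

# Hypothetical sketch: fitting a simple first-order kinetic model
# y = A * exp(-k * t) to synthetic data, once with a derivative-free
# simplex method and once with the derivative-based Marquardt method.

t = np.linspace(0, 10, 25)
rng = np.random.default_rng(1)
y_obs = 2.0 * np.exp(-0.4 * t) + rng.normal(0, 0.05, t.size)

def residuals(p):
    A, k = p
    return y_obs - A * np.exp(-k * t)

def sse(p):
    r = residuals(p)
    return np.dot(r, r)

p0 = [1.0, 1.0]

# Zero-order (non-derivative) method: Nelder-Mead simplex on the objective function.
fit_simplex = minimize(sse, p0, method="Nelder-Mead")

# First-order (derivative) method: Levenberg-Marquardt on the residual vector.
fit_lm = least_squares(residuals, p0, method="lm")

print("simplex estimate:  ", fit_simplex.x)
print("Marquardt estimate:", fit_lm.x)
```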

Maximum likelihood (ML) estimation can be performed if the statistics of the measurement noise εj are known. The estimate is the value of the parameters for which the observed vector, yj, is most probable. If we assume the probability density function (pdf) of εj to be normal, with zero mean and uniform variance, ML estimation reduces to ordinary least squares estimation. An estimate, θ̂j, of the true jth individual parameters φj can be obtained by optimizing some objective function, O(θj). Such an error model is a natural choice if each measurement is assumed to be equally precise for all values of yj, which is usually the case in concentration-effect modeling. Considering the multiplicative log-normal error model, the observed concentration y is given by ... [Pg.2948]
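The excerpt is truncated before its equation, so the following sketch simply assumes the commonly used multiplicative log-normal form y = f(t, φ)·exp(ε) with ε ~ N(0, σ²); under that assumption, maximum likelihood estimation of φ amounts to ordinary least squares on ln y. The model, data, and parameter values are illustrative only:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical sketch: under a multiplicative log-normal error model,
# y = f(t, phi) * exp(eps) with eps ~ N(0, sigma^2), maximum likelihood
# estimation of phi reduces to ordinary least squares on the
# log-transformed observations.

def model(t, phi):
    dose, V, ke = 100.0, phi[0], phi[1]     # one-compartment bolus, hypothetical
    return (dose / V) * np.exp(-ke * t)

t = np.array([0.5, 1, 2, 4, 6, 8, 12.0])
rng = np.random.default_rng(2)
phi_true = np.array([20.0, 0.25])           # volume (L), elimination rate (1/h), hypothetical
y_obs = model(t, phi_true) * np.exp(rng.normal(0, 0.1, t.size))

# OLS on the log scale == ML for the multiplicative log-normal error model.
def log_residuals(phi):
    return np.log(y_obs) - np.log(model(t, phi))

fit = least_squares(log_residuals, x0=[10.0, 0.1])
print("estimated phi:", fit.x)
```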

Intermediate Least Squares regression (ILS) is an extension of the Partial Least Squares (PLS) algorithm in which the optimal variable-subset model is calculated as an intermediate between PLS and stepwise regression, controlled by two parameters whose values are estimated by cross-validation [Frank, 1987]. The first parameter is the optimal number of latent variables and the second is the number of elements of the weight vector w set to zero. This last parameter (ALIM) controls the number of selected variables by acting on the weight vector of each mth latent variable as follows ... [Pg.472]
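The formula itself is truncated in the excerpt. As one plausible reading of the ALIM truncation step (not necessarily the exact algorithm of Frank, 1987), the smallest-magnitude elements of each latent variable's weight vector are set to zero, which removes the corresponding variables from that component:

```python
import numpy as np

# Minimal sketch of the weight-truncation idea described in the excerpt
# (an interpretation, not the exact ILS algorithm of Frank, 1987):
# in each latent variable, the ALIM smallest-magnitude elements of the
# PLS weight vector w are set to zero, so the corresponding variables
# drop out of that component.

def truncate_weights(w, alim):
    """Zero the `alim` smallest-|w| entries and renormalize to unit length."""
    w = np.asarray(w, dtype=float).copy()
    drop = np.argsort(np.abs(w))[:alim]   # indices of the smallest weights
    w[drop] = 0.0
    norm = np.linalg.norm(w)
    return w / norm if norm > 0 else w

w = np.array([0.60, -0.05, 0.30, 0.02, -0.45, 0.10])
print(truncate_weights(w, alim=3))   # three variables removed from this component
```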

The particular iterative technique chosen by Car and Parrinello to solve the electronic structure problem in concert with nuclear motion was simulated annealing [11]. Specifically, the variational parameters of the electronic wave function, in addition to the nuclear positions, were treated as dynamical variables in a molecular dynamics simulation. When the electronic parameters are kept near absolute zero in temperature, they describe the Born-Oppenheimer electronic wave function. One advantage of the Car-Parrinello procedure is rather subtle: taking the parameters as dynamical variables leads to robust prediction of their values at a new time step from previous values, and to cancellation of errors in the nuclear forces. Another advantage is that the procedure, as is generally true of simulated annealing techniques, is equally suited to linear and non-linear optimization. If desired, both linear coefficients of basis functions and non-linear functional parameters can be optimized, and arbitrary electronic models employed, so long as derivatives with respect to the electronic wave function parameters can be calculated. [Pg.418]
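A highly simplified toy sketch of this idea (a made-up two-variable energy function, not an electronic-structure calculation) treats an "electronic" parameter c as a dynamical variable with a small fictitious mass propagated alongside a "nuclear" coordinate R; mild friction keeps c cold so that it stays close to its "Born-Oppenheimer" value:

```python
import numpy as np

# Toy sketch (not an electronic-structure code) of the Car-Parrinello idea:
# an "electronic" parameter c is propagated with a small fictitious mass
# alongside a "nuclear" coordinate R, instead of being re-optimized at every
# step. The energy E(c, R) is purely illustrative; its minimum over c at
# fixed R plays the role of the Born-Oppenheimer value.

def energy_and_forces(c, R):
    bo = np.sin(R)                       # "Born-Oppenheimer" value of c in this toy model
    E = 50.0 * (c - bo) ** 2 + 0.5 * R ** 2
    dE_dc = 100.0 * (c - bo)
    dE_dR = -100.0 * (c - bo) * np.cos(R) + R
    return E, -dE_dc, -dE_dR             # energy, force on c, force on R

mu, M = 0.01, 1.0                        # small fictitious "electronic" mass, nuclear mass
dt, gamma = 0.002, 0.5                   # time step; friction keeps c near zero temperature
c, R = 0.0, 1.0
vc, vR = 0.0, 0.0

for _ in range(20000):
    E, Fc, FR = energy_and_forces(c, R)
    vc = (1 - gamma * dt) * vc + dt * Fc / mu   # damped dynamics for the electronic parameter
    vR = vR + dt * FR / M                       # ordinary dynamics for the nuclear coordinate
    c += dt * vc
    R += dt * vR

print(f"R = {R:.3f}, c = {c:.3f}, Born-Oppenheimer value sin(R) = {np.sin(R):.3f}")
```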

