
Parameter space optimization

A more balanced description requires MCSCF-based methods, where the orbitals are optimized for each particular state, or optimized for a suitable average of the desired states (state-averaged MCSCF). It should be noted that such excited-state MCSCF solutions correspond to saddle points in the parameter space for the wave function, and second-order optimization techniques are therefore almost mandatory. In order to obtain accurate excitation energies it is normally necessary to also include dynamical correlation, for example by using the CASPT2 method. [Pg.147]

Search for the overall optimum within the available parameter space: factorial, simplex, regression, and brute-force techniques. The classical, the brute-force, and the factorial methods are applicable to the optimization of the experiment. The simplex and various regression methods can be used both to optimize the experiment and to fit models to data. [Pg.150]
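The brute-force approach amounts to exhaustively evaluating the objective at every point of a grid laid over the parameter space. A minimal sketch, assuming a hypothetical two-parameter response surface (the function and grid below are illustrative, not from the text):

```python
import itertools

def brute_force_search(objective, grids):
    """Evaluate `objective` at every point of the Cartesian product of the
    per-parameter grids; return the best point and its value (minimization)."""
    best_point, best_value = None, float("inf")
    for point in itertools.product(*grids):
        value = objective(point)
        if value < best_value:
            best_point, best_value = point, value
    return best_point, best_value

# Hypothetical response surface with its optimum at (2.0, -1.0).
response = lambda p: (p[0] - 2.0) ** 2 + (p[1] + 1.0) ** 2
grid = [i * 0.5 for i in range(-8, 9)]   # -4.0 ... 4.0 in steps of 0.5
point, value = brute_force_search(response, [grid, grid])
```

The cost grows as the product of the grid sizes, which is why brute force is reserved for small parameter spaces.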

The first step in developing an optimization strategy is to define the parameter space to be searched for an acceptable... [Pg.752]

A new idea has recently been presented that makes use of Monte Carlo simulations [60,61], By defining a range of parameter values, the parameter space can be examined in a random fashion to obtain the best model and associated parameter set to characterize the experimental data. This method avoids difficulties in achieving convergence through an optimization algorithm, which could be a formidable problem for a complex model. Each set of simulated concentration-time data can be evaluated by a goodness-of-fit criterion to determine the models that predict most accurately. [Pg.97]
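The Monte Carlo idea can be sketched as a random search: draw parameter sets within the stated ranges, simulate, and keep the best by a goodness-of-fit criterion. The one-compartment decay model and data below are illustrative assumptions, not taken from refs. [60,61]:

```python
import math
import random

def monte_carlo_fit(model, t, y_obs, bounds, n_trials=5000, seed=0):
    """Sample parameter sets at random within `bounds`; keep the set whose
    simulated curve best matches the data (sum of squared residuals)."""
    rng = random.Random(seed)
    best_params, best_ssr = None, float("inf")
    for _ in range(n_trials):
        params = [rng.uniform(lo, hi) for lo, hi in bounds]
        ssr = sum((model(params, ti) - yi) ** 2 for ti, yi in zip(t, y_obs))
        if ssr < best_ssr:
            best_params, best_ssr = params, ssr
    return best_params, best_ssr

# Hypothetical concentration-time data, roughly C(t) = 10 * exp(-0.5 t).
model = lambda p, t: p[0] * math.exp(-p[1] * t)   # p = [C0, k]
t_data = [0.0, 1.0, 2.0, 4.0]
y_data = [10.0, 6.07, 3.68, 1.35]
params, ssr = monte_carlo_fit(model, t_data, y_data,
                              bounds=[(1.0, 20.0), (0.01, 2.0)])
```

No gradients or convergence criteria are involved, which is the advantage the passage notes for complex models; the price is many model evaluations.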

As pointed out by Tjoa and Biegler, in the first case the optimization needs only to be performed in the parameter space, which is usually small. In the second case, we need to minimize the errors for all variables, and the challenge here is that the... [Pg.178]

A single experiment consists of the measurement of each of the g observed variables for a given set of state variables (dependent, independent). Now if the independent state variables are error-free (explicit models), the optimization need only be performed in the parameter space, which is usually small. [Pg.180]

We want to find controls f that optimize a score P(f) subject to a constraint E(f). A numerical local optimization can be visualized in parameter space as shown in Figure 4.10. [Pg.181]
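One way to realize such a constrained local optimization is gradient ascent on P with a quadratic penalty pulling the iterate onto the constraint surface E(f) = 0. This is only a sketch of the idea; the score, constraint, and penalty weight below are illustrative assumptions:

```python
def penalized_ascent(grad_P, grad_E, E, f0, mu=10.0, lr=0.01, steps=2000):
    """Walk uphill on P(f) while the penalty mu*E(f)^2 pulls the iterate
    toward the constraint surface E(f) = 0."""
    f = list(f0)
    for _ in range(steps):
        gP, gE, e = grad_P(f), grad_E(f), E(f)
        # ascent on P minus the gradient of the penalty, 2*mu*E(f)*grad_E(f)
        f = [fi + lr * (gpi - 2.0 * mu * e * gei)
             for fi, gpi, gei in zip(f, gP, gE)]
    return f

# Toy problem: maximize P(f) = f0 + f1 subject to E(f) = f0^2 + f1^2 - 1 = 0;
# the constrained optimum is f0 = f1 = 1/sqrt(2).
grad_P = lambda f: [1.0, 1.0]
E = lambda f: f[0] ** 2 + f[1] ** 2 - 1.0
grad_E = lambda f: [2.0 * f[0], 2.0 * f[1]]
f_opt = penalized_ascent(grad_P, grad_E, E, [0.2, 0.1])
```

With a finite penalty weight the iterate settles slightly off the constraint surface; increasing mu (or using Lagrange multipliers) tightens the constraint at the cost of stiffer dynamics.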

Normally it is neither feasible nor necessary to explore a certain parameter space to the full theoretically possible extent. Rather, certain areas of parameter space are examined. These areas can be determined by intuition, by synthetic accessibility, or by structure-activity relationships and structure-activity hypotheses. Consider, for example, structure 6 to be a lead compound which needs to be optimized with respect to an activity profile consisting of two components, i.e. potency and oral effectivity (gastro-intestinal absorption). Let us for simplicity assume that we want to confine ourselves to variations of the substituent in the 4-position (7) and that we... [Pg.12]

J. P. Stewart subsequently left Dewar's labs to work as an independent researcher. Stewart felt that the development of AM1 had been potentially non-optimal, from a statistical point of view, because (i) the optimization of parameters had been accomplished in a stepwise fashion (thereby potentially accumulating errors), (ii) the search of parameter space had been less exhaustive than might be desired (in part because of limited computational resources at the time), and (iii) human intervention based on the perceived reasonableness of parameters had occurred in many instances. Stewart had a somewhat more mathematical philosophy, and felt that a sophisticated search of parameter space using complex optimization algorithms might be more successful in producing a best possible parameter set within the Dewar-specific NDDO framework. [Pg.146]

To that end, Stewart set out to optimize simultaneously parameters for H, C, N, O, F, Al, Si, P, S, Cl, Br, and I. He adopted an NDDO functional form identical to that of AM1, except that he limited himself to two Gaussian functions per atom instead of the four in Eq. (5.16). Because his optimization algorithms permitted an efficient search of parameter space, he was able to employ a significantly larger data set in evaluating his penalty function than had been true for previous efforts. He reported his results in 1989; as he considered his parameter set to be the third of its ilk (the first two being MNDO and AM1), he named it Parameterized Model 3 (PM3; Stewart 1989). [Pg.146]

There is a possibility that the PM3 parameter set may actually be the global minimum in parameter space for the Dewar-NDDO functional form. However, it must be kept in mind that even if it is the global minimum, it is a minimum for a particular penalty function, which is itself influenced by the choice of molecules in the data set, and the human weighting of the errors in the various observables included therein (see Section 2.2.7). Thus, PM3 will not necessarily outperform MNDO or AM1 for any particular problem or set of problems, although it is likely to be optimal for systems closely resembling molecules found in the training set. As noted in the next section, some features of the PM3 parameter set can lead to very unphysical behaviors that were not assessed by the penalty function, and thus were not avoided. Nevertheless, it is a very robust NDDO model, and continues to be used at least as widely as AM1. [Pg.146]

We want values of pKw, pK1, and pK2 that minimize the sum of squares of residuals in cell B12. Select SOLVER from the TOOLS menu. In the SOLVER window, Set Target Cell B12 Equal to Min By Changing Cells B9, B10, B11. Then click Solve, and SOLVER finds the best values in cells B9, B10, and B11 to minimize the sum of squares of residuals in cell B12. Starting with 13.797, 2.35, and 9.78 in cells B9, B10, and B11 gives a sum of squares of residuals equal to 0.110 in cell B12. After SOLVER is executed, cells B9, B10, and B11 become 13.807, 2.312, and 9.625. The sum in cell B12 is reduced to 0.0048. When you use SOLVER to optimize several parameters at once, it is a good idea to try different starting values to see if the same solution is reached. Sometimes a local minimum can be reached that is not as low as might be reached elsewhere in parameter space. [Pg.265]
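The advice to restart from different values can be illustrated with any derivative-free local minimizer on a function with two minima. The crude pattern search and double-well objective below are illustrative stand-ins, not SOLVER's actual algorithm:

```python
def pattern_search(f, x0, step=0.5, shrink=0.5, tol=1e-6):
    """Crude derivative-free local minimizer: try +/- step moves on each
    coordinate, shrink the step when no move improves, stop when it is tiny."""
    x = list(x0)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                trial = list(x)
                trial[i] += delta
                if f(trial) < f(x):
                    x, improved = trial, True
        if not improved:
            step *= shrink
    return x

# Hypothetical double-well objective: a local minimum near x = +0.975 and a
# lower, global minimum near x = -1.025.
f = lambda p: (p[0] ** 2 - 1.0) ** 2 + 0.2 * p[0]
from_right = pattern_search(f, [0.5])    # converges to the shallower well
from_left = pattern_search(f, [-0.5])    # converges to the deeper well
```

The two starting guesses land in different wells, which is exactly the behavior the passage warns about: agreement between restarts is evidence (not proof) that the solution is not merely a local minimum.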

In quantum chemistry, equation systems arise primarily from optimization problems. For simplicity, let us call equations of the type f(x) = 0 conventional equation systems, while those of the type f(x) = v(x) are called quasi-secular equations. As we shall see shortly, the former arise from optimization problems with no constraints on the parameters. The quasi-secular equation is obtained when the optimization is done on a subset of the parameter space. Let us assume that some differentiable function Q(x) is to be minimized. We then define the following functions of x ... [Pg.31]
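A minimal sketch of how the two equation types can arise, assuming (as one common case, not stated in the excerpt) that the subset of parameter space is defined by a normalization constraint:

```latex
% Unconstrained: stationarity of Q gives a conventional equation system
\nabla Q(\mathbf{x}) = \mathbf{0}
\quad\Longleftrightarrow\quad
f(\mathbf{x}) = 0, \qquad f \equiv \nabla Q .

% Restricted to the subset \mathbf{x}^{T}\mathbf{x} = 1
% (Lagrange multiplier \lambda):
\nabla\!\left[\, Q(\mathbf{x}) - \lambda\,(\mathbf{x}^{T}\mathbf{x} - 1) \right] = \mathbf{0}
\quad\Longrightarrow\quad
\nabla Q(\mathbf{x}) = 2\lambda\,\mathbf{x} .
```

The constrained condition has the quasi-secular form f(x) = v(x) with v(x) = 2λx, i.e. an eigenvalue-like equation, which is where the "secular" in the name comes from.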

In the simplex method, the number of initial experiments conducted is one more than the number of parameters (temperature, gradient rate, etc.) to be simultaneously optimized. The conditions of the initial experiments constitute the vertices of a geometric figure (simplex), which will subsequently move through the parameter space in search of the optimum. Once the initial simplex is established, the vertex with the lowest value is rejected, and is replaced by a new vertex found by reflecting the simplex in the direction away from the rejected vertex. The vertices of the new simplex are then evaluated as before, and in this way the simplex proceeds toward the optimum set of conditions. [Pg.317]
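The reflection move described above can be sketched in a few lines. This version is written for minimization (the rejected vertex is the one with the worst, i.e. highest, value); the two-parameter response surface is an illustrative assumption:

```python
def simplex_step(vertices, f):
    """One move of the basic sequential simplex: drop the worst vertex and
    reflect it through the centroid of the remaining vertices."""
    ranked = sorted(vertices, key=f)              # best ... worst
    keep, worst = ranked[:-1], ranked[-1]
    n = len(keep)
    centroid = [sum(v[i] for v in keep) / n for i in range(len(worst))]
    reflected = [2.0 * c - w for c, w in zip(centroid, worst)]
    return keep + [reflected]

# Hypothetical response surface: optimum (minimum) at (3, 2).  For two
# parameters the simplex is a triangle of three experiments.
f = lambda v: (v[0] - 3.0) ** 2 + (v[1] - 2.0) ** 2
simplex = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
for _ in range(30):
    simplex = simplex_step(simplex, f)
best = min(simplex, key=f)
```

Because this fixed-size simplex cannot shrink, it eventually circles the optimum at a radius set by the simplex size; practical variants (e.g. Nelder-Mead) add expansion and contraction moves to refine the final answer.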

Shown in Figure 10 is the chromatogram acquired at the optimum predicted by CRF-4. Baseline resolution of all 8 components was achieved in about 27 minutes, except for components 2-4, which were almost baseline resolved. Additional evidence for the accuracy of the retention model (equation 9 and Table VI) employed for this window diagram optimization is evident in Table VIII, where predicted and measured retention factors differed by less than 15%. The slight positive bias observed for all solutes at the optimum conditions in Table VIII was coincidental; averaged over the entire parameter space, the bias was almost completely random. [Pg.332]
