Big Chemical Encyclopedia


Constrained methods and constraints

If at the outset the data are very noisy and if the noise predominates in the Fourier frequency range needed to effect a restoration, constraints provide the only hope for improvement. The reason is that many of the noise values in the data would restore to physically unrealizable values by linear deconvolution. The constrained methods are inherently more robust because they must find a solution that is consistent with both data and physical reality. [Pg.90]

Perhaps the benefits of physical-realizability constraints, particularly ordinate bounds such as positivity, have not been sufficiently recognized. Surely everyone agrees in principle that such constraints are desirable. Even the early literature on this subject frequently mentions their potential advantages. For one reason or another, however, the earliest nonlinear constrained methods did not fully reveal the inherent power of constraints. [Pg.96]

For the present work, we chose the constrained method described by Jansson (1968) and Jansson et al. (1968, 1970). See also Section V.A of Chapter 4 and supporting material in Chapter 3. This method has also been applied to ESCA spectra by McLachlan et al. (1974). In our adaptation (Jansson and Davies, 1974) the procedure was identical to that used in the original application to infrared spectra, except that the data were presmoothed three times instead of once, and the variable relaxation factor was modified to accommodate the lack of an upper bound. Referring to Eqs. (15) and (16) of Section V.A.2 of Chapter 4, we set κ = 2ô(k)κ₀ for ô(k) < ½ and κ = κ₀ exp[3 − 6ô(k)] for ô(k) > ½. This function applies the positivity constraint in a manner similar to that previously employed but eliminates the upper bound in favor of an exponential falloff. We also experimented with κ = κ₀ for ô(k) > ½ and found it to be equally effective. As in the infrared application, only 10 iterations were needed. [Pg.144]
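As a concrete illustration, a constrained iteration of this kind can be sketched as follows. This is not Jansson's published code: the function name, the one-dimensional convolution model, and the exponent 3 − 6ô (chosen only so that the relaxation factor is continuous at ô = ½) are assumptions of the sketch.

```python
import numpy as np

def jansson_deconvolve(data, psf, kappa0=1.0, n_iter=10):
    """Jansson-style constrained iterative deconvolution (sketch).

    The pointwise relaxation factor kappa(o) enforces positivity: it
    shrinks to zero as the estimate approaches zero from above, and
    falls off exponentially for large values (no hard upper bound).
    The exponent 3 - 6*o is an illustrative choice, continuous at 1/2.
    """
    o = np.asarray(data, dtype=float).copy()   # initial estimate: the data
    psf = np.asarray(psf, dtype=float)
    for _ in range(n_iter):
        resid = data - np.convolve(o, psf, mode="same")
        kappa = np.where(o < 0.5,
                         2.0 * o * kappa0,
                         kappa0 * np.exp(3.0 - 6.0 * o))
        o = o + kappa * resid                  # relaxed correction step
    return o
```

Run on a blurred spike, the iteration sharpens the peak while the relaxation factor suppresses updates near zero, discouraging negative excursions.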

As the Excel Solver is only for single-objective optimization, use the ε-constraint method and the Excel Solver.xls on the CD to optimize the 4-plant IE for the two objectives as in Cases A and B. For the Solver to work reliably, the number of decision variables should be limited. Thus, it is recommended to set Z21 = Z32 = 0 and Z22 = 1 for Z = a, b, c and d. This leaves the capacities of the 4 plants (Xj) as the decision variables. Treat IEvP as the constraint and vary it in the range 1.213-1.419 for Case A and 1.220-1.321 for Case B, and observe the trends of the decision variables and the objective. Do they follow trends similar to those of the IE for 6 plants ... [Pg.337]
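The ε-constraint sweep described above can be sketched in a few lines, with SciPy standing in for the Excel Solver. The function name and the toy objectives in the test are hypothetical, not the 4-plant IE model from the text.

```python
import numpy as np
from scipy.optimize import minimize

def eps_constraint_front(f1, f2, x0, eps_values, bounds=None):
    """Trace an approximate Pareto front for (min f1, min f2) by the
    epsilon-constraint method: minimize f1 subject to f2(x) <= eps for
    a sweep of eps values (as IEvP is swept in the exercise above)."""
    front = []
    for eps in eps_values:
        # inequality constraint: eps - f2(x) >= 0
        cons = [{"type": "ineq", "fun": lambda x, e=eps: e - f2(x)}]
        res = minimize(f1, x0, bounds=bounds, constraints=cons)
        if res.success:
            front.append((f1(res.x), f2(res.x)))
    return front
```

Relaxing the bound on the constrained objective lets the primary objective improve, which is exactly the trade-off the sweep is meant to expose.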

The projected symplectic constrained method (4.20)-(4.24) is only first-order accurate. We forgo a detailed proof of this fact, but note that it could be demonstrated using standard methods [164]. Note that (4.20)-(4.24) reduces to the symplectic Euler method in the absence of constraints, and the projection of the momenta would not alter this fact. There are several constraint-preserving, second-order alternatives which generalize the Störmer-Verlet scheme. One of these is the SHAKE method [322]. The original derivation of the SHAKE method began from the position-only, two-step form of the Störmer rule for q̈ = F(q)... [Pg.161]
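For reference, the unconstrained symplectic Euler method that (4.20)-(4.24) reduces to can be written in a few lines; the function name and the harmonic-oscillator test are illustrative.

```python
def symplectic_euler(q0, p0, force, h, n_steps, mass=1.0):
    """Symplectic Euler for q' = p/m, p' = F(q): update the momentum
    first, then the position using the *updated* momentum.  This is
    the scheme the constrained method in the text reduces to when no
    constraints are present."""
    q, p = float(q0), float(p0)
    traj = [(q, p)]
    for _ in range(n_steps):
        p = p + h * force(q)       # momentum half of the update
        q = q + h * p / mass       # position update with new momentum
        traj.append((q, p))
    return traj
```

For a harmonic oscillator the energy does not drift; it merely oscillates with amplitude of order h, the hallmark of a first-order symplectic method.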

Several useful methods have been proposed to overcome the variational collapse problem, and a number of different schemes have been proposed for obtaining SCF wave functions for excited states [10, 16-26]. In recent years, there has been renewed interest in the orthogonality-constrained methods [14, 27] as well as in the SCF theory for excited states [28-32]. It is clear that the experience accumulated in HF excited-state calculations can be useful for developing similar methods within density functional theory [33-36]. Some of these approaches [10, 18, 19, 23, 24, 26, 30-35] explicitly introduce orthogonality constraints to lower states. Other methods [21, 22, 25] either use this restriction implicitly or locate excited states as higher solutions of nonlinear SCF equations [29]. In the latter type of scheme, the excited-state SCF wave functions of interest are not necessarily orthogonal to the best SCF functions for a lower state or states of the same symmetry. [Pg.187]

A particularly useful approach, which provides unique information about the importance of different contributions to chemical bonding, is the constrained space orbital variation (CSOV) method. In this method, constraints are placed on the orbitals that are optimized in the variational procedure and on the space in which the restricted optimization is performed. The restrictions that are applied are based on chemical principles which allow certain types of interactions to occur and prevent others from occurring. The general... [Pg.2876]

The philosophy in the pinch design method was to start the design where it was most constrained. If the design is pinched, the problem is most constrained at the pinch. If there is no pinch, where is the design most constrained? Figure 16.9a shows a threshold problem that requires no hot utility, just cold utility. The most constrained part of this problem is the no-utility end. This is where temperature differences are smallest, and there may be constraints, as shown in Fig. 16.9b, where the target temperatures on some of the cold... [Pg.371]

The general constrained optimization problem can be considered as minimizing a function of n variables F(x), subject to a series of m constraints of the form Cᵢ(x) = 0. In the penalty function method, additional terms of the form αᵢCᵢ²(x), αᵢ > 0, are formally added to the original function, thus... [Pg.2347]
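A minimal sketch of the penalty-function idea, assuming the common quadratic penalty and SciPy's unconstrained minimizer; the function name and the increasing penalty schedule are choices made for the illustration, not prescribed by the text.

```python
import numpy as np
from scipy.optimize import minimize

def penalty_minimize(F, constraints, x0, alphas=(1.0, 10.0, 100.0, 1000.0)):
    """Quadratic penalty method: replace the constrained problem
    min F(x) s.t. C_i(x) = 0 by a sequence of unconstrained problems
    min F(x) + alpha * sum_i C_i(x)^2 with increasing alpha, warm-
    starting each solve from the previous solution."""
    x = np.asarray(x0, dtype=float)
    for alpha in alphas:
        def P(x, a=alpha):
            return F(x) + a * sum(c(x) ** 2 for c in constraints)
        x = minimize(P, x).x       # unconstrained solve at this alpha
    return x
```

Each finite alpha gives a slightly infeasible minimizer; increasing alpha drives the iterates toward the constrained optimum.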

By combining the Lagrange multiplier method with the highly efficient delocalized internal coordinates, a very powerful algorithm for constrained optimization has been developed [ ]. Given that delocalized internal coordinates are potentially linear combinations of all possible primitive stretches, bends and torsions in the system (cf. Z-matrix coordinates, which are individual primitives), it would seem very difficult to impose any constraints at all; however, as... [Pg.2348]

The form of the Hamiltonian impedes efficient symplectic discretization. While symplectic discretization of the general constrained Hamiltonian system is possible using, e.g., the methods of Jay [19], these methods require the solution of a nontrivial nonlinear system of equations at each step, which can be quite costly. An alternative approach, described in [10] ("impetus-striction"), essentially converts the Lagrange multiplier for the constraint to a differential equation before solving the entire system with the implicit midpoint rule; this method also appears to be quite costly on a per-step basis. [Pg.355]

This type of constrained minimisation problem can be tackled using the method of Lagrange multipliers. In this approach (see Section 1.10.5 for a brief introduction to Lagrange multipliers) the derivative of the function to be minimised is added to the derivatives of the constraint(s) multiplied by a constant called a Lagrange multiplier. The sum is then set equal to zero. If the Lagrange multiplier for each of the orthonormality conditions is... [Pg.72]
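The recipe above (derivative of the function, plus a multiplier times the derivative of the constraint, set equal to zero) can be sketched numerically; the function name and the use of SciPy's fsolve are assumptions of the sketch.

```python
import numpy as np
from scipy.optimize import fsolve

def lagrange_stationary(grad_f, grad_g, g, guess):
    """Solve the Lagrange conditions  grad f(x) + lambda * grad g(x) = 0,
    g(x) = 0  as one nonlinear system in the unknowns (x, lambda)."""
    n = len(guess) - 1                     # last entry of guess is lambda
    def system(z):
        x, lam = z[:n], z[n]
        return np.append(grad_f(x) + lam * grad_g(x), g(x))
    z = fsolve(system, guess)
    return z[:n], z[n]
```

For example, minimising x² + y² subject to x + y = 1 gives the stationary point (½, ½) with multiplier −1.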

The most commonly used method for applying constraints, particularly in molecular dynamics, is the SHAKE procedure of Ryckaert, Ciccotti and Berendsen [Ryckaert et al. 1977]. In constraint dynamics the equations of motion are solved while simultaneously satisfying the imposed constraints. Constrained systems have been much studied in classical mechanics; we shall illustrate the general principles using a simple system comprising a box sliding down a frictionless slope in two dimensions (Figure 7.8). The box is constrained to remain on the slope and so the box's x and y coordinates must always satisfy the equation of the slope (which we shall write as y = ax + c). If the slope were not present then the box... [Pg.385]

Constrained Optimization When constraints exist and cannot be eliminated in an optimization problem, more general methods must be employed than those described above, since the unconstrained optimum may correspond to unrealistic values of the operating variables. The general form of a nonlinear programming problem allows for a nonlinear objective function and nonlinear constraints, or... [Pg.744]

One important class of nonlinear programming techniques is called quadratic programming (QP), where the objective function is quadratic and the constraints are linear. While the solution is iterative, it can be obtained quickly, as in linear programming. This is the basis for the newest type of constrained multivariable control algorithms, called model predictive control. The dominant method used in the refining industry utilizes the solution of a QP and is called dynamic matrix control... [Pg.745]
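A small QP of the kind referred to above can be solved with general-purpose tools. This sketch uses SciPy's SLSQP solver rather than the specialized QP codes used in industrial model predictive control, and the function name is hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def solve_qp(H, c, A, b, x0=None):
    """Solve the quadratic program  min 0.5 x'Hx + c'x  s.t.  Ax <= b:
    quadratic objective, linear constraints -- the structure noted in
    the text as the basis of model predictive control."""
    H, c, A, b = map(np.asarray, (H, c, A, b))
    x0 = np.zeros(c.size) if x0 is None else np.asarray(x0, float)
    obj = lambda x: 0.5 * x @ H @ x + c @ x
    jac = lambda x: H @ x + c                  # exact gradient of the QP
    cons = [{"type": "ineq", "fun": lambda x: b - A @ x,
             "jac": lambda x: -A}]
    res = minimize(obj, x0, jac=jac, constraints=cons, method="SLSQP")
    return res.x
```

Because both the gradient and the constraint Jacobian are exact and cheap, such problems converge in a handful of iterations, which is what makes QP attractive for on-line control.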

There are various ways to obtain the solutions to this problem. The most straightforward method is to solve the full problem by first computing the Lagrange multipliers from the time-differentiated constraint equations and then using the values obtained to solve the equations of motion [7,8,37]. This method, however, is not computationally cheap because it requires a matrix inversion at every iteration. In practice, therefore, the problem is solved by a simple iterative scheme to satisfy the constraints. This scheme is called SHAKE [6,14] (see Section V.B). Note that the computational advantage has to be balanced against the additional work required to solve the constraint equations. This approach allows a modest increase in speed by a factor of 2 or 3 if all bonds are constrained. [Pg.63]
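For a single bond constraint, the iterative SHAKE correction referred to above takes a particularly simple form; in the sketch below the function name and convergence settings are assumptions, and only one constraint is handled (real SHAKE cycles over all constraints).

```python
import numpy as np

def shake_bond(r1, r2, r1_old, r2_old, d, m1=1.0, m2=1.0,
               tol=1e-10, max_iter=100):
    """One SHAKE correction: adjust the unconstrained positions r1, r2
    so that |r1 - r2| = d, displacing along the *old* bond vector with
    mass-weighted shifts, iterating until the constraint is met."""
    r1, r2 = np.array(r1, float), np.array(r2, float)
    r_old = np.asarray(r1_old, float) - np.asarray(r2_old, float)
    for _ in range(max_iter):
        r = r1 - r2
        diff = r @ r - d * d               # constraint violation
        if abs(diff) < tol:
            break
        # linearized Lagrange-multiplier update for this constraint
        g = diff / (2.0 * (1.0 / m1 + 1.0 / m2) * (r @ r_old))
        r1 -= g * r_old / m1
        r2 += g * r_old / m2
    return r1, r2
```

Each pass removes most of the violation, so a few iterations per constraint usually suffice; this avoids the matrix inversion of the direct Lagrange-multiplier solution.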

Although constrained dynamics is usually discussed in the context of the geometrically constrained system described above, the same techniques can have many other applications. For instance, constant-pressure and constant-temperature dynamics can be imposed by using constraint methods [33,34]. Car and Parrinello [35] describe the use of the extended Lagrangian to maintain constraints in the context of their ab initio MD method. (For more details on the Car-Parrinello method, refer to the excellent review by Galli and Pasquarello [36].)... [Pg.63]

Further Comments on General Programming.—This section will utilize ideas developed in linear programming. The use of Lagrange multipliers provides one method for solving constrained optimization problems in which the constraints are given as equalities. [Pg.302]

Owing to the constraints, no direct solution exists and we must use iterative methods to obtain the solution. It is possible to use bound-constrained versions of optimization algorithms such as conjugate gradients or limited-memory variable metric methods (Schwartz and Polak, 1997; Thiébaut, 2002), but multiplicative methods have also been derived to enforce non-negativity and deserve particular mention because they are widely used: RLA (Richardson, 1972; Lucy, 1974) for Poissonian noise and ISRA (Daube-Witherspoon and Muehllehner, 1986) for Gaussian noise. [Pg.405]
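The multiplicative RLA update, which maintains non-negativity without any explicit constraint-handling step, can be sketched as follows for a one-dimensional signal; the function name and the flat starting estimate are choices of the sketch.

```python
import numpy as np

def richardson_lucy(data, psf, n_iter=50):
    """Richardson-Lucy algorithm (RLA): multiplicative updates
    o <- o * [psf^T applied to (data / (psf applied to o))].
    The estimate stays non-negative as long as the starting estimate
    is, with no explicit projection or clipping."""
    psf = np.asarray(psf, float)
    psf_mirror = psf[::-1]                      # adjoint of the blur
    o = np.full_like(np.asarray(data, float), np.mean(data))
    for _ in range(n_iter):
        blurred = np.convolve(o, psf, mode="same")
        ratio = data / np.maximum(blurred, 1e-12)   # guard divide-by-zero
        o = o * np.convolve(ratio, psf_mirror, mode="same")
    return o
```

Since every factor in the update is non-negative, the constraint is enforced automatically; this is the structural advantage of multiplicative methods over projected gradient steps.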

The random search technique can be applied to constrained or unconstrained optimization problems involving any number of parameters. The solution starts with an initial set of parameters that satisfies the constraints. A small random change is made in each parameter to create a new set of parameters, and the objective function is calculated. If the new set satisfies all the constraints and gives a better value for the objective function, it is accepted and becomes the starting point for another set of random changes. Otherwise, the old parameter set is retained as the starting point for the next attempt. The key to the method is the step that sets the new, trial values for the parameters ... [Pg.206]
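A minimal sketch of the constrained random search just described; the function name and the uniform step distribution are assumptions of the sketch.

```python
import numpy as np

def random_search(objective, feasible, x0, step=0.1, n_trials=2000,
                  rng=None):
    """Constrained random search: perturb every parameter by a small
    random amount, keep the trial point only if it satisfies all the
    constraints *and* improves the objective; otherwise retain the
    old point as the starting point for the next attempt."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, float)
    assert feasible(x), "starting point must satisfy the constraints"
    best = objective(x)
    for _ in range(n_trials):
        trial = x + rng.uniform(-step, step, size=x.size)
        if feasible(trial):
            val = objective(trial)
            if val < best:
                x, best = trial, val
    return x, best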

It is also worth noting that the stochastic optimization methods described previously are readily adapted to the inclusion of constraints. For example, in simulated annealing, if a move suggested at random takes the solution outside of the feasible region, then the algorithm can be constrained to prevent this by simply setting the probability of that move to 0. [Pg.43]
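The reject-infeasible-moves rule for simulated annealing can be sketched as follows; the function name, cooling schedule, and neighbor interface are assumptions of the sketch.

```python
import math
import random

def annealing_constrained(objective, feasible, neighbor, x0,
                          t0=1.0, cooling=0.995, n_steps=5000, seed=0):
    """Simulated annealing in which any move leaving the feasible
    region is simply rejected -- i.e. its acceptance probability is
    set to zero, as suggested in the text."""
    rng = random.Random(seed)
    x, fx = x0, objective(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(n_steps):
        trial = neighbor(x, rng)
        if feasible(trial):                    # infeasible -> prob. 0
            ft = objective(trial)
            if ft < fx or rng.random() < math.exp(-(ft - fx) / t):
                x, fx = trial, ft
                if fx < fbest:
                    best, fbest = x, fx
        t *= cooling                           # geometric cooling
    return best, fbest
```

Rejection keeps every visited state feasible, at the cost of wasted proposals near the boundary; an alternative is to penalize violations in the objective instead.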

