
Nonlinear simplex method

First, and most general, is the case of an objective function that may or may not be smooth and may or may not allow for the computation of a gradient at every point. The nonlinear Simplex method [77] (not to be confused with the Simplex algorithm for linear programming) performs a pattern search on the basis of only function values, not derivatives. Because it makes little use of the objective function characteristics, it typically requires a great many iterations to find a solution that is even close to an optimum. [Pg.70]
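
A minimal sketch of this behavior, using SciPy's Nelder-Mead implementation rather than the code behind reference [77]: the search uses nothing but function values, so it copes with the non-differentiable objective below, at the cost of many evaluations.

```python
import numpy as np
from scipy.optimize import minimize

def objective(x):
    # |x0 - 1| + |x1 + 2| is continuous but not differentiable at its minimum,
    # so gradient-based methods can struggle; a pattern search does not care.
    return abs(x[0] - 1.0) + abs(x[1] + 2.0)

result = minimize(objective, x0=np.array([5.0, 5.0]), method="Nelder-Mead",
                  options={"xatol": 1e-8, "fatol": 1e-8, "maxiter": 2000})
print(result.x, result.nfev)   # converges near (1, -2), but note the many function evaluations
```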

Because the gradient vector of this cost function is difficult to determine, we use the nonlinear simplex method that only requires us to compute the cost function value for each trial θ. This technique returns only a local minimum; therefore, we augment it by simulated annealing to search for a global minimum, as is done by the routine sim.annealJ /IR.m ... [Pg.417]
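
A hedged sketch of the same strategy using SciPy rather than the routine named above: dual_annealing handles the global simulated-annealing phase, and a Nelder-Mead polish refines the best point found. The cost surface below is a hypothetical stand-in for the simulation-based cost of each trial parameter vector.

```python
import numpy as np
from scipy.optimize import dual_annealing, minimize

def cost(theta):
    # hypothetical multimodal cost surface standing in for the real, simulation-based cost
    return np.sin(3.0 * theta[0]) ** 2 + (theta[0] - 0.8) ** 2 + (theta[1] - 0.3) ** 2

bounds = [(-2.0, 2.0), (-2.0, 2.0)]
annealed = dual_annealing(cost, bounds, seed=0)    # global phase: escapes local minima
polished = minimize(cost, annealed.x,              # local phase: derivative-free refinement
                    method="Nelder-Mead")
print(polished.x, polished.fun)
```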

Lieb, S. G. Simplex Method of Nonlinear Least-Squares—A Logical Complementary Method to Linear Least-Squares Analysis of Data, J. Chem. Educ. 1997, 74, 1008-1011. [Pg.134]

Phillips, G. R., and Eyring, E. M., Error Estimation Using the Sequential Simplex Method in Nonlinear Least Squares Data Analysis, Anal. Chem. 60, 1988, 738-741. [Pg.411]

Chapter 1 presents some examples of the constraints that occur in optimization problems. Constraints are classified as being inequality constraints or equality constraints, and as linear or nonlinear. Chapter 7 described the simplex method for solving problems with linear objective functions subject to linear constraints. This chapter treats more difficult problems involving minimization (or maximization) of a nonlinear objective function subject to linear or nonlinear constraints ... [Pg.265]

In the book Vapor-Liquid Equilibrium Data Collection by Gmehling and colleagues (1981), nonlinear regression has been applied to develop several different vapor-liquid equilibria relations suitable for correlating numerous data systems. As an example, p versus x1 data for the system water (1) and 1,4-dioxane (2) at 20.00°C are listed in Table E12.3. The Antoine equation coefficients for each component are also shown in Table E12.3. A12 and A21 were calculated by Gmehling and colleagues using the Nelder-Mead simplex method (see Section 6.1.4) to be 2.0656 and 1.6993, respectively. The vapor phase mole fractions, total pressure, and the deviation between predicted and experimental values of the total p... [Pg.453]
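
A hedged sketch of this kind of regression, assuming a two-parameter van Laar activity-coefficient model and synthetic pressure data generated from A12 = 2.0 and A21 = 1.7 (the actual Table E12.3 data and Antoine constants are not reproduced here): the Nelder-Mead simplex minimizes the sum of squared deviations between predicted and measured total pressure.

```python
import numpy as np
from scipy.optimize import minimize

# hypothetical pure-component vapor pressures, in consistent (arbitrary) pressure units
P1_sat, P2_sat = 17.5, 28.0
x1 = np.array([0.1, 0.3, 0.5, 0.7, 0.9])            # liquid mole fraction of component 1
P_exp = np.array([34.1, 36.4, 36.3, 35.0, 27.6])    # synthetic total pressures

def total_pressure(params, x1):
    A12, A21 = params
    x2 = 1.0 - x1
    # van Laar activity coefficients
    ln_g1 = A12 * (A21 * x2 / (A12 * x1 + A21 * x2)) ** 2
    ln_g2 = A21 * (A12 * x1 / (A12 * x1 + A21 * x2)) ** 2
    # modified Raoult's law for the total pressure
    return x1 * np.exp(ln_g1) * P1_sat + x2 * np.exp(ln_g2) * P2_sat

def objective(params):
    return np.sum((total_pressure(params, x1) - P_exp) ** 2)

fit = minimize(objective, x0=[1.0, 1.0], method="Nelder-Mead")
print(fit.x)   # should recover roughly A12 = 2.0, A21 = 1.7 for these synthetic data
```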

The separation of synthetic red pigments has been optimized for HPTLC separation. The structures of the pigments are listed in Table 3.1. Separations were carried out on silica HPTLC plates in presaturated chambers. Three initial mobile-phase systems were applied for the optimization: A = n-butanol-formic acid (100+1), B = ethyl acetate, and C = THF-water (9+1). The optimal ratios of mobile phases were 5.0 A, 5.0 B and 9.0 C for the prisma model and 5.0 A, 7.2 B and 10.3 C for the simplex model. The parameters of equations describing the linear and nonlinear dependence of the retention on the composition of the mobile phase are compiled in Table 3.2. It was concluded from the results that both the prisma model and the simplex method are suitable for the optimization of the separation of these red pigments. Multivariate regression analysis indicated that the components of the mobile phase interact with each other [79]. ... [Pg.374]

The computer program for solving the nonlinear programming problem (9) was developed using Matlab, applying the simplex method to search for the optimal solution. [Pg.276]

The indirect method can be employed by extrapolating the rheologic models or the shear stress-shear rate data to zero shear rate. The computer software Table Curve 1.12 was used to fit the shear stress-shear rate data to the different rheologic models. This software uses the Simplex method for a nonlinear regression curve fit. [Pg.353]
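
A hedged sketch of the same kind of fit without the Table Curve software, assuming a power-law (Ostwald-de Waele) model and illustrative shear data: the Nelder-Mead simplex minimizes the sum of squared residuals directly.

```python
import numpy as np
from scipy.optimize import minimize

shear_rate = np.array([1.0, 5.0, 10.0, 50.0, 100.0])    # 1/s, hypothetical
shear_stress = np.array([2.1, 6.3, 9.8, 28.5, 44.0])    # Pa, hypothetical

def sse(params):
    # power-law model: tau = K * gamma_dot**n
    K, n = params
    predicted = K * shear_rate ** n
    return np.sum((predicted - shear_stress) ** 2)

fit = minimize(sse, x0=[1.0, 0.5], method="Nelder-Mead")
K_fit, n_fit = fit.x
print(K_fit, n_fit)   # consistency index and flow-behavior index
```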

With all the necessary ingredients in place, the task is now to derive a reliable force field. In an automated refinement, the first step is to define in machine-readable form what constitutes a good force field. Following that, the parameters are varied, randomly or systematically (15,42). For each new parameter set, the entire data set is recalculated, to yield the quality of the new force field. The best force field so far is retained and used as the basis for new trial parameter sets. The task is a standard one in nonlinear numerical optimization; many efficient procedures exist for selection of the optimum search direction (43). Only one recipe will be covered here, a combination of Newton-Raphson and Simplex methods that has been successfully employed in several recent parameterization efforts (11,19,20,28,44). [Pg.19]
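
A minimal sketch of the simplex half of such a refinement, with hypothetical observables and a placeholder recalculate() standing in for re-running the force-field calculations (the recipe cited above also alternates with Newton-Raphson steps, which are omitted here):

```python
import numpy as np
from scipy.optimize import minimize

reference_data = np.array([1.10, 0.95, 2.30])   # hypothetical target observables
weights = np.array([1.0, 1.0, 0.5])             # hypothetical weights per observable

def recalculate(params):
    # placeholder for recomputing the full data set with a trial parameter set
    k_bond, r0, k_angle = params
    return np.array([k_bond * r0, r0, 2.0 * k_angle])

def merit(params):
    # weighted sum of squared deviations defines "what constitutes a good force field"
    deviations = recalculate(params) - reference_data
    return np.sum(weights * deviations ** 2)

best = minimize(merit, x0=[1.0, 1.0, 1.0], method="Nelder-Mead")
print(best.x, best.fun)   # the best parameter set found and its merit value
```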

Parkinson, J. M., and Hutchison, D. (1972). An investigation into the efficiency of variants of the simplex method. In Numerical Methods for Nonlinear Optimization (F. A. Lootsma, ed.). [Pg.76]

In a nonlinear problem the optimal solution can occur at an interior point of the feasible region, on the boundary of the feasible region at a point which is not an extreme point, or at an extreme point of the feasible region. As a consequence, procedures that search only the extreme points, such as the simplex method, cannot be used (Bradley et al. 1977). [Pg.931]

For fitting the binary interaction parameters nonlinear regression methods are applied, which allow adjusting the parameters in such a way that a minimum deviation of an arbitrarily chosen objective function F is obtained. For this job, for example, the Simplex-Nelder-Mead method [21] can be applied successfully. The Simplex-Nelder-Mead method, in contrast to many other methods [22], is a simple search routine, which does not need the first and the second derivative of the objective function with respect to the different variables. This has the great advantage that computational problems, such as underflow or overflow with the arbitrarily chosen initial parameters, can be avoided. [Pg.218]
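
A short sketch of this practical point, assuming SciPy's Nelder-Mead as the search routine: because the simplex needs only function values, the objective can be wrapped so that numerical failures such as overflow or a logarithm of a non-positive number return a large penalty instead of aborting the regression.

```python
import numpy as np
from scipy.optimize import minimize

def raw_objective(params):
    # hypothetical objective: exp() can overflow for large a, log() fails for b <= 0
    a, b = params
    return float((np.exp(a) - 5.0) ** 2 + (np.log(b) - 1.0) ** 2)

def safe_objective(params):
    try:
        with np.errstate(over="raise", divide="raise", invalid="raise"):
            value = raw_objective(params)
        return value if np.isfinite(value) else 1e12
    except FloatingPointError:
        return 1e12   # a large penalty steers the simplex back toward a sane region

fit = minimize(safe_objective, x0=[0.0, 0.5], method="Nelder-Mead")
print(fit.x)   # approaches (ln 5, e) for this toy objective
```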

Mathematical solution. In this aspect, we will not go into detail but just mention that the most common procedure for solving LP problems is the simplex method. In more advanced courses students will have the opportunity to learn this method and others that are used to solve integer programming (IP) problems, nonlinear problems, and others. [Pg.289]

The Assume Linear Model check box determines whether the simplex method or the GRG2 nonlinear programming algorithm will be used to solve the problem. The Use Automatic Scaling check box causes the model to be rescaled internally before the solution. The Assume Non-Negative check box places lower bounds of zero on any decision variables that do not have explicit bounds in the Constraints list box. [Pg.28]

Phillips GR, Eyring EM (1988) Error estimation using the sequential Simplex method in nonlinear least squares data analysis. Anal Chem 60 738-741... [Pg.438]

Simplex methods ([72, 71, 73]) move from boundary to boundary within the feasible region. The simplex method requires the initial basic solution to be feasible. There are various variants of the simplex method, such as the dual simplex method, the Big M method, and the two-phase simplex method. Interior point methods, on the other hand, visit points within the interior of the feasible region, more in line with nonlinear programming methods. In general, good interior point methods perform as well as or better than simplex codes on larger problems when no prior information about the solution is available. When such warm-start information is available, simplex methods are able to make much better use of it than interior point methods. [Pg.71]
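
A small illustration of the two solver families on the same LP, using SciPy's HiGHS backends ('highs-ds' is a dual simplex code, 'highs-ipm' an interior point code); the problem data below are arbitrary.

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([-3.0, -5.0])              # maximize 3x + 5y, written as minimize -3x - 5y
A_ub = np.array([[1.0, 0.0],
                 [0.0, 2.0],
                 [3.0, 2.0]])
b_ub = np.array([4.0, 12.0, 18.0])
bounds = [(0, None), (0, None)]         # x, y >= 0

simplex_res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs-ds")
interior_res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs-ipm")
print(simplex_res.x, interior_res.x)    # both should report the optimum x = (2, 6)
```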

Nonlinear programming (NLP), as the name implies, is similar to LP, but the objective function or constraints can be nonlinear functions. There are no algorithms (like the simplex method) that guarantee a solution for NLP problems. Many methods have been developed, and Solver has one of these built in (called Generalized Reduced Gradient). The subject of NLP is quite complex and far beyond what can be covered here. NLP is introduced by way of a simple example. Even the simplest of chemical and biomolecular engineering NLP problems can be too complex to warrant coverage here. [Pg.184]
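
A hedged NLP sketch in SciPy rather than Solver: the SLSQP routine used here is not the Generalized Reduced Gradient code, but the problem it solves (a nonlinear objective with a nonlinear inequality constraint and non-negative variables) is of the same kind.

```python
import numpy as np
from scipy.optimize import minimize

def objective(x):
    # nonlinear objective: squared distance from the point (3, 2)
    return (x[0] - 3.0) ** 2 + (x[1] - 2.0) ** 2

constraints = [
    {"type": "ineq", "fun": lambda x: 4.0 - x[0] ** 2 - x[1] ** 2},  # x0^2 + x1^2 <= 4
]

result = minimize(objective, x0=[0.0, 0.0], method="SLSQP",
                  constraints=constraints, bounds=[(0, None), (0, None)])
print(result.x, result.fun)   # optimum lies on the circular constraint boundary
```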

The simplex method, implemented as fminsearch in the optional MATLAB optimization tool kit, requires only a routine that returns F(x). While simplex methods are used commonly for linear programming problems with linear cost functions and constraints (Nocedal & Wright, 1999), for unconstrained optimization with nonlinear cost functions, the gradient and Newton methods discussed below are preferred. Thus, we provide here only a cursory description, and refer the interested reader to the supplemental material in the accompanying website for further details. [Pg.213]
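
A quick comparison on a smooth nonlinear cost function (the Rosenbrock function, with its analytic gradient supplied to BFGS) illustrates the point: the simplex search needs only F(x), but the gradient method gets by with far fewer function evaluations.

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

x0 = np.array([-1.2, 1.0])
nm = minimize(rosen, x0, method="Nelder-Mead")             # derivative-free simplex search
bfgs = minimize(rosen, x0, method="BFGS", jac=rosen_der)   # gradient-based quasi-Newton method
print("Nelder-Mead function evaluations:", nm.nfev)
print("BFGS function evaluations:       ", bfgs.nfev)
```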

Supercomputers, such as the CRAY X-MP, CRAY Y-MP, and CRAY-2, are partially available and used for flow-sheet and optimization studies (7—10). Optimization modules using linear and nonlinear programming (LINPRO and UNLP1, based on a revised simplex, and Davidon-Fletcher-Powell and Broyden methods, respectively) are available in MicroMENTOR (11). [Pg.62]

LP software includes two related but fundamentally different kinds of programs. The first is solver software, which takes data specifying an LP or MILP as input, solves it, and returns the results. Solver software may contain one or more algorithms (simplex and interior point LP solvers and branch-and-bound methods for MILPs, which call an LP solver many times). Some LP solvers also include facilities for solving some types of nonlinear problems, usually quadratic programming problems (quadratic objective function, linear constraints see Section 8.3), or separable nonlinear problems, in which the objective or some constraint functions are a sum of nonlinear functions, each of a single variable, such as... [Pg.243]

In general, linear functions and correspondingly linear optimization methods can be distinguished from nonlinear optimization problems. The former, being in itself the wide field of linear programming with the predominant Simplex algorithm for routine solution [75] shall be excluded here. [Pg.69]

The optimization can be carried out by several methods of linear and nonlinear regression. The mathematical methods must be chosen with criteria to fit the calculation of the applied objective functions. The most widely applied methods of nonlinear regression can be separated into two categories: methods with or without using partial derivatives of the objective function with respect to the model parameters. The most widely employed nonderivative methods are zero order, such as the methods of direct search and the Simplex (Himmelblau, 1972). The most widely used derivative methods are first order, such as the method of indirect search, the Gauss-Seidel or Newton method, the gradient method, and the Marquardt method. [Pg.212]
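
A sketch contrasting the two categories on the same exponential-decay fit, using SciPy: a zero-order Nelder-Mead search on the scalar sum of squared errors versus the first-order Levenberg-Marquardt method applied to the residual vector; the data are synthetic.

```python
import numpy as np
from scipy.optimize import minimize, least_squares

t = np.linspace(0.0, 5.0, 20)
y = 2.5 * np.exp(-1.3 * t) + 0.05 * np.cos(7.0 * t)   # synthetic data with a small perturbation

def model(params, t):
    a, k = params
    return a * np.exp(-k * t)

def residuals(params):
    return model(params, t) - y

# zero order: the simplex search needs only the scalar objective value
nm_fit = minimize(lambda p: np.sum(residuals(p) ** 2), x0=[1.0, 1.0],
                  method="Nelder-Mead")

# first order: Levenberg-Marquardt works with the residual vector and its Jacobian
lm_fit = least_squares(residuals, x0=[1.0, 1.0], method="lm")

print(nm_fit.x, lm_fit.x)   # both should recover approximately a = 2.5, k = 1.3
```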

At each step of an iterative method for nonlinear optimization, the subsequent coordinate step Δq must be estimated. The vector Q, interpolated in a selected m-simplex of prior coordinate vectors, must be combined with an iterative estimate of the component of Δq orthogonal to the hyperplane of the simplex. [Pg.27]

