Least-squares optimization solution

A least-squares optimization solution is then required to find the transformation T such that the first set of points, when mapped through T, matches the set of points, B, as closely as possible ... [Pg.35]
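
As a concrete illustration, the sketch below computes such a least-squares transformation for the common special case in which T is a rigid rotation plus translation and the point-to-point correspondence between the two sets (called A and B here) is known. This is the standard Kabsch/Procrustes solution via SVD, not necessarily the method used in the excerpt's source:

```python
import numpy as np

def fit_rigid_transform(A, B):
    """Least-squares R, t such that B is approximated by A @ R.T + t.
    A, B: (n, d) arrays of corresponding points (Kabsch algorithm)."""
    cA, cB = A.mean(axis=0), B.mean(axis=0)        # centroids
    H = (A - cA).T @ (B - cB)                      # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    s = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    D = np.diag([1.0] * (A.shape[1] - 1) + [s])
    R = Vt.T @ D @ U.T
    t = cB - R @ cA
    return R, t
```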

NF < 0: the problem is overdetermined. If NF < 0, fewer process variables exist in the problem than independent equations, and consequently the set of equations has, in general, no exact solution. The process model is said to be overdetermined, and least-squares optimization or some similar criterion can be used to obtain values of the unknown variables, as described in Section 2.5. [Pg.67]
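
A minimal numerical illustration of this situation, assuming NumPy: four independent equations in two unknowns (NF < 0) have no exact solution, but np.linalg.lstsq returns the least-squares compromise along with the residual that quantifies the inconsistency.

```python
import numpy as np

# Overdetermined system: 4 equations, 2 unknowns, no exact solution.
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0]])
b = np.array([2.1, 2.9, 4.2, 4.8])

# x minimizes ||A x - b||_2; the residual measures the inconsistency.
x, residual, rank, _ = np.linalg.lstsq(A, b, rcond=None)
print(x, residual)
```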

Ultraviolet/visible (UV/vis) titrations were performed with a Varian Australia Pty Cary 3E UV/vis spectrophotometer. Titrations of Zn-porphyrin 1 were carried out by adding 5 µL aliquots (0.5 equiv.) of 10⁻² M solutions of pyridine 2 or 4 to 1 mL of a 10⁻⁴ M Zn-porphyrin 1 solution, up to a maximum of 10 pyridine equivalents. Association constants were calculated by fitting the experimental titration curves to a 1:1 binding model based on Equation (9.3), on the mass balances (Equations (9.4) and (9.5)), and on the Lambert-Beer law, by applying a least-squares optimization routine. [Pg.215]
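
A sketch of such a fitting routine is given below, assuming a 1:1 host-guest equilibrium, a 1 cm path length, and that only free and bound porphyrin absorb at the monitored wavelength; the symbol names (H0, K, eps_H, eps_HG) and starting values are illustrative, not taken from the source:

```python
import numpy as np
from scipy.optimize import least_squares

H0 = 1e-4                        # total Zn-porphyrin concentration (M)

def complex_conc(G0, K):
    """[HG] for a 1:1 equilibrium, from the two mass balances (quadratic root)."""
    s = H0 + G0 + 1.0 / K
    return 0.5 * (s - np.sqrt(s * s - 4.0 * H0 * G0))

def residuals(p, G0, A_obs):
    K, eps_H, eps_HG = p         # association constant, molar absorptivities
    HG = complex_conc(G0, K)
    A_calc = eps_H * (H0 - HG) + eps_HG * HG   # Lambert-Beer law, 1 cm path
    return A_calc - A_obs

# G0: total pyridine after each aliquot; A_obs: measured absorbances.
# fit = least_squares(residuals, x0=[1e4, 5e3, 1e4], args=(G0, A_obs))
# K_assoc = fit.x[0]
```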

Without additional information, a T2 model is not identifiable, meaning that there is a whole family of possible solutions to the least-squares optimization. This nonidentifiability problem is similar to the rotation problem for bilinear models, except that for T2 models there are two rotation matrices whose elements must be established. [Pg.690]

The selection to minimize absolute error [Eq. (6)] calls for optimization algorithms different from those of the standard least-squares problem. Both problems have simple and extensively documented solutions. A slight advantage of the LP formulation is that it need not be re-solved for points whose approximation error is below the selected error threshold, whereas the least-squares problem has to be re-solved with every newly acquired piece of data. The LP problem can be solved efficiently with the dual simplex algorithm, which allows the solution to proceed recursively as the constraints corresponding to new data points are introduced gradually. [Pg.189]
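
For reference, the least-absolute-error problem can be posed as a linear program in the standard way shown below; this is a static formulation, and the recursive dual-simplex scheme described in the excerpt is not reproduced. The helper name l1_fit is illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def l1_fit(A, b):
    """Least-absolute-error fit: min sum|A x - b| posed as an LP.
    Decision variables: x (m coefficients) and u (n residual bounds)."""
    n, m = A.shape
    c = np.concatenate([np.zeros(m), np.ones(n)])   # objective: sum(u)
    # Enforce  A x - b <= u  and  -(A x - b) <= u.
    A_ub = np.block([[ A, -np.eye(n)],
                     [-A, -np.eye(n)]])
    b_ub = np.concatenate([b, -b])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * m + [(0, None)] * n)
    return res.x[:m]
```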

Firstly, it has been found that the amplitudes of the LI spectrum cannot all be estimated with a standard least-squares fitting scheme, because the problem is ill-conditioned. One solution to this problem is a numerical procedure called regularization [55]. In this method, the optimization criterion includes the misfit plus an extra term. Specifically, in our implementation, the quantity to be minimized can be expressed as follows [53] ... [Pg.347]
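
Since the specific penalty term of Ref. [53] is not shown here, the sketch below illustrates the generic zeroth-order Tikhonov form, in which the misfit ||K a - d||² is augmented by lam²||a||²; the variable names are assumptions.

```python
import numpy as np

def tikhonov_solve(K, d, lam):
    """Minimize ||K a - d||^2 + lam^2 ||a||^2 (zeroth-order Tikhonov)
    by augmenting the design matrix and solving an ordinary least-squares
    problem; the penalty stabilizes the ill-conditioned inversion."""
    n = K.shape[1]
    K_aug = np.vstack([K, lam * np.eye(n)])
    d_aug = np.concatenate([d, np.zeros(n)])
    a, *_ = np.linalg.lstsq(K_aug, d_aug, rcond=None)
    return a
```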

To compensate for the errors involved in experimental data, the number of data sets should be greater than the number of coefficients p in the model. Least squares is simply the application of optimization to obtain the best solution of the equations: the sum of the squared errors between the predicted and experimental values of the dependent variable y, taken over all data points x, is minimized. Consider a general algebraic model that is linear in the coefficients. [Pg.55]
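
A minimal example of such a model, with illustrative data: a quadratic that is linear in its three coefficients, fitted through the normal equations.

```python
import numpy as np

# Model linear in the coefficients: y = b0 + b1*x + b2*x**2.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])   # more points than coefficients
y = np.array([1.1, 2.9, 7.2, 13.1, 20.8, 31.2])

X = np.column_stack([np.ones_like(x), x, x**2])  # design matrix
# Normal equations (X^T X) b = X^T y minimize the sum of squared errors.
b = np.linalg.solve(X.T @ X, X.T @ y)
residuals = y - X @ b
print(b, (residuals**2).sum())
```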

The optimal number of components from the prediction point of view can be determined by cross-validation (10). This method compares the predictive power of several models and chooses the optimal one; in our case, the models differ in the number of components. The predictive power is calculated by a leave-one-out technique, so that each sample is predicted once by a model in whose calculation it did not participate. This technique can also be used to determine the number of underlying factors in the predictor matrix, although if the factors are highly correlated, their number will be underestimated. In contrast to the least-squares solution, PLS can estimate the regression coefficients even for underdetermined systems; in this case it introduces some bias in exchange for avoiding the (infinite) variance of the least-squares solution. [Pg.275]
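
A sketch of this component-selection procedure, assuming scikit-learn's PLSRegression and choosing the component count that minimizes the leave-one-out prediction error sum of squares (PRESS); the helper name is illustrative.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

def choose_n_components(X, y, max_comp):
    """Return the PLS component count with the lowest leave-one-out PRESS."""
    press = []
    for a in range(1, max_comp + 1):
        # Each sample is predicted by a model fitted without it.
        y_pred = cross_val_predict(PLSRegression(n_components=a), X, y,
                                   cv=LeaveOneOut())
        press.append(np.sum((y - np.ravel(y_pred)) ** 2))
    return int(np.argmin(press)) + 1
```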

Unconstrained nonlinear optimization problems arise in several science and engineering applications ranging from simultaneous solution of nonlinear equations (e.g., chemical phase equilibrium) to parameter estimation and identification problems (e.g., nonlinear least squares). [Pg.45]

The solution of this equation, which is optimal in the least-squares sense, is given by... [Pg.193]

For models in which the dependent variables are linear functions of the parameters, the solution to the above-mentioned optimization problems can be obtained in closed form when the least squares objective functions (3.22) and (3.24) are considered. However, in chemical kinetics, linear problems are encountered only in very simple cases, so that optimization techniques for nonlinear models must be considered. [Pg.48]
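
As a minimal example of the nonlinear case, the sketch below fits a first-order kinetic model c(t) = c0·exp(-k·t), which is nonlinear in k, using SciPy's iterative least_squares solver; the data and starting guesses are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(p, t, c_obs):
    c0, k = p
    return c0 * np.exp(-k * t) - c_obs   # model is nonlinear in k

# Illustrative data: concentration of a decaying species over time.
t = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
c_obs = np.array([1.00, 0.74, 0.55, 0.30, 0.09])

# No closed-form solution exists, so the parameters are refined
# iteratively from an initial guess.
fit = least_squares(residuals, x0=[1.0, 0.1], args=(t, c_obs))
print(fit.x)  # estimated [c0, k]
```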

Numerical simulations of the data were conducted with the algorithms discussed above, with the added twist of optimizing the model to fit the data collected in the laboratory by adjusting the collision efficiency and the fractal dimension (no independent estimate of fractal dimension was made). Thus, a numerical solution was produced and then compared with the experimental data via a least-squares approach. The best fit was achieved by minimizing the sum of squared differences between the model solution and the experimental data, estimating the collision efficiency and fractal dimension in the process. The best model fit achieved for the data in Fig. 10a is plotted in Fig. 10b, and that for Fig. 11a is shown in Fig. 11b. The collision efficiencies estimated were 1 × 10⁻⁴ and 2 × 10⁻⁴, and the fractal dimensions were 1.5 and 1.4, respectively. As expected, collision efficiency and fractal dimension were inversely correlated. However, the values of the estimates are, in both cases, lower than might be expected. The lower values were attributed to the following ... [Pg.537]

The Patterson or direct methods solution will give a number of electron density peaks which can be identified as atoms of certain types. This is still a very crude model of the structure, which should be optimized by least-squares (LS) refinement in the following way. Spherically symmetrical Hartree-Fock atoms are placed at the positions of the peaks, and the coordinates (Section 2.2.2) and displacement parameters (Section 2.2.3) of these atoms are altered so as to minimize the function... [Pg.1125]

In contrast to the explicit analytical solution of the least-squares fit used in linear regression, our present treatment of data analysis relies on an iterative optimization, which is a completely different approach. As a result of the operations discussed in the previous section, theoretical data are calculated, dependent on the model and the choice of parameters, and these can be compared with the experimental results. The deviation between theoretical and experimental data is usually expressed as the sum of the errors squared over all data points, alternatively called the sum of squared deviations (SSD) ... [Pg.326]
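
The elided expression is presumably the standard least-squares criterion; a form consistent with the surrounding text (notation assumed, not taken from the source) is

$$\mathrm{SSD} = \sum_{i=1}^{n} \left( y_i^{\mathrm{calc}} - y_i^{\mathrm{exp}} \right)^{2},$$

where the sum runs over all n data points and the parameters are adjusted iteratively to minimize this quantity.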

Fluorescence and affinity measurements - Peptide in 25 mM Tris, 100 mM KCl, and 1 mM CaCl2 at pH 7.5 and 30 °C was titrated with a stock solution of calmodulin in UV-transmitting plastic cuvettes, since the peptides appear to bind to glass. Fluorescence titration spectra were recorded using a SPEX FluoroMax fluorescence spectrometer with excitation at 280 nm and emission scanned from 310 to 390 nm. The fluorescence intensity at 330 nm was plotted as a function of calmodulin concentration and fitted using standard nonlinear least-squares methods (6) to obtain optimal values of the dissociation constant (Kd) and the maximum fluorescence enhancement (F/F0). The detection limit under our experimental conditions was 50 nM peptide, and all quoted Kd values are the average of at least 3 independent determinations. [Pg.403]

Since the covariance matrix was diagonal in Investigation 11, the minimization of -2 ln p(θ|Y) gave optimally weighted least-squares solutions. The same was true in Investigation 9. [Pg.165]

This method gives an optimally weighted least-squares solution. Constraints 0 < θ < 1 and σ > 0 are recommended. [Pg.171]
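
As an illustration of the preceding two excerpts: when the measurement errors are independent with known variances (a diagonal covariance matrix), the optimally weighted least-squares solution can be obtained by row scaling, as in this minimal sketch (function name and arguments are assumptions):

```python
import numpy as np

def weighted_lstsq(A, b, var):
    """Optimally weighted least squares for a diagonal covariance:
    minimize sum_i (A x - b)_i^2 / var_i by scaling each row of the
    system by the reciprocal measurement standard deviation."""
    w = 1.0 / np.sqrt(var)
    x, *_ = np.linalg.lstsq(A * w[:, None], b * w, rcond=None)
    return x
```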

The solution of the inverse problem is reduced to inversion of the linear system (10.96) with respect to m_x and then to computation of Δχ using condition (10.94). After that we find Δσ as a least-squares solution of the optimization problem (10.95). [Pg.307]

The online solution of this constrained estimation problem, known as the full information estimator because all the available measurements are considered, is formulated as an optimization problem, typically posed as a least-squares mathematical program, subject to the model constraints and to inequality constraints that represent bounds on variables or equations. [Pg.508]
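
A toy sketch of this idea follows, with the model equations handled as weighted soft constraints rather than hard equalities for brevity; the scalar system, weights, and bound are all illustrative assumptions, not the formulation used in the source.

```python
import numpy as np
from scipy.optimize import minimize

# Toy full-information estimate for a scalar system x[k+1] = a*x[k],
# with measurements y[k] = x[k] + noise and the bound x[k] >= 0.
a = 0.9
y = np.array([1.02, 0.88, 0.85, 0.70, 0.68])    # all measurements so far
N = len(y)

def cost(x):
    meas = np.sum((y - x) ** 2)                  # measurement residuals
    model = np.sum((x[1:] - a * x[:-1]) ** 2)    # model-equation residuals
    return meas + 10.0 * model                   # weighted soft constraint

res = minimize(cost, x0=y.copy(), bounds=[(0.0, None)] * N)
print(res.x)  # estimated state trajectory over the entire horizon
```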

The least-squares solution of a multiple linear regression equation can be calculated in one step, so the cost of evaluating each candidate descriptor subset is low and the number of iterations performed is not an important issue to consider when selecting descriptors. Simulated annealing, which requires many such evaluations, is therefore an appropriate choice for performing the optimization. [Pg.115]

