Big Chemical Encyclopedia


Optimization alternating least squares

The next subsection deals first with aspects common to all resolution methods. These include (1) issues related to the initial estimates, i.e., how to obtain the profiles used as the starting point in the iterative optimization, and (2) issues related to the use of the mathematical and chemical information available about the data set in the form of so-called constraints. The last part of this section describes two of the most widely used iterative methods: iterative target transformation factor analysis (ITTFA) and multivariate curve resolution-alternating least squares (MCR-ALS). [Pg.432]
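A common way to obtain the initial estimates mentioned above is to inspect the singular values of the data matrix D: the number of significant singular values suggests the number of chemical components, and the corresponding singular vectors (or purest-variable selections) serve as starting profiles. The sketch below, with entirely hypothetical simulated data, illustrates only the rank-estimation step; the cutoff of 10⁻³ relative to the largest singular value is an arbitrary choice for this example.

```python
# Hypothetical illustration: estimating the number of components from the
# singular values of a bilinear data matrix D, prior to an MCR-ALS run.
import numpy as np

rng = np.random.default_rng(0)
# Simulated two-component bilinear data D = C S^T plus a little noise.
C_true = rng.random((50, 2))          # concentration profiles (hypothetical)
S_true = rng.random((80, 2))          # pure spectra (hypothetical)
D = C_true @ S_true.T + 1e-6 * rng.standard_normal((50, 80))

singular_values = np.linalg.svd(D, compute_uv=False)
# A sharp drop after the second singular value indicates two components.
n_components = int(np.sum(singular_values > 1e-3 * singular_values[0]))
```

With real, noisy data the drop is rarely this clean, and chemical knowledge usually supplements the purely mathematical criterion.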

Multivariate curve resolution-alternating least squares (MCR-ALS) uses an alternative approach to iteratively find the matrices of concentration profiles and instrumental responses. In this method, neither the C nor the Sᵀ matrix has priority over the other, and both are optimized at each iterative cycle [7, 21, 42]. The general operating procedure of MCR-ALS includes the following steps ... [Pg.439]

The convergence criterion in the alternating least-squares optimization is based on comparing the fit obtained in two consecutive iterations. When the relative difference in fit falls below a threshold value, the optimization stops. Sometimes a maximum number of iterative cycles is used as the stop criterion instead. This method is very flexible and can be adapted to very diverse real examples, as shown in Section 11.7. [Pg.440]
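The loop structure described above can be sketched as follows. This is a minimal, unconstrained illustration (not the authors' code): the bilinear model D ≈ C Sᵀ is fit by alternating least-squares updates of C and S, stopping on the relative change in the sum of squared residuals or on a maximum number of cycles. All data and the function name are hypothetical.

```python
# Minimal sketch of an unconstrained alternating least-squares loop for
# the bilinear model D ~ C @ S.T, with the two stop criteria described
# in the text: relative change in fit, or a maximum number of cycles.
import numpy as np

def mcr_als(D, S0, tol=1e-8, max_iter=100):
    S = S0.copy()
    prev_ssd = None
    for _ in range(max_iter):
        # Least-squares update of C given S, then of S given C.
        C = np.linalg.lstsq(S, D.T, rcond=None)[0].T
        S = np.linalg.lstsq(C, D, rcond=None)[0].T
        ssd = np.sum((D - C @ S.T) ** 2)      # sum of squared residuals
        if prev_ssd is not None and abs(prev_ssd - ssd) <= tol * prev_ssd:
            break                              # relative fit change below tol
        prev_ssd = ssd
    return C, S, ssd

# Noiseless synthetic data; a perturbed S is used as the initial estimate.
rng = np.random.default_rng(1)
C_true = rng.random((40, 3))
S_true = rng.random((60, 3))
D = C_true @ S_true.T
C, S, ssd = mcr_als(D, S_true + 0.05 * rng.random((60, 3)))
```

On noiseless data the fit error drops essentially to zero; in practice constraints (non-negativity, unimodality, closure) are applied inside each update.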

Esteban, M., Ariño, C., Díaz-Cruz, J.M., Díaz-Cruz, M.S., and Tauler, R., Multivariate curve resolution with alternating least squares optimization: a soft-modeling approach to metal complexation studies by voltammetric techniques, Trends Anal. Chem., 19, 49-61, 2000. [Pg.468]

Multivariate curve resolution-alternating least squares (MCR-ALS) is an algorithm that fits the requirements for image resolution [71, 73-75]. MCR-ALS is an iterative method that performs the decomposition into the bilinear model D = CSᵀ by means of an alternating least squares optimization of the matrices C and Sᵀ according to the following steps ... [Pg.90]

Alternating Least Squares Optimization. The optimization process starts the iterative calculations from the initial estimates (spectral or electrophoretic profiles) of the species to be modeled. If spectra are used as the input, the conjugate peak-profile contributions C can be calculated as follows ... [Pg.210]
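The expression is elided in this excerpt, but in unconstrained ALS the standard update for C given the spectral matrix S is the least-squares solution C = D S (SᵀS)⁻¹, i.e., D multiplied by the pseudoinverse of Sᵀ. A minimal numpy sketch, assuming that form and using hypothetical data:

```python
# Sketch of the unconstrained least-squares update of C given spectra S,
# assuming the standard ALS form C = D S (S^T S)^{-1}.
import numpy as np

rng = np.random.default_rng(2)
S = rng.random((100, 3))              # spectral estimates (hypothetical)
C_true = rng.random((25, 3))
D = C_true @ S.T                      # noiseless bilinear data

C = D @ S @ np.linalg.inv(S.T @ S)    # equivalently D @ np.linalg.pinv(S.T)
```

For noiseless data and full-rank S this recovers the generating profiles exactly; with noise it gives the least-squares best fit.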

There are different routes to estimating the parameters of a model. Finding the parameters is an optimization problem, and in some situations a directly computable solution may exist. In other situations an iterative algorithm has to be used. The two most important tools for fitting models in multi-way analysis are alternating least squares and eigenvalue-based solutions. Other approaches also exist, but these are beyond the scope of this book. [Pg.111]

In order to fit the model using alternating least squares it is necessary to come up with an update for A given B and C, for B given A and C, and for C given A and B. Due to the symmetry of the model, an update for one mode, e.g. A, is essentially identical to an update for any of the other modes, with the roles of the different loading matrices shifted. To estimate A conditionally on B and C, formulate the optimization problem as ... [Pg.114]
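For the trilinear (PARAFAC) model this conditional update has a closed least-squares form: with X₍₁₎ the mode-1 unfolding of the array, A is the least-squares solution of X₍₁₎ ≈ A (C ⊙ B)ᵀ, where ⊙ denotes the column-wise Khatri-Rao product. The sketch below uses hypothetical data and helper names; the unfolding convention is the one implied by the khatri_rao helper itself.

```python
# Sketch of the conditional update of A given B and C in the trilinear
# model x_ijk = sum_f a_if * b_jf * c_kf, via the Khatri-Rao product.
import numpy as np

def khatri_rao(C, B):
    # Column-wise Kronecker product; result has shape (K*J, F).
    K, F = C.shape
    J = B.shape[0]
    return (C[:, None, :] * B[None, :, :]).reshape(K * J, F)

def update_A(X1, B, C):
    # X1 is the mode-1 unfolding of X, shape (I, J*K); solve X1 ~ A Z^T.
    Z = khatri_rao(C, B)
    return np.linalg.lstsq(Z, X1.T, rcond=None)[0].T

rng = np.random.default_rng(3)
I, J, K, F = 10, 8, 6, 2
A = rng.random((I, F)); B = rng.random((J, F)); C = rng.random((K, F))
X1 = A @ khatri_rao(C, B).T           # noiseless trilinear data, unfolded
A_hat = update_A(X1, B, C)
```

By the symmetry noted in the text, the B and C updates are the same computation with the unfoldings and loading matrices permuted.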

Sands, R., Young, F.W., Component models for three-way data: an alternating least squares algorithm with optimal scaling features, Psychometrika, 1980, 45, 39-67. [Pg.365]

Takane, Y., Young, F. W. and De Leeuw, J. (1977). Nonmetric individual differences multidimensional scaling: an alternating least squares method with optimal scaling features. Psychometrika, 42, 7-67. [Pg.184]

Constrained alternating least squares optimization of C and Sᵀ until convergence is achieved. [Pg.88]
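The most common constraint in such an optimization is non-negativity of the concentration profiles. One way to impose it is to replace the plain least-squares update with a non-negative least-squares solve for each row of D, as sketched below with hypothetical data (scipy.optimize.nnls is used here purely as one convenient NNLS solver, not as the method of the cited work).

```python
# Sketch of one non-negativity-constrained ALS step: each row of C is
# obtained by non-negative least squares against the current spectra S.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(4)
S = rng.random((30, 2))               # current spectral estimates (hypothetical)
C_true = rng.random((15, 2))          # non-negative by construction
D = C_true @ S.T                      # noiseless data

# Solve min ||d_i - S c_i||  subject to  c_i >= 0, for each row d_i of D.
C = np.vstack([nnls(S, d)[0] for d in D])
```

An analogous constrained solve updates Sᵀ given C; alternating the two constrained updates until the fit stabilizes is the procedure the excerpt refers to.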

Fitting model predictions to experimental observations can be performed in the Laplace, Fourier or time domains with optimal parameter choices often being made using weighted residuals techniques. James et al. [71] review and compare least squares, stochastic and hill-climbing methods for evaluating parameters and Froment and Bischoff [16] summarise some of the more common methods and warn that ordinary moments matching-techniques appear to be less reliable than alternative procedures. References 72 and 73 are studies of the errors associated with a selection of parameter extraction routines. [Pg.268]

This procedure is equivalent to the Savitzky-Golay method, algorithms for which have been included in computer software for scientific instruments such as the Fourier-transform infrared (FTIR) spectrometer. Alternatives to smoothing are weighted least-squares fitting or optimal (Wiener) filtering techniques ... [Pg.709]
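For reference, the Savitzky-Golay smoother mentioned above is readily available in scipy; the snippet below applies it to a noisy synthetic signal. The window length (11) and polynomial order (3) are arbitrary illustration choices, not values from the text.

```python
# Illustrative Savitzky-Golay smoothing of a noisy synthetic signal
# using scipy.signal.savgol_filter.
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(5)
x = np.linspace(0, 2 * np.pi, 200)
noisy = np.sin(x) + 0.05 * rng.standard_normal(200)
smooth = savgol_filter(noisy, window_length=11, polyorder=3)
```

The local polynomial fit preserves peak shapes better than a simple moving average, which is why the method is standard in instrument software.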

In contrast to the explicit analytical solution of the least-squares fit used in linear regression, our present treatment of data analysis relies on an iterative optimization, which is a completely different approach. As a result of the operations discussed in the previous section, theoretical data are calculated, depending on the model and choice of parameters, which can be compared with the experimental results. The deviation between theoretical and experimental data is usually expressed as the sum of the errors squared over all the data points, alternatively called the sum of squared deviations (SSD) ... [Pg.326]
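The SSD objective described above is a one-liner in practice; the values below are hypothetical measurements and model predictions, used only to make the computation concrete.

```python
# Sum of squared deviations (SSD) between experimental and calculated data.
import numpy as np

y_exp = np.array([1.0, 2.1, 2.9, 4.2])     # hypothetical measurements
y_calc = np.array([1.0, 2.0, 3.0, 4.0])    # hypothetical model values
ssd = np.sum((y_exp - y_calc) ** 2)        # 0.01 + 0.01 + 0.04 = 0.06
```

The iterative optimizer varies the model parameters (and hence y_calc) to drive this quantity to a minimum.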

In general, there is an art and a science to molecular mechanics parameterization. On one extreme, least-squares methods can be used to optimize the parameters to best fit the available data set, and reviews on this topic are available. Alternatively, parameters can be determined on a trial-and-error basis. The situation in either case is far from straightforward because the data usually available come from a variety of sources, are measured by different kinds of experiments in different units, and have relative importances that need subjective assessment. Therefore, straightforward applications of least-squares methods are not expected to give optimum results. [Pg.94]

An alternative approach to parametrisation, pioneered by Lifson and co-workers in the development of their consistent force fields, is to use least-squares fitting to determine the set of parameters that gives the optimal fit to the data [Lifson and Warshel 1968]. Again, the first step is to choose a set of experimental data that one wishes the force field to reproduce (or calculate using quantum mechanics, if appropriate). Warshel and Lifson used thermodynamic data, equilibrium conformations and vibrational frequencies. The error for a given set of parameters equals the sum of squares of the differences between the observed and calculated values for the set of properties. The objective is to change the force field parameters to minimise the error. This is done by assuming that the properties can be related to the force field by a Taylor series expansion ... [Pg.230]

