Big Chemical Encyclopedia


Least squares algorithm

One widely used algorithm for performing a PCA is the NIPALS (Nonlinear Iterative Partial Least Squares) algorithm, which is described in Ref. [5]... [Pg.448]

The field points must then be fitted to predict the activity. There are generally far more field points than known compound activities to be fitted. The least-squares algorithms used in QSAR studies do not function for such an underdetermined system. A partial least squares (PLS) algorithm is used for this type of fitting. This method starts with matrices of field data and activity data. These matrices are then used to derive two new matrices containing a description of the system and the residual noise in the data. Earlier studies used a similar technique, called principal component analysis (PCA). PLS is generally considered to be superior. [Pg.248]
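As a concrete illustration of fitting such an underdetermined system, a minimal NIPALS-style PLS1 routine can be sketched as below. This is generic code, not the text's own; the "field point" and "activity" matrices are invented, low-rank synthetic data.

```python
import numpy as np

def pls1(X, y, n_components):
    """Minimal NIPALS-style PLS1: extracts latent variables one at a time,
    which works even when there are far more variables than samples."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    X = X - x_mean
    y = y - y_mean
    W, P, q = [], [], []
    for _ in range(n_components):
        w = X.T @ y
        w /= np.linalg.norm(w)          # weight vector
        t = X @ w                       # scores
        tt = t @ t
        p = X.T @ t / tt                # X loadings
        qk = y @ t / tt                 # y loading
        X = X - np.outer(t, p)          # deflate X
        y = y - qk * t                  # deflate y
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    # regression vector expressed in the original (centred) variable space
    b = W @ np.linalg.solve(P.T @ W, q)
    return b, x_mean, y_mean

# Invented underdetermined example: 10 "compounds", 50 "field points"
rng = np.random.default_rng(0)
scores = rng.normal(size=(10, 3))
X = scores @ rng.normal(size=(3, 50))        # rank-3 field-point matrix
y = scores @ np.array([1.0, -2.0, 0.5])      # activities driven by the same factors
b, xm, ym = pls1(X.copy(), y.copy(), 3)
y_fit = (X - xm) @ b + ym
```

With as many latent variables as the true rank, the training activities are reproduced; in practice the number of components is chosen by cross-validation.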

PLS (partial least-squares): algorithm used for 3D QSAR calculations
PM3 (parameterization method three): a semiempirical method
PMF (potential of mean force): a solvation method for molecular dynamics calculations... [Pg.367]

Ogren, P. J.; Norton, J. R. Applying a Simple Linear Least-Squares Algorithm to Data with Uncertainties in Both Variables, J. Chem. Educ. 1992, 69, A130-A131. [Pg.134]

A useful method of weighting is through the use of an iterative reweighted least squares algorithm. The first step in this process is to fit the data to an unweighted model. Table 11.7 shows a set of responses to a range of concentrations of an agonist in a functional assay. The data is fit to a three-parameter model of the form... [Pg.237]
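The reweighting loop can be sketched as follows. Since the book's three-parameter dose-response model and its Equation 11.25 are not reproduced here, this sketch applies the same iterative reweighting idea to a simple straight line, with an assumed Cauchy-type weight w = 1/(1 + r²) standing in for the book's weighting function.

```python
import numpy as np

def irls_line(x, y, n_iter=10):
    """Iteratively reweighted least squares for y = a*x + b.
    Weights w_i = 1/(1 + r_i^2) down-weight points with large residuals
    (an assumed weight function, not the book's Equation 11.25)."""
    w = np.ones_like(y)
    A = np.column_stack([x, np.ones_like(x)])
    for _ in range(n_iter):
        sw = np.sqrt(w)
        coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
        r = y - A @ coef                # residuals of the current fit
        w = 1.0 / (1.0 + r ** 2)        # recompute weights from residuals
    return coef, w

x = np.arange(10, dtype=float)
y = 2.0 * x + 1.0
y[7] += 20.0                            # inject one outlier
coef, w = irls_line(x, y)
```

After a few iterations the outlier's weight collapses toward zero and the fitted slope and intercept return to the values supported by the remaining points.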

FIGURE 11.9 Outliers. (a) Dose-response curve fit to all of the data points. The potential outlier value raises the fitted maximal asymptote. (b) Iterative least-squares weighting of the data points (Equation 11.25) rejects the outlier, and a refit without this point shows a lower maximal asymptote. [Pg.238]

L.E. Wangen and B.R. Kowalski, A multiblock partial least squares algorithm for investigating complex chemical systems, J. Chemom., 3 (1988) 3-10. [Pg.419]

Before we introduce the Kalman filter, we reformulate the least-squares algorithm discussed in Chapter 8 in a recursive way. By way of illustration, we consider a simple straight line model which is estimated by recursive regression. Firstly, the measurement model has to be specified, which describes the relationship between the independent variable x, e.g., the concentrations of a series of standard solutions, and the dependent variable, y, the measured response. If we assume a straight line model, any response is described by ... [Pg.577]
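A minimal sketch of this recursive reformulation, using the standard recursive least-squares gain and covariance updates; the calibration data below are invented.

```python
import numpy as np

# Recursive least-squares estimate of a straight line y = b0 + b1*x,
# updating the coefficients one calibration standard at a time.
b = np.zeros(2)                      # current estimate [intercept, slope]
P = np.eye(2) * 1e6                  # large initial covariance: "no prior knowledge"
standards = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.0)]  # (conc, response)
for x, y in standards:
    f = np.array([1.0, x])           # design vector for this observation
    k = P @ f / (1.0 + f @ P @ f)    # gain vector
    b = b + k * (y - f @ b)          # correct the estimate with the innovation
    P = P - np.outer(k, f) @ P       # shrink the covariance
```

After processing all standards, b agrees (to within the tiny regularisation implied by the finite initial covariance) with the batch least-squares solution.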

Manne R (1987) Analysis of two partial-least-squares algorithms for multivariate calibration. Chemom Intell Lab Syst 2 187... [Pg.200]

The figure shows a polynomial fit to a dataset calculated according to a standard least squares algorithm (solid line) this is compared with a series of attempts to find a fit to the same data using a genetic algorithm. [Pg.3]

This circuit is usually referred to as the Randles circuit, and its analysis has been a major feature of AC impedance studies over the last fifty years. In principle, we can measure the impedance of our cell as a function of frequency and then obtain the best values of the parameters Rct, σ, Cdl and Rsol by a least-squares algorithm. The advent of fast microcomputers makes this the normal method nowadays, but it is often extremely helpful to represent the AC data graphically, since the suitability of a simple model, such as the Randles model, can usually be immediately assessed. The most common graphical representation is the impedance plot, in which the real part of the measured impedance (i.e. that in phase with the impressed cell voltage) is plotted against the 90° out-of-phase (quadrature, or imaginary) part of the impedance. [Pg.165]
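A hedged sketch of such a fit, assuming the usual Randles expression Z = Rsol + 1/(jωCdl + 1/(Rct + σ(1−j)/√ω)) for the cell impedance; the parameter values and frequency range are invented, and a general-purpose least-squares solver stands in for whatever routine a given instrument uses.

```python
import numpy as np
from scipy.optimize import least_squares

def randles_z(w, Rs, Rct, sigma, Cdl):
    """Impedance of the Randles circuit: solution resistance Rs in series
    with the double-layer capacitance Cdl in parallel with the faradaic
    branch (Rct plus a Warburg element sigma*(1-j)/sqrt(w))."""
    Zf = Rct + sigma * (1.0 - 1.0j) / np.sqrt(w)
    return Rs + 1.0 / (1j * w * Cdl + 1.0 / Zf)

w = np.logspace(-1, 4, 60)                  # angular frequency / rad s^-1
true = np.array([10.0, 50.0, 5.0, 1e-4])    # Rs, Rct, sigma, Cdl (invented)
z_obs = randles_z(w, *true)

def resid(p):
    dz = randles_z(w, *p) - z_obs
    return np.concatenate([dz.real, dz.imag])   # fit both parts together

fit = least_squares(resid, x0=[8.0, 40.0, 3.0, 2e-4],
                    x_scale=[10.0, 50.0, 5.0, 1e-4])
```

Supplying x_scale matters here because Cdl is orders of magnitude smaller than the resistances; without it the solver's steps are badly scaled.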

An important group of methods relies on the inherent order of the data, typically time in kinetics or chromatography. These methods are often based on Evolving Factor Analysis and its derivatives. Another well known family of model-free methods is based on the Alternating Least-Squares algorithm that solely relies on restrictions such as positive spectra and concentrations. [Pg.5]
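The idea behind the Alternating Least-Squares approach can be sketched on simulated bilinear data. Nonnegativity is imposed here by simply clipping negative entries, which conveys the alternation but is cruder than the properly constrained least-squares steps used in production MCR-ALS codes; all data below are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
C_true = rng.random((30, 2))             # nonnegative concentration profiles
S_true = rng.random((25, 2))             # nonnegative spectra
D = C_true @ S_true.T                    # noise-free bilinear data matrix

# Alternate: solve for S given C, then for C given S, each time by ordinary
# least squares followed by clipping negatives to zero.
C = rng.random((30, 2))                  # random initial guess for C
for _ in range(300):
    S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0].T, 0.0, None)
    C = np.clip(np.linalg.lstsq(S, D.T, rcond=None)[0].T, 0.0, None)
rel_resid = np.linalg.norm(D - C @ S.T) / np.linalg.norm(D)
```

Because the constraints alone do not remove the rotational ambiguity of a bilinear model, the recovered profiles need not match the true ones factor-for-factor, but the nonnegative reconstruction of D should be essentially exact here.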

The least-squares algorithm used is basically the linearized non-linear algorithm which is in common use in crystallographic structure... [Pg.361]
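A generic sketch of such a linearized (Gauss-Newton) nonlinear least-squares loop: the model is linearized about the current parameters and a linear least-squares problem is solved for the shift, repeatedly. The exponential-decay model below is an invented stand-in, not a crystallographic structure-factor model.

```python
import numpy as np

def gauss_newton(f, jac, p0, y, n_iter=30):
    """Repeatedly linearize the nonlinear model about the current estimate
    and solve a linear least-squares problem for the parameter shift."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = y - f(p)                              # residuals at current estimate
        dp, *_ = np.linalg.lstsq(jac(p), r, rcond=None)
        p = p + dp                                # apply the shift
    return p

# Invented example: fit y = a*exp(-b*x) to noise-free data.
x = np.linspace(0.0, 2.0, 20)
y = 3.0 * np.exp(-1.2 * x)
f = lambda p: p[0] * np.exp(-p[1] * x)
jac = lambda p: np.column_stack([np.exp(-p[1] * x),
                                 -p[0] * x * np.exp(-p[1] * x)])
p = gauss_newton(f, jac, [1.0, 1.0], y)
```

Real refinement codes add damping or shift limits for robustness; this undamped loop is only the core idea.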

But where did the calibration plane come from? By letting the computer use the same statistical tool that it used before: the least-squares best fit. Only this time it operates with one more dimension. Initially (in order to calibrate the system) one obtains a so-called "learning set" of standards (samples which have been analyzed by some acceptable reference method). Each sample is then measured at each wavelength, and the absorbances are plotted in three-dimensional space rather than in the plane of a sheet of graph paper. The residuals (distances from point to plane) are minimized by the least-squares algorithm, and the plane which best fits these points is, by definition, the calibration plane (see Figure 10). [Pg.99]
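A minimal numerical sketch of fitting such a calibration plane, with two wavelengths; the learning-set absorbances and reference concentrations below are invented numbers.

```python
import numpy as np

# "Learning set": reference-method concentrations plus absorbances at two
# wavelengths for each standard (invented values).
A = np.array([[0.10, 0.20],
              [0.30, 0.25],
              [0.50, 0.40],
              [0.70, 0.55],
              [0.90, 0.65]])               # absorbances at lambda1, lambda2
c = np.array([1.0, 2.0, 3.2, 4.1, 5.0])    # reference concentrations

# Fit the calibration plane c = b0 + b1*A1 + b2*A2 by least squares:
X = np.column_stack([np.ones(len(c)), A])
b, *_ = np.linalg.lstsq(X, c, rcond=None)
pred = X @ b                               # concentrations predicted by the plane
```

The coefficients b define the plane; predicting an unknown is then just evaluating the plane at its two measured absorbances.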

A Graphical Approach to the Basic Partial Least-squares Algorithm... [Pg.181]

A more complex method is described by WOLD [1978], who used cross-validation to estimate the number of factors in FA and PCA. WOLD applied the NIPALS (nonlinear iterative partial least squares) algorithm and also mentioned its usefulness in cases of incomplete data. [Pg.173]
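A sketch of the NIPALS algorithm for PCA: each component is found by alternating least-squares updates of the scores and loadings, and the data matrix is deflated before extracting the next component. This simple version assumes complete data; the missing-data capability mentioned above requires skipping the missing entries in the regression steps.

```python
import numpy as np

def nipals_pca(X, n_components, n_iter=500, tol=1e-12):
    """NIPALS PCA: extract components one at a time by alternating
    least-squares updates of scores t and loadings p, then deflating X."""
    X = X - X.mean(axis=0)                           # column-centre the data
    T, P = [], []
    for _ in range(n_components):
        t = X[:, [int(np.argmax(X.var(axis=0)))]]    # start: most variable column
        for _ in range(n_iter):
            p = X.T @ t / (t.T @ t)                  # loadings given scores
            p /= np.linalg.norm(p)
            t_new = X @ p                            # scores given loadings
            if np.linalg.norm(t_new - t) < tol:
                t = t_new
                break
            t = t_new
        X = X - t @ p.T                              # deflate before next component
        T.append(t); P.append(p)
    return np.hstack(T), np.hstack(P)
```

On a noise-free rank-2 matrix, two NIPALS components reproduce the centred data exactly and the loadings agree (up to sign) with the singular vectors from an SVD.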

These exact data were then fit using a weighted least-squares algorithm. The first moments of the distribution, the cumulants (8), were obtained as a function of BLINE, where BLINE is the baseline, which was stepped through in 0.1% increments. The results are shown in Table III. [Pg.60]

Multiple Pass Analysis. Pike and coworkers (13) have provided a method to increase the resolution of the ordinary least squares algorithm somewhat. It was noted that any reasonable set of assumed particle sizes constitutes a basis set for the inversion (within experimental error). Thus, the data can be analyzed a number of times with a different basis set each time, and the results combined. A statistically more probable solution results from an average of the several equally likely solutions. Although this "multiple pass analysis" helps locate the peaks of the distribution with better resolution and provides a smoother presentation of the result, it can still only provide limited resolution without the use of a non-negatively constrained least squares technique. We have shown, however, that the combination of both the non-negatively constrained calculation and the multiple pass analysis gives the advantages of both. [Pg.92]
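The non-negatively constrained step can be sketched with a standard NNLS solver. The "basis set" matrix below is random rather than a real light-scattering kernel, and the sparse nonnegative distribution is invented; the point is only that the constrained solver recovers it where an unconstrained fit could return negative amounts.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical inversion: each column of A is the response expected from one
# assumed particle size (the "basis set"); b is the measured signal.
rng = np.random.default_rng(0)
A = rng.random((50, 8))
x_true = np.array([0.0, 2.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0])  # sparse distribution
b = A @ x_true

x_hat, rnorm = nnls(A, b)      # non-negatively constrained least squares
```

A multiple-pass analysis would repeat this with several shifted basis sets and average the resulting distributions.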

Curvilinear regression should not be confused with the nonlinear regression methods used to estimate model parameters expressed in a nonlinear form. For example, the model parameters a and b in y = ax^b cannot be estimated by a linear least-squares algorithm. Information in Chapter 7 describes nonlinear approaches to use in this case. Alternatively, a transformation to a linear model can sometimes be used. Applying a logarithmic transformation to y = ax^b produces the model log y = log a + b log x, which can now be utilized with a linear least-squares algorithm. The literature [4, 5] should be consulted for additional information on linear transformations. [Pg.113]
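A short numerical check of the transformation, on invented noise-free data: fitting log y against log x by linear least squares returns the intercept log a and slope b directly.

```python
import numpy as np

# Fit y = a * x**b by log-transforming to log y = log a + b log x,
# which an ordinary linear least-squares routine can handle.
x = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
y = 3.0 * x ** 1.5                       # invented data with a = 3, b = 1.5
X = np.column_stack([np.ones_like(x), np.log(x)])
coef, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
a, b = np.exp(coef[0]), coef[1]
# On this noise-free example, a ≈ 3.0 and b ≈ 1.5 exactly
```

Note that with noisy data the log transformation reweights the errors, so the transformed fit is not identical to a direct nonlinear least-squares fit of y = ax^b.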

Bro, R. and Sidiropoulos, N.D., Least squares algorithms under unimodality and nonnegativity constraints, J. Chemom., 12, 223-247, 1998. [Pg.470]

PARAFAC refers both to the parallel factorization of the data set R by Equation 12.1a and Equation 12.1b and to an alternating least-squares algorithm for determining X, Y, and Z in the two equations. The ALS algorithm is known as PARAFAC, emanating from the work by Kroonenberg [31], and as CANDECOMP, for canonical decomposition, based on the work of Harshman [32]. In either case, the two basic algorithms are practically identical. [Pg.491]
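A compact sketch of the alternating least-squares loop for the trilinear model R[i,j,k] = Σf X[i,f]·Y[j,f]·Z[k,f]. This is generic illustrative code, not the cited PARAFAC or CANDECOMP implementations; the khatri_rao helper and the test tensor are ours.

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker product of A (I x F) and B (J x F) -> (I*J x F)."""
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

def parafac_als(R, rank, n_iter=300, seed=0):
    """ALS for the trilinear model: each factor matrix X, Y, Z is updated in
    turn by ordinary least squares against the matching unfolding of R."""
    I, J, K = R.shape
    rng = np.random.default_rng(seed)
    X, Y, Z = (rng.random((n, rank)) for n in (I, J, K))
    for _ in range(n_iter):
        X = np.linalg.lstsq(khatri_rao(Y, Z),
                            R.reshape(I, -1).T, rcond=None)[0].T
        Y = np.linalg.lstsq(khatri_rao(X, Z),
                            R.transpose(1, 0, 2).reshape(J, -1).T, rcond=None)[0].T
        Z = np.linalg.lstsq(khatri_rao(X, Y),
                            R.transpose(2, 0, 1).reshape(K, -1).T, rcond=None)[0].T
    return X, Y, Z
```

For a noise-free low-rank tensor this loop typically converges to an essentially exact reconstruction, although ALS offers no general guarantee against slow "swamp" behaviour.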









Generalized least squares algorithm

Genetic algorithm/Partial least squares


Least squares regression basic algorithm

Least-Mean-Square-Based Algorithms

Least-squares-based search algorithm

Local least squares algorithm

Nonlinear iterative least squares algorithm (NIPALS)

Nonlinear iterative partial least squares (NIPALS) algorithm

Partial least squares algorithm

Partial least squares nonlinear iterative algorithm

© 2024 chempedia.info