Big Chemical Encyclopedia


Iterative least-squares methods

D. Rodbard, D.M. Hutt. Statistical analysis of radioimmunoassays and immunoradiometric (labelled antibody) assays: a generalized weighted, iterative, least-squares method for logistic curve fitting. In: Radioimmunoassay and Related Procedures in Medicine, Vol. I. Vienna: International Atomic Energy Agency, 1974, p. 165. [Pg.302]

Each compound was described by a set of about 100 descriptors. The descriptors were generated from a fragmentation code or a coded structure (connection table). The set of descriptors is used as a pattern vector that characterizes a compound. The learning machine [261, 359] and an iterative least-squares method [260] have been used to train binary classifiers that predict mass spectral peak presence or absence at certain mass numbers. 60 classifiers for 60 mass numbers predict whether the peak is greater than 0.5 % of the total ion current. (The total ion current is the sum of all peak heights in a spectrum.) For 11 of these mass numbers... [Pg.155]
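
The iterative least-squares training referred to above is not reproduced here; as an illustrative sketch, the following implements one classical scheme of that kind, the Widrow-Hoff (LMS) delta rule, for a linear binary classifier on synthetic 100-descriptor pattern vectors. The function name, learning rate and data are placeholders, not the cited work's.

```python
import numpy as np

def train_lms(X, y, lr=0.001, epochs=200):
    """Iterative least-squares (Widrow-Hoff / LMS) training of a linear
    binary classifier.  X: (n_samples, n_features) pattern vectors;
    y: targets coded +1 (peak present) / -1 (peak absent)."""
    rng = np.random.default_rng(0)
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])   # append a bias term
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(y)):           # one pattern at a time
            err = y[i] - Xb[i] @ w                  # residual for this pattern
            w += lr * err * Xb[i]                   # least-squares correction
    return w

# Synthetic example: 100-descriptor pattern vectors, peak present/absent.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 100))
y = np.where(X[:, 0] + 0.5 * X[:, 1] > 0, 1.0, -1.0)
w = train_lms(X, y)
pred = np.sign(np.hstack([X, np.ones((200, 1))]) @ w)
print("training accuracy:", (pred == y).mean())
```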

The structures were first related by comparison of the main-chain and β-C atoms. The best fit determined by an iterative least-squares method gave an r.m.s. difference of 1.32 Å. By further comparison of the Fourier syntheses, and also of each structure with the Fourier synthesis of the other, Hol, Drenth, and their co-workers estimate that 0.5 Å differences result from errors in the electron densities, the interpretation, and the model building. Further errors of 0.8 Å are estimated to result from the determination of the co-ordinates from the model. Thus differences of less than 1.3 Å are not significant. [Pg.394]

A disadvantage of the iterative least-squares method is that it involves inverting a matrix whose size grows with the number of digitized frequencies used. In addition, the number of frequencies cannot exceed the number of calibration standards. Thus, in practical cases the number of frequencies that can be employed is limited. [Pg.181]
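
A minimal sketch of this limitation, assuming a simple inverse least-squares calibration model with synthetic data: the matrix to be inverted is square in the number of digitized frequencies, so it grows with that number and becomes singular once the number of frequencies exceeds the number of calibration standards.

```python
import numpy as np

rng = np.random.default_rng(0)
n_standards, n_freq = 15, 10                 # keep n_freq <= n_standards
A = rng.normal(size=(n_standards, n_freq))   # absorbances of the standards
b_true = rng.normal(size=n_freq)
c = A @ b_true + 0.01 * rng.normal(size=n_standards)   # known concentrations

# Inverse calibration: regress concentration on absorbance.  The matrix to
# invert is (n_freq x n_freq); it becomes singular if n_freq > n_standards.
AtA = A.T @ A
b_hat = np.linalg.solve(AtA, A.T @ c)

print("condition number of A^T A:", np.linalg.cond(AtA))
print("max coefficient error:", np.abs(b_hat - b_true).max())
```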

In a strict sense, parameter estimation is the procedure of computing the estimates by localizing the extremum point of an objective function. A further advantage of the least squares method is that this step is well supported by efficient numerical techniques. Its use is particularly simple if the response function (3.1) is linear in the parameters, since then the estimates are found by linear regression without the iteration inherent in nonlinear optimization problems. [Pg.143]
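
A minimal sketch of the linear case with synthetic data: when the response function is linear in the parameters, a single linear-regression step yields the estimates, with no iteration.

```python
import numpy as np

# Response linear in the parameters: y = a + b*x  (synthetic data).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 25)
y = 2.0 + 0.5 * x + 0.1 * rng.normal(size=x.size)

# One linear-regression step, no iteration needed.
X = np.column_stack([np.ones_like(x), x])        # design matrix
theta, *_ = np.linalg.lstsq(X, y, rcond=None)    # minimizes ||y - X @ theta||^2
print("intercept, slope:", theta)
```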

Calibration and mixture analysis addresses the methods for performing standard experiments with known samples and then using that information optimally to measure unknowns later. Classical least squares, iterative least squares, principal components analysis, and partial least squares have been compared for these tasks, and the trade-offs have been discussed (Haaland,... [Pg.81]

The least-squares methods may generally be applied using one of two possible approaches. One of them requires computation of the function gradients, whilst the other does not. Gradient methods use iterative computation of corrections to consecutive approximate solutions g... [Pg.265]
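
A minimal Gauss-Newton sketch of the gradient-based approach, assuming a simple exponential response model and synthetic data: each iteration computes a correction to the current approximate solution from the residuals and their gradients (the Jacobian).

```python
import numpy as np

def gauss_newton(resid, jac, p0, n_iter=20):
    """Iteratively correct the parameter estimate p by solving the
    linearized least-squares problem J @ dp = -r at each step."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r, J = resid(p), jac(p)
        dp = np.linalg.lstsq(J, -r, rcond=None)[0]   # correction to p
        p = p + dp
    return p

# Example model y = a * exp(-b * x) fitted to synthetic data.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 5.0, 30)
y = 3.0 * np.exp(-0.7 * x) + 0.02 * rng.normal(size=x.size)

resid = lambda p: p[0] * np.exp(-p[1] * x) - y
jac = lambda p: np.column_stack([np.exp(-p[1] * x),
                                 -p[0] * x * np.exp(-p[1] * x)])
print(gauss_newton(resid, jac, p0=[2.0, 0.5]))       # starting estimate
```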

If the analysis of a dynamic NMR spectrum is carried out by an iterative least-squares fitting method, the results are accompanied by estimates of the errors. These are proportional to the square root of the sum of the squares of the deviations of the theoretical spectrum from the experimental one, as well as to the sensitivity of that sum to changes in the value of the parameter considered within the region where the sum attains a minimum. These estimates measure the effect of random errors on the resulting parameter values. They do not include any effects due to systematic errors, such as those involved in the assumed values of certain parameters. Moreover, because of the nonlinearity of the least-squares fitting procedure employed, the error estimates have only an approximate statistical significance (Section IV.B.2 and reference 67). [Pg.281]
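
A sketch of the usual linearized error estimates, assuming a simple exponential model and synthetic data rather than an NMR line-shape function: the approximate covariance of the parameters is s^2 (J^T J)^(-1), where s^2 is the residual variance and J the Jacobian at the minimum, so the estimates scale with the root of the sum of squared deviations and with the local parameter sensitivity.

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic "spectrum" y(x) = a * exp(-b * x); fit it and estimate errors.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 5.0, 40)
y = 3.0 * np.exp(-0.7 * x) + 0.05 * rng.normal(size=x.size)

res = least_squares(lambda p: p[0] * np.exp(-p[1] * x) - y, x0=[1.0, 1.0])

# Linearized (approximate) error estimates at the minimum.
n, k = x.size, res.x.size
s2 = 2.0 * res.cost / (n - k)                   # residual variance (cost = 0.5 * RSS)
cov = s2 * np.linalg.inv(res.jac.T @ res.jac)   # approximate covariance matrix
std_err = np.sqrt(np.diag(cov))
print("parameters:", res.x, "approx. std. errors:", std_err)
```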

To solve this problem, some assumptions should be made on the relationship between the error of the regression line and the concentration. As a rule, one assumes that the error of the regression line is proportional to the concentration. The variance function Var(X) is obtained by plotting the standard error vs. the concentration. The function is consequently estimated with the least-squares method as Var(X) = s_X^2 = (c + d·conc)^2. An alternative approach is described in the ISO 11483-2 standard, which uses an iterative procedure to estimate the variance function [18]. [Pg.145]
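
A minimal sketch of this approach with synthetic data (the names and numbers are placeholders): the standard deviation is fitted as a linear function of concentration, and the reciprocal of the resulting variance function supplies the weights for a weighted least-squares regression line.

```python
import numpy as np

rng = np.random.default_rng(0)
conc = np.repeat(np.array([1.0, 2.0, 5.0, 10.0, 20.0]), 8)   # replicate standards
true_sd = 0.1 + 0.05 * conc                                  # error grows with conc.
signal = 0.3 + 1.5 * conc + true_sd * rng.normal(size=conc.size)

# Estimate the variance function Var(X) = (c + d*conc)^2 from the replicate
# standard deviations at each concentration level.
levels = np.unique(conc)
sd = np.array([signal[conc == c0].std(ddof=1) for c0 in levels])
d, c = np.polyfit(levels, sd, 1)              # sd ~ c + d*conc
weights = 1.0 / (c + d * conc) ** 2           # 1 / Var(X)

# Weighted least-squares regression line: signal = b0 + b1*conc.
X = np.column_stack([np.ones_like(conc), conc])
b = np.linalg.solve(X.T @ (weights[:, None] * X), X.T @ (weights * signal))
print("intercept, slope:", b)
```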

Therefore it is better to use the nonlinear model directly in a nonlinear regression of the observed variable, i.e. the nonlinear least-squares method. Because of the nonlinearity, the minimization is an iterative process. [Pg.315]
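
A minimal sketch of such a direct nonlinear fit, assuming an exponential model and synthetic data; the library routine scipy.optimize.curve_fit carries out the iterative minimization internally.

```python
import numpy as np
from scipy.optimize import curve_fit

# Nonlinear model fitted directly, without any linearizing transformation.
def model(x, a, b):
    return a * np.exp(-b * x)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 5.0, 30)
y = 3.0 * np.exp(-0.7 * x) + 0.05 * rng.normal(size=x.size)

# curve_fit iterates internally from the initial estimate p0.
popt, pcov = curve_fit(model, x, y, p0=[1.0, 1.0])
print("a, b:", popt)
```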

The effects of substituents on the symmetrically disubstituted diarylethyl tosylates, [27 (X = Y)], can be described accurately in terms of the Y-T relationship with ρ = -4.44 and r = 0.53. The Y-T plot against the Y-T σ̄ scale with an appropriate r of 0.53 gives an excellent linear correlation for the whole set of substituents, indicating a uniform mechanism for all of them. When Y ≠ X, the overall solvolysis rate constant k_t corresponds to the sum of the rate constants for the two aryl-assisted pathways, and hence k_t cannot be employed directly in the Y-T analysis. The acetolysis of monosubstituted diphenylethyl tosylates gave a non-linear Y-T correlation, which is ascribed to a competitive X-substituted aryl-assisted pathway and the phenyl-assisted pathway. By application of an iterative non-linear least-squares method to (9), where the terms k_Δ and k_s are now replaced by the rate constants of these two pathways, respectively, the substituent effect on k_t can be dissected into a correlation with ρ = -3.53, r = 0.60, and an... [Pg.299]
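
A hedged sketch of such a dissection with entirely synthetic substituent constants and rate data (none of the numbers below are from the cited work): the observed rate constant is modelled as the sum of two pathways, one of which follows a Yukawa-Tsuno correlation, and the parameters are recovered by iterative non-linear least squares.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical dissection of k_obs = k_aryl + k_s, where the aryl-assisted
# term follows a Yukawa-Tsuno correlation:
#   log10(k_aryl / k0) = rho * (sigma0 + r * d_sigma)
# sigma0, d_sigma and k_obs are synthetic placeholders.
sigma0  = np.array([-0.27, -0.17, 0.00, 0.06, 0.23, 0.45])
d_sigma = np.array([-0.51, -0.14, 0.00, -0.05, 0.00, 0.00])
k0, k_s = 1.0e-4, 5.0e-5
rho_true, r_true = -3.5, 0.6
k_obs = k0 * 10 ** (rho_true * (sigma0 + r_true * d_sigma)) + k_s

def resid(p):
    rho, r, log_k0, log_ks = p
    k_aryl = 10 ** (log_k0 + rho * (sigma0 + r * d_sigma))
    return np.log10(k_aryl + 10 ** log_ks) - np.log10(k_obs)

fit = least_squares(resid, x0=[-2.0, 0.5, -4.0, -4.5])
print("rho, r:", fit.x[:2])
```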

The use of the non-linear least squares method does not require any derivatives, but it needs an initial estimate and takes more time to compute, since several iterations (usually 3 or 4) are necessary to reduce the difference between the estimated and calculated values of the damping coefficient to within 0.1%. But since this method requires only between 100 and 150 data points without a loss in accuracy, compared to as many as 1000 for the peak-finding and least squares methods, the scan rate can be reduced by as much as 90% and the time required for the calculations is reduced to the order of a minute. [Pg.346]

Several of the procedures for deriving structural parameters from moments of inertia make use of the method of least squares. Since the relation between moments of inertia and Cartesian coordinates or internal coordinates is nonlinear, an iterative least squares procedure must be used [18]. In this procedure an initial estimate of the structural parameters is made and derivatives of the n moments of inertia with respect to each of the k coordinates are calculated based on this estimate. These derivatives make up a matrix D with n rows and k columns. We then define a vector X to be the changes in the k coordinates and a vector B to be the differences between the experimental moments and the calculated moments. We also define a weight matrix W to be the inverse of the ma-... [Pg.100]
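
A minimal sketch of one such iteration, using a placeholder "moments-of-inertia" function of two internal coordinates and finite-difference derivatives: the coordinate corrections X solve (D^T W D) X = D^T W B, where D holds the derivatives, B the differences between experimental and calculated moments, and W the weights.

```python
import numpy as np

def wls_step(moments_fn, coords, I_exp, W, h=1e-6):
    """One weighted least-squares refinement step: solve
    (D^T W D) X = D^T W B for the coordinate corrections X, with
    D[i, j] = dI_i/dq_j from finite differences and B = I_exp - I_calc."""
    I_calc = moments_fn(coords)
    B = I_exp - I_calc
    D = np.empty((I_exp.size, coords.size))
    for j in range(coords.size):
        q = coords.copy()
        q[j] += h
        D[:, j] = (moments_fn(q) - I_calc) / h
    X = np.linalg.solve(D.T @ W @ D, D.T @ W @ B)
    return coords + X

# Toy stand-in for the moments of inertia as a function of two coordinates.
moments_fn = lambda q: np.array([q[0] ** 2 + q[1],
                                 q[0] * q[1],
                                 q[1] ** 2 + 2.0 * q[0]])
I_exp = moments_fn(np.array([1.2, 0.8]))   # pretend "experimental" moments
W = np.eye(3)                              # weight matrix
coords = np.array([1.0, 1.0])              # initial structural estimate
for _ in range(5):                         # iterate to convergence
    coords = wls_step(moments_fn, coords, I_exp, W)
print("refined coordinates:", coords)
```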

Since the data exhibits significant scatter, one must use a least-squares method to correlate probit with log10(time) [see Finney (9), Chapter 4]. This is done in an iterative fashion, successively improving the approximation to the least-squares line. Four iterations were required to yield a least-squares line given by ... [Pg.38]
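
A sketch of a Finney-style iteration with placeholder dose-response data (not the cited data set): working probits and weights are recomputed from the provisional line, and the line is refitted by weighted least squares at each pass.

```python
import numpy as np
from scipy.stats import norm

def probit_fit(log_t, n, r, n_iter=4):
    """Iterative weighted least squares for probit vs log10(time);
    n = subjects per group, r = responders per group."""
    p_obs = r / n
    a, b = 5.0, 1.0                        # provisional line: probit = a + b*log_t
    for _ in range(n_iter):
        y0 = a + b * log_t                 # expected probits
        z = norm.pdf(y0 - 5.0)             # ordinate of the normal curve
        P = norm.cdf(y0 - 5.0)             # expected response probability
        Y = y0 + (p_obs - P) / z           # working probits
        w = n * z ** 2 / (P * (1.0 - P))   # probit weights
        X = np.column_stack([np.ones_like(log_t), log_t])
        a, b = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * Y))
    return a, b

# Placeholder data: five exposure times, 40 subjects per group.
log_t = np.array([0.5, 0.8, 1.1, 1.4, 1.7])
n = np.full(5, 40.0)
r = np.array([3.0, 10.0, 22.0, 31.0, 38.0])
print("intercept, slope:", probit_fit(log_t, n, r))
```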

The minimization process uses the ICP (Iterative Closest Point) algorithm described by Greenspan [8] and Besl et al. [9]. Defining D as the set of data points of the surface S1 and M as the set of points of the model or surface S2, this method establishes a matching between the points of D and M: for each point of D there is a (nearest) point of the model M. From the correspondence established above, the transformation that minimizes the distance criterion is calculated and applied to the points of the set D, and the overall error is calculated using the least-squares method. [Pg.11]
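
A minimal sketch of such an ICP loop on a synthetic point cloud: nearest-point matching of D against M, least-squares (SVD) estimation of the rigid transform, application of the transform to D, and a final root-mean-square error; the helper name and data are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(D, M, n_iter=20):
    """Match each data point in D to its nearest model point in M, estimate
    the rigid transform by least squares (SVD), apply it, and repeat."""
    D = D.copy()
    tree = cKDTree(M)
    for _ in range(n_iter):
        _, idx = tree.query(D)               # nearest model point for each data point
        corr = M[idx]
        cD, cM = D.mean(axis=0), corr.mean(axis=0)
        H = (D - cD).T @ (corr - cM)         # cross-covariance of the matched sets
        U, _, Vt = np.linalg.svd(H)
        if np.linalg.det(Vt.T @ U.T) < 0:    # guard against an improper rotation
            Vt[-1] *= -1.0
        R = Vt.T @ U.T                       # least-squares rotation
        t = cM - R @ cD                      # least-squares translation
        D = D @ R.T + t                      # apply the transform to the data set
    rms = np.sqrt(((D - M[tree.query(D)[1]]) ** 2).sum(axis=1).mean())
    return D, rms

# Toy example: M is a random point cloud, D a slightly rotated/shifted copy.
rng = np.random.default_rng(0)
M = rng.normal(size=(200, 3))
theta = 0.1
R0 = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
D = M @ R0.T + np.array([0.05, -0.02, 0.03])
D_aligned, rms = icp(D, M)
print("final RMS error:", rms)
```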

