Big Chemical Encyclopedia


Least-squares optimization

The most notable advance in computational crystallography was the availability of methods for refining protein structures by least-squares optimization. These methods were developed in a number of laboratories and made feasible by the implementation of fast Fourier transform techniques [32]. The most widely used system was PROLSQ from the Hendrickson lab [33]. [Pg.287]

Deutsch and Hansch applied this principle to the sweet taste of the 2-substituted 5-nitroanilines. Using the data available (see Table VII), regression Eqs. 5-7, calculated by the method of least squares, optimally expressed the relationship between relative sweetness (RS), the Hammett constant σ, and the hydrophobic-bonding constant π. [Pg.225]

OH S,Oi and VotrSiO - were determined by the least squares optimization... [Pg.270]

A method is described for fitting the Cole-Cole phenomenological equation to isochronal mechanical relaxation scans. The basic parameters in the equation are the unrelaxed and relaxed moduli, a width parameter, and the central relaxation time. The first three are given linear temperature coefficients, and the last can have WLF or Arrhenius behavior. A set of these parameters is determined for each relaxation in the specimen by means of nonlinear least-squares optimization of the fit of the equation to the data. An interactive front-end is present in the fitting routine to aid initial parameter estimation for the iterative fitting process. The use of the determined parameters in assisting the interpretation of relaxation processes is discussed. [Pg.89]

These parameters are determined by non-linear least-squares optimization of the fit of the function to both the experimental storage and loss moduli curves. As emphasized, the two determinants of temperature-scan peak width referred to above (i.e., in terms of equation (2), the activation energy ΔH of τ0, and ᾱ) have features that allow distinguishing... [Pg.92]
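A fit of this kind can be sketched with SciPy's nonlinear least-squares routine. The Cole-Cole form used here, the Arrhenius temperature dependence, the parameter values, and the synthetic data are all illustrative assumptions, not the authors' actual function or measurements:

```python
import numpy as np
from scipy.optimize import least_squares

def cole_cole_storage(T, Mu, Mr, alpha, logtau0, dH, omega=1.0):
    """Storage modulus of an assumed Cole-Cole relaxation at fixed
    frequency, with an Arrhenius central relaxation time tau(T)."""
    R = 8.314  # J/(mol K)
    tau = 10.0**logtau0 * np.exp(dH / (R * T))
    iwt = (1j * omega * tau)**alpha
    M = Mr + (Mu - Mr) * iwt / (1.0 + iwt)   # complex modulus
    return M.real

# hypothetical isochronal scan (synthetic data, for illustration only)
T = np.linspace(200.0, 400.0, 60)
true = (3.0e9, 1.0e7, 0.6, -15.0, 80e3)
rng = np.random.default_rng(0)
y = cole_cole_storage(T, *true) + rng.normal(0, 1e7, T.size)

def residuals(p):
    return cole_cole_storage(T, *p) - y

# iterative nonlinear least-squares fit from rough initial estimates
fit = least_squares(residuals, x0=(2.5e9, 5.0e7, 0.5, -14.0, 70e3))
Mu, Mr, alpha, logtau0, dH = fit.x
```

The interactive front-end mentioned above plays the role of choosing the `x0` starting estimates; here they are simply hard-coded.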

In order to answer these questions, the kinetic and network structure models were used in conjunction with a nonlinear least squares optimization program (SIMPLEX) to determine cure response in "optimized ovens". Ovens were optimized in two different ways. In the first, the bake time was fixed and oven air temperatures were adjusted so that the crosslink densities were as close as possible to the optimum value. In the second, oven air temperatures were varied to minimize the bake time subject to the constraint that all parts of the car be acceptably cured. Air temperatures were optimized for each of the different paints as a function of different sets of minimum and maximum heating rate constants. [Pg.268]

Fe2(OH)2+. A kinetic model was developed that consisted of 10 steps, and it was fit to the complete set of time-dependent traces by use of a combined least-squares optimization with numerical integration. [Pg.366]
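A combined least-squares/numerical-integration fit of this sort can be sketched as follows. The two-step A → B → C scheme, its rate constants, and the synthetic trace are hypothetical stand-ins for the ten-step model described above:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def simulate(k, t_eval, a0=1.0):
    """Numerically integrate a hypothetical A -> B -> C scheme and
    return the concentration trace of the intermediate B."""
    k1, k2 = k
    def rhs(t, y):
        a, b = y
        return [-k1 * a, k1 * a - k2 * b]
    sol = solve_ivp(rhs, (0.0, t_eval[-1]), [a0, 0.0], t_eval=t_eval,
                    rtol=1e-8, atol=1e-10)  # tight tolerances so the
    return sol.y[1]                          # finite-difference Jacobian is clean

t = np.linspace(0.0, 10.0, 50)
k_true = (1.2, 0.4)
rng = np.random.default_rng(1)
data = simulate(k_true, t) + rng.normal(0, 0.005, t.size)  # synthetic trace

# least squares wrapped around the numerical integration
fit = least_squares(lambda k: simulate(k, t) - data, x0=(1.0, 1.0),
                    bounds=([1e-6, 1e-6], [10.0, 10.0]), diff_step=1e-4)
```

The `diff_step` setting keeps the finite-difference steps large relative to the integrator's truncation error, a common practical issue when least squares is combined with numerical integration.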

Certain assumptions underlie least squares computations, such as the independence of the unobservable errors εi, a constant error variance, and lack of error in the x's (Draper and Smith, 1998). If the model represents the data adequately, the residuals should possess characteristics that agree with these basic assumptions. The analysis of residuals is thus a way of checking that the assumptions underlying least squares optimization are not violated. For example, if the model fits well, the residuals should be randomly distributed about the value of y predicted by the model. Systematic departures from randomness indicate that the model is unsatisfactory; examination of the patterns formed by the residuals can provide clues about how the model can be improved (Box and Hill, 1967; Draper and Hunter, 1967). [Pg.60]
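A minimal residual check along these lines can be sketched as follows; the straight-line model, the synthetic data, and the crude runs count are illustrative choices, not a formal randomness test:

```python
import numpy as np

# Fit a straight line by ordinary least squares and inspect the residuals:
# near-zero mean and frequent sign changes are consistent with random scatter.
rng = np.random.default_rng(2)
x = np.linspace(0, 10, 40)
y = 2.0 + 0.5 * x + rng.normal(0, 0.1, x.size)  # hypothetical data

coef = np.polyfit(x, y, 1)            # ordinary least-squares line
resid = y - np.polyval(coef, x)

mean_resid = resid.mean()
# Crude runs count: many sign changes suggest random scatter; long runs of
# one sign suggest a systematic (e.g. curved) departure from the model.
runs = 1 + np.count_nonzero(np.diff(np.sign(resid)))
```

For a fit with an intercept, the residual mean is zero by construction, so the sign pattern (not the mean) carries the diagnostic information.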

NF < 0: The problem is overdetermined. If NF < 0, fewer process variables exist in the problem than independent equations, and consequently the set of equations has no exact solution. The process model is said to be overdetermined, and least squares optimization or some similar criterion can be used to obtain values of the unknown variables, as described in Section 2.5. [Pg.67]
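The overdetermined case can be illustrated with a small linear system (the numbers here are made up): four independent equations in two unknowns, so NF = -2, and least squares supplies the compromise solution minimizing the residual norm:

```python
import numpy as np

# Four equations, two unknowns: no exact solution exists, so
# np.linalg.lstsq returns the x minimizing ||Ax - b||.
A = np.array([[1.0,  1.0],
              [1.0, -1.0],
              [2.0,  1.0],
              [1.0,  2.0]])
b = np.array([3.0, 1.1, 4.9, 5.2])   # slightly inconsistent right-hand sides
x, res, rank, sv = np.linalg.lstsq(A, b, rcond=None)
```

`res` holds the leftover sum of squared residuals, which is nonzero precisely because the system has no exact solution.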

Historically, treatment of measurement noise has been addressed through two distinct avenues. For steady-state data and processes, Kuehn and Davidson (1961) presented the seminal paper describing the data reconciliation problem based on least squares optimization. For dynamic data and processes, Kalman filtering (Gelb, 1974) has been successfully used to recursively smooth measurement data and estimate parameters. Both techniques were developed for linear systems and weighted least squares objective functions. [Pg.577]
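For the steady-state case, the data reconciliation problem has a closed-form weighted least-squares solution when the balances are linear. The single mass balance, flow measurements, and variances below are hypothetical, chosen only to show the mechanics:

```python
import numpy as np

# Reconcile measured flows m so they exactly satisfy the balance A x = 0,
# minimizing the variance-weighted sum of squared adjustments
#   min (x - m)' Sigma^-1 (x - m)   s.t.   A x = 0
A = np.array([[1.0, -1.0, -1.0]])        # splitter balance: F1 - F2 - F3 = 0
m = np.array([10.2, 5.1, 4.5])           # hypothetical raw measurements
Sigma = np.diag([0.3, 0.2, 0.2]) ** 2    # measurement variances

# closed-form solution via the constrained least-squares projection
x = m - Sigma @ A.T @ np.linalg.solve(A @ Sigma @ A.T, A @ m)
```

Note how the raw measurements violate the balance (10.2 ≠ 5.1 + 4.5) while the reconciled flows satisfy it exactly, with the largest adjustment applied to the least precise measurement.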

Table 4.1 Distance least-squares optimized atomic coordinates for UZM-5.
The mathematical techniques are part of multivariate statistics. They are closely related and often exchangeable. Two main approaches can be distinguished: Least Squares Optimization (LSO) and Factor Analysis (FA). [Pg.81]

The convergence criterion in the alternating least-squares optimization is based on the comparison of the fit obtained in two consecutive iterations. When the relative difference in fit is below a threshold value, the optimization is finished. Sometimes a maximum number of iterative cycles is used as the stop criterion. This method is very flexible and can be adapted to very diverse real examples, as shown in Section 11.7. [Pg.440]
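The stop test described above can be sketched as a generic loop; the `step` callback (one optimization cycle returning the reconstructed data matrix) and the degenerate demo step are placeholders, since a real cycle would re-estimate the model matrices:

```python
import numpy as np

def run_als(step, D, tol=1e-6, max_iter=100):
    """Iterate step(D) until the relative change in lack of fit between
    consecutive cycles drops below tol, or max_iter cycles are reached."""
    prev_fit = np.inf
    for it in range(max_iter):
        D_hat = step(D)                                   # one ALS cycle
        fit = np.linalg.norm(D - D_hat) / np.linalg.norm(D)  # lack of fit
        if abs(prev_fit - fit) / max(fit, 1e-15) < tol:
            return it + 1, fit                            # converged
        prev_fit = fit
    return max_iter, fit                                  # hit cycle cap

# degenerate demo step: constant reconstruction, so the loop stops on cycle 2
n_iter, lof = run_als(lambda M: 0.9 * M, np.ones((3, 3)))
```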

Esteban, M., Ariño, C., Díaz-Cruz, J.M., Díaz-Cruz, M.S., and Tauler, R., Multivariate curve resolution with alternating least squares optimization: a soft-modeling approach to metal complexation studies by voltammetric techniques, Trends Anal. Chem., 19, 49-61, 2000. [Pg.468]

The computer least squares optimization algorithm used is termed Simplex [4]; it was programmed in Microsoft QuickBasic 4.5. A version of the program is provided at the end of this section. [Pg.144]
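The same simplex (Nelder-Mead) idea is available in SciPy; the sketch below, on a hypothetical exponential-decay data set, stands in for the QuickBasic listing rather than reproducing it:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical decay data y = A * exp(-k t) + noise
rng = np.random.default_rng(3)
t = np.linspace(0, 5, 30)
y = 2.0 * np.exp(-1.3 * t) + rng.normal(0, 0.01, t.size)

# Simplex minimizes the scalar sum of squared residuals directly;
# no derivatives of the model are required.
sse = lambda p: np.sum((p[0] * np.exp(-p[1] * t) - y) ** 2)
res = minimize(sse, x0=[1.0, 1.0], method="Nelder-Mead")
```

Because the simplex method only compares objective values, it is easy to implement in any language (as the QuickBasic version shows), at the cost of slower convergence than derivative-based least-squares solvers.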

Figure 8. Stereoview of the BSS framework drawn as straight lines connecting adjacent tetrahedral vertices. Atomic co-ordinates are based on distance least squares optimized values.
Visually examine the representations (projections down the three principal crystallographic directions) of the full unit-cell contents for each of the saved models, then add framework oxygen atoms between neighboring T-atoms and perform DLS (distance least squares) optimization. [Pg.399]

Multivariate curve resolution-alternating least squares (MCR-ALS) is an algorithm that fits the requirements for image resolution [71, 73-75]. MCR-ALS is an iterative method that performs the decomposition into the bilinear model D = CSᵀ by means of an alternating least squares optimization of the matrices C and Sᵀ according to the following steps ... [Pg.90]

Alternating Least Squares Optimization. The optimization process starts the iterative calculations from the initial estimates (spectral or electrophoretic profiles) of the species to be modeled. If spectra are used as input, the conjugate peak profile contributions C can be calculated as follows ... [Pg.210]
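One such half-step can be sketched with NumPy under the bilinear model D = CSᵀ; the matrix sizes and random nonnegative profiles are hypothetical, and the spectral estimate S is taken as exact so the recovered C can be checked:

```python
import numpy as np

# Given estimated spectra S, the conjugate concentration (peak) profiles C
# follow from a linear least-squares solve; the next half-step would fix C
# and re-estimate S in the same way.
rng = np.random.default_rng(4)
C_true = np.abs(rng.normal(size=(20, 2)))   # hypothetical profiles
S_true = np.abs(rng.normal(size=(15, 2)))   # hypothetical spectra
D = C_true @ S_true.T                       # bilinear data matrix

S = S_true                                   # pretend S is the input estimate
C = np.linalg.lstsq(S, D.T, rcond=None)[0].T # solves S C^T = D^T for C
```

In a real MCR-ALS cycle, constraints such as non-negativity are applied to C after each solve before alternating back to S.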

The open-source Scientific Python (SciPy) package (14) provides a least-squares optimizer that can be used for fitting nonlinear regressions. [Pg.203]
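A minimal use of that optimizer for a nonlinear regression might look like the following; the logistic model, parameter values, and synthetic data are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import least_squares

# Fit a logistic curve y = L / (1 + exp(-k (x - x0))) to synthetic data.
rng = np.random.default_rng(5)
x = np.linspace(-3, 3, 40)
y = 1.0 / (1.0 + np.exp(-2.0 * x)) + rng.normal(0, 0.01, x.size)

model = lambda p, x: p[0] / (1.0 + np.exp(-p[1] * (x - p[2])))
# least_squares takes the residual vector, not the sum of squares
fit = least_squares(lambda p: model(p, x) - y, x0=[1.5, 1.0, 0.5])
```

Passing the residual vector (rather than a scalar objective) lets the solver exploit the least-squares structure of the problem when building its approximate Hessian.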

This brings us to a final comment: it is very easy to modify the non-linear least squares to include weighting, since the user determines the residuals. We can include any weighting factors we want in our column with residuals, thereby converting Solver to a weighted non-linear least-squares optimizer. The important part is to include the proper weighting. For that we need to know what is (are) the major source(s) of the experimental uncertainty. And therein lies the problem: we are often too much in a hurry to find out where the experimental uncertainty comes from. Unfortunately, without that knowledge, we cannot expect to get reliable answers, no matter how sophisticated the software used. [Pg.117]
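The same trick carries over directly from Solver to any least-squares routine: divide each residual by its standard deviation before the solver squares and sums them. The line model, data points, and uncertainties below are hypothetical:

```python
import numpy as np
from scipy.optimize import least_squares

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.05, 2.9, 5.1, 6.8, 9.3])
sigma = np.array([0.05, 0.1, 0.1, 0.2, 0.5])   # assumed known uncertainties

model = lambda p: p[0] + p[1] * x
# Weighted residuals: noisy points count for less in the sum of squares.
fit = least_squares(lambda p: (model(p) - y) / sigma, x0=[0.0, 1.0])
intercept, slope = fit.x
```

This is only valid if the `sigma` values genuinely reflect the dominant experimental uncertainty, which is exactly the knowledge the passage above warns is so often skipped.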

