Big Chemical Encyclopedia


Iterative refinement

The initial classification is iteratively refined according to each atom's neighbors, as soon as a neighbor is already unique (forms a class by itself) [17,193]. Each unique atom in turn is used to split the non-unique classes, and each atom that thereby becomes unique joins the queue to be used itself. [Pg.207]
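The refinement scheme described above can be sketched as follows. The graph, the initial degree-based classes, and all names are illustrative, not taken from [17,193]:

```python
from collections import deque

def refine_classes(adj, classes):
    """Refine an initial atom partition: each unique atom (a class of size
    one) is used to split the non-unique classes into neighbors and
    non-neighbors of that atom; atoms that become unique join the queue."""
    classes = [set(c) for c in classes]
    queue = deque(next(iter(c)) for c in classes if len(c) == 1)
    used = set()
    while queue:
        u = queue.popleft()
        if u in used:
            continue
        used.add(u)
        refined = []
        for c in classes:
            near = {a for a in c if u in adj[a]}   # members adjacent to u
            for part in (near, c - near):
                if part:
                    refined.append(part)
        classes = refined
        for c in classes:                          # enqueue newly unique atoms
            if len(c) == 1:
                a = next(iter(c))
                if a not in used:
                    queue.append(a)
    return classes

# Toy graph: atom 1 bonded to 0, 2 and 4; atom 2 also bonded to 3.
adj = {0: {1}, 1: {0, 2, 4}, 2: {1, 3}, 3: {2}, 4: {1}}
start = [{0, 3, 4}, {1}, {2}]                      # initial classes by degree
final = refine_classes(adj, start)
```

Here atom 1 splits the terminal atoms {0, 3, 4} into its neighbors {0, 4} and the non-neighbor {3}; atoms 0 and 4 remain together because they are genuinely symmetric.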

By using steps 1 and 2, a discrete partition is often obtained, particularly for molecular graphs. [Pg.207]

Successive application of the items in step 1 yields a progressively finer classification. [Pg.208]

Step 2. Iterative refinement according to each unique atom's immediate neighbors. [Pg.208]

Finally we can separate D from 14 according to their neighborship to the unique atom.


Because many practical flames are turbulent (spark-ignited engine flames, oil field flares), an understanding of the interaction between the complex fluid dynamics of turbulence and the combustion processes is necessary to develop predictive computer models. Once these predictive models are developed, they are repeatedly compared with measurements of species, temperatures, and flow in actual flames for iterative refinement. If the model is deficient, it is changed and again compared with experiment. The process is repeated until a satisfactory predictive model is obtained. [Pg.274]

Table 5.24. Iterative refinement of the isochron parameters (slope α and intercept β) for the lead isotope data listed in Table 5.23.
Implementation of a covariance structure into this numerical scheme is described in Tarantola and Valette (1982). In essence, an a priori covariance structure is assumed for the whole set of observations and parameters, which should be tightened by iterative refinements since we are still dealing with a minimum variance estimate. [Pg.309]
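As an illustration of this kind of iteratively refined minimum-variance line fit, here is a minimal York-type errors-in-both-coordinates sketch. Unit weights and invented toy data are used (this is not the lead isotope data of Table 5.23, and it is a simplification of the full Tarantola-Valette covariance formalism):

```python
# Errors-in-both-coordinates straight-line fit, refined iteratively:
# the slope b enters the combined weights, so the fit is repeated
# until b is self-consistent (York-type scheme, correlation term omitted).
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [0.9, 2.1, 2.9, 4.2, 4.9]       # roughly y = x, invented data
wx = wy = 1.0                         # unit weights on both coordinates

b = 0.0                               # initial slope guess
for _ in range(100):
    W = [wx * wy / (wx + b * b * wy) for _ in xs]       # combined weights
    xbar = sum(Wi * xi for Wi, xi in zip(W, xs)) / sum(W)
    ybar = sum(Wi * yi for Wi, yi in zip(W, ys)) / sum(W)
    U = [xi - xbar for xi in xs]
    V = [yi - ybar for yi in ys]
    beta = [Wi * (Ui / wy + b * Vi / wx) for Wi, Ui, Vi in zip(W, U, V)]
    b_new = (sum(Wi * bi * Vi for Wi, bi, Vi in zip(W, beta, V))
             / sum(Wi * bi * Ui for Wi, bi, Ui in zip(W, beta, U)))
    if abs(b_new - b) < 1e-12:        # slope is self-consistent
        break
    b = b_new
a = ybar - b * xbar                   # intercept from the weighted centroid
```

Because the weights depend on the slope itself, a single least-squares pass is not enough; the loop tightens the estimate exactly in the sense of the iterative refinements described above.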

Table 5.27. Iterative refinement of the time-drift parameters by the total inverse method.
Aiming to construct explicit dynamic models, Eqs. (5) and (6) provide the basic relationships of all metabolic modeling. All current efforts to construct large-scale kinetic models are based on a specification of the elements of Eq. (5), usually involving several rounds of iterative refinement. For a schematic workflow, see again Fig. 4. In the following sections, we provide a brief summary of the properties of the stoichiometric matrix (Section III.B) and discuss the most common functional form of enzyme-kinetic rate equations (Section III.C). A selection of explicit kinetic models is provided in Table I. TABLE I Selected Examples of Explicit Kinetic Models of Metabolism... [Pg.123]

Gotoh, O. (1996). Significant improvement in accuracy of multiple protein sequence alignments by iterative refinement as assessed by reference to structural alignments. J. Mol. Biol. 264, 823-838. [Pg.134]

Iterative refining processes invariably depend on the quality of the initial guesses. If these are too far from the optimum, the process can diverge and collapse, sometimes seriously. The code provided for all iterative processes is minimal and works for most reasonably well-behaved problems; however, the routines are not fool-proof. In the case of divergence and collapse we recommend that the user investigate the appropriateness of the initial guesses supplied to the function. [Pg.6]
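The warning above can be made concrete with a classic example: Newton-Raphson iteration on arctan(x) converges for a nearby initial guess but runs away for a distant one. The divergence guard and its threshold below are illustrative choices, not part of any particular package:

```python
import math

def newton(f, df, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson iteration with a crude divergence guard;
    returns (final x, converged flag)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x, True
        if not math.isfinite(x) or abs(x) > 1e8:
            return x, False           # the iteration has run away
    return x, False

# f(x) = arctan(x) has its only root at x = 0, but Newton's method
# diverges for starting points beyond roughly |x0| > 1.39.
f, df = math.atan, lambda x: 1.0 / (1.0 + x * x)
x_good, ok = newton(f, df, 0.5)       # good guess: converges to the root
x_bad, ok_bad = newton(f, df, 2.0)    # poor guess: the iterates explode
```

With x0 = 2.0 each step overshoots further (2 → -3.5 → 14 → -279 → ...), exactly the divergence-and-collapse behavior the text describes.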

In this section we demonstrate that it is possible to use the complete vector or matrix of residuals to drive the iterative refinement towards the minimum. [Pg.148]

The crucial aspect of equation (4.79) is that the residuals are defined as a function of the rate constants only. We are back to a reasonable number of parameters, two for our example. Note, however, that the matrix A is always based on the matrix C. Thus, during the iterative refinement, where the rate constants are still incorrect, C as well as A are incorrect too. Only at the very end will they be correct. [Pg.163]

What is the effect on the iterative refinement of the parameters? The minimum is defined by Jᵀr = 0. The curvature matrix is only required to guide the iterative process towards the minimum, and thus the approximation, JᵀJ, for the curvature matrix does not compromise the exact location of the minimum. The approximation only results in a slightly different path taken by the algorithm towards the minimum. Ignoring the terms... [Pg.203]
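A minimal sketch of such a residual-driven refinement, using the JᵀJ (Gauss-Newton) approximation to the curvature for a single rate constant. The exponential model, data, and starting value are invented for illustration; they are not the book's worked example:

```python
import math

# Synthetic first-order decay data y = exp(-k_true * t), noise-free.
t = [i * 0.25 for i in range(21)]           # time grid 0 ... 5
k_true = 0.7
y = [math.exp(-k_true * ti) for ti in t]

k = 0.2                                     # deliberately poor initial guess
for _ in range(100):
    # Residuals are a function of the rate constant only.
    r = [yi - math.exp(-k * ti) for ti, yi in zip(t, y)]
    # Jacobian dr/dk (a single column, since there is one parameter).
    J = [ti * math.exp(-k * ti) for ti in t]
    JtJ = sum(j * j for j in J)             # Gauss-Newton curvature J'J
    Jtr = sum(j * ri for j, ri in zip(J, r))
    dk = -Jtr / JtJ                         # step from solving (J'J) dk = -J'r
    k += dk
    if abs(dk) < 1e-12:                     # J'r ~ 0: minimum reached
        break
```

At convergence Jᵀr = 0 holds regardless of the curvature approximation, so the refined k matches the true rate constant; the JᵀJ shortcut only altered the path taken to get there.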

It is worthwhile to compare this iterative refinement of concentration profiles as given on p.271 with ITTFA, the other iterative process we introduced in Chapter 5.2.2. [Pg.275]

As we have seen with the previous iterative refinement and ITTFA, convergence generally is very sluggish. Even with moderately complex systems, it is often too slow to be useful. There are alternative, non-iterative methods that compare favourably with the above iterative algorithms. [Pg.276]

The subsequent computation of the absorption spectra A from C and Y is a simple linear regression. This is followed by the normalisation of the concentration profiles to a maximum of one, as has been outlined already in the preceding chapter Iterative Refinement of the Concentration Profiles. The normalisation is done using the routine norm_max.m (p.275). [Pg.279]

ALS should more correctly be called Alternating Linear Least-Squares, as every step in the iterative cycle is a linear least-squares calculation followed by some correction of the results. The main advantage and strength of ALS is the ease with which any conceivable constraint can be implemented; its main weakness is the inherent poor convergence. This is a property ALS shares with the very similar methods of Iterative Target Transform Factor Analysis, ITTFA, and Iterative Refinement of the Concentration Profiles, discussed in Chapters 5.2.2 and 5.3.3. [Pg.280]
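A rank-one toy version of the ALS cycle: two alternating linear least-squares steps, each followed by a non-negativity correction, and a final normalisation of the profile to a maximum of one. The single-component data and shapes are invented for illustration:

```python
# Factor a data matrix Y ~ c a^T (one concentration profile c, one
# spectrum a) by alternating least squares with non-negativity clipping.
c_true = [0.0, 0.5, 1.0, 0.5, 0.1]          # "concentration profile"
a_true = [2.0, 1.0, 0.2]                    # "spectrum"
Y = [[ci * aj for aj in a_true] for ci in c_true]

a = [1.0, 1.0, 1.0]                         # crude initial spectrum guess
for _ in range(100):
    # C-step: linear least squares c_i = (Y_i . a) / (a . a), clip negatives.
    aa = sum(x * x for x in a)
    c = [max(0.0, sum(Yij * aj for Yij, aj in zip(row, a)) / aa) for row in Y]
    # A-step: linear least squares a_j = (Y_:j . c) / (c . c), clip negatives.
    cc = sum(x * x for x in c)
    a = [max(0.0, sum(Y[i][j] * c[i] for i in range(len(c))) / cc)
         for j in range(len(a))]

# Normalise the profile to a maximum of one, as in the text.
m = max(c)
c = [ci / m for ci in c]
```

Each half-step is an exact linear regression; the clipping of negative entries is the "correction of the results" that makes constraints so easy to add, and it is also what can slow convergence on realistic data.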

The Human Genome Project went three-dimensional in late 2000. Structural genomics efforts will determine the structures of thousands of new proteins over the next decade. These initiatives seek to streamline and automate every experimental and computational aspect of the structural determination pipeline, with most of the steps involved covered in previous chapters of this volume. At the end of the pipeline, an atomic model is built and iteratively refined to best fit the observed data. The final atomic model, after careful analysis, is deposited in the Protein Data Bank, or PDB (Berman et al., 2000). About 25,000 unique protein sequences are currently in the PDB. High-throughput and conventional methods will dramatically increase this number and it is crucial that these new structures be of the highest quality (Chandonia and Brenner, 2006). [Pg.191]

To achieve the greatest improvements in drug discovery efficiency, empirical data of various kinds must be collected throughout the iterative refinement process. It is desirable to obtain more accurate dissociation constants rather than IC50 or single-point percent-inhibition values. In addition, the 3-dimensional structures of interesting target-inhibitor complexes are determined...

The zero subscripts in Eqs. (5.127) and (5.128) emphasize that the initial-guess c's, with no iterative refinement, were used to calculate G. In the subsequent iterations of the SCF procedure, Hcore will remain constant while G will be refined as the c's... [Pg.223]
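Although a full SCF program is far beyond a short example, the cycle described above has the shape of a damped fixed-point iteration: a quantity is recomputed from its own previous value until successive iterates agree. A generic sketch of that structure, with cos(x) standing in for the refinement step (the function names and damping value are illustrative):

```python
import math

def self_consistent(update, x0, tol=1e-12, max_iter=200, damping=0.5):
    """Generic self-consistency loop: mix the old value with the updated
    one (damping) until successive iterates agree within tol. The SCF
    procedure has this shape, with x playing the role of the refined
    coefficients and `update` the rebuild of G from them."""
    x = x0
    for n in range(max_iter):
        x_new = (1 - damping) * x + damping * update(x)
        if abs(x_new - x) < tol:
            return x_new, n + 1       # self-consistent after n+1 cycles
        x = x_new
    raise RuntimeError("not self-consistent after %d cycles" % max_iter)

# Fixed point of cos(x): the classic x = cos(x) self-consistency problem.
x, cycles = self_consistent(math.cos, x0=0.0)
```

Damping (mixing old and new values) is a standard way to tame the oscillations such loops are prone to, for the same reason the initial guess matters: the pure update map may not contract on its own.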

The CI strict analogue of the iterative refinement of the coefficients that we saw in HF calculations (Section 5.23.6.5) would refine just the weighting factors of the determinants (the c's of Eqs. (5.168)), but in the MCSCF version of CI the spatial MOs within the determinants are also optimized (by optimizing the c's of the... [Pg.273]

Sensitivity analysis should be an integral component of the uncertainty analysis in order to identify key sources of variability, uncertainty or both and to aid in iterative refinement of the exposure model. The results of sensitivity analysis should be used to identify key sources of uncertainty that should be the target of additional data collection or research, to identify key sources of controllable variability that can be the focus of risk management strategies and to evaluate model responses and the relative importance of various model inputs and model components to guide model development. [Pg.60]

Where appropriate to an assessment objective, exposure assessments should be iteratively refined over time to incorporate new data, information and methods to better characterize uncertainty and variability. [Pg.65]




© 2024 chempedia.info