
Least-Squares Superposition Methods

The necessary mathematical background for the generation of mean Cartesian coordinates referred to the inertial axes of the fragment is covered in Chapter 1. This method ... [Pg.156]


The most straightforward method for comparison of 3-D structures involves rigid-body least-squares superposition of the Cα positions. We have developed a procedure for alignment of several homologous structures [9, 10] without bias toward any one structure in the set. Divergent proteins usually retain the general arrangement of strands and helices. However, when sequence identity is below 20%, the differences in relative orientation and position of the secondary structural elements usually preclude their simultaneous superposition [1,2,11,12]. [Pg.670]
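
As an illustration (not taken from the cited work), the short NumPy sketch below performs a rigid-body least-squares superposition of two matched coordinate sets using the SVD-based Kabsch procedure; the coordinates, function name and values are purely hypothetical.

    import numpy as np

    def superpose(mobile, reference):
        """Rigid-body least-squares superposition (Kabsch/SVD) of two matched
        N x 3 coordinate arrays, e.g. equivalent C-alpha positions.
        Returns the fitted mobile coordinates and the RMSD."""
        mob_c = mobile - mobile.mean(axis=0)
        ref_c = reference - reference.mean(axis=0)
        # Optimal rotation from the SVD of the 3 x 3 covariance matrix.
        U, _, Vt = np.linalg.svd(mob_c.T @ ref_c)
        d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against a reflection
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        fitted = mob_c @ R.T + reference.mean(axis=0)
        rmsd = np.sqrt(np.mean(np.sum((fitted - reference) ** 2, axis=1)))
        return fitted, rmsd

    # Toy usage: a rotated and translated copy of a random structure superposes
    # back onto the original with an RMSD of essentially zero.
    rng = np.random.default_rng(3)
    ref = rng.normal(size=(50, 3))
    theta = 0.4
    Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
    mob = ref @ Rz.T + np.array([5.0, -2.0, 1.0])
    _, rmsd = superpose(mob, ref)
    print("RMSD:", round(rmsd, 6))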

The basic problem with the fragment assembly method is its use of least-squares superposition, which means that the proteins are treated as rigid bodies. This may result in only a small number of equivalent positions being used to pinpoint a large part of the model, and information other than the Cα positions in the known structures is often neglected. Thus, another technique was developed to allow a more flexible representation of protein structure, for both comparison and modelling purposes. [Pg.452]

An interesting method of fitting was presented some years ago with the introduction of the Model 310 curve resolver by E. I. du Pont de Nemours and Company. With this equipment, the operator chose between superpositions of electronically generated Gaussian and Cauchy functions that were visually superimposed on the data record. The operator was free to adjust the component parameters and seek the best visual match to the data. The curve resolver provided an excellent graphic demonstration of the ambiguities that can arise when any method is used to resolve curves, whether the fit is judged visually or firmly rooted in rigorous least squares. The operator of the Model 310 soon discovered that, when the data comprise two closely spaced peaks, acceptable fits can be obtained with more than one choice of parameters; the closer the blended peaks, the wider the choice of parameters. The part played by noise also became apparent quickly: a noisy data trace allowed the operator additional freedom of choice once the error bar implicit at each data point was taken into account. [Pg.33]
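
The ambiguity described above is easy to reproduce numerically. The sketch below (an illustration only, not the Model 310 itself) least-squares fits a noisy superposition of two closely spaced Gaussians from two different starting guesses using SciPy; depending on the starting point, different parameter sets can give comparably small residuals. All values are illustrative.

    import numpy as np
    from scipy.optimize import curve_fit

    def two_gaussians(x, a1, mu1, s1, a2, mu2, s2):
        """Superposition of two Gaussian components."""
        return (a1 * np.exp(-0.5 * ((x - mu1) / s1) ** 2) +
                a2 * np.exp(-0.5 * ((x - mu2) / s2) ** 2))

    # Synthetic "data record": two closely spaced peaks plus noise.
    rng = np.random.default_rng(2)
    x = np.linspace(0.0, 10.0, 400)
    y = two_gaussians(x, 1.0, 4.6, 0.8, 0.8, 5.4, 0.8) + rng.normal(scale=0.02, size=x.size)

    # Different starting guesses can lead to different parameter sets whose
    # residuals are comparably small -- the ambiguity described in the text.
    for guess in ([1.0, 4.5, 0.8, 0.8, 5.5, 0.8],
                  [1.5, 5.0, 1.2, 0.2, 6.0, 0.5]):
        popt, _ = curve_fit(two_gaussians, x, y, p0=guess, maxfev=20000)
        rms = np.sqrt(np.mean((two_gaussians(x, *popt) - y) ** 2))
        print(np.round(popt, 2), "rms residual:", round(rms, 4))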

In this chapter only QSAR methods that use physicochemical or structural features of molecules will be discussed; 3D-QSAR approaches are presented in Chapter 25. These so-called 3D-QSAR techniques, e.g. CoMFA, use the same basic statistical principles as classical QSAR methods, such as partial least squares (PLS), but in addition use three-dimensional characteristics of a molecule, specifically electronic, steric and lipophilic field effects. In these methods the molecular superposition believed to be relevant for binding to the target is crucial. [Pg.352]
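
As a minimal illustration of the statistical core shared by QSAR and 3D-QSAR, the sketch below fits a PLS model to a hypothetical descriptor matrix with scikit-learn; the data, dimensions and number of latent variables are assumptions for demonstration only.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    # Hypothetical data: rows are molecules, columns are descriptor/field values
    # (in CoMFA these would be steric and electrostatic probe energies sampled on
    # a grid around the superposed molecules); y holds the measured activities.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(40, 300))                 # 40 molecules, 300 descriptors
    w = np.zeros(300)
    w[:10] = 1.0                                   # only a few descriptors carry signal
    y = X @ w + rng.normal(scale=0.5, size=40)

    # Partial least squares with a small number of latent variables, as is usual
    # when there are far more descriptors than compounds.
    pls = PLSRegression(n_components=3)
    pls.fit(X, y)
    print("R^2 (training):", round(pls.score(X, y), 3))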

Linear prediction is a method for obtaining the resonance frequencies and relaxation rates directly from time-domain signals, which are a superposition of damped exponentials, by solving the characteristic polynomial. Phases and intensities are then calculated using a least-squares procedure. The correlation spectroscopy of a two-dimensional NMR spectrometer employs several specific programs such as RELAY and TOCSY. The recognition of response peaks, the isolation of signals from noise and artifacts, and the determination of spectral positions (e.g., chemical shifts) are all carried out by computer. [Pg.488]
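
A minimal sketch of the idea (not any particular spectrometer's implementation) is given below: a synthetic signal built from two damped complex exponentials is analysed by forward linear prediction, the roots of the characteristic polynomial give the frequencies and decay rates, and a second, purely linear least-squares step recovers the amplitudes and phases. All numerical values are illustrative.

    import numpy as np

    # Synthetic time-domain signal: superposition of two damped complex exponentials.
    # Each entry is (amplitude, frequency in cycles/sample, decay per sample, phase).
    N = 256
    n = np.arange(N)
    components = [(1.0, 0.10, 0.010, 0.3), (0.6, 0.23, 0.020, 1.1)]
    signal = sum(a * np.exp((2j * np.pi * f - d) * n + 1j * ph)
                 for a, f, d, ph in components)

    # Forward linear prediction x[k] = sum_j c_j x[k-j]; the prediction
    # coefficients follow from a linear least-squares problem.
    M = 2                                   # prediction order = number of components here
    A = np.column_stack([signal[M - j - 1:N - j - 1] for j in range(M)])
    c, *_ = np.linalg.lstsq(A, signal[M:], rcond=None)

    # Roots of the characteristic polynomial give frequencies and relaxation rates.
    roots = np.roots(np.concatenate(([1.0], -c)))
    freqs = np.angle(roots) / (2 * np.pi)
    decays = -np.log(np.abs(roots))

    # Amplitudes and phases follow from a second linear least-squares fit.
    basis = np.column_stack([root ** n for root in roots])
    amps, *_ = np.linalg.lstsq(basis, signal, rcond=None)

    order = np.argsort(freqs)
    print("frequencies:", np.round(freqs[order], 3))         # expect ~[0.10, 0.23]
    print("decay rates:", np.round(decays[order], 3))        # expect ~[0.010, 0.020]
    print("amplitudes :", np.round(np.abs(amps[order]), 3))  # expect ~[1.0, 0.6]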

Another method of analyzing the spectra is to assume that they consist of linear superpositions of component spectra (also referred to as basis spectra). Each component spectrum corresponds to a biochemical constituent of the cell (protein, DNA/RNA, carbohydrate, lipid, etc.). The linear coefficients that multiply each component spectrum to reproduce the measured spectrum are determined using a least-squares optimization. [Pg.176]
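
A minimal sketch of this decomposition, with entirely synthetic basis spectra standing in for measured protein, DNA/RNA and lipid references, might look as follows.

    import numpy as np

    # Synthetic "basis spectra" on a common wavenumber axis (illustrative only).
    x = np.linspace(800, 1800, 500)

    def band(center, width):
        return np.exp(-0.5 * ((x - center) / width) ** 2)

    basis = np.column_stack([
        band(1655, 30) + 0.5 * band(1545, 25),   # "protein"
        band(1085, 20) + 0.4 * band(1240, 25),   # "DNA/RNA"
        band(1740, 15) + 0.8 * band(1450, 20),   # "lipid"
    ])

    # A measured cell spectrum is modelled as a linear superposition of the
    # component spectra plus noise.
    rng = np.random.default_rng(1)
    true_weights = np.array([1.0, 0.35, 0.6])
    measured = basis @ true_weights + rng.normal(scale=0.01, size=x.size)

    # Least-squares estimate of the contribution of each biochemical constituent.
    weights, *_ = np.linalg.lstsq(basis, measured, rcond=None)
    print("estimated weights:", np.round(weights, 3))   # close to [1.0, 0.35, 0.6]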

The indirect transformation technique models the r²-multiplied autocorrelation function, p(r) = r²γ(r), as a superposition of equidistant B-splines (up to a cutoff maximum particle size that must be given in advance). These splines are Fourier transformed, subjected to any applicable collimation effects (the method was originally developed for Kratky slit collimation), and then least-squares fitted to the experimental intensity distribution, so that p(r) can be computed from the fit coefficients and the untransformed B-splines. The shape of p(r) is known for many standard particles, including solid spheres, core-shell hollow spheres, and rods. [Pg.368]
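
The sketch below illustrates the idea under simplifying assumptions (pinhole geometry, no smearing, no explicit smoothness stabilisation): p(r) is expanded in equidistant cubic B-splines up to an assumed maximum dimension, each spline is Fourier transformed to an intensity contribution, and the coefficients are obtained by least squares against a synthetic solid-sphere intensity. All values and grids are hypothetical.

    import numpy as np
    from scipy.interpolate import BSpline

    # Equidistant cubic B-splines on [0, Dmax]; Dmax (the assumed maximum particle
    # dimension) must be chosen in advance, as the text notes.
    Dmax, k, n_basis = 8.0, 3, 8
    inner = np.linspace(0.0, Dmax, n_basis - k + 1)
    knots = np.concatenate([np.zeros(k), inner, np.full(k, Dmax)])
    r = np.linspace(1e-6, Dmax, 400)
    q = np.linspace(0.05, 3.0, 200)
    B = np.column_stack([BSpline(knots, np.eye(n_basis)[i], k)(r)
                         for i in range(n_basis)])

    # Fourier transform of each spline: I_i(q) = 4*pi * integral B_i(r) sin(qr)/(qr) dr
    dr = r[1] - r[0]
    A = 4.0 * np.pi * np.sinc(np.outer(q, r) / np.pi) @ B * dr

    # Synthetic "experimental" intensity: solid sphere of radius R plus noise
    # (the original method would also apply slit smearing before fitting).
    R = 3.0
    qR = q * R
    I_exp = (3.0 * (np.sin(qR) - qR * np.cos(qR)) / qR**3) ** 2
    I_exp += np.random.default_rng(4).normal(scale=1e-4, size=q.size)

    # Least-squares fit of the spline coefficients, then reconstruct p(r).
    coeffs, *_ = np.linalg.lstsq(A, I_exp, rcond=None)
    p_fit = B @ coeffs

    # Compare with the analytically known p(r) = r^2 * gamma(r) of a solid sphere.
    xr = np.clip(r / R, 0.0, 2.0)
    p_true = r**2 * (1.0 - 0.75 * xr + xr**3 / 16.0) * (r <= 2.0 * R)
    print("fitted p(r) peaks at r =", round(r[np.argmax(p_fit)], 2),
          "| true peak at r =", round(r[np.argmax(p_true)], 2))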

Structure comparison methods are a way to compare three-dimensional structures. They are important for at least two reasons. First, they allow a similarity or distance measure to be inferred for use in constructing structural classifications of proteins. Second, they can be used to assess the success of prediction procedures by measuring the deviation from a given standard of truth, usually the experimentally determined native protein structure. Formally, the problem of structure superposition is given as two sets of points in 3D space, each connected as a linear chain. The objective is to find a maximum number of point pairs, one from each of the two sets, such that an optimal translation and rotation of one of the point sets (structural superposition) minimizes the rms (root-mean-square deviation) between the matched points. There are thus two contrary criteria to be optimized: the rms, to be minimized, and the number of matched residues, to be maximized. Clearly, a smaller number of residue pairs can be superposed with a smaller rms, and a larger number of equivalent residues at a given rms is more indicative of significant overall structural similarity. [Pg.263]


See other pages where Least-Squares Superposition Methods is mentioned: [Pg.156]    [Pg.156]    [Pg.280]    [Pg.168]    [Pg.255]    [Pg.168]    [Pg.132]    [Pg.197]    [Pg.139]    [Pg.168]    [Pg.703]    [Pg.722]    [Pg.492]    [Pg.255]    [Pg.17]    [Pg.492]    [Pg.412]    [Pg.177]    [Pg.1695]    [Pg.2618]    [Pg.63]    [Pg.69]   


