Big Chemical Encyclopedia


Other linear methods

A large number of linear methods have been developed with particular characteristics that tend to suit them to specific deconvolution problems. None of these adaptations shows beneficial results nearly so profound as those resulting from the imposition of the physical-realizability constraints discussed in the next chapter. Furthermore, the present work is not intended [Pg.87]

The demand that the solution ô be consistent with the data i results in the improved resolution that we expect from a deconvolution method. As we have explained, however, it also results in the amplification of high-frequency noise. Smoothing this noise to some extent defeats the purpose of deconvolution. The tradeoff between smoothness and consistency is explicit in the formulation of a method first described by Phillips (1962) and further developed by Twomey (1965). In this method, we minimize the quantity [Pg.88]

The first term governs smoothness through the second differences of the solution values ô_n. The second term imposes consistency between the solution values ô_n and the data values i_m. The tradeoff is controlled by varying the parameter β. Frieden (1975) explores the method briefly. Hunt (1973) applies it to images in a computationally efficient way that uses the fast Fourier transform. [Pg.88]
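The minimized quantity lends itself to a small numerical sketch. Assuming a discrete smearing model i = S ô, a second-difference operator D, and the regularized normal equations (SᵀS + β DᵀD) ô = Sᵀ i (the toy kernel, problem size, and β value are illustrative, not from the original):

```python
import numpy as np

def second_difference_matrix(n):
    # D has n-2 rows; row k encodes o[k] - 2*o[k+1] + o[k+2]
    D = np.zeros((n - 2, n))
    for k in range(n - 2):
        D[k, k:k + 3] = [1.0, -2.0, 1.0]
    return D

def phillips_twomey(S, i_data, beta):
    """Regularized solution of i = S @ o:
    minimize ||i - S o||^2 + beta * ||D o||^2,
    i.e. o = (S^T S + beta D^T D)^-1 S^T i."""
    n = S.shape[1]
    D = second_difference_matrix(n)
    A = S.T @ S + beta * (D.T @ D)
    return np.linalg.solve(A, S.T @ i_data)

# Example: blur a spike with a 3-point smearing kernel, then restore it.
n = 40
S = np.zeros((n, n))
for m in range(n):
    for k, w in zip((m - 1, m, m + 1), (0.25, 0.5, 0.25)):
        if 0 <= k < n:
            S[m, k] = w
o_true = np.zeros(n); o_true[20] = 1.0
i_data = S @ o_true
o_hat = phillips_twomey(S, i_data, beta=1e-7)
```

With β → 0 the solution reverts to the noise-amplifying least-squares estimate; a larger β trades consistency with the data for smoothness of the solution.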

Confronted with a problem in which two data sets were available, Breedlove et al. (1977) chose a solution that minimizes a sum of terms not unlike expression (56). Available were two images: one a blurred representation of the object, the other a superposition of sharp renderings. In this sum, the right-hand term accommodates the blurred image as in expression (56). The other term incorporates the multiple exposure via the Lagrange multiplier technique. Solutions obtained by this method illustrated the desirability of using all the available data. [Pg.88]

Huang et al. (1975) and Huang (1975) described a method based on iteratively correcting the estimate ô by adding a term that involves a hyperplane projection operation. Like the relaxation methods discussed in Sections II and III, it has the potential to be upgraded by the implementation of constraints. [Pg.88]

In Section 8.2.8 we discussed the standard addition method as a means to quantitate an analyte in the presence of unknown matrix effects (cf. Section 13.9). While the matrix effect is corrected for, the presence of other analytes may still interfere with the analysis. The method can be generalized, however, to the simultaneous analysis of p analytes. Multiple standard additions are applied in order to determine the analytes of interest using many (q ≥ p) analytical sensors. It [Pg.367]

More efficient estimation methods exist than the simple method described here [17]. The generalized standard addition method (GSAM) shares the strong points (e.g. correction for interferences) and weak points (e.g. error amplification because of the extrapolation involved) of the simple standard addition method [18]. [Pg.368]

Leaving out one object at a time represents only a small perturbation to the data when the number (n) of observations is not too low. The popular LOO procedure has a tendency to lead to overfitting, giving models that have too many factors and an RMSPE that is optimistically biased. Another approach is k-fold cross-validation, where one applies k calibration steps (5 ≤ k ≤ 15), each time setting a different subset of (approximately) n/k samples aside. For example, with a total of 58 samples one may form 8 subsets (2 subsets of 8 samples and 6 of 7), each subset tested with a model derived from the remaining 50 or 51 samples. In principle, one may repeat this k-fold cross-validation a number of times using a different splitting [20]. [Pg.370]
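A minimal sketch of the splitting step for the n = 58, k = 8 example (pure bookkeeping; the shuffling seed and fold assignment are arbitrary):

```python
import random

def kfold_splits(n, k, seed=0):
    """Split indices 0..n-1 into k near-equal folds; yield (train, test) lists."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[f::k] for f in range(k)]   # fold sizes differ by at most one
    for f in range(k):
        test = set(folds[f])
        train = [i for i in idx if i not in test]
        yield train, sorted(test)

sizes = [(len(tr), len(te)) for tr, te in kfold_splits(58, 8)]
```

Repeating this with a different seed gives the "different splitting" mentioned in the passage.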

Van der Voet [21] advocates the use of a randomization test (cf. Section 12.3) to choose among different models. Under the hypothesis of equivalent prediction performance of two models, A and B, the errors obtained with these two models come from one and the same distribution. It is then allowed to exchange the observed errors, e_iA and e_iB, for the ith sample that are associated with the two models. In the randomization test this is actually done in half of the cases. For each object i the two residuals are swapped or not, each with a probability 0.5. Thus, for all objects in the calibration set about half will retain the original residuals; for the other half they are exchanged. One now computes the error sum of squares for each of the two sets of residuals, and from that the ratio F = SSE_A/SSE_B. Repeating the process some 100-200 times yields a distribution of such F-ratios, which serves as a reference distribution for the actually observed F-ratio. When, for instance, the observed ratio lies in the extreme higher tail of the simulated distribution one may [Pg.370]
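The residual-swapping scheme can be sketched as follows, with simulated paired errors standing in for real model residuals (the error distributions and the 200 repeats are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical paired prediction errors for models A and B on 30 objects.
e_a = rng.normal(0.0, 1.0, size=30)
e_b = rng.normal(0.0, 1.2, size=30)   # model B made slightly worse here

f_obs = np.sum(e_a**2) / np.sum(e_b**2)   # observed F = SSE_A / SSE_B

# Randomization: swap each pair of residuals with probability 0.5.
f_ref = []
for _ in range(200):
    swap = rng.random(30) < 0.5
    ra = np.where(swap, e_b, e_a)
    rb = np.where(swap, e_a, e_b)
    f_ref.append(np.sum(ra**2) / np.sum(rb**2))

# One-sided tail probability of the observed ratio under equivalence.
p = np.mean(np.array(f_ref) <= f_obs)
```

An observed ratio far into either tail of the reference distribution argues against the hypothesis that the two models predict equally well.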


Mass transfer can alter the observed kinetic parameters of enzyme reactions. Hints of this are provided by non-linear Lineweaver-Burk plots (or other linearization methods), non-linear Arrhenius plots, or differing K_M values for native and immobilized enzymes. Different expressions have been developed for the description of apparent Michaelis constants under the influence of external mass transfer limitations by Hornby (1968) [Eq. (5.69)], Kobayashi (1971) [Eq. (5.70)], and Schuler (1972) [Eq. (5.71)]. [Pg.118]
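As a reminder of the diagnostic, the Lineweaver-Burk double-reciprocal plot is exactly linear for ideal Michaelis-Menten kinetics, so mass-transfer limitation shows up as curvature. A sketch of the ideal case (the kinetic constants and substrate levels are arbitrary):

```python
import numpy as np

Km, Vmax = 2.0, 10.0                      # assumed intrinsic constants
S = np.array([0.5, 1.0, 2.0, 4.0, 8.0])  # substrate concentrations
v = Vmax * S / (Km + S)                   # Michaelis-Menten rates, noise-free

# Lineweaver-Burk: 1/v = (Km/Vmax) * (1/S) + 1/Vmax, linear in 1/S.
slope, intercept = np.polyfit(1.0 / S, 1.0 / v, 1)
Vmax_hat = 1.0 / intercept
Km_hat = slope * Vmax_hat
```

With diffusion-limited data the same fit would give a systematically distorted (apparent) K_M, which is what the expressions of Hornby, Kobayashi, and Schuler describe.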

Several other linearizing methods have been published (199,202,203), but nonuniform variance is inherent in these methods as well. No single linearized plotting method will be appropriate for all IA data (200,204). It may be necessary to investigate several plotting methods before choosing the most appropriate one. This problem can be circumvented if the sigmoidal log concentration-response curve is retained. [Pg.269]

Other linearization methods have been described by Sondack (63) and Wiedemann and Riesen (64). These methods are based on adding a constant K to the total or partial areas... [Pg.659]

The previously mentioned data set with a total of 115 compounds has already been studied by other statistical methods such as Principal Component Analysis (PCA), Linear Discriminant Analysis, and the Partial Least Squares (PLS) method [39]. Thus, the choice and selection of descriptors has already been accomplished. [Pg.508]

Butene. Commercial production of 1-butene, as well as the manufacture of other linear α-olefins with even carbon atom numbers, is based on the ethylene oligomerization reaction. The reaction can be catalyzed by triethyl aluminum at 180–280°C and 15–30 MPa (150–300 atm) pressure (6) or by nickel-based catalysts at 80–120°C and 7–15 MPa pressure (7–9). Another commercially developed method includes ethylene dimerization with the Ziegler dimerization catalysts, (OR)–AlR, where R represents small alkyl groups (10). In addition, several processes are used to manufacture 1-butene from mixed butylene streams in refineries (11) (see BUTYLENES). [Pg.425]

Variables such as concentration of reactants, reaction coil length, injection volume, flow rate, etc. are studied and optimized. Reproducibility, linearity, detection limit and statistical evaluation are shown. The method's results are in good agreement with other standard methods. [Pg.356]

Kihara [20] used a core model in which the Lennard-Jones potential is assumed to hold for the shortest distance between the molecular cores instead of the molecular centers. By use of linear, tetrahedral, and other shapes of cores, various molecules can be approximated. Thomaes [41], Rowlinson [35], Hamann, McManamey, and Pearse [14], Atoji and Lipscomb [1], Pitzer [30], and Balescu [4] have used other models of attracting centers and other mathematical methods, but obtain similar conclusions. The primary effect is to steepen the potential curve so that in terms of inverse powers of the inter-... [Pg.73]

Linear viscoelasticity Linear viscoelastic theory and its application to static stress analysis is now developed. According to this theory, a material is linearly viscoelastic if, when it is stressed below some limiting stress (about half the short-time yield stress), small strains are at any time almost linearly proportional to the imposed stresses. Portions of the creep data typify such behavior and furnish the basis for fairly accurate predictions concerning the deformation of plastics when subjected to loads over long periods of time. It should be noted that linear behavior, as defined, does not always persist throughout the time span over which the data are acquired; i.e., the theory is not valid in nonlinear regions, and other prediction methods must be used in such cases. [Pg.113]
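The defining proportionality can be sketched directly: under linear viscoelasticity the creep strain equals the applied stress times a creep compliance J(t), so doubling the stress doubles the strain at every time. The power-law compliance and its parameters below are hypothetical, for illustration only:

```python
import numpy as np

def creep_strain(sigma, t, J0=1e-9, J1=2e-10, n=0.3):
    """Linear-viscoelastic creep: strain = sigma * J(t), with a hypothetical
    power-law creep compliance J(t) = J0 + J1 * t**n (units: 1/Pa)."""
    return sigma * (J0 + J1 * t**n)

t = np.linspace(1.0, 1e4, 50)       # time points, seconds
eps_1 = creep_strain(5e6, t)        # strain history under 5 MPa
eps_2 = creep_strain(10e6, t)       # doubled stress: strain doubles everywhere
```

Checking that measured creep curves scale this way with stress is exactly the test of whether the material is still in the linear region the passage warns about.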

Both the INTERFEROMETER (d) and MIRROR REFLECTION (e) methods use optical means to detect changes in linear expansion of the sample under test. The instrumentation is more complex and will not be described here. The other two methods, X-RAY LATTICE CONSTANTS (f) and SAMPLE DENSITY (g), have not been employed to any great extent for determination of α_L. [Pg.397]

For the individual types of transient measuring techniques, special names exist but their terminology lacks uniformity. The potentiostatic techniques where the time-dependent current variation is determined are often called chronoamperometric, and the galvanostatic techniques where the potential variation is determined are called chronopotentiometric. For the potentiodynamic method involving linear potential scans, the term voltammetry is used, but this term is often used for other transient methods as well. [Pg.200]

As an extension of perceptron-like networks, MLF networks can be used for non-linear classification tasks. They can, however, also be used to model complex non-linear relationships between two related series of data: descriptor or independent variables (X matrix) and their associated predictor or dependent variables (Y matrix). Used as such, they are an alternative to other numerical non-linear methods. Each row of the X-data table corresponds to an input or descriptor pattern. The corresponding row in the Y matrix is the associated desired output or solution pattern. A detailed description can be found in Refs. [9,10,12-18]. [Pg.662]
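A minimal sketch of the forward pass of such an MLF network, assuming one tanh hidden layer and a linear output layer (shapes and weights are arbitrary; training is omitted):

```python
import numpy as np

rng = np.random.default_rng(4)

def mlf_forward(X, W1, b1, W2, b2):
    """Multilayer feed-forward network: X -> tanh hidden layer -> linear Y."""
    H = np.tanh(X @ W1 + b1)      # hidden activations
    return H @ W2 + b2            # linear output layer

# Hypothetical shapes: 3 descriptors -> 5 hidden units -> 2 outputs.
W1 = rng.normal(scale=0.5, size=(3, 5)); b1 = np.zeros(5)
W2 = rng.normal(scale=0.5, size=(5, 2)); b2 = np.zeros(2)

X = rng.normal(size=(10, 3))      # 10 descriptor patterns (rows of the X matrix)
Y_hat = mlf_forward(X, W1, b1, W2, b2)
```

Each row of X maps to one row of Y_hat, mirroring the row-wise pairing of descriptor and output patterns described above.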

Relaxation methods for the study of fast electrode processes are recent developments but their origin, except in the case of faradaic rectification, can be traced to older work. The other relaxation methods are subject to errors related directly or indirectly to the internal resistance of the cell and the double-layer capacity of the test electrode. These errors tend to increase as the reaction becomes more and more reversible. None of these methods is suitable for the accurate determination of rate constants larger than 1.0 cm/s. Such errors are eliminated with faradaic rectification, because this method takes advantage of complete linearity of cell resistance and the slight nonlinearity of double-layer capacity. The potentialities of the faradaic rectification method for measurement of rate constants of the order of 10 cm/s are well recognized, and it is hoped that by suitably developing the technique for measurement at frequencies above 20 MHz, it should be possible to measure rate constants even of the order of 100 cm/s. [Pg.178]

In principle, it would be possible to perform multistage mass spectrometry as in an ICR analyzer, although with no gas present CID would of course not be possible; other dissociation methods could, however, be employed. There might also be technical issues. At the time of writing, fragmentation is performed in the linear QIT preceding the orbitrap in Thermo Fisher Scientific's instrument. Both pulsed and continuous ion sources can be employed, and several ion sources are available for Thermo Fisher Scientific's orbitrap. [Pg.58]

It is interesting to note that of the three linear regression methods used, viz., RR, PCR, and PLS, RR significantly outperformed the other two methods. This is in line with our earlier observations with HiQSARs using the three methods [30,37,38,46]. [Pg.488]
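For reference, RR (ridge regression) has the closed form b = (XᵀX + λI)⁻¹ Xᵀy, with λ = 0 recovering ordinary least squares. A sketch on simulated data (the design, coefficients, and λ value are illustrative):

```python
import numpy as np

def ridge(X, y, lam):
    """Closed-form ridge regression: b = (X^T X + lam * I)^-1 X^T y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 5))
b_true = np.array([1.0, -2.0, 0.0, 0.5, 3.0])
y = X @ b_true + rng.normal(scale=0.1, size=50)

b_ols = ridge(X, y, 0.0)      # lam = 0 reduces to ordinary least squares
b_rr = ridge(X, y, 10.0)      # lam > 0 shrinks the coefficients toward zero
```

The shrinkage penalty is what stabilizes RR when the descriptors are highly collinear, a common situation in QSAR data sets.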

It tests all linear contrasts among the population means (the other three methods confine themselves to pairwise comparisons, except that they use a Bonferroni-type correction procedure). [Pg.927]

A simple strategy for variable selection is based on the information of other multivariate methods like PCA (Chapter 3) or PLS regression (Section 4.7). These methods form new latent variables by using linear combinations of the regressor... [Pg.157]

The article is organized as follows. The main features of the linear response theory methods at different levels of correlation are presented in Section 2. Section 3 describes the calculation of the dipole and quadrupole polarizabilities of two small diatomic molecules, LiH and HF. Different computational aspects are discussed for each of them. The LiH molecule permits very accurate MCSCF studies employing large basis sets and CASs. This gives us the opportunity to benchmark the results from the other linear response methods with respect to both the shape of the polarizability radial functions and their values in the vibrational ground states. The second molecule, HF, is undoubtedly one of the most studied molecules. We use it here in order to examine the dependence of the dipole and quadrupole polarizabilities on the size of the active space in the CAS and RASSCF approaches. The conclusions of this study will be important for our future studies of dipole and quadrupole polarizabilities of heavier diatomic molecules. [Pg.187]

The limit of determination is commonly estimated by finding the intercept of extrapolated linear parts of the calibration curve (see point L.D. in fig. 5.1). However, it is often difficult to construct a straight line through the experimental potentials at low concentrations and, moreover, the precision of the potential measurement cannot be taken into consideration. Therefore, it has been recommended that, by analogy with other analytical methods, the determination limit be found statistically, as the value differing with a certain probability from the background [94]. [Pg.104]
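One common statistical convention of this kind takes the determination limit as the signal exceeding the mean background by three standard deviations, converted to concentration through the calibration function. A sketch under the simplifying assumption of a linear calibration (all numbers are invented):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical blank (background) signal readings, mV.
blank = rng.normal(loc=2.0, scale=0.5, size=20)

# Statistical determination limit: signal exceeding the mean background by
# 3 standard deviations (one common convention; the chosen probability
# level, and hence the multiplier, varies between recommendations).
y_limit = blank.mean() + 3.0 * blank.std(ddof=1)

# Convert to concentration via an assumed linear calibration y = a + b * c.
a, b = 2.0, 25.0          # intercept (background level) and slope, assumed
c_limit = (y_limit - a) / b
```

This replaces the graphical intercept construction (point L.D. in Fig. 5.1) with a criterion that explicitly incorporates the precision of the background measurement.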

For simultaneous solution of (16), however, the equivalent set of DAEs (and the problem index) changes over the time domain as different constraints are active. Therefore, reformulation strategies cannot be applied since the active sets are unknown a priori. Instead, we need to determine a maximum index for (16) and apply a suitable discretization, if it exists. Moreover, BDF and other linear multistep methods are also not appropriate for (16), since they are not self-starting. Therefore, implicit Runge-Kutta (IRK) methods, including orthogonal collocation, need to be considered. [Pg.240]
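The self-starting property is easy to see for the simplest IRK method, the implicit midpoint rule (the one-stage Gauss collocation method): it advances from y_n alone, whereas a method like BDF2 needs two starting values. A sketch on the scalar test equation y' = λy, where the implicit stage equation can be solved in closed form:

```python
import math

def implicit_midpoint(lam, y0, h, steps):
    """Implicit midpoint rule for y' = lam * y:
    y_{n+1} = y_n + h * lam * (y_n + y_{n+1}) / 2,
    solved here in closed form for the linear test problem."""
    y = y0
    for _ in range(steps):
        y = y * (1 + h * lam / 2) / (1 - h * lam / 2)
    return y

lam, y0, h, steps = -1.0, 1.0, 0.01, 100
y_num = implicit_midpoint(lam, y0, h, steps)
y_exact = y0 * math.exp(lam * h * steps)   # exact solution at t = 1
```

For a general DAE the stage equations are solved numerically, but the one-step structure that makes the method self-starting is the same.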





© 2024 chempedia.info