Big Chemical Encyclopedia


Forward linear prediction

Fig. 5.25 Parameters for a forward Linear Prediction with 1D WIN-NMR (above the...
GHJCOSE 1D H GH 013001.FID. Note the baseline artifacts introduced by the truncated FID. In the Linear Prediction (LP) dialog box make sure that the Execute Backward LP option is enabled and the Execute Forward LP option disabled. Set LP backward to Point to 124. Following the rules given above, vary the residual parameters First Point used for LP (recommended 196), Last Point used for LP (recommended 2047) and Number of Coefficients (recommended 128 or larger). Carefully inspect the resulting spectra with respect to spectral resolution and signal shapes and compare them with the spectrum obtained without LP. [Pg.194]

This is known as the (forward) linear prediction equation, because the time series can be used to predict subsequent data points. For a single sinusoidal component, Eq. (101) reduces to x_n = c exp[(b + iω)n Δt], where the phase... [Pg.101]
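For a single damped sinusoid, the prediction equation above collapses to a one-coefficient recursion: each point is a fixed complex multiple of the previous one. A minimal numerical check (the values of b, ω and Δt are illustrative, not from the source):

```python
import numpy as np

# A single damped sinusoid x_n = c*exp[(b + i*w)*n*dt] obeys the one-term
# forward linear prediction recursion x_{n+1} = a*x_n with
# a = exp[(b + i*w)*dt].
b, w, dt, c = -0.05, 2 * np.pi * 0.1, 1.0, 1.0 + 0.0j   # illustrative values
n = np.arange(64)
x = c * np.exp((b + 1j * w) * n * dt)

a = np.exp((b + 1j * w) * dt)            # the single LP coefficient
print(np.allclose(a * x[:-1], x[1:]))    # each next point is predicted exactly
```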

Forward linear prediction can be applied in F1 but may give artefacts and degrades the signal-to-noise. [Pg.6178]

Forward linear prediction (LP) represents an elegant solution to the t1-domain sensitivity-resolution-time dilemma. While the mathematical implementation of LP is not trivial, its underlying principle is surprisingly simple. LP can be likened to an ideal automobile race in which the vehicles travel at a constant speed. If their relative positions after 256 laps are noted, a very good estimate can be made as to what their positions will be after 1,024 laps. [Pg.247]

The method of linear prediction (LP) can play many roles in the processing of NMR data [4,5], from the rectification of corrupted or distorted data through to the complete generation of frequency-domain data from an FID as an alternative to the FT. Here we consider its most popular usage, known as forward linear prediction, which extends a truncated FID. Rather than simply appending zeros, this method, as the name suggests, predicts the values of the missing data... [Pg.57]
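The extrapolation described here can be sketched in a few lines: fit the LP coefficients by least squares on the measured points, then run the recursion forward to append the missing ones. This is a minimal illustration, not the algorithm of any particular NMR package; the function name, the default order m, and the demo signal are all invented for the example.

```python
import numpy as np

def lp_extend(fid, n_extra, m=8):
    """Extend a truncated FID by forward linear prediction.

    Fits m coefficients a_1..a_m by least squares so that
    x[n] ~= a_1*x[n-1] + ... + a_m*x[n-m] over the measured points,
    then applies the recursion to append n_extra predicted points.
    """
    x = np.asarray(fid, dtype=complex)
    N = len(x)
    # Rows n = m..N-1: predict x[n] from the m preceding points.
    A = np.column_stack([x[m - 1 - k : N - 1 - k] for k in range(m)])
    coeffs, *_ = np.linalg.lstsq(A, x[m:], rcond=None)
    out = list(x)
    for _ in range(n_extra):
        recent = np.array(out[-1 : -m - 1 : -1])   # x[n-1], ..., x[n-m]
        out.append(np.dot(coeffs, recent))
    return np.array(out)

# Demo: a noiseless two-line FID truncated to 100 points, then extended
# back to its full 200-point length (order m=2 suffices for two lines).
t = np.arange(200)
full = np.exp((-0.01 + 0.3j) * t) + 0.5 * np.exp((-0.02 + 1.1j) * t)
ext = lp_extend(full[:100], 100, m=2)
print(np.max(np.abs(ext - full)) < 1e-6)
```

On noiseless data the recursion is exact, so the extension reproduces the full FID; on real data the predicted tail is only an estimate, which is why the excerpts above advise comparing the result against the spectrum obtained without LP.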

Forward linear prediction can be used to add points on to the end of a digitized FID, or it can be used to add to a 2-D data matrix additional digitized FIDs corresponding to those that would be obtained if longer t1 evolution times were used. [Pg.337]

W.F. Reynolds, R.G. Enriquez, The advantages of forward linear prediction over multiple aliasing for obtaining high-resolution HSQC spectra in systems with extreme spectral crowding, Magn. Reson. Chem. 41 (2003) 927-932. [Pg.227]

A variety of algorithms exist for finding the coefficients and multipliers with which the experimental data can be extrapolated forwards or backwards; all share some common problems. Linear prediction has difficulty distinguishing between positive and negative decay constants, and so is best suited to time series in which all the decay constants are either positive or negative, allowing spurious values to be rejected. The number m of coefficients to be used has to be decided by the ex-...
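One common remedy for the spurious-value problem described here (used in several LP implementations, though not necessarily the one this passage refers to) is root reflection: compute the roots of the LP characteristic polynomial, whose modulus encodes the decay and whose phase encodes the frequency, and reflect any root of modulus greater than one back inside the unit circle, flipping the sign of the offending decay constant while preserving its frequency. A sketch, with an invented two-root example:

```python
import numpy as np

def reflect_spurious_roots(coeffs):
    """Given LP coefficients a_1..a_m for the recursion
    x[n] = a_1*x[n-1] + ... + a_m*x[n-m], reflect any root of the
    characteristic polynomial z^m - a_1*z^(m-1) - ... - a_m that lies
    outside the unit circle (a growing, hence spurious, component for a
    decaying FID) to 1/conj(z), and rebuild the coefficients."""
    coeffs = np.asarray(coeffs, dtype=complex)
    poly = np.concatenate(([1.0], -coeffs))
    roots = np.roots(poly)
    roots = np.where(np.abs(roots) > 1, 1 / np.conj(roots), roots)
    new_poly = np.poly(roots)     # monic polynomial from corrected roots
    return -new_poly[1:]

# Example: one genuine decaying root (0.9) and one spurious growing
# root (1.25), which gets reflected to 1/1.25 = 0.8.
a = np.array([0.9 + 1.25, -(0.9 * 1.25)])   # coeffs for roots 0.9, 1.25
fixed = reflect_spurious_roots(a)
print(np.sort(np.abs(np.roots(np.concatenate(([1], -fixed))))))
```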

To provide a comparison, we also evaluated forecast errors and cost for the rules used by the planners at this retailer, the k-median method based on store descriptors, alone and combined with sales mix differences, and two standard approaches to variable selection in linear regression, since the problem of choosing k test stores and a linear prediction function based on test sales at these stores can be viewed as choosing the best k out of n possible variables in a linear regression. Given actual sales S_p and test sales S_ip for i = 1,...,n and p = 1,...,m, we used the forward selection and backward elimination methods (Myers, 1990) to choose k out of the n test... [Pg.119]
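The forward selection mentioned here can be sketched as a greedy search that, at each step, adds the candidate column giving the largest drop in the residual sum of squares. This is a generic textbook sketch (the function name and toy data are invented), not the code used in the retailer study:

```python
import numpy as np

def forward_select(X, y, k):
    """Greedy forward selection: repeatedly add the column of X that most
    reduces the least-squares residual until k columns are chosen."""
    n, p = X.shape
    chosen = []
    for _ in range(k):
        best, best_rss = None, np.inf
        for j in range(p):
            if j in chosen:
                continue
            cols = X[:, chosen + [j]]
            beta, *_ = np.linalg.lstsq(cols, y, rcond=None)
            r = y - cols @ beta
            rss = r @ r
            if rss < best_rss:
                best, best_rss = j, rss
        chosen.append(best)
    return chosen

# Toy example: y depends only on columns 0 and 2, so forward selection
# with k = 2 should recover exactly those columns.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = 3 * X[:, 0] - 2 * X[:, 2] + 0.01 * rng.normal(size=100)
print(sorted(forward_select(X, y, 2)))
```

Backward elimination is the mirror image: start from all n columns and repeatedly drop the one whose removal increases the residual least.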

Model predictive control, as put forward in Section 9.4.1, is commonly known in its linear form as receding horizon model predictive control and has been used primarily in the control... [Pg.279]

A criterion of mechanism based on the Hammett acidity function, H0 (Section 3.2, p. 130), has long been used to decide the type of question raised by the choice between Mechanisms I and II in Scheme 8. Since in strongly acidic media the concentration of the protonated substrate should be proportional to h0, the reaction rate for a unimolecular decomposition of this protonated substrate (Mechanism I) should also be proportional to h0, whereas if a water molecule is required (Mechanism II), the rate should follow H3O+ concentration instead. This test, known as the Zucker-Hammett hypothesis,76 when applied to acetal and ketal hydrolysis, appears to confirm the A-1 mechanism, since a linear relationship is found between rate constant and h0 at high acidity.77 Inconsistencies have nevertheless been found in application of the Zucker-Hammett hypothesis, for example failure of the plots of log k vs. -H0 to have the theoretical slope of unity in a number of cases, and failure to predict consistent mechanisms for forward and reverse reactions; the method is therefore now considered to be of doubtful validity.78 Bunnett has devised a more successful treatment (Equation 8.45), in which the parameter w measures the extent of... [Pg.430]

Figures 11 and 12 illustrate the performance of the pR2 compared with several of the currently popular criteria on a specific data set resulting from one of the drug hunting projects at Eli Lilly. This data set has IC50 values for 1289 molecules. There were 2317 descriptors (or covariates), and a multiple linear regression model was used with forward variable selection; the linear model was trained on half the data (selected at random) and evaluated on the other (hold-out) half. The root mean squared error of prediction (RMSE) for the test hold-out set is minimized when the model has 21 parameters.
Figure 11 shows the model size chosen by several criteria applied to the training set in a forward selection: for example, the pR2 chose 22 descriptors, the Bayesian Information Criterion chose 49, Leave-One-Out cross-validation chose 308, the adjusted R2 chose 435, and the Akaike Information Criterion chose 512 descriptors in the model. Although the pR2 criterion selected considerably fewer descriptors than the other methods, it had the best prediction performance. Also, only pR2 and BIC had better prediction on the test data set than the null model.
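The gap between the BIC's 49 descriptors and the AIC's 512 follows directly from their penalty terms. For a Gaussian linear model with n training points, k parameters and residual sum of squares RSS, the standard forms are AIC = n ln(RSS/n) + 2k and BIC = n ln(RSS/n) + k ln n; since ln n > 2 for n > 7, BIC charges more per added descriptor. A small numerical illustration (the RSS figures are invented; only n = 644, roughly half of the 1289 molecules, comes from the text):

```python
import numpy as np

def aic(rss, n, k):
    # Akaike Information Criterion for a Gaussian linear model.
    return n * np.log(rss / n) + 2 * k

def bic(rss, n, k):
    # Bayesian Information Criterion: heavier per-parameter penalty.
    return n * np.log(rss / n) + k * np.log(n)

# An extra descriptor that shaves 0.5% off the RSS: AIC accepts it,
# BIC rejects it (n = 644 training points, illustrative RSS = 100).
n, rss = 644, 100.0
aic_accepts = aic(0.995 * rss, n, 11) < aic(rss, n, 10)
bic_accepts = bic(0.995 * rss, n, 11) < bic(rss, n, 10)
print(aic_accepts, bic_accepts)
```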


© 2024 chempedia.info