Big Chemical Encyclopedia


Prediction error assessment, statistical

The statistical prediction errors for the unknowns are compared to the maximum statistical prediction error found from model validation in order to assess the reliability of the prediction. Prediction samples whose statistical prediction errors are significantly larger than this criterion are investigated further. In the model validation, the maximum error observed is 0.025 for component A (Figure 5.11a) and 0.019 for component B (Figure 5.11b). For unknown 1, the statistical prediction errors are within this range. For the other unknowns, the statistical prediction errors are much larger, and the predicted concentrations should therefore not be considered valid. [Pg.287]
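A minimal sketch of this screening step in Python, with hypothetical sample names and error values (only the validation maxima 0.025 and 0.019 are taken from the text above):

```python
# Reliability screen: compare each unknown's statistical prediction error
# against the maximum error observed during model validation.
max_validation_error = {"A": 0.025, "B": 0.019}  # from Figures 5.11a/b

# Hypothetical statistical prediction errors for two unknown samples.
unknown_errors = {
    "unknown 1": {"A": 0.021, "B": 0.015},
    "unknown 2": {"A": 0.130, "B": 0.090},
}

for sample, errors in unknown_errors.items():
    reliable = all(errors[c] <= max_validation_error[c] for c in errors)
    print(sample, "- prediction valid" if reliable else "- investigate further")
```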

Statistical Prediction Error vs. Sample Number Plot (Sample Diagnostic) The statistical prediction errors for the validation data are shown in Figure 5.84. No sample has an error that is unusual relative to the rest of the validation data, which further confirms the earlier conclusion that there are no outlier samples. The maximum of 0.029 will be used for assessing the reliability of prediction in Habit 6. [Pg.321]
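A sketch of such a sample diagnostic plot (numpy and matplotlib assumed; the per-sample errors here are synthetic stand-ins for the values plotted in Figure 5.84):

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic stand-ins for the statistical prediction errors of the
# validation samples; in practice these come from the calibration model.
rng = np.random.default_rng(0)
errors = rng.uniform(0.005, 0.029, size=30)

plt.plot(np.arange(1, errors.size + 1), errors, "o")
plt.axhline(errors.max(), linestyle="--",
            label=f"maximum = {errors.max():.3f}")  # reliability criterion
plt.xlabel("Sample number")
plt.ylabel("Statistical prediction error")
plt.legend()
plt.show()
```

A point standing far above the rest of the cloud would flag a potential outlier; here the maximum itself becomes the criterion carried forward to prediction.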

In order to assess the optimal complexity of a model, the RMSEP statistics for a series of models of different complexity can be compared. For PLS models, it is most common to plot the RMSEP as a function of the number of latent variables in the model. In the styrene-butadiene copolymer example, an external validation set of 7 samples was extracted from the data set, and the remaining 63 samples were used to build a series of PLS models for cis-butadiene with 1 to 10 latent variables. These models were then used to predict the cis-butadiene content of the seven samples in the external validation set. Figure 8.19 shows both the calibration fit error (RMSEE) and the validation prediction error (RMSEP) as a function of the number of latent variables. [Pg.269]
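A sketch of this complexity scan using scikit-learn's PLSRegression (the spectra and cis-butadiene values below are random stand-ins for the 63 calibration and 7 validation samples):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
X_cal = rng.normal(size=(63, 100))   # stand-in calibration spectra
y_cal = X_cal[:, :5].sum(axis=1) + rng.normal(scale=0.1, size=63)
X_val = rng.normal(size=(7, 100))    # stand-in external validation spectra
y_val = X_val[:, :5].sum(axis=1) + rng.normal(scale=0.1, size=7)

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((np.ravel(y_true) - np.ravel(y_pred)) ** 2)))

for n_lv in range(1, 11):
    pls = PLSRegression(n_components=n_lv).fit(X_cal, y_cal)
    rmsee = rmse(y_cal, pls.predict(X_cal))  # calibration fit error
    rmsep = rmse(y_val, pls.predict(X_val))  # external prediction error
    print(f"{n_lv:2d} LVs: RMSEE = {rmsee:.3f}, RMSEP = {rmsep:.3f}")
```

The optimal complexity is usually taken near the minimum of the RMSEP curve; RMSEE alone keeps decreasing as latent variables are added and would suggest an overfit model.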

The difference in approach is self-evident. In the mechanistically based model, key internal and external variables are identified. Their variabilities are readily incorporated into the model to assess the overall variability in response, and the contribution of each random variable to that variability can be readily assessed. Given the explicit functional dependence, the model, when duly validated, can be used to predict response beyond the range of the experimental data. The experientially based statistical model, on the other hand, represents a statistical fit to the data in which the key internal variables could not be identified. As such, it is incapable of capturing the functional dependence on these variables, and its usefulness is limited to the range of the experimental data. Because experimental (including measurement) errors are lumped into estimates of the fitting parameters and their variability, the quality of the subsequent reliability analyses may be overly conservative, or uncertain. A more detailed discussion of these approaches may be found in [7]. [Pg.187]
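A hypothetical sketch of the mechanistic side of this comparison: when the functional dependence on the identified variables is explicit, their variabilities can be propagated through the model by Monte Carlo sampling (the model form and distributions below are illustrative only, not from the source):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Illustrative identified random variables (not from the source text):
S = rng.normal(loc=100.0, scale=10.0, size=n)   # external variable, e.g. a load
k = rng.lognormal(mean=0.0, sigma=0.1, size=n)  # internal variable, e.g. a material parameter

response = k * S  # explicit (illustrative) functional dependence

print(f"mean response = {response.mean():.1f}, std = {response.std():.1f}")

# Fixing one variable near its mean isolates the other's contribution
# to the variability in response.
print(f"std from S alone = {(1.0 * S).std():.1f}")
print(f"std from k alone = {(k * 100.0).std():.1f}")
```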

The value of water quality data may be assessed on the basis of two aspects: the accuracy of identification of the parameter or variable measured, and the numerical accuracy. The qualitative identification must be made beyond reasonable doubt, and the quantitative measurements must be conducted precisely and accurately. Quantitative measurements must be made in such a manner that any error or uncertainty in the measurements can be tagged with a stated probability. To this end, measurements must be made in such a way as to provide statistical predictability. [Pg.4107]
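A minimal sketch of tagging a measurement with a stated probability: a 95% confidence interval on the mean of replicate determinations (scipy assumed; the replicate values are hypothetical):

```python
import numpy as np
from scipy import stats

# Hypothetical replicate measurements of one water quality parameter (mg/L).
replicates = np.array([2.31, 2.28, 2.35, 2.30, 2.33])

mean = replicates.mean()
sem = stats.sem(replicates)  # standard error of the mean
low, high = stats.t.interval(0.95, df=replicates.size - 1,
                             loc=mean, scale=sem)

print(f"mean = {mean:.3f} mg/L, 95% CI = ({low:.3f}, {high:.3f}) mg/L")
```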

On the other hand, the predictive ability of a calibration model is very difficult to assess empirically. Even the computational methods that we use are questionable: do squared errors correspond to the loss function that the user really wants? The statistical requirements are also important: how many test samples are needed to test the predictive ability, to what detail, and how should these test samples be distributed statistically in the multivariate sense? These are difficult questions for statisticians and chemists alike. On this basis, how can we assess PLSR as an NIR calibration method? [Pg.204]
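The loss-function question can at least be made concrete. The toy comparison below (hypothetical residuals) shows how squared-error and absolute-error loss can rate two models differently:

```python
import numpy as np

# Two hypothetical sets of validation residuals with equal MAE:
resid_a = np.array([0.5, 0.5, 0.5, 0.5])   # uniformly mediocre
resid_b = np.array([0.1, 0.1, 0.1, 1.7])   # mostly good, one large miss

for name, r in [("model A", resid_a), ("model B", resid_b)]:
    mae = np.abs(r).mean()
    rmsep = np.sqrt((r ** 2).mean())
    print(f"{name}: MAE = {mae:.2f}, RMSEP = {rmsep:.2f}")
```

RMSEP penalizes the single large miss in model B heavily, while MAE rates the two models the same; which behavior is right depends on the user's actual loss.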

Principal component analysis (PCA) and principal component regression (PCR) were used to analyze the data [39,40]. PCR was used to construct calibration models to predict Ang II dose from spectra of the aortas. A cross-validation routine was used with the NIR spectra to assess the statistical significance of the prediction of Ang II dose and collagen/elastin content in mouse aortas. The accuracy of the PCR method in predicting Ang II dose from NIR spectra was determined by the F test and the standard error of performance (SEP) calculated from the validation samples. [Pg.659]
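A sketch of PCR with cross-validation along these lines, using scikit-learn (PCA followed by linear regression; the spectra and doses are synthetic stand-ins, and the F test step is omitted):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(7)
X = rng.normal(size=(40, 200))  # stand-in NIR spectra of aortas
y = X[:, :10].sum(axis=1) + rng.normal(scale=0.2, size=40)  # stand-in Ang II dose

pcr = make_pipeline(PCA(n_components=5), LinearRegression())
y_cv = cross_val_predict(pcr, X, y, cv=10)  # cross-validated predictions

# Standard error of performance (SEP) from the validation residuals.
sep = np.sqrt(np.mean((y - y_cv) ** 2))
print(f"SEP = {sep:.3f}")
```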


See also: Predictable errors, Prediction statistics, Statistical assessment, Statistical error, Statistical prediction, Statistics errors
