Big Chemical Encyclopedia


Parameter errors, model validation

The process of field validation and testing of models was presented at the Pellston conference as a systematic analysis of errors (6). In any model calibration, verification, or validation effort, the model user is continually faced with the need to analyze and explain differences (i.e., errors, in this discussion) between observed data and model predictions. This requires assessments of the accuracy and validity of observed model input data, parameter values, system representation, and observed output data. Figure 2 schematically compares the model and the natural system with regard to inputs, outputs, and sources of error. Clearly there are possible errors associated with each of the categories noted above, i.e., input, parameters, system representation, and output. Differences in each of these categories can have dramatic impacts on the conclusions of the model validation process. [Pg.157]

Like ANNs, SVMs can be useful in cases where the x-y relationships are highly nonlinear and poorly understood. Several parameters need to be optimized, including the severity of the cost penalty, the threshold fit error, and the nature of the nonlinear kernel. However, if one takes care to optimize these parameters by cross-validation (Section 12.4.3) or similar methods, the susceptibility to overfitting is not as great as for ANNs. Furthermore, the deployment of SVMs is simpler than for other nonlinear modeling alternatives (such as local regression, ANNs, and nonlinear variants of PLS) because the model can be expressed completely in terms of a relatively small number of support vectors. More details regarding SVMs can be obtained from several references [70-74]. [Pg.389]
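The safeguard named above, tuning a model's free parameters by cross-validation, can be sketched generically. The harness below is a minimal k-fold grid search in Python/NumPy; for self-containment it tunes a polynomial stand-in model (its degree playing the role of the hyperparameter) rather than an actual SVM, but the same loop applies to an SVR's cost penalty, fit-error threshold, and kernel choice. All data and names here are illustrative, not from the source.

```python
import numpy as np

def kfold_splits(n, k, seed=0):
    """Shuffle indices and split them into k folds."""
    idx = np.random.default_rng(seed).permutation(n)
    return np.array_split(idx, k)

def cv_mse(X, y, degree, k=5):
    """Mean cross-validated squared error of a polynomial fit (stand-in model)."""
    folds = kfold_splits(len(y), k)
    errs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        coef = np.polyfit(X[train], y[train], degree)
        pred = np.polyval(coef, X[test])
        errs.append(np.mean((pred - y[test]) ** 2))
    return float(np.mean(errs))

# Toy data with a nonlinear (quadratic) x-y relationship.
rng = np.random.default_rng(1)
X = np.linspace(-3, 3, 200)
y = 1.0 + 2.0 * X + 1.5 * X**2 + rng.normal(0, 0.5, X.size)

# Grid search: pick the hyperparameter value with the lowest CV error.
grid = [1, 2, 3, 4, 5, 6]
scores = {d: cv_mse(X, y, d) for d in grid}
best = min(scores, key=scores.get)
```

Selecting the grid point with the lowest cross-validated error, rather than the lowest training error, is what limits the susceptibility to overfitting the excerpt describes.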

A necessary condition for the validity of a regression model is that the multiple correlation coefficient R² is as close as possible to one and the standard error of the estimate s is small. However, this condition (fitting ability) is not sufficient for model validity, as models give a closer fit (smaller s and larger R²) the larger the number of parameters and variables they contain. Moreover, unfortunately, these parameters are not related to the capability of the model to make reliable predictions on future data. [Pg.461]
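The inadequacy of fit statistics alone is easy to demonstrate numerically. In the sketch below (an illustration on synthetic data, not from the source), nested polynomial models are fitted to data whose true structure is linear: R² can only improve as parameters are added, which is exactly why a large R² cannot by itself certify a model.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 40)
y = 3.0 + 0.8 * x + rng.normal(0, 1.0, x.size)   # the true model is linear

def fit_stats(degree):
    """Return (R^2, s) for a polynomial fit with degree + 1 parameters."""
    coef = np.polyfit(x, y, degree)
    resid = y - np.polyval(coef, x)
    sse = float(np.sum(resid ** 2))
    sst = float(np.sum((y - y.mean()) ** 2))
    p = degree + 1                       # number of fitted parameters
    r2 = 1.0 - sse / sst
    s = float(np.sqrt(sse / (len(y) - p)))  # standard error of the estimate
    return r2, s

r2s, ss = zip(*(fit_stats(d) for d in range(1, 8)))
# R^2 never decreases as parameters are added, regardless of whether the
# extra parameters would help prediction on future data.
```

Because the models are nested least-squares fits, the residual sum of squares can only shrink with each added parameter, so R² is monotone in model size by construction.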

It is essential to identify and separate these two types of errors to avoid confusion. If numerical errors are not isolated, they may lead to undesirable spurious model calibration exercises. It is, therefore, necessary to devise systematic methods to quantify numerical errors. The basic idea behind error analysis is to obtain a quantitative measure of numerical errors, to devise corrective measures to ensure that numerical errors are within tolerable limits and the results obtained are almost independent of numerical parameters. Having established adequate control of numerical errors, the simulated results may be compared with experimental data to evaluate errors in physical modeling. The latter process is called model validation. Several examples of model validation are discussed in Chapters 10 to 14. In this section, some comments on error analysis are made. [Pg.224]
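One standard way to quantify numerical error, in line with the procedure described above, is systematic step (or grid) refinement: solve the same problem with successively halved numerical parameters and estimate the observed order of convergence. The sketch below does this for explicit Euler applied to dy/dt = -y, an illustrative stand-in for a full grid-refinement study.

```python
import math

def euler(n_steps, t_end=1.0):
    """Explicit Euler for dy/dt = -y, y(0) = 1, with n_steps uniform steps."""
    h = t_end / n_steps
    y = 1.0
    for _ in range(n_steps):
        y += h * (-y)
    return y

# Solutions on successively refined steps h, h/2, h/4.
y1, y2, y4 = euler(10), euler(20), euler(40)

# Observed order of convergence from three solutions (Richardson-style):
order = math.log2(abs(y2 - y1) / abs(y4 - y2))
# For explicit Euler the observed order should approach 1; a large
# discrepancy would signal a coding or consistency error, i.e. a
# numerical error that must be controlled before model validation.
```

Only once the observed order matches the expected one, and the refined solutions agree to within tolerance, is it meaningful to attribute the remaining discrepancy with experiment to physical modeling error.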

A typical structure of a closed-loop RTO system is shown in Fig. 6; it consists of subsystems for data validation, model updating, model-based optimization, and command conditioning. Raw measurements taken from the plant are filtered and checked for reliability in the data validation subsystem. Because model-based RTO systems rely on model updating to correct for modeling errors and disturbances, an effective model updating system is required to ensure that the RTO system tracks the changing optimal operations closely. Model updating, most commonly effected via on-line estimation of some set of model parameters, uses the validated data. The updated process model is then used by the model-based optimization... [Pg.2589]
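The measure, update, re-optimize cycle described above can be illustrated with a deliberately simple plant/model pair. All names and the quadratic profit form below are illustrative assumptions, not from the source: the model parameter theta is re-estimated from the (validated) measurement at each iteration, and the optimizer then moves the setpoint to the model's optimum.

```python
def plant_output(u, a_true=4.0, b=1.0):
    """'Plant': the true (unknown to the model) response at operating point u."""
    return a_true * u - b * u ** 2

def rto_loop(u0=1.0, b=1.0, n_iter=5):
    """Closed-loop RTO: measure -> update model parameter -> re-optimize."""
    u, theta = u0, 1.0                     # initial setpoint and model parameter
    for _ in range(n_iter):
        y_meas = plant_output(u)           # validated plant measurement
        theta = (y_meas + b * u ** 2) / u  # model update: estimate theta from data
        u = theta / (2 * b)                # optimum of the model theta*u - b*u^2
    return u, theta

u_opt, theta_hat = rto_loop()
# With a structurally correct model, the loop converges to the true plant
# optimum u* = a_true / (2 b) = 2.0.
```

In practice the measurement is noisy and the update is filtered or done by least-squares estimation over a data window, but the information flow (validation, updating, optimization) is the one shown in the excerpt.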

The model validation procedures in the system could also be extended. For example, in some cases it would be desirable to have a feature which automatically compares measured data with computer-generated data, serving as an additional check on the mathematical validity of the proposed model. The system could also be extended to study the effects of inherent errors in the constant parameters, possibly by executing parts of SIMULATOR several times for extreme values of each constant parameter. [Pg.69]

Finally, the prediction error model assumes that the parameter values do not change with respect to time, that is, they are time invariant. A quick and simple test of the invariance of the model is to split the data into two parts and cross-validate the models, each against the other data set. If both models perform successfully, then the parameters are probably time invariant, at least over the time interval considered. [Pg.303]
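A sketch of the split-and-cross-validate test just described, on synthetic data (the linear model form and tolerances are illustrative assumptions): fit the same model structure on each half of the record, then check that the estimated parameters agree and that each model predicts the other half acceptably.

```python
import numpy as np

rng = np.random.default_rng(42)
t = np.arange(200.0)
y = 2.0 * t + rng.normal(0, 0.5, t.size)   # a time-invariant process

half = t.size // 2
tA, yA = t[:half], y[:half]
tB, yB = t[half:], y[half:]

# Fit the same model structure on each half of the data.
coefA = np.polyfit(tA, yA, 1)
coefB = np.polyfit(tB, yB, 1)
slopeA, slopeB = coefA[0], coefB[0]

# Cross-validate: each model predicts the *other* half of the record.
rmse_A_on_B = float(np.sqrt(np.mean((np.polyval(coefA, tB) - yB) ** 2)))
rmse_B_on_A = float(np.sqrt(np.mean((np.polyval(coefB, tA) - yA) ** 2)))

# If both cross-fits succeed and the parameter estimates agree, the
# parameters are probably time invariant over the interval considered.
```

A drifting parameter would show up here as disagreeing slope estimates and a cross-prediction error much larger than the fit error on the model's own half.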

After the determination of the best-fit parameter values, the validity of the selected model should be tested: does this model describe the available data properly, or are there still indications that a part of the data is not explained by the model, pointing to remaining model errors? We need the means to assess whether or not the model is appropriate; that is, we need to test the goodness-of-fit against some useful statistical standard. [Pg.31]
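One common statistical standard for the goodness-of-fit question posed above is the reduced chi-square: when measurement uncertainties are known, the weighted sum of squared residuals should be comparable to the number of degrees of freedom. The sketch below computes it for a straight-line fit to synthetic data (the data and model form are illustrative).

```python
import numpy as np

rng = np.random.default_rng(7)
sigma = 0.3                                   # known measurement uncertainty
x = np.linspace(0, 5, 100)
y = 1.2 + 0.7 * x + rng.normal(0, sigma, x.size)

# Best-fit parameters for the (here, structurally correct) straight-line model.
coef = np.polyfit(x, y, 1)
resid = y - np.polyval(coef, x)

dof = x.size - 2                              # data points minus fitted parameters
chi2 = float(np.sum((resid / sigma) ** 2))
chi2_red = chi2 / dof
# chi2_red ~ 1: the model explains the data to within the measurement errors.
# chi2_red >> 1: part of the data is not explained -- remaining model errors.
```

Structured (non-random) residual patterns are the complementary diagnostic: a reduced chi-square near one with trending residuals still indicates model error.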

The maximum number of latent variables is the smaller of the number of x values or the number of molecules. However, there is an optimum number of latent variables in the model beyond which the predictive ability of the model does not increase. A number of methods have been proposed to decide how many latent variables to use. One approach is to use a cross-validation method, which involves adding successive latent variables. Both leave-one-out and the group-based methods can be applied. As the number of latent variables increases, the cross-validated R² will first increase and then either reach a plateau or even decrease. Another parameter that can be used to choose the appropriate number of latent variables is the standard deviation of the error of the predictions, s_PRESS ... [Pg.725]
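The selection logic above can be sketched with principal-component regression as a stand-in for PLS (components play the role of latent variables), scoring each model size by its leave-one-out PRESS. All data below are synthetic, with two true latent factors by construction.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 30, 8
T = rng.normal(size=(n, 2))                     # two true latent factors
P = rng.normal(size=(2, p))
X = T @ P + 0.1 * rng.normal(size=(n, p))
y = T @ np.array([3.0, 2.0]) + 0.2 * rng.normal(size=n)

def loo_press(n_comp):
    """Leave-one-out PRESS for principal-component regression."""
    press = 0.0
    for i in range(n):
        mask = np.arange(n) != i
        Xt, yt = X[mask], y[mask]
        Xc = Xt - Xt.mean(axis=0)
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        V = Vt[:n_comp].T                       # loadings of the first components
        scores = Xc @ V
        b, *_ = np.linalg.lstsq(scores, yt - yt.mean(), rcond=None)
        pred = ((X[i] - Xt.mean(axis=0)) @ V) @ b + yt.mean()
        press += (pred - y[i]) ** 2
    return float(press)

press = {k: loo_press(k) for k in range(1, 6)}
best_k = min(press, key=press.get)
# PRESS drops sharply up to the true number of latent factors (2 here),
# then plateaus or rises -- the pattern used to choose the model size.
```

The group-based (leave-several-out) variant mentioned in the excerpt only changes how the held-out sets are formed; the PRESS-versus-components curve is read the same way.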

Recorded kinetic curves were fitted to the five-parameter Equation (1). The parameters p_j with their errors and the standard deviations of the regressions are summarized in Tables 1-6. Comparison of the data confirms the previously reported (refs. 8, 12) similarity in the behavior of the two isomers in the presence of strong bases, in spite of the different shapes of the kinetic curves. The relatively good agreement of the exponents p2 and p4 computed for the diastereomers at the same temperature and amine concentration demonstrates the validity of the model used. From comparison of Equations (4) and (7) it follows that both reactions must give the same exponent. [Pg.268]

It is the main aim of semiempirical chromatographic models to couple the empirical parameters of retention with the established thermodynamic quantities generally used in physical chemistry. The validity of a model for chromatographic practice can hardly be overestimated, because it often helps to overcome the old trial-and-error approach to running analyses, especially when incorporated in a separation-selectivity-oriented optimization strategy. [Pg.17]

For time-series data, the contiguous block method can provide a good assessment of the temporal stability of the model, whereas the Venetian blinds method can better assess nontemporal errors. For batch data, one can either specify custom subsets where each subset is assigned to a single batch (i.e., leave-one-batch-out cross-validation), or use Venetian blinds or contiguous blocks to assess within-batch and between-batch prediction errors, respectively. For blocked data that contains replicates, one must be very careful with the Venetian blinds and contiguous block methods to select parameters such that the replicate sample trap and the external subset traps, respectively, are avoided. [Pg.411]
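The two splitting schemes compared above differ only in how sample indices are assigned to folds; a minimal sketch (the function names are illustrative):

```python
def venetian_blinds(n, k):
    """Fold i takes every k-th sample starting at i (interleaved subsets)."""
    return [list(range(i, n, k)) for i in range(k)]

def contiguous_blocks(n, k):
    """Fold i takes the i-th contiguous run of samples."""
    bounds = [round(i * n / k) for i in range(k + 1)]
    return [list(range(bounds[i], bounds[i + 1])) for i in range(k)]

blinds = venetian_blinds(10, 3)      # fold 0 -> [0, 3, 6, 9]
blocks = contiguous_blocks(10, 3)    # fold 0 -> [0, 1, 2]
# Contiguous blocks hold out whole time segments, probing temporal stability;
# Venetian blinds interleave samples, probing nontemporal error. With
# replicates, interleaved folds risk leaving a replicate of each held-out
# sample in the training set (the replicate sample trap noted above).
```

Leave-one-batch-out cross-validation corresponds to replacing either generator with custom folds, one per batch.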

NIR models are validated in order to ensure quality in the analytical results obtained in applying the method developed to samples independent of those used in the calibration process. Although constructing the model involves the use of validation techniques that allow some basic characteristics of the model to be established, a set of samples not employed in the calibration process is required for prediction in order to confirm the goodness of the model. Such samples can be selected from the initial set, and should possess the same properties as those in the calibration set. The quality of the results is assessed in terms of parameters such as the relative standard error of prediction (RSEP) or the root mean square error of prediction (RMSEP). [Pg.476]
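Both figures of merit named above are simple functions of the residuals on the independent validation set; a minimal sketch, using one common definition of RSEP (normalized by the reference values and expressed as a percentage; the toy numbers are illustrative):

```python
import math

def rmsep(y_ref, y_pred):
    """Root mean square error of prediction over the validation set."""
    n = len(y_ref)
    return math.sqrt(sum((r - p) ** 2 for r, p in zip(y_ref, y_pred)) / n)

def rsep_percent(y_ref, y_pred):
    """Relative standard error of prediction, % (one common definition)."""
    num = sum((r - p) ** 2 for r, p in zip(y_ref, y_pred))
    den = sum(r ** 2 for r in y_ref)
    return 100.0 * math.sqrt(num / den)

# Validation-set reference values vs. model predictions (toy numbers).
y_ref = [10.0, 12.0, 11.0, 9.0]
y_pred = [10.2, 11.7, 11.1, 9.3]
```

RMSEP carries the units of the measured property, while RSEP is dimensionless, which makes it convenient for comparing models across properties of different magnitude.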

