Big Chemical Encyclopedia


Comparative modelling validation

CM Topham, N Srinivasan, CJ Thorpe, JP Overington, NA Kalsheker. Comparative modelling of major house dust mite allergen Der p I: Structure validation using an extended environmental amino acid propensity table. Protein Eng 7:869-894, 1994. [Pg.311]

Important issues in groundwater model validation are the estimation of the aquifer physical properties and the estimation of the pollutant diffusion and decay coefficients. The aquifer properties are obtained via flow model calibration (i.e., parameter estimation; see Bear, 20), and by employing various mathematical techniques such as kriging. The other parameters are obtained by comparing model output (i.e., predicted concentrations) to field measurements, a quite difficult task because clear contaminant plume shapes do not always exist in real life. [Pg.63]
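A minimal sketch of the parameter-estimation step described above, assuming a one-dimensional advection-decay model C(x) = C0·exp(-kx/v) and fitting the decay coefficient by grid search against field measurements; the model form, velocity, and all numbers are illustrative, not from the source:

```python
import math

def predict(c0, k, v, xs):
    """Concentrations from a 1-D advection-decay model: C(x) = c0*exp(-k*x/v)."""
    return [c0 * math.exp(-k * x / v) for x in xs]

def calibrate_decay(c0, v, xs, observed, k_grid):
    """Grid-search the decay coefficient minimizing squared error vs. field data."""
    best_k, best_sse = None, float("inf")
    for k in k_grid:
        sse = sum((p - o) ** 2
                  for p, o in zip(predict(c0, k, v, xs), observed))
        if sse < best_sse:
            best_k, best_sse = k, sse
    return best_k

# "Field" data synthesized with k = 0.5, purely for illustration
xs = [0.0, 10.0, 20.0, 40.0]
obs = predict(100.0, 0.5, 2.0, xs)
k_hat = calibrate_decay(100.0, 2.0, xs, obs, [i * 0.05 for i in range(1, 41)])
print(round(k_hat, 2))  # → 0.5
```

In practice the search would be replaced by a proper optimizer and the synthetic data by real plume measurements, which, as the text notes, rarely show such clean shapes.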

The goal of this paper is to present the current status of model validation and field testing of chemical fate and transport models; other papers in this symposium discuss the state of the art of modeling specific processes, environments, and multimedia problems. The process of model validation, and its various components, is described; considerations in field testing, where model results are compared to field observations, are discussed; an assessment of the current extent of field testing for various processes and media is presented; and future field testing and data needs are enumerated. [Pg.151]

Figure 1 presents an overview of the model testing/validation process as developed at the Pellston workshop. A distinction is drawn between validation of empirical versus theoretical models as discussed by Lassiter (4). In reality, many models are combinations of empiricism and theory, with empirical formulations providing process descriptions or interactions lacking a sound, well-developed theoretical basis. The importance of field data is shown in Figure 1 for each step in the model validation process; considerations in comparing field data with model predictions will be discussed in a later section. [Pg.154]

The process of field validation and testing of models was presented at the Pellston conference as a systematic analysis of errors (6). In any model calibration, verification, or validation effort, the model user is continually faced with the need to analyze and explain differences (i.e., errors, in this discussion) between observed data and model predictions. This requires assessments of the accuracy and validity of observed model input data, parameter values, system representation, and observed output data. Figure 2 schematically compares the model and the natural system with regard to inputs, outputs, and sources of error. Clearly there are possible errors associated with each of the categories noted above, i.e., input, parameters, system representation, and output. Differences in each of these categories can have dramatic impacts on the conclusions of the model validation process. [Pg.157]

In comparing the May storms of 1978 and 1976, clearly the simulated concentration values in Figure 3 are more representative of what actually occurred than the observed values. This is not meant to be a criticism of the sampling program but an indication of how errors in observed data can exist and impact the model validation process. [Pg.163]

Frequency domain performance has been analyzed with goodness-of-fit tests such as the Chi-square, Kolmogorov-Smirnov, and Wilcoxon Rank Sum tests. The studies by Young and Alward (14) and Hartigan et al. (13) demonstrate the use of these tests for pesticide runoff and large-scale river basin modeling efforts, respectively, in conjunction with the paired-data tests. James and Burges (16) discuss the use of the above statistics and some additional tests in both the calibration and verification phases of model validation. They also discuss methods of data analysis for detection of errors; this last topic needs additional research in order to consider uncertainties in the data which provide both the model input and the output to which model predictions are compared. [Pg.169]
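The goodness-of-fit tests mentioned above compare the distribution of model predictions with that of observations. As a concrete example, the two-sample Kolmogorov-Smirnov statistic is simply the largest gap between the two empirical CDFs; a minimal pure-Python sketch, with made-up sample data:

```python
import bisect

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov D: largest gap between empirical CDFs."""
    a, b = sorted(sample_a), sorted(sample_b)

    def ecdf(s, x):
        # Fraction of the sample <= x
        return bisect.bisect_right(s, x) / len(s)

    # The maximum gap occurs at one of the observed points
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in a + b)

print(ks_statistic([1, 2, 3], [1, 2, 3]))  # → 0.0 (identical samples)
print(ks_statistic([1, 2], [3, 4]))        # → 1.0 (completely separated)
```

In a real study the D statistic would be compared against a critical value at a chosen significance level (library routines such as SciPy's two-sample KS test also return the p-value).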

Validation samples, n - a set of samples used in validating a calibration model. Validation samples are not generally part of the set of calibration samples. Reference component concentrations or property values are known (measured using a reference method), and are compared to those estimated using the model. [Pg.512]
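The comparison between reference-method values and model estimates on validation samples is commonly summarized as a root-mean-square error of prediction; a small sketch, where the metric choice and the numbers are illustrative rather than prescribed by the source:

```python
def rmsep(reference, estimated):
    """Root-mean-square error of prediction over the validation set."""
    n = len(reference)
    return (sum((r - e) ** 2 for r, e in zip(reference, estimated)) / n) ** 0.5

# Reference-method concentrations vs. model estimates (numbers invented)
ref = [10.0, 12.0, 15.0]
est = [10.2, 11.8, 15.1]
print(round(rmsep(ref, est), 3))  # → 0.173
```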

The implemented model must be tested with regard to correctness and completeness. That is, to validate the model and ensure the credibility of the simulation results, suitable scenarios with a broad spectrum of different events are reproduced with the model and compared to reality (or to expectations of reality). A successfully validated model can then be used for systematic experiments (or as part of other applications, e.g., as part of an MES). [Pg.25]

The IEM model is a simple example of an age-based model. Other more complicated models that use the residence time distribution have also been developed by chemical-reaction engineers. For example, two models based on the mixing of fluid particles with different ages are shown in Fig. 5.15. Nevertheless, because it is impossible to map the age of a fluid particle onto a physical location in a general flow, age-based models cannot be used to predict the spatial distribution of the concentration fields inside a chemical reactor. Model validation is thus performed by comparing the predicted outlet concentrations with experimental data. [Pg.214]
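The IEM (interaction by exchange with the mean) model mentioned above relaxes each fluid particle's concentration toward the mean concentration at a rate set by a mixing time. A minimal explicit-Euler sketch with the reaction source term omitted; the two-particle setup and parameter values are illustrative assumptions, not taken from the source:

```python
def iem_step(phis, tau, dt):
    """One explicit-Euler step of IEM relaxation toward the mean
    (reaction source term omitted)."""
    mean = sum(phis) / len(phis)
    return [phi + dt * (mean - phi) / tau for phi in phis]

# Two fluid particles with different concentrations mix toward the mean 0.5;
# note the mean itself is conserved by the exchange term.
phis = [0.0, 1.0]
for _ in range(1000):
    phis = iem_step(phis, tau=1.0, dt=0.01)
print(round(phis[0], 3), round(phis[1], 3))  # → 0.5 0.5
```

As the text notes, such an age-based closure predicts only outlet (mean) behavior, not where in the reactor the mixing occurs.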

Validation of the model. Validation of this model using empirical data was not done, and this may cast doubt on some of its predictions. Of particular interest is the prediction that hydroquinone production is greater following phenol administration as compared to benzene administration. This is in opposition to the prediction of the Medinsky et al. (1995) model, described below. [Pg.110]

Validation of the model. Validation of the model was performed using data from rat and mouse liver microsome preparations (Schlosser et al. 1993). The assumption that benzene and its metabolites compete for the same enzyme reaction site was supported in part by the observation of a lag time in the benzene-to-hydroquinone reaction as compared to the phenol-to-hydroquinone reaction. This lag could be explained by the fact that benzene is first hydroxylated to phenol, which is then hydroxylated to hydroquinone; if all compounds are substrates for P-450 2E1, the kinetics of this pathway would be slowed compared to those of the direct phenol-to-hydroquinone pathway. The model also adequately predicted phenol depletion and concomitant hydroquinone formation resulting from phenol incubations. [Pg.111]

To test the reliability of the previous method, the authors compared it to an independent measurement of ω. They thus propose an extended version of the previous mean-field model, valid at any stage of the coalescence regime, even in the presence of broad droplet size distributions. It is obtained by considering that the variation of the total number of coalescence events is proportional to the total surface area per unit volume developed by the droplets of different sizes. The total number of drops and the total surface area are replaced by summations over all the granulometric size intervals ... [Pg.155]

Validation of models is desired but can be difficult to achieve. Models are empirically validated by examining how output data (predictions) compare with observed data (such comparisons, of course, must be conducted on data sets that have not been used to create or specify the model). However, model validations conducted in this manner are difficult given limitations on data sources. As an alternative approach, model credibility can be assessed by a careful examination of the subcomponents of the model and its inputs. One should ask the question: Does the selection of input variables and the way they are processed make sense? Also, confidence in the model may be augmented by peer reviews and the opinion of the scientific community. Common faults and shortcomings are... [Pg.159]
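The parenthetical point above, that comparisons must use data never involved in building the model, can be sketched with a split into calibration and validation sets; the simple linear model and all data below are invented for illustration:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x (the calibration step)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Calibration data are used to fit the model ...
cal_x, cal_y = [0.0, 1.0, 2.0, 3.0], [0.1, 1.9, 4.1, 5.9]
# ... and held-out validation data, never seen during fitting, test it.
val_x, val_y = [4.0, 5.0], [8.0, 10.1]
a, b = fit_line(cal_x, cal_y)
errors = [abs((a + b * x) - y) for x, y in zip(val_x, val_y)]
print(max(errors) < 0.5)  # → True
```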

The statistical prediction errors for the unknowns are compared to the maximum statistical prediction error found during model validation in order to assess the reliability of the prediction. Prediction samples whose statistical prediction errors are significantly larger than this criterion are investigated further. In the model validation, the maximum error observed is 0.025 for component A (Figure 5.11a) and 0.019 for component B (Figure 5.11b). For unknown 1, the statistical prediction errors are within this range. For the other unknowns, the statistical prediction errors are much larger; therefore, the predicted concentrations should not be considered valid. [Pg.287]
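The decision rule described above, flagging unknowns whose statistical prediction error exceeds the maximum seen during validation, can be sketched directly; the 0.025 threshold for component A comes from the text, while the SPE values for the three unknowns are invented:

```python
def flag_unreliable(spe_values, max_validation_spe):
    """True where a sample's statistical prediction error exceeds the
    maximum SPE observed during model validation."""
    return [spe > max_validation_spe for spe in spe_values]

# 0.025 is the component-A validation maximum cited in the text;
# the three unknown-sample SPEs below are hypothetical.
flags = flag_unreliable([0.021, 0.20, 0.31], 0.025)
print(flags)  # → [False, True, True]
```

Samples flagged True correspond to predictions that, per the text, should not be considered valid without further investigation.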

Once the above-discussed components of the model have been determined, they are added to the final model of a monolith (or even filter) reactor. The monolith reactor model has already been described in Section III. The next stage is to validate the model by comparing its predictions, based on laboratory data, with real-world data measured on an engine bench or chassis dynamometer. At this stage the reason(s) for any discrepancies between prediction and experiment need to be determined and, if required, further work on the kinetics done to improve the prediction. This process can take a number of iterations. Model validation is described in more detail in Section IV.D. Once all this has been done the model can be used predictively with confidence. [Pg.62]

As discussed for TWC models (Section II), a DOC model can be used for catalyst sizing and system design. Figure 26 shows a validation plot comparing model prediction with measured data over the ESC (European Stationary Cycle); excellent agreement is observed. Good agreement has also been obtained with this model over other test cycles (York et al., 2005). [Pg.79]

Note that all three models give almost the same exit conversion and yield for methane and carbon dioxide, and that the second unit (2) is also operating relatively close to its thermodynamic equilibrium, though further from it than Plant (1). The close agreement between the industrial performance data and the simulated data for reformers (1) and (2), obtained with three different diffusion-reaction models, validates the models we have used, at least for plants operating near their thermodynamic equilibria. [Pg.497]

ANOVA of the data confirms that there is a statistically significant relationship between the variables at the 99% confidence level. The R-squared statistic indicates that the model as fitted explains 96.2% of the variability. The adjusted R-squared statistic, which is more suitable for comparing models with different numbers of independent variables, is 94.2%. The prediction error of the model is less than 10%. Results of this protocol are displayed in Table 18.1. Validation results of the regression model are displayed in Table 18.2. [Pg.1082]
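The R-squared and adjusted R-squared statistics quoted above can be computed as follows; the data are illustrative, and `n_predictors` stands for the number of independent variables in the fitted model:

```python
def r_squared_stats(observed, predicted, n_predictors):
    """Plain and adjusted R-squared for a fitted regression model."""
    n = len(observed)
    mean = sum(observed) / n
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean) ** 2 for o in observed)
    r2 = 1.0 - ss_res / ss_tot
    # Adjusted R-squared penalizes additional independent variables
    adj = 1.0 - (1.0 - r2) * (n - 1) / (n - n_predictors - 1)
    return r2, adj

obs = [1.0, 2.0, 3.0, 4.0, 5.0]
pred = [1.1, 1.9, 3.0, 4.1, 4.9]
r2, adj = r_squared_stats(obs, pred, n_predictors=1)
print(round(r2, 4), round(adj, 4))  # → 0.996 0.9947
```

The adjusted statistic is always at most the plain one, which is why the text recommends it when comparing models with different numbers of independent variables.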

Despite the relatively short history of LES fire modeling, the accuracy of the LES technique in fire simulation has been studied extensively. Early validation of FDS predecessors was performed by comparing simulations against salt water experiments [21-23], fire plumes [24,25], and room fires [26]. More recently, the FDS model has been validated for fire plumes [27] and fires in enclosures in the context of the World Trade Center investigation [28,29] and the fire model validation project sponsored by the U.S. Nuclear Regulatory Commission [30]. Some of the above cases and numerous others have been collected in the Validation Guide of FDS [4, Vol. 3] (not yet published as a separate document). [Pg.555]


See other pages where Comparative modelling validation is mentioned: [Pg.116]    [Pg.329]    [Pg.98]    [Pg.257]    [Pg.351]    [Pg.388]    [Pg.175]    [Pg.231]    [Pg.438]    [Pg.300]    [Pg.73]    [Pg.141]    [Pg.572]    [Pg.128]    [Pg.119]    [Pg.149]    [Pg.72]    [Pg.15]    [Pg.103]    [Pg.63]    [Pg.57]    [Pg.103]    [Pg.449]    [Pg.163]    [Pg.54]    [Pg.182]    [Pg.61]    [Pg.116]    [Pg.60]    [Pg.317]    [Pg.129]   
See also in source #XX -- [Pg.453]







