Big Chemical Encyclopedia


Model validation workshops

Figure 1 presents an overview of the model testing/validation process as developed at the Pellston workshop. A distinction is drawn between validation of empirical versus theoretical models, as discussed by Lassiter (4). In reality, many models are combinations of empiricism and theory, with empirical formulations providing process descriptions or interactions lacking a sound, well-developed theoretical basis. The importance of field data is shown in Figure 1 for each step in the model validation process; considerations in comparing field data with model predictions will be discussed in a later section. [Pg.154]

Although the existence or absence of a particular process can often be determined from observed data, an assessment of how well an algorithm represents the process is often difficult to make due to observation errors, natural variations in field data, and lack of sufficient data on individual component processes. In such circumstances, model validity must be inferred, or possibly based on comparisons with laboratory data obtained under controlled conditions. Often laboratory data provide the basis for developing an algorithm, since field data are so much more difficult and expensive to collect and interpret. Examples of system representation errors and their analysis were presented at the Pellston workshop (6). [Pg.160]

Future needs in support of model validation and performance testing must continue to be in the area of coordinated, well-designed field data collection programs supplemented with directed research on specific topics. The FAT workshop produced a listing of the field data collection and research needs for the air, streams/lakes/estuaries, and runoff/unsaturated/saturated soil media categories, as follows ... [Pg.169]

Dimitriades, B., and M. Dodge, Eds., Proceedings of the Empirical Kinetic Modeling Approach (EKMA) Validation Workshop, EPA Report No. EPA-600/9-83-014, August 1983. [Pg.934]

Herzig, S.J.I., Paredis, C.J.J. Bayesian reasoning over models. In Workshop on Model-Driven Engineering, Verification, and Validation (2014). http://ceur-ws.org/Vol-1235/paper-09.pdf... [Pg.381]

In the past few years a variety of workshops and symposia have been held on the subjects of model verification, field validation, field testing, etc. of mathematical models for the fate and transport of chemicals in various environmental media. Following a decade of extensive model development in this area, the emphasis has clearly shifted to answering the questions "How good are these models?", "How well do they represent natural systems?", and "Can they be used for management and regulatory decision-making?"... [Pg.151]

The greatest need in model performance testing and validation is clearly the use of quantitative measures to describe comparisons of observed and predicted values. As noted above, although a rigorous statistical theory for model performance assessments has yet to be developed, a variety of statistical measures has been used in various combinations, and the frequency of use has been increasing in recent years. The FAT workshop (3) identified three general types of comparisons that are often made in model performance testing ... [Pg.168]
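The kind of quantitative observed-versus-predicted comparison called for above can be sketched in a few lines. The statistics chosen here (mean bias, RMSE, Pearson correlation) are common illustrative examples, not the specific measures enumerated in the FAT workshop report:

```python
import math

def performance_stats(observed, predicted):
    """Common quantitative measures for comparing observed and predicted
    values (illustrative choices; not taken from the FAT workshop report)."""
    n = len(observed)
    residuals = [p - o for o, p in zip(observed, predicted)]
    bias = sum(residuals) / n                       # mean error (over/under-prediction)
    rmse = math.sqrt(sum(r * r for r in residuals) / n)
    mean_o = sum(observed) / n
    mean_p = sum(predicted) / n
    cov = sum((o - mean_o) * (p - mean_p) for o, p in zip(observed, predicted))
    var_o = sum((o - mean_o) ** 2 for o in observed)
    var_p = sum((p - mean_p) ** 2 for p in predicted)
    r = cov / math.sqrt(var_o * var_p)              # Pearson correlation
    return {"bias": bias, "rmse": rmse, "r": r}

stats = performance_stats([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 4.2])
```

In practice such measures are reported together, since a model can score well on correlation while carrying a systematic bias, or vice versa.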

Le Ferrec E, Chesne C, Artursson P, Brayden D, Fabre G, Gires P, Guillou F, Rousset M, Rubas W, Scarino ML (2001) In vitro models of the intestinal barrier. The report and recommendations of ECVAM Workshop 46. European Centre for the Validation of Alternative Methods. Altern Lab Anim 29:649-668. [Pg.210]

The most important one is that the model should be appropriately validated to confirm the reliability of its predictions. The first validation rules were worked out in March 2002 at an international workshop held in Setubal, Portugal (the "Setubal Rules"). In November 2004, the rules were discussed and modified by the OECD Work Program on QSAR; they are now known as the OECD Principles. According to these principles, each QSAR model should be associated with (a) a well-defined endpoint, (b) an unambiguous algorithm, (c) a defined domain of applicability, (d) appropriate measures of goodness-of-fit, robustness and predictivity, and (e) a mechanistic interpretation, if possible [15, 16]. [Pg.204]
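A minimal sketch of a "predictivity" measure in the spirit of principle (d) is the leave-one-out cross-validated q²: each compound is held out in turn, the model is refit on the rest, and the held-out prediction error is accumulated. The one-descriptor linear model below is purely illustrative; the OECD guidance does not prescribe this particular statistic or model form:

```python
def loo_q2(x, y):
    """Leave-one-out cross-validated q^2 for a one-descriptor linear model.
    Illustrative only: q^2 near 1 suggests good predictivity, and a large
    gap between fitted r^2 and q^2 suggests overfitting."""
    n = len(x)
    press = 0.0  # predictive residual sum of squares
    for i in range(n):
        xs = [x[j] for j in range(n) if j != i]
        ys = [y[j] for j in range(n) if j != i]
        mx = sum(xs) / len(xs)
        my = sum(ys) / len(ys)
        slope = (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
                 / sum((a - mx) ** 2 for a in xs))
        intercept = my - slope * mx
        press += (y[i] - (intercept + slope * x[i])) ** 2
    mean_y = sum(y) / n
    ss_tot = sum((v - mean_y) ** 2 for v in y)
    return 1.0 - press / ss_tot
```

For perfectly linear data the statistic reaches 1; noisy data pull it below the fitted r², which is exactly the robustness signal the principle asks for.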

Worth, A.P. and Cronin, M.T.D., Report of the workshop on the validation of (Q)SARs and other computational prediction models, Proceedings of the Fourth World Congress on Alternatives to Animal Use in Life Sciences, Alternatives Lab. Anim., 2004. [Pg.213]

One key issue is the requirement for animal testing. The report and recommendations of the European Centre for the Validation of Alternative Methods (ECVAM) workshop (Leahy et al., 1997) clearly state that a major priority is to reduce the number of animals used in pharmacokinetic studies in drug development. As much information as possible should be obtained from alternative sources, such as computer modeling or using data already generated for similar compounds. [Pg.262]

M. P. Manahan, K. E. Newman, D. D. Macdonald, A. J. Peterson, Experimental Validation of the Basis for the Coupled Environment Fracture Model, in EPRI Workshop on Secondary-Side Initiated IGA/IGSCC, Minneapolis, MN, October 14-15, 1993. Center for Advanced Materials, The Pennsylvania State University, University Park, PA, in cooperation with MPM Research and Consulting, Lemont, PA; Global Technical Consultants, Inc., Centre Hall, PA; and Niagara Mohawk Power Corp. Research and Development, Syracuse, NY, 1993. [Pg.193]

Van de Sandt J, Roguet R, Cohen C, et al. (1999) The Use of Human Keratinocytes and Human Skin Models for Predicting Skin Irritation. The Report and Recommendations of ECVAM Workshop 38. European Commission, Institute for Health and Consumer Protection, European Centre for Validation of Alternative Methods (ECVAM). [Pg.2679]

Concerning the structural validation of the simulation model, the adjustment of the numerous parameters is particularly critical; these parameters include the number of actors (more than 20 attributes for each actor), the number of tools (five attributes for each tool), the dispersion and the variation of the activity durations, the probability of occurrence of certain activities, and many more. The values of these parameters can result in extremely complex system dynamics. Therefore, in the first runs of the organizational simulation, the number of persons was varied and afterwards set to the optimal number. Subsequently, the number of tools was also varied. The other factors were not examined in the first test runs. Then, the influence of the number of actors and tools on the simulated total project duration was examined in order to judge the internal validity of the simulation model. To do so, the expected durations of the individual activities were established in multiple expert workshops. As described in the following, these test runs... [Pg.467]
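The one-factor-at-a-time procedure described above (vary actors, fix the optimum, then vary tools) can be sketched with a toy stand-in for the simulation. The dynamics below are entirely hypothetical — throughput limited by the scarcer of actors and tools, a small coordination overhead, and stochastic activity-duration variation — since the real model has dozens of attributes per actor:

```python
import random

def simulate_project_duration(n_actors, n_tools, seed=0):
    """Toy stand-in for the organizational simulation (hypothetical dynamics).
    Throughput is capped by whichever of actors/tools is scarcer, reduced by
    a per-actor coordination overhead, and perturbed by duration noise."""
    rng = random.Random(seed)
    base_work = 1000.0
    throughput = min(n_actors, n_tools) * (1.0 - 0.02 * n_actors)
    noise = rng.gauss(1.0, 0.05)  # dispersion of activity durations
    return base_work / throughput * noise

# First test runs: vary the number of actors with the tool count fixed,
# then set actors to the optimum before varying tools.
durations = {n: simulate_project_duration(n, n_tools=5) for n in range(2, 11)}
best_n_actors = min(durations, key=durations.get)
```

Under these assumed dynamics the duration stops improving once actors outnumber tools, which is the kind of internal-validity check (does the response move in the expected direction?) the passage describes.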

J.C. Wren, G. Glowa and J.M. Ball, "A Simplified Iodine Chemistry and Transportation Model: Model Description and Some Validation Calculations", in Proceedings of the OECD Workshop on Iodine Aspects of Severe Accident Management, NEA/CSNI/R(99)7, Vantaa, Finland, 1999. [Pg.72]

Carissimo B., S. F. Jagger, N. C. Daish, A. Halford, S. Selmer-Olsen, K. Riikonen, J. M. Perroux, J. Würtz, J. Bartzis, N. J. Duijm, K. Ham, M. Schatzmann, and R. Hall. 1999. The SMEDIS Database and Validation Exercises, presented at the SMEDIS Workshop at the 6th International Conference on Harmonisation within Atmospheric Dispersion Modeling for Regulatory Purposes, Rouen, October 11-14. [Pg.631]

In general, historic experiments were performed primarily to explore characteristics and behaviors of tsunamis. On the other hand, a majority of recent experiments aims at providing adequate benchmark data sets for validation of numeric models. For example, benchmarking exercises for numeric models were conducted at the 1995 Friday Harbor Workshop and the 2004 Catalina Island Workshop. Objectives of laboratory experiments have evolved together with advances in measuring instruments. [Pg.1078]

One of the first experiments was a flume study of tsunami runup on a vertical wall to study the effect of complex bathymetry on this highly nonlinear phenomenon. This study was used as Benchmark Problem 3 in the International Workshop on Long Wave Models to validate numeric models. Kanoglu and Synolakis actively participated and validated their analytic method using this data. Bathymetric contours from Revere Beach, MA, were molded in the flume. This model of opportunity was selected to take advantage of a realistic complex bathymetry for a much reduced construction cost. The compound-slope, fixed-bed bathymetry consisted of three different slopes (rise:run = 1:53, 1:150, and 1:13) and a flat section in the deep end (Fig. 39.6). The vertical wall was located at the landward end of the 1:13 slope. The water depth in the flat section of the flume measured 21.8 cm. [Pg.1088]
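The compound-slope geometry can be sketched as a piecewise-linear bed profile. Only the three slopes (1:53, 1:150, 1:13) and the 21.8 cm flat-section depth come from the passage; the horizontal segment lengths used below are hypothetical placeholders:

```python
def bed_elevation(x, segments):
    """Elevation of a piecewise-linear compound-slope bed at horizontal
    distance x from the deep end. `segments` is a list of
    (horizontal_length, slope) pairs, ordered from the deep end landward."""
    z, x0 = 0.0, 0.0
    for length, slope in segments:
        if x <= x0 + length:
            return z + (x - x0) * slope
        z += length * slope
        x0 += length
    return z

# Flat section followed by the three slopes of the Revere Beach model.
# Segment lengths (metres) are hypothetical; slopes are from the passage.
profile = [(10.0, 0.0), (4.0, 1 / 53), (9.0, 1 / 150), (2.0, 1 / 13)]
samples = [bed_elevation(x, profile) for x in (0.0, 10.0, 14.0, 23.0, 25.0)]
```

A profile like this, with the still-water level fixed by the 21.8 cm flat-section depth, is what a numeric model would be initialized with when reproducing the benchmark.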

Darr, S.R., Hartwig, J.W., 2015. Validation of the flow through screen pressure drop model for screen channel liquid acquisition devices. 26th Space Cryogenics Workshop, Phoenix, AZ, June 24-26. [Pg.428]


© 2024 chempedia.info