
Comparative modeling evaluation

Alwyn Jones, T. and Kleywegt, G. J. (1999) CASP3 comparative modeling evaluation. Proteins Suppl. 3, 30-46. [Pg.503]

Frazao, C., Topham, C., Dhanaraj, V. and Blundell, T. L. (1994) Comparative modelling of human renin: retrospective evaluation of the model with respect to the X-ray crystal structure. Pure and Applied Chemistry 66, 43-50. [Pg.575]

Figure 12 ModBase, a database of comparative protein structure models. Screenshots of the following ModBase panels are shown: a form for searching for the models of a given protein, a summary of the search results, a summary of the models of a given protein, details about a single model, the alignment on which a given model was based, the 3D model displayed by RASMOL [237], and a model evaluation by the ProsaII profile [217].
N Srinivasan, TL Blundell. An evaluation of the performance of an automated procedure for comparative modelling of protein tertiary structure. Protein Eng 6:501-512, 1993. [Pg.304]

The merits and demerits of the many computer-simulation approaches to grain growth are critically analysed in a book chapter by Humphreys and Hatherly (1995), and the reader is referred to it to gain an appreciation of how alternative modelling strategies can be compared and evaluated. A still more recent and very clear critical comparison of the various modelling approaches is by Miodownik (2001). [Pg.476]

System Representation Errors. System representation errors refer to differences between the processes and the time and space scales represented in the model and those that determine the response of the natural system. In essence, these errors are the major ones of concern when one asks "How good is the model?". When comparing model output with observed data in an attempt to evaluate model capabilities, the analyst must understand the major natural processes, and human impacts, that influence the observed data. Differences between model output and observed data can then be analyzed in light of the limitations of the model algorithm used to represent a particularly critical process, and to ensure that all such critical processes are modeled to an appropriate level of detail. For example, a... [Pg.159]
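The comparison described above can be reduced to a few summary statistics. Below is a minimal sketch, with hypothetical numbers, of quantifying the difference between model output and observed data by bias and RMSE; the arrays and values are illustrative only.

import numpy as np

observed = np.array([2.1, 2.4, 3.0, 2.8, 3.5])   # hypothetical field observations
simulated = np.array([2.0, 2.6, 2.7, 3.1, 3.2])  # hypothetical model output

residuals = simulated - observed
bias = residuals.mean()                  # systematic over- or under-prediction
rmse = np.sqrt((residuals ** 2).mean())  # overall magnitude of the error

print(f"bias = {bias:.3f}, RMSE = {rmse:.3f}")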

The model is evaluated by means of the provided test-case data. For confidentiality reasons, the industry case data are modified in their values and in selected volume parameters; optimization results will therefore not match the actual business directly. However, the provided data are realistic enough to test the sensitivity of the model and to compare model reactions under different scenarios. Two test types are conducted ... [Pg.214]

Compare modeling predictions with the experimental data shown in Fig. 14.11, assuming plug flow. Evaluate how well the model describes methane oxidation under these conditions. Using the model, assess whether addition of hydrogen may enhance methanol selectivity. [Pg.615]
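The detailed methane-oxidation kinetics behind Fig. 14.11 are not reproduced here. As a purely illustrative sketch of the plug-flow assumption, the following integrates a hypothetical first-order methane consumption along an isothermal plug-flow reactor and compares it with made-up data points; the rate constant and measurements are assumptions, not values from the figure.

import numpy as np
from scipy.integrate import solve_ivp

k = 1.5  # hypothetical first-order rate constant, 1/s

def pfr(tau, c):
    # Plug flow: composition evolves with residence time tau only.
    return [-k * c[0]]

sol = solve_ivp(pfr, (0.0, 2.0), [1.0], dense_output=True)

tau_data = np.array([0.2, 0.6, 1.2, 1.8])   # hypothetical measurements
c_data = np.array([0.76, 0.42, 0.17, 0.08])

c_model = sol.sol(tau_data)[0]
print("relative error:", np.abs(c_model - c_data) / c_data)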

To compare and evaluate the different kinds of models created during the model development process, graphical and statistical methods should be applied. A good description of the model-building process can be found elsewhere [14]. [Pg.461]
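As an illustration of such a statistical comparison, the following sketch fits two candidate models (linear and quadratic, chosen arbitrarily) to synthetic data and compares them by R-squared; in practice this would be complemented by graphical diagnostics such as residual and parity plots.

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 20)
y = 0.5 * x**2 - x + rng.normal(0, 2, x.size)  # synthetic "observations"

def r_squared(y_obs, y_fit):
    ss_res = np.sum((y_obs - y_fit) ** 2)
    ss_tot = np.sum((y_obs - y_obs.mean()) ** 2)
    return 1 - ss_res / ss_tot

for degree in (1, 2):  # candidate models: linear vs. quadratic
    coeffs = np.polyfit(x, y, degree)
    y_fit = np.polyval(coeffs, x)
    print(f"degree {degree}: R^2 = {r_squared(y, y_fit):.3f}")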

Prior knowledge and various hypotheses are condensed into models. NONMEM determines the parameter vector, including fixed and random effects, of each model using a maximum likelihood algorithm. NONMEM uses each model to predict the observed data set and selects the best PPK parameter vector by minimising the deviation between model prediction and observed data. Comparing model fits using the criteria discussed in the section Evaluation should decide which hypothesis is the most likely. As a general rule, the model should be as simple as possible and the number of parameters should be kept to a minimum. [Pg.748]
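NONMEM itself is not shown here; the following is a generic sketch of the idea just described: estimate each candidate model's parameters by maximum likelihood, then prefer the most parsimonious adequate model via a penalised criterion (AIC is used here as an assumption; NONMEM reports an objective function value instead). All models, data and starting values are hypothetical.

import numpy as np
from scipy.optimize import minimize

t = np.array([0.5, 1, 2, 4, 8])
conc = np.array([8.2, 6.9, 4.8, 2.4, 0.6])  # hypothetical concentrations

def neg_log_likelihood(params, model):
    *theta, sigma = params
    pred = model(t, theta)
    # Gaussian residual model: deviation between prediction and observation.
    return 0.5 * np.sum(np.log(2 * np.pi * sigma**2) + (conc - pred) ** 2 / sigma**2)

models = {
    "monoexponential": (lambda t, th: th[0] * np.exp(-th[1] * t),
                        [10, 0.3, 1.0]),
    "biexponential": (lambda t, th: th[0] * np.exp(-th[1] * t) + th[2] * np.exp(-th[3] * t),
                      [5, 0.1, 5, 1.0, 1.0]),
}

for name, (model, x0) in models.items():
    fit = minimize(neg_log_likelihood, x0, args=(model,), method="Nelder-Mead")
    aic = 2 * fit.fun + 2 * len(x0)  # parsimony: penalise extra parameters
    print(f"{name}: AIC = {aic:.1f}")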

Figure 3.3 and Table 3.1 are shown here to indicate the spread of available observations for model evaluation. Figure 3.3 shows a comparison of the water vapor mixing ratio as predicted by WRF/Chem for the same time period against aircraft observations. Similar comparisons are available for other meteorological parameters as well as for many chemical constituents, including ozone, PM species and ozone precursors (Table 3.1). Detailed results are displayed on the Web at http://www.al.noaa.gov/ICARTT/modeleval. [Pg.50]

Transport and dispersion were evaluated without any form of tuning by comparing a simulation of the ETEX-1 release to the official measurements of surface concentration. To facilitate comparisons with models evaluated during ATMES II (Atmospheric Transport Model Evaluation Study), an identical statistical methodology was employed (Mosca et al. 1998). Background values were subtracted so that only the pure tracer concentration was used. Measurements of zero concentration (concentrations below the background level) were included in the time series to the extent that they lay between two non-zero measurements or within two positions before or after a non-zero measurement. In this way, spurious correlations between predicted and measured zero values far away from the plume track are reduced. [Pg.65]
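One possible reading of the zero-inclusion rule above can be made concrete in a few lines. In this sketch (with a hypothetical series), a zero measurement is retained if non-zero measurements occur both before and after it, or if a non-zero measurement lies within two positions of it; the exact rule in Mosca et al. (1998) may differ in detail.

def filter_series(values):
    nonzero_idx = [i for i, v in enumerate(values) if v > 0]
    kept = []
    for i, v in enumerate(values):
        if v > 0:
            kept.append((i, v))
            continue
        between = any(j < i for j in nonzero_idx) and any(j > i for j in nonzero_idx)
        near = any(abs(j - i) <= 2 for j in nonzero_idx)
        if between or near:
            kept.append((i, v))  # zero retained near the plume track
    return kept

series = [0.0, 0.0, 0.0, 1.2, 0.0, 0.4, 0.0, 0.0, 0.0, 0.0]
print(filter_series(series))
# Distant zeros (e.g. index 0) are dropped, reducing spurious correlations.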

To evaluate the deposition routines, a simulation of the Chernobyl accident was carried out and compared to measurements of total deposited Cesium-137 (Cs-137). The measurements were extracted from the Radioactivity Environmental Monitoring database at the Joint Research Centre, Ispra, Italy (http://rem.jrc.cec.eu.int/). The comparison date was chosen to be 1 May 1986 at 12:00 UTC, since at this time the greatest number of measurements was available. Statistical measures were calculated following the recommendations of the Atmospheric Transport Model Evaluation Study (ATMES) final report (Klug et al. 1992). [Pg.66]
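The ATMES report's exact list of statistical measures is not reproduced here. As an assumption, the sketch below computes three measures commonly used in dispersion-model evaluation: fractional bias (FB), normalised mean square error (NMSE), and the fraction of predictions within a factor of two of the observations (FA2). All values are hypothetical.

import numpy as np

obs = np.array([12.0, 30.0, 8.0, 55.0, 20.0])  # hypothetical deposition measurements
mod = np.array([10.0, 42.0, 6.0, 60.0, 33.0])  # hypothetical model values

fb = 2 * (obs.mean() - mod.mean()) / (obs.mean() + mod.mean())
nmse = np.mean((obs - mod) ** 2) / (obs.mean() * mod.mean())
fa2 = np.mean((mod / obs >= 0.5) & (mod / obs <= 2.0))

print(f"FB = {fb:.2f}, NMSE = {nmse:.2f}, FA2 = {fa2:.2f}")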

Venclovas C, Zemla A, Fidelis K, Moult J. Criteria for evaluating protein structures derived from comparative modeling. Proteins 1997; 29(Suppl. 1):7-13. [Pg.457]

Schoonman, M.J. Knegtel, R.M. Grootenhuis, P.D. Practical evaluation of comparative modelling and threading methods. Comput. Chem. 1998, 22 (5), 369-375. [Pg.181]

A study on UVR susceptibility in cod eggs was undertaken in parallel with the exposure of Calanus finmarchicus reported above [2,4,111]. While the BWFs for cod eggs resembled that of naked DNA, and UVR effects were indeed observed under natural solar intensities, it was nevertheless concluded that cod eggs and embryos were far less susceptible to natural UVR than the tested copepods. While a model evaluation suggested that a more than 50% increase in damage could occur under realistic ozone-depletion scenarios for Calanus, the increase was almost negligible for cod eggs and embryos. This does not imply that UVR is... [Pg.422]






