Big Chemical Encyclopedia


Evaluating the Model

Many researchers have tried to find methods to distinguish between correctly and incorrectly folded protein models. It has been shown that incorrect protein models can minimize to potential energy values similar to those of [Pg.351]

Another energy-based method looks at the potential of mean force for the β-carbon atoms. Using a database of known protein structures, the potential of mean force for all amino acid pairs was compiled. The conformational energy of sequences was then calculated for a number of different folds, and it was found that in most cases the native state had the lowest energy. Most of the exceptions were structures with large prosthetic groups, Fe-S clusters, or nonglobular proteins. [Pg.352]
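As a rough illustration of the idea (not the published potential), a knowledge-based score of this kind can be sketched as a sum of pairwise potential-of-mean-force terms over non-bonded residue pairs. The potential table, cutoff, and coordinates below are all hypothetical:

```python
import numpy as np

def pmf_score(coords, residue_types, pmf_table, cutoff=10.0):
    """Score a fold by summing a pairwise potential of mean force over
    all residue pairs closer than `cutoff` (angstroms). Lower scores
    indicate more native-like packing."""
    n = len(coords)
    total = 0.0
    for i in range(n):
        for j in range(i + 2, n):  # skip directly bonded neighbours
            d = np.linalg.norm(coords[i] - coords[j])
            if d < cutoff:
                total += pmf_table[residue_types[i], residue_types[j]]
    return total

# Toy check: three residues on a line, uniform potential of 1.0
pmf_table = np.ones((2, 2))
coords = np.array([[0.0, 0.0, 0.0], [4.0, 0.0, 0.0], [8.0, 0.0, 0.0]])
score = pmf_score(coords, [0, 1, 0], pmf_table)  # one non-bonded pair within cutoff
```

In practice the native fold would be compared against decoy folds, with the lowest score taken as most native-like.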

In at least one case, the crystal structure of a protein was determined after the model had been built. A comparison of the model with the crystal structure provides insight into some of the sources of error inherent in protein modeling.  [Pg.352]


Evaluating the model in terms of how well the model fits the data, including the use of posterior predictive simulations to determine whether data predicted from the posterior distribution resemble the data that generated them and look physically reasonable. Overfitting the data will produce unrealistic posterior predictive distributions. [Pg.322]
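A minimal sketch of such a posterior predictive check, assuming a simple normal data model and stand-in posterior draws (not any particular analysis from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

# Observed data (illustrative) and stand-in posterior draws of (mu, sigma)
y_obs = rng.normal(5.0, 1.0, size=50)
mu_draws = rng.normal(y_obs.mean(), 0.2, size=1000)
sigma_draws = np.abs(rng.normal(y_obs.std(), 0.1, size=1000))

# One replicated dataset per posterior draw
y_rep = rng.normal(mu_draws[:, None], sigma_draws[:, None],
                   size=(1000, y_obs.size))

# Posterior predictive p-value for a test statistic (the sample maximum):
# values near 0 or 1 flag misfit; overfitting shows up as replicates that
# track the observed data implausibly closely
p_value = float(np.mean(y_rep.max(axis=1) >= y_obs.max()))
```

Other test statistics (minimum, skewness, a physically meaningful quantity) can be substituted for the maximum depending on what "physically reasonable" means for the system at hand.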

Having demonstrated that our simulation reproduces the neutron data reasonably well, we may critically evaluate the models used to interpret the data. For the models to be analytically tractable, it is generally assumed that the center-of-mass and internal motions are decoupled so that the total intermediate scattering function can be written as a product of the expression for the center-of-mass motion and that for the internal motions. We have confirmed the validity of the decoupling assumption over a wide range of Q (data not shown). In the next two sections we take a closer look at our simulation to see to what extent the dynamics is consistent with models used to describe the dynamics. We discuss the motion of the center of mass in the next section and the internal dynamics of the hydrocarbon chains in Section IV.F. [Pg.485]
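The decoupling assumption can be illustrated numerically: for statistically independent center-of-mass and internal displacements, the total intermediate scattering function factorizes into the product of the two individual ones. A toy Monte Carlo check with Gaussian displacements (illustrative values only, not the simulation from the text):

```python
import numpy as np

rng = np.random.default_rng(1)
Q = 1.2  # momentum transfer, inverse angstroms (illustrative)

# Independent centre-of-mass and internal displacements projected on Q
dx_cm = rng.normal(0.0, 0.5, size=200_000)
dx_int = rng.normal(0.0, 0.3, size=200_000)

def isf(dx, Q):
    """Self intermediate scattering function <exp(iQ dx)> (real part)."""
    return float(np.mean(np.cos(Q * dx)))

total = isf(dx_cm + dx_int, Q)            # full motion
product = isf(dx_cm, Q) * isf(dx_int, Q)  # decoupled product
# For independent motions, total and product agree to sampling error
```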

Therefore, we must find a new equilibrium constant, K, that is analogous to Ks. We have considered (21) several methods of evaluating the model parameters (the equation that gives K as a function of temperature) from the available data. [Pg.133]

The model we have used and the parameter values in the model are consistent with the available experimental data (insofar as consistency is possible). It is not possible to determine without additional data whether the difficulties at small x and at high temperature are attributable to inadequacies in the model or inadequacies in the available experimental data, which have been used to evaluate the model parameters. [Pg.137]

Equations 4.14 and 4.15 are used to evaluate the model response and the sensitivity coefficients that are required for setting up matrix A and vector b at each iteration of the Gauss-Newton method. [Pg.54]
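A generic sketch of this Gauss-Newton setup (not Equations 4.14 and 4.15 themselves; the exponential model below is only illustrative): the sensitivity coefficients form the Jacobian J, from which A = JᵀJ and b = Jᵀr are assembled at each iteration.

```python
import numpy as np

def gauss_newton(model, jacobian, k0, x, y, iters=50):
    """Gauss-Newton parameter estimation: at each iteration the model
    response gives the residuals r and the sensitivity coefficients
    give the Jacobian J, from which A = J^T J and b = J^T r are
    assembled and the update solves A dk = b."""
    k = np.asarray(k0, dtype=float)
    for _ in range(iters):
        r = y - model(x, k)        # residual vector
        J = jacobian(x, k)         # sensitivity coefficients
        A = J.T @ J
        b = J.T @ r
        k = k + np.linalg.solve(A, b)
    return k

# Illustrative two-parameter model: y = k1 * exp(-k2 * x)
model = lambda x, k: k[0] * np.exp(-k[1] * x)
jacobian = lambda x, k: np.column_stack(
    [np.exp(-k[1] * x), -k[0] * x * np.exp(-k[1] * x)])

x = np.linspace(0.0, 2.0, 20)
y = model(x, np.array([2.0, 1.5]))   # noiseless synthetic data
k_hat = gauss_newton(model, jacobian, [1.5, 1.0], x, y)
```

With noisy data a damped or line-searched update is usually preferred, but the structure of A and b is the same.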

We consider two cases, one with a higher Peclet number than the other. Dispersivity αL in the first case is set to 0.03 m; in the second, it is 3 m. In both cases, the diffusion coefficient D is 10⁻⁶ cm² s⁻¹. Since Pe = L/αL, the two cases on the scale of the aquifer correspond to Peclet numbers of 33 000 and 330. We could evaluate the model numerically, but Javandel et al. (1984) provide a closed-form solution to Equation 20.25 that lets us calculate the solute distribution in the aquifer... [Pg.299]
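The quoted Peclet numbers follow directly from Pe = L/αL; the aquifer length scale L = 1000 m below is an assumed value chosen to reproduce them to two significant figures:

```python
# Peclet numbers implied by the two dispersivity cases; L = 1000 m is
# an assumed length scale, not a value given in the text
L = 1000.0                                       # aquifer length scale, m
peclet = {alpha: L / alpha for alpha in (0.03, 3.0)}
# rounding each to two significant figures gives 33 000 and 330
```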

Developed frameworks are applied to the specific industry problem of planning a global chemical commodity value chain monthly, by volumes and values. Sub-objectives are to elaborate the characteristics and planning requirements of a global commodity value chain in the chemical industry and to develop, implement and evaluate the respective model. Research question 2 is directed at a real industry case study, demonstrating that the formulated requirements exist in practice, showing the applicability of the developed model in reality, and evaluating the model using industry data. [Pg.21]

Validation of the Model. The McKone model used one data set to evaluate the model results (Jo et al. 1990a). The McKone model results were also compared to other existing chloroform models, with an in-depth discussion of similarities and differences between those models. [Pg.137]

An important aspect of variable selection that is often overlooked is the hazard brought about through the use of cross-validation for two quite different purposes, namely (1) as an optimization criterion for variable selection and other model optimization tasks (including selection of the optimal number of PLS LVs or PCR PCs) and (2) as an assessment of the quality of the final model built using all samples. In this case, one can get highly optimistic estimates of a model's performance, because the same criterion is used to both optimize and evaluate the model. As a result, when doing variable selection, especially with a limited number of calibration samples, it is advisable to do an additional outer-loop cross-validation across the entire model... [Pg.424]
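A sketch of the recommended structure, with greedy forward selection driven by an inner cross-validation and an outer cross-validation loop reserved purely for assessment (synthetic data, and ordinary least squares standing in for PLS/PCR):

```python
import numpy as np

rng = np.random.default_rng(0)

def cv_mse(X, y, folds=5):
    """Mean squared error of least squares under k-fold cross-validation."""
    idx = np.arange(len(y))
    errs = []
    for f in range(folds):
        test = idx % folds == f
        beta, *_ = np.linalg.lstsq(X[~test], y[~test], rcond=None)
        errs.append(np.mean((y[test] - X[test] @ beta) ** 2))
    return float(np.mean(errs))

def select_vars(X, y):
    """Greedy forward selection using inner CV as the criterion."""
    chosen, best = [], np.inf
    improved = True
    while improved:
        improved = False
        for j in range(X.shape[1]):
            if j in chosen:
                continue
            err = cv_mse(X[:, chosen + [j]], y)
            if err < best:
                best, add, improved = err, j, True
        if improved:
            chosen.append(add)
    return chosen

# Outer loop: assess the whole selection-plus-fit procedure on folds that
# played no part in the selection, so the CV criterion is not reused to
# both optimize and evaluate the same model
X = rng.normal(size=(60, 8))
y = X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.normal(size=60)
idx = np.arange(60)
outer_err = []
for f in range(5):
    test = idx % 5 == f
    sel = select_vars(X[~test], y[~test])
    beta, *_ = np.linalg.lstsq(X[~test][:, sel], y[~test], rcond=None)
    outer_err.append(np.mean((y[test] - X[test][:, sel] @ beta) ** 2))
honest_mse = float(np.mean(outer_err))
```

The inner CV error of the selected model is typically optimistic; `honest_mse` from the outer loop is the defensible estimate.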

Evaluating the model (A 7.1), one finds that it describes a second-order phase transition occurring for h = 0 and a critical value r = rc. The transition is signaled by long-range correlations of the spin field or power-type... [Pg.118]

For the orientation search (often called a rotation search), the computer is looking for large values of the model Patterson function Pmodel(u,v,w) at locations corresponding to peaks in the Patterson map of the desired protein. A powerful and sensitive way to evaluate the model Patterson is to compute the minimum value of Pmodel(u,v,w) at all locations of peaks in the Patterson map of the desired protein. A value of zero for this minimum means that the trial orientation has no peak in at least one location where the desired protein exhibits a peak. A high value for this minimum means that the trial orientation has peaks at all locations of peaks in the Patterson map of the desired protein. [Pg.131]
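The minimum-value criterion can be sketched directly (toy grid and peak list, not real Patterson data):

```python
import numpy as np

def min_at_peaks(p_model, peak_locations):
    """Minimum of the model Patterson function over the peak locations
    of the target Patterson map. Zero means at least one expected peak
    is absent in this trial orientation; a high minimum means every
    expected peak is present."""
    return min(p_model[u, v, w] for (u, v, w) in peak_locations)

# Illustrative 4x4x4 model Patterson with peaks at two grid points
p_model = np.zeros((4, 4, 4))
p_model[1, 2, 3] = 5.0
p_model[0, 1, 1] = 2.0
score = min_at_peaks(p_model, [(1, 2, 3), (0, 1, 1)])  # -> 2.0
```

Using the minimum rather than, say, the sum makes the criterion sensitive: a single missing peak zeroes the score regardless of how strong the others are.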

In this section, we have used the example of CO2 removal from flue gases using aqueous MEA to demonstrate the development and application of a rigorous model for a chemically reactive system. Modern software enables rigorous description of complex chemically reactive systems, but it is very important to carefully evaluate the models and to tune them using experimental data. [Pg.26]

These experiments differed from many others because the models developed were based on spectra of the pure raw materials, products, and the expected interferents. Reference samples were collected and measured in order to independently evaluate the model, but were not used in model development. This approach promises greater efficiency and productivity for situations where many different reactions are being studied, instead of the more common industrial case of the same reaction run repeatedly. However, the researchers were not satisfied with their models and identified several probable limiting factors that serve as good reminders for all process analytical projects. [Pg.149]

Where present, number in parentheses is the number of available descriptors from which the descriptors used in the model were chosen. bMethod indicates the method of evaluating the model: LOO indicates leave-one-out cross-validation; single training set indicates that the same dataset used to build the model was used to determine accuracy. [Pg.146]
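The distinction between the two accuracy estimates matters: LOO refits the model with each sample held out, whereas a single training set scores samples against a model they helped build. A minimal LOO sketch with a hypothetical nearest-centroid classifier:

```python
import numpy as np

def loo_accuracy(X, y, fit, predict):
    """Leave-one-out cross-validation: each sample is predicted by a
    model refit without it, so no sample scores its own model."""
    hits = 0
    for i in range(len(y)):
        keep = np.arange(len(y)) != i
        model = fit(X[keep], y[keep])
        hits += predict(model, X[i:i + 1])[0] == y[i]
    return hits / len(y)

# Hypothetical nearest-centroid classifier on a tiny 1-D dataset
fit = lambda X, y: {c: X[y == c].mean(axis=0) for c in np.unique(y)}
predict = lambda m, Xq: np.array(
    [min(m, key=lambda c: np.linalg.norm(x - m[c])) for x in Xq])

X = np.array([[0.0], [0.1], [1.0], [1.1]])
y = np.array([0, 0, 1, 1])
acc = loo_accuracy(X, y, fit, predict)  # -> 1.0 on this separable toy set
```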

The concept of double layer structure is far from being well established and evaluated. The models presented above give emphasis to electrostatic considerations. Chemical models have been developed that consider the electronic distribution of the atoms in the electrode, which is related to their work function. This was only possible after experimental... [Pg.52]

Both the subspace angle and the expected prediction difference can be used to evaluate the model discrimination capability of a design d over a model space T. In Section 4, the selection of orthogonal designs using criteria based on these measures is discussed. [Pg.215]
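For reference, principal angles between two model subspaces can be computed from an SVD of the product of their orthonormal bases; this generic sketch is not tied to the specific criteria of Section 4:

```python
import numpy as np

def max_subspace_angle(A, B):
    """Largest principal angle (radians) between the column spaces of
    A and B: QR-orthonormalize each basis, then the cosines of the
    principal angles are the singular values of Qa^T Qb."""
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    return float(np.arccos(np.clip(s.min(), -1.0, 1.0)))

# Two candidate model spaces: angle 0 means the design cannot
# distinguish them; pi/2 means they are maximally discriminable
plane = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
inside = np.array([[1.0], [0.0], [0.0]])   # lies in the plane
normal = np.array([[0.0], [0.0], [1.0]])   # orthogonal to the plane
ang_same = max_subspace_angle(plane, inside)
ang_orth = max_subspace_angle(plane, normal)
```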

Approaches for aggregating exposure for simple scenarios have been proposed in the literature (Shurdut et al., 1998; Zartarian et al., 2000). The USEPA's National Exposure Research Laboratory has developed the Stochastic Human Exposure and Dose Simulation (SHEDS) model for pesticides, which can be characterized as a first-generation aggregation model; the developers conclude that to refine and evaluate the model for use as a regulatory decision-making tool for residential scenarios, more robust data sets are needed for human activity patterns, surface residues for the most relevant surface types, and cohort-specific exposure factors (Zartarian et al., 2000). The SHEDS framework was used by the USEPA to conduct a probabilistic exposure assessment for the specific exposure scenario of children contacting chromated copper arsenate (CCA)-treated playsets and decks (Zartarian et al., 2003). [Pg.373]

During the first European Tracer Experiment (ETEX-1), a non-depositing tracer gas (perfluoromethylcyclohexane) was emitted from a site in northern France (Brittany; 2°00′30″, 48°03′30″). The average emission rate was 7.95 g s⁻¹; the release commenced on 23 October at 16:00 UTC and lasted 11 h 50 min. The spatial and temporal development of the tracer cloud was measured at 168 measurement stations in Europe, and both real-time and retrospective model inter-comparison projects were carried out (Graziani et al. 1998; Mosca et al. 1998). The purpose of this experiment was to evaluate the models' ability to transport and disperse a tracer. [Pg.64]
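From the rate and duration quoted above, the total tracer mass released follows by simple arithmetic:

```python
# Total tracer mass released during ETEX-1:
# 7.95 g/s sustained for 11 h 50 min
rate = 7.95                           # g per second
duration = 11 * 3600 + 50 * 60        # 42 600 seconds
total_kg = rate * duration / 1000.0   # about 339 kg
```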

Gerischer's distribution curves can be interpreted as representing the energy dependence of the electron transfer rate constants involving the reduced and oxidized species. Only a few electrochemical studies have attempted to evaluate the model and quantify the distributions and reorganizational parameters [17, 18]. Nevertheless, it has become common practice to draw a pictorial representation of the distributions when discussing interfacial electron transfer kinetics relevant to dye sensitization. [Pg.2732]

Having made all of these cautionary statements, one can still state something useful about the overall accretion timescales. All recent combined accretion/continuous core formation models (Halliday, 2000; Halliday et al., 2000; Yin et al., 2002) are in agreement that the timescales are in the range 10⁷–10⁸ yr, as predicted by Wetherill (1986). Therefore, we can specifically evaluate the models of planetary accretion proposed earlier as follows. [Pg.522]

