
Validated blind predictions

Performing a validated blind prediction is a difficult task and requires the availability of the software, a test scenario, and the experimental facilities for validating the study. For protein-protein docking, a blind prediction of the complex between TEM-1 β-lactamase and the inhibitor BLIP was performed by six independent groups. All groups were able to identify the correct complex within 2 Å RMSD [153]. [Pg.356]
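For orientation, the 2 Å criterion refers to the root-mean-square deviation between predicted and experimental coordinates. Below is a minimal sketch of that calculation, assuming two pre-superimposed (N, 3) coordinate arrays with matching atom order; the arrays and the acceptance test are purely illustrative:

```python
import numpy as np

def rmsd(pred: np.ndarray, ref: np.ndarray) -> float:
    """Root-mean-square deviation between two (N, 3) coordinate arrays.

    Assumes both structures are already superimposed and share the same
    atom ordering; no fitting is performed here.
    """
    diff = pred - ref
    return float(np.sqrt((diff * diff).sum() / len(pred)))

# Illustrative acceptance test mirroring the 2 Å criterion in the text.
ref = np.random.default_rng(0).normal(size=(100, 3))
pred = ref + 0.5  # hypothetical predicted pose close to the reference
print(rmsd(pred, ref) < 2.0)  # True for this toy example (RMSD ~0.87 Å)
```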

Croera, C., Gagliardi, G., Foti, R., Parchment, R., Parent-Massin, D., Schoeters, G., Sibiril, Y., Van Den Heuvel, R. and Gribaldo, L. (2003) Application of the CFU-GM assay to predict acute drug-induced neutropenia: an international blind trial to validate a prediction model of the maximum tolerated dose (MTD) of myelosuppressive xenobiotics. Toxicological Sciences, 75, 355-367. [Pg.436]

Like QSAR models, hierarchical schemes are ordinarily optimized in two steps: (1) calibration of the component model parameters to meet accepted criteria for performance (e.g., predictivity, number of false negatives) using a training set of chemicals; (2) validation of the scheme by assessing its ability to blind-predict test chemicals of known activity. It is presumed that the chemicals in the training and test sets share the same chemical space, range of activity, mode of action, and so on. The entire scheme and each component model are refined during validation. [Pg.164]
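A minimal sketch of this two-step calibrate-then-blind-validate workflow, with a simple classifier standing in for a component model; the descriptors, activity labels, and split sizes below are invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

# Hypothetical descriptor matrix X and activity labels y for a set of chemicals.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

# Step 1: calibrate the component model on a training set.
X_train, y_train = X[:150], y[:150]
model = LogisticRegression().fit(X_train, y_train)

# Step 2: blind-predict a held-out test set of known activity and check
# a performance criterion (here, the false-negative count).
X_test, y_test = X[150:], y[150:]
tn, fp, fn, tp = confusion_matrix(y_test, model.predict(X_test)).ravel()
print(f"false negatives: {fn} of {fn + tp} actives")
```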

Theoretical predictions are risky; therefore, experimental validation is required for almost all such predictions. Nevertheless, the models can often indicate appropriate directions for validation or further experiments. These experiments can be expected to be time-consuming and expensive. Furthermore, the protein actually needs to be available for the suggested experiments. All of this limits the applicability of experimental validation. It is therefore mandatory to reduce errors as much as possible and to indicate the expected error range of computer-based predictions. This is not a trivial problem for structure prediction, though. An estimate of the performance and accuracy of the respective methods can be obtained from large-scale comparative benchmarking, from successful blind predictions, and from community-wide assessment experiments (CASP [109, 229]/CAFASP [283]). These are addressed in turn in the following ... [Pg.302]

To cross-validate the results, each group of structurally related proteins is left out of the training set in turn and used to test the network. Such a partitioning scheme (in contrast to a jackknife one, for example) minimizes the likelihood of biasing the results in favor of structural descriptors (see Section II). Its use yields true predictions (denoted cv), in contrast to fits of the data in which all the proteins are included during the training (denoted tm). The latter tend to yield inflated accuracy statistics, but we describe them here as well for comparison with earlier studies [12,13,20,47], which failed to cross-validate their results [however, it should be noted that the relationship in Ref. 12 has been used successfully for blind predictions (K. W. Plaxco and D. Baker, personal communication)]. [Pg.16]
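A sketch of this leave-one-family-out partitioning using scikit-learn's LeaveOneGroupOut splitter; the descriptor matrix, target values, family labels, and regressor below are hypothetical stand-ins for the network and data in the text:

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.linear_model import Ridge

# Hypothetical data: descriptors X, targets y, and one family label per
# protein marking groups of structurally related proteins.
rng = np.random.default_rng(2)
X = rng.normal(size=(60, 4))
y = X @ np.array([1.0, -0.5, 0.2, 0.0]) + rng.normal(scale=0.3, size=60)
families = np.repeat(np.arange(6), 10)  # 6 families of 10 related proteins

# Each family is left out in turn ("cv" predictions); training only on the
# remaining families avoids the bias of a per-protein jackknife, where
# close structural relatives of the test protein stay in the training set.
cv_pred = np.empty_like(y)
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=families):
    model = Ridge().fit(X[train_idx], y[train_idx])
    cv_pred[test_idx] = model.predict(X[test_idx])

print("cv correlation:", np.corrcoef(y, cv_pred)[0, 1])
```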

For time-series data, the contiguous block method can provide a good assessment of the temporal stability of the model, whereas the Venetian blinds method can better assess nontemporal errors. For batch data, one can either specify custom subsets where each subset is assigned to a single batch (i.e., leave-one-batch-out cross-validation), or use Venetian blinds or contiguous blocks to assess within-batch and between-batch prediction errors, respectively. For blocked data that contain replicates, one must be very careful with the Venetian blinds and contiguous block methods to select parameters such that the replicate sample trap and the external subset trap, respectively, are avoided. [Pg.411]
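A minimal sketch of the two partitioning schemes, assuming samples are ordered in time: Venetian blinds interleaves samples across folds, while contiguous blocks keeps runs of consecutive samples together. The fold counts are illustrative:

```python
import numpy as np

def venetian_blinds(n_samples: int, n_splits: int):
    """Interleaved folds: fold j gets samples j, j + n_splits, j + 2*n_splits, ..."""
    idx = np.arange(n_samples)
    return [idx[j::n_splits] for j in range(n_splits)]

def contiguous_blocks(n_samples: int, n_splits: int):
    """Consecutive folds: fold j gets one contiguous block of samples."""
    return np.array_split(np.arange(n_samples), n_splits)

print(venetian_blinds(12, 3))    # [0,3,6,9], [1,4,7,10], [2,5,8,11]
print(contiguous_blocks(12, 3))  # [0..3], [4..7], [8..11]
```

Note that if adjacent samples are replicates, Venetian blinds with an unlucky fold count can place replicates of the same sample in both training and test folds, which is the replicate sample trap the text warns about.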

The underlying concept of the validation experiments is to provide a prediction of the performance to be expected over an extended period in routine use, and a minimal dataset will not satisfy this expectation. Therefore, a typical validation design will include six different sources of matrix, usually at each of three concentrations bracketing the MRL, repeated as analyst spikes in three or four analytical runs, followed by one or two additional runs where the materials are provided as unknowns (blind) to the analyst. The design is usually repeated for each required matrix (e.g., each species-tissue combination) for the initial target species and may also be required when the method is applied routinely to other species. However, when there are obvious commonalities (such as tissues from different ruminants), method extension may require only a reduced dataset, based on experience with the method. [Pg.284]
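A short worked count of the fortified analyses this design implies for a single matrix, assuming the larger of each run-count option mentioned above; the exact numbers are a design choice:

```python
# Worked count for one matrix (e.g., one species-tissue combination),
# using the numbers in the text; the run counts chosen are illustrative.
sources = 6          # independent sources of matrix
concentrations = 3   # bracketing the MRL
spike_runs = 4       # analyst-spike runs (three or four in the text)
blind_runs = 2       # additional runs with blind unknowns (one or two)

per_matrix = sources * concentrations * (spike_runs + blind_runs)
print(per_matrix)    # 108 fortified analyses for this design choice
```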

Shedden, K. et al. (2008) Gene expression-based survival prediction in lung adenocarcinoma: a multi-site, blinded validation study. Nature Medicine, 14, 822-827. [Pg.671]

