
Optimization probabilistic

Taflanidis AA (2011) Optimal probabilistic design of seismic dampers for the protection of isolated bridges against near-fault seismic excitations. Eng Struct 33:3496-3508... [Pg.3824]

The term evolutionary algorithm (EA) refers to a class of population-based metaheuristic (probabilistic) optimization algorithms that imitate Darwinian evolution ("survival of the fittest"). However, the biological terms are used as metaphors rather than in their exact meaning. The population of individuals denotes a set of solution candidates, or points of the solution space. Each individual represents a point in the search space, encoded in the individual's representation (genome). The fitness of an individual is usually defined on the basis of the value of the objective function and determines its chances of staying in the population and of being used to generate new solution points. [Pg.202]

MacKay's textbook [114] offers comprehensive coverage not only of Shannon's theory of information but also of probabilistic data modeling and the mathematical theory of neural networks. Artificial NNs can be applied to problems of data processing and analysis, prediction, and classification (data mining). The wide range of NN applications also includes optimization problems. The information-theoretic capabilities of some neural network algorithms are examined, and neural networks are motivated as statistical models [114]. [Pg.707]

Constraints on the diagonal elements of the density matrix can be useful in the context of the density matrix optimization problem, Eq. (8). As Weinhold and Wilson [23] stressed, the N-representability constraints on the diagonal elements of the density matrix have conceptually appealing probabilistic interpretations; this is not true for most of the other known N-representability constraints. [Pg.449]
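For intuition, the simplest such interpretation attaches to the diagonal of the one-particle density matrix; the sketch below is a generic illustration of this kind of constraint, not the specific Weinhold-Wilson conditions:

```latex
% Diagonal of the one-particle density matrix \gamma as a probability density:
\gamma(\mathbf{x},\mathbf{x}) \;\ge\; 0, \qquad
\int \gamma(\mathbf{x},\mathbf{x})\, d\mathbf{x} \;=\; N ,
```

so that γ(x, x) dx can be read directly as the probability of finding one of the N electrons in the volume element dx — a constraint with an immediate probabilistic meaning.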

Optimizing the use of probabilistic methods within the regulatory assessment process, and especially within tiered assessments, was recognized as one of the key issues given special consideration at the Pellston workshop that developed this book. [Pg.28]

Linear discriminant analysis (LDA) is also a probabilistic classifier in the mold of Bayes algorithms, but it can be related closely to both regression and PCA techniques. A discriminant function is simply a function of the observed vector of variables (K) that leads to a classification rule. The likelihood ratio (above), for example, is an optimal discriminant for the two-class case. Hence, the classification rule can be stated as... [Pg.196]
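A minimal two-class sketch of such a discriminant, assuming equal priors and a pooled covariance estimate; the function names and the midpoint threshold are illustrative, not taken from the source:

```python
import numpy as np

def fit_lda(X0, X1):
    """Fit a two-class linear discriminant (pooled-covariance LDA).

    X0, X1: arrays of shape (n_samples, n_features) for classes 0 and 1.
    Returns weight vector w and threshold c for the rule
    'assign to class 1 if w @ x > c'.
    """
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    n0, n1 = len(X0), len(X1)
    # Pooled within-class covariance estimate.
    S = (np.cov(X0, rowvar=False) * (n0 - 1) +
         np.cov(X1, rowvar=False) * (n1 - 1)) / (n0 + n1 - 2)
    w = np.linalg.solve(S, mu1 - mu0)   # direction maximizing class separation
    c = w @ (mu0 + mu1) / 2.0           # midpoint threshold (equal priors)
    return w, c

def classify(x, w, c):
    return 1 if w @ x > c else 0
```

Under Gaussian class-conditional densities with a shared covariance, this linear rule is exactly the log-likelihood-ratio discriminant the excerpt refers to.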

Selecting the points for crossover and mutation according to a probability distribution, either uniform or skewed towards points at which the optimized function takes high values (the latter being a probabilistic expression of the survival-of-the-fittest principle). [Pg.155]

GAs are probabilistic search methods based on the mechanics of natural selection and genetics. The basic idea in using a GA as an optimization method is to represent a population of possible solutions in a chromosome-type encoding, called strings, and to evaluate these encoded solutions through simulated reproduction, crossover, and mutation to reach an optimal or near-optimal solution. The GA starts with the creation of an initial population of... [Pg.3]
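A minimal sketch of that loop — binary strings, fitness-proportional ("roulette-wheel") selection, one-point crossover, and bit-flip mutation. All parameter values are illustrative defaults, not those of any particular study:

```python
import random

def genetic_algorithm(fitness, n_bits, pop_size=50, generations=100,
                      p_cross=0.8, p_mut=0.01):
    """Minimal GA over binary strings; `fitness` maps a bit list to a
    non-negative score to be maximized."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scores = [fitness(ind) for ind in pop]
        total = sum(scores) or 1.0

        def select():
            # Fitness-proportional selection: fitter strings are
            # proportionally more likely to be chosen as parents.
            r, acc = random.uniform(0, total), 0.0
            for ind, s in zip(pop, scores):
                acc += s
                if acc >= r:
                    return ind
            return pop[-1]

        nxt = []
        while len(nxt) < pop_size:
            a, b = select()[:], select()[:]
            if random.random() < p_cross:          # one-point crossover
                cut = random.randrange(1, n_bits)
                a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
            for child in (a, b):
                for i in range(n_bits):            # bit-flip mutation
                    if random.random() < p_mut:
                        child[i] ^= 1
                nxt.append(child)
        pop = nxt[:pop_size]
    return max(pop, key=fitness)

# Toy usage: maximize the number of 1-bits in a 20-bit string.
best = genetic_algorithm(lambda bits: sum(bits), n_bits=20)
```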

In addition to exploring the nature and properties of useful drug molecules, investigating the structure and function of the other half of the pharmacodynamic equation, the receptor macromolecule, is important. Knowledge of the structure of the receptor macromolecule now permits research scientists to design lead compounds from scratch that are probabilistically well suited for further development and optimization. [Pg.40]

Having collected optimal-quality data, first-rate data management is also critical. Many collected data can now be fed directly from the measuring instrument to computer databases, thereby avoiding the potential for human data-entry error. However, this is not universally true. Therefore, careful strategies have been developed to scrutinize data as they are entered and once they are in the database. The double-entry method requires that each data set be entered twice (usually by two operators) and the two entries compared by computer for any discrepancies. This method operates on the premise that two identical errors are probabilistically very unlikely, so that whenever the two entries match the data are correct. When entries differ, they are flagged, the source (original) data are located, and the correct data-point entry is confirmed. [Pg.75]
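A minimal sketch of the double-entry comparison, assuming each keyed record arrives as a field-value mapping; all field names here are hypothetical:

```python
def double_entry_check(entry_a, entry_b):
    """Compare two independently keyed copies of the same record.

    entry_a, entry_b: dicts mapping field name -> value.
    Returns the fields on which the two entries disagree, so the
    source document can be located and the correct value confirmed.
    """
    return [field for field in entry_a
            if entry_a[field] != entry_b.get(field)]

# Example: the second operator mistyped the body-weight field.
op_a = {"subject_id": "S-101", "weight_kg": 72.4, "dose_mg": 50}
op_b = {"subject_id": "S-101", "weight_kg": 27.4, "dose_mg": 50}
print(double_entry_check(op_a, op_b))   # ['weight_kg'] -> check source data
```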

Structural identifiability: If pj is structurally identifiable, it can in principle be estimated from the type of data present in ZN, if the data were "perfect". One therefore does not take the practical limitations of the data into account; such limitations include, for instance, the noise level, the type of excitation of the system, the actual number of samples, and the sample spacing. Likewise, one does not take into account the practical limitations associated with the optimization. Structural identifiability is therefore a necessary, but not sufficient, requirement for practical identifiability. Finally, structural identifiability may be treated with differential algebra, and an implementation that performs the calculations in reasonable time, albeit at the cost of probabilistic results, has been provided by Sedoglavic [31, 32]. [Pg.123]
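A standard textbook illustration of the distinction (not taken from the cited work): if two parameters enter the model only as a product, even noise-free data pin down the product but not the factors.

```latex
\dot{y}(t) = -\,a\,b\,y(t), \qquad y(0) = y_0
\;\;\Longrightarrow\;\; y(t) = y_0\, e^{-ab\,t},
```

so only the product ab is structurally identifiable; a and b separately are not, no matter how good the data.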

Although probabilistic predictions are acceptable in virtual screening experiments for lead identification, in lead optimization studies, where only a limited number of compounds is examined, confidence in docked binding modes must be very high. Therefore, the conclusion from the neuraminidase... [Pg.43]

Fig. 16. Probabilistic performance measure for finding the top-ranked phage, where phage are ranked by decreasing target affinity. Individual plots are functions of the initial concentration of target molecules [Ttot] and the stringency parameter g. Calculations were also performed by Levitan to determine the probability of picking r of the top-ranked phage (data not shown). From top to bottom, the plots represent the results after two, four, and seven iterations of selection. Note that after seven iterations, there is a wide range of optimal parameters. Reprinted from Levitan (1998) with permission, © 1998 by Academic Press.
Stochastic objective function. The preceding MPC formulation assumes that future process outputs are deterministic over the finite optimization horizon. For a more realistic representation of future process outputs, one may consider a probabilistic (stochastic) prediction for y[k+i|k] and formulate an objective function that contains the expectation of appropriate functionals. For example, if y[k+i|k] is probabilistic, then the expectation of the functional in Eq. (4) could be used. This formulation, known as open-loop optimal feedback, does not take into account the fact that additional information will be available at future time points k+i, and it assumes that the system will essentially run in open loop over the optimization horizon. An alternative, producing a closed-loop optimal feedback law, relies... [Pg.140]
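Eq. (4) itself is not reproduced in this excerpt. As a hedged illustration: if the deterministic formulation minimizes a quadratic tracking functional, the stochastic variant replaces that functional by its expectation (the horizons p, m and weights Q, R below are generic MPC notation, not necessarily those of the source):

```latex
\min_{u[k],\,\dots,\,u[k+m-1]} \;
\mathbb{E}\!\left[\, \sum_{i=1}^{p}
\bigl\| y[k+i\,|\,k] - r[k+i] \bigr\|_{Q}^{2}
\;+\; \sum_{i=0}^{m-1} \bigl\| \Delta u[k+i] \bigr\|_{R}^{2} \right]
```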

Roberts has generalized the model in other directions by considering the constants C1, C2, and C3 to be stochastic variables and has discussed what can be done in the way of optimal control when only probabilistic information about them is available (Roberts, 1960b). [Pg.169]

Annealing is not the only process that may be used as a model for the development of probabilistic optimization algorithms. Bohachevsky, Johnson, and Stein [5] identified an analogy between stochastic optimization and a biased game of chance and used that analogy to estimate... [Pg.3]

Of course, the path followed during the optimization search is also strongly dependent on the frequency with which uphill moves are accepted according to the standard SA probabilistic criterion, written here as ... [Pg.212]
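The criterion itself is elided in the excerpt above; the standard choice is the Metropolis acceptance rule, sketched here for a minimization problem (function and argument names are illustrative):

```python
import math
import random

def accept_move(delta_f, temperature):
    """Standard simulated-annealing (Metropolis) acceptance rule.

    Downhill moves (delta_f <= 0) are always taken; an uphill move of
    size delta_f > 0 is accepted with probability exp(-delta_f / T),
    so uphill moves become rarer as the temperature T decreases.
    """
    if delta_f <= 0:
        return True
    return random.random() < math.exp(-delta_f / temperature)
```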

Probabilistic neural network (PNN) is similar to GRNN except that it is used for classification problems [54]. It has been used in pharmacodynamic [55] and pharmacokinetic [34,56] studies and has recently been applied to genotoxicity [43,50,57] and torsade de pointes prediction [58]. PNN classifies compounds into their data class through the use of Bayes's optimal decision rule: ... [Pg.224]
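The decision rule is elided in the excerpt, so the sketch below uses the generic form: a Parzen-window PNN with Gaussian kernels that assigns each compound to the class maximizing prior times estimated class-conditional density. The smoothing parameter sigma and all names are illustrative:

```python
import numpy as np

def pnn_classify(x, train_X, train_y, sigma=1.0, priors=None):
    """Minimal probabilistic neural network (Parzen-window) classifier.

    Pattern layer: a Gaussian kernel on every training example.
    Summation layer: average kernels within each class to estimate the
    class-conditional density. Output layer: Bayes's decision rule,
    i.e. pick the class maximizing prior * density.
    """
    train_X, train_y = np.asarray(train_X), np.asarray(train_y)
    classes = np.unique(train_y)
    if priors is None:
        priors = {c: np.mean(train_y == c) for c in classes}
    scores = {}
    for c in classes:
        Xc = train_X[train_y == c]
        d2 = np.sum((Xc - x) ** 2, axis=1)   # squared distances to x
        scores[c] = priors[c] * np.mean(np.exp(-d2 / (2 * sigma ** 2)))
    return max(scores, key=scores.get)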

To some extent, a disciplinary divide is at work here, as probabilistic models derived from population biology and selection theory differ fundamentally from engineering models, which depend on "... the surface area of isometric bodies, or the structure of branching networks" (McNab, 2002, p. 35). This divide entails differences not only in analytic approach but also in evaluative criteria, which have both polarized the dispute and made it difficult to resolve empirically. However, my point is that these tensions do not require a forced choice between explanatory accounts, which are not intrinsically irreconcilable. Internal constraints may fix the allometric baseline, which selection may modify under certain circumstances. One of the postulates of West and co-workers' model is that organisms evolve toward an optimal state in which the energy required for resource distribution is minimized (West and Brown, 2004, p. 38). "Toward" is the key word here, and the extent to which evolution attains any particular optimality target often reflects compromise with other selective demands: physical first principles may constrain what is optimal, but they do not always determine what is actual. [Pg.333]

To select the optimal subset of compounds, that is, a set of compounds having maximal chemical diversity, a probabilistic search algorithm is applied, which consists of selecting a subset of compounds based on a probability assigned to each compound. This algorithm optimizes the joint entropy (JH) of the subset of selected compounds. The task is performed iteratively, assigning each ith compound an initial uniform probability p_i = 1/n, then calculating the score S that is added to the previous compound probability as... [Pg.88]
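The score formula is elided in the excerpt. The sketch below conveys only the general goal — grow a subset whose joint entropy is as large as possible — and deliberately swaps in a deterministic greedy step with a per-bit entropy surrogate in place of the probability-update scheme the excerpt describes; all names are illustrative:

```python
import numpy as np

def bit_entropy(fps):
    """Sum of per-bit Shannon entropies of a set of binary fingerprints
    (a simple surrogate for the joint entropy of the selection)."""
    p = np.clip(fps.mean(axis=0), 1e-12, 1 - 1e-12)
    return float(np.sum(-p * np.log2(p) - (1 - p) * np.log2(1 - p)))

def greedy_diverse_subset(fps, k):
    """Greedily grow a subset of k compounds, at each step adding the
    compound that most increases the entropy of the selection.
    fps: (n_compounds, n_bits) binary array; returns selected indices."""
    selected = [0]                       # arbitrary seed compound
    while len(selected) < k:
        best, best_h = None, -1.0
        for i in range(len(fps)):
            if i in selected:
                continue
            h = bit_entropy(fps[selected + [i]])
            if h > best_h:
                best, best_h = i, h      # ties broken by first index
        selected.append(best)
    return selected
```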

Segall, M. D., Beresford, A. P., Gola, J. M. R., Hawksley, D., and Tarbit, M. H. (2006) Focus on success: using a probabilistic approach to achieve an optimal balance of compound properties in drug discovery. Expert Opin. Drug Metab. Toxicol. 2 (2), 325-337. [Pg.30]

