Big Chemical Encyclopedia

Chemical substances, components, reactions, process design ...


Functionality heuristic

For multicriteria optimization, the individual criteria are described by means of fuzzy sets and are then aggregated into an appropriate objective function. To define the membership functions of those objective functions, heuristic knowledge can be included. [Pg.333]
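A minimal sketch of this idea, using triangular membership functions and the min operator as the aggregation rule (one common choice; the cited work may use a different formulation). The criteria, numerical values, and function names below are illustrative assumptions, not taken from the source.

```python
def triangular_membership(x, low, peak, high):
    """Degree to which x satisfies a criterion: 0 outside [low, high], 1 at peak."""
    if x <= low or x >= high:
        return 0.0
    if x <= peak:
        return (x - low) / (peak - low)
    return (high - x) / (high - peak)

def aggregate_criteria(memberships):
    """Combine individual criterion satisfactions into a single objective value.

    The min operator ('the weakest criterion decides') is the classical choice;
    a weighted product or average could be used if compensation between
    criteria is desired.
    """
    return min(memberships)

# Illustrative example: two design criteria scored for one candidate solution.
purity_score = triangular_membership(0.92, low=0.80, peak=0.95, high=1.00)
temp_score = triangular_membership(78.0, low=60.0, peak=75.0, high=90.0)
objective = aggregate_criteria([purity_score, temp_score])
print(f"purity={purity_score:.2f}  temperature={temp_score:.2f}  objective={objective:.2f}")
```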

One can also derive the exponential distribution function heuristically. The random binary model assumes that the number of bubbles larger than a specific volume, m, decreases with volume in direct proportion to itself. That is, the greater the abundance of bubbles larger than a given size, the stronger the effect of random coalescences that reduce their number. This proportionality can be written as... [Pg.418]
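A short reconstruction of what that proportionality and its consequence presumably look like, with N(m) denoting the number of bubbles of volume larger than m and c a positive constant; the notation is ours and need not match the cited source.

```latex
\frac{dN(m)}{dm} \;\propto\; -N(m)
\qquad\Longrightarrow\qquad
\frac{dN}{dm} = -c\,N
\qquad\Longrightarrow\qquad
N(m) = N_0\, e^{-c m},
```

i.e., an exponential distribution of bubble volumes.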

Task A is done in the same fashion as in manual logic algorithm construction [Deville 90] (see Section 4.2) the induction parameter must be simple, and of an inductive type. This selection can be automated by type inference from the given examples. In case more detailed specification knowledge is available, the Functionality Heuristic (Heuristic 4-1) and the Directionality Heuristic (Heuristic 4-2) may even be used, the latter being of higher precedence in case they yield contradictory results. A reasonable implementation of this synthesis mechanism would actually even accept preference hints from the specifier. We assume that the parameter is selected as induction parameter, where [Pg.162]

In a series of impressive publications, Maxwell [95-98] provided most of the fundamental concepts constituting the statistical theory, recognizing that the molecular motion has a random character. When the molecular motion is random, the absolute molecular velocity cannot be described deterministically in accordance with a physical law, so a probabilistic (stochastic) model is required. Therefore, the conceptual ideas of kinetic theory rely on the assumption that the mean flow, transport, and thermodynamic properties of a collection of gas molecules can be obtained from knowledge of their masses, number density, and a probabilistic velocity distribution function. The gas is thus described in terms of the distribution function, which contains information about the spatial distribution of molecules as well as about the molecular velocity distribution in the system under consideration. An important introductory result was the Maxwellian velocity distribution function, heuristically derived for a gas at equilibrium. It is emphasized that a gas at thermodynamic equilibrium contains no macroscopic gradients, so that fluid properties like velocity, temperature, and density are uniform in space and time. When the gas is out of equilibrium, non-uniform spatial distributions of the macroscopic quantities occur, and additional phenomena arise as a result of the molecular motion. The random movement of molecules from one region to another tends to transport with them the macroscopic properties of the region from which they depart. Therefore, at their destination the molecules find themselves out of equilibrium with the properties of the region in which they arrive. At the continuous macroscopic level the net effect... [Pg.186]
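For reference, the equilibrium Maxwellian referred to above is usually written, in modern notation (the symbols n, m, k, T and the mean velocity v are our choices, not necessarily those of the cited text), as

```latex
f^{(0)}(\mathbf{c}) \;=\; n \left( \frac{m}{2\pi k T} \right)^{3/2}
\exp\!\left( -\,\frac{m\,\lvert \mathbf{c} - \mathbf{v} \rvert^{2}}{2 k T} \right),
```

where n is the number density, m the molecular mass, T the temperature, k Boltzmann's constant, c the molecular velocity, and v the mean (macroscopic) velocity.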

Note [240] that the phase in Eq. (13) is gauge independent. Based on the above-mentioned heuristic conjecture (but fully justified, to our mind, in the light of our rigorous results), Resta noted that "Within a finite system two alternative descriptions [in terms of the squared modulus of the wave function, or in terms of its phase] are equivalent" [247]. [Pg.114]

The essence of the LST for one-dimensional lattices resides in the fact that an operator T_{N→N+1} could be constructed (equation 5.71), mapping N-block probability functions to (N+1)-block probabilities in a manner which satisfies the Kolmogorov consistency conditions (equation 5.68). A sequence of repeated applications of this operator allows us to define a set of Bayesian extended probability functions P_M, M > N, and thus a shift-invariant measure on the set of all one-dimensional configurations. Unfortunately, a simple generalization of this procedure to lattices with more than one dimension does not, in general, produce a set of consistent block probability functions. Extensions must instead be made by using some other, approximate, method. We briefly sketch a heuristic outline of one approach below (details are worked out in [guto87b]). [Pg.258]

The previous section introduced the backpropagation rule for multilayer perceptrons. This section briefly discusses the model development cycle necessary for obtaining a properly functioning net. It also touches upon some of the available heuristics for determining the proper size of hidden layers. [Pg.546]

These, such as the black box that was the receptor at the turn of the century, usually are simple input/output functions with no mechanistic description (i.e., the drug interacts with the receptor and a response ensues). Another type, termed the Parsimonious model, is also simple but has a greater number of estimatable parameters. These do not characterize the experimental situation completely but do offer insights into mechanism. Models can be more complex as well. For example, complex models with a large number of estimatable parameters can be used to simulate behavior under a variety of conditions (simulation models). Similarly, complex models for which the number of independently verifiable parameters is low (termed heuristic models) can still be used to describe complex behaviors not apparent by simple inspection of the system. [Pg.43]

Direct search methods use only function evaluations. They search for the minimum of an objective function without calculating derivatives analytically or numerically. Direct methods are based upon heuristic rules which make no a priori assumptions about the objective function. They tend to have much poorer convergence rates than gradient methods when applied to smooth functions. Several authors claim that direct search methods are not as efficient and robust as the indirect or gradient search methods (Bard, 1974; Edgar and Himmelblau, 1988; Scales, 1986). However, in many instances direct search methods have proved to be robust and reliable, particularly for systems that exhibit local minima or have complex nonlinear constraints (Wang and Luus, 1978). [Pg.78]

The Sequential Simplex, or simply Simplex, method relies on geometry to create a heuristic rule for finding the minimum of a function. Note that the Simplex method of linear programming is a different method. [Pg.81]
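As a rough illustration of a simplex-type direct search, here is a minimal sketch using SciPy's Nelder-Mead implementation on the Rosenbrock test function. SciPy, the test function, and the tolerance settings are our assumptions for illustration; the sequential simplex used in experimental optimization differs in detail from the Nelder-Mead variant shown here.

```python
import numpy as np
from scipy.optimize import minimize

def rosenbrock(x):
    """Smooth test function with a curved valley; the minimum is at (1, 1)."""
    return (1.0 - x[0])**2 + 100.0 * (x[1] - x[0]**2)**2

# Nelder-Mead uses only function values: a simplex of n+1 points is repeatedly
# reflected, expanded, or contracted according to fixed geometric rules.
result = minimize(rosenbrock, x0=np.array([-1.2, 1.0]), method="Nelder-Mead",
                  options={"xatol": 1e-8, "fatol": 1e-8, "maxiter": 5000})
print(result.x, result.fun, result.nit)
```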

The objective functions for both k-means clustering and the k-nearest neighbor heuristic given by Eqs. (20) and (21) use information only from the inputs. Because of this capacity to cluster data, local methods are particularly useful for data interpretation when the clusters can be assigned labels. [Pg.30]
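A minimal numpy sketch of the k-means within-cluster sum-of-squares objective, which depends only on the inputs (no output labels). Lloyd's algorithm is used here as a generic stand-in; the equation numbers, notation, and clustering variant of the cited source are not reproduced, and the data are synthetic.

```python
import numpy as np

def kmeans_objective(X, centroids):
    """Within-cluster sum of squares: each point is charged the squared
    distance to its nearest centroid (uses the inputs only, no labels)."""
    d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).sum()

def lloyd_kmeans(X, k, n_iter=50, seed=0):
    """Plain Lloyd iterations: assign each point to its nearest centroid,
    then move each centroid to the mean of its assigned points."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        for j in range(k):
            members = X[labels == j]
            if len(members):                # keep the old centroid if a cluster empties
                centroids[j] = members.mean(axis=0)
    return centroids, labels

# Synthetic two-dimensional data with three well-separated groups.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(50, 2))
               for c in ([0, 0], [3, 3], [0, 3])])
centroids, labels = lloyd_kmeans(X, k=3)
print("within-cluster sum of squares:", kmeans_objective(X, centroids))
```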

In this essay, I argue for a new perspective on units of evolutionary transition. I analyze the process of reproduction, which leads to a conception of units of evolution as reproducers. These units resolve to more familiar ideas of replicators or interactors at levels of spatial organization when explicit spatial and functional models are imposed on abstract reproducers. I also sketch a heuristically promising program of reductionistic research that flows from the new perspective. [Pg.212]

Banga et al. [in State of the Art in Global Optimization, C. Floudas and P. Pardalos (eds.), Kluwer, Dordrecht, p. 563 (1996)]. All these methods require only objective function values for unconstrained minimization. Associated with these methods are numerous studies on a wide range of process problems. Moreover, many of these methods include heuristics that prevent premature termination (e.g., directional flexibility in the complex search as well as random restarts and direction generation). Fig. 3-58 illustrates the performance of a pattern search method as well as a random search method on an unconstrained problem. [Pg.65]
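A rough sketch of one such heuristic, random search with random restarts, applied to a multimodal test function. The step-shrinking schedule, restart count, and the Rastrigin test function are illustrative assumptions, not the algorithms of Banga et al. or the methods shown in Fig. 3-58.

```python
import numpy as np

def random_search(f, x0, n_steps=2000, step=0.5, shrink=0.999, rng=None):
    """Pure random search: propose a random step and keep it only if it improves f."""
    if rng is None:
        rng = np.random.default_rng()
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(n_steps):
        trial = x + step * rng.standard_normal(x.shape)
        f_trial = f(trial)
        if f_trial < fx:
            x, fx = trial, f_trial
        step *= shrink                      # slowly tighten the search radius
    return x, fx

def multistart(f, bounds, n_restarts=10, seed=0):
    """Random restarts guard against premature termination near a poor local minimum."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], dtype=float), np.asarray(bounds[1], dtype=float)
    best_x, best_f = None, np.inf
    for _ in range(n_restarts):
        x0 = lo + (hi - lo) * rng.random(lo.shape)
        x, fx = random_search(f, x0, rng=rng)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

def rastrigin(x):
    """Multimodal test function: many local minima, global minimum 0 at the origin."""
    x = np.asarray(x)
    return 10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x))

x_best, f_best = multistart(rastrigin, bounds=([-5.12, -5.12], [5.12, 5.12]))
print("best point:", x_best, "objective:", f_best)
```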

There are conjectures that the above three schemes may be quickly fixed by running a pseudo-random generator through a live data scheme twice, where the latter run will clear all the effects of the earlier one. This may make the non-functional CLBs that were used as the pseudo-random generator appear functional. While this heuristic is interesting, we will show later in the appendix that it does not work for this case. It may not be a trivial task to fix these schemes. [Pg.11]


See other pages where Functionality heuristic is mentioned: [Pg.37]    [Pg.190]    [Pg.59]    [Pg.183]    [Pg.39]    [Pg.111]    [Pg.576]    [Pg.54]    [Pg.130]    [Pg.451]    [Pg.81]    [Pg.745]    [Pg.457]    [Pg.548]    [Pg.519]    [Pg.606]    [Pg.44]    [Pg.77]    [Pg.78]    [Pg.252]    [Pg.3]    [Pg.641]    [Pg.74]    [Pg.39]    [Pg.40]    [Pg.184]    [Pg.298]    [Pg.184]    [Pg.190]    [Pg.196]    [Pg.225]    [Pg.188]    [Pg.65]
See also in source: [Pg.59], [Pg.162], [Pg.183]







Evaluation function, heuristic

Heuristics

© 2024 chempedia.info