
Probability basic ideas

Rules. Rules, first pioneered by early applications such as Mycin and R1, are probably the most common form of representation used in knowledge-based systems. The basic idea of rule-based representation is simple. Pieces of knowledge are represented as IF-THEN rules. IF-THEN rules are essentially association pairs, specifying that IF certain preconditions are met, THEN certain fact(s) can be concluded. The preconditions are referred to as the left-hand side (LHS) of the rule, while the conclusions are referred to as the right-hand side (RHS). In simple rule-based systems, both the... [Pg.532]
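To make the IF-THEN pairing concrete, the following is a minimal Python sketch of forward chaining over such rules; the rule contents, fact names and looping strategy are invented for illustration and do not reproduce the syntax of Mycin or R1.

```python
# Minimal forward-chaining sketch: each rule pairs a set of preconditions (LHS)
# with a single conclusion (RHS). A rule fires when all of its preconditions
# are present in the fact base; firing adds the conclusion as a new fact.
rules = [
    ({"fever", "rash"}, "suspect_measles"),
    ({"suspect_measles", "unvaccinated"}, "order_serology"),
]

facts = {"fever", "rash", "unvaccinated"}
changed = True
while changed:
    changed = False
    for lhs, rhs in rules:
        if lhs <= facts and rhs not in facts:   # IF preconditions met THEN conclude
            facts.add(rhs)
            changed = True

print(facts)  # both conclusions are derived from the initial facts
```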

Among other contributions of Arrhenius, the most important were probably in chemical kinetics (Chapter 11). In 1889 he derived the relation for the temperature dependence of reaction rate. In quite a different area, in 1896 Arrhenius published an article, "On the Influence of Carbon Dioxide in the Air on the Temperature of the Ground." He presented the basic idea of the greenhouse effect, discussed in Chapter 17. [Pg.86]
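For reference, the relation Arrhenius derived for the temperature dependence of the rate is conventionally written as
\[
k = A\,e^{-E_a/RT},
\]
where k is the rate constant, A the pre-exponential (frequency) factor, E_a the activation energy, R the gas constant and T the absolute temperature.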

The WKB theory we developed and briefly describe below is formally equivalent to the instanton theory [56-58, 71-76], but is more straightforward and practical, and probably easier to understand. Let us start with the 1D case in order to comprehend the basic ideas. The semiclassical wave function is given as usual by... [Pg.115]
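For orientation, the standard one-dimensional semiclassical (WKB) form of that wave function (not necessarily in the notation used in the chapter itself) is
\[
\psi(x) \simeq \frac{C}{\sqrt{p(x)}}\,\exp\!\left(\pm\frac{i}{\hbar}\int^{x} p(x')\,dx'\right),
\qquad p(x) = \sqrt{2m\,[E - V(x)]}.
\]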

The basic idea of importance sampling can be illustrated simply in the example of the transformation from 0 to 1 along λ, as described above. In lieu of sampling from the true probability distribution, P(λ), we design simulations in which λ is sampled according to a modified distribution, P′(λ). The latter probability should be chosen so that it is more uniform than P(λ). The relation between the two probabilities may then be expressed as follows ... [Pg.25]
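A generic way to write the reweighting relation (the standard importance-sampling identity, not necessarily the chapter's exact equation) is, for any observable A,
\[
\langle A \rangle_{P}
= \frac{\big\langle A\,P(\lambda)/P'(\lambda)\big\rangle_{P'}}
       {\big\langle P(\lambda)/P'(\lambda)\big\rangle_{P'}},
\]
so that averages over the true distribution P(λ) are recovered by reweighting configurations sampled from the more uniform P′(λ).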

In this section, we take an approach that is characteristic of conventional perturbation theories, which involves an expansion of a desired quantity in a series with respect to a small parameter. To see how this works, we start with (2.8). The problem of expressing ln⟨exp(−tX)⟩ as a power series is well known in probability theory and statistics. Here, we will not provide the detailed derivation of this expression, which relies on the expansions of the exponential and logarithmic functions in Taylor series. Instead, the reader is referred to the seminal paper of Zwanzig [3], or one of many books on probability theory - see for instance [7]. The basic idea of the derivation consists of inserting... [Pg.40]
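The leading terms of the resulting series, quoted here for reference as the standard cumulant expansion of probability theory, read
\[
\ln\big\langle e^{-tX}\big\rangle
= \sum_{n\ge 1}\frac{(-t)^{n}}{n!}\,\kappa_{n}
= -t\,\langle X\rangle
+ \frac{t^{2}}{2}\big(\langle X^{2}\rangle - \langle X\rangle^{2}\big)
- \cdots,
\]
where κ_n denotes the n-th cumulant of X.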

Finally, there is a large body of experimental and theoretical contributions from investigators who are mainly interested in the dynamic and conformational properties of chain molecules. The basic idea is that the cyclisation probability of a chain is related to the mean separation of the chain ends (Morawetz, 1975). Up-to-date comprehensive review articles are available on the subject (Semlyen, 1976; Winnik, 1977, 1981a; Imanishi, 1979). Rates and equilibria of the chemical reactions occurring between functional groups attached to the ends or to the interior of a flexible chain molecule are believed to provide a convenient testing ground for theories of chain conformations and chain dynamics in solution. [Pg.3]
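As a pointer to the kind of relation involved, in the simplest Gaussian-chain treatment the probability density of finding the two chain ends in contact (the quantity controlling cyclisation) is
\[
W(0) = \left(\frac{3}{2\pi\langle r^{2}\rangle}\right)^{3/2},
\]
where ⟨r²⟩ is the mean-square end-to-end distance; more refined theories of chain conformation and dynamics modify this result.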

In 1983, Sasaki et al. obtained rough first approximations of the mid-infrared spectra of o-xylene, p-xylene and m-xylene from multi-component mixtures using entropy minimization [83-85]. However, in order to do so, an a priori estimate of the number S of observable species present was again needed. The basic idea behind the approach was (i) the determination of the basis functions/eigenvectors V associated with the data (three solutions were prepared) and (ii) the transformation of the basis vectors into pure component spectral estimates by determining the elements of an S × S transformation matrix T. The simplex optimization method was used to optimize the nine elements of T (3 × 3 in this case) to achieve entropy minimization, and the normalized second derivative of the spectra was used as a measure of the probability distribution. [Pg.177]
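The following is a minimal Python sketch of the entropy-minimization idea just described, not the authors' original procedure. It assumes the mixture spectra are stacked in a data matrix D, obtains S basis vectors by SVD, and optimizes the elements of a trial S × S transformation matrix T with the Nelder-Mead simplex so that the normalized second derivatives of the estimated spectra, treated as probability distributions, have minimum entropy. All function names and the synthetic test spectra are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def entropy_of_estimates(t_flat, V, S):
    T = t_flat.reshape(S, S)
    est = T @ V                                        # candidate pure-component spectra
    d2 = np.abs(np.diff(est, n=2, axis=1))             # second-derivative magnitude
    p = d2 / (d2.sum(axis=1, keepdims=True) + 1e-12)   # normalize to a probability distribution
    return -(p * np.log(p + 1e-12)).sum()              # total Shannon entropy to be minimized

def recover_pure_spectra(D, S):
    # SVD of the mixture spectra gives the S most significant basis vectors
    _, _, Vt = np.linalg.svd(D, full_matrices=False)
    V = Vt[:S]
    t0 = np.eye(S).ravel()                             # start from the identity transform
    res = minimize(entropy_of_estimates, t0, args=(V, S), method="Nelder-Mead")
    return res.x.reshape(S, S) @ V

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 200)
    pure = np.vstack([np.exp(-((x - c) / 0.05) ** 2) for c in (0.3, 0.5, 0.7)])
    D = rng.random((5, 3)) @ pure                      # five synthetic mixture spectra
    estimates = recover_pure_spectra(D, S=3)
    print(estimates.shape)                             # (3, 200) spectral estimates
```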

The evaluation of elements such as the matrix elements M_n,fin is a very difficult task, which is performed with different levels of accuracy. It is sufficient here to mention again the so-called sudden approximation (to some extent similar to the Koopmans theorem assumption we have discussed for binding energies). The basic idea of this approximation is that the photoemission of one electron is so sudden with respect to the relaxation times of the passive-electron probability distribution as to be considered instantaneous. It is worth noting that this approximation stresses the one-electron character of the photoemission event (as in the Koopmans theorem assumption). [Pg.207]

The stochastic motion of particles in condensed matter is the fundamental concept that underlies diffusion. We will therefore discuss its basic ideas in some depth. The classical approach to Brownian motion aims at calculating the number of ways in which a particle arrives at a distinct point m steps from the origin while performing a sequence of z° random steps in total. Consider a linear motion in which the probability of forward and backward hopping is equal (= 1/2). The probability for any such sequence is thus (1/2)^z°. Point m can be reached by (z° + m)/2 forward plus (z° − m)/2 backward steps. The number of distinct sequences to arrive at m is therefore... [Pg.103]
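For reference, the familiar combinatorial count of such sequences is the binomial coefficient
\[
\frac{z^{0}!}{\left[\tfrac{1}{2}(z^{0}+m)\right]!\,\left[\tfrac{1}{2}(z^{0}-m)\right]!},
\]
and multiplying by the single-sequence probability (1/2)^z° gives the probability of arriving at point m; for large z°, Stirling's approximation turns this into the Gaussian profile that underlies the diffusion picture.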

A current hypothesis, which is receiving considerable attention, is that one can indeed produce a surface which actively repels proteins and other macromolecules [123, 124, 133]. The basic idea is presented in Fig. 25, which shows that a neutral hydrophilic polymer, which exhibits considerable mobility or dynamics in the aqueous phase, can actively repel macromolecules from the interface by steric exclusion and interface entropy methods. This method has been well known and applied in the field of colloid stability for many years [120]. The most effective polymer appears to be polyethylene oxide, probably because of its very high chain mobility and only modest hydrogen bonding tendencies [121-123]. [Pg.46]

The use of activation analysis in criminal investigations (forensic activation analysis) is also well established. The basic idea here is to match the trace-element distributions found in bullets, paint, oil, and other materials at the scene of a crime with the trace-element distributions in objects found with criminal suspects. Such identification is rapid and nondestructive (allowing the actual evidence to be presented in court). Moreover, the probability of its correctness can be ascertained quantitatively. Other prominent examples of the use of forensic activation analysis involve confirmation of the notion that Napoleon was poisoned (by finding significant amounts of arsenic in hair from his head) and the finding that the activation analysis of the wipe samples taken from a suspect's hand can reveal not only if he or she has fired a gun recently but also the type of gun and ammunition used. [Pg.372]

To introduce some of the basic ideas of molecular orbital theory, let's look again at orbitals. The concept of an orbital derives from the quantum mechanical wave equation, in which the square of the wave function gives the probability of finding an electron within a given region of space. The kinds of orbitals that we've been concerned with up to this point are called atomic orbitals because they are characteristic of individual atoms. Atomic orbitals on the same atom can combine to form hybrids, and atomic orbitals on different atoms can overlap to form covalent bonds, but the orbitals and the electrons in them remain localized on specific atoms. [Pg.278]

One of the goals of current efforts in the field is to discriminate between possible ground-state isomers of inter- and of intramolecular adducts. The basic idea behind this effort is that the geometry of the molecular pair involved in the electron transfer may affect the probability of electron transfer. The work of several groups (Piuzzi, Levy, Itoh) showed that this is indeed the case: several different spectroscopic series of lines were found for a given system, which were assigned to different structural isomers. As discussed in Section 4.2, the barrier to electron transfer appears to be different for different isomers. The task is to correlate the observed spectra with calculated structures. [Pg.3140]

Of the statistical simulations, two major types are distinguished: cellular automata (CA) and Monte Carlo (MC) simulations. The basic ideas concerning CA go back to Wiener and Rosenblueth [1] and Von Neumann [2]. CA exist in many variants, which makes the distinction between MC and CA not always clear. In general, in both techniques, the catalyst surface is represented by a matrix of m × n elements (cells, sites) with appropriate boundary conditions. Each element can represent an active site or a collection of active sites. The cells evolve in time according to a set of rules. The rules for the evolution of cells include only information about the state of the cells and their local neighborhoods. Time often proceeds in discrete time steps. After each time step, the values of the cells are updated according to the specified rules. In cellular automata, all cells are updated in each time step. In MC simulations, both cells and rules are chosen randomly, and sometimes the time step is randomly chosen as well. Of course, all choices have to be made with the correct probabilities. [Pg.738]
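The following is a minimal Python sketch contrasting the two update schemes described above. The lattice states, the rates p_ads and p_des, and the local rule are invented for illustration and do not model any specific catalytic system; state 0 stands for an empty site and state 1 for an adsorbed species.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 32, 32
p_ads, p_des = 0.3, 0.1
grid = rng.integers(0, 2, size=(m, n))          # m x n matrix of cells

def occupied_neighbour_fraction(g):
    # periodic boundary conditions via np.roll
    return (np.roll(g, 1, 0) + np.roll(g, -1, 0) +
            np.roll(g, 1, 1) + np.roll(g, -1, 1)) / 4.0

def ca_step(g):
    # cellular automaton: every cell is updated simultaneously by the same rule
    frac = occupied_neighbour_fraction(g)
    r = rng.random(g.shape)
    adsorb = (g == 0) & (r < p_ads * frac)       # empty cell fills, helped by neighbours
    desorb = (g == 1) & (r < p_des)              # occupied cell empties
    return np.where(adsorb, 1, np.where(desorb, 0, g))

def mc_step(g):
    # Monte Carlo: one randomly chosen cell is updated per event
    i, j = rng.integers(m), rng.integers(n)
    frac = occupied_neighbour_fraction(g)[i, j]
    if g[i, j] == 0 and rng.random() < p_ads * frac:
        g[i, j] = 1
    elif g[i, j] == 1 and rng.random() < p_des:
        g[i, j] = 0
    return g

for _ in range(100):
    grid = ca_step(grid)
print("coverage after 100 CA steps:", grid.mean())
```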

Model uncertainties are generally represented by constructs that are commonly referred to as probability trees. The basic idea behind a probability tree is that the sum of the probabilities of all the models under consideration is one. Selecting and assigning probabilities to theories can be resolved by resorting to expert opinion. If a more formal process is necessary or desired (possibly to enable the evaluation to be carried out by a computer), an algorithm may be used. Since presumably the same considerations are involved, the same criteria that are used to select a best model may also be used to weight them. [Pg.1173]
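A minimal Python sketch of the probability-tree idea follows: each candidate model carries a weight, the weights sum to one, and a quantity of interest is averaged over the models. The model names, weights and predictions are illustrative only.

```python
# Model weights play the role of the branch probabilities of the tree.
model_weights = {"model_A": 0.5, "model_B": 0.3, "model_C": 0.2}
model_predictions = {"model_A": 1.2e-4, "model_B": 3.4e-4, "model_C": 0.8e-4}

assert abs(sum(model_weights.values()) - 1.0) < 1e-12   # probabilities must sum to one
weighted = sum(model_weights[k] * model_predictions[k] for k in model_weights)
print(f"model-averaged estimate: {weighted:.3e}")
```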

