Big Chemical Encyclopedia


Physical observables as random variables

1.5 Physical observables as random variables

1.5.1 Origin of randomness in physical systems [Pg.38]

Classical mechanics is a deterministic theory in which the time evolution is uniquely determined, for any given initial condition, by the Newton equations (1.98). In quantum mechanics the physical information associated with a given wavefunction has an inherent probabilistic character; however, the wavefunction itself is uniquely determined, again from any given initial wavefunction, by the Schrödinger equation (1.109). Nevertheless, many processes in nature appear to involve a random component in addition to their systematic evolution. What is the origin of this random character? There are two answers to this question, both related to the way we observe physical systems. [Pg.38]

The correlation distance r_cor for ρ(r) is defined as the distance above which ρ(r) and ρ(r + n r_cor) are statistically independent (n is a unit vector in any direction). [Pg.38]
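The idea of a correlation distance can be illustrated numerically. The sketch below (an illustration, not taken from the text) generates an AR(1) sequence, whose autocorrelation at lag k is a**k, and estimates the lag at which the autocorrelation falls below a chosen independence threshold of 1/e:

```python
import random
import math

def autocorrelation(x, lag):
    """Sample autocorrelation of sequence x at the given lag."""
    n = len(x)
    mean = sum(x) / n
    var = sum((xi - mean) ** 2 for xi in x) / n
    cov = sum((x[i] - mean) * (x[i + lag] - mean)
              for i in range(n - lag)) / (n - lag)
    return cov / var

def correlation_distance(x, threshold=math.exp(-1)):
    """Smallest lag at which the autocorrelation drops below the threshold."""
    lag = 1
    while autocorrelation(x, lag) > threshold:
        lag += 1
    return lag

# AR(1) process x_{k+1} = a*x_k + noise has autocorrelation a**lag,
# so with threshold 1/e its correlation distance is about -1/ln(a).
random.seed(0)
a = 0.9
x = [0.0]
for _ in range(200_000):
    x.append(a * x[-1] + random.gauss(0.0, 1.0))

d = correlation_distance(x)
print(d)  # close to -1/ln(0.9) ≈ 9.5
```

The 1/e cutoff is a common but arbitrary choice; any fixed small threshold defines a comparable correlation length.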


As discussed in Section 1.5, the characterization of observables as random variables is ubiquitous in descriptions of physical phenomena. This is not immediately obvious, given that the physical equations of motion are deterministic; this issue was discussed in Section 1.5.1. Random functions, ordered sequences of random variables, were discussed in Section 1.5.3. The focus of this chapter is a particular class of random functions, stochastic processes, for which the ordering parameter is time. Time is a continuous ordering parameter; however, in many practical situations observations of the random function z(t) are made at discrete times 0 ≤ t1 < t2 < ... < tn ≤ T. In this case the sequence z(ti) is a discrete sample of the stochastic process z(t). [Pg.219]
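A minimal sketch of discrete sampling of a stochastic process: the continuous process is approximated here by a Wiener-like random walk (independent Gaussian increments of variance dt, an assumption for illustration), and the observer records it only at a few chosen times t1 < ... < tn:

```python
import random

def wiener_path(dt, n_steps, seed=None):
    """Discrete realization of a Wiener-like process z(t):
    independent Gaussian increments with variance dt per step."""
    rng = random.Random(seed)
    z = [0.0]
    for _ in range(n_steps):
        z.append(z[-1] + rng.gauss(0.0, dt ** 0.5))
    return z

dt = 0.01
path = wiener_path(dt, 1000, seed=42)          # z(t) on a fine grid, t in [0, 10]

# Observing the process only at discrete times yields the sequence z(t_i):
sample_times = [0.0, 2.5, 5.0, 7.5, 10.0]
samples = [path[round(t / dt)] for t in sample_times]
```

The fine grid stands in for the continuous process; the coarse sequence `samples` is the discrete sample z(ti) the text refers to.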

The set of all possible outcomes of a measurement considered as a random variable is usually called the population. The parameters of the density function associated with a particular population, e.g., mean or variance, are not physically accessible, since their determination would require an infinite number of measurements. A measurement, or more commonly a set of measurements ("points" or "observations"), produces a finite set of outcomes called a sample. Any convenient number describing in a compact form some property of the sample is called a statistic, e.g., the sample mean... [Pg.184]
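The population/sample/statistic distinction can be made concrete. In this sketch the population is modelled as a normal distribution with parameters mu and sigma (chosen arbitrarily for illustration); in practice these would be unknowable, and only finite-sample statistics are available:

```python
import random
import statistics

# Population model: idealized measurement outcomes, N(mu, sigma).
# In a real experiment mu and sigma are not accessible.
mu, sigma = 5.0, 0.2
rng = random.Random(1)

# A sample: a finite set of n = 50 observations drawn from the population.
sample = [rng.gauss(mu, sigma) for _ in range(50)]

# Statistics: compact numbers describing the sample, which estimate
# (but never equal) the population parameters.
sample_mean = statistics.mean(sample)
sample_var = statistics.variance(sample)   # unbiased, n-1 denominator
print(sample_mean, sample_var)
```

Increasing the sample size makes the statistics converge toward the population parameters, but any finite sample leaves a residual error.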

A general phenomenon associated with sums of many random variables has far-reaching implications for the random nature of many physical observables. Its mathematical expression is known as the central limit theorem. Let x1, x2, ..., x ... [Pg.5]
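The central limit theorem can be demonstrated numerically: sums of i.i.d. uniform variables, standardized by their mean and variance, approach a standard normal distribution. This is an illustrative sketch, not part of the text's derivation:

```python
import random
import statistics

def standardized_sum(n, rng):
    """Sum of n i.i.d. Uniform(0,1) variables, standardized to zero mean
    and unit variance (the sum has mean n/2 and variance n/12)."""
    s = sum(rng.random() for _ in range(n))
    return (s - n / 2) / (n / 12) ** 0.5

rng = random.Random(0)
sums = [standardized_sum(30, rng) for _ in range(20_000)]

# By the central limit theorem the empirical distribution approaches N(0, 1):
print(statistics.mean(sums))    # near 0
print(statistics.stdev(sums))   # near 1

# Fraction within one standard deviation, vs. ~0.683 for a Gaussian:
frac = sum(1 for s in sums if abs(s) <= 1) / len(sums)
```

Even n = 30 summands already give a distribution that is hard to distinguish from a Gaussian, which is why normally distributed errors are so common in practice.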

Markov chains are random processes in which changes occur only at fixed times. However, many physical phenomena observed in everyday life involve changes that occur continuously over time; examples of such continuous processes are equipment breakdowns, arrivals of telephone calls, and radioactive decay. Markov processes are random processes in which changes occur continuously over time and the future depends only on the present state, independently of the past history. This property provides the basic framework for investigations of system reliability, dependability, and safety. There are several different types of Markov processes. In a semi-Markov process, the time between transitions is a random variable whose distribution depends on the transition. Discrete- and continuous-time Markov processes are special cases of the semi-Markov process (as will be further explained). [Pg.248]
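A minimal continuous-time Markov sketch of the equipment-breakdown example: a two-state (up/down) model with exponential holding times, whose memorylessness is exactly the Markov property. The failure and repair rates are illustrative assumptions, not values from the text:

```python
import random

# Two-state continuous-time Markov model of equipment reliability:
# "up" fails at rate lam (per hour), "down" is repaired at rate mu_rate.
# Exponential holding times make the future depend only on the current state.
lam, mu_rate = 0.1, 1.0

def simulate_availability(t_end, rng):
    """Fraction of the interval [0, t_end] spent in the 'up' state."""
    t, up_time, state = 0.0, 0.0, "up"
    while t < t_end:
        rate = lam if state == "up" else mu_rate
        dwell = min(rng.expovariate(rate), t_end - t)
        if state == "up":
            up_time += dwell
        t += dwell
        state = "down" if state == "up" else "up"
    return up_time / t_end

rng = random.Random(3)
a = simulate_availability(100_000.0, rng)
# Long-run availability approaches mu_rate / (lam + mu_rate) = 1.0/1.1 ≈ 0.909
print(a)
```

This steady-state availability is the kind of quantity the text's reliability framework is built to compute, here obtained by simulation rather than analytically.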

Figure 11. The error threshold of replication and mutation in genotype space. Asexually reproducing populations with sufficiently accurate replication and mutation approach stationary mutant distributions which cover some region in sequence space. The condition of stationarity leads to a (genotypic) error threshold. In order to sustain a stable population the error rate has to be below an upper limit above which the population starts to drift randomly through sequence space. In case of selective neutrality, i.e. the case of equal replication rate constants, the superiority becomes unity, Om = 1, and then stationarity is bound to zero error rate, pmax = 0. Polynucleotide replication in nature is confined also by a lower physical limit which is the maximum accuracy which can be achieved with the given molecular machinery. As shown in the illustration, the fraction of mutants increases with increasing error rate. More mutants and hence more diversity in the population imply more variability in optimization. The choice of an optimal mutation rate depends on the environment. In constant environments populations with lower mutation rates do better, and hence they will approach the lower limit. In highly variable environments those populations which approach the error threshold as closely as possible have an advantage. This is observed for example with viruses, which have to cope with an immune system or other defence mechanisms of the host.
The majority of statistical tests, and those most widely employed in analytical science, assume that observed data follow a normal distribution. The normal, sometimes referred to as Gaussian, distribution function is the most important distribution for continuous data because of its wide range of practical application. Most measurements of physical characteristics, with their associated random errors and natural variations, can be approximated by the normal distribution. The well-known shape of this function is illustrated in Figure 1; as shown, it is referred to as the normal probability curve. The mathematical model describing the normal distribution function with a single measured variable, x, is given by Equation (1). [Pg.2]
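Equation (1) is not reproduced in this excerpt; the standard form of the normal density, f(x) = exp(-(x - mu)^2 / (2 sigma^2)) / (sigma sqrt(2 pi)), can be sketched and checked for normalization as follows:

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Normal (Gaussian) density:
    f(x) = exp(-(x - mu)**2 / (2*sigma**2)) / (sigma * sqrt(2*pi))"""
    return (math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))
            / (sigma * math.sqrt(2 * math.pi)))

# Check that the density integrates to 1 (trapezoidal rule over +/- 8 sigma;
# the tails beyond that contribute a negligible amount):
n, lo, hi = 10_000, -8.0, 8.0
h = (hi - lo) / n
area = sum(normal_pdf(lo + i * h) for i in range(1, n)) * h
area += 0.5 * h * (normal_pdf(lo) + normal_pdf(hi))
print(round(area, 4))  # → 1.0
```

The parameters mu and sigma are the population mean and standard deviation; the peak sits at x = mu and the curve's inflection points at mu ± sigma.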





© 2024 chempedia.info