
Joint random variable

For the formal theorems and proofs, it is useful if the reader is familiar with elementary information theory (see [Shan48] and [Gall68, Sections 2.2 and 2.3]). The most important notions are briefly repeated in the notation of [Gall68]. It is assumed that a common probability space is given on which all the random variables are defined. Capital letters denote random variables and small letters the corresponding values, and terms like P(x) are abbreviations for probabilities like P(X = x). The joint random variable of X and Y is written as X, Y. The entropy of a random variable X is... [Pg.346]
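
The excerpt breaks off at the definition; in the notation of [Gall68] this is presumably the standard Shannon entropy, with the joint entropy of the joint random variable X, Y following the same pattern:

\[
H(X) = -\sum_{x} P(x) \log P(x), \qquad
H(X, Y) = -\sum_{x,\,y} P(x, y) \log P(x, y).
\]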

Proof. For readability, the joint random variable Tx is abbreviated as Z, ... [Pg.364]

This finishes the proof of ( ). For X = 0, the joint random variable Tx contains no random variable at all, and hence ( ) is the theorem. ... [Pg.366]

We conclude this section by introducing some notation and terminology that are quite useful in discussions involving joint distribution functions. The distribution function F of a random variable associated with the time increments τ₁, ..., τₙ is defined to be the first-order distribution function of the derived time function Z(t) = X(t + τₙ), ... [Pg.143]

A few minutes' thought should convince the reader that all our previous results can be couched in the language of families of random variables and their joint distribution functions. Thus, the second-order distribution function F_{X;τ₁,τ₂} is the same as the joint distribution function of the random variables Z₁ and Z₂ defined by... [Pg.144]

In this connection, we shall often abuse our notation somewhat by referring to F_{X;τ₁,τ₂} as the joint distribution function of the random variables X(t + τ₁) and X(t + τ₂) instead of employing the more precise but cumbersome language used at the beginning of this paragraph. In the same vein, the distribution function F_{X;τ₁,...,τₙ;Y;τ₁′,...,τₘ′} will be referred to loosely as the joint distribution function of the random variables X(t + τ₁), ..., X(t + τₙ), Y(t + τ₁′), ..., Y(t + τₘ′). [Pg.144]

Once again, it should be emphasized that the functional form of a set of random variables is important only insofar as it enables us to calculate their joint distribution function in terms of other known distribution functions. Once the joint distribution function of a group of random variables is known, no further reference to their functional form is necessary in order to use the theorem of averages for the calculation of any time average of interest in connection with the given random variables. [Pg.144]

Joint Moments and Characteristic Functions.—The joint moments α_{k₁⋯kₙ} of a family of n random variables X₁, ..., Xₙ are defined by the expression... [Pg.145]

In other words, knowledge of the joint characteristic function of a family of random variables is tantamount to knowledge of their joint probability density function and vice versa. [Pg.147]

The joint characteristic function is related to the characteristic functions of the individual random variables by means of the formula... [Pg.147]
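
The two preceding excerpts elide the formulas themselves; the standard relations, in generic notation (the symbols u₁, ..., uₙ are mine, not the source's), are

\[
\Phi_{X_1,\dots,X_n}(u_1,\dots,u_n) = E\!\left[\exp\!\Big(i\sum_{k=1}^{n} u_k X_k\Big)\right],
\qquad
\Phi_{X_k}(u) = \Phi_{X_1,\dots,X_n}(0,\dots,0,u,0,\dots,0),
\]

with the joint density recovered (when it exists) by the n-dimensional inverse Fourier transform

\[
f(x_1,\dots,x_n) = \frac{1}{(2\pi)^n}\int_{\mathbb{R}^n} \Phi(u_1,\dots,u_n)\,
e^{-i\sum_k u_k x_k}\, du_1 \cdots du_n .
\]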

We conclude this section by deriving an important property of jointly gaussian random variables, namely, the fact that a necessary and sufficient condition for a group of jointly gaussian random variables φ₁, ..., φₙ to be statistically independent is that E[φⱼφₖ] = E[φⱼ]E[φₖ] for j ≠ k. Stated in other words, linearly independent (uncorrelated) gaussian random variables are statistically independent. This statement is not necessarily true for non-gaussian random variables. [Pg.161]
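
A one-line sketch of why uncorrelatedness suffices in the gaussian case (assuming zero means for brevity): when the covariance matrix Λ = diag(σ₁², ..., σₙ²) is diagonal, the joint density factors into the product of the marginals,

\[
f(\varphi_1,\dots,\varphi_n)
= \frac{\exp\!\left(-\tfrac{1}{2}\,\boldsymbol{\varphi}^{T}\Lambda^{-1}\boldsymbol{\varphi}\right)}{(2\pi)^{n/2}\,|\Lambda|^{1/2}}
= \prod_{j=1}^{n} \frac{1}{\sqrt{2\pi}\,\sigma_j}\, e^{-\varphi_j^{2}/2\sigma_j^{2}},
\]

which is precisely the definition of statistical independence; no such factorization follows from uncorrelatedness for general densities.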

A random process can be (and often is) defined in terms of the random variable terminology introduced in Section 3.8. We include this alternate definition for completeness. Limiting ourselves to a single time function X(·), it is seen that X(t) is completely specified as a random process by the specification of all possible finite-order joint distribution functions of the infinite set of random variables Xₜ, −∞ < t < ∞, defined by the equations... [Pg.162]

Now consider N pairs of random variables each having the distribution above, and each pair being statistically independent of all other pairs. Define w = Σₙ₌₁ᴺ wₙ, z = Σₙ₌₁ᴺ zₙ, and define H_N(r,t) to be the joint moment generating function of w, z. [Pg.232]
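
Because the N pairs are independent and identically distributed, the joint moment generating function of the sums is the N-th power of the single-pair one, a step worth making explicit (H₁ here denotes the joint MGF of one pair (w₁, z₁)):

\[
H_N(r,t) = E\!\left[e^{rw + tz}\right]
= \prod_{n=1}^{N} E\!\left[e^{r w_n + t z_n}\right]
= \left[H_1(r,t)\right]^{N}.
\]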

Here n is an operator of molecular axis orientation. In the classical description, it is just a unit vector directed along the rotator axis. Angle α sets the declination of the rotator from the liquid cage axis. Now a random variable, which is conserved for the fixed form of the cell and varies with its hopping transformation, is a joint set of vectors e, V, where V = V₁, ..., V_L, .... Since the former is determined by a break of the symmetry and the latter by the distance between the molecule and its environment, they are assumed to vary independently. This means that in addition to (7.17), we have... [Pg.242]

The above definition can be extended to include an arbitrary number of random variables. For example, the one-point joint velocity PDF f_U(V; x, t) describes all three velocity... [Pg.48]
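
In the usual transported-PDF notation (V being the velocity sample-space variable), the definition being extended here is presumably the standard one-point one,

\[
f_{\mathbf{U}}(\mathbf{V};\mathbf{x},t)\, d\mathbf{V}
= P\!\left[\mathbf{V} \le \mathbf{U}(\mathbf{x},t) < \mathbf{V} + d\mathbf{V}\right],
\]

i.e., the probability that the random velocity vector at the single space-time point (x, t) lies in an infinitesimal cell around V.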

For Gaussian random variables, an extensive theory exists relating the joint, marginal, and conditional velocity PDFs (Pope 2000). For example, if the one-point joint velocity PDF is Gaussian, then it can be shown that the following properties hold ... [Pg.50]

More precisely, the Fourier coefficients in (4.27) can be replaced by random variables with the following properties: κ · Uκ = 0 and ⟨Uκ⟩ = 0 for all κ such that |κ| > κc. An energy-conserving scheme would also require that the expected value of the residual kinetic energy be the same for all choices of the random variables. The LES velocity PDF is a conditional PDF that can be defined in the usual manner by starting from the joint PDF for the discrete Fourier coefficients Uκ. [Pg.126]

Alternatively, an LES joint velocity, composition PDF can be defined in which both φ and U are random variables. In either case, the corresponding sample-space fields are assumed to be known. [Pg.128]

However, care must be taken to avoid the singularity that occurs when C is not full rank. In general, the rank of C will be equal to the number of random variables needed to define the joint PDF. Likewise, its rank deficiency will be equal to the number of random variables that can be expressed as linear functions of other random variables. Thus, the covariance matrix can be used to decompose the composition vector into its linearly independent and linearly dependent components. The joint PDF of the linearly independent components can then be approximated by (5.332). [Pg.239]
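
A minimal numerical sketch of this decomposition (the function name and tolerance are my own, and the source's approximation (5.332) is not reproduced here): the rank of the covariance matrix is read off from its eigenvalues, and the eigenvectors split the composition vector into linearly independent directions (nonzero eigenvalues) and linearly dependent ones (the null space).

```python
import numpy as np

def split_composition(samples, tol=1e-10):
    """Split composition samples into linearly independent/dependent
    directions via the eigendecomposition of the covariance matrix."""
    C = np.cov(samples, rowvar=False)         # Ns x Ns covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)      # eigenvalues in ascending order
    keep = eigvals > tol * eigvals.max()      # numerical rank test
    independent = eigvecs[:, keep]            # span of the independent part
    dependent = eigvecs[:, ~keep]             # null space: linear constraints
    return independent, dependent

# Example: three scalars with phi3 = phi1 + phi2 (rank-deficient covariance).
rng = np.random.default_rng(0)
phi12 = rng.normal(size=(1000, 2))
samples = np.column_stack([phi12, phi12.sum(axis=1)])
indep, dep = split_composition(samples)
print("rank:", indep.shape[1])                # -> 2
print("constraint direction:", dep.ravel())   # ~ (1, 1, -1)/sqrt(3)
```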

As we saw in Chapter 1, the one-point joint velocity, composition PDF contains random variables representing the three velocity components and all chemical species at a particular spatial location. The restriction to a one-point description implies the following. [Pg.260]

We have seen that the joint velocity, composition PDF treats both the velocity and the compositions as random variables. However, as noted in Section 6.1, it is possible to carry out transported PDF simulations using only the composition PDF. By definition, f_φ(ψ; x, t) can be found from f_{U,φ}(V, ψ; x, t) using (6.3). The same definition can be used with the transported PDF equation derived in Section 6.2 to find a transport equation for f_φ(ψ; x, t). [Pg.268]
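
Equation (6.3) is not reproduced in the excerpt; the relation it refers to is presumably the usual marginalization of the joint PDF over velocity sample space,

\[
f_{\phi}(\boldsymbol{\psi};\mathbf{x},t)
= \int_{\mathbb{R}^{3}} f_{\mathbf{U},\phi}(\mathbf{V},\boldsymbol{\psi};\mathbf{x},t)\, d\mathbf{V}.
\]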

The most common choice is for the components of Z to be uncorrelated standardized Gaussian random variables. For this case, ⟨εZ | z⟩ = εZ = diag(εZ,1, ..., εZ,Ns), i.e., the conditional joint scalar dissipation rate matrix is constant and diagonal. [Pg.300]

In the joint velocity, composition PDF description, the user must supply an external model for the turbulence time scale τu. Alternatively, one can develop a higher-order PDF model wherein the turbulence frequency ω is treated as a random variable (Pope 2000). In these models, the instantaneous turbulence frequency is defined as... [Pg.340]

An important concept is the marginal density function, which is most easily explained with the joint bivariate distribution of the two random variables X and Y and its density fXY(x, y). The marginal density function fX(x) is the density function for X, obtained by integrating fXY over the whole range of variation of Y. If X and Y are defined over ℝ², we get... [Pg.201]
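
The truncated formula is presumably the standard marginalization,

\[
f_X(x) = \int_{-\infty}^{\infty} f_{XY}(x, y)\, dy .
\]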

Two random variables X and Y are independent if their joint density function fXY can be factored as a product of two density functions, each involving one variable, e.g.,... [Pg.201]
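
The elided factorization is the standard one; for instance, a standard bivariate normal density with correlation ρ = 0 splits exactly this way:

\[
f_{XY}(x,y) = f_X(x)\, f_Y(y), \qquad
\frac{e^{-(x^2+y^2)/2}}{2\pi}
= \frac{e^{-x^2/2}}{\sqrt{2\pi}} \cdot \frac{e^{-y^2/2}}{\sqrt{2\pi}} .
\]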

The confidence intervals defined for a single random variable become confidence regions for jointly distributed random variables. In the case of a multivariate normal distribution, the equation of the surface limiting the confidence region of the mean vector will now be shown to be an n-dimensional ellipsoid. Let us assume that X is a vector of n normally distributed variables with mean n-column vector μ and covariance matrix ΣX. A sample of m observations has a mean vector x̄ and an n × n covariance matrix S. [Pg.212]
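
A minimal Python sketch of the resulting ellipsoidal region, assuming the derivation proceeds via Hotelling's T² statistic (the standard route; the function and variable names are mine): a candidate mean μ₀ lies inside the 100(1 − α)% confidence ellipsoid when m(x̄ − μ₀)ᵀS⁻¹(x̄ − μ₀) ≤ n(m − 1)/(m − n) · F_{n,m−n;1−α}.

```python
import numpy as np
from scipy import stats

def in_confidence_ellipsoid(X, mu0, alpha=0.05):
    """Hotelling T^2 test: is mu0 inside the (1-alpha) confidence
    ellipsoid for the mean of the rows of X (m observations x n vars)?"""
    m, n = X.shape
    xbar = X.mean(axis=0)
    S = np.cov(X, rowvar=False)                # n x n sample covariance
    d = xbar - mu0
    t2 = m * d @ np.linalg.solve(S, d)         # Hotelling's T^2 statistic
    bound = n * (m - 1) / (m - n) * stats.f.ppf(1 - alpha, n, m - n)
    return t2 <= bound

rng = np.random.default_rng(1)
X = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.5], [0.5, 2.0]], size=50)
print(in_confidence_ellipsoid(X, np.array([0.0, 0.0])))   # usually True
```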

The simplest of these models which permits a detailed discussion of the decay of correlations is a random walk model in which a set of random walkers whose positions are initially correlated is allowed to diffuse, the motion of any single random walker being independent of any other member of the set. Let us assume that there are r particles in the set and motion occurs on a discrete lattice. The state of the system is, therefore, completely specified by the probabilities Pr(n₁, n₂, ..., n_r; t), (n_j = −1, 0, 1, 2, ...), in which Pr(n; t) is the joint probability that particle 1 is at n₁, particle 2 is at n₂, etc., at time t. We will also use the notation N_j(t) for the random variable that is the position of random walker j at time t. Reduced probability distributions can be defined in terms of the Pr(n; t) by summation. We will use the notation P(n_{i₁}, n_{i₂}, ..., n_{iⱼ}; t) to denote the distribution of random walkers i₁, i₂, ..., iⱼ at time t. We define... [Pg.200]
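
A small simulation sketch of this model (all names are mine, and the initial correlation is a simple common-offset construction, not the source's): r walkers start at correlated positions and then take independent ±1 steps; the reduced distribution of one walker is obtained by summing the joint statistics over the others' coordinates, and the initial correlation decays as the walks diffuse.

```python
import numpy as np

rng = np.random.default_rng(2)
r, T, trials = 3, 20, 20000

# Correlated initial positions: a shared offset plus individual noise.
shared = rng.integers(-2, 3, size=trials)
n0 = shared[:, None] + rng.integers(-1, 2, size=(trials, r))

# Independent +/-1 steps for each walker at each time step.
steps = rng.choice([-1, 1], size=(trials, r, T))
nT = n0 + steps.sum(axis=2)                    # positions N_j(T)

# Reduced distribution of walker 1: marginalize (sum) over walkers 2..r,
# which here amounts to histogramming its coordinate alone.
vals, counts = np.unique(nT[:, 0], return_counts=True)
print(dict(zip(vals.tolist(), (counts / trials).tolist())))

# Decay of correlations: initial vs final correlation of walkers 1 and 2.
print(np.corrcoef(n0[:, 0], n0[:, 1])[0, 1],
      np.corrcoef(nT[:, 0], nT[:, 1])[0, 1])
```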

Continuing to use the data in Exercise 1, consider, once again, only the nonzero observations. Suppose that the sampling mechanism is as follows: y and another normally distributed random variable, z, have population correlation 0.7. The two variables, y and z, are sampled jointly. When z is greater than zero, y is reported. When z is less than zero, both z and y are discarded. Exactly 35 draws were required in order to obtain the preceding sample. Estimate μ and σ. [Hint: Use Theorem 20.4.] [Pg.113]
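
A sketch of the sampling mechanism itself (the parameter values are illustrative, not those of Exercise 1): y and z are drawn jointly from a bivariate normal with correlation 0.7, and y is kept only when z > 0, so the retained y's are incidentally truncated and their sample mean overstates μ by the inverse-Mills-ratio term.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
mu, sigma, rho = 1.0, 2.0, 0.7

# Joint draws of (y, z): corr(y, z) = 0.7, z ~ N(0, 1), y ~ N(mu, sigma^2).
cov = [[sigma**2, rho * sigma],
       [rho * sigma, 1.0]]
y, z = rng.multivariate_normal([mu, 0.0], cov, size=200000).T

kept = y[z > 0]                              # y is reported only when z > 0

# Incidental truncation: E[y | z > 0] = mu + rho*sigma*lam, where
# lam = phi(0) / (1 - Phi(0)) is the inverse Mills ratio at zero.
lam = stats.norm.pdf(0) / (1 - stats.norm.cdf(0))
print(kept.mean(), mu + rho * sigma * lam)   # both approx 2.12
```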

A physical system S that evolves probabilistically in time can be mathematically described by a time-dependent random variable X(t). It is assumed that (1) one can measure values x₁, x₂, x₃, ..., xₙ of X(t) at instants t₁, t₂, t₃, ..., tₙ representing the possible states of the system S, and (2) one can define a set of joint probability distribution functions... [Pg.78]
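
The truncated set of functions is presumably the standard hierarchy

\[
F_n(x_1, t_1;\, x_2, t_2;\, \dots;\, x_n, t_n)
= P\!\left[X(t_1) \le x_1,\; X(t_2) \le x_2,\; \dots,\; X(t_n) \le x_n\right],
\qquad n = 1, 2, \dots,
\]

one joint distribution function for every finite set of measurement instants.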

For scalar continuous random variables X and Y with joint probability density f(x, y), marginals and conditionals are defined as... [Pg.364]
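
The elided definitions are the standard ones:

\[
f_X(x) = \int f(x, y)\, dy, \qquad
f_Y(y) = \int f(x, y)\, dx, \qquad
f_{Y\mid X}(y \mid x) = \frac{f(x, y)}{f_X(x)} .
\]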

A function that gives the probability that each of two or more random variables takes on a particular value. [Pg.177]

