Random-variable technique

Partial differential equations may be written directly using an infinitesimal generator technique, called the random-variable technique, given in Bailey [387]. For intensity functions of the form (9.33), we define the operator notation... [Pg.266]

Monte Carlo simulation is a numerical experimentation technique for obtaining the statistics of the output variables of a function, given the statistics of the input variables. In each experiment or trial, the values of the input random variables are sampled according to their distributions, and the output variables are calculated using the computational model. Central to the technique is the generation of a set of random numbers, which can then be used to generate a random variable from a given distribution. Because of the large number of trials required, the simulation can only be performed on a computer. [Pg.368]
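
A minimal sketch of this workflow is shown below; the computational model and the input distributions are illustrative assumptions, not taken from the source.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def model(x1, x2):
    # Hypothetical computational model relating the inputs to an output variable.
    return x1 ** 2 + 3.0 * x2

n_trials = 100_000
# Sample the input random variables according to their assumed distributions.
x1 = rng.normal(loc=10.0, scale=1.0, size=n_trials)   # normally distributed input
x2 = rng.uniform(low=0.0, high=2.0, size=n_trials)    # uniformly distributed input

y = model(x1, x2)                                     # one model evaluation per trial

# Statistics of the output variable estimated from the trials.
print(f"mean = {y.mean():.3f}, std = {y.std(ddof=1):.3f}")
print(f"95th percentile = {np.percentile(y, 95):.3f}")
```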

This is a technique developed during World War II for simulating stochastic physical processes, specifically neutron transport in atomic bomb design. Its name comes from its resemblance to gambling. Each of the random variables in a relationship is represented by a distribution (Section 2.5). A random number generator picks a number from the distribution with a probability proportional to the pdf. After physically weighting the random numbers for each of the stochastic variables, the relationship is calculated to find the value of the dependent variable (the top event, if a fault tree) for this particular combination of independent variables (e.g., components). [Pg.59]
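
As a hedged illustration of the fault-tree case, the sketch below samples hypothetical component failures and estimates the top-event probability; the tree structure and failure probabilities are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Assumed component failure probabilities for a hypothetical fault tree:
# the top event occurs if (A fails AND B fails) OR C fails.
p_A, p_B, p_C = 0.05, 0.10, 0.01
n_trials = 1_000_000

a = rng.random(n_trials) < p_A   # sample each component state in every trial
b = rng.random(n_trials) < p_B
c = rng.random(n_trials) < p_C

top_event = (a & b) | c          # evaluate the fault-tree logic per trial

print(f"estimated top-event probability = {top_event.mean():.5f}")
# Exact value for comparison: p_A*p_B + p_C - p_A*p_B*p_C = 0.01495
```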

The technique for calculating the moments of a random variable from its characteristic function can be derived by first differentiating both sides of Eq. (3-77) n times with respect to v... [Pg.127]
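
For reference, the standard result that this differentiation produces (written here for a characteristic function φ_X(v) = E[e^{ivX}], which is assumed to be what Eq. (3-77) defines) is:

```latex
\varphi_X(v) = \mathbb{E}\!\left[e^{\,ivX}\right], \qquad
\left.\frac{d^{\,n}\varphi_X(v)}{dv^{\,n}}\right|_{v=0} = i^{\,n}\,\mathbb{E}\!\left[X^{\,n}\right]
\quad\Longrightarrow\quad
\mathbb{E}\!\left[X^{\,n}\right] = i^{-n}\,\varphi_X^{(n)}(0).
```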

Finally, observe from Eq. (4-112) that D(x1, y) and D(xm, y) are both defined as sums of random variables, and thus, using Eqs. (4-116) and (4-117), the problem of bounding Pe has been reduced to the problem of bounding the tails of the distributions of sums of random variables. This is best done by the Chernoff bound technique, briefly described in the following paragraphs. For a more detailed exposition, see Fano [16], Chapter 8. [Pg.230]
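
In standard notation (not necessarily that of the cited text), the bound for a sum S_n = X_1 + ... + X_n of independent random variables follows from applying Markov's inequality to e^{sS_n}:

```latex
\Pr\{S_n \ge a\} \;\le\; e^{-sa}\,\mathbb{E}\!\left[e^{\,sS_n}\right]
\;=\; e^{-sa}\prod_{i=1}^{n}\mathbb{E}\!\left[e^{\,sX_i}\right], \qquad s > 0,
```

and the bound is tightened by minimizing the right-hand side over s > 0.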

The term Monte Carlo is often used to describe a wide variety of numerical techniques that are applied to solve mathematical problems by means of the simulation of random variables. The intuitive concept of a random variable is a simple one: it is a variable that may take any value from a given set, but we do not know in advance which value it will take in a concrete case. The simplest example at hand is that of flipping a coin. We know that we will get heads or tails, but we do not know which of these two cases will result from the next toss. Experience shows that if the coin is a fair one and we flip it many times, we obtain an average of approximately half heads and half tails. So we say that the probability p of obtaining a given side of the coin is 1/2. A random variable is defined in terms of the values it may take and the related probabilities. In the example we consider, we may write... [Pg.668]
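
A small sketch of this definition in code (the number of flips and the seed are arbitrary choices for the illustration):

```python
import numpy as np

rng = np.random.default_rng(seed=2)

# A coin toss as a discrete random variable: its possible values and their probabilities.
values = np.array(["head", "tail"])
probabilities = np.array([0.5, 0.5])

n_flips = 100_000
flips = rng.choice(values, size=n_flips, p=probabilities)

# With many flips the empirical frequencies approach the defined probabilities.
print(f"fraction of heads = {(flips == 'head').mean():.4f}")  # close to 0.5
```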

The third step is to select the number of iterations, or calculations of dose, that are to be performed as part of each simulation. For the analysis here, a total of 10,000 iterations based on the selection of input variables from each defined distribution were performed as part of each simulation. The large number of iterations performed, as well as the Latin hypercube sampling (a stratified rather than simple random sampling) technique employed by the Crystal Ball simulation program, ensured that the input distributions were well characterized, that all portions of the distribution (such as the tails) were included in the analysis, and that the resulting exposure distributions were stable. [Pg.38]
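
A hedged sketch of Latin hypercube sampling for a dose calculation is given below; it uses scipy.stats.qmc (SciPy 1.7 or later) rather than the Crystal Ball software named in the source, and the input distributions and dose model are invented for the illustration.

```python
import numpy as np
from scipy.stats import qmc, norm, lognorm

# Latin hypercube sample of two inputs, stratified over the unit square.
sampler = qmc.LatinHypercube(d=2, seed=3)
u = sampler.random(n=10_000)            # uniform [0, 1) samples, one column per input

# Map the uniform samples onto the assumed input distributions
# via the inverse CDF (percent-point function).
intake = norm.ppf(u[:, 0], loc=2.0, scale=0.3)           # hypothetical intake, mg/day
body_weight = lognorm.ppf(u[:, 1], 0.2, scale=70.0)      # hypothetical body weight, kg

dose = intake / body_weight             # simple illustrative dose model

print(f"median dose = {np.median(dose):.4f}")
print(f"95th percentile dose = {np.percentile(dose, 95):.4f}")
```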

Most techniques for process data reconciliation start with the assumption that the measurement errors are random variables obeying a known statistical distribution, and that the covariance matrix of measurement errors is given. In Chapter 10 direct and indirect approaches for estimating the variances of measurement errors are discussed, as well as a robust strategy for dealing with the presence of outliers in the data set. [Pg.26]
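
A minimal sketch of this starting point, assuming a single linear balance constraint, independent measurement errors with a known covariance matrix, and the classical weighted least-squares reconciliation formula (the stream flows and error variances are invented):

```python
import numpy as np

# Hypothetical linear balance constraint A @ x_true = 0 for three stream flows:
# stream 1 splits into streams 2 and 3, so x1 - x2 - x3 = 0.
A = np.array([[1.0, -1.0, -1.0]])

# Raw measurements and the (assumed known) covariance of the measurement errors.
y = np.array([100.0, 61.0, 42.0])      # balance residual of the raw data: 100-61-42 = -3
Sigma = np.diag([2.0, 1.0, 1.0]) ** 2  # independent errors with the given std devs

# Weighted least-squares reconciliation:
# x_hat = y - Sigma A^T (A Sigma A^T)^{-1} A y
correction = Sigma @ A.T @ np.linalg.solve(A @ Sigma @ A.T, A @ y)
x_hat = y - correction

print("reconciled flows:", np.round(x_hat, 3))
print("balance residual after reconciliation:", (A @ x_hat).item())  # ~0
```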

Monte Carlo is a probabilistic technique for simulating the outcome of an equation or model involving random variables. The frequency distribution of simulated outcomes is an estimate of the distribution of random outcomes from the equation or model that is being simulated. [Pg.497]

The major objective in SPC is to use process data and statistical techniques to determine whether the process operation is normal or abnormal. The SPC methodology is based on the fundamental assumption that normal process operation can be characterized by random variations around a mean value. The random variability is caused by the cumulative effects of a number of largely unavoidable phenomena such as electrical measurement noise, turbulence, and random fluctuations in feedstock or catalyst preparation. If this situation exists, the process is said to be in a state of statistical control (or in control), and the control chart measurements tend to be normally distributed about the mean value. By contrast, frequent control chart violations would indicate abnormal process behavior or an out-of-control situation. Then a search would be initiated to attempt to identify the assignable cause or the special cause of the abnormal behavior... [Pg.37]
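
The sketch below illustrates the in-control assumption and a simple 3-sigma control chart check; the target mean, standard deviation, and injected shift are assumptions made for the example, not values from the source.

```python
import numpy as np

rng = np.random.default_rng(seed=4)

# In-control behaviour: random variation about a mean (illustrative numbers).
target_mean, sigma = 50.0, 2.0
measurements = rng.normal(target_mean, sigma, size=200)
measurements[150:] += 5.0            # inject an assignable-cause shift for illustration

# Shewhart-style control limits at +/- 3 standard deviations about the mean.
ucl = target_mean + 3 * sigma
lcl = target_mean - 3 * sigma

violations = np.flatnonzero((measurements > ucl) | (measurements < lcl))
print(f"control limits: [{lcl:.1f}, {ucl:.1f}]")
print(f"out-of-control samples: {violations.tolist()[:10]} ...")
```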

A probability distribution is a mathematical function that relates probabilities to specified intervals of a continuous quantity, or to values of a discrete quantity, for a random variable. Probability distribution models can be non-parametric or parametric. A non-parametric probability distribution can be described by rank ordering continuous values and estimating the empirical cumulative probability associated with each. Parametric probability distribution models can be fit to data sets by estimating their parameter values from the data. The adequacy of the parametric probability distribution models as descriptors of the data can be evaluated using goodness-of-fit techniques. Distributions such as the normal, lognormal, and others are examples of parametric probability distribution models. [Pg.99]
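
A hedged sketch of both routes, using synthetic data and SciPy (the lognormal choice and the Kolmogorov-Smirnov test are illustrative; the source does not prescribe either):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=5)

# Synthetic data standing in for an observed continuous quantity.
data = rng.lognormal(mean=1.0, sigma=0.5, size=500)

# Parametric model: fit a lognormal by estimating its parameters from the data.
shape, loc, scale = stats.lognorm.fit(data, floc=0)

# Goodness of fit: Kolmogorov-Smirnov test of the data against the fitted model.
ks_stat, p_value = stats.kstest(data, "lognorm", args=(shape, loc, scale))
print(f"fitted shape = {shape:.3f}, scale = {scale:.3f}")
print(f"KS statistic = {ks_stat:.3f}, p-value = {p_value:.3f}")

# Non-parametric alternative: the empirical CDF from rank-ordered values.
x_sorted = np.sort(data)
ecdf = np.arange(1, len(data) + 1) / len(data)
```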

This problem contains 31 variables and 29 equality constraints (or governing equations), including the objective function, which leaves 2 variables as independent (or decision) variables. For practical reasons, the saturation pressure of the steam, P, and the fraction of steam generated in the evaporator that is reused for heating are selected as the independent variables. A random search technique (26) is adopted to locate the optimal point for each given e. The results are tabulated in Table I, and the trade-off curve is plotted in Figure 3. The relationship between these two objectives is obtained by the least-squares method as... [Pg.314]

In this section we consider the statistical techniques of correlation and regression analysis to study the interrelationship between two continuous random variables (X1, X2), using the information supplied by a sample of n pairs of observations (x1,1, x1,2), (x2,1, x2,2), ..., (xn,1, xn,2) from a population W. In correlation analysis we assume that the sample has been obtained at random, and in regression analysis (linear or nonlinear) we assume that the values of one of the variables are not subject to error (the independent variable, X1), while the dependent variable, X2, is related to the independent variable by means of a mathematical model (X2 = f(X1) + ε). [Pg.688]
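
A brief sketch of both analyses on synthetic paired observations (the linear model, noise level, and use of SciPy's pearsonr and linregress are assumptions made for the illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=6)

# n paired observations (x_i1, x_i2) of two continuous random variables.
n = 50
x1 = rng.uniform(0.0, 10.0, size=n)
x2 = 2.0 + 1.5 * x1 + rng.normal(0.0, 1.0, size=n)   # linear model plus random error

# Correlation analysis: strength of the linear association.
r, p_value = stats.pearsonr(x1, x2)

# Regression analysis: fit X2 = f(X1) + error with X1 treated as error-free.
result = stats.linregress(x1, x2)

print(f"Pearson r = {r:.3f} (p = {p_value:.3g})")
print(f"fitted model: X2 = {result.intercept:.3f} + {result.slope:.3f} * X1")
```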

Consider two sets of measurements of a random variable X, for example, the percentage conversion in the same batch reactor measured using two different experimental techniques. Scatter plots of X versus run number are shown in Figure 2.5-1. The sample mean of each set is 70%, but the measured values scatter over a much narrower range for the first set (from 68% to 73%) than for the second set (from 52% to 95%). In each case you would estimate the true value of X for the given experimental conditions as the sample mean, 70%, but you would clearly have more confidence in the estimate for Set (a) than in that for Set (b). [Pg.18]
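
A small sketch that quantifies this difference in confidence; the measurement values below are invented so that both sets have a mean of 70% and spreads similar to those described:

```python
import numpy as np

# Illustrative measurements of X (percentage conversion) from two techniques.
set_a = np.array([70.5, 68.9, 71.2, 69.4, 72.6, 68.3, 70.1, 69.0])
set_b = np.array([70.0, 52.4, 94.7, 63.1, 80.9, 55.6, 88.2, 55.1])

for name, x in [("a", set_a), ("b", set_b)]:
    mean = x.mean()
    std = x.std(ddof=1)               # sample standard deviation
    sem = std / np.sqrt(len(x))       # standard error of the mean
    print(f"set ({name}): mean = {mean:.1f}%, s = {std:.1f}%, "
          f"std error of mean = {sem:.1f}%")
```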

Despite the success of direct methods, there are still certain structures that are not readily solved, if at all, by these methods. This may be due to a breakdown in the certainty with which the triplet relationships are developed, or to a breakdown of the assumption that the atomic position vectors form a set of random variables, so that the probability functions that are derived may no longer be strictly valid. Some new developments for dealing with these problems have focused on improving the techniques for calculating the triplets and on including in the direct methods procedure as much structural information as possible, such as... [Pg.297]

Use the random optimization technique to find extreme values of the following functions. Determine all extreme values of f and the values of the independent variables at which the extremes occur. [Pg.218]
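
A minimal sketch of a pure random search for a minimum, using an assumed two-variable test function and search box (any of the exercise's functions could be substituted):

```python
import numpy as np

rng = np.random.default_rng(seed=7)

def f(x):
    # Assumed test function; replace with the function to be extremized.
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2 + 3.0

# Pure random search: sample points uniformly in a box and keep the best one found.
lower = np.array([-5.0, -5.0])
upper = np.array([5.0, 5.0])
n_samples = 50_000

points = rng.uniform(lower, upper, size=(n_samples, 2))
values = np.array([f(p) for p in points])

i_min = values.argmin()
print(f"approximate minimum f = {values[i_min]:.4f} at x = {points[i_min].round(3)}")
# The true minimum is f = 3 at x = (1, -2); accuracy improves with more samples.
```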

Let the pdf of the rth highest concentration out of a sample of m values be denoted p_{r,m}(c). Once this pdf is known, all the statistical properties of the random variable c_{r,m} can be determined. However, the integrals involved in the expressions for the expectation and higher-order moments are not always easily evaluated, and thus there arises the need for techniques of approximation. The most important result concerns the evaluation of the expected value of c_{r,m}. In fact, for sufficiently large m, an approximation for E{c_{r,m}}... [Pg.1162]
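
Assuming the m concentrations are independent and identically distributed with cdf F(c) and pdf f(c), the pdf of the rth highest value takes the standard order-statistic form (the large-m approximation for the expectation referred to above is not reproduced here):

```latex
p_{r,m}(c) \;=\; \frac{m!}{(r-1)!\,(m-r)!}\,
\bigl[1 - F(c)\bigr]^{\,r-1}\,\bigl[F(c)\bigr]^{\,m-r}\, f(c),
\qquad
\mathbb{E}\{c_{r,m}\} \;=\; \int_{-\infty}^{\infty} c\, p_{r,m}(c)\,dc .
```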

The formation of aptamers rests upon a technique termed SELEX (systematic evolution of ligands by exponential enrichment). It involves the synthesis of a DNA molecule with constant sections at both ends and a randomly variable segment of nucleotides in the middle. A variable segment as short as 10 nucleotides has a combinatorial library of 4^10, or about 10^6, different sequences; a variable sequence of 30 nucleotides generates a combinatorial library of 4^30, or about 10^18, different sequences. The DNA mixture so synthesized is then enzymatically transcribed to a combinatorial RNA library. This mixture is applied to a chromatography column in which the substance of interest (e.g., amino acid, protein, drug) is bound. RNA molecules in the combinatorial library that do not bind with the target compound are simply washed out of the column. [Pg.394]

