The randomness parameter

A solution based on perfect information would yield optimal first-stage decisions for each realization of the random parameter. The expected value of these decisions, known as the wait-and-see (WS) solution, can then be written as (Madansky, 1960) ... [Pg.165]

The other quantity of interest is the VSS. In order to quantify it, we first need to solve the mean value problem, also referred to as the expected value (EV) problem. This can be defined as Min z(x, ξ̄), where ξ̄ = E[ξ] (Birge, 1982). The solution of the EV problem provides the first-stage decision variables evaluated at the expectation of the random realizations. The expectation of the EV solution, evaluated at the different realizations of the random parameters, is then defined as (Birge, 1982) ... [Pg.166]
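To make these definitions concrete, the following sketch computes WS, the here-and-now (recourse problem) value RP, the expected result of using the EV solution (EEV), and from these the value of the stochastic solution VSS = EEV - RP and the expected value of perfect information EVPI = RP - WS, for a hypothetical single-product newsvendor problem with three discrete demand scenarios; the cost, price, demands and probabilities are invented purely for illustration.

import numpy as np

# Hypothetical discrete demand scenarios and probabilities (illustrative only).
demand = np.array([60.0, 100.0, 140.0])
prob   = np.array([0.3, 0.5, 0.2])
c, p   = 2.0, 5.0                      # assumed unit cost and selling price

def cost(x, d):
    """Two-stage cost: purchase cost minus sales revenue for order x, demand d."""
    return c * x - p * np.minimum(x, d)

candidates = np.linspace(0.0, demand.max(), 1401)   # first-stage decisions to scan

# RP: here-and-now solution minimising the expected cost over all scenarios.
exp_cost = np.array([np.dot(prob, cost(x, demand)) for x in candidates])
RP = exp_cost.min()

# WS: solve each scenario with perfect information, then take the expectation.
WS = np.dot(prob, [min(cost(x, d) for x in candidates) for d in demand])

# EV problem: replace the random demand by its mean, solve, then evaluate
# that first-stage decision against every scenario (EEV).
d_bar = np.dot(prob, demand)
x_EV  = candidates[np.argmin(cost(candidates, d_bar))]
EEV   = np.dot(prob, cost(x_EV, demand))

print(f"WS = {WS:.1f}, RP = {RP:.1f}, EEV = {EEV:.1f}")
print(f"EVPI = RP - WS = {RP - WS:.1f}")
print(f"VSS  = EEV - RP = {EEV - RP:.1f}")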

Uncertainty analysis for multiparameter models may require assigning sampling distributions to many random parameters. In that case, a single value is drawn from each of the respective sampling distributions during each Monte Carlo iteration. After each random draw, the generated values of the random parameters... [Pg.53]
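A minimal sketch of this sampling scheme, assuming a hypothetical two-parameter model and arbitrarily chosen sampling distributions; one value per parameter is drawn in every iteration and the model is evaluated with that joint sample.

import numpy as np

rng = np.random.default_rng(seed=1)
n_iter = 10_000

# Assumed sampling distributions for two hypothetical random parameters.
def draw_parameters():
    k    = rng.lognormal(mean=0.0, sigma=0.3)    # e.g. a rate constant
    temp = rng.normal(loc=298.0, scale=5.0)      # e.g. a temperature, K
    return k, temp

def model(k, temp):
    # Placeholder model; substitute the actual deterministic model here.
    return k * np.exp(-1000.0 / temp)

# One Monte Carlo iteration = one joint random draw + one model evaluation.
outputs = np.array([model(*draw_parameters()) for _ in range(n_iter)])
print(outputs.mean(), outputs.std(), np.percentile(outputs, [2.5, 97.5]))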

First-order error analysis is a method for propagating uncertainty in the random parameters of a model into the model predictions using a fixed-form equation. This method is not a simulation like Monte Carlo but uses statistical theory to develop an equation that can easily be solved on a calculator. The method works well for linear models, but its accuracy decreases as the model becomes more nonlinear. As a general rule, linear models that can be written down on a piece of paper work well with first-order error analysis. Complicated models that consist of a large number of pieced equations (like large exposure models) cannot be evaluated using first-order analysis. To use the technique, the partial derivative of the model with respect to each random parameter must be solvable. [Pg.62]
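A minimal sketch of first-order error propagation for a simple hypothetical model, using the standard first-order variance relation Var(y) ≈ Σi (∂f/∂xi)² Var(xi) for independent random parameters; the model, means and standard deviations below are invented, and the partial derivatives are approximated numerically rather than solved analytically.

import numpy as np

def f(x):
    # Simple hypothetical model: y = x0 * x1**2 / x2
    return x[0] * x[1] ** 2 / x[2]

mean = np.array([2.0, 3.0, 4.0])   # assumed parameter means
std  = np.array([0.2, 0.1, 0.4])   # assumed parameter standard deviations

def gradient(func, x, h=1e-6):
    # Partial derivatives at the mean, approximated by central differences.
    g = np.zeros_like(x)
    for i in range(len(x)):
        xp, xm = x.copy(), x.copy()
        xp[i] += h
        xm[i] -= h
        g[i] = (func(xp) - func(xm)) / (2.0 * h)
    return g

g = gradient(f, mean)
var_y = np.sum((g * std) ** 2)      # first-order variance, independent inputs
print(f"y(mean) = {f(mean):.3f}, std(y) ~ {np.sqrt(var_y):.3f}")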

It is difficult to calculate the likelihood of the data for most pharmacokinetic models because of the nonlinear dependence of the observations on the random parameters ηi and, possibly, εij. To deal with these problems, several approximate methods have been proposed. These methods, apart from the approximation, differ widely in their representation of the probability distribution of interindividual random effects. [Pg.2951]

In order to make the solution to the fractional Langevin equation a multifractal, we assume that the parameter a is a random variable. To construct the traditional measures of multifractal stochastic processes, we calculate the qth moment of the solution (148) by averaging over both the random force and the random parameter to obtain... [Pg.67]

Alternating copolymers have regular structures (AB)n with the randomness parameter ψ = 2 (cf. Chap. 2). They may also be considered to be homopolymers of the new structural unit AB. These copolymers can be formed in two ways ... [Pg.259]

By a sample or a realization we would mean one particular arrangement of the random parameters over the whole system. For a thermodynamic (infinitely large) system the sample space is also infinite. [Pg.12]

The random parameters radiative heat flux, convective heat flux, flame emissivity and convection heat transfer coefficient have been studied with respect to how sensitive the time-to-loss-of-strength of structures affected by fire is to their accuracy. [Pg.2083]

The total Pf analysis to follow is based on the reduced thermal load gradient of 91 °C mentioned in the previous section, which induces a conditional Pf of 0.186. Table V summarizes the 16 random variables, the statistical distribution functions assigned to them, and the corresponding distribution parameters reflecting their scatter. Some of the random parameters from the previous PDS analysis were removed since they did not strongly influence the maximum stresses in the TE and substrate materials. The four Weibull parameters (two Weibull moduli and two scale parameters) for the TE and substrate materials were assumed to be RIV with Gaussian distributions (see Table V). The statistical distribution types used to describe these RIV are not based on data, but still realistically describe their uncertainty. [Pg.169]

Probabilistic response analysis consists of computing the probabilistic characterization of the response of a specific structure, given as input the probabilistic characterization of material, geometric and loading parameters. An approximate method of probabilistic response analysis is the mean-centred First-Order Second-Moment (FOSM) method, in which mean values (first-order statistical moments), variances and covariances (second-order statistical moments) of the response quantities of interest are estimated by using a mean-centred, first-order Taylor series expansion of the response quantities in terms of the random/uncertain model parameters. Thus, this method requires only the knowledge of the first- and second-order statistical moments of the random parameters. It is noteworthy that often statistical information about the random parameters is limited to first and second moments and therefore probabilistic response analysis methods more advanced than FOSM analysis cannot be fully exploited. [Pg.30]
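A minimal sketch of a mean-centred FOSM estimate for a single scalar response quantity, assuming a hypothetical response function and an assumed mean vector and covariance matrix of the random parameters; the mean of the response is approximated by the response at the mean parameters and its variance by grad(r)' Σθθ grad(r), with the sensitivities evaluated at the mean.

import numpy as np

def response(theta):
    # Hypothetical response quantity; replace by the structural model output.
    return theta[0] * np.sin(theta[1]) + theta[2] ** 2

theta_mean = np.array([1.0, 0.5, 2.0])                  # assumed means
cov_theta  = np.array([[0.04, 0.01, 0.00],              # assumed covariance
                       [0.01, 0.02, 0.00],
                       [0.00, 0.00, 0.09]])

# Response sensitivities (gradient) evaluated at the mean parameter values.
h = 1e-6
grad = np.array([
    (response(theta_mean + h * e) - response(theta_mean - h * e)) / (2 * h)
    for e in np.eye(len(theta_mean))
])

mean_r = response(theta_mean)              # first-order mean estimate
var_r  = grad @ cov_theta @ grad           # first-order variance estimate
print(f"E[r] ~ {mean_r:.3f}, std[r] ~ {np.sqrt(var_r):.3f}")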

The approximate response statistics computed through Eqs. (3) and (4) are extremely important in evaluating the variability of the response quantities of interest due to the intrinsic uncertainty of the model parameters and provide information on the statistical correlation between the different response quantities. It is noteworthy that these approximate first- and second-order response statistics can be readily obtained when response sensitivities evaluated at the mean values of the random parameters are... [Pg.30]

Computation by FE analysis of N response curves for each component of the response vector r, corresponding to the N realizations of the random parameter vector θ. [Pg.31]

In this study, the Nataf model (Ditlevsen Madsen 1996) was used to generate realizations of the random parameters θ. It requires specification of the marginal PDFs of the random parameters θ and their correlation coefficients. It is therefore able to reproduce the given first- and second-order statistical moments of the random parameters θ. The same three-dimensional three-story reinforced concrete building presented in Section 2.4, but on rigid supports, is considered as application example. Table 1 provides the marginal distributions and their statistical parameters for the material parameters modelled as correlated random variables. Other details on the modelling of the structure and the statistical correlation of the random parameters can be found in Barbato et al. (2006). [Pg.31]
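A minimal sketch of the sampling step, assuming two hypothetical material parameters with given marginals and a target correlation coefficient. Strictly, the Nataf model corrects the correlation coefficient used in the underlying normal space; the sketch below omits that correction and applies the target coefficient directly (a Gaussian-copula simplification), which is adequate only for mildly non-normal marginals.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 5000

# Assumed marginals for two material parameters (illustrative values):
marg_E  = stats.lognorm(s=0.10, scale=30000.0)   # e.g. concrete modulus, MPa
marg_fy = stats.norm(loc=450.0, scale=30.0)      # e.g. steel yield strength, MPa

rho = 0.4   # target correlation coefficient (used directly, without the
            # Nataf correction of the normal-space correlation)

# Correlated standard normal samples via Cholesky factorisation.
L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
z = rng.standard_normal((n, 2)) @ L.T

# Map through the standard normal CDF and the inverse marginal CDFs.
u = stats.norm.cdf(z)
E_samples  = marg_E.ppf(u[:, 0])
fy_samples = marg_fy.ppf(u[:, 1])

print(np.corrcoef(E_samples, fy_samples)[0, 1])   # close to rho for mild non-normality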

In the last twenty years, various non-deterministic methods have been developed to deal with optimum design under environmental uncertainties. These methods can be classified into two main branches, namely reliability-based methods and robust-based methods. The reliability methods, based on the known probability distribution of the random parameters, estimate the probability distribution of the system's response, and are predominantly used for risk analysis by computing the probability of system failure. However, variation is not minimized in reliability approaches (Siddall, 1984) because they concentrate on rare events at the tail of the probability distribution (Doltsinis and Kang, 2004). The robust design methods are commonly based on multiobjective minimization problems. They are commonly indicated as Multiple Objective Robust Optimization (MORO) and find a set of optimal solutions that optimize a performance index in terms of its mean value and, at the same time, minimize its resulting dispersion due to input parameter uncertainty. The final solution is less sensitive to parameter variation but still maintains feasibility with regard to probabilistic constraints. This is achieved by optimizing the design vector so as to make the performance minimally sensitive to the various causes of variation. [Pg.532]
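A minimal sketch of the robust-design idea, scalarizing the two objectives (mean performance and its dispersion) into a single weighted sum evaluated by sampling the uncertain parameter; the performance function, the distribution of the uncertain parameter and the weight are purely illustrative.

import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)
p_samples = rng.normal(loc=1.0, scale=0.15, size=2000)   # uncertain parameter

def performance(x, p):
    # Hypothetical performance index depending on design x and uncertain p.
    return (x - 2.0 * p) ** 2 + 0.1 * x

def robust_objective(x, weight=1.0):
    f = performance(x, p_samples)
    return f.mean() + weight * f.std()    # optimise the mean, penalise dispersion

det = minimize_scalar(lambda x: performance(x, 1.0), bounds=(0, 5), method="bounded")
rob = minimize_scalar(robust_objective, bounds=(0, 5), method="bounded")
print(f"deterministic optimum x = {det.x:.3f}, robust optimum x = {rob.x:.3f}")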

The second type of inference is where we find an interval of possible values that has a specific probability of containing the true parameter value. In the Bayesian approach, we have the posterior distribution of the parameter given the data. Hence we can calculate an interval that has the specified posterior probability of containing the random parameter θ. These are called credible intervals. [Pg.51]
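As a small illustration with invented data, a 95% equal-tailed credible interval for a binomial success probability under a Beta(1, 1) prior, for which the posterior is the conjugate Beta(1 + s, 1 + n - s).

from scipy import stats

n, s = 40, 27                               # hypothetical trials and successes
posterior = stats.beta(1 + s, 1 + n - s)    # Beta(1, 1) prior -> conjugate update

lower, upper = posterior.ppf([0.025, 0.975])
print(f"95% credible interval for theta: [{lower:.3f}, {upper:.3f}]")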

The microstructure of the equilibrium copolymer can be predicted on the basis of the randomness parameter ψ, which is defined as the sum of the conditional probabilities of finding copolymer units followed by different units. [Pg.57]
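In the usual diad notation this definition can be written (a sketch, with P(A→B) denoting the conditional probability that an A unit is followed by a B unit) as

    ψ = P(A→B) + P(B→A)

so that ψ = 1 for an ideally random (Bernoullian) copolymer, ψ = 2 for a strictly alternating copolymer, and ψ → 0 for long block sequences.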

The perturbation method starts with a Taylor series expansion of the solution, the external loading, and the stochastic stiffness matrix in terms of the random variables introduced by the discretization of the random parameter field. The unknown coefficients in the expansion of the solution are obtained by equating terms of equal order in the expansion. From this, approximations of the first two statistical moments can be obtained. The perturbation method is computationally more efficient than direct Monte Carlo simulation. However, higher-order approximations will increase the computational effort dramatically, and therefore accurate results are obtained for small coefficients of variation only. [Pg.3471]
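A minimal sketch of the first-order perturbation step for a small discretized system K(α)u = f with a single zero-mean random variable α (all matrices and numbers are illustrative): expanding K(α) = K0 + α K1 and u(α) ≈ u0 + α u1 and equating terms of equal order gives K0 u0 = f and K0 u1 = -K1 u0, from which first-order estimates of the mean and covariance of the response follow.

import numpy as np

# Illustrative 3-DOF system: K(alpha) = K0 + alpha * K1, alpha ~ N(0, sigma^2).
K0 = np.array([[ 4.0, -2.0,  0.0],
               [-2.0,  4.0, -2.0],
               [ 0.0, -2.0,  2.0]])
K1 = 0.3 * K0                       # assumed fluctuation pattern of the stiffness
f  = np.array([0.0, 0.0, 1.0])
sigma = 0.1                         # standard deviation of alpha (small)

u0 = np.linalg.solve(K0, f)                 # zeroth-order solution
u1 = -np.linalg.solve(K0, K1 @ u0)          # first-order correction

mean_u = u0                                 # E[u] to first order
cov_u  = sigma ** 2 * np.outer(u1, u1)      # Cov[u] to first order
print("mean:", mean_u)
print("std :", np.sqrt(np.diag(cov_u)))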

In the spectral SFEM, the random parameter fields are discretized by a KL or a polynomial chaos expansion, the solution is expanded with Hermite polynomials, and a Galerkin approach is applied to solve for the unknown expansion coefficients. The theoretical foundation has been laid in Deb et al. (2001) and Babuska et al. (2005), where local and global polynomial chaos expansions for linear elliptic boundary value problems with stochastic coefficients were investigated and where a priori error estimates have been proved for a fixed number of terms of the KL expansion. [Pg.3471]
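As a small illustration of the spectral idea only (not of the full stochastic Galerkin finite-element machinery), the sketch below expands a scalar response of a single standard normal variable in probabilists' Hermite polynomials and obtains the expansion coefficients by projection with Gauss-Hermite quadrature; the response function is invented.

import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial, sqrt, pi

def response(xi):
    return np.exp(0.3 * xi)             # hypothetical response of xi ~ N(0, 1)

order = 4
nodes, weights = He.hermegauss(20)      # Gauss quadrature for weight exp(-x^2/2)
weights = weights / sqrt(2.0 * pi)      # normalise so the weights sum to one

# Projection: c_n = E[y(xi) * He_n(xi)] / n!
coeffs = []
for n in range(order + 1):
    e_n = np.zeros(n + 1)
    e_n[-1] = 1.0                       # coefficient vector selecting He_n
    num = np.sum(weights * response(nodes) * He.hermeval(nodes, e_n))
    coeffs.append(num / factorial(n))
coeffs = np.array(coeffs)

# The truncated expansion reproduces the mean and variance of the response:
mean_pce = coeffs[0]
var_pce  = sum(coeffs[n] ** 2 * factorial(n) for n in range(1, order + 1))
print("PCE mean/var :", mean_pce, var_pce)
print("exact mean   :", np.exp(0.3 ** 2 / 2))   # lognormal moment, for comparison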

In the following, discretization methods for the random parameter field are illustrated, and a mathematical theory for the approximate solution of stochastic elliptic boundary value problems involving a discretized random parameter field that is represented as a superposition of independent random variables is outlined. In the random domain, global and local polynomial chaos expansions are employed. The relation between local approximations of the solution and Monte Carlo simulation is considered, and reliability assessment is briefly discussed. Finally, an example serves to illustrate the different solution procedures. [Pg.3471]

In a first step, the random parameter field is discretized and replaced by a finite sum of random variables. Assume that a suitable approximation is given by a linear combination of continuous and independent random variables ξi ... [Pg.3475]
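One common way to obtain such a finite representation is a truncated Karhunen-Loève-type expansion built from the eigendecomposition of the (assumed) covariance function of the field, sampled on a grid; in the sketch below the exponential covariance, the correlation length and the truncation level are all illustrative choices.

import numpy as np

rng = np.random.default_rng(4)

# 1-D domain, assumed exponential covariance C(x, y) = s^2 exp(-|x - y| / L).
x = np.linspace(0.0, 1.0, 200)
sigma, corr_len = 0.2, 0.3
C = sigma ** 2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

# Discrete KL-type expansion: keep the M largest eigenpairs of the covariance.
eigval, eigvec = np.linalg.eigh(C)
idx = np.argsort(eigval)[::-1]
M = 6
lam, phi = eigval[idx[:M]], eigvec[:, idx[:M]]

# One realization of the field: mean + sum_i sqrt(lambda_i) * xi_i * phi_i(x),
# with independent standard normal random variables xi_i.
mean_field = 1.0
xi = rng.standard_normal(M)
field = mean_field + phi @ (np.sqrt(lam) * xi)
print(field[:5])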

The accuracy of the MPP is influenced by the FE mesh, the truncation level M of the random parameter field, the partition of E, and the choice of the ansatz functions in both the spatial and the random domain. These parameters can be gradually adapted so that the MPP is computed with a prescribed accuracy. [Pg.3478]

A fundamental question that still has to be addressed in detail concerns the error of the solution due to the discretization of the random parameter field. Beyond this aspect of verification, the validation of the random field model itself, either from experimental data or from information pertaining to the microscale, remains an important issue (cf. the critique of the SFE method raised, e.g., in Ostoja-Starzewski 2011). [Pg.3482]

