Big Chemical Encyclopedia


Sensitivity analysis computation, timing

Each simulation, done in real time at a computer terminal, takes but a moment to complete. It is thus quite practical to vary in turn each of the starting conditions and rate constants for different time ranges. This procedure allows the investigator to assess how important the variation of one parameter is in determining the concentration of each species. (This is a sensitivity analysis.)... [Pg.118]
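The vary-one-parameter-at-a-time procedure described above can be sketched as follows, assuming a toy first-order decay A → B; the model, parameter values, and perturbation size are illustrative, not from the cited work. Each "simulation" is a cheap explicit-Euler integration, so it is practical to repeat it for every perturbed input.

```python
# One-at-a-time sensitivity scan for the toy decay A -> B (dA/dt = -k*[A]).
# Each input (rate constant k, initial concentration a0) is varied in turn
# and the relative change in the output concentration is recorded.

def simulate(k, a0, t_end=1.0, dt=1e-4):
    """Return [A](t_end) for dA/dt = -k*[A], by explicit Euler."""
    a = a0
    for _ in range(int(t_end / dt)):
        a += dt * (-k * a)
    return a

base = simulate(k=2.0, a0=1.0)

# Vary each starting condition / rate constant in turn.
for label, kwargs in [("k  +10%", dict(k=2.2, a0=1.0)),
                      ("a0 +10%", dict(k=2.0, a0=1.1))]:
    rel = (simulate(**kwargs) - base) / base
    print(label, "-> relative change in [A](1):", round(rel, 4))
```

The printed relative changes show directly which input the output concentration is most sensitive to.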

It remains an open question whether the application of concentration sensitivity analysis for mechanism reduction is feasible. Various methods for the study of reaction rates, to be discussed in the next section, are applicable to mechanism reduction and have the advantage of using much less computer time. However, with the increasing power of computers this consideration may become less significant, and the application of concentration sensitivities might become important as a fundamentally different way of finding unimportant reactions. [Pg.325]

For such studies it is of interest to define new methods of analysis that are as sensitive to weak chaos as the LCIs but cheaper in computational time. [Pg.132]

A second approach is the Fourier method (Cukier et al., 1978; McRae et al., 1982). Here, the c_j0 and k_j are varied simultaneously in a systematic fashion and the resulting variations in c_j(t; k_j, c_j0) are analyzed by Fourier series to evaluate the multidimensional integrals. The method was originally developed for the sensitivity analysis of complex models, and Falls et al. (1979) have applied the method to an atmospheric chemical mechanism. Whereas calculations of σ_j(t) have not been explicitly reported, this quantity can be obtained by the Fourier method. The Fourier method is more efficient than the Monte Carlo method, but the computing time required can still be large. [Pg.220]
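The central idea of the Fourier method can be illustrated with a minimal sketch: every parameter is varied simultaneously along a search curve, each with its own distinct frequency, and the Fourier amplitude of the model output at a given frequency measures that parameter's influence. The toy model and the frequencies below are assumptions for the sketch, not taken from the cited papers.

```python
import math

def model(k1, k2):
    # toy output: strongly dependent on k1, only weakly on k2
    return math.exp(-k1) + 0.05 * k2

freqs = {"k1": 11, "k2": 35}      # distinct, non-harmonic frequencies
nominal = {"k1": 1.0, "k2": 1.0}
n = 4096
ss = [2 * math.pi * j / n for j in range(n)]

# vary all parameters simultaneously along the search curve
ys = [model(**{name: nominal[name] * (1 + 0.5 * math.sin(w * s))
               for name, w in freqs.items()})
      for s in ss]

def fourier_amplitude(ys, ss, w):
    """Amplitude of the output's Fourier component at frequency w."""
    a = 2 / len(ys) * sum(y * math.cos(w * s) for y, s in zip(ys, ss))
    b = 2 / len(ys) * sum(y * math.sin(w * s) for y, s in zip(ys, ss))
    return math.hypot(a, b)

for name, w in freqs.items():
    print(name, "amplitude:", round(fourier_amplitude(ys, ss, w), 4))
```

A single set of model evaluations serves all parameters at once, which is why the approach is more efficient than Monte Carlo sampling, though the number of evaluations still grows with the number of parameters.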

If ARRs can be obtained in closed symbolic form, parameter sensitivities can be determined by symbolic differentiation with respect to parameters. If this is not possible, parameter sensitivities of ARRs can be computed numerically by using either a sensitivity bond graph [1] or an incremental bond graph [5, 6]. Incremental bond graphs were initially introduced for the purpose of frequency domain sensitivity analysis of LTI models. They have also proven useful for the determination of parameter sensitivities of state variables and output variables, of transfer functions of the direct model as well as of the inverse model, and for the determination of ARR residuals from continuous time models [7, Chap. 4]. In this chapter, the incremental bond graph approach is applied to switched LTI systems. [Pg.101]

Limitations: (i) special care must be paid to the model formulation and the physical significance of the solution, and scaling of the optimization variables might be required to reduce computational effort; (ii) the optimization problem might be difficult to solve; (iii) huge computational time can be required for multiple alternative optimizations; (iv) it is difficult to perform sensitivity analysis of the optimum solution; and (v) local limitations can be found when the initial guess lies outside the feasibility region. [Pg.65]

In these cases, the method of parametric sensitivity analysis has received the broadest acceptance. In this method, the initial computing procedure is often the calculation of the time profiles of the derivatives of species concentrations with respect to the rate constants of the individual steps [58,59]. Methods are also applied to analyze the rates of individual steps and to separate their contributions to the total change of the Gibbs free energy or entropy that accompanies the overall chemical transformation [60]. [Pg.86]
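Such time profiles can be sketched by finite differences, here for the toy two-step mechanism A → B → C with rate constants k1 and k2; the mechanism and all numerical values are assumptions for illustration.

```python
# Time profiles of normalized sensitivities s_C,kj(t) = (k_j/[C]) d[C]/dk_j
# for A -> B -> C, computed by perturbing each rate constant in turn.

def integrate(k1, k2, t_end, dt=1e-4):
    a, b, c = 1.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        ra, rb = k1 * a, k2 * b
        a, b, c = a - dt * ra, b + dt * (ra - rb), c + dt * rb
    return a, b, c

def sens_profile(j, times, k=(1.0, 0.5), eps=1e-6):
    """Normalized sensitivity (k_j/[C]) * d[C]/dk_j at each time."""
    out = []
    for t in times:
        base = integrate(*k, t)[2]
        kp = list(k)
        kp[j] *= 1 + eps
        out.append((integrate(*kp, t)[2] - base) / (base * eps))
    return out

times = [0.5, 1.0, 2.0, 4.0]
print("s_C,k1(t):", [round(s, 3) for s in sens_profile(0, times)])
print("s_C,k2(t):", [round(s, 3) for s in sens_profile(1, times)])
```

The profiles decay toward zero at long times because [C] saturates regardless of the rate constants, which is exactly the kind of time-dependent importance information the excerpt describes.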

Numerical integration (sometimes referred to as solving or simulation) of differential equations, ordinary or partial, involves using a computer to obtain an approximate and discrete (in time and/or space) solution. In chemical kinetics, these differential equations are typically the rate laws that describe the time evolution of the system. One obtains results for the mean concentrations, without any information about the (typically very small) fluctuations that are inevitably present. Continuation and sensitivity analysis techniques enable one to extrapolate from a numerically obtained solution at one set of parameters (e.g., rate constants or initial concentrations) to the behavior of the system at other parameter values, without having to carry out a full numerical integration each time the parameters are changed. Other approaches, sometimes referred to collectively as stochastic methods (Gardiner, 1990), can provide data about fluctuations, but these require considerably more computational labor and are often impractical for models that include more than a few variables. [Pg.140]
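The extrapolation idea behind continuation and sensitivity techniques can be sketched in a few lines: a sensitivity coefficient obtained once, at a single parameter value, predicts the solution at nearby parameter values without a fresh integration. The model dA/dt = -k·[A] and all numbers below are illustrative assumptions.

```python
import math

def a_of_t(k, t_end=1.0, dt=1e-4):
    """[A](t_end) for dA/dt = -k*[A], by explicit Euler."""
    a = 1.0
    for _ in range(int(t_end / dt)):
        a += dt * (-k * a)
    return a

k0, dk, eps = 1.0, 0.05, 1e-6
# sensitivity d[A]/dk at k0, by a one-off finite difference
s = (a_of_t(k0 * (1 + eps)) - a_of_t(k0)) / (k0 * eps)

predicted = a_of_t(k0) + s * dk        # first-order extrapolation
actual = a_of_t(k0 + dk)               # full re-integration, for comparison
print("predicted:", round(predicted, 5), " actual:", round(actual, 5))
```

For small parameter changes the first-order prediction agrees closely with the re-integrated solution, which is what makes parameter scans cheap once the sensitivities are known.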

Operations with the Jacobian matrix are known to consume much of the computational time in most simulations involving implicit solvers. Based on a sensitivity analysis of the Jacobian matrix, around two-thirds of the reactions and one-third of the chemical species can be eliminated from the complete mechanism without significant loss in the quality of the results. [Pg.77]
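One simple form such a Jacobian-based screening can take is sketched below for a toy 3-species system: species are ranked by the largest entry in their Jacobian column, and a species whose column is uniformly small barely influences the local dynamics and is a candidate for elimination. The mechanism and thresholds are assumptions for the sketch, not the figures quoted above.

```python
# Numerical Jacobian of a toy 3-species rate law; column j measures
# how strongly species j influences all rates.

def rhs(c):
    a, b, x = c
    return [-2.0 * a,              # A consumed
            2.0 * a - 0.5 * b,     # B produced from A, then consumed
            1e-6 * a - 1e-6 * x]   # X almost decoupled from the rest

def jacobian_columns(c, eps=1e-7):
    """cols[j][i] approximates d f_i / d c_j by forward differences."""
    f0 = rhs(c)
    cols = []
    for j in range(len(c)):
        cp = list(c)
        cp[j] += eps
        fj = rhs(cp)
        cols.append([(fj[i] - f0[i]) / eps for i in range(len(c))])
    return cols

c = [1.0, 0.5, 0.2]
norms = {name: max(abs(v) for v in col)
         for name, col in zip("ABX", jacobian_columns(c))}
print(norms)   # X's influence is orders of magnitude below A's and B's
```

In a real mechanism the same ranking would be evaluated over many states along a trajectory before a species is declared removable.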

According to the DDM, the consistent FE response sensitivities are computed at each time step, after convergence is achieved for response computation. Response sensitivity calculation algorithms impact the various hierarchical layers of FE response calculation, namely (1) the structure level, (2) the element level, (3) the integration point (section for frame/truss elements) level, and (4) the material level. Details on the derivation of the DDM sensitivity equation at the structure level and at the element level for classical displacement-based finite elements, specific software implementation issues, and properties of the DDM in terms of efficiency and accuracy can be found elsewhere (Kleiber et al. 1997, Conte 2001, Conte et al. 2003, Gu Conte 2003). In this study, some newly developed algorithms and recent extensions are presented which cover relevant gaps between state-of-the-art FE response-only analysis and response sensitivity computation using the DDM. [Pg.23]

An overview of the methods used previously in mechanism reduction is presented in Tomlin et al. (1997). The present work uses a combination of existing methods to produce a carbon monoxide-hydrogen oxidation scheme with fewer reactions and species variables, but which accurately reproduces the dynamics of the full scheme. Local concentration sensitivity analysis was used to identify necessary species from the full scheme, and a principal component analysis of the rate sensitivity matrix was employed to identify redundant reactions. This was followed by application of the quasi-steady-state approximation (QSSA) for the fast intermediate species, based on species lifetimes and quasi-steady-state errors, and finally the use of intrinsic low-dimensional manifold (ILDM) methods to calculate the mechanism's underlying dimension and to verify the choice of QSSA species. The origin of the full mechanism and its relevance to existing experimental data is described first, followed by descriptions of the reduction methods used. The errors introduced by the reduction and approximation methods are also discussed. Finally, conclusions are drawn about the results, and suggestions are made as to how further reductions in computer run times can be achieved. [Pg.582]
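The principal component step can be sketched with a small made-up sensitivity matrix S (rows: observed responses, columns: reactions): eigenvectors of S^T S with large eigenvalues identify the important reaction groups, while reactions appearing only in small-eigenvalue components are candidates for removal. The 3×2 matrix below is illustrative, not taken from the CO-H2 scheme of the excerpt.

```python
import math

S = [[0.9, 0.01],
     [0.8, 0.02],
     [0.7, 0.01]]   # responses are dominated by reaction 1

# form the 2x2 symmetric matrix M = S^T S
M = [[sum(row[a] * row[b] for row in S) for b in range(2)]
     for a in range(2)]

# closed-form eigenvalues of a symmetric 2x2 matrix
tr = M[0][0] + M[1][1]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
disc = math.sqrt(tr * tr / 4 - det)
eigs = [tr / 2 + disc, tr / 2 - disc]
print("eigenvalues:", eigs)   # one dominant component: reaction 2 is redundant
```

For realistic mechanisms the same decomposition is done with a library eigensolver, and the eigenvalue spectrum indicates how many reaction groups carry essentially all of the sensitivity.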

For complex reaction and reactor models, the sensitivity analysis and the parameter estimation by optimization are computer-time consuming and call for more efficient algorithms and computers. Here, clearly, any improvement in the speed of such computations is desirable, and even necessary, for the practical use of fundamental models. The speed requirements would be further increased if a fundamental model, instead of a black box, were used for optimal control purposes. We therefore think that supercomputers will be more and more useful for solving the numerical problems involved in the mechanistic modelling of complex gas-phase reactions. [Pg.431]

In this chapter, the performance of the various methods that have been described is examined. This involves convergence, stability and economy of computer time. Some of the more sensible simulation methods are compared. Sensitivity analysis is briefly mentioned. [Pg.389]

Essentially, each of the above systems has two widely different time scales. If the initial transient is not of interest, the systems can be projected onto a one-dimensional subspace. The subspace is invariant in that no matter where one starts, after a fast transient all trajectories are attracted to the subspace in which A and B are algebraically related to each other. In essence, what one achieves is dimension reduction of the reactant space through time-scale separation. For large, complex systems such as oil refining, it is difficult to use the foregoing ad hoc approaches to reduce system dimensionality manually. Computer codes are available for mechanism reduction by means of the QSA/QEA and sensitivity analysis. ... [Pg.208]
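The time-scale separation described above can be made concrete with a quasi-steady-state sketch: for A → B → C with k2 ≫ k1, the intermediate B collapses onto the algebraic relation b ≈ k1·a/k2 after a fast transient, reducing the system to one dimension. Rate constants and step sizes below are illustrative assumptions.

```python
# Compare the full two-step model with the QSSA relation for the
# fast intermediate B, after the fast transient has died out.

k1, k2 = 1.0, 100.0   # widely separated time scales

def full_model(t_end=1.0, dt=1e-5):
    """Explicit-Euler integration of the full mechanism A -> B -> C."""
    a, b, c = 1.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        ra, rb = k1 * a, k2 * b
        a, b, c = a - dt * ra, b + dt * (ra - rb), c + dt * rb
    return a, b, c

a, b, c = full_model()
b_qssa = k1 * a / k2        # algebraic relation on the slow manifold
print("b (full):", b, " b (QSSA):", b_qssa)
```

After the fast transient (of order 1/k2) the full and reduced descriptions agree to within about one percent, so only the slow variable A needs to be integrated.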

