Constructing input distributions

When constructing input distributions for an uncertainty analysis, it is often useful to present the range of values in terms of a standard probability distribution. The selected distribution should be matched to the range and moments of any available data. In some cases it is appropriate simply to use the raw data or a custom distribution. Commonly used standard probability distributions include the normal, lognormal, uniform, log-uniform and triangular distributions. For the case-study presented below, we use lognormal distributions. [Pg.121]
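Where measurements are available, the moment-matching step can be made concrete. A minimal sketch in Python (the data array is hypothetical) fits a lognormal by the method of moments and checks the fit against the observed mean, variance and range:

```python
# A minimal sketch (not from the cited case-study) of matching a lognormal
# distribution to the sample moments of available data.
import numpy as np
from scipy import stats

data = np.array([2.1, 3.4, 1.8, 5.2, 2.9, 4.1, 3.3, 2.5])  # hypothetical measurements

# Method of moments: if X ~ LogNormal(mu, sigma), then
#   mean(X) = exp(mu + sigma^2/2)
#   var(X)  = (exp(sigma^2) - 1) * exp(2*mu + sigma^2)
m, v = data.mean(), data.var(ddof=1)
sigma2 = np.log(1.0 + v / m**2)
mu = np.log(m) - 0.5 * sigma2
dist = stats.lognorm(s=np.sqrt(sigma2), scale=np.exp(mu))

# Check that the fitted distribution reproduces the data's moments and range:
print(dist.mean(), dist.std())     # should be close to m and sqrt(v)
print(dist.ppf([0.025, 0.975]))    # fitted 95% range vs. observed min/max
```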

For any case-study built around Equation 1, we must consider as model inputs the parameters that provide emissions or environmental concentrations, intermedia transfer factors, ingestion (or other intake) rates, body weight, exposure frequency and exposure duration. For our specific case-study below, we are interested in concentrations in surface waters due to deposition from the atmosphere. The relevant intermedia transfer factor is the bioconcentration factor, which relates fish tissue concentrations to surface-water concentrations. The intake data we need are the magnitude and range of fish ingestion in the exposed population. Because PBLx is a persistent compound that accumulates in fat tissues, we focus for this case not on exposure frequency and duration but on long-term average daily consumption. [Pg.122]
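As an illustration of how these inputs combine, the sketch below assumes a simplified multiplicative intake equation, dose = C_water x BCF x IR / BW (a stand-in for Equation 1, which is not reproduced in this excerpt), and propagates lognormal inputs by Monte Carlo sampling; all parameter values are assumptions, not the case-study's.

```python
# Hedged sketch: a generic fish-ingestion dose calculation with lognormal
# inputs. Parameter values and the simplified intake equation are
# illustrative assumptions, not those of the cited case-study.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

c_water = rng.lognormal(mean=np.log(1e-3), sigma=0.5, size=n)  # mg/L, surface water
bcf     = rng.lognormal(mean=np.log(5e3),  sigma=0.8, size=n)  # L/kg, water -> fish
ir_fish = rng.lognormal(mean=np.log(0.03), sigma=0.6, size=n)  # kg/day, long-term average
bw      = rng.lognormal(mean=np.log(70.0), sigma=0.2, size=n)  # kg, body weight

# Long-term average daily dose (mg/kg-day); frequency and duration terms
# are folded into the long-term average ingestion rate, as in the text.
add = c_water * bcf * ir_fish / bw
print(np.percentile(add, [5, 50, 95]))
```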


The case-study comprises four elements: a conceptual model, the modelling approach, the construction of input distributions and the variance-propagation method. When evaluating uncertainty, it is important to consider how each of these elements contributes to the overall uncertainty. [Pg.119]
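A common way to see how much each sampled input contributes to the overall output uncertainty is to rank-correlate inputs with the output across Monte Carlo realizations. A minimal sketch under an assumed multiplicative model (input names and sigmas are illustrative):

```python
# A minimal sketch of apportioning output uncertainty among inputs via
# Spearman rank correlation, one of several variance-propagation diagnostics.
# The three lognormal inputs and the multiplicative model are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 50_000
inputs = {"concentration": rng.lognormal(sigma=0.5, size=n),
          "bcf":           rng.lognormal(sigma=0.8, size=n),
          "intake":        rng.lognormal(sigma=0.6, size=n)}
output = inputs["concentration"] * inputs["bcf"] * inputs["intake"]

for name, x in inputs.items():
    rho, _ = stats.spearmanr(x, output)
    print(f"{name}: rho = {rho:.2f}  (rho^2 ~ {rho**2:.2f} of rank variance)")
```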

Mesocosms placed in shallow Finnish lakes were used to evaluate changes brought about by extended incubation of biologically treated bleachery effluent from mills that used chlorine dioxide. The mesocosms had a volume of ca. 2 m³ and were constructed of translucent polyethene or, to simulate dark reactions, black polyethene. The experiments were carried out at ambient temperatures throughout the year, and sum parameters were used to trace the fate of the organically bound chlorine. In view of previous studies on the molecular mass distribution of effluents (Jokela and Salkinoja-Salonen 1992), this was measured as an additional marker. Important features were that (a) sedimentation occurred exclusively within the water mass within the mesocosm, (b) the atmospheric input could be estimated... [Pg.266]

In practice, there may not be sufficient operating experience, and thus data, to develop a numeric-symbolic interpreter that can map with certainty to the labels of interest. Under these circumstances, if sufficient knowledge of process behaviours exists, it is possible to construct a KBS in place of the unavailable operating data. A KBS, however, maps symbolic forms of the input data into the symbolic labels of interest and is therefore not sufficient by itself: it depends on intermediate interpretations that can be generated with certainty by a numeric-symbolic mapper. This is shown in Fig. 4. In these cases, the burden of interpretation is distributed between the numeric-symbolic and symbolic-symbolic interpreters. Figure 4 retains the input mapping used to preprocess data for the numeric-symbolic interpreter. [Pg.44]
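The division of labour described here can be sketched as two chained mappings: a numeric-symbolic stage that discretizes measurements into intermediate symbols, and a rule-based symbolic-symbolic stage that maps those symbols to the labels of interest. The variables, thresholds and rules below are invented for illustration and are not from the cited source:

```python
# Illustrative two-stage interpreter: numeric -> intermediate symbols -> labels.
# Thresholds, symbols, and rules are hypothetical, not from the cited source.

def numeric_to_symbolic(temperature, pressure):
    """Numeric-symbolic stage: discretize measurements into symbols."""
    t_sym = "T_HIGH" if temperature > 150.0 else "T_NORMAL"
    p_sym = "P_LOW" if pressure < 1.2 else "P_NORMAL"
    return (t_sym, p_sym)

# Symbolic-symbolic stage: a small rule base (KBS-style) mapping
# intermediate interpretations to the labels of interest.
RULES = {
    ("T_HIGH",   "P_LOW"):    "possible leak with exothermic reaction",
    ("T_HIGH",   "P_NORMAL"): "cooling failure",
    ("T_NORMAL", "P_LOW"):    "possible leak",
    ("T_NORMAL", "P_NORMAL"): "normal operation",
}

def interpret(temperature, pressure):
    return RULES[numeric_to_symbolic(temperature, pressure)]

print(interpret(temperature=162.0, pressure=1.05))
```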

At another level, certain KBS approaches provide mechanisms for decomposing complex interpretation problems into a set of smaller, distributed, localized interpretations. Decomposition into smaller, more constrained interpretation problems is necessary to maintain the performance of any one interpreter, and it makes it possible to apply different interpretation approaches to subparts of the problem. It is well recognized that scale-up is a problem for all of the interpretation approaches described. As the number of input variables, the number of potential output conclusions, the complexity of subprocess interactions, and the spatial and temporal distribution of effects increase, the rapidity, accuracy and resolution of interpretations can deteriorate dramatically. Furthermore, difficulties in construction, verification and maintenance can prohibit successful implementation. [Pg.72]

The outline of this paper is as follows. First, a theoretical model of unsteady motions in a combustion chamber with feedback control is constructed. The formulation is based on a generalized wave equation which accommodates all influences of acoustic wave motions and combustion responses. Control actions are achieved by injecting secondary fuel into the chamber, with its instantaneous mass flow rate determined by a robust controller. Physically, the reaction of the injected fuel with the primary combustion flow produces a modulated distribution of external forcing to the oscillatory flowfield, and it can be modeled conveniently by an assembly of point actuators. After a procedure equivalent to the Galerkin method, the governing wave equation reduces to a system of ordinary differential equations with time-delayed inputs for the amplitude of each acoustic mode, serving as the basis for the controller design. [Pg.357]
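A minimal numerical sketch of the reduced system: a single acoustic mode is treated as an oscillator with linear growth rate alpha and frequency omega, forced through an actuator gain b by a control input that acts after a delay tau. All coefficients and the velocity-feedback law are assumed for illustration:

```python
# Sketch of one modal-amplitude equation with a time-delayed control input:
#   eta'' - 2*alpha*eta' + omega**2 * eta = b * u(t - tau)
# integrated with semi-implicit Euler and a circular buffer for the delay.
# alpha, omega, b, tau and the feedback law are assumed values.
import numpy as np

alpha = 5.0                 # linear growth rate of the unstable mode, 1/s
omega = 2 * np.pi * 300.0   # modal frequency, rad/s
b, tau = 50.0, 2e-4         # actuator gain and injection delay (s)
dt, n_steps = 1e-5, 40_000

eta, deta = 1e-3, 0.0                   # small initial disturbance
buf = np.zeros(int(tau / dt) + 1)       # past control inputs, ~tau/dt entries

for k in range(n_steps):
    u_delayed = buf[k % len(buf)]       # u(t - tau), written len(buf) steps ago
    ddeta = 2 * alpha * deta - omega**2 * eta + b * u_delayed
    deta += dt * ddeta
    eta += dt * deta
    buf[k % len(buf)] = -10.0 * deta    # simple velocity feedback (assumed law)

print(f"controlled amplitude after {n_steps * dt:.2f} s: |eta| = {abs(eta):.2e}")
```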

A mixer based on distributive mixing was constructed on Si (see Figure 3.38). Two liquid streams were split into 16 ministreams to enhance diffusive mixing, and the ministreams were then recombined. This mixer has been used to study the chemically induced conformational change of proteins [466,467] and a chemiluminescent reaction catalyzed by Cr3+ [468]. In another report, a similar mixer with 16 channels was built [280,469]. Mixing was also carried out in a PET chip consisting of a distributor and a dilutor to produce 16 concentrations from two inputs. [Pg.91]
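An idealized view of such a distributor/dilutor network is that each of the N outlets carries a volumetric blend of the two feed streams; for a linear-gradient design the outlet concentrations are evenly spaced between the feeds. A back-of-envelope sketch (the linear-gradient assumption is ours, not necessarily the cited chip's actual network):

```python
# Idealized linear-gradient dilutor: outlet i carries a volumetric blend of
# the two feed concentrations. The linearity assumption is illustrative only.
N = 16
c_a, c_b = 100.0, 0.0   # feed concentrations (e.g., uM), assumed values

outlets = [(i / (N - 1)) * c_a + (1 - i / (N - 1)) * c_b for i in range(N)]
print([round(c, 1) for c in outlets])   # 0.0 ... 100.0 in 15 equal steps
```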

To solve the first-principles model, the finite difference or finite element method can be used, but the number of states increases exponentially when these methods are applied. Lee et al. [8] used a model reduction technique to resolve the size problem; however, the information on the concentration distribution is then scarce, and the physical meaning of the reduced states is hard to interpret. We therefore construct an input/output data mapping instead. Because conventional linear identification methods cannot be applied to a hybrid SMB process, we construct an artificial continuous input/output mapping by keeping the discrete inputs, such as the switching time, constant. The averaged concentrations of the rich component in the raffinate and extract are selected as the output variables, while the flow-rate ratios in sections 2 and 3 are selected as the input variables. Since these outputs are directly correlated with the product purities, control of the product purities is also accomplished. [Pg.215]
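The identification step described here, mapping the flow-rate ratios in sections 2 and 3 to the averaged product concentrations over each switching period, can be sketched as an ordinary least-squares ARX fit. The model order and the synthetic data below are illustrative; the actual identification procedure in the cited work may differ:

```python
# Minimal ARX-style identification sketch: outputs y (averaged raffinate and
# extract concentrations per switching period) regressed on past outputs and
# the two inputs u (flow-rate ratios in sections 2 and 3). Data are synthetic.
import numpy as np

rng = np.random.default_rng(2)
T = 200
u = rng.uniform(1.0, 2.0, size=(T, 2))          # assumed range of flow-rate ratios
y = np.zeros((T, 2))
for t in range(1, T):                            # synthetic "true" process for demo
    y[t] = 0.7 * y[t - 1] + u[t - 1] @ np.array([[0.3, -0.1], [-0.2, 0.4]])
    y[t] += 0.01 * rng.standard_normal(2)        # measurement noise

# Build regressors [y(t-1), u(t-1)] and solve least squares for both outputs.
X = np.hstack([y[:-1], u[:-1]])                  # shape (T-1, 4)
theta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
print(theta.round(2))                            # recovered model coefficients
```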

