
Empirical Distribution Models

FIGURE 6.3 Cumulative weight fraction W as a function of the ratio of the degree of polymerization to its number-average value for the Schulz (solid line) and Wesslau (dashed line) models. [Pg.230]

The molecular weight distribution for a certain polymer can be described as [Pg.231]

Wx is the weight fraction of polymer with degree of polymerization x. What is the value of k? What is x̄n? What is x̄w? Solution: The condition for k is that [Pg.231]
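The equation fixing k is not reproduced in this excerpt; as a standard reconstruction (an assumption, not text from the source), the normalization condition and the two averages asked for are:

```latex
% Normalization condition that fixes k for a discrete weight-fraction distribution:
\sum_x W_x = 1
% Number- and weight-average degrees of polymerization:
\bar{x}_n = \Bigl(\sum_x \frac{W_x}{x}\Bigr)^{-1}, \qquad
\bar{x}_w = \sum_x x\, W_x
```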

When dealing with high molecular weights (x > 100), it becomes convenient to treat distributions as continuous rather than finite. As an example, let us assume that the cumulative weight fraction W(x) of polymer with degree of polymerization less than x is given by [Pg.231]

The continuous models are particularly useful when analyzing experimental data obtained for distributions where a cumulative weight is obtained first and must be converted mathematically to a differential distribution. [Pg.232]
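As a brief illustration of that conversion (standard calculus, not quoted from the source), the differential weight distribution is the derivative of the cumulative one:

```latex
% w(x)\,dx = weight fraction with degree of polymerization between x and x + dx
w(x) = \frac{dW(x)}{dx}, \qquad \int_0^{\infty} w(x)\,dx = 1
% Continuous analogues of the number- and weight-average degrees of polymerization:
\bar{x}_n = \left[\int_0^{\infty} \frac{w(x)}{x}\,dx\right]^{-1}, \qquad
\bar{x}_w = \int_0^{\infty} x\, w(x)\,dx
```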


Distribution models are curve fits of empirical RTDs. The Gaussian distribution is a one-parameter function based on the statistical rule of that name. The Erlang and gamma models are based on the concept of the multistage CSTR. RTD curves can often be fitted well by ratios of polynomials in time. [Pg.2083]
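As a sketch of the tanks-in-series idea (an illustration, not taken from the cited handbook), the RTD of n equal ideal CSTRs in series is a gamma density in time; the snippet below fits n and the mean residence time to a measured E(t) curve. The data values and variable names are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import gamma as gamma_fn

def erlang_rtd(t, n, tau):
    """Tanks-in-series (gamma/Erlang) exit-age distribution E(t).

    n   : number of equal CSTRs in series (treated as a continuous fit parameter)
    tau : total mean residence time
    """
    return (n / tau) ** n * t ** (n - 1) * np.exp(-n * t / tau) / gamma_fn(n)

# Hypothetical tracer data: times [s] and measured exit-age density E(t) [1/s]
t_data = np.array([5, 10, 20, 30, 45, 60, 90, 120], dtype=float)
E_data = np.array([0.001, 0.006, 0.014, 0.016, 0.012, 0.008, 0.003, 0.001])

# Fit n and tau; initial guess of 3 tanks and a 40 s mean residence time
(n_fit, tau_fit), _ = curve_fit(erlang_rtd, t_data, E_data, p0=(3.0, 40.0))
print(f"fitted number of tanks n = {n_fit:.2f}, mean residence time tau = {tau_fit:.1f} s")
```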

Recently, many experiments have been performed on the structure and dynamics of liquids in porous glasses [175-190]. These studies are difficult to interpret because of the inhomogeneity of the sample. Simulations of water in a cylindrical cavity inside a block of hydrophilic Vycor glass have recently been performed [24,191,192] to facilitate the analysis of experimental results. Water molecules interact with Vycor atoms through an empirical potential model consisting of (12-6) Lennard-Jones and Coulomb interactions. All atoms in the Vycor block are immobile. For details see Ref. 191. We have simulated samples at room temperature, filled with water to between 19 and 96 percent of the maximum possible amount. Because of the hydrophilicity of the glass, water molecules already cover the surface in nearly empty pores; in this case no molecules are found in the pore center, although the density distribution is rather wide. When the amount of water increases, the center of the pore fills. Only in the case of 96 percent filling is a continuous aqueous phase, without a cavity in the center of the pore, observed. [Pg.373]

Figure 3.7 Frequency and cumulative frequency distributions of 3D-optimal sampling times for the Gompertz model, given the observations for subject 4. Vertical lines split the cumulative empirical distribution into equal probability regions.
An empirical distribution ratio model was first elaborated to describe the extraction of Am(III) and Ln(III) in kerosene by purified HBTMPDTP, based on mass balances and mass-action laws of HBTMPDTP dimerization in kerosene, dissociation... [Pg.163]

Process-scale models represent the behavior of reaction, separation and mass, heat, and momentum transfer at the process flowsheet level, or for a network of process flowsheets. Whether based on first principles or empirical relations, the model equations for these systems typically consist of conservation laws (for mass, heat, and momentum), physical and chemical equilibrium among species and phases, and additional constitutive equations that describe the rates of chemical transformation or transport of mass and energy. These process models are often represented by a collection of individual unit models (the so-called unit operations) that usually correspond to major pieces of process equipment, which, in turn, are captured by device-level models. These unit models are assembled within a process flowsheet that describes the interaction of equipment for either steady-state or dynamic behavior. As a result, models can be described by algebraic or differential equations. As illustrated in Figure 3 for a PEFC-based power plant, steady-state process flowsheets are usually described by lumped-parameter models comprising algebraic equations. Similarly, dynamic process flowsheets are described by lumped-parameter models comprising differential-algebraic equations. Models that deal with spatially distributed behavior are frequently considered at the device... [Pg.83]

Geochemical models of sorption and desorption must be developed from this work and incorporated into transport models that predict radionuclide migration. A frequently used, simple sorption (or desorption) model is the empirical distribution coefficient, Kd. This quantity is simply the equilibrium concentration of sorbed radionuclide divided by the equilibrium concentration of radionuclide in solution. Values of Kd can be used to calculate a retardation factor, R, which is used in solute transport equations to predict radionuclide migration in groundwater. The calculations assume instantaneous sorption, a linear sorption isotherm, and single-valued adsorption-desorption isotherms. These assumptions have been shown to be erroneous for solute sorption in several groundwater-soil systems (1-2). A more accurate description of radionuclide sorption is an isotherm equation such as the Freundlich equation ... [Pg.9]
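A minimal sketch (not from the cited work) of how the empirical distribution coefficient and the resulting retardation factor are computed; the retardation relation R = 1 + (rho_b/theta)·Kd is the standard saturated-flow form and is an addition here, as are all the numerical values.

```python
# Sketch of Kd and the retardation factor R for linear, instantaneous,
# reversible sorption -- exactly the assumptions the text notes are often violated.

def distribution_coefficient(c_sorbed, c_solution):
    """Kd = equilibrium sorbed concentration / equilibrium solution concentration.

    c_sorbed   : sorbed radionuclide, e.g. Bq per kg of solid
    c_solution : dissolved radionuclide, e.g. Bq per L of solution
    Returns Kd in L/kg.
    """
    return c_sorbed / c_solution

def retardation_factor(kd, bulk_density, porosity):
    """R = 1 + (rho_b / theta) * Kd, the standard form for saturated flow."""
    return 1.0 + (bulk_density / porosity) * kd

# Hypothetical numbers: 250 Bq/kg sorbed vs 5 Bq/L in solution,
# bulk density 1.6 kg/L, porosity 0.35
kd = distribution_coefficient(250.0, 5.0)                     # 50 L/kg
R = retardation_factor(kd, bulk_density=1.6, porosity=0.35)   # about 230
print(f"Kd = {kd:.1f} L/kg, retardation factor R = {R:.0f}")
```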

Mathematical models can also be classified according to the mathematical foundation the model is built on. Thus we have transport phenomena-based models (including most of the models presented in this text), empirical models (based on experimental correlations), and population-based models, such as the previously mentioned residence time distribution models. Models can be further classified as steady or unsteady, lumped parameter or distributed parameter (implying no variation or variation with spatial coordinates, respectively), and linear or nonlinear. [Pg.62]

Probability distribution models can be used to represent frequency distributions of variability or uncertainty distributions. When the data set represents variability for a model parameter, there can be uncertainty in any non-parametric statistic associated with the empirical data. For situations in which the data are a random, representative sample from an unbiased measurement or estimation technique, the uncertainty in a statistic could arise because of random sampling error (and thus be dependent on factors such as the sample size and range of variability within the data) and random measurement or estimation errors. The observed data can be corrected to remove the effect of known random measurement error to produce an error-free data set (Zheng Frey, 2005). [Pg.27]

If a parametric distribution (e.g. normal, lognormal, loglogistic) is fit to empirical data, then additional uncertainty can be introduced in the parameters of the fitted distribution. If the selected parametric distribution model is an appropriate representation of the data, then the uncertainty in the parameters of the fitted distribution will be based mainly, if not solely, on random sampling error associated primarily with the sample size and variance of the empirical data. Each parameter of the fitted distribution will have its own sampling distribution. Furthermore, any other statistical parameter of the fitted distribution, such as a particular percentile, will also have a sampling distribution. However, if the selected model is an inappropriate choice for representing the data set, then substantial biases in estimates of some statistics of the distribution, such as upper percentiles, must be considered. [Pg.28]
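To make the idea of a sampling distribution for fitted parameters concrete, here is a small sketch (an illustration with assumed data, not from the source) that fits a lognormal distribution and bootstraps the fit to show how an estimated upper percentile varies with random sampling error:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical variability data set (e.g. measured concentrations)
data = rng.lognormal(mean=1.0, sigma=0.5, size=40)

# Fit a lognormal distribution (location fixed at zero)
shape, loc, scale = stats.lognorm.fit(data, floc=0)

# Bootstrap the fit: each resample gives one draw from the sampling
# distribution of the fitted parameters and of the fitted 95th percentile
boot_p95 = []
for _ in range(2000):
    resample = rng.choice(data, size=data.size, replace=True)
    s, _, sc = stats.lognorm.fit(resample, floc=0)
    boot_p95.append(stats.lognorm.ppf(0.95, s, loc=0, scale=sc))

lo, hi = np.percentile(boot_p95, [2.5, 97.5])
print(f"fitted 95th percentile: {stats.lognorm.ppf(0.95, shape, loc=0, scale=scale):.2f}")
print(f"95% sampling interval for that percentile: [{lo:.2f}, {hi:.2f}]")
```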

A probability distribution is a mathematical description of a function that relates probabilities with specified intervals of a continuous quantity, or values of a discrete quantity, for a random variable. Probability distribution models can be non-parametric or parametric. A non-parametric probability distribution can be described by rank ordering continuous values and estimating the empirical cumulative probability associated with each. Parametric probability distribution models can be fit to data sets by estimating their parameter values based upon the data. The adequacy of the parametric probability distribution models as descriptors of the data can be evaluated using goodness-of-fit techniques. Distributions such as normal, lognormal and others are examples of parametric probability distribution models. [Pg.99]
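A short sketch of both approaches described above (assumed data, standard SciPy calls): the non-parametric description is just the rank-ordered empirical cumulative probability, while the parametric one fits a named distribution and checks it with a goodness-of-fit test.

```python
import numpy as np
from scipy import stats

# Hypothetical sample of a continuous quantity
x = np.array([2.3, 3.1, 1.8, 4.4, 2.9, 3.7, 2.2, 5.1, 3.3, 2.6])

# Non-parametric: rank-order the values and attach empirical cumulative
# probabilities (here the common plotting position i / (n + 1))
x_sorted = np.sort(x)
ecdf = np.arange(1, x.size + 1) / (x.size + 1)

# Parametric: fit a lognormal model and test its adequacy
shape, loc, scale = stats.lognorm.fit(x, floc=0)
ks_stat, p_value = stats.kstest(x, "lognorm", args=(shape, loc, scale))

for xi, pi in zip(x_sorted, ecdf):
    print(f"{xi:5.2f}  empirical cumulative probability {pi:.2f}")
print(f"Kolmogorov-Smirnov statistic {ks_stat:.3f}, p-value {p_value:.2f}")
```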

The dispersion phenomenon has been quantitatively approached by three models. Initially, Albrecht's theory (Tang and Albrecht, 1970) was applied to the finite segments of the polymer. Then, in the case of materials such as trans-Vk, use of an empirical distribution function P(N) for the conjugation length made it possible to exactly reproduce the line shapes and line intensities resulting from excitation with different laser lines ... [Pg.390]

A non-empirical kinetic model was developed for the lifetime prediction of polymer parts under their normal use conditions. This model gives access to the spatial distribution (through the sample thickness) of the structural changes at the different scales and to the resulting changes in normal-use properties. Its efficiency has been demonstrated for many substrates over wide temperature and dose-rate ranges. Here, we have paid special attention to the radio-thermal oxidation of PE. [Pg.159]

Different results may be observed under conditions that are ostensibly the same. To keep track of this variation, we must maintain records or statistics. There are two general strategies that we may employ. First, we may simply store the results. That is, if we have a thousand observations, we can maintain access to all the individual values. The record may then be employed as an empirical distribution function, in which particular percentiles may be identified on demand. Second, we may use a mathematical model to summarize the distribution. There are two very different reasons for doing this. First, a statistical model may be used to provide a concise summary. The facility with which an analyst can store and retrieve data makes this motivation less compelling than it once was. Second, when a sparse data set is not considered representative of a large population, a model may also be used to infer or predict values that are not represented in the data set. [Pg.1173]

In this study we compare the distribution of GRB galactocentric offsets with the radial distributions of supernovae of types Ia and Ib/c, of low-mass and high-mass X-ray binaries, and with models of the dark matter (DM) halo density profile. We compare the first and second moments of the distributions as well as their median values; moreover, we apply a visual technique for comparing a pair of empirical distributions, the quantile-quantile plot. [Pg.144]
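A minimal sketch (illustrative, synthetic data only) of a quantile-quantile comparison between two empirical distributions of the kind described:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Two hypothetical samples of galactocentric offsets (arbitrary units)
offsets_a = rng.exponential(scale=3.0, size=120)   # e.g. one class of events
offsets_b = rng.exponential(scale=4.5, size=80)    # e.g. a comparison class

# Evaluate matching quantiles of both empirical distributions
probs = np.linspace(0.01, 0.99, 50)
q_a = np.quantile(offsets_a, probs)
q_b = np.quantile(offsets_b, probs)

plt.plot(q_a, q_b, "o", ms=3)
lim = max(q_a.max(), q_b.max())
plt.plot([0, lim], [0, lim], "k--", lw=1)   # identity line: equal distributions
plt.xlabel("quantiles of sample A")
plt.ylabel("quantiles of sample B")
plt.title("Quantile-quantile comparison of two empirical distributions")
plt.show()
```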

The smoothed bootstrap has been proposed to deal with the discreteness of the empirical distribution function (F) when sample sizes are small (n < 15). For this approach one must smooth the empirical distribution function, and bootstrap samples are then drawn from the smoothed empirical distribution function, for example from a kernel density estimate. However, it is evident that proper selection of the smoothing parameter (h) is important so that oversmoothing or undersmoothing does not occur. It is difficult to know the most appropriate value for h, and once a value for h is assigned it influences the variability and thus makes characterizing the variability terms of the model impossible. There are few studies in which the smoothed bootstrap has been applied (21,27,28). In one such study the improvement in the correlation coefficient when compared with the standard non-parametric bootstrap was modest (21). Therefore, the value and behavior of the smoothed bootstrap are not clear. [Pg.407]
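A small sketch of the smoothed bootstrap (an illustration; the data and the bandwidth value are assumed, and as the text warns, the choice of h matters): each bootstrap draw is an ordinary resample perturbed with kernel noise of bandwidth h, which is equivalent to sampling from a Gaussian kernel density estimate.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical small sample (n < 15), where the discreteness of the
# empirical distribution function is a concern
data = np.array([1.2, 1.9, 2.3, 2.8, 3.1, 3.6, 4.0, 4.7, 5.5, 6.2])
h = 0.4          # smoothing bandwidth -- an assumed value, not a recommendation

def smoothed_bootstrap_sample(x, h, rng):
    """Resample with replacement, then add Gaussian kernel noise of bandwidth h.

    Equivalent to drawing from a Gaussian kernel density estimate of x.
    """
    resample = rng.choice(x, size=x.size, replace=True)
    return resample + rng.normal(0.0, h, size=x.size)

# Bootstrap distribution of the median under the smoothed scheme
medians = [np.median(smoothed_bootstrap_sample(data, h, rng)) for _ in range(2000)]
lo, hi = np.percentile(medians, [2.5, 97.5])
print(f"smoothed-bootstrap 95% interval for the median: [{lo:.2f}, {hi:.2f}]")
```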

How to Bootstrap. First, the number of subjects in a multistudy data set for the purposes presented needs to be kept constant to maintain the correct statistical interpretations of bootstrap, that is, correctly representing the underlying empirical distribution of the study populations. Second, the nonparametric bootstrap, as opposed to some other more parametric alternatives, was considered more suitable in order to minimize the dependence on having assumed the correct structural model. [Pg.428]

The CI is [-0.144, -0.108] and does not contain zero, supporting the notion that the two elimination rate constants do differ. An alternative to the above would be to replace the Wald-based confidence intervals with those produced using the nonparametric bootstrap technique. With this technique the data set is sampled with replacement at the subject level many times, and the model is fit to each of these resampled data sets, generating an empirical distribution for each model parameter. Confidence intervals can then be constructed for the model parameters based on the percentiles of their empirical distributions. [Pg.734]
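A sketch of the subject-level (case) bootstrap described above, with hypothetical data and a deliberately trivial "model fit" standing in for the real estimation step:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical per-subject data: {subject_id: array of observations}
subjects = {i: rng.normal(loc=0.15, scale=0.05, size=6) for i in range(20)}

def fit_model(dataset):
    """Placeholder for the real model fit; here it just returns a pooled mean.

    In practice this step would refit the full model to the resampled data set
    and return the parameter of interest.
    """
    return np.mean(np.concatenate(list(dataset.values())))

# Resample SUBJECTS with replacement, refit, and collect the parameter estimate
ids = list(subjects.keys())
boot_estimates = []
for _ in range(1000):
    chosen = rng.choice(ids, size=len(ids), replace=True)
    resampled = {k: subjects[sid] for k, sid in enumerate(chosen)}
    boot_estimates.append(fit_model(resampled))

# Percentile confidence interval from the empirical bootstrap distribution
lo, hi = np.percentile(boot_estimates, [2.5, 97.5])
print(f"95% percentile bootstrap CI: [{lo:.3f}, {hi:.3f}]")
```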

There are three distinct features of each simulation model that must be addressed in the simulation plan. The first is the set of input-output (IO) models that describe the PK/PD-outcomes models. The inputs here are the rates of drug administration, and the outputs are quantities such as drug concentrations or biomarkers. These IO models should have stochastic elements, such as between-subject variability and residual variability, as part of the model. It is of primary importance here that the complete probability distribution of the outputs be described in the planning. IO models may be mechanistic or empirical. Mechanistic models attempt to portray the model at the physiological or biochemical level, while empirical models simply describe the IO relationship. Mechanistic models are preferred for simulation because they are more likely to be extrapolatable to other studies or drugs in the future. [Pg.877]
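As a sketch of an empirical IO model with the stochastic elements mentioned (between-subject variability and residual variability), here is a one-compartment oral PK simulation; all parameter values, names, and the sampling times are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed population parameters for a one-compartment oral model
CL_POP, V_POP, KA_POP = 5.0, 50.0, 1.2     # clearance [L/h], volume [L], absorption rate [1/h]
OMEGA_CL, OMEGA_V = 0.3, 0.2               # between-subject variability (log-scale SD)
SIGMA_PROP = 0.15                          # proportional residual error

def simulate_subject(dose, times, rng):
    """Simulate one subject's concentration-time profile with BSV and residual error."""
    cl = CL_POP * np.exp(rng.normal(0, OMEGA_CL))    # between-subject variability
    v = V_POP * np.exp(rng.normal(0, OMEGA_V))
    ke = cl / v
    conc = dose * KA_POP / (v * (KA_POP - ke)) * (np.exp(-ke * times) - np.exp(-KA_POP * times))
    return conc * (1 + rng.normal(0, SIGMA_PROP, size=times.size))  # residual variability

times = np.array([0.5, 1, 2, 4, 8, 12, 24], dtype=float)
trial = np.array([simulate_subject(dose=100.0, times=times, rng=rng) for _ in range(500)])

# The simulation plan asks for the full distribution of the outputs, not just the mean:
p5, p50, p95 = np.percentile(trial, [5, 50, 95], axis=0)
print("median concentration at each sampling time:", np.round(p50, 2))
```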

Models that are used to predict the transport of chemicals in soil can be grouped into two main categories: those based on an assumed or empirical distribution of pore water velocities, and those derived from a particular geometric representation of the pore space. Velocity-based models are currently the most widely used predictive tools. However, they are unsatisfactory because their parameters generally cannot be measured independently and often depend upon the scale at which the transport experiment is conducted. The focus of this chapter is on pore geometry models for chemical transport. These models are not widely used today. However, recent advances in the characterization of complex pore structures mean that they could provide an alternative to velocity-based models in the future. They are particularly attractive because their input parameters can be estimated from independent measurements of pore characteristics. They may also provide a method of inversely estimating pore characteristics from solute transport experiments. [Pg.78]

