Big Chemical Encyclopedia


Zero-time error

The well-known difficulty with batch reactors is the uncertainty of the initial reaction conditions. The problem is to bring together reactants, catalyst, and operating conditions of temperature and pressure so that at zero time everything is as desired. The initial reaction rate is usually the fastest and the most error-laden. To overcome this, the traditional method was to calculate the rate at progressively smaller conversions and extrapolate it back to zero conversion. The significance of estimating the initial rate was that, without any products present, the rate could be expressed as a function of reactants and temperature only. This then simplified the mathematical analysis of the rate function. [Pg.29]

With Eq. (2-42) the first-order rate constant can be calculated from concentrations at any two times. Of course, concentrations are usually measured at many times during the course of a reaction, and then one has choices in the way the estimates will be calculated. One possibility is to let t1 be zero time for all calculations; in this case the same value c° is employed in each calculation, so error in this quantity is transmitted to each rate constant estimate. Another possibility is to apply Eq. (2-42) to successive time intervals. If, as often happens, the time intervals are all... [Pg.31]
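The two estimation strategies can be sketched with the integrated first-order law, k = ln(c1/c2)/(t2 − t1). The concentrations below are invented, roughly following exp(−0.04 t):

```python
import math

# Hypothetical first-order decay data (time, concentration), c0 = 1.000
times = [0.0, 10.0, 20.0, 30.0, 40.0]
concs = [1.000, 0.670, 0.449, 0.301, 0.202]

# Strategy 1: anchor every estimate to the zero-time concentration c0.
# Any error in c0 propagates into every one of these k estimates.
c0 = concs[0]
k_anchored = [math.log(c0 / c) / t for t, c in zip(times[1:], concs[1:])]

# Strategy 2: apply the law to successive time intervals, so no single
# measurement dominates; each estimate uses an independent pair of points.
k_successive = [
    math.log(c1 / c2) / (t2 - t1)
    for (t1, c1), (t2, c2) in zip(zip(times, concs), zip(times[1:], concs[1:]))
]
```

With these data both strategies recover k ≈ 0.04; the difference between them shows up when c0 carries a systematic error.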

This limit has been set by the precision with which small osmotic heights can be read. When diffusion is present, the apparent osmotic pressure is always less than the true osmotic pressure and falls with time. By extrapolation to zero time we can get too low an osmotic pressure and hence too high a molecular weight. The magnitude of the error is due to solute diffusion and depends on the type of measurement; the following table lists some data to illustrate it. [Pg.106]

For SK-500 the rate at 573 K and 400 sec after the initiation of reactant flow is independent of reactant mole ratio for C6:C2 = 0.7 to 10. Under these conditions the 400-sec point is just beyond the maximum in the rate curve. Similar behavior was observed at one other condition. The initial rate of reaction, estimated by extrapolating the decay portion of the rate curves for these data to zero time (see below), indicates a maximum in the rate at C6:C2 = 3.5 (Figure 2). Error bars represent estimated 95% confidence limits. The observed activity for HY is about twice that of SK-500; that for LaY is about two-thirds that of SK-500 (Figure 2). This is consistent with the trend expected (7), since all catalysts were activated to the same temperature. The temperature dependence of the observed rate is large for all systems studied, indicating the absence of external mass transfer limitations. [Pg.564]

The extrapolation to zero time should be made from measurements at times which are not too short, in order to avoid errors due to double-layer charging. [Pg.275]

Problem 10.2 Polymerization of ethylene oxide with potassium tert-butoxide in DMSO was followed [5] by conventional dilatometry, using special procedures to eliminate zero-time errors consequent on rapid initiation reactions. Given below are some of the data obtained for this system at 50°C ... [Pg.818]

In disease, the ratio of plasma volume to body weight (ml/kg) is not always as it is in health. A mean plasma volume of 63 (56-71) ml/kg was found in 6 patients with chronic liver disease who had a mean BSP retention of 3.3 (0-6)%, after a dose of 5 mg/kg (M17). The mean zero-time plasma concentration was found by extrapolation to be 8.1 mg/100 ml instead of 10.0 mg/100 ml as intended. When the BSP retention values were corrected for underestimation of plasma volume, a mean value of 15 (8-25)% was obtained. However, the procedure used in this correction was not disclosed. Errors in estimating the initial volume of distribution are not removed by basing the dose on surface area. Thus, Ingelfinger et al. (II) gave doses based on surface area and obtained zero-time plasma values within 10% of those expected in only 45 out of 50 healthy persons and only 41 out of 58 patients. Other factors involved in predicting plasma volume have been reviewed elsewhere (12, 04). [Pg.329]

We always analyze our zero-time aliquot in triplicate (i.e., six replicates for a duplicate assay to give three results) to overcome any random errors and take into account standard analytical errors, since stability data are often critical in dictating study logistics. [Pg.181]

It is possible to determine the reaction onset temperature (T0) very precisely by nonisothermal methods, which is almost impossible to do by the isothermal method. The zero-time error is therefore absent. [Pg.48]

Gregory, R. B., Analysis of positron annihilation lifetime data by numerical Laplace inversion: corrections for source terms and zero-time shift errors, Nucl. Instrum. Methods Phys. Res. A, 302, 496-507 (1991). [Pg.417]

A series of DMC calculations with progressively smaller time-steps may then be used to extrapolate to the zero-time-step result, removing the error associated with the use of a finite time-step, which is referred to as time-step bias. The requirement of a short time-step makes DMC calculations significantly more costly than VMC, and several modifications to the random walk algorithm have been introduced to minimize time-step errors [36, 37]... [Pg.260]

The semi-infinite medium is employed to study the spatiotemporal patterns that the solution of the non-Fick damped wave diffusion and relaxation equation exhibits. This medium has also been used in the study of Fick mass diffusion. The boundary conditions can be of different kinds, such as constant wall concentration, constant wall flux (CWF), pulse injection, and convective, impervious, and exponential-decay conditions. The similarity or Boltzmann transformation worked out well in the case of the parabolic PDE, where an error-function solution can be obtained in the transformed variable. The conditions at infinite width and at zero time are the same, and the conditions at zero distance from the surface and at infinite time are the same. [Pg.198]
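For the parabolic (Fick) limit with constant wall concentration, the similarity transformation yields the familiar error-function profile. A minimal sketch of that classical solution (illustrative only; it is not the damped-wave solution discussed above):

```python
from math import erfc, sqrt

def concentration(x, t, D, c_wall=1.0):
    """Fick solution for a semi-infinite medium held at c_wall at x = 0:
    C(x, t) = c_wall * erfc(x / (2 * sqrt(D * t)))."""
    return c_wall * erfc(x / (2.0 * sqrt(D * t)))

# The profile satisfies the limits noted above: C = c_wall at the surface
# (x = 0), and C -> 0 both at infinite depth and at zero time (for x > 0).
```

The similarity variable x/(2√(Dt)) is why "infinite width" and "zero time" impose the same condition: both send the argument of erfc to infinity.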

Systematic errors have a deterministic character. There are, for example, systematic errors in time, varying in some systematic manner (linear drift of the zero of measuring instruments, etc.). Systematic errors can often be eliminated by the calibration of instruments, by the use of standards, etc. [Pg.19]

In order to guarantee the accuracy of the analysis and eliminate errors, field-return data must be filtered. For this purpose, we use a step-by-step procedure. In the first step, we filter obvious errors from the whole data set: records with an unknown assembly date, records with quality-failure entries that result in a zero time to failure (TTF), and records with a negative or unreasonable TTF. [Pg.1872]
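The first filtering step can be sketched as follows; the record layout and the 10-year cutoff for an "unreasonable" TTF are assumptions made for illustration:

```python
from datetime import date

# Hypothetical field-return records: (assembly_date, failure_date, quality_failure)
records = [
    (date(2020, 1, 10), date(2021, 3, 5),  False),  # valid record
    (None,              date(2021, 3, 5),  False),  # unknown assembly date
    (date(2020, 1, 10), date(2020, 1, 10), True),   # quality failure -> zero TTF
    (date(2020, 5, 1),  date(2020, 3, 1),  False),  # negative TTF
]

def is_valid(rec, max_ttf_days=3650):
    assembled, failed, quality_failure = rec
    if assembled is None or quality_failure:
        return False
    ttf = (failed - assembled).days
    # Reject zero, negative, and unreasonably large times to failure
    return 0 < ttf <= max_ttf_days

clean = [r for r in records if is_valid(r)]
```

Only the first record survives this filter; the other three illustrate the error classes named in the text.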

The anti-reset-windup technique discussed above is known as external reset feedback. For most applications either it, or the modification mentioned below, is our preferred scheme. It has the disadvantage that the controller output signal, commonly labeled valve position, is really different from the actual position: it differs by the product of the error signal and the proportional gain. Lag in the reset circuit may cause further error. A modification is therefore introduced by some vendors, particularly in the newer microprocessor controls, which consists of setting the reset time equal to zero when the controller is overridden. This technique is sometimes called integral tracking. It should not be used with auto overrides. [Pg.201]

Obviously, because of condition b) this treatment must be characterized as a quasi-equilibrium treatment: it can only hold for a sufficiently slow process. However, by the application of this approach one obtains the so-called square-root law for the time dependence of the layer thickness. But this law has an infinite slope at zero time; in other words, it does not represent a slow process initially. An estimate of the error made in this way cannot be given without introducing the physical concepts of nucleation and supercooling. This fact was overlooked at the time. [Pg.112]
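The infinite initial slope follows directly from differentiating the square-root law. Writing the layer thickness as x(t) with a rate constant k:

```latex
x(t) = k\sqrt{t}, \qquad
\frac{dx}{dt} = \frac{k}{2\sqrt{t}} \;\longrightarrow\; \infty
\quad \text{as } t \to 0^{+}
```

so the predicted initial growth rate diverges, contradicting the slow-process assumption on which the quasi-equilibrium treatment rests.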

Unfortunately, Eq. (1) is even more sensitive to errors in the zero of shrinkage than the log-log plot ( ), although it is independent of errors in the time zero. As a matter of fact, any equation which includes either time or shrinkage to any power other than the first power will be affected by systematic errors in time and shrinkage. [Pg.333]

Hence, we use the trajectory that was obtained by numerical means to estimate the accuracy of the solution. Of course, the smaller the time step, the smaller the variance, and the probability distribution of errors becomes narrower and concentrated around zero. Note also that the Jacobian of the transformation from e to must be such that log[J] is independent of X in the limit e → 0. Similarly to the discussion on the Brownian particle, we consider the Ito calculus [10-12] by a specific choice of the discrete time...





