Big Chemical Encyclopedia


Cumulative E-factor

It is difficult to choose the appropriate starting material for calculation of the cumulative E-factor so that a fair comparison can be made among the various guanidines, and we do not claim that our choice of starting material is the most appropriate. [Pg.386]

The Cumulative E-factor for each route is calculated with the assumption that scaling up or down a preceding step does not affect the reported yield and the E-factor. This is unlikely to be true in practice. [Pg.386]

In the case of routes with multiple steps, we defined a cumulative E-factor that makes use of the E-factor of preceding steps to obtain the E-factor of the entire reported route. We will use the aziridine synthesis and ring opening to form a triamine as an example to demonstrate the calculation of cumulative E-factor (Table 23.2). [Pg.386]

Based on the data in Table 23.2, the cumulative E-factor for steps 1 and 2, including the waste generated from the synthesis of the aziridine, is calculated as indicated below. [Pg.387]

The cumulative E-factor for steps 1 and 2 shown in Table 23.2 is 63.8. This number is used to calculate the amount of waste generated for the amount of protected triamine used in the subsequent step. [Pg.387]
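The bookkeeping described above can be sketched as follows. The masses and per-step E-factors below are hypothetical (not the values from Table 23.2); the propagation rule is an assumption consistent with the definition in the text: the waste per gram of a step's product is the step's own waste plus the upstream waste embodied in the intermediate it consumes.

```python
def cumulative_e_factor(steps):
    """Propagate a cumulative E-factor through a linear route.

    steps: list of (step_e_factor, intermediate_in_g, product_out_g),
    where step_e_factor counts only the waste of that step per gram of
    its product. Assumes, as stated in the text, that scaling a
    preceding step up or down does not change its yield or E-factor.
    """
    e_cum = 0.0
    for e_step, m_in, m_out in steps:
        # waste carried over from synthesising the intermediate consumed here
        upstream_waste_per_g = e_cum * m_in / m_out
        e_cum = e_step + upstream_waste_per_g
    return e_cum

# hypothetical two-step route: step 1 has E = 20; step 2 has E = 5 and
# consumes 2 g of the step-1 intermediate per 1 g of its own product
route = [(20.0, 0.0, 1.0), (5.0, 2.0, 1.0)]
```

For this invented route the cumulative E-factor of step 2 is 5 + 20 × 2 = 45, illustrating how waste from preceding steps dominates once an intermediate is used in excess.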

Cumulative E-Factor of Various Routes of Guanidine Catalyst Syntheses... [Pg.387]

Based on the cumulative E-factors tabulated in Table 23.3, Feng-G has the lowest cumulative E-factor of 60. Syntheses of the bicyclic guanidines Tan-G and Corey-G suffer from the most unfavourable E-factors of >330 (i.e., 1 g of catalyst generates 484 g of waste in the case of Corey-G). While Corey and Grogan have demonstrated that the catalyst could be recycled through the... [Pg.387]

Cumulative E-factor includes all steps of the synthesis and includes waste from the synthesis of intermediates in all preceding steps. [Pg.388]

Scheme 23.4 Further analysis of aldol reaction adducts using green chemistry metrics. E-factor(overall) is the cumulative E-factor of the two g...
Under this definition, the E function is characterized by an experimental B factor, which can be estimated experimentally (Saad et al., 2001). Note that the cumulative B factor applied in the final reconstruction is a composite of experimental and computational contributions. The computational B factor is attributable to additional blurring effects, such as inaccuracy in determining the orientation of particles, which can also be described by a Gaussian-type function. The dampening of the image contrast by the... [Pg.96]
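The Gaussian dampening described here is commonly written as an envelope E(s) = exp(-B·s²/4) applied to the Fourier amplitudes. The sketch below applies such an envelope; the B value and spatial frequencies are chosen purely for illustration, and the exp(-B·s²/4) convention is an assumption based on common cryo-EM usage rather than anything specific to this text.

```python
import math

def b_factor_envelope(s, b_factor):
    """Gaussian envelope E(s) = exp(-B * s**2 / 4) that dampens image
    contrast at spatial frequency s (in 1/Angstrom); B in Angstrom**2.
    The cumulative B would combine the experimental and computational
    contributions described in the text.
    """
    return math.exp(-b_factor * s ** 2 / 4.0)

def dampen_amplitudes(amplitudes, freqs, b_factor):
    """Apply the envelope to a list of Fourier amplitudes."""
    return [a * b_factor_envelope(s, b_factor)
            for a, s in zip(amplitudes, freqs)]
```

The envelope equals 1 at zero frequency and decays with increasing s, so a larger cumulative B suppresses high-resolution contrast more strongly.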

The defocus values for electron micrographs can be readily estimated on the basis of the CTF rings visible in the incoherently averaged Fourier transforms of individual particle images (Zhou et al., 1994, 1996). This method has become a routine practice universally adopted for the initial evaluation and determination of CTF parameters as defined in Eqs. (2) and (3). So far, the determination of the cumulative B factors for 3-D reconstruction has been somewhat ad hoc. The cumulative B factor used in these studies is determined either by trial and error, with the initial value derived from previous results, or from the incoherently averaged Fourier transforms of particle images. Different approaches have been adopted to make corrections for the CTF and the E function of the micrographs. They differ in the steps at which these corrections are made and in whether or how the E function is corrected. [Pg.102]

Lorentz Correction for Highly Collimated Beams. The rotational correction should be used if the powder sample is rotated within the beam in the single crystal sense, i.e. all crystallites should complete their rotation within the beam. Should the beam be collimated to dimensions below those of the sample containment then this further reduces the rotational impact on the cumulative Lorentz factor. A term Rl can be introduced to quantify the rotational Lorentz factor from 0 for no rotational element to 1 for full rotation of all crystallites within the beam (Figure 14.12). The introduction of this factor leads to ... [Pg.432]

G.E. has accumulated 211 reactor operating years of experience on BWR-type plants in the U.S.A. The cumulative load factor up to 1983 is 59.4%. [Pg.163]

Monte Carlo Method The Monte Carlo method makes use of random numbers. A digital computer can be used to generate pseudorandom numbers in the range from 0 to 1. To describe the use of random numbers, let us consider the frequency distribution curve of a particular factor, e.g., sales volume. Each value of the sales volume has a certain probability of occurrence. The cumulative probability of that value (or less) being realized is a number in the range from 0 to 1. Thus, a random number in the same range can be used to select a random value of the sales volume. [Pg.824]
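A minimal sketch of this sampling scheme (inverse-transform sampling from a discrete distribution). The sales-volume figures are invented for illustration; the technique itself is exactly what the paragraph describes: a uniform random number in [0, 1) selects the value whose cumulative-probability interval contains it.

```python
import bisect
import random

def sample_from_cumulative(values, cum_probs, u):
    """Map a uniform random number u in [0, 1) to the value whose
    cumulative probability interval contains u."""
    i = bisect.bisect_right(cum_probs, u)
    return values[min(i, len(values) - 1)]

# hypothetical sales-volume distribution
volumes = [100, 200, 300]        # units sold
cum_probs = [0.2, 0.7, 1.0]      # cumulative probabilities of value or less

rng = random.Random(0)           # seeded pseudorandom generator
draws = [sample_from_cumulative(volumes, cum_probs, rng.random())
         for _ in range(10_000)]
```

With many draws, the relative frequencies of 100, 200 and 300 approach the underlying probabilities 0.2, 0.5 and 0.3, which is the basis of the Monte Carlo estimate.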

The dynamics of highly diluted star polymers on the scale of segmental diffusion was first calculated by Zimm and Kilb [143], who presented the spectrum of eigenmodes as it is known for linear homopolymers in dilute solutions [see Eq. (77)]. This spectrum was used to calculate macroscopic transport properties, e.g. the intrinsic viscosity [145]. However, explicit theoretical calculations of the dynamic structure factor S(Q, t) are still missing. Instead, the method of the first cumulant was applied to analyze the dynamic properties of such diluted star systems on microscopic scales. [Pg.90]

One can identify two major categories of uncertainty in EIA data: (i) scientific uncertainty, inherent in input data (e.g., incomplete or irrelevant baseline information, project characteristics, misidentification of sources of impacts, as well as of secondary and cumulative impacts) and in impact prediction based on these data (lack of scientific evidence on the nature of affected objects and impacts, misidentification of source-pathway-receptor relationships, model errors, misuse of proxy data from analogous contexts); and (ii) decision (societal) uncertainty, resulting from, e.g., inadequate scoping of impacts, imperfect impact evaluation (e.g., insufficient provisions for public participation), the human factor in formal decision-making (e.g., subjectivity, bias, or any kind of pressure on a decision-maker), and the lack of strategic plans and policies and possible implications of nearby developments (Demidova, 2002). [Pg.21]

Vertzoni et al. (30) recently clarified the applicability of the similarity factor, the difference factor, and the Rescigno index in the comparison of cumulative data sets. Although all these indices should be used with caution (because inclusion of too many data points in the plateau region will make the profiles appear more similar, and because the cutoff time per percentage dissolved is chosen empirically rather than on theoretical grounds), all can be useful for comparing two cumulative data sets. When the measurement error is low, i.e., the data have low variability, mean profiles can be used and any one of these indices may be applied; selection depends on the nature of the difference one wishes to estimate and on the existence of a reference data set. When data are more variable, index evaluation must be done on a confidence-interval basis, and selection of the appropriate index depends on the number of replications per data set in addition to the type of difference one wishes to estimate. When a large number of replications per data set is available (e.g., 12), construction of nonparametric or bootstrap confidence intervals for the similarity factor appears to be the most reliable of the three methods, provided that the plateau level is 100. With a restricted number of replications per data set (e.g., three), any of the three indices can be used, provided either nonparametric or bootstrap confidence intervals are determined (30). [Pg.237]
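For reference, the difference factor (f1) and similarity factor (f2) on cumulative dissolution profiles are conventionally computed as below. The profiles are hypothetical, and the formulas follow the widely used regulatory definitions rather than anything specific to ref. (30).

```python
import math

def difference_factor_f1(ref, test):
    """f1 = 100 * sum(|R_t - T_t|) / sum(R_t) over paired time points."""
    return 100.0 * sum(abs(r - t) for r, t in zip(ref, test)) / sum(ref)

def similarity_factor_f2(ref, test):
    """f2 = 50 * log10(100 / sqrt(1 + mean squared difference))."""
    n = len(ref)
    msd = sum((r - t) ** 2 for r, t in zip(ref, test)) / n
    return 50.0 * math.log10(100.0 / math.sqrt(1.0 + msd))

# hypothetical cumulative percent-dissolved profiles at common time points
reference = [15.0, 40.0, 70.0, 90.0, 98.0]
test_prod = [12.0, 38.0, 68.0, 91.0, 97.0]
```

Identical profiles give f1 = 0 and f2 = 100; f2 ≥ 50 is the conventional similarity criterion. The caution in the text about plateau points is visible here: appending many near-identical plateau values shrinks the mean squared difference and inflates f2.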

The first difference is that non-cumulative data refer to the amount of drug dissolved within a certain time period and not at a specific time point, i.e., in this case the observed variable is the amount dissolved, W(t1, t2), between the time points t1 and t2 (t2 > t1). Consequently, in contrast to their application to cumulative data (30), where the difference factor and the Rescigno index refer to area differences, for non-cumulative data these indices refer to the difference between the dissolved amounts of the test and the reference product in a given time interval. [Pg.242]

The second difference relates to the definition of a cutoff time point for the evaluation of the difference factor and the Rescigno index. When cumulative data are available, evaluation of the difference factor or the Rescigno index usually requires a reference data set in order to define the cutoff time point for index evaluation (30). When the difference factor and the Rescigno index are instead evaluated from non-cumulative data, this difficulty does not exist, provided that the release process has been monitored up to the end (i.e., until dissolution of the drug is complete). It is worth mentioning that a similar conclusion cannot be drawn for the similarity factor (31), because application of this index to non-cumulative data is set apart by the careful scaling procedure required, in addition to the existence of a reference data set. The reason is that this index can continue to change even after dissolution of both products is complete. [Pg.243]
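The per-interval amounts W(t1, t2) discussed above can be obtained from a cumulative profile by differencing consecutive values, after which an f1-style index compares the amounts dissolved per interval. This is a sketch of that idea under stated assumptions, not the exact indices of refs. (30) and (31).

```python
def to_interval_amounts(cumulative):
    """Amount dissolved in each interval (t_{i-1}, t_i], obtained by
    differencing a cumulative profile that starts implicitly at 0."""
    return [b - a for a, b in zip([0.0] + cumulative[:-1], cumulative)]

def interval_difference_factor(ref_cumulative, test_cumulative):
    """Difference-factor analogue on non-cumulative data: compares the
    dissolved amounts of test and reference per time interval. As the
    text notes, this presumes both profiles were monitored until
    dissolution is complete, so no cutoff time point is needed."""
    ref = to_interval_amounts(ref_cumulative)
    tst = to_interval_amounts(test_cumulative)
    return 100.0 * sum(abs(r - t) for r, t in zip(ref, tst)) / sum(ref)
```

Because the intervals partition the whole release process, the index is defined without a reference-derived cutoff, which is the advantage the text attributes to the non-cumulative formulation.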

The solution structure of a thioredoxin from B. acidocaldarius (Topt = 60 °C) has been studied by NMR and compared with that of the E. coli protein determined by X-ray analysis. It was found that the higher thermostability of the former is due to cumulative effects, the main factor being an increased number of ionic interactions cross-linking different secondary structural elements. Multidimensional heteronuclear NMR spectroscopy was also employed to characterize thioredoxin homologues found in the hyperthermophilic... [Pg.133]

The observation that branches A and B in Fig. 6.25 merge at large Q is consistent with the predictions, since 6π ≈ 18.84 deviates from 16 by less than 15%, and the statistical errors of the experiment and systematic uncertainties in the methods used to extract the cumulant exceed this difference. In [325], for both the collective concentration fluctuations and the local Zimm modes, the observed rates are too slow by a factor of 2 when compared to the predictions with the solvent viscosity and the correlation length as obtained from the SANS data. It is suggested that this discrepancy may be removed by the introduction of an effective viscosity that replaces the plain solvent viscosity. Finally, at very low Q, branch C should level at the centre of mass... [Pg.197]

Where there is no evidence for bioaccumulation and/or cumulative toxicity, no downward adjustment is necessary, i.e., a factor of 1 should be used. [Pg.267]


See other pages where Cumulative E-factor is mentioned: [Pg.384]    [Pg.387]    [Pg.389]    [Pg.393]    [Pg.320]    [Pg.388]    [Pg.134]    [Pg.128]    [Pg.24]    [Pg.425]    [Pg.854]    [Pg.142]    [Pg.264]    [Pg.282]    [Pg.168]    [Pg.344]    [Pg.931]    [Pg.357]    [Pg.437]    [Pg.35]    [Pg.234]    [Pg.246]    [Pg.20]    [Pg.316]    [Pg.146]

See also in source #XX -- [ Pg.2 , Pg.389 , Pg.392 ]





© 2024 chempedia.info