
Parametric analysis procedure

The COMPACT (computer-optimized molecular parametric analysis of chemical toxicity) procedure, developed by Lewis and co-workers [92], uses a form of discriminant analysis based on two descriptors, namely, molecular planarity and electronic activation energy (the difference between the energies of the highest occupied and lowest unoccupied molecular orbitals), which predict the potential of a compound to act as a substrate for one of the cytochromes P450. Lewis et al. [93] found 64% correct predictions for 100 compounds tested by the NTP for mutagenicity. [Pg.484]
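The published COMPACT model and its descriptor values are not reproduced in this excerpt; the sketch below only illustrates a two-descriptor discriminant analysis of the general kind described, using invented planarity and HOMO-LUMO gap values and scikit-learn's LinearDiscriminantAnalysis as a stand-in classifier.

```python
# Minimal sketch of a two-descriptor discriminant analysis (not the published
# COMPACT model): separate likely from unlikely P450 substrates using a
# planarity descriptor and the HOMO-LUMO gap. All values are invented.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical descriptors: [planarity descriptor, HOMO-LUMO gap in eV]
X = np.array([[6.2, 7.1], [5.8, 7.4], [1.9, 9.8], [2.3, 10.2],
              [7.0, 6.8], [1.5, 10.5], [6.5, 7.0], [2.0, 9.9]])
y = np.array([1, 1, 0, 0, 1, 0, 1, 0])   # 1 = treated as a P450 substrate

lda = LinearDiscriminantAnalysis().fit(X, y)

new_compound = np.array([[5.5, 7.6]])    # hypothetical test compound
print("predicted class:", lda.predict(new_compound)[0])
print("fraction of correct in-sample predictions:", lda.score(X, y))
```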

Motivations for pursuing spectral simulation in a clinical MRS setting include providing metabolite prior information for use in parametric spectral analysis procedures, optimizing pulse sequence parameters for the observation of specific metabolite structures, and shortening pulse sequence development times. This section describes, in some detail, examples of each, with particular regard to the design of each simulation, the level of prior information it incorporates, and the clinical use of the results. [Pg.89]

Kijko, A., Graham, G. (1998). Parametric-historic procedure for probabilistic seismic hazard analysis. Part I: Estimation of maximum regional magnitude Mmax. Pure and Applied Geophysics, 152, 413-442. doi: 10.1007/s000240050161... [Pg.41]

Nonparametric analysis provides powerful results, since the reliability calculation is not constrained to fit any particular pre-defined lifetime distribution. However, this flexibility makes nonparametric results neither easy nor convenient to use for many of the purposes encountered in engineering design (e.g., optimization). In addition, some trends and patterns are more clearly identified and recognized with parametric analysis. Several methods can be used to fit a parametric distribution to the nonparametric estimated reliability functions (as provided by the Kaplan-Meier estimator), such as graphical procedures or inference procedures; see Lawless (2003) for details. In this paper we choose the maximum likelihood estimation (MLE) technique, assuming that the satellite subsystem failure data arise from a Weibull probability distribution, as expressed in Equations 1 and 2. [Pg.868]
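Equations 1 and 2 and the satellite failure data are not reproduced in this excerpt, but the MLE step itself can be sketched as follows; the failure times, the censoring flags, and the (shape, scale) Weibull parameterization are illustrative assumptions.

```python
# Minimal sketch: maximum likelihood fit of a 2-parameter Weibull to
# right-censored lifetime data (invented failure times, in years).
import numpy as np
from scipy.optimize import minimize

def weibull_neg_log_lik(params, t, failed):
    """Negative log-likelihood for shape beta and scale eta; failed[i] is True
    if unit i failed at t[i], False if it was still operating (censored)."""
    beta, eta = params
    if beta <= 0 or eta <= 0:
        return np.inf
    z = t / eta
    log_pdf = np.log(beta / eta) + (beta - 1) * np.log(z) - z**beta  # failures
    log_sf = -z**beta                                                # survivors
    return -(np.sum(log_pdf[failed]) + np.sum(log_sf[~failed]))

t = np.array([1.2, 3.5, 4.1, 6.0, 7.3, 8.8, 9.5, 10.0, 10.0, 10.0])
failed = np.array([True, True, True, True, True, True, True, False, False, False])

res = minimize(weibull_neg_log_lik, x0=[1.0, np.median(t)],
               args=(t, failed), method="Nelder-Mead")
beta_hat, eta_hat = res.x
print(f"Weibull MLE: shape beta = {beta_hat:.2f}, scale eta = {eta_hat:.2f}")
```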

We are going to describe a general procedure for the parametric analysis of a typical chemical dynamical system, based on the examples presented in this chapter. This procedure has also been laid out in detail by Bykov and Tsybenova (2011) and Bykov et al. (2015). The process under study occurs under external conditions that are characterized by a number of parameters. These parameters relate to the system of ordinary differential equations... [Pg.257]
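Bykov's model equations are not given in this excerpt, so the following sketch only illustrates the general idea of a parametric analysis: a one-parameter sweep of an assumed two-variable exothermic-reaction model, integrated to long times to see how the attained state shifts with the parameter. The model form and all numerical values are assumptions.

```python
# Sketch of a one-parameter scan for a chemical dynamical system (assumed
# dimensionless mass/heat balances of a first-order exothermic reaction,
# not the model from Bykov & Tsybenova): integrate to long times for each
# value of the Damkohler-like parameter Da and record the attained state.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, Da, B, beta):
    x, theta = y                 # conversion and dimensionless temperature
    r = Da * (1.0 - x) * np.exp(theta / (1.0 + theta / 20.0))
    return [r - x,               # dx/dtau
            B * r - (1.0 + beta) * theta]   # dtheta/dtau

B, beta = 8.0, 1.5               # assumed heat-release and cooling parameters
for Da in np.linspace(0.05, 0.4, 8):
    sol = solve_ivp(rhs, (0.0, 200.0), [0.0, 0.0], args=(Da, B, beta),
                    rtol=1e-8, atol=1e-10)
    x_end, theta_end = sol.y[:, -1]
    print(f"Da = {Da:.3f}  ->  x = {x_end:.3f}, theta = {theta_end:.3f}")
```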

A basic assumption underlying t-tests and ANOVA (which are parametric tests) is that cost data are normally distributed. Given that the distribution of these data often violates this assumption, a number of analysts have begun using nonparametric tests, such as the Wilcoxon rank-sum test (a test of median costs) and the Kolmogorov-Smirnov test (a test for differences in cost distributions), which make no assumptions about the underlying distribution of costs. The principal problem with these nonparametric approaches is that statistical conclusions about the mean need not translate into statistical conclusions about the median (e.g., the means could differ yet the medians could be identical), nor do conclusions about the median necessarily translate into conclusions about the mean. Similar difficulties arise when, to avoid the problems of nonnormal distribution, one analyzes cost data that have been transformed to be more normal in their distribution (e.g., the log transformation or the square root of costs). The sample mean remains the estimator of choice for the analysis of cost data in economic evaluation. If one is concerned about nonnormal distribution, one should use statistical procedures that do not depend on the assumption of normal distribution of costs (e.g., nonparametric tests of means). [Pg.49]
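A bootstrap comparison of arm means is one example of a nonparametric procedure that still targets the mean, as the passage recommends; the sketch below uses invented, right-skewed cost data and is not taken from the cited source.

```python
# Minimal sketch: bootstrap inference on the difference in MEAN costs between
# two treatment arms (a nonparametric procedure aimed at the mean).
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical right-skewed cost data for two arms (arbitrary currency units)
costs_a = rng.lognormal(mean=8.0, sigma=1.0, size=120)
costs_b = rng.lognormal(mean=8.3, sigma=1.0, size=115)

observed_diff = costs_b.mean() - costs_a.mean()

# Percentile bootstrap confidence interval for the difference in means
n_boot = 10_000
boot_diffs = np.empty(n_boot)
for i in range(n_boot):
    boot_a = rng.choice(costs_a, size=costs_a.size, replace=True)
    boot_b = rng.choice(costs_b, size=costs_b.size, replace=True)
    boot_diffs[i] = boot_b.mean() - boot_a.mean()

lo, hi = np.percentile(boot_diffs, [2.5, 97.5])
print(f"difference in mean costs: {observed_diff:.0f} "
      f"(95% bootstrap CI {lo:.0f} to {hi:.0f})")
```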

Non-parametric procedures tend to be simple two-group comparisons. In particular, a general non-parametric version of analysis of covariance does not exist. So the advantages of ANCOVA (correcting for baseline imbalances, increasing precision, and looking for treatment-by-covariate interactions) are essentially lost within a non-parametric framework. [Pg.170]
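For contrast with the limitation just described, a parametric ANCOVA adjusting a post-treatment outcome for its baseline value can be written as an ordinary least-squares model; the variable names and simulated data below are assumptions for illustration only.

```python
# Sketch of the parametric ANCOVA whose advantages the text says are lost in a
# nonparametric framework: outcome ~ treatment + baseline (simulated data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 80
baseline = rng.normal(50, 10, size=n)
treatment = np.repeat([0, 1], n // 2)
outcome = 5 + 0.8 * baseline - 4 * treatment + rng.normal(0, 5, size=n)
df = pd.DataFrame({"outcome": outcome, "baseline": baseline,
                   "treatment": treatment})

# Adjusting for baseline corrects imbalances and sharpens the treatment effect
model = smf.ols("outcome ~ treatment + baseline", data=df).fit()
print(f"adjusted treatment effect: {model.params['treatment']:.2f} "
      f"(SE {model.bse['treatment']:.2f})")

# A treatment-by-covariate interaction can be examined in the same framework
interaction = smf.ols("outcome ~ treatment * baseline", data=df).fit()
print("interaction p-value:",
      round(interaction.pvalues["treatment:baseline"], 3))
```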

While the electronic structure calculations addressed in the preceding Section could in principle be used to construct the potential surfaces that are a prerequisite for dynamical calculations, such a procedure is in practice out of reach for large, extended systems like polymer junctions. At most, semiempirical calculations can be carried out as a function of selected relevant coordinates; see, e.g., the recent analysis of Ref. [44]. To proceed, we therefore resort to a different strategy, by constructing a suitably parametrized electron-phonon Hamiltonian model. This electron-phonon Hamiltonian underlies the two- and three-state diabatic models that are employed below (Secs. 4 and 5). The key ingredients are a lattice model formulated in the basis of localized Wannier functions and localized phonon modes (Sec. 3.1) and the construction of an associated diabatic Hamiltonian in a normal-mode representation (Sec. 3.2) [61]. [Pg.191]

Prior to cluster analysis, these re-scaled travel experience variables were initially evaluated to determine which ones were relevant in differentiating travel experience levels. Using self-perception of travel experience level as the independent variable, the other 7 travel experience variables were subjected to Kruskal-Wallis analysis. This procedure attempted to establish face validity for using the specified variables to describe travel experience by relating them to the respondents' own perception of their travel experience. Kruskal-Wallis analysis, which is the non-parametric equivalent of one-way ANOVA, tests whether several independent samples come from the same population (SPSS Inc., 1999). This test was selected as the appropriate procedure because the data were heavily skewed, a situation for which the Kruskal-Wallis test is suitable (Diekhoff, 1992). The results are presented in Table 3.7. [Pg.76]
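The data behind Table 3.7 are not reproduced here; the sketch below simply shows how a Kruskal-Wallis test of one travel-experience variable across three self-rated experience groups would be run, using invented group samples.

```python
# Minimal sketch of a Kruskal-Wallis test (nonparametric analogue of one-way
# ANOVA): does a skewed travel-experience variable differ across self-rated
# experience groups? Group samples here are invented.
from scipy.stats import kruskal

trips_low = [1, 2, 2, 3, 1, 4, 2, 3]         # self-rated "low" experience
trips_medium = [3, 5, 4, 6, 8, 5, 4, 7]      # self-rated "medium"
trips_high = [9, 12, 7, 15, 10, 22, 11, 9]   # self-rated "high"

stat, p_value = kruskal(trips_low, trips_medium, trips_high)
print(f"H = {stat:.2f}, p = {p_value:.4f}")
# A small p-value suggests the samples do not come from the same population,
# i.e. the variable helps differentiate the experience levels.
```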

In Chapter 5 we saw how the calculation of the 95 per cent CI for the mean can lead to nonsensical results if the data deviate severely from a normal distribution. This requirement for a normal distribution also applies to the t-tests, analyses of variance and correlation that we met in Chapters 6-14. These procedures are termed parametric methods and are quite robust, so moderate non-normality does little damage, but in more extreme cases some pretty dumb conclusions can emerge. This chapter looks at steps that can be taken to allow the analysis of seriously non-normal data and also of ordinal scale data. [Pg.224]

One procedure is to assume a parametrized form of the particle distribution function n(z) and compare the predictions of Eq. (8) to the measured scattered intensity to estimate the values of the parameters. This procedure was used to characterize the interaction of the interface with particles in a flowing stream above an interface [12]. There was no adsorption of particles on the surface, and the particle distribution function was obtained from a solution of a mass transport equation with a term describing the interaction with the interface. The analysis yielded estimates of the parameters in the interaction potential [12]. ... [Pg.182]
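Eq. (8) and the measurement details are not reproduced in this excerpt, so the sketch below only illustrates the fitting procedure itself: assume a parametrized profile n(z), push it through a stand-in forward model, and adjust the parameters by least squares to match measured intensities. The exponential profile, the forward model, and the data are all assumptions.

```python
# Sketch of the fitting procedure only: assume n(z) = n0 * exp(-z/decay_length),
# predict the intensity with a stand-in forward model, and fit (n0, decay_length)
# by least squares. Eq. (8) of the source is NOT reproduced here.
import numpy as np
from scipy.optimize import curve_fit

def intensity_model(penetration_depth, n0, decay_length):
    """Stand-in forward model: profile weighted by an exponentially decaying
    evanescent illumination exp(-z / penetration_depth), integrated over z."""
    d = np.asarray(penetration_depth, dtype=float)
    z = np.linspace(0.0, 10.0 * decay_length, 2000)
    dz = z[1] - z[0]
    profile = n0 * np.exp(-z / decay_length)
    weight = np.exp(-z[None, :] / d[:, None])
    return np.sum(weight * profile[None, :], axis=1) * dz

# Hypothetical measurements: intensity versus evanescent penetration depth (um)
depths = np.array([0.1, 0.2, 0.4, 0.8, 1.6])
measured = np.array([0.081, 0.146, 0.220, 0.310, 0.378])  # arbitrary units

popt, pcov = curve_fit(intensity_model, depths, measured,
                       p0=[0.5, 1.0], bounds=(1e-6, np.inf))
n0_fit, decay_fit = popt
print(f"fitted n0 = {n0_fit:.2f}, decay length = {decay_fit:.2f} um")
```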

In critical cases it may well be worthwhile to make a complete analysis of stability. In many cases, however, enough can be learned by studying what Bilous and Amundson (B7) called parametric sensitivity. These authors derived formulas for calculating the amplification or attenuation of disturbances imposed on an unpacked tubular reactor originally in a steady state, with the idea that if the disturbances grow unduly, the performance of the reactor is too sensitive to the conditions imposed on it, that is, to the parameters of the system. The effect of feedback from a control system was not considered. As pointed out by the authors, it would be a much more complicated task to apply their procedure to a packed reactor, but it still would entail far less computation than a study of the transient response. [Pg.257]
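Bilous and Amundson's analytical formulas are not reproduced in this excerpt; the sketch below only illustrates the idea of parametric sensitivity numerically, by perturbing the coolant temperature of an assumed plug-flow (unpacked tubular) reactor model and comparing hot-spot temperatures. The kinetics and all parameter values are invented.

```python
# Numerical illustration of parametric sensitivity (not Bilous & Amundson's
# analytical result): perturb the coolant temperature of an assumed
# non-isothermal plug-flow reactor model and see how strongly the hot-spot
# temperature responds. All parameter values are invented for illustration.
import numpy as np
from scipy.integrate import solve_ivp

def pfr(z, y, T_cool):
    """Assumed model: first-order exothermic reaction A -> B, plug flow."""
    c, T = y
    k = 1.0e8 * np.exp(-12000.0 / T)            # assumed Arrhenius rate constant
    rate = k * c
    dc_dz = -rate                                # unit superficial velocity
    dT_dz = 100.0 * rate - 2.0 * (T - T_cool)    # heat release minus wall cooling
    return [dc_dz, dT_dz]

def hot_spot(T_cool):
    sol = solve_ivp(pfr, (0.0, 2.0), [1.0, 600.0], args=(T_cool,),
                    max_step=0.01)
    return sol.y[1].max()

base, perturbed = hot_spot(600.0), hot_spot(602.0)
print(f"hot spot: {base:.1f} K -> {perturbed:.1f} K for a 2 K coolant "
      f"disturbance (amplification {(perturbed - base) / 2.0:.1f}x)")
```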

On the theoretical basis of the Koopmans theorem, ab initio and semi-empirical calculations can be used as important aids in the analysis of PE spectra. Currently, the MNDO method enjoys great popularity because ionization energies of even large and complex molecules can be calculated with an accuracy of a few tenths of an eV. This level of accuracy has been achieved by re-parametrization of the procedure. Further improvements in the calculation of ionization energies can be achieved by separate SCF and configuration interaction (CI) calculations for the neutral molecule and the ion. [Pg.271]

It is, finally, worth mentioning that in addition to deconvolution methods it is also possible to use a convolution approach. The advantage of the latter is that LPA can be performed in a one-step procedure, directly on the measured data, provided that the procedure can make use of a parametric description of the IP determined beforehand. Software packages are available for this type of analysis. ... [Pg.387]





