Big Chemical Encyclopedia

Chemical substances, components, reactions, process design ...


Transform square-root

If the data distribution is extremely skewed, it is advisable to transform the data to bring it closer to symmetry. The visual impression of skewed data is dominated by extreme values, which often make it impossible to inspect the main part of the data. The estimation of statistical parameters such as the mean or standard deviation can also become unreliable for extremely skewed data. Depending on the direction of skewness (left skewed or right skewed), a log transformation or a power transformation (square root, square, etc.) can be helpful in symmetrizing the distribution. [Pg.30]
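As an illustrative sketch (not part of the quoted source), the symmetrizing effect of a square-root transform on right-skewed data can be checked with a moment-based skewness coefficient; the data set and the `skewness` helper below are invented for the example:

```python
import math

def skewness(xs):
    """Moment coefficient of skewness: m3 / m2^(3/2)."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

# Right-skewed data: many small values, a few large ones.
data = [1, 1, 2, 2, 3, 3, 4, 5, 9, 16, 25, 49]

raw_skew = skewness(data)
sqrt_skew = skewness([math.sqrt(x) for x in data])

# The square-root transform pulls in the long right tail,
# so the transformed sample is less skewed than the raw one.
assert sqrt_skew < raw_skew
```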

Statistical Analysis. Analysis of variance (ANOVA) of toxicity data was conducted using SAS/STAT software (version 8.2; SAS Institute, Cary, NC). All toxicity data were transformed (square root, log, or rank) before ANOVA. Comparisons among multiple treatment means were made by Fisher's LSD procedure, and differences between individual treatments and controls were determined by one-tailed Dunnett's or Wilcoxon tests. Statements of statistical significance refer to a probability of type I error of 5% or less (p ≤ 0.05). Median lethal concentrations (LC50) were determined by the trimmed Spearman-Karber method using TOXSTAT software (version 3.5; Lincoln Software Associates, Bisbee, AZ). [Pg.96]

Note: Classification includes only arithmetic portions of text. Excluded are chapters on statistics, linear transformations, square roots, and coordinate geometry. 1-S = problems having one of the five situations; 2-S = problems having two or more of the five situations; other = problems otherwise uncoded, including probability, range, and geometry problems. [Pg.86]

Furthermore, one may need to employ data transformation. For example, it is sometimes a good idea to use the logarithms of variables instead of the variables themselves. Alternatively, one may take square roots or, conversely, raise variables to the nth power. However, genuine data transformation techniques involve far more sophisticated algorithms. As examples, we shall later consider the Fast Fourier Transform (FFT), the Wavelet Transform, and Singular Value Decomposition (SVD). [Pg.206]
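As a minimal sketch of the last technique mentioned above (the matrix and tolerances are illustrative, not from the source), Singular Value Decomposition factors a data table into singular triplets from which the table can be reconstructed exactly:

```python
import numpy as np

# Small illustrative data matrix (3 objects x 2 variables).
X = np.array([[2.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])

# Economy-size SVD: X = U @ diag(s) @ Vt
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# The original table is recovered from its singular triplets.
X_back = U @ np.diag(s) @ Vt
assert np.allclose(X, X_back)

# For this matrix the singular values are simply 2 and 1.
assert np.allclose(s, [2.0, 1.0])
```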

Uniform mixing in the vertical to 1000 m and uniform concentrations across each puff as it expands with the square root of travel time are assumed. A 0.01 h⁻¹ transformation rate from SO2 to sulfate and dry deposition rates of 0.029 and 0.007 h⁻¹ for SO2 and sulfate, respectively, are used. Wet deposition depends on the rainfall rate, determined from the surface observation network every 6 h, with the rate assumed to be uniform over each 6-h period. Concentrations for each cell are determined by averaging the concentrations of each time step for the cell, and deposition is determined by totaling all depositions over the period. [Pg.332]
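A minimal sketch of how the quoted rates act on a puff, assuming (my assumption, not stated in the source) that transformation and dry deposition combine as simple first-order losses on the SO2 mass:

```python
import math

K_TRANSFORM = 0.01   # h^-1, SO2 -> sulfate transformation rate (from the text)
K_DRY_SO2 = 0.029    # h^-1, SO2 dry deposition rate (from the text)

def so2_remaining(q0, hours):
    """Fraction of initial SO2 mass q0 left after `hours` of travel,
    treating transformation and dry deposition as parallel first-order losses."""
    return q0 * math.exp(-(K_TRANSFORM + K_DRY_SO2) * hours)

# Over one 6-h meteorological period, roughly 79% of the SO2 survives.
q = so2_remaining(1.0, 6.0)
assert 0.78 < q < 0.80
```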

Some coordinate transformations are non-linear, such as transforming Cartesian to polar coordinates, where the polar coordinates are given in terms of square-root and trigonometric functions of the Cartesian coordinates. This, for example, allows the Schrödinger equation for the hydrogen atom to be solved. Other transformations are linear, i.e. the new coordinate axes are linear combinations of the old coordinates. Such transformations can be used for reducing a matrix representation of an operator to diagonal form. In the new coordinate system, the many-dimensional operator can be written as a sum of one-dimensional operators. [Pg.309]

The transformation from a set of Cartesian coordinates to a set of internal coordinates, which may for example be distances, angles and torsional angles, is an example of a non-linear transformation. The internal coordinates are connected with the Cartesian coordinates by means of square-root and trigonometric functions, not simple linear combinations. A non-linear transformation will affect the convergence properties. This may be illustrated by considering a minimization of a Morse-type function (eq. (2.5)) with D = a = 1 and x = ΔR. [Pg.323]

Transform: the content of a given column (vector) can be mathematically modified in various ways, the result being deposited in the (N + 1)th column. The available operators are addition of and multiplication by a constant, square and square root, reciprocal, log(u), ln(u), 10^u, exp(u), clipping of digits, adding Gaussian noise, normalization of the column, and transposition of the table. More complicated data work-up is best done in a spreadsheet and then imported. [Pg.370]

Each of these data sets is skewed, yet each can be transformed to normality. With no transformation applied, the probability plot correlation coefficients for the Co, Fe, and Sc data sets are 0.855, 0.857, and 0.987, respectively. For Co and Fe, the hypothesis of normality is rejected at the 0.5 percent level (12). On the other hand, the maximum probability plot correlation coefficients are 0.993, 0.990, and 0.993 for Co, Fe, and Sc, respectively. The maxima occur at (λ, τ) = (0, 0.0048), (0, 0.42), and (0.457, 0), respectively. These maxima are so high that they provide no evidence that the range of transformations is inadequate. Note that the (λ, τ) values at which the maxima occur correspond to log transformations with a shift for Co and Fe, and nearly a square-root transformation for Sc. [Pg.126]
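A sketch of the shifted power family implied by the (λ, τ) notation above, assuming (my reading of the passage) a Box-Cox-style transform with a shift τ, where λ = 0 gives the shifted log and λ ≈ 0.5 approaches a square root; the function name is invented for the example:

```python
import math

def shifted_power(x, lam, tau):
    """Box-Cox-style power transform with shift tau.

    lam = 0   -> log(x + tau)            (shifted log transform)
    lam = 0.5 -> ~ square-root transform (up to scale and offset)
    """
    if lam == 0:
        return math.log(x + tau)
    return ((x + tau) ** lam - 1.0) / lam

# lambda = 0 is the shifted log transform used for Co and Fe above.
assert shifted_power(9.0, 0, 1.0) == math.log(10.0)
# lambda = 0.5 is an affine rescaling of the square root: (sqrt(x) - 1) / 0.5
assert abs(shifted_power(4.0, 0.5, 0.0) - 2.0) < 1e-12
```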

Column-standardization is the most widely used transformation. It is performed by dividing each element of a column-centered table by its corresponding column standard deviation (i.e. the square root of the column variance) ... [Pg.122]
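The operation can be sketched in a few lines (the table below is illustrative, not from the source):

```python
import numpy as np

# Illustrative data table (3 objects x 2 variables).
X = np.array([[1.0, 10.0],
              [2.0, 20.0],
              [3.0, 30.0]])

# Column-center, then divide by the column standard deviation
# (the square root of the column variance).
Z = (X - X.mean(axis=0)) / X.std(axis=0)

# Each column now has zero mean and unit standard deviation.
assert np.allclose(Z.mean(axis=0), 0.0)
assert np.allclose(Z.std(axis=0), 1.0)
```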

The information content resulting from both processing methods is identical insofar as correlation information is concerned. The matrix-square-root transformation can minimize artefacts due to relay effects and chemical-shift near-degeneracy (pseudo-relay effects [80-82, 98]). The application of covariance methods to compute HSQC-1,1-ADEQUATE spectra is described in the following section. [Pg.272]

A special situation arises in the limit of small scavenger concentration. Mozumder (1971) collected evidence from diverse experiments, ranging from thermal to photochemical to radiation-chemical, to show that in all these cases the scavenging probability varied as cs^(1/2) in the limit of small scavenger concentration. Thus, importantly, the square-root law has nothing to do with the specificity of the reaction, but is a general property of diffusion-dominated reaction. For the case of an isolated e-ion pair, comparing the t → ∞ limit of Eq. (7.28) followed by Laplace transformation with the cs → 0 limit of the WAS Eq. (7.26), Mozumder derived... [Pg.234]

To determine whether the skew was responsible for the taxonic findings, Gleaves et al. transformed the data using a square-root or log transformation and were successful in reducing the skew of all but one indicator to less than 1.0. This is a fairly conservative test of the taxonic conjecture, because data transformation not only reduces indicator skew, but can also reduce indicator validities, and hence produce a nontaxonic result. Yet this did not happen in this study. All but one plot originally rated as taxonic were still rated as taxonic after the transformation. MAMBAC base rate estimates were .19 (SD = .18) for transformed empirical indicators and .24 (SD = .06) for transformed theoretical indicators. Nevertheless, these estimates are probably not as reliable as the original estimates because of the possible reduction in validity, which is likely to lower the precision of the estimates. [Pg.144]

The first is to normalize the data, making them suitable for analysis by our most common parametric techniques, such as analysis of variance (ANOVA). A simple test of whether a selected transformation will yield a distribution of data satisfying the underlying assumptions of ANOVA is to plot the cumulative distribution of samples on probability paper (a commercially available paper which has the probability function scale as one axis). One can then alter the scale of the second axis (the axis other than the one on the probability scale) from linear to any other (logarithmic, reciprocal, square root, etc.) and see if a previously curved line, indicating a skewed distribution, becomes linear, indicating normality. The slope of the transformed line gives us an estimate of the standard deviation. If... [Pg.906]

In some cases it may be possible to transform a curve to linear form, for example by taking logarithms, or a relatively simple relation can be found to fit, such as the power law mentioned above or the square root of time. With composite curves it may be justifiable, for the end purpose intended, to deal only with one portion, for example by ignoring what happens before an equilibrium condition is reached, or the behaviour after degradation is too great to be of practical interest. [Pg.100]

This transformation thus includes the familiar reciprocal, square-root, and logarithmic transformations. Box and Tidwell have shown how the functions... [Pg.162]

The Holstein-Primakoff transformation also preserves the commutation relations (70). Due to the square-root operators in Eqs. (78a)-(78d), however, the mutual adjointness of S+ and S− as well as the self-adjointness of S3 is only guaranteed in the physical subspace of the transformation [219]. This flaw of the Holstein-Primakoff transformation outside the physical subspace does not present a problem on the quantum-mechanical level of description. This is because the physical subspace is again invariant under the action of any operator which results from the mapping (78) of an arbitrary spin operator A(S1, S2, S3). As has been discussed in Ref. 100, however, the square-root operators may cause serious problems in the semiclassical evaluation of the Holstein-Primakoff transformation. [Pg.304]

Mitrovic and Knezic (1979) also prepared ultrafiltration and reverse osmosis membranes by this technique. Their membranes were etched in 5% oxalic acid. The membranes had pores of the order of 100 nm, but only about 1.5 nm in the residual barrier layer (layer AB in Figure 2.15). The pores in the barrier layer were unstable in water, and the permeability decreased during the experiments. Complete dehydration of alumina or phase transformation to α-alumina was necessary to stabilize the pore structure. The resulting membranes were found to be unsuitable for reverse osmosis but suitable for ultrafiltration after removing the barrier layer. Besides reverse osmosis and ultrafiltration measurements, some gas permeability data have also been reported on this type of membrane (Itaya et al. 1984). The water flux through a 50 μm thick membrane is about 0.2 mL/cm²·h, with a N2 flow of about 6 cm³/cm²·min·bar. The gas transport through the membrane was due to a Knudsen diffusion mechanism, in which the flux is inversely proportional to the square root of molecular mass. [Pg.48]
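The inverse-square-root mass dependence of Knudsen transport can be sketched as an ideal selectivity (the gas pair and molar masses below are illustrative, not from the source):

```python
import math

def knudsen_ratio(m_light, m_heavy):
    """Ideal Knudsen flux ratio of a light gas over a heavy one:
    flux ~ 1/sqrt(M), so the ratio is sqrt(M_heavy / M_light)."""
    return math.sqrt(m_heavy / m_light)

# He (4 g/mol) vs N2 (28 g/mol): He permeates sqrt(7) ~ 2.65 times faster
# through a membrane governed purely by Knudsen diffusion.
r = knudsen_ratio(4.0, 28.0)
assert abs(r - math.sqrt(7.0)) < 1e-12
```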

An important consequence of the MPC theory is the existence of a time operator T. This operator does not commute with the Liouvillian: LT − TL = i. Its average value is interpreted by Prigogine as the age of the system, closely related to entropy. More generally, any positive, monotonically decreasing function M = M(T) is a Lyapounov function. On the other hand, the transformation Λ appears formally as a square root, Λ = M^(1/2). This property leads directly to an H-theorem for intrinsically... [Pg.34]

For the optimal application of GPC to the separation of discrete small molecules, three factors should be considered. Solvent effects are minimal, but may contribute selectivity when solvent-solute interactions occur. The resolving power in SMGPC increases as the square root of the column efficiency (plate count). New, efficient GPC columns exist which make the separation of small molecules affordable and practical, as indicated by applications to polymer, pesticide, pharmaceutical, and food samples. Finally, the slope and range of the calibration curve are indicative of the distribution of pores available within a column. Transformation of the calibration-curve data for individual columns yields pore-size distributions from which useful predictions can be made regarding the characteristics of column sets. [Pg.185]
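The square-root dependence of resolving power on plate count has a simple practical consequence, sketched below (the plate counts are illustrative numbers, not from the source):

```python
import math

def relative_resolution(n_new, n_old):
    """Resolution gain when plate count changes from n_old to n_new,
    assuming resolving power scales as sqrt(N)."""
    return math.sqrt(n_new / n_old)

# Quadrupling the plate count (e.g. 10,000 -> 40,000 plates)
# only doubles the resolution.
gain = relative_resolution(40000, 10000)
assert abs(gain - 2.0) < 1e-12
```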

In this example, the likelihood function is the distribution on the average of a random sample of log-transformed tissue residue concentrations. One could assume that this likelihood function is normal, with standard deviation equal to the standard deviation of the log-transformed concentrations divided by the square root of the sample size. The likelihood function assumes that a given average log-tissue residue prediction is the true site-specific mean. The mathematical form of this likelihood function is... [Pg.61]
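The construction described above can be sketched as follows (the residue concentrations are invented for the example; the source gives only the form of the argument):

```python
import math
from statistics import NormalDist

# Illustrative tissue residue concentrations (units arbitrary).
log_conc = [math.log(c) for c in [2.1, 3.5, 1.8, 4.2, 2.9]]

n = len(log_conc)
mean = sum(log_conc) / n
# Sample standard deviation of the log-transformed concentrations.
sd = math.sqrt(sum((x - mean) ** 2 for x in log_conc) / (n - 1))
# Standard error: sd divided by the square root of the sample size.
se = sd / math.sqrt(n)

# Normal likelihood for the average of the log-transformed concentrations.
likelihood = NormalDist(mu=mean, sigma=se)

# Density of a candidate site-specific mean under this likelihood.
density = likelihood.pdf(1.0)
assert se < sd       # the mean is less variable than a single observation
assert density > 0
```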

A basic assumption underlying t-tests and ANOVA (which are parametric tests) is that cost data are normally distributed. Given that the distribution of these data often violates this assumption, a number of analysts have begun using nonparametric tests, such as the Wilcoxon rank-sum test (a test of median costs) and the Kolmogorov-Smirnov test (a test for differences in cost distributions), which make no assumptions about the underlying distribution of costs. The principal problem with these nonparametric approaches is that statistical conclusions about the mean need not translate into statistical conclusions about the median (e.g., the means could differ yet the medians could be identical), nor do conclusions about the median necessarily translate into conclusions about the mean. Similar difficulties arise when - to avoid the problems of nonnormal distribution - one analyzes cost data that have been transformed to be more normal in their distribution (e.g., the log transformation of the square root of costs). The sample mean remains the estimator of choice for the analysis of cost data in economic evaluation. If one is concerned about nonnormal distribution, one should use statistical procedures that do not depend on the assumption of normal distribution of costs (e.g., nonparametric tests of means). [Pg.49]

The log transformation is by far the most common transformation, but there are several other transformations that are from time to time used in recovering normality. The square root transformation, √x, is sometimes used with count data, while the logit transformation, log(x/(1 − x)), can be used where the patient provides a measure which is a proportion, such as the proportion of days symptom-free in a 14-day period. One slight problem with the logit transformation is that it is not defined when the value of x is either zero or one. To cope with this in practice, we tend to add 1/2 (or some other chosen value) to x and (1 − x) as a fudge factor before taking the log of the ratio. [Pg.164]
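The fudge-factor device described above can be sketched in count form (a sketch assuming the 14-day example; the function name is invented):

```python
import math

def logit_fudged(days_free, total_days=14, fudge=0.5):
    """Logit of a proportion expressed as counts, with 1/2 added to both
    the numerator and denominator counts so the boundary values are defined."""
    return math.log((days_free + fudge) / (total_days - days_free + fudge))

# Defined even at the boundaries 0 and 14, where the plain logit blows up.
lo = logit_fudged(0)     # log(0.5 / 14.5), large negative
hi = logit_fudged(14)    # log(14.5 / 0.5), large positive
assert lo < 0 < hi
assert abs(lo + hi) < 1e-12   # symmetric about the midpoint of the range
```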


See other pages where Transform square-root is mentioned: [Pg.322]    [Pg.322]    [Pg.127]    [Pg.196]    [Pg.302]    [Pg.356]    [Pg.85]    [Pg.467]    [Pg.444]    [Pg.48]    [Pg.5]    [Pg.271]    [Pg.396]    [Pg.183]    [Pg.378]    [Pg.138]    [Pg.413]    [Pg.297]    [Pg.149]    [Pg.270]    [Pg.156]    [Pg.57]    [Pg.147]    [Pg.151]    [Pg.652]    [Pg.135]    [Pg.564]    [Pg.391]   
See also in source #XX -- [ Pg.65 ]







Square root transformation


Transform square

© 2024 chempedia.info