Big Chemical Encyclopedia



Results/statistical packages

D. Analytical Testing Methods: protocol execution, analysis of results/statistical packages, documentation of results, analyst certification and training, transfer of technical ownership... [Pg.507]

Cluster analysis is far from an automatic technique; each stage of the process requires many decisions and therefore close supervision by the analyst. It is imperative that the procedure be as interactive as possible. Therefore, for this study, a menu-driven interactive statistical package was written for PDP-11 and VAX (VMS and UNIX) series computers, which includes adequate computer graphics capabilities. The graphical output includes a variety of histograms and scatter plots based on the raw data or on the results of principal-components analysis or canonical-variates analysis (14). Hierarchical cluster trees are also available. All of the methods mentioned in this study were included as an integral part of the package. [Pg.126]
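The same workflow (PCA scores for scatter plots plus a hierarchical cluster tree) can be sketched with modern open-source tools; this is a minimal illustration on invented data, not the PDP-11/VAX package described above:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Two synthetic groups of samples, four measured variables each
data = np.vstack([rng.normal(0, 1, (10, 4)), rng.normal(5, 1, (10, 4))])

# Principal-components analysis on mean-centred data via SVD
centred = data - data.mean(axis=0)
_, _, vt = np.linalg.svd(centred, full_matrices=False)
scores = centred @ vt.T          # PC scores, ready for scatter plots

# Hierarchical cluster tree (Ward linkage), cut into two clusters
tree = linkage(data, method="ward")
labels = fcluster(tree, t=2, criterion="maxclust")
```

Plotting the first two columns of `scores` against each other reproduces the kind of PCA scatter plot the excerpt mentions.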

Quantitative methodology uses large or relatively large samples of subjects (as a rule, students) and tests or questionnaires that the subjects answer. Results are treated by statistical analysis, by means of a variety of parametric methods (when we have continuous data at the interval or at the ratio scale) or nonparametric methods (when we have categorical data at the nominal or at the ordinal scale) (30). Data are usually treated by standard commercial statistical packages. Tests and questionnaires have to satisfy the criteria for content and construct validity (this is analogous to lack of systematic errors in measurement), and for reliability (this controls for random errors) (31). [Pg.79]

Several statistics for multivariate tests are known from the literature [AHRENS and LAUTER, 1981; FAHRMEIR and HAMERLE, 1984]; the user of statistical packages may find several of them implemented and will rely on their performing correctly. Other tests for the separation of groups are used to determine the most discriminating results in discriminant analysis with feature reduction. [Pg.184]

An effective but simple way of graphically illustrating the variability associated with the analytical data is to produce x-y plots of the duplicate and replicate pairs. Most statistical packages will have an option for plotting simple x-y plots. The G-BASE project uses MS Excel running a macro that will automatically plot duplicate-replicate and duplicate-duplicate results. Figure 5.8 shows three examples from the G-BASE East Midlands atlas area duplicate-replicate data for soils. This method gives an immediate visual appreciation of any errors present in an analytical batch and an indication of within-site variability, as shown by the duplicate pairs, or the within-sample variability, as indicated by the replicate pairs that demonstrate... [Pg.105]
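A duplicate-pair x-y plot of this kind can be sketched as follows; the concentrations and error magnitudes are invented, and the dashed 1:1 line marks perfect agreement between pair members:

```python
import matplotlib
matplotlib.use("Agg")                     # render off-screen, no display needed
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(1)
true = rng.uniform(10, 100, 30)           # hypothetical element concentrations
dup1 = true + rng.normal(0, 2, 30)        # first member of each duplicate pair
dup2 = true + rng.normal(0, 2, 30)        # second member

fig, ax = plt.subplots()
ax.scatter(dup1, dup2, s=15)
lo, hi = min(dup1.min(), dup2.min()), max(dup1.max(), dup2.max())
ax.plot([lo, hi], [lo, hi], "k--", label="1:1 line")  # perfect-agreement line
ax.set_xlabel("Duplicate 1")
ax.set_ylabel("Duplicate 2")
ax.legend()
fig.savefig("dup_rep.png")
```

Points scattering widely about the 1:1 line signal poor within-site or analytical repeatability at a glance.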

The G-BASE project has used several statistical packages to perform this nested ANOVA analysis (e.g., Minitab and SAS). It currently uses an MS Excel procedure with a macro based on the equations described by Sinclair (1983) in which the ANOVA is performed on results converted to log10 (Johnson, 2002). Ramsey et al. (1992) suggest that the combined analytical and sampling variance should not exceed 20% of the total variance with the analytical variance ideally being <4%. [Pg.108]
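The variance comparison can be illustrated with the classical half-squared-difference estimator on invented log10-transformed data; this is a sketch of the general idea, not the G-BASE macro itself:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
site = rng.normal(2.0, 0.30, n)                 # geochemical variation (log10 units)
dup = site + rng.normal(0, 0.10, n)             # field duplicate adds sampling error
rep1 = dup + rng.normal(0, 0.03, n)             # analytical replicates of the duplicate
rep2 = dup + rng.normal(0, 0.03, n)
orig = site + rng.normal(0, 0.10, n) + rng.normal(0, 0.03, n)  # original sample

# Half the mean squared pair difference estimates the error variance
var_anal = np.mean((rep1 - rep2) ** 2) / 2      # analytical variance
var_samp_anal = np.mean((orig - rep1) ** 2) / 2 # sampling + analytical variance
var_total = np.var(np.concatenate([orig, rep1]), ddof=1)

pct_combined = 100 * var_samp_anal / var_total  # Ramsey criterion: should be < 20
pct_anal = 100 * var_anal / var_total           # ideally < 4
```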

Standard statistical packages for computing models by least-squares regression typically perform an analysis of variance (ANOVA) based upon the relationship shown in Equation 5.15 and report these results in a table. An example of a table is shown in Table 5.3 for the water model computed by least squares at 1932 nm. [Pg.125]
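The ANOVA decomposition such packages report can be reproduced directly from a least-squares fit; a minimal sketch on simulated data (the numbers are invented, not the 1932 nm water model):

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0, 10, 25)
y = 2.0 + 0.5 * x + rng.normal(0, 0.4, 25)      # simulated calibration data

# Least-squares fit y = b0 + b1*x
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
yhat = X @ beta

# ANOVA table ingredients: SS_total = SS_regression + SS_residual
ss_tot = np.sum((y - y.mean()) ** 2)
ss_reg = np.sum((yhat - y.mean()) ** 2)
ss_res = np.sum((y - yhat) ** 2)
df_reg, df_res = 1, len(y) - 2
F = (ss_reg / df_reg) / (ss_res / df_res)       # F ratio reported in the table
```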

With most statistics packages, data that are to be subjected to a one-way analysis of variance are entered into two columns in a similar way to that seen with a two-sample t-test (Section 6.8). One column contains a series of codes indicating what catalyst was used and the other column contains the corresponding experimental results. In the first five rows, the results are labelled as being due to the use of platinum (Pt), the next five are due to palladium (Pd) and so on. The general appearance will be as in Table 13.2. [Pg.150]
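The coded-column layout might look like this in code; the third catalyst (Rh) and all yield values are invented for illustration:

```python
import numpy as np
from scipy.stats import f_oneway

# One column of catalyst codes, one column of corresponding yields (%)
catalyst = np.array(["Pt"] * 5 + ["Pd"] * 5 + ["Rh"] * 5)
yields = np.array([61, 63, 59, 60, 62,       # Pt rows
                   72, 74, 71, 73, 70,       # Pd rows
                   65, 66, 64, 67, 63],      # Rh rows
                  dtype=float)

# Split the results column by code and run the one-way ANOVA
groups = [yields[catalyst == k] for k in ("Pt", "Pd", "Rh")]
F, p = f_oneway(*groups)
```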

In most statistical packages, the implementation of the analysis of variance includes an option to select a Tukey's test. The format of the output varies enormously, but (as in Table 13.4) should include a list of confidence intervals for the difference between each possible pair of catalysts. Each line of output shows the difference calculated as the yield with the first metal minus that with the second. The results are shown ordered according to yield [palladium (highest) to platinum (lowest)]. [Pg.152]
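Tukey-style confidence intervals for each pairwise difference can be computed from the group means, the pooled mean square error and the studentized range distribution; a sketch on invented equal-sized groups (first metal minus second, as in the excerpt):

```python
import numpy as np
from scipy.stats import studentized_range

# Equal-sized yield groups for three catalysts (invented data)
data = {"Pt": [61, 63, 59, 60, 62],
        "Pd": [72, 74, 71, 73, 70],
        "Rh": [65, 66, 64, 67, 63]}
k, n = len(data), 5
df = k * (n - 1)
means = {m: np.mean(v) for m, v in data.items()}
mse = np.mean([np.var(v, ddof=1) for v in data.values()])  # pooled within-group MS

q = studentized_range.ppf(0.95, k, df)   # critical studentized range value
hw = q * np.sqrt(mse / n)                # half-width of each Tukey interval

# 95% confidence interval for each pairwise difference (first minus second)
pairs = [("Pd", "Pt"), ("Pd", "Rh"), ("Rh", "Pt")]
cis = {(a, b): (means[a] - means[b] - hw, means[a] - means[b] + hw)
       for a, b in pairs}
```

An interval that excludes zero (as Pd minus Pt does here) flags a significant difference between that pair.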

Statistical packages differ in the way in which they expect data to be supplied. Some will only work from the raw data. For our success/fail data, you would provide the data as a column containing the 50 results, suitably coded (possibly an S or an F for each success or failure). [Pg.199]

The previous chapter mentioned the continuity problem and introduced the Yates correction. Opinions are divided on the application of this correction to the contingency chi-square test. Some statistical packages offer both a corrected and an uncorrected result, others just the uncorrected. A commonly used stratagem is to quote the corrected result where the table contains only two columns and two rows,... [Pg.212]
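Both the corrected and uncorrected results are available from common libraries; a sketch with an invented 2 x 2 success/failure table:

```python
import numpy as np
from scipy.stats import chi2_contingency

# 2 x 2 contingency table of success/failure counts for two treatments (invented)
table = np.array([[18, 7],
                  [11, 14]])

# With and without the Yates continuity correction
chi2_corr, p_corr, _, _ = chi2_contingency(table, correction=True)
chi2_raw, p_raw, _, _ = chi2_contingency(table, correction=False)
```

The correction always shrinks the chi-square statistic (and so raises the p-value), which is why opinion divides on whether to apply it.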

For those commendably simple experiments that result in 2 x 2 contingency tables, some statistical packages include simple routines to calculate the necessary sample size. For anything more complex, you are on your own - quite right, too! The routine will require you to provide values for the following ...
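For the two-proportion case, a textbook normal-approximation formula turns those inputs (significance level, power, and the two proportions to be distinguished) into a per-group sample size; this sketch is a standard approximation, not any particular package's routine:

```python
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.8):
    """Approximate sample size per group for comparing two proportions
    (two-sided test, normal approximation)."""
    za = norm.ppf(1 - alpha / 2)          # critical value for the test
    zb = norm.ppf(power)                  # critical value for the power target
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return (za + zb) ** 2 * var / (p1 - p2) ** 2

# e.g. distinguishing 60% from 80% success rates
n = n_per_group(0.6, 0.8)
```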

To perform this test, most statistical packages require all the data to be entered into a single column with a further column containing codes indicating which group a result belongs to (as described for the one-way ANOVA). Generic output is shown in Table 17.8. [Pg.238]
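The excerpt does not name the test, but the single-column-plus-codes layout it describes is the same one used by, for example, a Kruskal-Wallis test; a sketch on invented data:

```python
import numpy as np
from scipy.stats import kruskal

# Single column of results plus a parallel column of group codes
results = np.array([5.1, 4.8, 5.3, 7.2, 7.9, 7.5, 6.0, 6.2, 5.9])
codes   = np.array(["A", "A", "A", "B", "B", "B", "C", "C", "C"])

# Split the results column by code and run the test
groups = [results[codes == g] for g in np.unique(codes)]
H, p = kruskal(*groups)
```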

The book is aimed at those who have to use statistics, but have no ambition to become statisticians per se. It avoids getting bogged down in calculation methods and focuses instead on crucial issues that surround data generation and analysis (sample size estimation, interpretation of statistical results, the hazards of multiple testing, potential abuses, etc.). In this day of statistical packages, it is the latter that cause the real problems, not the number-crunching. [Pg.305]

A wide range of preprogrammed search facilities is available, with all sorts of selection modes (by report number, time period, unit, status, causal code, and any combination of these). From these analyses, output files may be generated for more advanced statistical packages. Graphics facilities enable analysis results to be displayed as pie-charts, etc., for easy and fast interpretation. [Pg.73]

The result is presented in Table 4.11. Note that the sum of each column is now zero. Almost all traditional statistical packages perform this operation prior to PCA, whether desired or not. The PC plots are presented in Figure 4.17. [Pg.213]
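Column (mean) centring is a one-line operation; after it, each column sums to zero, as the excerpt notes (the data here are invented):

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.uniform(0, 10, (12, 3))   # 12 samples, 3 measured variables

Xc = X - X.mean(axis=0)           # subtract each column's mean (pre-PCA centring)
col_sums = Xc.sum(axis=0)         # every entry is now (numerically) zero
```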

In effect this calculation normalizes all processes to a number of sigmas rather than absolute values. The relationship between DPMO and Z score is illustrated in Fig. 2. Z refers to the white area under the curve and the shaded area shows the area where there is a probability of failure. DPMO is the integration of the shaded areas, i.e., the proportion of the results beyond the calculated Z value. Obviously, as Z increases the defective part of the distribution shrinks. The exact probability associated with a specific Z score can be easily obtained from Z score tables or calculated with common software packages such as Excel or statistical packages such as Minitab. In the case of Excel, such calculations are not part of a standard package, but macros can easily be written to perform the needed calculations. [Pg.2720]
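The DPMO-Z relationship is just the upper-tail area of the standard normal distribution, scaled to a million opportunities; a sketch of the macro-style calculation:

```python
from scipy.stats import norm

def dpmo_from_z(z):
    """Defects per million opportunities: upper-tail area beyond z, scaled."""
    return norm.sf(z) * 1e6

def z_from_dpmo(dpmo):
    """Inverse: Z score whose upper-tail area matches the given DPMO."""
    return norm.isf(dpmo / 1e6)
```

For example, Z = 3 corresponds to roughly 1350 DPMO (one-sided, with no sigma shift applied).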

Pooling of Estimates Following the approach used in the MI paradigm, after M supplementations have been created for a data set, they are then analyzed using a standard PK/PD or statistical package. There are now M completed data sets containing the observed values and the supplemented values instead of one. The PK/PD or statistical analysis must be done M times, once on each complete data set. Across M data sets the results will vary, reflecting the uncertainty due to supplemental observations. The M complete data analyses are combined to create one repeated-supplementation inference. [Pg.834]
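The combination step follows Rubin's rules for multiple imputation; a sketch with invented per-data-set estimates and variances for M = 5 completed data sets:

```python
import numpy as np

# Point estimates and variances from M analyses of M completed data sets
est = np.array([10.2, 9.8, 10.5, 10.0, 9.9])     # invented per-data-set estimates
var = np.array([0.40, 0.38, 0.45, 0.41, 0.39])   # invented per-data-set variances
M = len(est)

pooled = est.mean()                    # combined point estimate
within = var.mean()                    # average within-imputation variance
between = est.var(ddof=1)              # between-imputation variance
total = within + (1 + 1 / M) * between # Rubin's total variance
```

The `between` term is what carries the extra uncertainty due to the supplemented observations; it inflates the pooled variance above the naive within-data-set average.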

It is not possible to perform three-way or higher ANOVA in Excel, nor is it practical to perform the calculations manually. Many statistical packages do offer such analysis and require the data to be in a somewhat different form. The measurement results are in one column (sometimes known as the dependent variable) and each factor is represented by another column in which the level is given. Up to now we have considered situations in which the different levels of a factor are discrete entities (analysts, methods, etc.). However, we have also referred to factors that are continuous variables, such as time and temperature. The model that ANOVA builds in each case is slightly different, and most software can cope with this. The output from different software programs varies but mostly contains the important information of the mean squares, F values and associated probabilities. [Pg.125]
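The "long" layout such packages expect can be illustrated as follows; the factors, levels and results are invented, and only the cell means (the quantities an ANOVA routine works from), not the full ANOVA, are computed:

```python
import numpy as np

# Long layout: one column of measurements, one coded column per factor
analyst = np.array(["A", "A", "B", "B", "A", "A", "B", "B"])
method  = np.array(["X", "Y", "X", "Y", "X", "Y", "X", "Y"])
result  = np.array([5.0, 5.4, 5.1, 5.6, 4.9, 5.5, 5.2, 5.7])

# Mean of each factor-level combination (one cell per analyst x method pair)
cells = {(a, m): result[(analyst == a) & (method == m)].mean()
         for a in np.unique(analyst) for m in np.unique(method)}
```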

Five parameters in the data-set were found to be unchanged for all 35 compounds and removed from the matrix. These parameters are H-DO for positions II, IV and V and H-AC for positions IV and V. After the redundant elements had been removed, the resulting [35x47] matrix was correlated to the vector of the biological activity. To perform the linear stepwise regression analysis, the STEPWISE procedure of the SAS statistical package ( ) and BASIC programs were used. [Pg.173]
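A forward stepwise selection of the kind performed by the SAS STEPWISE procedure can be sketched in a few lines; the data here are simulated so that only two of six descriptor columns carry signal (this illustrates the algorithm, not the paper's actual matrix):

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 35, 6
X = rng.normal(size=(n, p))                                 # descriptor matrix
y = 2.0 * X[:, 1] - 1.5 * X[:, 4] + rng.normal(0, 0.3, n)   # only cols 1, 4 matter

def rss(cols):
    """Residual sum of squares of a least-squares fit on the chosen columns."""
    A = np.column_stack([np.ones(n)] + [X[:, c] for c in cols])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.sum((y - A @ beta) ** 2)

# Forward steps: at each step add the column that reduces RSS the most
selected, remaining = [], list(range(p))
for _ in range(2):
    best = min(remaining, key=lambda c: rss(selected + [c]))
    selected.append(best)
    remaining.remove(best)
```

A production routine would also apply an F-to-enter threshold as a stopping rule rather than a fixed step count.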

Data for MM compositions of different SWMs and their leachates were examined statistically in order to determine any significant compositional variations among samples. Most statistical analyses were performed using the SAS Statistical Package V 6.12 [335]. In this report, the results of Q-mode factor analysis and linear programming techniques will be presented. The objectives of the statistical analyses were to define the MM characteristics for the different SWMs and their leachates, and to determine their original sources. [Pg.372]








© 2024 chempedia.info