Big Chemical Encyclopedia


Summarizing data sets

Experiments and trials frequently produce lists of figures that are too long to be easily comprehended, and we need to produce one or two summary figures that will give the reader an accurate picture of the overall situation. [Pg.9]

With interval scale (continuous measurement) data, there are two aspects of the figures that we should be trying to describe: where the values are centred and how widely they are spread. [Pg.9]

Essential Statistics for the Pharmaceutical Sciences Philip Rowe [Pg.9]

To indicate the first of these, we quote an indicator of central tendency, and for the second, an indicator of dispersion. [Pg.10]

In this chapter we look at more than one possible approach to both of the above. It would be wrong to claim that one way is universally better than another. However, we can make rational choices for specific situations if we take account of the nature of the data and the purpose of the report. [Pg.10]
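The choice between indicators can be illustrated with a short sketch (hypothetical numbers, using Python's standard statistics module; not from the text): a mean/standard-deviation pair suits roughly symmetric data, while a median/quartile pair is more robust when an outlier is present.

```python
import statistics

data = [2.1, 2.4, 2.2, 2.3, 9.8]  # hypothetical data set with one outlier

# Mean/standard deviation pair: appropriate for roughly symmetric data
mean = statistics.mean(data)
sd = statistics.stdev(data)

# Median/quartile pair: robust choice when outliers or skew are present
median = statistics.median(data)
q1, q2, q3 = statistics.quantiles(data, n=4)  # default 'exclusive' method

print(f"mean = {mean:.2f}, sd = {sd:.2f}")        # dragged upward by the outlier
print(f"median = {median:.2f}, IQR = {q3 - q1:.2f}")
```

Here the outlier pulls the mean well above four of the five values, while the median stays representative; this is the kind of rational, situation-specific choice the excerpt describes.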


For sources having a large component of emissions from low-level sources, the simple Gifford-Hanna model given previously as Eq. (20-19), χ = Cq/u, works well, especially for long-term concentrations, such as annual ones. Using the derived coefficients of 225 for particulate matter and 50 for SO2, an analysis of residuals (measured minus estimated) of the dependent data sets (those used to determine the values of the coefficient C) of 29 cities for particulate matter and 20 cities for SO2, and an independent data set of 15 cities for particulate matter, is summarized in Table 20-1. For the dependent data sets, overestimates result. The standard deviations of the residuals and the mean absolute errors are about equal for particulates and sulfur dioxide. For the independent data set the mean residual shows... [Pg.335]

Data sets on toxicity to aquatic organisms vary considerably from compound to compound, with dibutyltin being the best studied. Results of toxicity tests for all compounds are summarized in Figure 2. Values for all but one test on the octyltins have been set at the solubility of the compounds, since no toxicity was observed below the solubilities; the derivation of PNECs for the octyltins is, therefore, more precautionary than for the other compounds. [Pg.41]

No properly constituted data sets for reaction rates were found for which eq. (1) is followed with acceptable precision (34). This result may be attributed to the incursion of proximity effect contributions. However, certain benzoic acid-type ionization equilibria data sets do appear to follow eq. (1) satisfactorily, although none of the sets qualifies as a minimal basis set. Table XXVII summarizes the results for the fittings with parameters, which for every discriminating... [Pg.59]

In this table you can see that clinical success is summarized for each treatment by visit. The key here is by visit. If the data set to be summarized is simply sorted by visit, then PROC FREQ, PROC TABULATE, or some other procedure can be executed with a BY VISIT statement. If the data set were denormalized, then the task of producing the required summary would be more difficult. [Pg.96]
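The excerpt refers to SAS BY-group processing (e.g., PROC FREQ with a BY VISIT statement). As a language-neutral illustration of the same by-visit summary, here is a minimal Python sketch with hypothetical records and variable names (not from the text):

```python
from collections import Counter

# Hypothetical long-format records: one row per subject per visit,
# analogous to a data set sorted by VISIT before PROC FREQ runs
records = [
    {"visit": 1, "trt": "A", "success": "Y"},
    {"visit": 1, "trt": "A", "success": "N"},
    {"visit": 1, "trt": "B", "success": "Y"},
    {"visit": 2, "trt": "A", "success": "Y"},
    {"visit": 2, "trt": "B", "success": "N"},
]

# Count clinical success for each treatment, by visit -- the analogue of
# tabulating trt*success within each BY group
counts = Counter((r["visit"], r["trt"], r["success"]) for r in records)
for (visit, trt, success), n in sorted(counts.items()):
    print(f"visit {visit}  trt {trt}  success={success}: {n}")
```

With the data in this normalized (one row per visit) shape, the grouping is a one-liner; a denormalized layout, with one column per visit, would require reshaping first, which is the difficulty the excerpt points to.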

The final meddra data set in this program contains the lower-level term code (llt code) that can then be merged with the adverse events or medical conditions database. By merging the MedDRA dictionary data with the disease data, you can match the verbatim event text captured on the case report form with the preferred term and associated body system. Then you can summarize these data by body system and preferred term; you will see an example of this in Chapter 5. [Pg.111]
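A minimal sketch of this merge-and-summarize step follows; the codes, terms, and record layout below are illustrative placeholders, not actual MedDRA dictionary content:

```python
from collections import Counter

# Illustrative lookup standing in for the MedDRA dictionary data set:
# lower-level term code -> (preferred term, body system)
meddra = {
    1001: ("Headache", "Nervous system disorders"),
    1002: ("Nausea", "Gastrointestinal disorders"),
}

# Adverse-event records carrying the llt_code assigned during coding
adverse_events = [
    {"subject": "001", "llt_code": 1001},
    {"subject": "002", "llt_code": 1002},
    {"subject": "003", "llt_code": 1001},
]

# Merge the dictionary data onto the event data, then tally by body system
by_body_system = Counter()
for ae in adverse_events:
    pt, soc = meddra[ae["llt_code"]]
    ae.update(preferred_term=pt, body_system=soc)
    by_body_system[soc] += 1

print(by_body_system.most_common())
```

The merge key is the lower-level term code, exactly as the excerpt describes; in SAS this would be a MERGE or SQL join on the same variable.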

Get data: This step involves pulling the data to be used into SAS. It often requires merging treatment or study population data with analysis data sets or some other data to be summarized/listed. [Pg.126]

Manipulate data: On occasion the data being pulled into SAS for summarization and presentation are not ready for that purpose. In such cases, you may need to manipulate or create additional variables within the SAS program. Keep in mind that it is almost always better to create derived variables prior to this step, in analysis data set programming. [Pg.126]

The advent of CCD detectors for X-ray diffraction experiments has raised the possibility of obtaining charge density data sets in a much reduced time compared to that required with traditional point detectors. This opens the door to many more studies and, in particular, comparative studies. In addition, the length of data collection no longer scales with the size of the problem; thus the size of tractable studies has certainly increased, but the limit remains unknown. Before embracing this new technology, it is necessary to evaluate the quality of the data obtained and the possible new sources of error. The details of the work summarized below have either been published or submitted for publication elsewhere [1-3]. [Pg.224]

Table 2 summarizes the re-entry exposure data from studies with chlorpyrifos.1 There are fewer replicates for these workers, which would seem to be justified by the lower variability in the data sets. There are practically no differences between the arithmetic and geometric means for these data sets. [Pg.39]
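The observation that arithmetic and geometric means nearly coincide for low-variability data can be checked with a small sketch (hypothetical values, not the chlorpyrifos data):

```python
import math

# Low-variability values: arithmetic and geometric means nearly coincide
low_var = [1.0, 1.1, 0.9, 1.05]
# A high-variability, right-skewed set, where the two means diverge sharply
high_var = [0.1, 1.0, 10.0]

def arith(xs):
    """Arithmetic mean: sum divided by count."""
    return sum(xs) / len(xs)

def geo(xs):
    """Geometric mean: exponential of the mean of the logs."""
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

print(arith(low_var), geo(low_var))    # nearly equal
print(arith(high_var), geo(high_var))  # about 3.7 vs about 1.0
```

The geometric mean is the natural summary for log-normally distributed exposure data; only when variability is low do the two summaries become interchangeable, as the excerpt notes.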

Therefore any attempt to model the spatial occurrence and fate of chemicals in the environment will require an appropriate choice of all the factors discussed above, which have a definite influence on the behavior of the chemicals considered. Figure 2 summarizes some of the most relevant. It is worth mentioning that the availability of spatial data sets has been greatly enhanced by the current progress achieved on remote sensing technologies [57, 58]. [Pg.42]

The most recent report of Myer and Kock47 afforded a more robust assessment of the value of 1,1-ADEQUATE data as a part of the input data set for the COCON CASE program. The output of the structure generation for the brominated phakellin (18a, b) and brominated isophakellin (19a, b) alkaloids is summarized in Table 3. Structures generated without specified hybridization are noted on the first line for each compound, and can be compared with the number of structures generated when pre-defined hybridization was employed, on the second line for each... [Pg.268]

Following the same experimental protocol and analysis strategy, multiple sets of conductance values for 1,5-pentanedithiol (PDT), 1,6-hexanedithiol (HDT), 1,8-octanedithiol (ODT), and 1,10-decanedithiol (DDT) were measured. The results are summarized in Table 1. Inspection of the data reveals that the high-conductance values (H) are approximately five times larger than the medium-conductance values (M), while the low values (L) do not scale with a constant ratio with respect to the M or H data sets. [Pg.148]

This chapter deals with handling the data generated by analytical methods. The first section describes the key statistical parameters used to summarize and describe data sets. These parameters are important, as they are essential for many of the quality assurance activities described in this book. It is impossible to carry out effective method validation, evaluate measurement uncertainty, construct and interpret control charts or evaluate the data from proficiency testing schemes without some knowledge of basic statistics. This chapter also describes the use of control charts in monitoring the performance of measurements over a period of time. Finally, the concept of measurement uncertainty is introduced. The importance of evaluating uncertainty is explained and a systematic approach to evaluating uncertainty is described. [Pg.139]

PCA results are summarized in Fig. 5, which shows the loading plots characterizing the main contamination patterns in every analyzed data set and their explained... [Pg.347]

Descriptive statistics are used to summarize the general nature of a data set. As such, the parameters describing any single group of data have two components. One of these describes the location of the data, while the other gives a measure of the dispersion of the data in and about this location. Often overlooked is the fact that the choice of which parameters are used to give these pieces of information implies a particular type of distribution for the data. [Pg.871]

A single CV as described gives n predictions. For many data sets in chemistry n is too small for a visualization of the error distribution. Furthermore, the obtained performance measure may heavily depend on the split of the objects into segments. It is therefore recommended to repeat the CV with different random splits into segments (repeated CV), and to summarize the results. Knowing the variability of MSEcv at different levels of model complexity also allows a better estimation of the optimum model complexity; see the one-standard-error rule in Section 4.2.2 (Hastie et al. 2001). [Pg.130]
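A minimal sketch of repeated CV follows, using made-up data and the simplest possible "model" (predicting the training mean); it shows how summarizing over many random splits exposes the split-to-split variability of MSEcv that a single CV pass would hide:

```python
import random

# Toy response values: n is small, so one CV split gives an unstable MSEcv
y = [5.1, 4.8, 5.3, 5.0, 4.9, 5.2, 4.7, 5.4, 5.0, 5.1]

def kfold_mse(y, k, rng):
    """One k-fold CV pass: predict each held-out fold from the training mean."""
    idx = list(range(len(y)))
    rng.shuffle(idx)                       # random split into k segments
    folds = [idx[i::k] for i in range(k)]
    sq_err = []
    for fold in folds:
        train = [y[i] for i in idx if i not in fold]
        pred = sum(train) / len(train)     # "model" fitted on the training part
        sq_err += [(y[i] - pred) ** 2 for i in fold]
    return sum(sq_err) / len(sq_err)

# Repeated CV: different random splits into segments, then summarize
rng = random.Random(0)
mses = [kfold_mse(y, k=5, rng=rng) for _ in range(20)]
mean_mse = sum(mses) / len(mses)
spread = max(mses) - min(mses)
print(f"MSEcv over 20 repeats: mean {mean_mse:.4f}, range {spread:.4f}")
```

The spread across repeats is the variability the excerpt refers to; with a real model it would be computed at each complexity level to apply the one-standard-error rule.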

Various models for BBB permeability prediction are summarized in terms of their data sets, methods employed, statistical parameters, descriptors, and important outcomes in Tables 22.1 and 22.2, respectively. [Pg.544]

In the text which follows we shall examine in numerical detail the decision levels and detection limits for the Fenvalerate calibration data set (set B) provided by D. Kurtz (17). In order to calculate said detection limits it was necessary to assign and fit models both to the variance as a function of concentration and to the response (i.e., calibration curve) as a function of concentration. No simple (2- or 3-parameter) model was found that was consistent with the empirical calibration curve and the replication error, so several alternative simple functions were used to illustrate the approach for calibration curve detection limits. A more appropriate treatment would require a new design including real blanks and Fenvalerate standards spanning the region from zero to a few times the detection limit. Detailed calculations are given in the Appendix and summarized in Table V. [Pg.58]

Table I summarizes a description of the data sets. Datasets A and B are both quality sets of the same amount range, 0.05 to 20 ng, but Dataset A has only 2 replications/level while B has 5.
