
Statistical tools

It is clear that for an unsymmetrical data matrix that contains more variables (the field descriptors at each point of the grid, for each probe used in the calculation) than observables (the biological activity values), classical correlation analysis such as multiple linear regression would fail. All 3D QSAR methods benefit from the development of PLS analysis, a statistical technique that aims to find the multidimensional direction in the X space that explains the maximum multidimensional variance direction in the Y space. PLS is related to principal component analysis (PCA). However, instead of finding the hyperplanes of maximum variance, it finds a linear model describing some predicted variables in terms of other observable variables, and it can therefore be used directly for prediction. Complexity reduction and data... [Pg.592]
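
As a concrete illustration of this technique, here is a minimal sketch of fitting a PLS model to a "wide" data matrix (many more field descriptors than activity values) using scikit-learn's PLSRegression. The synthetic data, the matrix dimensions, and the choice of three latent variables are assumptions for illustration only, not details from the excerpted source.

```python
# Minimal PLS sketch for a wide (variables >> observables) matrix.
# All data and dimensions below are illustrative assumptions.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 500))          # 30 compounds, 500 grid descriptors
# Activity depends on only a few descriptors, plus noise.
Y = X[:, :3] @ np.array([1.0, -0.5, 2.0]) + rng.normal(scale=0.1, size=30)

pls = PLSRegression(n_components=3)     # a few latent variables, not 500 coefficients
pls.fit(X, Y)
print("R^2 on training data:", pls.score(X, Y))
```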

Usually, for PLS discrimination, external predictions are made using the following equation (29.2)... [Pg.592]


The maximum-likelihood method, like any statistical tool, is useful for correlating and critically examining experimental information. However, it can never be a substitute for that information. While a statistical tool is useful for minimizing the required experimental effort, reliable calculated phase equilibria can only be obtained if at least some pertinent and reliable experimental data are at hand. [Pg.108]

Quality assessment includes the statistical tools used to determine whether an analysis is in a state of statistical control and, if possible, to suggest why an analysis has drifted out of statistical control. Among the tools included in quality assessment are the analysis of duplicate samples, the analysis of blanks, the analysis of standards, and the analysis of spike recoveries. [Pg.722]
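
As one example of the tools listed above, the following minimal sketch computes a spike recovery and checks it against an acceptance window. The concentrations and the 90-110% window are illustrative assumptions; acceptable limits depend on the method and analyte.

```python
# Minimal spike-recovery check for quality assessment.
# Concentrations and the acceptance window are illustrative assumptions.
def spike_recovery(unspiked, spiked, added):
    """Percent recovery of a known spike added to a sample."""
    return 100.0 * (spiked - unspiked) / added

rec = spike_recovery(unspiked=2.1, spiked=6.9, added=5.0)  # ppm
print(f"recovery = {rec:.1f}%")
if not 90.0 <= rec <= 110.0:  # a common, but assumed, acceptance window
    print("analysis may be drifting out of statistical control")
```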

Statistical analysis can range from relatively simple regression analysis to complex input/output and mathematical models. The advent of the computer and its accessibility in most companies has broadened the tools a researcher has to manipulate data. However, the results are only as good as the inputs. Most veteran market researchers accept the statistical tools available to them but use the results to inform their judgment rather than uncritically accepting the machine output. [Pg.535]

A capability study is a statistical tool that measures the variation within a manufacturing process. Samples of the product are taken and measured, and the variation is compared with a tolerance. This comparison is used to establish how capable the process is of producing the product. Process capability is attributable to a combination of the variability in all of the inputs. Machine capability is calculated when the rest of the inputs are fixed, which means that process capability is not the same as machine capability. A capability study can be carried out on any of the inputs by fixing all the others. All processes can be described by Figure 1, where the distribution curve for a process shows the variability due to its particular elements. [Pg.288]
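
A minimal sketch of the comparison described above, computing the common Cp and Cpk capability indices from sample measurements and specification limits; the data, the limits, and the 1.33 benchmark are illustrative assumptions.

```python
# Minimal capability-study sketch: compare process variation with a tolerance.
# Sample data and specification limits are illustrative assumptions.
import numpy as np

samples = np.array([10.02, 9.98, 10.05, 9.97, 10.01, 10.03, 9.99, 10.00])
lsl, usl = 9.90, 10.10                    # lower/upper specification limits

mu, sigma = samples.mean(), samples.std(ddof=1)
cp  = (usl - lsl) / (6 * sigma)                 # potential capability
cpk = min(usl - mu, mu - lsl) / (3 * sigma)     # also accounts for centering
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")        # >= 1.33 is a common benchmark
```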

Meeker, W. Q. and Hamada, M. (1995). Statistical Tools for the Rapid Development and Evaluation of High Reliability Products. IEEE Transactions on Reliability, 44(2), 187-198. [Pg.389]

The supplementary requirements require statistical tools to be identified for each process during the advanced quality planning phase and included in the control plan. [Pg.549]

The following two sections explore the use of statistical tools, which may help define failure experience trends in given machinery. [Pg.1045]

In general, when a pharmacological constant or parameter is measured, the measurement should be repeated to give a measure of confidence in the value obtained (i.e., the likelihood that if the measurement were repeated it would yield the same value). There are various statistical tools available to determine this. An important tool and concept in this regard is the Gaussian distribution. [Pg.225]

When an experimental value is obtained numerous times, the individual values will cluster symmetrically around the mean value with a scatter that depends on the number of replications made. If a very large number of replications are made (i.e., >2,000), the distribution of the values will take on the form of a Gaussian curve. It is useful to examine some of the features of this curve, since it forms the basis of a large portion of the statistical tools used in this chapter. The Gaussian curve for a particular population of N values (denoted x_i) will be centered along the abscissal axis on the mean value, where the mean (μ) is given by μ = (Σ x_i)/N. [Pg.225]
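
The following minimal sketch demonstrates this numerically: with more than 2,000 simulated replicates, the values cluster in a Gaussian fashion around the mean computed from the formula above. The "true" value and the noise level are illustrative assumptions.

```python
# Minimal sketch: many replicate measurements cluster around the mean
# in a Gaussian fashion. True value and noise level are assumptions.
import numpy as np

rng = np.random.default_rng(1)
values = rng.normal(loc=50.0, scale=2.0, size=2000)   # >2,000 replicates

mean = values.sum() / values.size                     # mu = (sum of x_i) / N
print(f"mean = {mean:.2f}")
# ~68% of a Gaussian population lies within one standard deviation of the mean
within = np.mean(np.abs(values - mean) <= values.std())
print(f"fraction within 1 sd: {within:.2f}")
```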

Populations are very large collections of values. In practice, experimental pharmacology deals with samples (much smaller collections) from a population. The statistical tools used to deal with samples differ somewhat from those used to deal with populations. When an experimental sample is obtained, the investigator often wants to know about two features of the sample: central tendency and variability. The central tendency refers to the most representative estimate of the value, while the variability defines the confidence that the estimate is a true reflection of that value. Central tendency estimates can be the median (the value that divides the sample into two equal halves) or the... [Pg.226]
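
A minimal sketch of these two features for a small sample, reporting the mean and median as central-tendency estimates and a t-based 95% confidence interval as a variability estimate; the data are illustrative assumptions.

```python
# Minimal sketch: central tendency and variability of a small sample.
# The sample values are illustrative assumptions.
import numpy as np
from scipy import stats

sample = np.array([7.2, 6.9, 7.5, 7.1, 7.3, 6.8])

mean   = sample.mean()                     # central tendency
median = np.median(sample)                 # robust central tendency
sem    = stats.sem(sample)                 # standard error of the mean
ci     = stats.t.interval(0.95, df=sample.size - 1, loc=mean, scale=sem)
print(f"mean={mean:.2f}, median={median:.2f}, "
      f"95% CI={ci[0]:.2f}-{ci[1]:.2f}")
```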

Microarray experiments generate large and complex data sets, consisting of, for example, lists of spot intensities and intensity ratios. Basically, the data obtained from microarray experiments provide information on the relative expression of genes corresponding to the mRNA sample of interest. Computational and statistical tools are required to analyze the large amount of data and to address biological questions. To this end, a variety of analytical platforms are available, either free on the Web or via purchase of a commercially available product. [Pg.527]

Estimates of the lifetime COI (cost of illness) are needed for temporal and international comparisons and for assessment of the efficiency of prevention strategies. During the first years of HIV/AIDS treatment, direct lifetime costs were estimated only by simple projections based on retrospective data. Later, specific statistical tools were adopted to estimate life expectancy and lifetime costs. The results of lifetime estimates are very sensitive to the underlying assumptions. Table 4 summarizes some studies in this field. [Pg.361]

Chapters 1 and 2 introduced the basic statistical tools. The necessary computer can do more than just run statistics packages; in this chapter, a number of techniques are explained that tap the benefits of fast data handling, namely filtering, optimization, and simulation. [Pg.137]
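
As one example of the simulation techniques alluded to here, the following minimal sketch propagates measurement noise through a derived quantity by Monte Carlo simulation. The Beer's-law-style formula and the noise levels are illustrative assumptions, not the programs from the book.

```python
# Minimal Monte Carlo sketch: propagate measurement noise through a
# derived quantity. Formula and noise levels are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
absorbance = rng.normal(0.450, 0.005, size=10_000)    # simulated readings
slope      = rng.normal(0.0125, 0.0002, size=10_000)  # simulated calibration

conc = absorbance / slope                              # derived quantity
print(f"concentration = {conc.mean():.2f} +/- {conc.std():.2f}")
```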

Introduction The intention behind the book and these programs is to provide the user with a number of statistical tools that are applicable to problems... [Pg.339]

The normal range should be established with samples obtained from an adequate number of healthy persons of specified age and sex. The effect of physiological variables such as activity, eating, menstruation, and pregnancy should be known. The confidence limits should be determined with the appropriate statistical tools. Normal ranges determined with hospital patients should be rejected (20, 21). [Pg.186]
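
A minimal sketch of determining such a normal range (reference interval) from healthy-subject data, using the common nonparametric 2.5th-97.5th percentile convention; the simulated data and the choice of percentiles are illustrative assumptions.

```python
# Minimal reference-interval sketch from healthy-subject data.
# Data and the 2.5/97.5 percentile convention are illustrative assumptions.
import numpy as np

healthy = np.random.default_rng(3).normal(5.0, 0.4, size=120)  # e.g. mmol/L
low, high = np.percentile(healthy, [2.5, 97.5])
print(f"normal range: {low:.2f} - {high:.2f}")
```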

A brief study of the available data related to limits of inflammability in Part Two shows that these parameters are subject to high experimental uncertainty. For a large number of substances, the experimental values are widely dispersed. When the data are subjected to quality estimation using statistical tools, in many cases it turns out that they cannot be used with confidence. Examples of the difficulties raised by statistical analysis of the LEL data could be multiplied. [Pg.50]
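
As an illustration of this kind of quality estimation, the following minimal sketch flags substances whose reported LEL values are too widely dispersed, using the relative standard deviation. The data, substance names, and the 10% cutoff are all hypothetical assumptions.

```python
# Minimal dispersion check on reported LEL values (vol %).
# All values, names, and the 10% RSD cutoff are hypothetical assumptions.
import numpy as np

lel_reports = {
    "methane": [5.0, 5.0, 5.3, 4.9],
    "substance X": [1.2, 1.8, 0.9, 2.1],
}
for name, vals in lel_reports.items():
    v = np.asarray(vals)
    rsd = 100 * v.std(ddof=1) / v.mean()   # relative standard deviation
    verdict = "usable" if rsd < 10 else "too dispersed"
    print(f"{name}: RSD = {rsd:.1f}% -> {verdict}")
```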

This criticism is made in order to make the reader understand the highly unpredictable aspect of experimental approaches to flashpoints. If the approach is experimental, all statistical tools need to be employed to provide conclusions that include a calculated error level. [Pg.69]

By way of introduction to this subject, the effect of some structural characteristics on this parameter will be set out (without going into any detail of how to handle the statistical tool), limiting the study to hydrocarbons in order to simplify it. The above table shows that the database always contains the lowest AIT value obtained by using a 12 l container. To simplify the study even further, the lowest AIT values for each substance are used. [Pg.73]

Von Haller, P.D., Yi, E., Donohoe, S., Vaughn, K., Keller, A., Nesvizhskii, A.I., Eng, J., Li, X.J., Goodlett, D.R., Aebersold, R., Watts, J.D. (2003). The Application of New Software Tools to Quantitative Protein Profiling Via Isotope-coded Affinity Tag (ICAT) and Tandem Mass Spectrometry: II. Evaluation of Tandem Mass Spectrometry Methodologies for Large-Scale Protein Analysis, and the Application of Statistical Tools for Data Analysis and Interpretation. Mol. Cell. Proteomics 2, 428-442. [Pg.288]

The scope of this chapter-formatted mini-series is to provide statistical tools for comparing two columns of data, X and Y. With respect to analytical applications, such data may be represented, for simple linear regression, as the concentration of a sample (X) versus an instrument response when measuring the sample (Y). X and Y may also denote a comparison of reference analytical results (X) versus predicted results (Y) from a calibrated instrument. At other times one may use X and Y to represent the instrument response (X) against a reference value (Y). Whatever data pairs one is comparing as X and Y, there are several statistical tools that are useful to assess the meaning of a change in... [Pg.379]
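
A minimal sketch of the simplest such comparison, a linear regression of instrument response (Y) on concentration (X) using scipy.stats.linregress; the data pairs are illustrative assumptions.

```python
# Minimal sketch: simple linear regression of two columns of data, X and Y.
# The data pairs are illustrative assumptions.
import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # concentration of sample (X)
y = np.array([2.1, 3.9, 6.2, 8.0, 9.8])   # instrument response (Y)

fit = stats.linregress(x, y)
print(f"slope={fit.slope:.3f}, intercept={fit.intercept:.3f}, "
      f"r^2={fit.rvalue**2:.4f}")
```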

This series of articles provides several sets of tools useful for evaluating all of the aforementioned statistics at user-selected confidence levels. The general statistical tools to be described are... [Pg.383]

Overall, Brown's paper is a wonderful paper, despite some criticisms. The fact that it directly attacks the issue of nonlinearity in NIR is one reason to be so pleased to see it, but the other main reason is that it uses well-known and well-proven statistical methodology to do so. It is delightful to see classical statistical tools used as the primary means of analyzing this data set. [Pg.465]

Nowadays, generating huge amounts of data is relatively simple. That means data reduction and interpretation using multivariate statistical tools (chemometrics), such as pattern recognition, factor analysis, and principal components analysis, can be critically important for extracting useful information from the data. These subjects have been introduced in Chapters 5 and 6. [Pg.820]
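
A minimal sketch of the data reduction step using principal components analysis from scikit-learn, compressing synthetic "spectra" to three scores per sample; the random data and dimensions are illustrative assumptions.

```python
# Minimal PCA sketch: reduce a large data matrix to a few scores per sample.
# The random spectra and dimensions are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
spectra = rng.normal(size=(50, 700))    # 50 samples x 700 wavelengths

pca = PCA(n_components=3)
scores = pca.fit_transform(spectra)     # 50 x 3 matrix of component scores
print("explained variance ratios:", pca.explained_variance_ratio_)
```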

Contrasted with these continuous data, however, we have discontinuous (or discrete) data, which can only assume certain fixed numerical values. In these cases our choice of statistical tools or tests is, as we will find later, more limited. [Pg.870]

