Big Chemical Encyclopedia


Analysis of data

The analysis of the results depends strongly on the achievement of a steady state. If, moreover, the Burnett effects are unimportant, then from (51) we have the linear profile of labeled-particle number density [Pg.25]

The principal tasks are then to assure the validity of the assumptions and to assign error bounds to the final value of [Pg.26]

On the microscopic level, both spatial and temporal fluctuations in number density are to be expected. The decay of these should be rapid compared to the time and distance scales appropriate to the macroscopic self-diffusion phenomenon. Thus, to study the presence of a steady state, the M observations of Ni and Ji are pooled ΔM at a time so that the resulting observations appear to be uncorrelated. Typically, pooling over intervals of about 200 t0 appears satisfactory for this purpose at a reduced volume of 3 for hard disks (but pooling over even longer intervals is frequently used to assure the independence of the observations). While there are rather long-time correlation effects, there seems little doubt that on any macroscopic time scale the flow is steady. [Pg.26]

The importance of Burnett effects can be ascertained in several ways. With respect to nonlinear Burnett effects, we simply analyze the results for each probability p separately, with the expectation that the resulting values of the diffusion coefficient would display a trend with p if such effects were present. Effects due to the linear super-Burnett terms in (51) should affect the linearity of the profile (71), which is averaged over each layer. [Pg.26]

The pooled observations of Ni are fitted to the linear expression (71) via least squares. No significant departures from linearity are observed, which is consistent with theoretical reasons for believing that super-Burnett effects act on a much smaller distance scale than the typical layer thicknesses used in these calculations. Similarly, the layers adjacent to the boundaries show no atypical behavior, but the layer thicknesses are many mean free paths, while kinetic boundary layers are typically of the order of a mean free path. Thus the absence of effects beyond the linear Fick's-law term is essentially a result of the spatial coarse graining used. [Pg.26]
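As a rough illustration of the fit just described, the sketch below (hypothetical layer data and variable names, not the original authors' code) pools the layer densities and fits them to a straight line by ordinary least squares, with the residuals serving as a check on linearity:

```python
import numpy as np

# Hypothetical pooled observations: mean labeled-particle number density Ni
# in each layer versus the layer midpoint position xi (arbitrary units).
x = np.array([0.5, 1.5, 2.5, 3.5, 4.5, 5.5])
N = np.array([98.0, 91.2, 84.9, 78.3, 71.8, 65.1])

# Ordinary least-squares fit to the linear profile N(x) = a + b*x.
A = np.column_stack([np.ones_like(x), x])
(a, b), *_ = np.linalg.lstsq(A, N, rcond=None)

# Residuals larger than the statistical scatter would signal curvature of the
# kind super-Burnett terms could introduce; here they serve as a linearity check.
residuals = N - (a + b * x)
print(f"intercept = {a:.2f}, slope = {b:.2f}, max |residual| = {np.abs(residuals).max():.2f}")
```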

Before data analysis can start, the response curves generated during the experiment require processing. The unwanted part of a sensorgram, such as a very long baseline before injection and regeneration, can be removed. The [Pg.21]

In the BIAevaluation software, all the described sensorgram operations can be carried out in a graphical way without number manipulations in the data spreadsheets. [Pg.22]


Nearly all experimental coexistence curves, whether from liquid-gas equilibrium, liquid mixtures, order-disorder in alloys, or in ferromagnetic materials, are far from parabolic, and more nearly cubic, even far below the critical temperature. This was known for fluid systems, at least to some experimentalists, more than one hundred years ago. Verschaffelt (1900), from a careful analysis of data (pressure-volume and densities) on isopentane, concluded that the best fit was with β = 0.34 and δ = 4.26, far from the classical values. Van Laar apparently rejected this conclusion, believing that, at least very close to the critical temperature, the coexistence curve must become parabolic. Even earlier, van der Waals, who had derived a classical theory of capillarity with a surface-tension exponent of 3/2, found (1893)... [Pg.640]

The exploratory analysis of data sets by visual data mining applications takes place in a three-step process. During the first step (overview), the user can obtain an overview of the data and perhaps identify some basic relationships between specific data points. In the second step (filtering), dynamic and interactive navigation, selection, and query tools are used to reorganize and filter the data set. Each interaction by the user leads to an immediate update of the data scene and reveals hidden patterns and relationships. Finally, the patterns or data points can be analyzed in detail with specific detail tools. [Pg.476]

Molecules are usually represented as 2D formulas or 3D molecular models. While the 3D coordinates of atoms in a molecule are sufficient to describe the spatial arrangement of atoms, they exhibit two major disadvantages as molecular descriptors: they depend on the size of the molecule, and they do not describe additional properties (e.g., atomic properties). The first feature is most important for computational analysis of data. Even a simple statistical function, e.g., a correlation, requires the information to be represented in equally sized vectors of a fixed dimension. The solution to this problem is a mathematical transformation of the Cartesian coordinates of a molecule into a vector of fixed length. The second point can... [Pg.515]
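One widely used way to map Cartesian coordinates onto a fixed-length vector is a radial-distribution-function-style descriptor built from interatomic distances. The sketch below is a generic illustration with invented coordinates and parameter values, not the specific transformation this passage goes on to introduce:

```python
import numpy as np

def rdf_descriptor(coords, n_bins=32, r_max=8.0, beta=25.0):
    """Encode a molecule's 3D coordinates as a fixed-length vector.

    Every interatomic distance contributes a Gaussian-smoothed count to a
    fixed grid of radii, so molecules with different numbers of atoms all
    map to the same n_bins-dimensional descriptor.
    """
    coords = np.asarray(coords, dtype=float)
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    i, j = np.triu_indices(len(coords), k=1)   # each atom pair counted once
    d = dist[i, j]
    r = np.linspace(0.0, r_max, n_bins)        # fixed grid of radii
    return np.exp(-beta * (r[None, :] - d[:, None]) ** 2).sum(axis=0)

# Toy coordinates (three atoms, invented values, in angstroms): the output
# length is 32 regardless of how many atoms the molecule contains.
mol = [[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [2.1, 1.3, 0.0]]
print(rdf_descriptor(mol).shape)
```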

The normal distribution of measurements (or the normal law of error) is the fundamental starting point for analysis of data. When a large number of measurements are made, the individual measurements are not all identical and equal to the accepted value μ, which is the mean of an infinite population or universe of data, but are scattered about μ, owing to random error. If the magnitude of any single measurement is the abscissa and the relative frequencies (i.e., the probability) of occurrence of different-sized measurements are the ordinate, the smooth curve drawn through the points (Fig. 2.10) is the normal or Gaussian distribution curve (also the error curve or probability curve). The term error curve arises when one considers the distribution of errors (x − μ) about the true value. [Pg.193]
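For reference, the Gaussian curve referred to here has the familiar form

f(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\!\left[-\frac{(x-\mu)^{2}}{2\sigma^{2}}\right]

where μ is the population mean and σ is the population standard deviation, so the errors (x − μ) are distributed symmetrically about zero.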

When designing and evaluating an analytical method, we usually make three separate considerations of experimental error. First, before beginning an analysis, errors associated with each measurement are evaluated to ensure that their cumulative effect will not limit the utility of the analysis. Errors known or believed to affect the result can then be minimized. Second, during the analysis the measurement process is monitored, ensuring that it remains under control. Finally, at the end of the analysis the quality of the measurements and the result are evaluated and compared with the original design criteria. This chapter is an introduction to the sources and evaluation of errors in analytical measurements, the effect of measurement error on the result of an analysis, and the statistical analysis of data. [Pg.53]

The probabilistic nature of a confidence interval provides an opportunity to ask and answer questions comparing a sample's mean or variance to either the accepted values for its population or similar values obtained for other samples. For example, confidence intervals can be used to answer questions such as "Does a newly developed method for the analysis of cholesterol in blood give results that are significantly different from those obtained when using a standard method?" or "Is there a significant variation in the chemical composition of rainwater collected at different sites downwind from a coal-burning utility plant?" In this section we introduce a general approach to the statistical analysis of data. Specific statistical methods of analysis are covered in Section 4F. [Pg.82]
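A minimal sketch of such a comparison (invented replicate values, and assuming SciPy is available) builds a 95% confidence interval around the sample mean and checks whether the accepted value falls inside it:

```python
import numpy as np
from scipy import stats

# Hypothetical replicate results (mg/dL) from the newly developed method.
x = np.array([244.1, 249.8, 246.5, 251.2, 247.9])
accepted = 245.0                      # illustrative value from the standard method

mean, s, n = x.mean(), x.std(ddof=1), len(x)
t_crit = stats.t.ppf(0.975, df=n - 1)           # two-sided 95% confidence level
half_width = t_crit * s / np.sqrt(n)

print(f"95% CI: {mean:.1f} +/- {half_width:.1f}")
# The two methods differ significantly (at this confidence level) only if the
# accepted value lies outside the confidence interval.
print("significantly different:", abs(mean - accepted) > half_width)
```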

The following experiments may be used to introduce the statistical analysis of data in the analytical chemistry laboratory. Each experiment is annotated with a brief description of the data collected and the type of statistical analysis used in evaluating the data. [Pg.97]

The stretching properties of polymers are investigated by examining the effect of polymer orientation, polymer chain length, stretching rate, and temperature. Homogeneity of polymer films and consistency between lots of polymer films are also investigated. Statistical analysis of data includes Q-tests and t-tests. [Pg.98]
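The Q-test mentioned here (Dixon's Q) compares the gap between a suspect value and its nearest neighbour to the overall range. A minimal sketch follows, with invented data and a critical value quoted only approximately, so a current table should be consulted in practice:

```python
def dixon_q(values):
    """Q statistic for the most extreme value: gap to nearest neighbour / range."""
    v = sorted(values)
    gap = max(v[1] - v[0], v[-1] - v[-2])     # suspect value may sit at either end
    return gap / (v[-1] - v[0])

# Hypothetical replicate film-thickness readings with one suspect low value.
data = [0.189, 0.167, 0.187, 0.183, 0.186]
q = dixon_q(data)
# About 0.71 is the commonly tabulated critical value for n = 5 at the 95%
# confidence level; a Q above the critical value flags the point for rejection.
print(f"Q = {q:.3f}, reject suspect value: {q > 0.71}")
```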

Vitha, M. F.; Carr, P. W. "A Laboratory Exercise in Statistical Analysis of Data," J. Chem. Educ. 1997, 74, 998-1000. Students determine the average weight of vitamin E pills using several different methods (one at a time, in sets of ten pills, and in sets of 100 pills). The data collected by the class are pooled together, plotted as histograms, and compared with results predicted by a normal distribution. The histograms and standard deviations for the pooled data also show the effect of sample size on the standard error of the mean. [Pg.98]
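The sample-size effect noted at the end follows directly from the standard error of the mean,

s_{\bar{x}} = \frac{s}{\sqrt{n}}

so pooling pills in sets of 100 rather than sets of 10 reduces the standard error of the mean by a factor of √10 ≈ 3.2.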

A more comprehensive discussion of the analysis of data, covering all topics considered in this chapter as well as additional material, can be found in any textbook on statistics or data analysis; the following are several such texts. [Pg.102]

The following experiments may be used to illustrate the application of titrimetry to quantitative, qualitative, or characterization problems. Experiments are grouped into four categories based on the type of reaction (acid-base, complexation, redox, and precipitation). A brief description is included with each experiment providing details such as the type of sample analyzed, the method for locating end points, or the analysis of data. Additional experiments emphasizing potentiometric electrodes are found in Chapter 11. [Pg.358]

Values for Ka1 and Ka2 for acids of the form H2A are determined from a least-squares analysis of data from a potentiometric titration. [Pg.358]

For each type of problem, appropriate taste tests are suggested together with the type of panel, number of samples per test, and analysis of data. [Pg.19]

Methodology. Practitioners of chemical market research develop individual styles and techniques. However, four elements are essential to every useful study: defining the problem, data gathering, analysis of data, and presentation of findings. [Pg.534]

Analysis of Data. A veteran practitioner of chemical market research likened this step to the assembly of a jigsaw puzzle. There are many pieces of unequal size and importance that must be put together to make a picture understandable to everyone. Call reports, secondary data inputs, experience, and judgment are the tools used by the market researcher to analyze the data, reach conclusions, make recommendations, and write the report. [Pg.535]

The Hesketh equation is empirical and is based upon a regression analysis of data from a number of industrial venturi scrubbers ... [Pg.1438]

Data should be available at every phase of the service quality loop from soliciting business through client reaction and feedback. The collection and analysis of data is a means of improving the service or conversely can detect the onset of an insidious degradation of the service before it becomes a major issue. [Pg.197]

Analysis and prediction of side-chain conformation have long been predicated on statistical analysis of data from protein structures. Early rotamer libraries [91-93] ignored backbone conformation and instead gave the proportions of side-chain rotamers for each of the 18 amino acids with side-chain dihedral degrees of freedom. In recent years, it has become possible to take account of the effect of the backbone conformation on the distribution of side-chain rotamers [28,94-96]. McGregor et al. [94] and Schrauber et al. [97] produced rotamer libraries based on secondary structure. Dunbrack and Karplus [95] instead examined the variation in rotamer distributions as a function of the backbone dihedrals φ and ψ, later providing conformational analysis to justify this choice [96]. Dunbrack and Cohen [28] extended the analysis of protein side-chain conformation by using Bayesian statistics to derive the full backbone-dependent rotamer libraries at all... [Pg.339]
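Conceptually, a backbone-dependent rotamer library tabulates conditional frequencies P(rotamer | residue, φ, ψ). The sketch below only illustrates that counting idea on invented records; it is not the procedure of Dunbrack and Karplus or the Bayesian analysis described above:

```python
from collections import Counter, defaultdict

def phi_psi_bin(phi, psi, width=10.0):
    """Assign a (phi, psi) pair, in degrees, to a square bin of the given width."""
    return (int(phi // width) * width, int(psi // width) * width)

def backbone_dependent_frequencies(records):
    """records: iterable of (residue, phi, psi, chi1_rotamer) tuples.

    Returns {(residue, phi_bin, psi_bin): {rotamer: relative frequency}}.
    """
    counts = defaultdict(Counter)
    for res, phi, psi, rot in records:
        counts[(res, *phi_psi_bin(phi, psi))][rot] += 1
    return {key: {rot: n / sum(c.values()) for rot, n in c.items()}
            for key, c in counts.items()}

# Invented records: (residue, phi, psi, chi1 rotamer class).
data = [("SER", -65.0, -42.0, "g-"), ("SER", -63.0, -45.0, "g-"),
        ("SER", -64.0, -41.0, "t"),  ("SER", -120.0, 130.0, "g+")]
print(backbone_dependent_frequencies(data))
```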

Classes II and III include all tests in which the specified gas and/or the specified operating conditions cannot be met. Class II and Class III basically differ only in method of analysis of data and computation of results. The Class II test may use perfect gas laws in the calculation, while Class III must use the more complex real gas equations. An example of a Class II test might be a suction throttled air compressor. An example of a Class III test might be a CO2 loop test of a hydrocarbon compressor. Table 10-4 shows code allowable departure from specified design parameters for Class II and Class III tests. [Pg.418]
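Schematically, the difference amounts to whether the compressibility factor Z can be taken as unity. The sketch below uses illustrative numbers only; in an actual Class III calculation Z would come from real-gas property data rather than an assumed constant:

```python
R = 8.314  # universal gas constant, J/(mol*K)

def gas_density(p_pa, t_k, molar_mass_kg_per_mol, z=1.0):
    """Density from P*V = Z*n*R*T; z = 1.0 corresponds to the perfect-gas case."""
    return p_pa * molar_mass_kg_per_mol / (z * R * t_k)

# Air-like gas at 10 bar and 300 K (illustrative values only).
print(gas_density(10e5, 300.0, 0.029))           # Class II style: perfect gas
print(gas_density(10e5, 300.0, 0.029, z=0.98))   # Class III style: Z from real-gas data
```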

Goals and objectives to be defined
Determination of customer satisfaction
Continual improvement
Analysis of data [Pg.12]

Analysis of data to determine whether goals are being achieved (clause 4.1.5)
Monitoring of achievement of goals (clauses 4.1.3.2 and 4.2.8) [Pg.62]

As mentioned earlier, some measures will be chosen because improvements in these areas were part of the project justification. It is most likely that these will be efficiency measures. Calculation of these measures generally requires analysis of data or specific data collection exercises. There is a relatively high cost associated with preparing these measures, so they should be used prudently. In choosing efficiency measures, you should use only those for which you have comparative data about the current management systems. For example, if there is no information on the number of hours dedicated to PSM and ESH, don't use this to try to demonstrate the improvement in efficiency. [Pg.129]

The analysis of performance provides a powerful technique for identifying potential for improvement. As discussed in the previous chapter, trends can be spotted and action taken to identify and correct any unwanted development. Additionally, analysis of data can help with the identification of underlying problems. For example, a higher than average number of eye injuries at a particular facility might justify further investigation. [Pg.141]

Simplified diagnostics: Identification of specific failure modes of plant equipment requires manual analysis of data stored in the computer's memory. The software program should be able to display, modify, and compare stored data in a manner that simplifies the analysis of the actual operating condition of the equipment. [Pg.808]

Ratterman, M., An Approach to the Design and Analysis of Data from the Standpipe System on FCC Units, Gulf Research and Development. Pittsburgh, Pennsylvania, October 1983. [Pg.233]

Statistical analysis of data from tests with an apparatus by Wesley has demonstrated satisfactory reproducibility of results not only among specimens in a particular test, but also from test to test undertaken at different times. [Pg.996]

Analysis of data pertaining to the modulus of PEO gels obtained by the polyaddition reaction [90] shows that even in this simplified case the network structure substantially deviates from the ideal one. For all samples studied, the molecular weight between crosslinks (Mc,exp) exceeds the molecular weight of the precursor (Mn). With decreasing precursor concentration the Mc,exp/Mn ratio increases. Thus, at Mn = 5650 a decrease in precursor concentration from 50 to 20% increases the ratio from 2.3 to 12, most probably due to intramolecular cycle formation. [Pg.119]

Cholesterol Treatment Trialists' (CTT) Collaborators (2005) Efficacy and safety of cholesterol-lowering treatment: prospective meta-analysis of data from 90 056 participants in 14 randomised trials of statins. Lancet 366:1267-1278 [Pg.599]

From this we can see that knowledge of kd, f and Rp in a conventional polymerization process readily yields a value of the ratio kp/kt^1/2. In order to obtain a value for kt we require further information on kp. Analysis of Rp data obtained under non-steady-state conditions (when there is no continuous source of initiator radicals) yields the ratio kp/kt. Various non-steady-state methods have been developed, including the rotating sector method, spatially intermittent polymerization and pulsed laser polymerization (PLP). The classical approach for deriving the individual values of kp and kt by combining values for kp/kt^1/2 with kp/kt obtained in separate experiments can, however, be problematical because the values of kt are strongly dependent on the polymerization conditions (Section... [Pg.238]
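For reference, the steady-state rate law that underlies the first statement, written in standard notation, is

R_p = k_p[\mathrm{M}]\left(\frac{f\,k_d[\mathrm{I}]}{k_t}\right)^{1/2}

so knowledge of Rp, kd, f and the monomer and initiator concentrations fixes the combination kp/kt^1/2; an independent non-steady-state estimate of kp/kt then allows kp and kt to be separated.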

Given what has gone before, the reader can readily deduce how to apply these equations to the numerical and linear graphical analysis of data. Consider the following examples. Figure 2-7 shows simulated data for these three dependences. Each gives... [Pg.29]

From an analysis of data for polypyrrole, Albery and Mount concluded that the high-frequency semicircle was indeed due to the electron-transfer resistance.203 We have confirmed this using a polystyrene sulfonate-doped polypyrrole with known ion and electron-transport resistances.145 The charge-transfer resistance was found to decrease exponentially with increasing potential, in parallel with the decreasing electronic resistance. The slope of 60 mV/decade indicates a Nernstian response at low doping levels. [Pg.583]
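For context, the Nernstian factor 2.303RT/F is about 59 mV at 25 °C, so a Nernst-type response corresponds to one decade of change per roughly 59 mV of potential, which is the origin of the quoted slope of about 60 mV/decade.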

Ionic Reactions in TD/D2 Ethane Mixtures. The data in Table III show that deuteron transfer occurs in irradiated mixtures of D2 and ethane as well. Data are shown only for temperatures (<25°C) at which ionic reactions clearly predominate. Analysis of data concerning thermal atomic and free-radical reactions at higher temperatures will be published elsewhere in the near future. The reaction of D3+ with ethane has been observed directly (1) and postulated (2) by other workers. Both groups have proposed that the sequence initiated by deuteron transfer to ethane proceeds as follows ... [Pg.292]

The World Health Organization (WHO) promotes the use of an Anatomical Therapeutic Chemical (ATC) classification system for the collection and analysis of data on drug use. This was originally developed by Scandinavian authorities, and uses a combination of anatomical, therapeutic and chemical criteria to assign drugs to an individual class. The top-level categories, which are anatomically based, are listed in Table 3.2. [Pg.45]

Analysis of Data (Analyse effectiveness and identify improvements)... [Pg.171]

Analysis of data - Analyse performance of quality system based on feedback, conformity of product to requirements, trends and supplier performance... [Pg.232]


