Big Chemical Encyclopedia

Numerical procedures, data analysis

Integral Methods for the Analysis of Kinetic Data—Numerical Procedures. While the graphical procedures discussed in the previous section are perhaps the most practical and useful of the simple methods for determining rate constants, a number of simple numerical procedures exist for accomplishing this task. [Pg.53]
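As a minimal illustration of such a numerical procedure (a sketch, not the specific method of the excerpted text): for a first-order reaction, ln C = ln C0 - kt, so k can be recovered as the negative least-squares slope of ln C against t. The data here are synthetic.

```python
import math

def first_order_rate_constant(times, concs):
    """Integral method for a first-order reaction: ln C = ln C0 - k*t,
    so -k is the least-squares slope of ln C versus t."""
    ys = [math.log(c) for c in concs]
    n = len(times)
    t_mean = sum(times) / n
    y_mean = sum(ys) / n
    slope = (sum((t - t_mean) * (y - y_mean) for t, y in zip(times, ys))
             / sum((t - t_mean) ** 2 for t in times))
    return -slope

# Synthetic data: C = 2.0 * exp(-0.5 t), so k should come back as 0.5
ts = [0.0, 1.0, 2.0, 3.0, 4.0]
cs = [2.0 * math.exp(-0.5 * t) for t in ts]
print(round(first_order_rate_constant(ts, cs), 6))  # ≈ 0.5
```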

Table 11.1.1 contains a workup of the data in terms of the above analysis. In the more general case one should be sure to use appropriate averaging techniques or graphical integration to determine both F(t) and T. When there is an abundance of data, plot it, draw a smooth curve, and integrate graphically instead of using the strictly numerical procedure employed above. [Pg.392]
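A common numerical stand-in for graphical integration of tabulated data is the trapezoidal rule; the averaging actually used in the cited table may differ, so this is only an illustrative sketch with made-up data.

```python
def trapezoid(xs, ys):
    """Integrate tabulated (x, y) data by the trapezoidal rule, a simple
    numerical substitute for drawing a smooth curve and integrating it."""
    return sum((xs[i + 1] - xs[i]) * (ys[i] + ys[i + 1]) / 2.0
               for i in range(len(xs) - 1))

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [x * x for x in xs]          # integral of x^2 on [0, 2] is 8/3
print(trapezoid(xs, ys))          # trapezoidal estimate: 2.75
```

With a finer grid the estimate converges toward the exact value 8/3, which is the numerical analogue of drawing a smoother curve through more data.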

The last move of the Methods section, Describe Numerical Methods, is included only if numerical or mathematical procedures (e.g., statistical analyses) were used to analyze, derive, or model data presented in the paper. In such cases, the experimental methods are described first (move 2), and the numerical methods are described last (move 3). Subheadings used to demarcate move 3 include Statistical Methods or Data Analysis. [Pg.64]

A factor of concern with bubblecap trays is the development of a liquid gradient from inlet to outlet, which results in a corresponding variation in vapor flow across the cross section and usually in degradation of the efficiency. With other kinds of trays this effect rarely is serious. Data and procedures for analysis of this behavior are summarized by Bolles (in Smith, 1963, Chap. 14). There also are formulas and a numerical example of the design of all features of bubblecap trays. Although, as mentioned, new installations of such trays are infrequent, many older ones still are in operation and may need to be studied for changed conditions. [Pg.433]

Because of the very low scattered intensity, the data at the shortest sampling interval is usually the poorest in quality. Arbitrary renormalization of the data followed by the graphical representation outlined above is most likely to amplify errors in the data analysis, focus attention on the inherent errors in the construction of the composite relaxation function, and give undue importance to the worst data. When the data is as limited in quality as it is for this problem, any method of analysis should be as numerically stable as possible and the maximum allowable smoothing of the data should be employed. This procedure may obscure subtle features, but only very high quality data could reliably demonstrate their presence anyway. At the present time a conservative approach seems more sensible. [Pg.138]
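One of the simplest numerically stable smoothers in the conservative spirit described above is a centered moving average (more sophisticated options such as Savitzky-Golay filtering exist, but are not implied by the text). This sketch uses invented data.

```python
def moving_average(ys, window=3):
    """Centered moving average; window must be odd. The end points are
    left unchanged, a deliberately conservative choice: heavy smoothing
    may obscure subtle features, but it does not amplify noise."""
    half = window // 2
    out = list(ys)
    for i in range(half, len(ys) - half):
        out[i] = sum(ys[i - half:i + half + 1]) / window
    return out

noisy = [1.0, 1.2, 0.9, 1.1, 1.0, 1.3, 0.8]
print(moving_average(noisy))
```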

Numerous technical descriptions of various aspects of lifetime apparatus can be found (e.g. MacKenzie, 1983, and references therein), and a number of sophisticated data analysis and fitting procedures have been developed to analyse the data collected (e.g. Coleman, Griffith and Heyland, 1974 Coleman, 1979 Kirkegaard, Pedersen and Eldrup, 1989), but detailed discussion of these topics is beyond the scope of the present treatment. [Pg.13]

Traditionally, data was a single numerical result from a procedure or assay, for example, the concentration of the active component in a tablet. However, with modern analytical equipment, these results are more often a spectrum, such as a mid-infrared spectrum, and so the use of multivariate calibration models has flourished. This has led to more complex statistical treatments, because the result from a calibration needs to be validated rather than just recorded as a single value. The quality of calibration models needs to be tested, as does their robustness, all adding to the complexity of the data analysis. In the same way that the spectroscopist relies on the spectra obtained from an instrument, the analyst must rely on the results obtained from the calibration model (which may be based on spectral data); therefore, the rigor of testing must be at the same high standard as that of the instrument... [Pg.8]
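The validation idea can be sketched in its simplest univariate form (real chemometric models are multivariate, e.g. PLS, but the principle is the same): fit on a calibration set, then judge the model by its prediction error on an independent validation set. All numbers below are hypothetical.

```python
import math

def fit_line(xs, ys):
    """Least-squares line y = a + b*x fitted to the calibration set."""
    n = len(xs)
    xm, ym = sum(xs) / n, sum(ys) / n
    b = (sum((x - xm) * (y - ym) for x, y in zip(xs, ys))
         / sum((x - xm) ** 2 for x in xs))
    return ym - b * xm, b

def rmsep(xs, ys, a, b):
    """Root-mean-square error of prediction on an independent validation
    set: the kind of figure used to test a calibration model's quality."""
    return math.sqrt(sum((y - (a + b * x)) ** 2
                         for x, y in zip(xs, ys)) / len(xs))

# Hypothetical calibration and validation sets (concentration vs response)
cal_x, cal_y = [0.1, 0.2, 0.3, 0.4], [1.0, 2.1, 2.9, 4.0]
val_x, val_y = [0.15, 0.35], [1.6, 3.4]
a, b = fit_line(cal_x, cal_y)
print(round(rmsep(val_x, val_y, a, b), 3))  # 0.08
```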

We are not going to deal with all these examples of application of percolation theory to catalysis in this paper. Although the physics of these problems is different, the basic numerical and mathematical techniques are very similar. For the deactivation problem discussed here, for example, one starts with a three-dimensional network representation of the catalyst porous structure. Systematic procedures for mapping any disordered porous medium onto an equivalent random network of pore bodies and throats have been developed, and detailed accounts can be found in a number of publications (8). For the purposes of this discussion it suffices to say that the success of the mapping techniques strongly depends on the availability of quality structural data, such as mercury porosimetry, BET and direct microscopic observations. Of equal importance, however, is the correct interpretation of these data. It serves no purpose to perform careful mercury porosimetry and BET experiments and then use the wrong model (such as the bundle of pores) for data analysis and interpretation. [Pg.175]
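A toy version of such a network model (not the mapping procedures of ref. 8, just an illustration of the idea) is bond percolation on a cubic lattice: pore bodies are lattice sites, pore throats are bonds that are open with probability p, and accessibility is measured by a breadth-first search from an inlet face.

```python
import random
from collections import deque

def accessible_fraction(n=8, p=0.6, seed=1):
    """Toy pore-network model: bond percolation on an n*n*n cubic lattice.
    Each throat (bond between neighbouring sites) is open with probability
    p; returns the fraction of pore bodies reachable from the face z = 0."""
    rng = random.Random(seed)
    bonds = {}
    for x in range(n):
        for y in range(n):
            for z in range(n):
                for d in ((1, 0, 0), (0, 1, 0), (0, 0, 1)):
                    b = (x + d[0], y + d[1], z + d[2])
                    if max(b) < n:
                        bonds[((x, y, z), b)] = rng.random() < p

    def is_open(a, b):
        return bonds.get((a, b), bonds.get((b, a), False))

    seen = {(x, y, 0) for x in range(n) for y in range(n)}  # inlet face
    queue = deque(seen)
    moves = ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
             (0, -1, 0), (0, 0, 1), (0, 0, -1))
    while queue:
        s = queue.popleft()
        for d in moves:
            nb = (s[0] + d[0], s[1] + d[1], s[2] + d[2])
            if (all(0 <= c < n for c in nb)
                    and nb not in seen and is_open(s, nb)):
                seen.add(nb)
                queue.append(nb)
    return len(seen) / n ** 3

print(accessible_fraction(n=4, p=1.0))  # all throats open: 1.0
```

Deactivation by pore blocking can then be mimicked by lowering p and watching the accessible fraction collapse near the percolation threshold.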

Nitrogen sorption/desorption isotherms of membranes A, B and C exhibit narrow hysteresis loops in regions close to the saturation points (Figures 3a,b,c). The experimental points on both branches of the three isotherms were fitted by an analytical function. In each case, correlation coefficients were greater than 0.9995. This not only allowed averaging of the experimental data but also simplified the numerical procedures of isotherm analysis. Analysis was performed only in the regions for which experimental points were available. [Pg.342]
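The figure of merit quoted above is the correlation coefficient between measured and fitted values, which can be computed as Pearson's r (the data below are invented, not the isotherm data of the text).

```python
import math

def pearson_r(ys, fits):
    """Pearson correlation coefficient between measured points and the
    values of a fitted analytical function; r > 0.9995 indicates an
    excellent fit, as reported for the isotherm branches."""
    n = len(ys)
    ym, fm = sum(ys) / n, sum(fits) / n
    cov = sum((y - ym) * (f - fm) for y, f in zip(ys, fits))
    return cov / math.sqrt(sum((y - ym) ** 2 for y in ys)
                           * sum((f - fm) ** 2 for f in fits))

measured = [0.10, 0.21, 0.29, 0.41, 0.52]
fitted = [0.10, 0.20, 0.30, 0.40, 0.50]
print(round(pearson_r(measured, fitted), 4))
```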

The numerical procedures used to deal with experimental data are illustrated with results from an experiment in chemical kinetics recorded in table C.1. The reaction is first order, but that will be ignored in the analysis of the data. Note that the data points have been obtained for equal increments in the independent variable, time. It turns out that numerical techniques are especially easy to apply when this is the case. The second feature of this data set is that the precision of the time data is much higher than that of the concentration data. [Pg.608]
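Equal increments simplify things because the step h is a single constant, so finite-difference formulas take their simplest form. A sketch (with synthetic data, not those of table C.1) of central differences for the rate dy/dt:

```python
def central_derivatives(ys, h):
    """Derivative estimates for equally spaced data: central differences
    in the interior, one-sided differences at the two end points. The
    constant spacing h is what makes the formulas this simple."""
    n = len(ys)
    d = [0.0] * n
    d[0] = (ys[1] - ys[0]) / h
    d[-1] = (ys[-1] - ys[-2]) / h
    for i in range(1, n - 1):
        d[i] = (ys[i + 1] - ys[i - 1]) / (2 * h)
    return d

# y = t^2 sampled at equal increments h = 1; exact derivative is 2t
ys = [0.0, 1.0, 4.0, 9.0, 16.0]
print(central_derivatives(ys, 1.0))
```

The central differences reproduce the exact derivative of a quadratic at the interior points; only the cruder one-sided end formulas are biased.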

The dissociation constants for tri-, di- and mono-chloroacetic acids are 0.2, 0.05 and 0.0014 M, respectively. The experimental procedures and data analysis used in this study are satisfactory, and the numerical values of the proposed equilibrium constants are therefore considered to be reliable, but they are not selected since data on organic ligands are not included in the present review. However, the equilibrium constants for the weak complexes require extensive changes in the ionic medium, and the observed variation in distribution coefficients could therefore also be a result of activity coefficient variations. [Pg.414]

The primary function of the computer in FTIR instruments is to perform the Fourier transformation that converts an interferogram to a recognizable spectrum. However, the availability of an on-line minicomputer has opened the door to routine data manipulations. In this section we will review procedures that have been or promise to be useful in coal characterization. Certain data analysis operations, such as numerical integration... [Pg.52]
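The interferogram-to-spectrum mapping can be illustrated with a naive O(N^2) discrete Fourier transform (real instruments use the FFT, and a genuine interferogram is more complicated than this single-cosine toy signal).

```python
import cmath
import math

def dft_magnitudes(signal):
    """Naive discrete Fourier transform returning magnitude at each
    frequency index k; the mapping interferogram -> spectrum is the
    same one the FFT computes, just far more slowly."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]

# Toy "interferogram": a single cosine at 3 cycles per record
n = 32
interferogram = [math.cos(2 * math.pi * 3 * t / n) for t in range(n)]
spectrum = dft_magnitudes(interferogram)
print(spectrum.index(max(spectrum[:n // 2])))  # strongest line at k = 3
```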


