
Raw data

The data obtained for the Tank 1 and Tank 2 levels are shown in Fig. 6.10. In total, 2 h of data were collected. There are no obvious problems with the data, such as missing or abnormal values. To use the data, the 1 s record must be downsampled to the desired sampling rate of 15 s, which can be accomplished by taking every 15th point of the original data set as the new downsampled data vector. The downsampled data are presented in Sect. 3.A.1, Water Level in Tanks 1 and 2 Data. [Pg.313]
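A minimal sketch of this downsampling step, assuming the raw 1 Hz record is available as a single-column text file (the file name and array are hypothetical):

```python
import numpy as np

# Downsample 1 s level data to a 15 s sampling rate by keeping every
# 15th point. A 2 h record at 1 Hz contains 7200 samples.
level_1s = np.loadtxt("tank1_level.csv")    # hypothetical file name

step = 15                                   # desired period / raw period
level_15s = level_1s[::step]                # every 15th sample

print(len(level_1s), "->", len(level_15s))  # e.g. 7200 -> 480
```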


On the long time scale, the featureless background is important when determining long lifetimes of up to 142 ns. The slope for the background data [Pg.186]

A small fraction of positrons will not enter the sample but will be backscattered from the surface. Some are confined by the electron acceleration grid and returned to the sample. For a positron of 2 keV energy and a grid potential just below 2 kV, the turn-around time back to the sample is 2 ns. The magnitude of this backscatter bump is material dependent and is about 6% in low-Z, low-density samples. [Pg.187]

A sharp peak at about 6 ns occurs when backscattered positrons pass the grid, reach the CEMA, and trigger timing pulses without the secondary-electron time of flight. This 6 ns peak vanishes in the statistical noise for samples that give longer lifetimes. In the lifetime analysis discussed here, the data in the 6 ns peak region are ignored. [Pg.187]

The resolution function and short lifetimes remain rather constant throughout the porosity range. The longer lifetimes increase steadily with porosity up to about 50% porogen load. At larger loads the lifetimes remain similar while their intensities drop. [Pg.188]

Technological advances have superseded these concerns to some extent. The great drop in the price of computer memory, with hard disks even in lower-priced PCs attaining storage capacities of tens to hundreds of gigabytes, has already allowed for the secure storage and retrieval of the complete [Pg.94]

On the other hand, examples can certainly be found of information that is not considered to be raw data. One such example may be animal cage cards. Cage cards bearing just the usual information that lets study personnel perform their duties correctly, such as animal and cage number, study number, and study dates, are not raw data, since this information is not the result [Pg.95]

It may also happen that it becomes necessary to file the same raw data in more than one place. Consider, for instance, an investigation with a number of test items run in parallel, and thus using the same controls, as frequently happens with in vitro studies. There will be one set of original observations for the control values, valid for all of the separate studies conducted in parallel. Since these studies have to be reported and archived as separate entities, each of them should have its own record of the control values. The single record of the control data, constituting the real raw data, will therefore have to be copied (possibly a number of times) to accommodate the different studies. [Pg.96]

Instances where all original observations are entered in a bound laboratory notebook, which ultimately contains raw data from several studies, are to be regarded in the same way. The original raw data will certainly be contained in this notebook, which consequently has to be archived properly as soon as it is full. At the time of completion of any one study, the ultimate location of the notebook will not yet be known; therefore, verified copies thereof have to be made and archived with the study raw data. These copies should also bear the identification of the notebook itself, so that a later comparison between the various records of the raw data remains possible. [Pg.96]

Finally, such copying may also become necessary in other instances, e.g., where the recording has taken place on a medium that deteriorates rather rapidly, as is the case for certain light-sensitive paper records. These copies should be truly accurate copies of the original, with no corners or edges cut off; thus they have to be verified, normally by the dated signature of the person who did the copying. [Pg.96]


At each stage of a field life cycle, raw data have to be converted into information, but for the information to have value it must influence decision making and profitability. [Pg.136]

In statistical terms, a perceptual improvement is therefore obtained if the amplitude distribution in the filtered signal (image) is more concentrated around zero than in the raw data (contrast enhancement). A more concentrated amplitude distribution generally means smaller entropy. Thus, from an operator perception point of view, interesting results should be obtained if the raw data can be filtered to yield low entropy amplitude distributions. However, one should note that the entropy can be minimized by means of a (pathological) filter which always outputs zero or another constant value. Thus, appropriate restrictions must be imposed on the filter construction process. [Pg.89]
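As an illustration, here is a small sketch (with synthetic, hypothetical data) comparing the Shannon entropy of the amplitude distribution before and after a simple smoothing filter; a common set of histogram bins keeps the comparison fair:

```python
import numpy as np

def amplitude_entropy(x, edges):
    """Shannon entropy (bits) of a signal's amplitude histogram."""
    counts, _ = np.histogram(x, bins=edges)
    p = counts[counts > 0] / counts.sum()    # drop empty bins, normalize
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(0)
raw = rng.normal(0.0, 1.0, 10_000)                    # stand-in for raw data
filtered = np.convolve(raw, np.ones(9) / 9, "same")   # simple moving-average filter

edges = np.linspace(-4, 4, 65)   # shared bin edges for both signals
# Smoothing concentrates amplitudes around zero, so entropy drops.
print(amplitude_entropy(raw, edges), amplitude_entropy(filtered, edges))
```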

Figure C2.18.5. Si(2p) spectrum of Si(111) reacted with 5 x 10 Torr of XeF2, using a photon energy of 130 eV. The top panel shows the raw data and the fitted background. The bottom panel shows the spectrum after the background has been subtracted and fitted with five components: bulk Si and the four fluorosilyl peaks. The solid curve is the sum of the individual dashed component curves. Reproduced from [40].
Figure 4-6. Autoscaling. The variables are represented by variance bars: a) raw data; b) the data after UV-scaling only; c) the autoscaled data [8].
Raw data are collected observations that have not been organized numerically. An average is a value that is typical or representative of a set of data. Several averages can be defined, the most common being the arithmetic mean (or briefly, the mean), the median, the mode, and the geometric mean. [Pg.192]
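A quick sketch of these averages using Python's standard statistics module (the data values are hypothetical):

```python
import statistics

data = [2.0, 3.0, 3.0, 5.0, 7.0]         # hypothetical raw data

print(statistics.mean(data))             # arithmetic mean: 4.0
print(statistics.median(data))           # median (middle value): 3.0
print(statistics.mode(data))             # mode (most frequent): 3.0
print(statistics.geometric_mean(data))   # geometric mean: ~3.63
```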

The most visible part of the analytical approach occurs in the laboratory. As part of the validation process, appropriate chemical or physical standards are used to calibrate any equipment being used and any solutions whose concentrations must be known. The selected samples are then analyzed and the raw data recorded. [Pg.6]

The raw data collected during the experiment are then analyzed. Frequently the data must be reduced or transformed to a more readily analyzable form. A statistical treatment of the data is used to evaluate the accuracy and precision of the analysis and to validate the procedure. These results are compared with the criteria established during the design of the experiment, and then the design is reconsidered, additional experimental trials are run, or a solution to the problem is proposed. When a solution is proposed, the results are subject to an external evaluation that may result in a new problem and the beginning of a new analytical cycle. [Pg.6]
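A hedged sketch of such a statistical treatment, comparing replicate results against accuracy and precision criteria; all values and thresholds here are hypothetical assumptions for illustration:

```python
import statistics

replicates = [10.12, 10.08, 10.15, 10.10, 10.11]   # hypothetical assay results
true_value = 10.00                                 # accepted reference value

mean = statistics.mean(replicates)
s = statistics.stdev(replicates)                   # sample standard deviation

bias_pct = 100 * (mean - true_value) / true_value  # accuracy: relative error
rsd_pct = 100 * s / mean                           # precision: relative std dev

# Compare against hypothetical acceptance criteria from the design stage.
print(f"bias = {bias_pct:.2f}% (criterion: <= 2%)")
print(f"RSD  = {rsd_pct:.2f}% (criterion: <= 1%)")
```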

All of the data processing is achieved by digitizing the raw data, like that in Figure 8.37(a), and then treating them by computer. [Pg.331]

Data Analysis. First, the raw data must be converted to concentrations over an appropriate time span. When sample periods do not correspond to the averaging time of the exposure limit, some assumptions must be made about the unsampled periods, and it may be necessary to test the impact of various assumptions on the final decision. Next, test statistics (confidence limits, etc.) (Fig. 3) are calculated and compared to test criteria to make an inference about a hypothesis. [Pg.109]
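As an illustration of testing assumptions about unsampled periods, here is a minimal sketch of an 8 h time-weighted average computed under several assumed gap concentrations (all values are hypothetical):

```python
# Time-weighted average (TWA) when sample periods do not cover the full
# 8 h shift. The unsampled period is filled with an assumed concentration
# so its impact on the final decision can be tested.
samples = [(120, 0.45), (180, 0.60), (90, 0.30)]   # (minutes, ppm), hypothetical

sampled_min = sum(t for t, _ in samples)
unsampled_min = 8 * 60 - sampled_min               # 90 min not sampled

for assumed in (0.0, 0.45, 1.0):                   # test several assumptions
    twa = (sum(t * c for t, c in samples)
           + unsampled_min * assumed) / (8 * 60)
    print(f"assumed {assumed} ppm in gap -> 8 h TWA = {twa:.3f} ppm")
```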

Secondary sources may also exist within a company or consulting firm. These sources are usually unpublished reports or raw data collected at a prior time for another purpose. [Pg.534]

Refs. 132 and 153. Values are in metric tons. Numbers represent raw data. ... [Pg.279]

The larger the value of n, the more uniform the size distribution. Other types of distribution functions can be found in Reference 1. Distribution functions based on two parameters sometimes do not accurately match the actual distributions; in these cases a high-order polynomial fit, using multiple parameters, must be considered to obtain a better representation of the raw data. [Pg.331]
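A small sketch of such a multi-parameter polynomial fit; the size-distribution values are hypothetical, and fitting against log size is just one plausible choice:

```python
import numpy as np

# Hypothetical cumulative size-distribution raw data:
size = np.array([10, 20, 40, 60, 80, 100, 150])       # particle size, um
frac = np.array([0.05, 0.15, 0.38, 0.58, 0.72, 0.82, 0.95])  # mass fraction

# When a two-parameter function fits poorly, try a higher-order polynomial.
coeffs = np.polyfit(np.log(size), frac, deg=4)        # 5 fitted parameters
fitted = np.polyval(coeffs, np.log(size))

print(np.max(np.abs(fitted - frac)))                  # worst-case residual
```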

The results processor computes the test results from the raw data furnished by the AP and collates these results, together with the demographic patient data, into test reports. Test results falling outside normal limits are flagged on the report to speed up the diagnosis process. These data managers can also store thousands of patient reports in their current memory. Some of the more sophisticated systems also store the actual reaction curves used to determine the test results. [Pg.398]

In addition to these facilities for the supply of data in an explicit form for direct use by the system, there are also options designed for the calculation of the parameters used by the system's point generation routines. Two obvious categories of this type can be identified and are included at the top left of Figure 5. The first of these applies to the correlation of raw data and is most commonly applied to the estimation of binary interaction parameters. [Pg.76]

Raw data are repeatedly corrected by an amount determined by a correcting algorithm and checked against the constraints they must satisfy. The residuals of the constraints, which are a measure of the degree to which the constraints are not satisfied, are calculated, and the algorithm attempts to minimize these residuals. The procedure is continued until the residuals can no longer be reduced. [Pg.80]
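A minimal sketch of this idea for a single linear constraint, assuming a simple mass balance x1 + x2 - x3 = 0 and hypothetical measured flows; the damped correction step stands in for the unspecified correcting algorithm:

```python
import numpy as np

raw = np.array([10.2, 5.1, 14.7])           # hypothetical measured flows
A = np.array([[1.0, 1.0, -1.0]])            # constraint: x1 + x2 - x3 = 0

x = raw.copy()
for _ in range(100):                        # repeated small corrections
    residual = A @ x                        # how far the constraint is violated
    if np.abs(residual).max() < 1e-9:       # residuals no longer reducible
        break
    # distribute a damped fraction of the residual back onto the variables
    x -= 0.5 * (A.T @ residual).ravel() / (A * A).sum()

print(x, "residual:", (A @ x)[0])
```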

Neural nets can also be used for modeling physical systems whose behavior is poorly understood, as an alternative to nonlinear statistical techniques, e.g., to develop empirical relationships between independent and dependent variables from large amounts of raw data. [Pg.540]
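For illustration, a minimal sketch of fitting such an empirical input-output relationship with a small neural net (scikit-learn's MLPRegressor; the data are synthetic and hypothetical):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, (500, 2))            # independent variables (raw data)
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2      # hypothetical dependent response

# Fit an empirical model without assuming a functional form.
net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000).fit(X, y)
print(net.score(X, y))                      # R^2 on the training data
```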

The rapid development of microelectronics has enabled many similar measurements to be made with data-collecting systems and stored electronically. The raw data can then be downloaded to the data processing installation, where they can be plotted and evaluated at any time [1]. This applies particularly to monitoring measurements on pipelines; for intensive measurements, see Section 3.7. Figure 3-1 shows an example of a computer-aided data storage system. [Pg.79]

The methods discussed in this section extend the original concept of deriving structures from experimental NMR data in two ways. First, during the structure calculation, part of the assignment problem is solved automatically. This allows specification of the NOE data in a form closer to the raw data, which makes the refinement similar to X-ray refinement. Second, the quality of the data is assessed. The methods have recently been reviewed in more detail [64,67]. [Pg.264]

Table 2. Raw Data and Posterior Modes from Dirichlet Mixtures for a Six-Amino-Acid Segment of Nuclear Hormone Receptors.
Raw data must be analyzed and transformed into a format useful for specific purposes. Summary tables, graphs, and geographic distributions are some of the formats used for data display. Air quality information often consists of a large body of data collected at a variety of locations and over different seasons. Table 15-3 shows the tabular format used by the California Air Resources Board to reduce hourly ozone measurements to a format that shows information about compliance with air quality standards (6). The format lists location, maximum values, annual means, and the number of occurrences of hourly values above a given concentration as a function of the month of the year. One can quickly determine which areas are violating a standard, at what time of the year elevated concentrations occur, and the number of good data points collected. [Pg.227]
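A sketch of such a data reduction for synthetic hourly values; the standard, the data, and the month assignment are all hypothetical stand-ins for the Table 15-3 format:

```python
import numpy as np

rng = np.random.default_rng(1)
months = rng.integers(1, 13, 8760)          # month label for each hour
hourly = rng.gamma(2.0, 0.02, 8760)         # synthetic hourly O3 values, ppm

standard = 0.12                             # hypothetical 1 h standard, ppm
for m in range(1, 13):
    vals = hourly[months == m]
    print(m, f"max={vals.max():.3f}", f"mean={vals.mean():.3f}",
          f"hours>std={np.sum(vals > standard)}", f"n={len(vals)}")
```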

To determine the deterioration in component performance and efficiency, the values must be corrected to a reference plane. These corrected measurements will be referred to different reference planes depending upon the point being investigated. Corrected values can be further adjusted to a transposed design value to properly evaluate the deterioration of any given component. Transposed data points are very dependent on the characteristics of the components' performance curves. To determine the characteristics of these curves, raw data points must be corrected and then plotted against representative nondimensional parameters. It is for this reason that the turbine train must be evaluated before its characteristics have been altered by component deterioration. If component data were available from the manufacturer, the task would be greatly reduced. [Pg.693]
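A hedged sketch of one common form of such correction, using quasi-nondimensional corrected flow and speed referred to ISO inlet conditions; the operating values are hypothetical, and this particular correction is an assumption for illustration, not necessarily the source's method:

```python
import math

T_REF, P_REF = 288.15, 101.325              # ISO reference inlet: K, kPa

def corrected_point(mass_flow, speed, T_inlet, P_inlet):
    """Correct raw data to reference conditions via theta and delta."""
    theta = T_inlet / T_REF                 # temperature ratio
    delta = P_inlet / P_REF                 # pressure ratio
    m_corr = mass_flow * math.sqrt(theta) / delta   # corrected flow
    n_corr = speed / math.sqrt(theta)               # corrected speed
    return m_corr, n_corr

# Hypothetical raw operating point: kg/s, rpm, K, kPa
print(corrected_point(mass_flow=68.0, speed=5100.0,
                      T_inlet=303.15, P_inlet=99.0))
```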

There is an improved signal-to-noise ratio in the raw data. This can be seen in the E x N(E) form of the data in Figure 3-...

Relative photoionization cross sections for molecules do not vary greatly between each other in this wavelength region, and therefore the peak intensities in the raw data approximately correspond to the relative abundances of the molecular species. Improvement in quantification for both photoionization methods is straightforward with calibration. Sampling the majority neutral channel imposes much less stringent requirements on calibrants than direct ion production from surfaces by energetic particles; this is especially important for the analysis of surfaces, interfaces, and unknown bulk materials. [Pg.563]

By examining the sputtered neutral particles (the majority channel) using nonselective photoionization and TOFMS, SALI achieves relatively uniform sensitivity with semiquantitative raw data and overcomes many of the problems associated with SIMS. Estimates of sensitivity vary depending on the lateral spatial resolution for a commercial liquid-metal (Ga) ion gun. Calculated values for SALI... [Pg.567]

Determination of concentration profiles from the raw data can be more complicated when protons are used as the incident particles. The energy loss (dE/dx) is smaller for protons, and straggling effects are more important. The observed profile is a convolution of the actual concentration profile C(x) with a depth resolution function that broadens with increasing x roughly as the square root of x; hence, resolution deteriorates with depth. However, near-surface resolution for resonant profiling may be on the order of tens of angstroms. [Pg.684]

In addition to qualitative analysis of nearly all the elements of the periodic table, EEL spectra also enable determination of the concentration of a single element that is part of the transmitted volume and hence gives rise to a corresponding ionization edge. As in all comparable spectroscopic techniques, for quantification the net edge signal, which is related to the number N of excited atoms, must be extracted from the raw data. The net intensity I_k of the kth ionization shell of an individual element is directly connected to this number N multiplied by the partial cross-section of ionization and the intensity I_0 of the incident electron beam, i.e. ... [Pg.65]

Uses raw data from field tests to compute hydraulic conductivity; the computed value is evaluated by the expert system for its correctness with regard to these considerations: site-specific geological characteristics, validity of test procedures, accuracy of the raw data, and the computational method. The system is written in Arity-Prolog on a PC. [Pg.292]

Appendix III of WASH-1400 presents a database drawn from 52 references that were used in the study. It includes raw data, notes on test and maintenance time and frequency, human-reliability estimates, aircraft-crash probabilities, frequencies of initiating events, and information on common-cause failures. Using this information, it assesses the range for each failure rate. [Pg.153]



Archiving of Electronic Raw Data

High-resolution (a) raw data spectrum and (b) accurate masses calculated from internal calibration table

Processing the answers from raw to clean data

Raw Data Checking

Raw Data Preprocessing

Raw data for a de-alkylation plant

Raw data notebooks

Raw data processing

Raw data sheets

Raw data, definition

Raw material data

Test results and raw data
