
Data normalization process

Because the neural network requires inputs that lie within the [0, 1] interval, the input data must be normalized, which is done here by applying fuzzy theory. The processing rule defines a unit set and, for each unit, a corresponding factor set... [Pg.1207]
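The scaling step referred to above can be illustrated independently of the fuzzy-set formalism, which survives only in fragmentary form in the excerpt. A minimal min-max normalization onto [0, 1] might look like the following Python/NumPy sketch; the function name, the column-wise treatment, and the example numbers are assumptions rather than part of the source.

    import numpy as np

    def minmax_01(x, eps=1e-12):
        # Scale each column of x linearly onto the [0, 1] interval.
        x = np.asarray(x, dtype=float)
        lo = x.min(axis=0)
        hi = x.max(axis=0)
        return (x - lo) / np.maximum(hi - lo, eps)   # eps guards constant columns

    # Three hypothetical input variables of very different magnitudes
    raw = np.array([[1.0, 200.0, 0.02],
                    [2.5, 180.0, 0.05],
                    [4.0, 260.0, 0.01]])
    print(minmax_01(raw))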

Data preprocessing. The data should preferably be evenly distributed over the entire operating range of the process. If the process data are noisy, the noise should preferably be removed by using an appropriate smoothing or filtering technique. In addition, outliers should be removed, since they would affect the accuracy and prediction capability of the model. Data normalization. Process values usually take arbitrary values, and it can be expected that they will not all be of the same magnitude. It is therefore recommended to scale all process values between 0.1 and 0.9 to avoid saturation of the hidden nodes and to ensure that all process variables have an impact on the output. [Pg.371]
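A possible sketch of these preprocessing steps in Python with NumPy is shown below; the moving-average filter, the 3-sigma outlier rule, and the function names are illustrative choices, not something prescribed by the source.

    import numpy as np

    def smooth(y, window=5):
        # Simple moving-average filter, one of many possible smoothing choices.
        y = np.asarray(y, dtype=float)
        kernel = np.ones(window) / window
        return np.convolve(y, kernel, mode="same")

    def remove_outliers(y, n_sigma=3.0):
        # Mark points far from the mean as NaN so they can be dropped later.
        y = np.asarray(y, dtype=float)
        z = (y - y.mean()) / y.std()
        return np.where(np.abs(z) < n_sigma, y, np.nan)

    def scale_01_09(y):
        # Linear rescaling into [0.1, 0.9] to avoid saturating the hidden nodes.
        y = np.asarray(y, dtype=float)
        lo, hi = np.nanmin(y), np.nanmax(y)
        span = hi - lo if hi > lo else 1.0
        return 0.1 + 0.8 * (y - lo) / span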

Lognormal distribution: similar to a normal distribution, except that the logarithms of the values of the random variable are normally distributed. Typical applications are metal fatigue, electrical insulation life, time-to-repair data, and continuous-process (i.e., chemical process) failure and repair data. [Pg.230]
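In the standard notation (which the excerpt does not spell out), X is lognormal when ln X is normally distributed with mean μ and standard deviation σ, giving the density

    f(x) = \frac{1}{x\,\sigma\sqrt{2\pi}} \exp\!\left( -\frac{(\ln x - \mu)^{2}}{2\sigma^{2}} \right), \qquad x > 0,

so that μ and σ refer to the mean and standard deviation of ln X, not of X itself.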

With the single-channel method, data are acquired in series, or one channel at a time. Normally, a series of data points is established for each machine-train and data are acquired from each point in a measurement route. While this approach is more than adequate for routine monitoring of relatively simple machines, it is based on the assumption that the machine's dynamics and the resultant vibration profile are constant throughout the entire data acquisition process. This approach hinders the ability to evaluate real-time relationships between measurement points on the machine-train and variations in process parameters such as speed, load, pressure, etc. [Pg.687]

Trend data that are not properly normalized for speed, load, and process variables are of little value. Since load and process-variable normalization requires a little more time during the data-acquisition process, many programs do not perform these adjustments. If this is the case, it is best to discontinue the use of trends altogether. [Pg.733]

To address this situation, a data interpretation system was constructed to monitor and detect changes in the second stage that will significantly affect the product quality. It is here that critical properties are imparted to the process material. Intuitively, if the second stage can be monitored to anticipate shifts in normal process operation or to detect equipment failure, then corrective action can be taken to minimize these effects on the final product. One of the limitations of this approach is that disturbances that may affect the final product may not manifest themselves in the variables used to develop the reference model. The converse is also true—that disturbances in the monitored variables may not affect the final product. However, faced with few choices, the use of a reference model using the process data is a rational approach to monitor and to detect unusual process behavior, to improve process understanding, and to maintain continuous operation. [Pg.84]

To construct the reference model, the interpretation system required routine process data collected over a period of several months. Cross-validation was applied to detect and remove outliers. Only data corresponding to normal process operations (that is, when top-grade product is made) were used in the model development. As stated earlier, the system ultimately involved two analysis approaches, both reduced-order models that capture dominant directions of variability in the data. A PLS analysis using two loadings explained about 60% of the variance in the measurements. A subsequent PCA analysis on the residuals showed that five principal components explain 90% of the residual variability. [Pg.85]
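As an illustration of this two-stage modeling idea, the sketch below fits a two-component PLS model and then a PCA to the X-residuals using scikit-learn. The actual reference model and data are not reproduced in the excerpt, so the synthetic data, the variable names, and the choice to mean-center without scaling are all assumptions.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)

    # Synthetic stand-in for the routine process data: 200 runs, 10 variables
    X = rng.normal(size=(200, 10))
    y = X[:, :3].sum(axis=1) + 0.1 * rng.normal(size=200)   # made-up quality variable

    Xc = X - X.mean(axis=0)                     # mean-center (scaling omitted here)
    pls = PLSRegression(n_components=2, scale=False).fit(Xc, y)

    T = pls.transform(Xc)                       # X-scores
    P = pls.x_loadings_                         # X-loadings
    E = Xc - T @ P.T                            # residuals of the 2-component PLS model

    pls_explained = 1.0 - (E ** 2).sum() / (Xc ** 2).sum()
    pca = PCA(n_components=5).fit(E)
    pca_explained = pca.explained_variance_ratio_.sum()

    print(f"PLS (2 components) explains {pls_explained:.0%} of X")
    print(f"PCA (5 components) explains {pca_explained:.0%} of the residuals")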

EXAFS data are processed to obtain radial structure functions (RSFs). First, the non-EXAFS components are subtracted from the data. Pre-edge absorption is removed using the Victoreen correction (International Tables for Crystallography, 1969) of the form Aλ³ + Bλ⁴. The monotonic decrease of absorbance beyond the edge, called the photoelectric decay, is subtracted out after approximating it either by a second-degree polynomial or a spline function (Eccles, 1978). The normalized χ(k) is then expressed as... [Pg.96]
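The excerpt breaks off before the expression for χ(k); the following Python/NumPy sketch is therefore only a schematic of the steps it describes. A straight-line pre-edge fit stands in for the Victoreen correction, a low-order polynomial models the post-edge decay, and (μ − μ0)/μ0 is used as one common normalization convention; the function name, the masks, and the 0.5123 conversion factor (k in Å⁻¹ for E − E0 in eV) are assumptions about the implementation, not taken from the source.

    import numpy as np

    K_PER_SQRT_EV = 0.5123   # sqrt(2*m_e)/hbar, giving k in 1/Angstrom for E in eV

    def chi_of_k(energy, mu, e0, pre_mask, post_deg=2):
        # Pre-edge background: simple linear fit over the pre-edge region.
        pre = np.polyfit(energy[pre_mask], mu[pre_mask], 1)
        mu = mu - np.polyval(pre, energy)

        # Post-edge atomic background mu0: low-order polynomial above the edge.
        post = energy > e0
        bkg = np.polyfit(energy[post], mu[post], post_deg)
        mu0 = np.polyval(bkg, energy[post])

        # Convert energy above the edge to photoelectron wavenumber k.
        k = K_PER_SQRT_EV * np.sqrt(energy[post] - e0)
        chi = (mu[post] - mu0) / mu0         # one common normalization convention
        return k, chi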

TABLE 2.1. Checklist of Data Normally Included on a Process Flowsheet... [Pg.20]

Data on the regeneration characteristics of the platinum-rhenium catalyst are obviously not as extensive as on the conventional platinum catalysts. Some difficulties have been reported. However, the best performance of the platinum-rhenium catalyst requires modification of the normal process, regeneration, and rejuvenation conditions. Furthermore, these catalysts have been regenerated successfully both in laboratory and commercial unit equipment. [Pg.115]

The data processing can be divided into three phases. Phase 1 is the removal of poor-quality spectra with an automated routine. Phase 2 is the preprocessing of the spectra that passed the quality test; this usually entails some type of baseline correction and normalization. Phase 3 is multivariate image reconstruction, where the spectra are classified and reproduced as color points... [Pg.212]
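A minimal Phase 2 sketch in Python with NumPy, assuming a whole-spectrum polynomial baseline and vector (unit-norm) normalization; both choices, along with the function and variable names, are illustrative stand-ins for whatever routine was actually used.

    import numpy as np

    def preprocess_spectrum(wavenumber, intensity, baseline_deg=1):
        # Crude baseline correction: fit a low-order polynomial to the whole
        # spectrum and subtract it, then normalize to unit vector length.
        wavenumber = np.asarray(wavenumber, dtype=float)
        intensity = np.asarray(intensity, dtype=float)
        coeffs = np.polyfit(wavenumber, intensity, baseline_deg)
        corrected = intensity - np.polyval(coeffs, wavenumber)
        norm = np.linalg.norm(corrected)
        return corrected / norm if norm > 0 else corrected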

An example of the most common control chart, the Shewhart chart, is shown in Fig. 8-46. It merely consists of measurements plotted versus sample number, with control limits that indicate the range for normal process operation. The plotted data are either an individual measurement x or the sample mean x̄ if more than one sample is measured at each sampling instant. The sample mean for k samples is calculated as x̄ = (x₁ + x₂ + ... + xₖ)/k. [Pg.36]

The major objective in SPC is to use process data and statistical techniques to determine whether the process operation is normal or abnormal. The SPC methodology is based on the fundamental assumption that normal process operation can be characterized by random variations around a mean value. The random variability is caused by the cumulative effects of a number of largely unavoidable phenomena such as electrical measurement noise, turbulence, and random fluctuations in feedstock or catalyst preparation. If this situation exists, the process is said to be in a state of statistical control (or in control), and the control chart measurements tend to be normally distributed about the mean value. By contrast, frequent control chart violations would indicate abnormal process behavior, or an out-of-control situation. Then a search would be initiated to attempt to identify the assignable cause or special cause of the abnormal behavior. [Pg.37]
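A minimal sketch of the control-limit logic behind these two excerpts, in Python with NumPy. The 3-sigma limits estimated from a reference set of in-control data and the synthetic numbers are assumptions, and the example ignores subgrouping and the additional run rules used in practice.

    import numpy as np

    def shewhart_limits(x, n_sigma=3.0):
        # Center line and control limits from a reference set of in-control data.
        x = np.asarray(x, dtype=float)
        center = x.mean()
        sigma = x.std(ddof=1)
        return center, center - n_sigma * sigma, center + n_sigma * sigma

    def out_of_control(x_new, center, lcl, ucl):
        # Flag measurements that fall outside the control limits.
        x_new = np.asarray(x_new, dtype=float)
        return (x_new < lcl) | (x_new > ucl)

    rng = np.random.default_rng(1)
    reference = rng.normal(loc=50.0, scale=2.0, size=100)   # synthetic in-control data
    center, lcl, ucl = shewhart_limits(reference)
    print(out_of_control([49.5, 58.0, 41.0], center, lcl, ucl))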

FIGURE 8.3 Pre-processing data normalization. Normalization reduces plate-to-plate and well-to-well variations, allowing uniform analysis of the entire HCS dataset in other modules. [Pg.152]
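Plate-wise normalization of high-content screening (HCS) data is often implemented as a per-plate z-score; the sketch below in Python with NumPy assumes a flat table of well measurements with a plate identifier, none of which comes from the figure itself.

    import numpy as np

    def zscore_by_plate(values, plate_ids):
        # Per-plate z-score: removes plate-to-plate offsets so that wells from
        # different plates can be analyzed together.
        values = np.asarray(values, dtype=float)
        plate_ids = np.asarray(plate_ids)
        out = np.empty_like(values)
        for plate in np.unique(plate_ids):
            m = plate_ids == plate
            out[m] = (values[m] - values[m].mean()) / values[m].std(ddof=1)
        return out

    # Hypothetical example: two plates with four wells each
    vals = [100, 120, 90, 110, 200, 230, 190, 220]
    plates = ["A", "A", "A", "A", "B", "B", "B", "B"]
    print(zscore_by_plate(vals, plates))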

It is worth discussing some aspects of precision that relate to the way in which the data are processed. Typically, m.e.p. is estimated as the average of ten measurements on a single site. These values often deviate from a normal distribution and should not be used as replicates (means of samples of 100 or more are normally distributed). To obtain replicates, independent determinations of o.p.d. are made on different sections, using the mean value in each case. [Pg.131]

If X̄ and s_X are determined from a set of normal process runs and a subsequently measured value of X falls more than 2s_X away from X̄, the chances are that something has changed in the process—there is less than a 10% chance that normal scatter can account for the deviation. If the deviation is greater than 3s_X, there is less than a 1% chance that normal scatter is the cause. The exact percentages depend on how the measured values are distributed about the mean—whether they follow a Gaussian distribution, for example—and how many points are in the data set used to calculate the mean and standard deviation. [Pg.31]
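Under the Gaussian assumption mentioned at the end of the paragraph, the chance that normal scatter alone produces a given deviation can be computed directly. A small Python illustration (the function name is hypothetical):

    import math

    def chance_of_normal_scatter(deviation, s):
        # Two-sided probability that a Gaussian-distributed measurement lies at
        # least this far from the mean purely by chance.
        z = abs(deviation) / s
        return math.erfc(z / math.sqrt(2.0))

    print(chance_of_normal_scatter(2.0, 1.0))   # about 0.046, i.e. less than 10%
    print(chance_of_normal_scatter(3.0, 1.0))   # about 0.0027, i.e. less than 1%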

The distribution of vanadyl vs. nickel porphyrins is noteworthy (Table IV). An overview of the data from Site 368 shows that normal porphyrin diagenesis has progressed to the nickel DPEP porphyrin stage at +15.87 m (IS). At greater depth, the normal process is interrupted by high thermal stress with concomitant formation of vanadyl porphyrins and decomposition of the native nickel porphyrins. Further below the sill (−8.39 m) only slightly altered nickel porphyrins, again predominantly... [Pg.173]

