Big Chemical Encyclopedia


Process Data Representation and Analysis

The operation of any chemical process is ultimately based on the measurement of process variables—temperatures, pressures, flow rates, concentrations, and so on. It is sometimes possible to measure these variables directly, but, as a rule, indirect techniques must be used. [Pg.22]

Suppose, for example, that you wish to measure the concentration, C, of a solute in a solution. To do so, you normally measure a quantity, X—such as a thermal or electrical conductivity, a light absorbance, or the volume of a titer—that varies in a known manner with C, and then calculate C from the measured value of X. The relationship between C and X is determined in a separate calibration experiment in which solutions of known concentration are prepared and X is measured for each solution. [Pg.22]
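As a minimal sketch of such a calibration (the data values and function names below are hypothetical, not taken from the text), one can fit a straight line X = aC + b to standards of known concentration and then invert it to estimate C from a measured X:

```python
def fit_linear(cs, xs):
    """Least-squares fit of the calibration line X = a*C + b.

    cs: known concentrations of the standards; xs: measured signals.
    Returns the slope a and intercept b.
    """
    n = len(cs)
    mean_c = sum(cs) / n
    mean_x = sum(xs) / n
    a = (sum((c - mean_c) * (x - mean_x) for c, x in zip(cs, xs))
         / sum((c - mean_c) ** 2 for c in cs))
    b = mean_x - a * mean_c
    return a, b

def concentration_from_signal(x, a, b):
    """Invert the calibration line to estimate C from a measured X."""
    return (x - b) / a

# Hypothetical calibration standards: known concentrations and measured signals
cs = [0.0, 1.0, 2.0, 3.0, 4.0]        # e.g. mol/L
xs = [0.05, 1.02, 2.01, 2.98, 4.03]   # e.g. absorbance units

a, b = fit_linear(cs, xs)
c_est = concentration_from_signal(2.5, a, b)  # estimate C for a new reading
```

This mirrors the two-step procedure described above: the calibration experiment determines the C-X relationship once, and routine measurements then use only the inverted relation.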

Consider a calibration experiment in which a variable, y, is measured for several values of another variable, x. [Pg.22]

In the terms of the first paragraph, y might be a reactant concentration or some other process variable and x would be a readily measured quantity (such as conductivity) whose value correlates with the value of y. Our object is to use the calibration data to estimate the value of y for a value of x between tabulated points (interpolation) or outside the range of the table data (extrapolation). [Pg.23]

A number of interpolation and extrapolation methods are commonly used, including two-point linear interpolation, graphical interpolation, and curve fitting. Which one is most appropriate depends on the nature of the relationship between x and y. [Pg.23]
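Two-point linear interpolation, the simplest of these methods, can be sketched as follows (the table points are illustrative only):

```python
def interpolate(x, x1, y1, x2, y2):
    """Two-point linear interpolation between (x1, y1) and (x2, y2):
        y = y1 + (x - x1) * (y2 - y1) / (x2 - x1)
    Supplying an x outside [x1, x2] gives linear extrapolation instead.
    """
    return y1 + (x - x1) * (y2 - y1) / (x2 - x1)

# Example with two adjacent tabulated points (1.0, 10.0) and (2.0, 20.0):
y_mid = interpolate(1.5, 1.0, 10.0, 2.0, 20.0)  # between the points
y_out = interpolate(2.5, 1.0, 10.0, 2.0, 20.0)  # beyond the table range
```

The same formula serves both uses discussed above; only the position of x relative to the tabulated points distinguishes interpolation from extrapolation.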


The STEP procedure, described by Hendrick and Benner (1987), was developed from a research program on incident investigation methods. STEP is based on the multiple events sequence method and is an investigative process that structures data collection, representation, and analysis. [Pg.274]

As discussed and illustrated in the introduction, data analysis can be conveniently viewed in terms of two categories of numeric-numeric manipulation, input and input-output, both of which transform numeric data into more valuable forms of numeric data. Input manipulations map from input data without knowledge of the output variables, generally to transform the input data to a more convenient representation that has unnecessary information removed while retaining the essential information. As presented in Section IV, input-output manipulations relate input variables to numeric output variables for the purpose of predictive modeling and may include an implicit or explicit input transformation step for reducing input dimensionality. When applied to data interpretation, the primary emphasis of input and input-output manipulation is on feature extraction, driving extracted features from the process data toward useful numeric information on plant behaviors. [Pg.43]
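As a rough illustration of the two categories (the functions below are hypothetical sketches, not the methods of the text): an input manipulation transforms measurements without reference to any output variable, while an input-output manipulation fits a predictive relation between them:

```python
def standardize(column):
    """Input manipulation: rescale one input variable to zero mean and
    unit spread, without using any output variable."""
    n = len(column)
    mean = sum(column) / n
    var = sum((v - mean) ** 2 for v in column) / n
    std = var ** 0.5 or 1.0  # guard against a constant column
    return [(v - mean) / std for v in column]

def fit_slope(xs, ys):
    """Input-output manipulation: least-squares slope relating input x to
    output y (intercept-free, for simplicity of illustration)."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

scaled = standardize([1.0, 2.0, 3.0])       # input-only transformation
slope = fit_slope([1.0, 2.0, 3.0],
                  [2.0, 4.0, 6.0])          # predictive input-output model
```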

The process of field validation and testing of models was presented at the Pellston conference as a systematic analysis of errors (6). In any model calibration, verification, or validation effort, the model user is continually faced with the need to analyze and explain differences (i.e., errors, in this discussion) between observed data and model predictions. This requires assessments of the accuracy and validity of observed model input data, parameter values, system representation, and observed output data. Figure 2 schematically compares the model and the natural system with regard to inputs, outputs, and sources of error. Clearly there are possible errors associated with each of the categories noted above, i.e., input, parameters, system representation, and output. Differences in each of these categories can have dramatic impacts on the conclusions of the model validation process. [Pg.157]

The choices of representation, similarity measure, and selection method are not independent of each other. For example, some types of similarity measure (specifically the association coefficients, as exemplified by the well-known Tanimoto coefficient) seem better suited than others (such as Euclidean distance) to the processing of fingerprint data [12]. Again, the partition-based methods for compound selection that are discussed below can only be used with low-dimensionality representations, thus precluding the use of fingerprint representations (unless some drastic form of dimensionality reduction is performed, as advocated by Agrafiotis [13]). Thus, while this chapter focuses upon selection methods, the reader should keep in mind the representations and the similarity measures that are being used; recent, extended reviews of these two important components of diversity analysis are provided by Brown [14] and by Willett et al. [15]. [Pg.116]
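The Tanimoto coefficient mentioned above has a simple form for binary fingerprints, |A ∩ B| / |A ∪ B|; a minimal sketch, with each fingerprint represented as the set of its on-bit positions:

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto coefficient between two binary fingerprints, each given
    as a set of on-bit positions: |A ∩ B| / |A ∪ B|."""
    union = len(fp_a | fp_b)
    if union == 0:
        return 1.0  # convention: two all-zero fingerprints are identical
    return len(fp_a & fp_b) / union

# Two hypothetical fingerprints sharing bits 2 and 3:
sim = tanimoto({1, 2, 3}, {2, 3, 4})
```

The coefficient ranges from 0 (no shared bits) to 1 (identical fingerprints), which is one reason it pairs naturally with high-dimensional fingerprint representations.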

Phase equilibrium information characterizes partitioning between phases for a system and is important for describing separation processes. For equilibrium-limited processes, these values dictate the limits for separation in a single stage. For mass transfer-limited processes, the partitioning between phases is an important parameter in the analysis. The data can be presented in tabular form, but this approach is restricted in application, since an analysis typically requires phase equilibrium values that are not explicitly listed in the table. Graphical representations and computational methods are therefore usually more useful. [Pg.42]
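As a sketch of the computational approach (the table values below are invented for illustration), tabulated equilibrium data can be interpolated to obtain values that are not explicitly listed:

```python
import bisect

# Hypothetical binary-system equilibrium table: liquid mole fraction x
# versus equilibrium vapor mole fraction y (values are illustrative only).
x_tab = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
y_tab = [0.0, 0.38, 0.61, 0.77, 0.90, 1.0]

def y_star(x):
    """Linearly interpolate the equilibrium vapor fraction for a liquid
    fraction x that falls between tabulated entries."""
    i = bisect.bisect_right(x_tab, x) - 1
    i = max(0, min(i, len(x_tab) - 2))  # clamp to the last table interval
    x1, x2 = x_tab[i], x_tab[i + 1]
    y1, y2 = y_tab[i], y_tab[i + 1]
    return y1 + (x - x1) * (y2 - y1) / (x2 - x1)

y_est = y_star(0.3)  # x = 0.3 is not a row of the table
```

A spline or a fitted thermodynamic model would serve the same role with smoother behavior; linear interpolation is shown only because it follows directly from the table.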

The next bottleneck in the process was the collection, storage, and analysis of the data. The whole data manipulation area, known as informatics, became a major source of research into the storage and analysis of data. It was also clear that the presentation of data was a problem, since complete data sets often contained results as a function of a whole series of variables, which meant trying to find trends or an optimum in multidimensional data sets. To represent such data requires displays as surfaces or even more complex representations, which are certainly difficult for the unaccustomed to interpret and understand. Consequently, data-mining tools were employed and further developed to help scan the data sets for relationships. [Pg.73]





© 2024 chempedia.info