
Sequential Processing of Measurements

For the sequential processing of measurement data, we consider the total set of measurements partitioned into two smaller subsets containing (g - c) and c measurements, respectively. Partitioning the matrix A along these lines, we have [Pg.97]
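As an illustrative sketch of this partitioning (the block names A_1, A_2, y_1, y_2 and the row dimension m are assumptions, not the source's notation), the constraint matrix and the measurement vector split as

\[
A = \begin{bmatrix} A_1 & A_2 \end{bmatrix},
\qquad
y = \begin{bmatrix} y_1 \\ y_2 \end{bmatrix},
\qquad
A_1 \in \mathbb{R}^{m \times (g-c)},\;
A_2 \in \mathbb{R}^{m \times c},
\]

so that the balance residuum can be written as \( A y = A_1 y_1 + A_2 y_2 \).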

Introducing this partitioning into the equations, we have for the covariance matrix of the residuum in the balances [Pg.98]

The right-hand side of Eq. (6.25) represents the covariance of the residuum when both blocks of measurement equations are used. The first term on the right-hand side is the covariance obtained from the information provided by the first set of measurements; the second term arises as a correction to the estimate due to the incorporation of new measurements (new information) as it becomes available. [Pg.98]
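In generic form (a sketch with assumed symbols: V for the residual covariance and \(\Sigma_1\), \(\Sigma_2\) for the covariance matrices of the two measurement blocks; not necessarily Eq. (6.25) verbatim), this structure reads

\[
V = A_1 \Sigma_1 A_1^{\mathsf{T}} + A_2 \Sigma_2 A_2^{\mathsf{T}},
\]

where the first term carries the information of the first block and the second term is the correction contributed by the newly incorporated block.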

This procedure can be extended to the case of an arbitrary number of blocks of information (measurements). In this case, the covariance matrix that uses the information provided by the jth block is given by [Pg.98]

This formula corresponds to adding new observations in the sequential treatment. If the covariance matrix of the reduced set of measurements is to be recovered from the augmented one, then solving for the (j - 1)th covariance matrix from Eq. (6.27), we have [Pg.98]
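A sketch of the recursion and its rearrangement, in the same assumed notation (the book's Eqs. (6.26) and (6.27) may differ in detail):

\[
V_j = V_{j-1} + A_j \Sigma_j A_j^{\mathsf{T}}
\qquad\Longleftrightarrow\qquad
V_{j-1} = V_j - A_j \Sigma_j A_j^{\mathsf{T}},
\]

so that adding the jth block updates the covariance, while the covariance of the reduced set is recovered by subtracting the same term from the augmented one.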


In this chapter the mathematical formulation for the sequential processing of both constraints and measurements is presented. Some interesting features are discussed that make this approach well suited for the analysis of a set of measurements and for gross error identification, the subject of the next chapter. [Pg.112]

This chapter discussed the idea of exploiting the sequential processing of information (both constraints and measurements), to allow computations to be done in a recursive way without solving the full-scale reconciliation problem. [Pg.124]

A more systematic approach was developed by Romagnoli and Stephanopoulos (1981) and Romagnoli (1983) to analyze a set of measurement data in the presence of gross errors. The method is based on the idea of exploiting the sequential processing of the information (constraints and measurements), thus allowing the computations to be done in a recursive way without solving the full-scale reconciliation problem. [Pg.129]

The previous approach for solving the reconciliation problem allows the calculation, in a systematic, recursive way, of the residual covariance matrix after a measurement is added to or deleted from the original adjustment. A combined procedure can be devised by using the sequential treatment of measurements together with the sequential processing of the constraints. [Pg.137]

Case 1. A bias present in the measurement of /2 was identified by the sequential processing of the measurements (see Example 7.2). Consequently, we augment the vector of parameters of the original problem by adding an additional component to represent the uncertain parameter (bias term). [Pg.142]
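As a generic sketch of this augmentation (symbols are illustrative, not those of Example 7.2), the measurement model of the biased variable and the augmented parameter vector can be written as

\[
y_i = x_i + \delta + \varepsilon_i,
\qquad
\theta_{\mathrm{aug}} = \begin{bmatrix} \theta \\ \delta \end{bmatrix},
\]

where \(\delta\) is the bias term estimated jointly with the original parameters \(\theta\).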

In this chapter, the general problem of joint parameter estimation and data reconciliation was discussed. First, the typical parameter estimation problem was analyzed, in which the independent variables are error-free, and aspects related to the sequential processing of the information were considered. Later, the more general formulation in terms of the error-in-variable method (EVM), where measurement errors in all variables are considered in the parameter estimation problem, was stated. Alternative solution techniques were briefly discussed. Finally, joint parameter-state estimation in dynamic processes was considered and two different approaches, based on filtering techniques and nonlinear programming techniques, were discussed. [Pg.198]
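For reference, the error-in-variables formulation mentioned above can be sketched in its standard generic form (assumed symbols; not a transcription of the chapter's equations):

\[
\min_{\hat{x},\,\theta}\; (y - \hat{x})^{\mathsf{T}} \Sigma^{-1} (y - \hat{x})
\quad \text{subject to} \quad f(\hat{x}, \theta) = 0,
\]

where y collects the measurements of all variables, \(\hat{x}\) their reconciled estimates, \(\Sigma\) the measurement error covariance, and \(f(\cdot)\) the process model relating states and parameters.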

If an error probability of 0.1 is considered, then 12.03 > 9.24 (from statistical tables) and one may say that the inconsistency is important at this error probability level. After sequential processing of the measurements, as shown in Table 10, the feed temperature and the reboiler and condenser duties are suspected to contain gross errors. Since the feed temperature and the reboiler duty do not appear in separate equations, it is difficult to isolate the gross error when it occurs in one or both of the duties. In this case, we need... [Pg.266]
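The critical value 9.24 quoted above is consistent with a chi-square distribution with 5 degrees of freedom at an error probability of 0.1; the degrees of freedom are an inference from the number itself, since the excerpt does not state them. A minimal check with SciPy:

# Hedged sketch: reproduce the chi-square critical value quoted in the text.
# The 5 degrees of freedom are an assumption inferred from the value 9.24.
from scipy.stats import chi2

alpha = 0.1                     # error probability used in the excerpt
dof = 5                         # assumed degrees of freedom
critical = chi2.ppf(1.0 - alpha, dof)
print(f"chi-square critical value: {critical:.2f}")          # ~9.24

test_statistic = 12.03          # value reported in the excerpt
print("gross error suspected:", test_statistic > critical)   # True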

Is there any other approach or concept that can directly measure protein amount in the tissue section? Ten years ago, Roth et al.38 documented a novel method, named the Midwestern assay. This method uses two chromogens, one soluble and one insoluble, in the IHC staining process to produce soluble and insoluble reaction products sequentially. The soluble IHC product is used to measure the amount of antigen (protein) by spectrophotometry, while the insoluble product indicates the localization of the protein in the tissue section. Their experimental results demonstrated that soluble reac-... [Pg.82]

Method validation is carried out to provide objective evidence that a method is suitable for a given application. A formal assessment of the validation information against the measurement requirements specification and other important method performance parameters is therefore required. Although validation is described as a sequential process, in reality it can involve more than one iteration to optimize some performance parameters, e.g. if a performance parameter is outside the required limits, method improvement followed by revalidation is needed. [Pg.92]

The design of the mass spectrometer may influence its use in a particular kind of measurement. The study of electronic state-specific ions and their reactions has mainly been carried out using the GIB method. Metastable ions (ions produced by the ionization process but decomposing on the way to detection) can be observed in many Type (1) mass spectrometers, and they aid our understanding of the ionization process and of ion stability. Sequential reactions and kinetic studies of ion-molecule reactions are difficult with the simpler Type (1) mass spectrometers, so more complex hybrid mass spectrometers have to be used. The ions observed micro- or milliseconds after the ionization process may or may not be the same as the ions observed seconds afterwards, which is a limitation of Type (1) mass spectrometers. [Pg.349]

The sequential procedure can be implemented on-line, in real time, for any processing plant without much computational effort. Furthermore, by sequentially deleting one measurement at a time, it is possible to quantify the effect of that measurement on the reconciliation procedure, making this approach very suitable for gross error detection/identification, as discussed in the next chapter. [Pg.124]
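A minimal numerical sketch of this idea follows (illustrative only: it uses a generic linear reconciliation test on hypothetical data, not the book's exact recursive formulas). Deleting one measurement at a time, projecting the constraints so that the deleted variable becomes unobservable, and recomputing the residual test quantifies how much each measurement contributes to the inconsistency.

# Illustrative sketch of sequential measurement deletion for gross error
# screening in linear data reconciliation (assumed generic formulation,
# not the source's recursive algorithm). All data below are hypothetical.
import numpy as np
from scipy.linalg import null_space

# Mass-balance constraints A @ x = 0 for a small hypothetical flow network.
A = np.array([[1.0, -1.0, -1.0,  0.0],
              [0.0,  1.0,  0.0, -1.0]])
sigma = np.diag([0.1, 0.1, 0.1, 0.1])     # measurement error covariance
y = np.array([10.0, 6.0, 4.0, 7.2])       # measurements; y[3] carries a bias

def residual_test(C, S, z):
    """Statistic r^T V^{-1} r with r = C @ z and V = C @ S @ C^T."""
    r = C @ z
    V = C @ S @ C.T
    return float(r @ np.linalg.solve(V, r))

print(f"all measurements: {residual_test(A, sigma, y):.2f}")

# Delete one measurement at a time: project the constraints onto the left
# null space of the deleted column, then recompute the test statistic.
for i in range(len(y)):
    keep = [j for j in range(len(y)) if j != i]
    A_u = A[:, [i]]                        # column of the deleted measurement
    Q = null_space(A_u.T).T                # rows satisfy Q @ A_u = 0
    C = Q @ A[:, keep]                     # constraints with variable i unmeasured
    stat = residual_test(C, sigma[np.ix_(keep, keep)], y[keep])
    print(f"deleting measurement {i}: {stat:.2f}")

In this toy example the statistic drops to zero only when the biased measurement (index 3) is deleted, which is the kind of behavior exploited for gross error detection/identification.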

