Big Chemical Encyclopedia


Sequential processing measurements

There are two basic types of unconstrained optimization algorithms: (1) those requiring function derivatives and (2) those that do not. The nonderivative methods are of interest in optimization applications because they can be readily adapted to the case in which experiments are carried out directly on the process. In such cases, an actual process measurement (such as yield) can be the objective function, and no mathematical model for the process is required. Methods that do not require derivatives are called direct methods and include the sequential simplex (Nelder-Mead) method and Powell's method. The sequential simplex method is quite satisfactory for optimization with two or three independent variables, is simple to understand, and is fairly easy to execute. Powell's method is more efficient than the simplex method and is based on the concept of conjugate search directions. [Pg.744]
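The derivative-free character of the sequential simplex method can be seen in a minimal sketch of its basic moves (reflection, expansion, contraction, shrink). The "process yield" surface below is a hypothetical analytic stand-in; in a real application the objective would be an actual plant measurement, exactly as the passage describes.

```python
import numpy as np

def nelder_mead(f, x0, step=0.5, tol=1e-9, max_iter=1000):
    """Minimal sequential-simplex (Nelder-Mead) search: derivative-free,
    driven only by function evaluations."""
    n = len(x0)
    simplex = [np.asarray(x0, dtype=float)]
    for i in range(n):                      # n extra vertices around x0
        p = simplex[0].copy()
        p[i] += step
        simplex.append(p)
    for _ in range(max_iter):
        simplex.sort(key=f)
        best, second_worst, worst = simplex[0], simplex[-2], simplex[-1]
        if abs(f(worst) - f(best)) < tol:
            break
        centroid = np.mean(simplex[:-1], axis=0)
        refl = centroid + (centroid - worst)            # reflect worst vertex
        if f(refl) < f(best):
            exp = centroid + 2.0 * (centroid - worst)   # try expanding further
            simplex[-1] = exp if f(exp) < f(refl) else refl
        elif f(refl) < f(second_worst):
            simplex[-1] = refl
        else:
            contr = centroid + 0.5 * (worst - centroid) # contract toward centroid
            if f(contr) < f(worst):
                simplex[-1] = contr
            else:                                       # shrink whole simplex
                simplex = [best + 0.5 * (p - best) for p in simplex]
    return min(simplex, key=f)

# Hypothetical "yield loss" surface with its minimum at T = 350, P = 2;
# only function values are ever requested, never derivatives.
def negative_yield(x):
    return (x[0] - 350.0) ** 2 / 50.0 + 10.0 * (x[1] - 2.0) ** 2

optimum = nelder_mead(negative_yield, [300.0, 1.0])
```

Note that nothing in `nelder_mead` assumes smoothness or a model of `f`, which is why the passage singles out direct methods for experiments run on the process itself.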

Method validation is carried out to provide objective evidence that a method is suitable for a given application. A formal assessment of the validation information against the measurement requirements specification and other important method performance parameters is therefore required. Although validation is described as a sequential process, in reality it can involve more than one iteration to optimize some performance parameters, e.g. if a performance parameter is outside the required limits, method improvement followed by revalidation is needed. [Pg.92]

In this chapter the mathematical formulation for the sequential processing of both constraints and measurements is presented. Some interesting features are discussed that make this approach well suited for the analysis of a set of measurements and for gross error identification, the subject of the next chapter. [Pg.112]

For the sequential processing of measurement data, we will consider the total set of measurements partitioned into two smaller subsets composed of (g - c) and c measurements, respectively. Partitioning the matrix A along these lines, we have... [Pg.116]
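The column partition described here can be sketched directly with array slicing; the sizes (g = 5, c = 2) and the three constraint rows below are purely illustrative, not the book's example.

```python
import numpy as np

g, c = 5, 2                                    # hypothetical: 5 measurements, split (g - c) + c
A = np.arange(15, dtype=float).reshape(3, g)   # 3 balance constraints (illustrative values)

A1 = A[:, : g - c]   # columns for the first (g - c) measurements
A2 = A[:, g - c :]   # columns for the remaining c measurements
```

Stacking `A1` and `A2` side by side recovers the original constraint matrix, which is what lets the two measurement subsets be processed one after the other.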

This chapter discussed the idea of exploiting the sequential processing of information (both constraints and measurements), to allow computations to be done in a recursive way without solving the full-scale reconciliation problem. [Pg.124]

A more systematic approach was developed by Romagnoli and Stephanopoulos (1981) and Romagnoli (1983) to analyze a set of measurement data in the presence of gross errors. The method is based on the idea of exploiting the sequential processing of the information (constraints and measurements), thus allowing the computations to be done in a recursive way without solving the full-scale reconciliation problem. [Pg.129]

The previous approach for solving the reconciliation problem allows the calculation, in a systematic recursive way, of the residual covariance matrix after a measurement is added to or deleted from the original adjustment. A combined procedure can be devised by using the sequential treatment of measurements together with the sequential processing of the constraints. [Pg.137]
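The mechanism that makes such recursive updates cheap is the rank-one (Sherman-Morrison) update of a matrix inverse: adding one measurement perturbs a covariance-type matrix by an outer product, and the new inverse follows from the old one without re-inverting. The matrix, column, and variance below are illustrative stand-ins, not the authors' exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
B = rng.normal(size=(n, n))
M = B @ B.T + n * np.eye(n)        # stand-in symmetric positive-definite matrix
M_inv = np.linalg.inv(M)

a = rng.normal(size=(n, 1))        # column associated with the added measurement
s = 0.5                            # its variance contribution (illustrative)

# Sherman-Morrison rank-one update: (M + s a a^T)^-1 from M^-1 directly.
Ma = M_inv @ a
M_new_inv = M_inv - (s * Ma @ Ma.T) / (1.0 + s * (a.T @ Ma).item())
```

Deleting a measurement is the same update with a negative sign on `s`, which is why adding and removing measurements fit the same recursive scheme.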

Case 1. A bias present in the measurement of f2 was identified by the sequential processing of the measurements (see Example 7.2). Consequently, we augment the vector of parameters of the original problem by adding a component to represent the uncertain parameter (bias term). [Pg.142]
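In a least-squares setting, augmenting the parameter vector with a bias term amounts to appending a constant column to the design matrix. The sketch below is a generic illustration under assumed numbers (slope 2, sensor bias 1.5), not the book's Example 7.2.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 50)
y = 2.0 * t + 1.5 + 0.01 * rng.normal(size=t.size)  # measurements carrying a 1.5 bias

# Augmented parameter vector: the column of ones estimates the bias term
# alongside the original parameter (here a slope, purely illustrative).
X = np.column_stack([t, np.ones_like(t)])
theta = np.linalg.lstsq(X, y, rcond=None)[0]
slope_hat, bias_hat = theta
```

Without the extra column the bias would be absorbed into the other estimates; with it, the regression recovers both the parameter and the offset.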

In this chapter, the general problem of joint parameter estimation and data reconciliation was discussed. First, the typical parameter estimation problem was analyzed, in which the independent variables are error-free, and aspects related to the sequential processing of the information were considered. Later, the more general formulation in terms of the error-in-variable method (EVM), where measurement errors in all variables are considered in the parameter estimation problem, was stated. Alternative solution techniques were briefly discussed. Finally, joint parameter-state estimation in dynamic processes was considered and two different approaches, based on filtering techniques and nonlinear programming techniques, were discussed. [Pg.198]

If an error probability of 0.1 is considered, then 12.03 > 9.24 (from statistical tables), and one may say that the inconsistency is significant at this error probability level. After sequential processing of the measurements, as shown in Table 10, the feed temperature and the reboiler and condenser duties are suspected to contain gross errors. Since the feed temperature and reboiler duty do not appear in separate equations, it is difficult to isolate the gross error when it occurs in one or both of the duties. In this case, we need... [Pg.266]
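The comparison being made is the standard global chi-square test on the constraint residuals: the statistic r^T V^-1 r is chi-square distributed when no gross errors are present, and it is checked against a tabulated critical value (9.24 at error probability 0.1 in the passage). The residuals and covariance below are illustrative numbers, not the column data of the example.

```python
import numpy as np

# Illustrative numbers: five constraint residuals and a diagonal
# residual covariance matrix (not the book's data).
r = np.array([1.0, 2.0, 1.0, 2.0, 1.0])
V = 2.0 * np.eye(5)

# Global test statistic, chi-square distributed under the
# no-gross-error hypothesis.
gamma = r @ np.linalg.solve(V, r)   # = 5.5 for these numbers

chi2_crit = 9.24                    # tabulated value at error probability 0.1 (as in the text)
inconsistent = gamma > chi2_crit    # False here: these residuals pass the global test
```

In the book's example the statistic (12.03) exceeds the critical value, so the test flags an inconsistency and sequential processing is then used to locate the suspect measurements.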

If a sequential process involving the binding of more than one metal ion occurs, then two K values may be measured for the 1:1 and 1:2 complexes, K11 and K12 respectively (e.g. binding of two Na+ ions by dibenzo[30]crown-10). [Pg.44]
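The two stepwise constants combine multiplicatively into an overall (cumulative) binding constant for the 1:2 complex. The numerical values below are hypothetical, chosen only to show the arithmetic; stepwise constants for a second ion are typically smaller than for the first.

```python
# Hypothetical stepwise constants for sequential binding of two metal ions M
# to a ligand L (values illustrative only).
K11 = 1.0e4        # first step:  L + M   <->  LM    (1:1 complex)
K12 = 2.0e2        # second step: LM + M  <->  LM2   (1:2 complex)

beta2 = K11 * K12  # overall cumulative constant for L + 2 M <-> LM2
```

Measuring K11 and K12 separately, rather than only beta2, is what reveals the sequential character of the binding.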

The predicted quantum yield of CO loss (ΦCO) should be very high, perhaps approaching unity, because each of the sequential processes is populated by way of barrierless events from the previous state. The measured ΦCO is 0.67 in cyclohexane, considerably less than unity, and is further reduced to 0.52 in more viscous media [22, 23]. The branching space at the conical intersection allows the Cr(CO)5 fragment to locate in one of three possible C4v species, only one of which has an... [Pg.45]




© 2024 chempedia.info