
Reconciliation models

Correlations are inherent in chemical processes even where it can be assumed that there is no correlation among the data. Principal component analysis (PCA) transforms a set of correlated variables into a new set of uncorrelated ones, known as principal components, and is an effective tool in multivariate data analysis. In the last section we describe a method that combines PCA and the steady-state data reconciliation model to provide sharper, and less confounding, statistical tests for gross errors. [Pg.219]

The key idea of this section is to combine PCA and the steady-state data reconciliation model to provide sharper and less confounding statistical tests for gross errors, through exploiting the correlation. [Pg.238]
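A minimal numerical sketch of this idea follows (Python); all matrices and values are illustrative assumptions, not taken from the source. Here A is a linear balance matrix, Sigma an assumed measurement covariance, and y a vector of raw measurements; the constraint residuals r = A y have covariance V = A Sigma A^T, and an eigendecomposition of V turns them into uncorrelated principal components that can each be tested separately for gross errors.

    import numpy as np
    from scipy import stats

    # Illustrative balance matrix, covariance, and measurements (hypothetical values).
    A     = np.array([[1., -1.,  0.],
                      [0.,  1., -1.]])
    Sigma = np.diag([0.2, 0.2, 0.2])
    y     = np.array([10.0, 9.1, 10.8])

    r = A @ y                          # constraint residuals
    V = A @ Sigma @ A.T                # residual covariance

    # PCA step: eigen-decomposition of V yields uncorrelated residual components.
    lam, W = np.linalg.eigh(V)
    p = (W.T @ r) / np.sqrt(lam)       # standardized principal components

    # Under the no-gross-error hypothesis each component is ~N(0, 1),
    # so each can be tested on its own (two-sided, 95% level here).
    flags = np.abs(p) > stats.norm.ppf(0.975)
    print(p, flags)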

While some reconciliation models include only material balance relationships, more meaningful reconciliation results are obtained with models that include material balances, heat balances, equilibrium constraints (in both the separation and reaction domains), rate relationships (heat transfer, mass transfer, momentum transfer, and kinetics), as well as equipment-specific relationships. In other words, one should include more than just material balance constraints when reconciling a model. Heat balances, kinetics, and transport relationships, if needed for the... [Pg.126]

Figure 2: Transient data reconciliation. Model (solid line), EKF (dash-dotted line), and CEKF (dashed line).

This contribution comprehensively reviews the literature reported for particulate emissions of motor vehicles operated under real-world conditions. This article will mainly focus on the results published for size-segregated emission factors of particle mass, elemental and organic carbon, crustal components, and selected trace metals, since this information is important for health effects studies and source reconciliation modeling efforts. [Pg.64]

The first step in data reconciliation is to measure the flow rates (if possible) and sample the streams to analyse variations in the concentrations of elements, compounds, etc. In all cases, average values are obtained and their respective standard deviations are estimated. These data form the input to the data reconciliation model with the aim of finding a solution within the boundaries of the data. An analysis of adjustments calculated in data reconciliation for each stream forms an integral part of the analysis. This is a very helpful tool (i) to find deficiencies in the model, (ii) to establish inadequate assaying and sampling practices, (iii) to determine model variance, and (iv) to determine coincidental or systematic errors. [Pg.230]

A data reconciliation model has been built for the plant. This is an optimization model consisting of an objective function, which minimizes the weighted errors on the measurements, and a set of constraints representing the physics of the process operations. The constraints are mass and energy balances, separation rules, and thermodynamic behaviors. The model has been developed using the equation-solver-type data reconciliation software VALI III (Belsim s.a., 2001). [Pg.1002]
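The VALI III model itself is not shown, but the structure described above (a weighted least-squares objective subject to balance constraints) can be sketched generically. The stream layout, numbers, and solver choice below are assumptions for illustration only.

    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical mixer: two measured feeds and one measured product stream.
    y     = np.array([50.3, 30.1, 79.2])   # measured flows (illustrative)
    sigma = np.array([0.5, 0.3, 0.8])      # measurement standard deviations

    def objective(x):
        # Weighted least-squares error between reconciled and measured values.
        return np.sum(((x - y) / sigma) ** 2)

    constraints = [
        {"type": "eq", "fun": lambda x: x[0] + x[1] - x[2]},   # total mass balance
    ]

    res = minimize(objective, x0=y, constraints=constraints, method="SLSQP")
    print(res.x)   # reconciled flows that close the balance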

To determine if a process unit is at steady state, a program monitors key plant measurements (e.g., compositions, product rates, feed rates, and so on) and determines if the plant is steady enough to start the sequence. Only when all of the key measurements are within the allowable tolerances is the plant considered steady and the optimization sequence started. Tolerances for each measurement can be tuned separately. Measured data are then collected by the optimization computer. The optimization system runs a program to screen the measurements for unreasonable data (gross error detection). This validity checking automatically modifies the model updating calculation to reflect any bad data or when equipment is taken out of service. Data validation and reconciliation (on-line or off-line) is an extremely critical part of any optimization system. [Pg.742]
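A minimal sketch of such a steady-state check, assuming hypothetical measurement tags, units, and tolerances (none of these values come from the source):

    # Allowable change per key measurement between consecutive scans (illustrative).
    KEY_TOLERANCES = {
        "feed_rate":    1.5,      # t/h
        "product_rate": 1.0,      # t/h
        "top_comp":     0.002,    # mole fraction
    }

    def is_steady(current: dict, previous: dict) -> bool:
        """True only when every key measurement moved less than its tolerance."""
        return all(
            abs(current[tag] - previous[tag]) <= tol
            for tag, tol in KEY_TOLERANCES.items()
        )

    # Example: two consecutive scans of the key measurements.
    prev = {"feed_rate": 120.0, "product_rate": 88.0, "top_comp": 0.951}
    curr = {"feed_rate": 120.8, "product_rate": 88.4, "top_comp": 0.952}
    print(is_steady(curr, prev))  # True: all changes are within tolerance

Only after such a check passes would the collected data be screened for gross errors and passed on to reconciliation.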

The vertices are connected with lines indicating information flow. Measurements from the plant flow to plant data, where raw measurements are converted to typical engineering units. The plant data information flows via reconciliation, rectification, and interpretation to the plant model. The results of the model (i.e., troubleshooting, model building, or parameter estimation) are then used to improve plant operation through remedial action, control, and design. [Pg.2547]

At this point, analysts have a set of adjusted measurements that may better represent the unit operation. These will ultimately be used to identify faults, develop a model, or estimate parameters. This automatic reconciliation is not a panacea. Incomplete data sets, unknown uncertainties, and incorrect constraints all compromise the accuracy of the adjustments. Consequently, preliminary adjustments by hand are still recommended. Even when automatic adjustments appear to be correct, the results must be viewed with some skepticism. [Pg.2569]

Overview Reconciliation adjusts the measurements to close constraints subject to their uncertainty. The numerical methods for reconciliation are based on the restriction that the measurements are only subject to random errors. Since all measurements have some unknown bias, this restriction is violated. The resultant adjusted measurements propagate these biases. Since troubleshooting, model development, and parameter estimation will ultimately be based on these adjusted measurements, the biases will be incorporated into the conclusions, models, and parameter estimates. This potentially leads to errors in operation, control, and design. [Pg.2571]

However, given that reconciliation will not always adjust measurements, even when they contain large random and gross error, the adjustments will not necessarily indicate that gross error is present. Further, the constraints may also be incorrect due to simplifications, leaks, and so on. Therefore, for specific model development, scrutiny of the individual measurement adjustments coupled with reconciliation and model building should be used to isolate gross errors. [Pg.2572]

Four observations were thought to be in disagreement with the diffusion model: (1) the lack of a proportional relationship between the electron scavenging product and the decrease of H2 yield; (2) the lack of a significant acid effect on the molecular yield of H2; (3) the relative independence from pH of the isotope separation factor for the H2 yield; and (4) the fact that with certain solutes the scavenging curves for H2 are about the same for neutral and acid solutions. Schwarz's reconciliation follows. [Pg.216]

Some recent applications have benefited from advances in computing and computational techniques. Steady-state simulation is being used off-line for process analysis, design, and retrofit; process simulators can model flow sheets with up to about a million equations by employing nested procedures. Other applications have resulted in great economic benefits; these include on-line real-time optimization models for data reconciliation and parameter estimation followed by optimal adjustment of operating conditions. Models of up to 500,000 variables have been used on a refinery-wide basis. [Pg.86]

The use of this extended planning model will only be problematic if extra reference points, e.g., initial tank storage levels, have to be considered. This may lead to overdetermination of the model (i.e., conflicting level values for a given point in time) and it may be necessary to solve a data reconciliation problem. ... [Pg.267]

The steady-state linear model data reconciliation problem can be stated as... [Pg.577]
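In the notation commonly used for this problem (the source's own symbols may differ), with y the vector of measurements, Σ its covariance matrix, and A the linear balance matrix, a standard statement is

\[
\min_{\hat{x}} \; (y - \hat{x})^{\mathsf T}\,\Sigma^{-1}\,(y - \hat{x})
\quad \text{subject to} \quad A\hat{x} = 0,
\]

with the well-known closed-form solution

\[
\hat{x} \;=\; y \;-\; \Sigma A^{\mathsf T}\bigl(A \Sigma A^{\mathsf T}\bigr)^{-1} A\,y .
\]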

Extended Kalman filtering has been a popular method used in the literature to solve the dynamic data reconciliation problem (Muske and Edgar, 1998). As an alternative, the nonlinear dynamic data reconciliation problem with a weighted least squares objective function can be expressed as a moving horizon problem (Liebman et al., 1992), similar to that used for model predictive control discussed earlier. [Pg.577]

This chapter is devoted to the analysis of variable classification and the decomposition of the data reconciliation problem for linear and bilinear plant models, using the so-called matrix projection approach. The use of orthogonal factorizations, more precisely the Q-R factorization, to solve the aforementioned problems is discussed and its range of application is determined. Several illustrative examples are included to show the applicability of such techniques in practice. [Pg.72]

In this chapter the use of Q-R factorizations with the purpose of system decomposition and instrumentation analysis, for linear and bilinear plant models, is thoroughly investigated. Simple expressions are provided using subproducts of Q-R factorizations for application in data reconciliation. Furthermore, the use of factorization procedures... [Pg.72]
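As a concrete illustration of the matrix projection idea, the sketch below uses subproducts of a Q-R factorization to reduce a small, hypothetical flow network to a reconciliation problem in the measured streams only, followed by back-substitution for the unmeasured ones. The network, the numbers, and the measured/unmeasured split are assumptions, not an example taken from the source.

    import numpy as np

    # Hypothetical 3-unit network: columns of A1 are measured streams x0..x3,
    # columns of A2 are unmeasured streams u0 (recycle) and u1 (purge).
    A1 = np.array([[1., -1.,  0.,  0.],   # unit 1: x0 + u0 - x1 = 0
                   [0.,  1., -1.,  0.],   # unit 2: x1 - x2 - u1 = 0
                   [0.,  0.,  1., -1.]])  # unit 3: x2 - x3 - u0 = 0
    A2 = np.array([[ 1.,  0.],
                   [ 0., -1.],
                   [-1.,  0.]])

    y     = np.array([10.1, 11.9, 11.2, 8.9])   # raw measurements (illustrative)
    Sigma = np.diag([0.10, 0.15, 0.15, 0.10])   # measurement covariance

    # Q-R factorization of the unmeasured-variable submatrix.
    Q, R = np.linalg.qr(A2, mode="complete")
    r = np.linalg.matrix_rank(A2)
    Q1, Q2 = Q[:, :r], Q[:, r:]        # Q2 spans the left null space of A2

    # Projected constraints contain only measured variables: G x = 0.
    G = Q2.T @ A1

    # Weighted least-squares reconciliation of the measured streams.
    V    = G @ Sigma @ G.T
    xhat = y - Sigma @ G.T @ np.linalg.solve(V, G @ y)

    # Unmeasured streams recovered from the triangular factor (observable case).
    u = np.linalg.solve(R[:r, :], -(Q1.T @ A1 @ xhat))

    print("reconciled measured streams:", xhat)
    print("estimated unmeasured streams:", u)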

In this chapter we concentrate on the statement and further solution of the general steady-state data reconciliation problem. Initially, we analyze its resolution for linear plant models, and then the nonlinear case is discussed. [Pg.94]

Let us first define the models to be used in our formulation of the data reconciliation problem. [Pg.95]

Two situations arise in linear data reconciliation. Sometimes all the variables included in the process model are measured, but more frequently some variables are not measured. Both cases will be separately analyzed. [Pg.96]

Orthogonal factorizations may be applied to resolve problem (5.3) if the system of equations φ(x, u) = 0 is made up of linear mass balances and bilinear component and energy balances. After replacing the bilinear terms of the original model by the corresponding mass and energy flows, a linear data reconciliation problem results. [Pg.102]
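As a small illustration of the substitution, in generic notation rather than the source's symbols: a component balance around a unit is bilinear in the total flows \(F_i\) and compositions \(x_{ij}\),

\[
\sum_{i\,\in\,\text{in}} F_i\,x_{ij} \;-\; \sum_{i\,\in\,\text{out}} F_i\,x_{ij} \;=\; 0,
\]

but it becomes linear once the component flows \(f_{ij} = F_i x_{ij}\) are taken as the reconciled variables,

\[
\sum_{i\,\in\,\text{in}} f_{ij} \;-\; \sum_{i\,\in\,\text{out}} f_{ij} \;=\; 0,
\qquad \sum_{j} f_{ij} \;=\; F_i .
\]

The compositions are then recovered afterwards as \(x_{ij} = f_{ij}/F_i\).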

In this sense, the application of Q-R factorizations constitutes an efficient alternative for solving bilinear data reconciliation. Successive linearizations and nonlinear programming are required for more complex models. These techniques are more reliable and accurate for most problems, but require more computation time. [Pg.109]

In the previous development it was assumed that only random, normally distributed measurement errors, with zero mean and known covariance, are present in the data. In practice, process data may also contain other types of errors, which are caused by nonrandom events. For instance, instruments may not be adequately compensated, measuring devices may malfunction, or process leaks may be present. These biases are usually referred to as gross errors. The presence of gross errors invalidates the statistical basis of data reconciliation procedures. It is also impossible, for example, to prepare an adequate process model on the basis of erroneous measurements or to assess production accounting correctly. In order to avoid these shortcomings we need to check for the presence of gross systematic errors in the measurement data. [Pg.128]
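A common way to carry out such a check, before or alongside reconciliation, is the global chi-square test on the balance residuals. The sketch below reuses the illustrative A, Sigma, and y of the earlier PCA sketch; under the no-gross-error hypothesis the statistic follows a chi-square distribution with rank(A) degrees of freedom.

    import numpy as np
    from scipy import stats

    A     = np.array([[1., -1.,  0.],
                      [0.,  1., -1.]])
    Sigma = np.diag([0.2, 0.2, 0.2])
    y     = np.array([10.0, 9.1, 10.8])

    r = A @ y                              # constraint residuals
    V = A @ Sigma @ A.T                    # their covariance
    gamma = r @ np.linalg.solve(V, r)      # global test statistic

    dof = np.linalg.matrix_rank(A)
    critical = stats.chi2.ppf(0.95, dof)
    print("gross error suspected" if gamma > critical else "no gross error detected")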

That is, the least squares estimate can finally be expressed as the contribution of three terms. The first one arises from the solution of the original problem without constraints (for data reconciliation, x0 = y); the next is a correction term due to the presence of constraints; and the last one takes into account failures in the model (systematic errors). [Pg.141]

As in the classical steady-state data reconciliation formulation, the optimal estimates are those that are as close as possible (in the least squares sense) to the measurements, such that the model equations are satisfied exactly. [Pg.169]

As pointed out by Liebman et al., given a perfect model, an ideal data reconciliation scheme would use all information (process measurements) from the startup of the process until the current time. Unfortunately, such a scheme would necessarily result in an optimization problem of ever-increasing dimension. For practical implementation we can use a moving time window to reduce the optimization problem to manageable dimensions. A window approach was presented by Jang et al. (1986) and extended later by Liebman et al. (1992). [Pg.170]
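In generic notation (not necessarily that of Liebman et al.), the moving-window problem at time \(t_k\) with horizon length \(H\) can be written as

\[
\min_{\hat{x}(t_{k-H}),\,\ldots,\,\hat{x}(t_k)} \;
\sum_{j=k-H}^{k} \bigl(y_j - \hat{x}(t_j)\bigr)^{\mathsf T}\,\Sigma^{-1}\,\bigl(y_j - \hat{x}(t_j)\bigr)
\]

subject to the dynamic model and any algebraic constraints,

\[
\dot{\hat{x}} = f\bigl(\hat{x}(t), u(t)\bigr), \qquad
g\bigl(\hat{x}(t)\bigr) = 0, \qquad
h\bigl(\hat{x}(t)\bigr) \ge 0 .
\]

As new measurements arrive, the window is shifted forward and the problem is re-solved, so its dimension stays fixed.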

In this chapter different aspects of data processing and reconciliation in a dynamic environment were briefly discussed. Application of the least squares formulation in a recursive way was shown to lead to the classical Kalman filter formulation. A simpler situation, assuming quasi-steady-state behavior of the process, allows application of these ideas to practical problems, without the need for a complete dynamic model of the process. [Pg.174]

