Prediction error method

The identification of plant models has traditionally been done in the open-loop mode. The desire to minimize the production of off-spec product during an open-loop identification test, and to avoid the unstable open-loop dynamics of certain systems, has increased the need to develop methodologies suitable for closed-loop system identification. Open-loop identification techniques are not directly applicable to closed-loop data because of the correlation between the process input (i.e., the controller output) and unmeasured disturbances. Based on the Prediction Error Method (PEM), several closed-loop identification methods have been presented: the Direct, Indirect, Joint Input-Output, and Two-Step Methods. [Pg.698]

Assume an SOFC is operating with constant rated voltage and a power demand of 0.6 p.u. There is a 0.3 p.u. step increase in the total load at t = 10 s. Fig. 16 compares the time response of the identified system with the response of the actual system. The identified system was obtained using the Box-Jenkins algorithm and is of 4th order. This method estimates the parameters of the Box-Jenkins model structure using a prediction error method. The order of the identified system is the minimum order required to obtain a good time-domain match. [Pg.186]

This section will examine the principles and key results for modelling an open-loop process using the general prediction error model given by Eq. (6.4). The foundation for such modelling is the prediction error method, which exploits the fact that most models in system identification are used for predicting future values of the process. [Pg.292]
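Eq. (6.4) itself is not reproduced in this excerpt; in the standard formulation (notation assumed here rather than taken from the source), the general prediction error model, its one-step-ahead predictor, and the resulting prediction error are

y(t) = G(q, \theta)\, u(t) + H(q, \theta)\, e(t)
\hat{y}(t \mid \theta) = H^{-1}(q, \theta)\, G(q, \theta)\, u(t) + \left[ 1 - H^{-1}(q, \theta) \right] y(t)
\varepsilon(t, \theta) = y(t) - \hat{y}(t \mid \theta)

where G is the process transfer function, H the noise model, q the shift operator, and e(t) white noise. The parameters θ are chosen to make the prediction errors ε(t, θ) small.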

Although for simple models it is possible to estimate the parameters using least-squares linear regression (see, e.g., Question 21 in Sect. 3.8.2), for more complex models this is not possible. Instead, more complex methods are required in order to obtain them. One very popular approach is the prediction error method. Parameter estimation using the prediction error method can be summarised as follows ... [Pg.292]
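As a concrete illustration of why an iterative optimizer is needed, the sketch below fits a first-order output-error model by numerically minimizing the sum of squared one-step prediction errors. The model structure, parameter values, and simulated data are illustrative assumptions, not taken from the text.

# Minimal prediction-error-method sketch: fit a first-order output-error model
# by minimizing the sum of squared prediction errors with a numerical optimizer.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# --- simulate "true" process data (illustrative) ---
N = 500
u = rng.standard_normal(N)            # input sequence
b_true, f_true = 0.5, 0.8
y = np.zeros(N)
for t in range(1, N):                 # noise-free process output
    y[t] = f_true * y[t - 1] + b_true * u[t - 1]
y += 0.05 * rng.standard_normal(N)    # add measurement noise (output-error structure)

def prediction_errors(theta, u, y):
    """Return eps(t, theta) = y(t) - y_hat(t | theta) for the output-error model."""
    b, f = theta
    y_m = np.zeros_like(y)            # simulated (noise-free) model output
    for t in range(1, len(y)):
        y_m[t] = f * y_m[t - 1] + b * u[t - 1]
    return y - y_m

# --- minimize the prediction-error criterion over the parameters ---
res = least_squares(prediction_errors, x0=[0.1, 0.1], args=(u, y))
print("estimated b, f:", res.x)       # should be close to (0.5, 0.8)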

Theorem 6.3 Open-Loop Process Identification (properties of the prediction error method). The prediction error method produces parameter estimates that are unbiased if the prediction error is a white noise signal. [Pg.293]

Proof This will be shown by deriving the Fisher information matrix for the prediction error method. [Pg.295]

Since Theorem 6.3 states that the prediction error method produces unbiased estimates, as m → ∞ the estimated parameter values will approach the true parameter values. Thus, it can be concluded that the prediction error method asymptotically approaches a minimum variance estimator. [Pg.296]
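In the standard prediction error framework (notation assumed here, since the source equations are not reproduced), the criterion and the asymptotic covariance behind this argument are

V_m(\theta) = \frac{1}{m} \sum_{t=1}^{m} \tfrac{1}{2}\, \varepsilon^2(t, \theta), \qquad \hat{\theta}_m = \arg\min_{\theta} V_m(\theta)
\operatorname{cov}(\hat{\theta}_m) \approx \frac{\lambda_0}{m} \left[ \mathrm{E}\, \psi(t, \theta_0)\, \psi^{\mathsf{T}}(t, \theta_0) \right]^{-1}, \qquad \psi(t, \theta) = \frac{\partial \hat{y}(t \mid \theta)}{\partial \theta}

where λ₀ is the variance of the (white) prediction error. The bracketed expectation is proportional to the Fisher information per sample, so the covariance of the prediction error estimate approaches the Cramér-Rao lower bound as m → ∞, which is the minimum variance property referred to above.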

The second method is called direct identification, where the fact that the process is running in closed loop is ignored. In this type of identification, both the process and error structures must be simultaneously estimated. Thus, either a Box-Jenkins or a general prediction error model should be fit. Since this is one of the more common approaches to closed-loop system identification, it is necessary to examine the properties of this approach. It will be assumed that the prediction error method will be used. [Pg.306]
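A toy sketch of direct identification is given below: closed-loop data are generated with a simple proportional controller and an external dither signal, and the plant parameters are then estimated from the recorded input-output data alone, ignoring the controller. The ARX structure, controller gain, and signal choices are illustrative assumptions; with white equation noise this simple structure lets ordinary least squares act as the prediction-error fit.

# Direct closed-loop identification sketch: the feedback loop is ignored and the
# model is fitted from the recorded (u, y) data only.
import numpy as np

rng = np.random.default_rng(1)
N = 2000
a_true, b_true, K = 0.9, 0.3, 0.5     # plant parameters and P-controller gain

y = np.zeros(N)
u = np.zeros(N)
r = rng.standard_normal(N)            # external dither / setpoint excitation
for t in range(1, N):
    u[t - 1] = K * (r[t - 1] - y[t - 1])                       # controller output
    y[t] = a_true * y[t - 1] + b_true * u[t - 1] + 0.05 * rng.standard_normal()

# Direct fit: regress y(t) on [y(t-1), u(t-1)] as if the data were open loop.
Phi = np.column_stack([y[:-1], u[:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
print("estimated a, b:", theta)       # close to (0.9, 0.3) despite the feedback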

This implies that the prediction error method can be used to estimate the model parameters without taking into consideration the fact that the system is running in closed loop. Furthermore, the model of the controller is not required, nor is any... [Pg.307]

The prediction error method provides consistent parameter estimates. [Pg.322]

Based on the prediction error method, which minimizes the error between the prediction and the actual process output data, MATLAB (2005) can be used to construct models of basically any structure. For this general model the method... [Pg.331]

Case study 3 illustrates the use of proactive techniques to analyze operator tasks, predict errors and develop methods to prevent an error occurring. Methods for the development of operating instructions and checklists are shown using the same chemical plant as in case study 2. [Pg.292]

It should also be acknowledged that in recent years computational quantum chemistry has achieved a number of predictions that have since been experimentally confirmed (45-47). On the other hand, since numerous anomalies remain even within attempts to explain the properties of atoms in terms of quantum mechanics, the field of molecular quantum mechanics can hardly be regarded as resting on a firm foundation (48). Also, as many authors have pointed out, the vast majority of ab initio research judges its methods merely by comparison with experimental data and does not seek to establish internal criteria to predict error bounds theoretically (49-51). The message to chemical education must, therefore, be not to overemphasize the power of quantum mechanics in chemistry and not to imply that it necessarily holds the final answers to difficult chemical questions (52). [Pg.17]

The %HIA, on a scale between 0 and 100%, for the same dataset was modeled by Deconinck et al. with multivariate adaptive regression splines (MARS) and a derived method, two-step MARS (TMARS) [38]. Among other Dragon descriptors, the TMARS model included the Tig E-state topological parameter [25], and MARS included the maximal E-state negative variation. The average prediction error, which is 15.4% for MARS and 20.03% for TMARS, shows that the MARS model is more robust in modeling %HIA. [Pg.98]

The trial-and-error method of choosing an optimal demulsifier from a wide variety of demulsifiers to effectively treat a given oil field water-in-oil emulsion is time-consuming. However, there are methods to correlate and predict the performance of demulsifiers. [Pg.327]

An optimization criterion for determining the output parameters and basis functions is to minimize the output prediction error and is common to all input-output modeling methods. The activation or basis functions used in data analysis methods may be broadly divided into the following two categories ... [Pg.12]

You may be surprised that for our example data from Miller and Miller ([2], p. 106), the correlation coefficient calculated using any of these methods of computation for the r-value is 0.99887956534852. When we evaluate the correlation computation we see that, given a relatively equivalent prediction error represented as (X − X̂), Σ(X − X̂)², or SEP, the standard deviation of the data set (X) determines the magnitude of the correlation coefficient. This is illustrated using Graphics 59-1a and 59-1b. These graphics allow the correlation coefficient to be displayed for any specified standard error of prediction, also occasionally denoted as the standard error of estimate (SEE). It should be obvious that for any statistical study one must compare the actual computational recipes used to make a calculation, rather than rely on more or less non-standard terminology and assume that the computations are what one expected. [Pg.387]
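The point about the data spread can be reproduced numerically: with the same standard error of prediction, the correlation coefficient is driven almost entirely by the standard deviation of the reference values. The numbers below are synthetic, not the Miller and Miller data set.

# Same prediction error, different spread of the reference data: the
# correlation coefficient changes even though the SEP does not.
import numpy as np

rng = np.random.default_rng(2)
sep = 1.0                                   # standard deviation of the residuals

for spread in (2.0, 20.0):                  # standard deviation of the X values
    x = rng.normal(0.0, spread, 5000)       # "true" reference values
    x_hat = x + rng.normal(0.0, sep, 5000)  # predictions with a fixed SEP
    r = np.corrcoef(x, x_hat)[0, 1]
    print(f"SD(X) = {spread:5.1f}  SEP = {sep}  r = {r:.5f}")

Wide-spread data give r close to 1, while narrow-spread data give a much lower r, even though the prediction error is identical in both cases.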

The overall accuracy of the predictions, assessed as the mean-fold error of prediction for the test set, was 2.03, giving this approach suitable accuracy for use in drug design and human pharmacokinetic predictions. Similar methods developed separately for acids and bases showed an improvement in accuracy. This investigation also included a prediction of unbound VD, which should be a simpler parameter to predict since it depends only on tissue binding and not on plasma protein binding. However, it is interesting to note that this approach was less accurate for this parameter, which would be unexpected. [Pg.483]
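Mean-fold error is commonly computed as the geometric mean of the fold deviations between predicted and observed values, so a value of 2.03 means the predictions are, on average, within about a factor of two of the observations. The sketch below uses this assumed definition; the exact formula used in the cited study is not given in the excerpt, and the example numbers are made up.

# Geometric mean-fold error of pharmacokinetic predictions (assumed definition).
import numpy as np

def mean_fold_error(predicted, observed):
    predicted = np.asarray(predicted, float)
    observed = np.asarray(observed, float)
    return 10 ** np.mean(np.abs(np.log10(predicted / observed)))

# Example: volume-of-distribution predictions vs. observations (made-up values)
print(mean_fold_error([10.0, 2.0, 0.5], [5.0, 2.2, 1.1]))   # roughly 1.7-fold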

All regression methods aim at the minimization of residuals, for instance minimization of the sum of the squared residuals. It is essential to focus on minimal prediction errors for new cases (the test set), and not (only) for the calibration set from which the model has been created. It is relatively easy to create a model, especially one with many variables and possibly nonlinear features, that fits the calibration data very well; however, it may be useless for new cases. This effect of overfitting is a crucial topic in model creation. Definition of appropriate criteria for the performance of regression models is not trivial. About a dozen different criteria, sometimes under different names, are used in chemometrics, and some others are waiting in the statistical literature to be discovered by chemometricians; a basic treatment of the criteria and the methods to estimate them is given in Section 4.2. [Pg.118]
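The overfitting effect described here is easy to demonstrate: a model flexible enough to fit the calibration data almost perfectly can still predict new cases poorly. The sketch below uses a deliberately over-parameterized polynomial as a stand-in for a model with many variables and nonlinear features; all data and model choices are illustrative.

# Overfitting illustration: a high-degree polynomial fits the calibration set
# almost perfectly but predicts an independent test set poorly.
import numpy as np

rng = np.random.default_rng(3)

def make_data(n):
    x = np.sort(rng.uniform(-1, 1, n))
    y = np.sin(3 * x) + 0.2 * rng.standard_normal(n)
    return x, y

x_cal, y_cal = make_data(20)          # calibration set used to build the model
x_test, y_test = make_data(200)       # independent test set

for degree in (3, 12):
    coeffs = np.polyfit(x_cal, y_cal, degree)
    rmse_cal = np.sqrt(np.mean((np.polyval(coeffs, x_cal) - y_cal) ** 2))
    rmse_test = np.sqrt(np.mean((np.polyval(coeffs, x_test) - y_test) ** 2))
    print(f"degree {degree:2d}: RMSE(calibration) = {rmse_cal:.3f}, "
          f"RMSE(test) = {rmse_test:.3f}")

The higher-degree fit always achieves the smaller calibration error, but its test-set error is typically much larger.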

All performance criteria are based on prediction errors (residuals), yᵢ − ŷᵢ, obtained from an independent test set, by CV or bootstrap, or sometimes by less reliable methods. It is crucial to document from which data set and by which strategy the prediction errors have been obtained; furthermore, a large number of prediction errors is desirable. Various measures can be derived from the residuals to characterize the prediction performance of a single model or a model type. If enough values are available, visualization of the error distribution gives a comprehensive picture. In many cases, the distribution is similar to a normal distribution and has a mean of approximately zero. Such a distribution can be well described by a single parameter that measures the spread. Other distributions of the errors, for instance a bimodal or a skewed distribution, may occur and can, for instance, be characterized by a tolerance interval. [Pg.126]
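A few of the measures alluded to here can be computed directly from a vector of test-set residuals; the sketch below shows the bias, SEP, and RMSEP together with a simple empirical tolerance interval, under definitions assumed to follow common chemometric usage. The example values are made up.

# Residual-based performance measures computed from test-set (or CV/bootstrap)
# prediction errors e_i = y_i - y_hat_i.
import numpy as np

def performance_measures(y, y_hat):
    e = np.asarray(y, float) - np.asarray(y_hat, float)
    bias = e.mean()                            # systematic error
    sep = e.std(ddof=1)                        # spread of the errors
    rmsep = np.sqrt(np.mean(e ** 2))           # combines bias and spread
    tol_95 = np.percentile(e, [2.5, 97.5])     # empirical 95% tolerance interval
    return {"bias": bias, "SEP": sep, "RMSEP": rmsep, "95% interval": tol_95}

y     = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y_hat = [1.1, 1.8, 3.2, 3.9, 5.3, 5.8]
print(performance_measures(y, y_hat))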

It is not just by accident that PLS regression is the most widely used method for multivariate calibration in chemometrics. We therefore recommend starting with PLS for single y-variables, using all x-variables and applying CV (leave-one-out for a small number of objects, say n < 30; 3-7 segments otherwise). The SEPCV (standard deviation of the prediction errors obtained from CV) gives a first idea of the relationship between the x-variables used and the modeled y, and hints at how to proceed. Considerable effort should be devoted to a reasonable estimation of the prediction performance of calibration models. [Pg.204]
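One way to follow this recommendation in practice is sketched below with scikit-learn (a choice made here for the example, not one named by the authors); the data, the numbers of components tried, and the switch to leave-one-out CV for small n are illustrative assumptions.

# PLS calibration for a single y-variable with cross-validation, reporting
# SEPcv, the standard deviation of the CV prediction errors.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict, LeaveOneOut, KFold

rng = np.random.default_rng(4)
n, p = 25, 50                                  # small n, many x-variables
X = rng.standard_normal((n, p))
y = X[:, :5] @ rng.standard_normal(5) + 0.1 * rng.standard_normal(n)

# leave-one-out CV for a small number of objects, segmented CV otherwise
cv = LeaveOneOut() if n < 30 else KFold(n_splits=5, shuffle=True, random_state=0)

for n_comp in (1, 2, 3, 5):
    y_cv = cross_val_predict(PLSRegression(n_components=n_comp), X, y, cv=cv).ravel()
    sep_cv = (y - y_cv).std(ddof=1)
    print(f"{n_comp} components: SEPcv = {sep_cv:.3f}")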

In the absence of an accurate determination of prediction errors, it was not possible to calculate a mismatch level specific to each compound, although a method was proposed by which this could be done if the situation were to change. Instead, a generic mismatch level had to be adopted, above which postulates were rejected and below which they were accepted. The problem in this situation is the large variation in prediction accuracy; e.g., two similar postulates, of which only one is truly correct but whose common parts are well predicted, will both yield a mismatch below a generic threshold. A mismatch criterion specific to each postulate would be a prerequisite for calculating the probability that the postulate was indeed correct. [Pg.235]
