Big Chemical Encyclopedia


Error performance criteria

The simple performance criteria discussed previously (decay ratio, overshoot, etc.) use only a few points of the response and are therefore easy to apply. Error performance criteria, by contrast, are based on the entire response of the process, but they are also more complicated to evaluate. [Pg.120]
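The integral error criteria most often meant here are ISE, IAE, and ITAE, each a different weighting of the error signal over the whole response. A minimal sketch of how they could be computed from a sampled response (the function name and the example signal are illustrative, not from the source):

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal integration of samples y over grid x."""
    return float(np.sum((y[1:] + y[:-1]) * 0.5 * np.diff(x)))

def integral_error_criteria(t, error):
    """ISE, IAE, and ITAE from a sampled error signal e(t) = setpoint - output."""
    e = np.asarray(error, dtype=float)
    t = np.asarray(t, dtype=float)
    ise = _trapz(e**2, t)             # integral of squared error
    iae = _trapz(np.abs(e), t)        # integral of absolute error
    itae = _trapz(t * np.abs(e), t)   # time-weighted absolute error
    return ise, iae, itae

# Example: exponentially decaying error e(t) = exp(-t), integrated to t = 10.
# Analytically ISE = 1/2, IAE = 1, ITAE = 1 (truncation error is negligible).
t = np.linspace(0.0, 10.0, 10001)
ise, iae, itae = integral_error_criteria(t, np.exp(-t))
```

ISE penalizes large errors most heavily, IAE treats all errors equally, and ITAE penalizes errors that persist late in the response, which is why ITAE tends to favor fast-settling tunings.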

The most commonly used family of methods for cluster seeking uses optimization of a squared-error performance criterion in the form... [Pg.28]
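That squared-error clustering criterion is the sum of squared distances of each point to its assigned cluster center, J = sum over clusters k of sum over x in C_k of ||x - m_k||^2 (the quantity k-means minimizes). A small illustrative sketch, with invented names and data:

```python
import numpy as np

def squared_error_criterion(X, labels, centers):
    """Sum of squared distances of each point to its assigned cluster center."""
    X = np.asarray(X, dtype=float)
    centers = np.asarray(centers, dtype=float)
    return float(sum(np.sum((X[labels == k] - centers[k]) ** 2)
                     for k in range(len(centers))))

# Two well-separated one-dimensional clusters, each point 0.5 from its center,
# so J = 4 * 0.5**2 = 1.0.
X = np.array([[0.0], [1.0], [10.0], [11.0]])
labels = np.array([0, 0, 1, 1])
centers = np.array([[0.5], [10.5]])
J = squared_error_criterion(X, labels, centers)
```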

Designs Based on Optimization of an Integral Feedback Error Performance Criterion... [Pg.6]

The selection of the cluster number, which is generally not known beforehand, represents the primary performance criterion. Optimization of performance therefore requires trial-and-error adjustment of the number of clusters. Once the cluster number is established, the neural network structure is used to determine the linear discriminant for interpretation. In effect, the RBFN makes use of a transformed feature space defined in terms of prototypes of similar patterns, obtained by applying k-means clustering. [Pg.62]

The steady-state error is another time-domain specification. It is not a dynamic specification, but it is an important performance criterion. In many loops (but not all) a steady-state error of zero is desired, i.e., the value of the controlled variable should eventually level out at the setpoint. [Pg.227]

Jones uses an integrated squared error loss value L(x) as the performance criterion ... [Pg.168]

Another problem associated with using the number of theoretical plates (N) as a performance criterion is that there are several equations which can be used to make this calculation, as shown in Figure 1. All of these equations are equivalent for Gaussian peaks. However, for tailing peaks, all of the equations are subject to error, some more than others. Therefore, a measure of peak symmetry is required to determine the validity of plate height measurements. [Pg.34]
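Figure 1 is not reproduced here, but two of the commonly used plate-count equations are N = 16(tR/Wb)^2, based on the tangent (baseline) width Wb = 4σ, and N = 5.54(tR/W0.5)^2, based on the width at half height W0.5 ≈ 2.355σ. A sketch showing their agreement for a symmetric Gaussian peak (function names and numbers are illustrative):

```python
import math

def plates_baseline(t_r, w_base):
    """N = 16 * (tR / Wb)^2, from the tangent (baseline) width Wb = 4*sigma."""
    return 16.0 * (t_r / w_base) ** 2

def plates_half_height(t_r, w_half):
    """N = 5.54 * (tR / W0.5)^2, from the width at half height W0.5 = 2.355*sigma."""
    return 5.54 * (t_r / w_half) ** 2

# Gaussian peak: retention time tR = 10 min, standard deviation sigma = 0.1 min.
# Both equations recover N = (tR/sigma)^2 = 10000 for this symmetric peak;
# for a tailing peak the measured widths differ and the two N values diverge.
t_r, sigma = 10.0, 0.1
n_base = plates_baseline(t_r, 4.0 * sigma)
n_half = plates_half_height(t_r, 2.0 * math.sqrt(2.0 * math.log(2.0)) * sigma)
```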

The O-NLPCA network has 8-6-10-12 neurons in each layer, yielding a prototype model with 6 principal components (PCs). For comparison, linear PCA was also applied to the same data. As a performance criterion, the root mean square error (RMSE) was evaluated to compare the prediction ability of the developed PCA and O-NLPCA models on the training and validation data. While the linear PCA gave 0.3021 and 0.3227 RMSE on the training and validation data sets, respectively, the O-NLPCA provided 0.2526 and 0.2244 RMSE. This suggests that to capture the same amount of information, linear PCA entails the use of more principal components than its nonlinear counterpart. As a result, the information embedded in the nonlinear principal components addresses the underlying events more efficiently than the linear ones. [Pg.198]
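The RMSE used as the performance criterion above is simply the square root of the mean squared difference between measured and reconstructed values. A minimal sketch (the data here are illustrative, not the study's):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error between measured and reconstructed values."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Single miss of 2 units over three samples: RMSE = sqrt(4/3)
err = rmse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0])
```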

The principal steady-state performance criterion usually is zero error at steady state. We have seen already that in most situations the proportional controller cannot achieve zero steady-state error, while a PI controller can. Also, we know that for proportional control the steady-state error (offset) tends to zero as Kc → ∞. No further discussion is needed on the steady-state performance criteria. [Pg.160]
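For a process with static gain Kp under proportional-only control, the offset after a setpoint step Δysp is Δysp / (1 + Kc·Kp), which makes the limiting behavior explicit: the offset shrinks as Kc grows but never reaches zero. A small illustrative sketch (function name invented):

```python
def proportional_offset(delta_setpoint, kc, kp):
    """Steady-state offset of a proportional-only loop around a process with
    static gain kp: offset = delta_setpoint / (1 + kc*kp)."""
    return delta_setpoint / (1.0 + kc * kp)

# Unit setpoint step, kp = 1: offset is 1/2, 1/11, 1/101 for kc = 1, 10, 100,
# approaching (but never reaching) zero as kc grows.
offsets = [proportional_offset(1.0, kc, 1.0) for kc in (1.0, 10.0, 100.0)]
```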

During validation, the five QCs should be tested in a minimum of six runs, at least in duplicate. It is not acceptable to discard any runs during this testing. The inter- and intrabatch precision and accuracy from all runs performed should be reported. It is generally expected that the precision and accuracy acceptance criteria be within 20-25% of the target range, with a Total Error not exceeding 30%. (The Total Error acceptance criterion is discussed later in this report.)... [Pg.578]

Controller tuning can be defined as an optimisation process that involves a performance criterion related to the form of controller response and to the error between the process variable and the set point. When tuning a controller, some of the questions that may be asked include ... [Pg.117]

However, it has to be considered that it is neither the content of free formaldehyde itself nor the molar ratio which eventually should be taken as the decisive and the only criterion for the classification of a resin concerning the subsequent formaldehyde emission from the finished board. In reality, the composition of the glue mix as well as the various process parameters during the board production also determine both performance and formaldehyde emission. Depending on the type of board and the manufacturing process, it is sometimes recommended to use a UF-resin with a low molar ratio F/U (e.g. F/U = 1.03), hence low content of free formaldehyde, while sometimes the use of a resin with a higher molar ratio (e.g. F/U = 1.10) and the addition of a formaldehyde catcher/depressant will give better results [17]. Which of these two, or other possible approaches, is the better one in practice can only be decided in each case by trial and error. [Pg.1048]

A number of performance criteria are not primarily dedicated to the users of a model but are applied in model generation and optimization. For instance, the mean squared error (MSE) or similar measures are considered for optimization of the number of components in PLS or PCA. For variable selection, the models to be compared have different numbers of variables; in this case, and especially if a fit criterion is used, the performance measure must consider the number of variables. Appropriate measures are the adjusted squared correlation coefficient, R²adj, or Akaike's information criterion (AIC); see Section 4.2.3. [Pg.124]
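Both measures penalize model complexity: R²adj = 1 − (1 − R²)(n − 1)/(n − p − 1) for n samples and p predictors, and, in its common least-squares form, AIC = n·ln(RSS/n) + 2k for k fitted parameters. A sketch of both (function names and numbers are illustrative):

```python
import math

def adjusted_r2(r2, n, p):
    """Adjusted R^2: discounts R^2 for the number of predictors p (n samples)."""
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)

def aic_least_squares(rss, n, k):
    """AIC for a least-squares model: n*ln(RSS/n) + 2k, k = number of parameters."""
    return n * math.log(rss / n) + 2 * k

# Adding a variable raises raw R^2 slightly (0.900 -> 0.905) yet the adjusted
# value falls, flagging the extra variable as not worth its cost.
a3 = adjusted_r2(0.900, n=20, p=3)
a4 = adjusted_r2(0.905, n=20, p=4)
aic = aic_least_squares(rss=10.0, n=20, k=3)
```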

An important point is the evaluation of the models. While most methods select the best model on the basis of a criterion like adjusted R², AIC, BIC, or Mallows' Cp (see Section 4.2.4), the resulting optimal model is not necessarily optimal for prediction. These criteria take into consideration the residual sum of squares (RSS), and they penalize a larger number of variables in the model. However, selection of the final best model has to be based on an appropriate evaluation scheme and on an appropriate performance measure for the prediction of new cases. A final model selection based on fit criteria (as mostly used in variable selection) is not acceptable. [Pg.153]

