
Output-error model

On the other hand, MCCC considers the influence of the variation of one parameter on model output in the context of simultaneous variations of all other parameters. In this situation, the coefficient is smaller than 1 in absolute value, and its size depends on the relative importance of the variation of model output due to the parameter of interest versus the variation of model output given by the sum total of all sources (namely, the variability in all structural parameter values plus the error variance). [Pg.90]
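A minimal numerical sketch of this idea, assuming a correlation-based sensitivity measure and a made-up three-parameter model: all parameters are varied simultaneously, and the correlation between one parameter and the model output is bounded by 1 in absolute value and shrinks as the other sources of output variability grow.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(theta):
    # hypothetical model: output depends nonlinearly on three parameters
    k1, k2, k3 = theta
    return k1 * np.exp(-k2) + 0.1 * k3**2

# vary all parameters simultaneously (means and spreads are illustrative)
n = 5000
samples = rng.normal(loc=[1.0, 0.5, 2.0], scale=[0.2, 0.1, 0.5], size=(n, 3))
outputs = np.array([model(t) for t in samples])

# correlation between each parameter and the output under simultaneous
# variation; |r| < 1, and its size reflects how much of the total output
# variability is attributable to that one parameter
for i, name in enumerate(["k1", "k2", "k3"]):
    r = np.corrcoef(samples[:, i], outputs)[0, 1]
    print(f"{name}: r = {r:+.3f}")
```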

Optimization of the PPR model is based on minimizing the mean-squared error of the approximation, as in backpropagation networks and as shown in Table I. The projection directions α, basis functions θ, and regression coefficients β are optimized, one at a time for each node, while keeping all other parameters constant. New nodes are added to approximate the residual output error. The parameters of previously added nodes are optimized further by backfitting: the previously fitted parameters are adjusted by cyclically minimizing the overall mean-squared error of the residuals, so that the overall error is further reduced. [Pg.39]
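A toy sketch of this scheme, not the Table I implementation: polynomial ridge functions stand in for the basis functions (folding β into the polynomial coefficients), nodes are added greedily to the residual, and a single backfitting pass refits each node against the residual of the others. All data and settings are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)

# illustrative data: y depends on two projections of x
X = rng.normal(size=(300, 3))
y = np.tanh(X @ [1.0, -1.0, 0.0]) + 0.5 * (X @ [0.0, 0.5, 1.0]) ** 2

def fit_node(X, r, deg=3):
    """Fit one PPR node to residual r: find a projection direction and a
    polynomial ridge function minimizing the mean-squared error."""
    def mse(alpha):
        a = alpha / np.linalg.norm(alpha)
        z = X @ a
        theta = np.polyfit(z, r, deg)
        return np.mean((r - np.polyval(theta, z)) ** 2)
    res = minimize(mse, x0=rng.normal(size=X.shape[1]), method="Nelder-Mead")
    a = res.x / np.linalg.norm(res.x)
    theta = np.polyfit(X @ a, r, deg)
    return a, theta

# add nodes one at a time, each approximating the residual output error
nodes, resid = [], y.copy()
for _ in range(2):
    a, theta = fit_node(X, resid)
    nodes.append((a, theta))
    resid = resid - np.polyval(theta, X @ a)

# one backfitting pass: refit each node against the residual of the others
for i in range(len(nodes)):
    partial = y - sum(np.polyval(th, X @ al)
                      for j, (al, th) in enumerate(nodes) if j != i)
    nodes[i] = fit_node(X, partial)

total = sum(np.polyval(th, X @ al) for al, th in nodes)
print(f"overall MSE after backfitting: {np.mean((y - total) ** 2):.4f}")
```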

One weakness of some multimedia models that must be considered by the user is inconsistency of time scales. For example, if we employ monthly averaged air concentrations to derive rainout values that feed a watershed model on fifteen-minute intervals, large errors can obviously occur. The air-land-water simulation (ALWAS) developed by Tucker and co-workers (12) overcomes this limitation by allowing sequential air quality outputs to provide deposition data to drive a soil model, which in turn is coupled to a surface water model. [Pg.98]

For an aquatic model of chemical fate and transport, the input loadings associated with both point and nonpoint sources must be considered. Point loads from industrial or municipal discharges can show significant daily, weekly, or seasonal fluctuations. Nonpoint loads determined either from data or nonpoint loading models are so highly variable that significant errors are likely. In all these cases, errors in input to a model (in conjunction with output errors, discussed below) must be considered in order to provide a valid assessment of model capabilities through the validation process. [Pg.159]

System Representation Errors. System representation errors refer to differences between the processes and the time and space scales represented in the model, and those that determine the response of the natural system. In essence, these errors are the major ones of concern when one asks "How good is the model?" When comparing model output with observed data in an attempt to evaluate model capabilities, the analyst must have an understanding of the major natural processes, and human impacts, that influence the observed data. Differences between model output and observed data can then be analyzed in light of the limitations of the model algorithm used to represent a particularly critical process, and to ensure that all such critical processes are modeled to some appropriate level of detail. For example, a... [Pg.159]

Output Errors. Output errors are analogous to input errors: they can lead to biased parameter values or erroneous conclusions about the ability of the model to represent the natural system. As noted earlier, whenever a measurement is made, the possibility of an error is introduced. For example, published U.S.G.S. stream-flow data often used in hydrologic models can be 5 to 15% or more in error; this, in effect, provides a tolerance range within which simulated values can be judged to be representative of the observed data. It can also provide a guide for terminating calibration efforts. [Pg.161]
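As a sketch of how such a tolerance range might be used in practice, the following compares simulated values against observed data with a hypothetical ±10% measurement-error band; the streamflow numbers and the 90% stopping rule are illustrative, not from the source.

```python
import numpy as np

def within_tolerance(observed, simulated, rel_err=0.10):
    """Flag simulated values that fall inside the measurement-error band
    of the observed data (here a hypothetical +/-10% tolerance)."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return np.abs(simulated - observed) <= rel_err * np.abs(observed)

obs = np.array([120.0, 95.0, 210.0, 64.0])   # observed streamflow (illustrative)
sim = np.array([131.0, 90.0, 198.0, 75.0])   # model output

ok = within_tolerance(obs, sim)
print(ok)                        # [ True  True  True False]
# a simple stopping rule: end calibration once, say, 90% of values are in band
print(ok.mean() >= 0.9)
```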

Output errors can be especially insidious, since the natural tendency of most model users is to accept the observed data values as the "truth" against which the adequacy and ability of the model will be judged. Model users should develop a healthy, informed scepticism of the observed data, especially when major, unexplained differences between observed and simulated values exist. The FAT workshop described earlier concluded that it is clearly inappropriate to allocate all differences between predicted and observed values as model errors; measurement errors in field data collection programs can be substantial and must be considered. [Pg.161]

If you encounter these functions, you can reformulate them as equivalent smooth functions by introducing additional constraints and variables. For example, consider the problem of fitting a model to n data points by minimizing the sum of weighted absolute errors between the measured and model outputs. This can be formulated as follows ... [Pg.384]
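The classical reformulation splits each residual into nonnegative positive and negative parts, turning the non-smooth absolute values into a linear program. A minimal sketch for a model that is linear in its parameters; the straight-line model, data, and unit weights are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

# illustrative data: fit y ~ a + b*x by minimizing sum of weighted absolute errors
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])
w = np.ones_like(y)
n = len(y)

# variables: [a, b, e_plus (n), e_minus (n)], with
#   y_i - (a + b*x_i) = e_plus_i - e_minus_i,  e_plus, e_minus >= 0,
# so |y_i - model_i| = e_plus_i + e_minus_i at the optimum
c = np.concatenate([[0.0, 0.0], w, w])            # minimize sum w_i (e+ + e-)
A_eq = np.zeros((n, 2 + 2 * n))
A_eq[:, 0] = 1.0                                  # a
A_eq[:, 1] = x                                    # b * x_i
A_eq[:, 2:2 + n] = np.eye(n)                      # + e_plus
A_eq[:, 2 + n:] = -np.eye(n)                      # - e_minus
b_eq = y
bounds = [(None, None), (None, None)] + [(0, None)] * (2 * n)

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
a, b = res.x[:2]
print(f"a = {a:.3f}, b = {b:.3f}, sum |error| = {res.fun:.3f}")
```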

The sensitivity of diffusion-model output to variations in input has been assessed by workers at Systems Applications, Inc., and at the California Department of Transportation. In each case, reports are in preparation and are therefore not yet available. It is important to distinguish between sensitivity and model performance. True physical or chemical sensitivity that is reflected by the simulation-model equations is a valid reflection of reality. But spurious error propagation through improper numerical integration techniques may be regarded as an artificial sensitivity. Such a distinction must be drawn carefully, lest great sensitivity come to be considered synonymous with unacceptable performance. [Pg.233]

Finally, the MOS should also take into account the uncertainties in the estimated exposure. For predicted exposure estimates, this requires an uncertainty analysis (Section 8.2.3) involving the determination of the uncertainty in the model output value, based on the collective uncertainty of the model input parameters. General sources of variability and uncertainty in exposure assessments are measurement errors, sampling errors, variability in natural systems and human behavior, limitations in model description, limitations in generic or indirect data, and professional judgment. [Pg.348]

In the equation, Y is the model output, f is the model, and (x1, ..., xp) are random model parameters with standard errors (s1, ..., sp). The variance of the model output is given by the first-order Taylor expansion ... [Pg.62]
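The standard first-order (delta-method) result being referred to is Var(Y) ≈ Σi (∂f/∂xi)² si², assuming independent parameters. A minimal sketch with a hypothetical model and numerical central-difference derivatives:

```python
import numpy as np

def model(x):
    # hypothetical model Y = f(x1, x2, x3)
    x1, x2, x3 = x
    return x1 * np.exp(-x2 * 2.0) + x3

x0 = np.array([5.0, 0.3, 1.2])      # parameter estimates (illustrative)
s = np.array([0.5, 0.05, 0.1])      # their standard errors

# central-difference gradient of f at x0
h = 1e-6
grad = np.array([
    (model(x0 + h * e) - model(x0 - h * e)) / (2 * h)
    for e in np.eye(3)
])

# first-order (delta-method) approximation:
#   Var(Y) ~= sum_i (df/dx_i)^2 * s_i^2, assuming independent parameters
var_y = np.sum(grad**2 * s**2)
print(f"Y = {model(x0):.4f}, sd(Y) ~= {np.sqrt(var_y):.4f}")
```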

Approximation methods can be useful, but as the degree of complexity of the input distributions or the model increases, in terms of more complex distribution shapes (as reflected by skewness and kurtosis) and non-linear model forms, one typically needs to carry more terms in the Taylor series expansion in order to produce an accurate estimate of percentiles of the distribution of the model output. Thus, such methods are often most widely used simply to quantify the mean and variance of the model output, although even for these statistics, substantial errors can accrue in some situations. Thus, the use of such methods requires careful consideration, as described elsewhere (e.g. Cullen Frey, 1999). [Pg.54]

After the formulation stage, we have all the equations of the model, but they are not yet useful, because the parameters in the equations do not have particular values. Consequently, the model cannot be used to reproduce the behavior of a physical entity. The parameter estimation procedure consists of obtaining a set of parameters that allows simulation with the model. In many cases, parameters can be found in the literature, but in other cases the model must be fitted to the experimental behavior by mathematical procedures. The simplest and most widely used procedures are based on optimization algorithms that minimize the differences between the experimental observations and the model outputs. The most frequently used criterion for optimizing the parameter values is least squares. In this procedure, a value is proposed for every model parameter and the model is run. The error criterion is then calculated as the sum of the squares of the residuals (the differences between each experimental value and the corresponding modeled value). Finally, an optimization procedure is used to change the values of the model parameters so as to minimize this criterion. [Pg.101]
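A minimal sketch of this propose-run-evaluate-adjust loop, using a hypothetical first-order decay model and a standard least-squares optimizer; the data and initial guess are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

# hypothetical experiment: first-order decay observed at discrete times
t_obs = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
y_obs = np.array([10.1, 7.2, 5.3, 2.9, 0.8])

def model(params, t):
    c0, k = params
    return c0 * np.exp(-k * t)

def residuals(params):
    # residuals: difference between each experimental and modeled value;
    # the optimizer minimizes the sum of their squares
    return y_obs - model(params, t_obs)

fit = least_squares(residuals, x0=[8.0, 0.1])   # initial guess for (c0, k)
print(f"c0 = {fit.x[0]:.3f}, k = {fit.x[1]:.3f}, SSE = {2 * fit.cost:.4f}")
```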

An uncertainty analysis involves the determination of the variation or imprecision in an output function based on the collective variance of the model inputs. One of the five issues in uncertainty analysis that must be confronted is how to distinguish the relative contribution of variability (i.e. heterogeneity) from that of true uncertainty (measurement imprecision) in the characterization of the predicted outcome. Variability refers to quantities that are distributed empirically - such factors as soil characteristics, weather patterns and human characteristics - which come about through processes that we expect to be stochastic because they reflect actual variations in nature. These processes are inherently random or variable and cannot be represented by a single value, so that we can determine only their moments (mean, variance, skewness, etc.) with precision. In contrast, true uncertainty or model specification error (e.g. statistical estimation error) refers to an input that, in theory, has a single value, which cannot be known with precision due to measurement or estimation error. [Pg.140]

The modelled output signal is then calculated by performing the inverse FFT on the modelled output spectrum. The parameters of the model of the combustion chamber are found by fitting the modelled output signal to the measured output signal. The fitting is effected by minimising the least-squares error. [Pg.580]
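A sketch of the same fitting scheme with stand-in details: a single damped resonance plays the role of the chamber model, the modelled signal is obtained by inverse FFT of the modelled spectrum, and the parameters are found by minimising the least-squares error against a (here synthesized) measured signal.

```python
import numpy as np
from scipy.optimize import least_squares

# hypothetical measured pressure signal (synthesized here for illustration)
n, fs = 512, 1000.0
freqs = np.fft.rfftfreq(n, d=1.0 / fs)

def modelled_spectrum(params):
    # a single damped resonance as a stand-in for the chamber model
    amp, f0, damping = params
    return amp / (damping + 1j * (freqs - f0))

def modelled_signal(params):
    # inverse FFT of the modelled output spectrum gives the time signal
    return np.fft.irfft(modelled_spectrum(params), n=n)

true = [2.0, 120.0, 15.0]
measured = modelled_signal(true) + 0.01 * np.random.default_rng(1).normal(size=n)

# fit the model parameters by minimising the least-squares error
fit = least_squares(lambda p: modelled_signal(p) - measured, x0=[1.0, 100.0, 10.0])
print(fit.x)   # should recover roughly [2.0, 120.0, 15.0]
```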

In this work we consider a benchmark control problem of the isothermal operation of a continuous stirred tank reactor (CSTR) where the Van de Vusse reactions take place [12, 13] (i.e. A -> B -> C and 2A -> D). The performance index is defined as the weighted sum of squares of errors between the setpoint and the estimated model output predicted for the time step in the future, with w(tk) = 0.0 for all k < Mp and w(tk) = 10,000 for k = Mp. The... [Pg.565]

Output Error (OE) model. When the properties of disturbances are not modeled and the noise model H(q) is chosen to be identity (nc = 0 and nd = 0), the noise source w(k) is equal to e(k), the difference (error) between the actual output and the noise-free output. [Pg.87]
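A minimal sketch of output-error identification for a hypothetical first-order system: the simulated model output is driven only by the input and the model's own past outputs, never by the measured output, so the residual being minimized is exactly the output error e(k).

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)

# hypothetical first-order system: y(k) = a*y(k-1) + b*u(k-1) + e(k),
# where e(k) is white measurement noise added at the output only
a_true, b_true, N = 0.8, 0.5, 400
u = rng.normal(size=N)
y = np.zeros(N)
for k in range(1, N):
    y[k] = a_true * y[k - 1] + b_true * u[k - 1]
y += 0.05 * rng.normal(size=N)

def simulate(params):
    # OE simulation: the model output uses its OWN past outputs, not the
    # measured y -- so the residual y - ym is the output error e(k)
    a, b = params
    ym = np.zeros(N)
    for k in range(1, N):
        ym[k] = a * ym[k - 1] + b * u[k - 1]
    return ym

fit = least_squares(lambda p: y - simulate(p), x0=[0.5, 0.1])
print(fit.x)   # approximately [0.8, 0.5]
```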

Time series models of the output error such as Eq. 9.5 can be used to identify the dynamic response characteristics of e k) [148]. Dynamic response characteristics such as overshoot, settling time and cycling can be extracted from the pulse response of the fitted time series model. The pulse response of the estimated e k) can be compared to the pulse response of the desired response specification to determine if the output error characteristics are acceptable [148]. [Pg.236]
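As a sketch of this kind of check, the following computes the pulse response of a hypothetical fitted pulse transfer function with scipy and extracts crude peak and settling-time figures; the coefficients and the 2% settling band are illustrative.

```python
import numpy as np
from scipy.signal import dimpulse

# hypothetical fitted time-series (pulse transfer function) model:
#   G(q) = b1*q^-1 / (1 + a1*q^-1 + a2*q^-2)
num = [0.0, 0.3]                # b0 = 0, b1 = 0.3
den = [1.0, -1.2, 0.52]         # underdamped pole pair -> cycling/overshoot
dt = 1.0

t, (h,) = dimpulse((num, den, dt), n=40)
h = h.ravel()

# crude response characteristics from the pulse response
peak = h.max()
final = h[-1]
settled = np.where(np.abs(h - final) > 0.02 * np.abs(peak))[0]
settling_time = (settled[-1] + 1) * dt if settled.size else 0.0
print(f"peak = {peak:.3f}, settling time ~ {settling_time:.0f} samples")
```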

M. Verhaegen and P. Dewilde. Subspace model identification. Part I: The output-error state-space model identification class of algorithms. Int. J. Control, 56:1187-1210, 1992.

For example, Wu (2000) computed the AUC, AUMC, and MRT for a 1-compartment model and then showed what impact changing the volume of distribution and elimination rate constant by plus or minus their respective standard errors had on these derived parameters. The difference between the actual model outputs and results from the analysis can then be compared directly or expressed as a relative difference. Alternatively, instead of varying the parameters by some fixed percent, a Monte Carlo approach can be used where the model parameters are randomly sampled from some distribution. Obviously this approach is more complex. A more comprehensive approach is to explore the impact of changes in model parameters simultaneously, a much more computationally intensive problem, possibly using Latin hypercube sampling (Iman, Helton, and Campbell, 1981), although this approach is not often seen in pharmacokinetics. [Pg.40]
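A sketch of the Monte Carlo variant for a 1-compartment IV bolus model, where AUC = dose/(V·k) and MRT = 1/k; the estimates, standard errors, and the normality/independence assumptions for the sampling distributions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# hypothetical 1-compartment IV bolus estimates and standard errors
dose = 100.0
V, se_V = 20.0, 2.0        # volume of distribution (L)
k, se_k = 0.10, 0.01       # elimination rate constant (1/h)

def derived(V, k):
    auc = dose / (V * k)   # AUC = dose / clearance
    mrt = 1.0 / k          # MRT for a 1-compartment model
    return auc, mrt

auc0, mrt0 = derived(V, k)

# Monte Carlo sensitivity: sample the parameters from their (assumed
# normal, independent) sampling distributions and examine the spread of
# the derived quantities relative to the point estimates
Vs = rng.normal(V, se_V, 10_000)
ks = rng.normal(k, se_k, 10_000)
aucs, mrts = derived(Vs, ks)

print(f"AUC: {auc0:.1f}, MC 95% interval "
      f"({np.percentile(aucs, 2.5):.1f}, {np.percentile(aucs, 97.5):.1f})")
print(f"MRT: {mrt0:.1f}, MC 95% interval "
      f"({np.percentile(mrts, 2.5):.1f}, {np.percentile(mrts, 97.5):.1f})")
```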

Using actual data sets, Kowalski (2001) showed that in five out of six case studies WAM selected the same model as stepwise procedures, but it did not perform as well when the data were rich and the FO approximation was used. He concluded that WAM might actually perform better than FOCE at choosing a model. However, one potential drawback of this approach is that it requires successful estimation of the variance-covariance matrix, which can sometimes require special handling to obtain (e.g., if the model is sensitive to initial estimates, the variance-covariance matrix may not be readily evaluated). Therefore, the WAM algorithm may not be suitable for automated searches if the model output does not always include the standard errors. [Pg.238]

Figure 22.4 illustrates a different way to adjust the parameters of the controller. We postulate a reference model which tells us how the controlled process output ideally should respond to the command signal (set point). The model output is compared to the actual process output. The difference (error em) between the two outputs is used through a computer to adjust the parameters of the controller in such a way as to minimize the integral square error ... [Pg.228]
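The criterion referred to is the integral square error of e_m, i.e. the integral of e_m²(t) over the evaluation window. A small sketch comparing a hypothetical first-order reference model with a slower process response and computing the ISE; the time constants are illustrative.

```python
import numpy as np

dt, T = 0.01, 5.0
t = np.arange(0.0, T, dt)

# reference model: ideal first-order step response with time constant 0.5
tau_m = 0.5
y_model = 1.0 - np.exp(-t / tau_m)

# actual (closed-loop) process output, here a slower first-order response
tau_p = 0.8
y_proc = 1.0 - np.exp(-t / tau_p)

# error between model and process outputs, and its integral square error,
# the quantity the adaptation mechanism tries to minimize
e_m = y_model - y_proc
ise = np.sum(e_m**2) * dt
print(f"ISE = {ise:.4f}")
```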

The third set, the external set, is used to check the performance of the trained network and to compare it with other configurations or topologies. At the end of the training process, a well-fit model is the desired result. If the network has been properly trained, it will be able to generalize to unknown input data with the same relationship it has learned. This capability is evaluated with the test data set by feeding in these values and computing the output error. [Pg.147]
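A minimal sketch of this final evaluation step; the held-out targets and network predictions are placeholders for real test data.

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared output error on a held-out data set."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.mean((y_true - y_pred) ** 2)

# hypothetical external test set and trained-network predictions
y_test = np.array([0.20, 0.55, 0.80, 0.35])
y_hat = np.array([0.18, 0.60, 0.74, 0.40])

print(f"test MSE = {mse(y_test, y_hat):.5f}")
# compare this figure across topologies; a test error much larger than the
# training error suggests the network has not generalized
```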

FIGURE 12.3 Backpropagating system output error through the system model. [Pg.196]

Influenced by the mindset of forward modeling problems, one is easily led to adopt complicated model classes so as to capture various complex physical mechanisms. However, the more complicated the model class, the more uncertain parameters are normally introduced unless extra mathematical constraints are imposed. In the former case, the model output may not necessarily be accurate even if the model characterizes the physical system well, since the combination of many small errors from each uncertain parameter can induce a large output error. In the latter case, it is possible that the extra constraints induce substantial errors. Therefore, it is important to use a proper model class for system identification purposes. In this chapter, the Bayesian model class selection approach is introduced and applied to select the most plausible/suitable class of mathematical models representing a static or dynamical (structural, mechanical, atmospheric, ...) system (from some specified model classes) by using its response measurements. This approach has been shown to be promising in several research areas, such as artificial neural networks [164, 297], structural dynamics and model updating [23], damage detection [150] and fracture mechanics [151]. [Pg.214]
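As a hedged illustration of the complexity trade-off behind model class selection (not the chapter's full Bayesian evidence computation), the following ranks polynomial model classes by BIC, a large-sample approximation to minus twice the log evidence; the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(0, 1, 50)
y = 1.0 + 2.0 * x - 1.5 * x**2 + 0.05 * rng.normal(size=x.size)

def bic_for_degree(d):
    # fit the model class, form the Gaussian maximum log-likelihood from
    # the residuals, then penalize the number of parameters
    coeffs = np.polyfit(x, y, d)
    resid = y - np.polyval(coeffs, x)
    n = x.size
    sigma2 = resid @ resid / n
    log_lik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1.0)
    k = d + 2            # polynomial coefficients plus the noise variance
    return -2.0 * log_lik + k * np.log(n)

for d in range(1, 5):
    print(f"degree {d}: BIC = {bic_for_degree(d):.1f}")
# the quadratic class should score best: richer classes fit slightly
# better but pay a complexity penalty, echoing the trade-off above
```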

Consideration of the equations of a faulty LTI system and of a Luenberger state variable observer reveals that any faults affecting the system have an effect on the observer output error which, after transients have settled, can be used as a fault indicator ([34], Sect. 5.2.2). Assume that the dynamics of a system may be represented by the linear time-invariant state space model... [Pg.10]
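A minimal simulation sketch of this idea with an illustrative second-order discrete-time system: an additive fault is switched on partway through, and the observer output error (residual), which settles to zero beforehand, deviates afterwards and can be thresholded as a fault indicator.

```python
import numpy as np

# discrete-time LTI plant x(k+1) = A x + B u + fault, y = C x (illustrative)
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [0.1]])
C = np.array([[1.0, 0.0]])
L = np.array([[0.5],
              [0.3]])          # observer gain (A - L C has eigenvalues 0.7, 0.5)

N = 100
x = np.zeros((2, 1))
xh = np.zeros((2, 1))
residual = np.zeros(N)
for k in range(N):
    u = np.array([[1.0]])
    fault = np.array([[0.2], [0.0]]) if k >= 60 else np.zeros((2, 1))
    y = C @ x
    # Luenberger observer: a copy of the model corrected by the output error
    r = y - C @ xh
    residual[k] = r.item()
    xh = A @ xh + B @ u + L @ r
    x = A @ x + B @ u + fault

# before the fault the residual settles to ~0; after k = 60 it deviates,
# so a simple threshold on |r| serves as a fault indicator
print(np.abs(residual[50:55]).max(), np.abs(residual[70:]).max())
```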

