Big Chemical Encyclopedia


Parameter estimation reliability

While many methods for parameter estimation have been proposed, experience has shown some to be more effective than others. Since most phenomenological models are nonlinear in their adjustable parameters, the best estimates of these parameters can be obtained from a formalized method which properly treats the statistical behavior of the errors associated with all experimental observations. For reliable process-design calculations, we require not only estimates of the parameters but also a measure of the errors in the parameters and an indication of the accuracy of the data. [Pg.96]

The primary purpose for expressing experimental data through model equations is to obtain a representation that can be used confidently for systematic interpolations and extrapolations, especially to multicomponent systems. The confidence placed in the calculations depends on the confidence placed in the data and in the model. Therefore, the method of parameter estimation should also provide measures of reliability for the calculated results. This reliability depends on the uncertainties in the parameters, which, with the statistical method of data reduction used here, are estimated from the parameter variance-covariance matrix. This matrix is obtained as a last step in the iterative calculation of the parameters. [Pg.102]
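As a sketch of how such a variance-covariance matrix falls out of the final iteration, the following forms cov ≈ s²(JᵀJ)⁻¹ from the Jacobian returned at convergence. The exponential-decay model and the synthetic data are illustrative assumptions, not taken from the text:

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic data from a hypothetical two-parameter model y = a * exp(-b * t).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 40)
y = 2.0 * np.exp(-0.7 * t) + rng.normal(0.0, 0.02, t.size)

def residuals(p):
    a, b = p
    return a * np.exp(-b * t) - y

fit = least_squares(residuals, x0=[1.0, 1.0])

# Variance-covariance matrix obtained as a last step of the iterative
# calculation: cov ~ s^2 (J^T J)^-1, with s^2 the residual variance.
n, npar = t.size, fit.x.size
s2 = np.sum(fit.fun**2) / (n - npar)
cov = s2 * np.linalg.inv(fit.jac.T @ fit.jac)
std_err = np.sqrt(np.diag(cov))
```

The square roots of the diagonal elements are the parameter standard errors; the off-diagonal elements indicate how strongly the parameter estimates are correlated.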

The most common way in which the global carbon budget is calculated and analyzed is through simple diagrammatical or mathematical models. Diagrammatical models usually indicate sizes of reservoirs and fluxes (Figure 1). Most mathematical models use computers to simulate carbon flux between terrestrial ecosystems and the atmosphere, and between oceans and the atmosphere. Existing carbon cycle models are simple, in part, because few parameters can be estimated reliably. [Pg.417]

Reliability analysis of parameters estimated in differential equations should be performed rigorously via sensitivity analysis. The details of this analysis can be found in the paper by Froment and Hosten (1981). [Pg.547]
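A minimal sketch of the idea behind such a sensitivity analysis, using a hypothetical first-order decay model rather than the reaction systems treated by Froment and Hosten: the sensitivity of the solution to the rate constant indicates how reliably that constant can be estimated from the sampled points.

```python
import numpy as np

# First-order decay y(t) = y0 * exp(-k * t); sensitivity of the solution to k.
y0, k = 1.0, 0.5
t = np.linspace(0.0, 10.0, 50)

# Analytic sensitivity dy/dk, checked against a central finite difference.
sens_analytic = -t * y0 * np.exp(-k * t)
dk = 1e-6
sens_fd = (y0 * np.exp(-(k + dk) * t) - y0 * np.exp(-(k - dk) * t)) / (2 * dk)

# A scalar measure of how reliably k can be estimated from these samples:
# the single-parameter Fisher information, the sum of squared sensitivities.
fisher_info = np.sum(sens_analytic**2)
```

Sampling times where the sensitivity is near zero contribute almost nothing to the information about k, which is why the design of the sampling schedule affects the reliability of the estimate.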

Determination of confidence limits for non-linear models is much more complex. Usually the non-linear model is linearized by Taylor expansion and linear theory is applied to the truncated series. The approximate measures of uncertainty in the parameter estimates are then the confidence limits as defined above for linear models. They are not rigorously valid, but they provide some idea of the reliability of the estimates. The joint confidence region for non-linear models is given exactly by Eqn. (B-34). In contrast to the ellipsoidal contours obtained for linear models, it is generally banana-shaped. [Pg.548]
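The linearized confidence limits described above can be sketched as follows; the saturation-type model, the synthetic data, and the 95% level are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import t as t_dist

# Hypothetical nonlinear model for illustration: y = a * x / (b + x).
rng = np.random.default_rng(1)
x = np.linspace(0.1, 10.0, 30)
y = 3.0 * x / (1.5 + x) + rng.normal(0.0, 0.05, x.size)

popt, pcov = curve_fit(lambda x, a, b: a * x / (b + x), x, y, p0=[1.0, 1.0])

# Linearized (approximate) 95% confidence limits: p_i +/- t * se_i.
dof = x.size - popt.size
tval = t_dist.ppf(0.975, dof)
se = np.sqrt(np.diag(pcov))
ci_low, ci_high = popt - tval * se, popt + tval * se
```

These intervals are the linear-theory approximation; for a strongly non-linear model the exact joint confidence region of Eqn. (B-34) can be much more distorted than these symmetric limits suggest.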

With this modification the conditioning of matrix A is significantly improved, and cond(AR) gives a more reliable measure of the ill-conditioning of the parameter estimation problem. This modification has been implemented in all computer programs provided with this book. [Pg.146]
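The text does not reproduce the modification itself; as a sketch of one common normalization of this kind, the columns of a hypothetical design matrix are scaled to unit norm before forming the normal equations, so that the condition number reflects genuine parameter correlation rather than scale artifacts:

```python
import numpy as np

# An ill-conditioned least-squares design matrix: two columns on very
# different scales (e.g., parameters with very different magnitudes).
rng = np.random.default_rng(2)
A = np.column_stack([rng.normal(size=50), 1e6 * rng.normal(size=50)])

cond_raw = np.linalg.cond(A.T @ A)

# Normalize each column to unit Euclidean norm before forming A^T A.
A_scaled = A / np.linalg.norm(A, axis=0)
cond_scaled = np.linalg.cond(A_scaled.T @ A_scaled)
```

After scaling, a large condition number can no longer be blamed on units or magnitudes; it signals real near-collinearity between the parameters.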

A Three-Dimensional, Three-Phase Automatic History-Matching Model: Reliability of Parameter Estimates... [Pg.376]

Tan, T.B. and N. Kalogerakis, "A Three-Dimensional, Three-Phase Automatic History-Matching Model: Reliability of Parameter Estimates", J. Can. Petr. Technology, 31(3), 34-41 (1992). [Pg.401]

Therefore, a flexible method to evaluate physical and chemical system parameters is still needed (2, 3). The model identification technique presented in this study allows flexibility in model formulation and inclusion of the available experimental measurements to identify the model. The parameter estimation scheme finds the optimal set of parameters by minimizing the sum of the differences between model predictions and experimental observations. Since some experimental data are more reliable than others, it is advantageous to assign higher weights to the dependable data. [Pg.103]
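A minimal sketch of such a weighted scheme for a hypothetical straight-line model, with higher weights assigned to the more dependable points:

```python
import numpy as np

# Weighted linear least squares for y = theta0 + theta1 * x (hypothetical data).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.1, 1.9, 4.2, 5.8, 8.1])
w = np.array([1.0, 1.0, 1.0, 4.0, 4.0])   # last two points judged more reliable

X = np.column_stack([np.ones_like(x), x])
W = np.diag(w)

# Minimize sum_i w_i (y_i - X theta)^2  ->  theta = (X^T W X)^-1 X^T W y
theta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
```

Weights are commonly taken as the reciprocal variances of the individual measurements, so that unreliable observations pull the fit proportionally less.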

The parameter estimation approach is important in judging the reliability and accuracy of the model. If the confidence intervals for a set of estimated parameters are as large as the parameters themselves, the reliability one would place in the model's predictions would be low. However, if the parameters are identified with high precision (i.e., small confidence intervals), one would tend to trust the model's predictions. The nonlinear optimization approach to parameter estimation allows the confidence interval for the estimated parameter to be approximated. It is thereby possible to evaluate whether a parameter is identifiable from a particular set of measurements, and with how much reliability. [Pg.104]

Several investigators have offered various techniques for estimating crystallization growth and nucleation parameters. Parameters such as kg, 6, and ki are the ones usually estimated. Often different results are presented for identical systems. These discrepancies are discussed by several authors (13,14). One weakness of most of these schemes is that the validity of the parameter estimates, i.e., the confidence in the estimates, is not assessed. This section discusses two of the more popular routines to evaluate kinetic parameters and introduces a method that attempts to improve the parameter inference and provide a measure of the reliability of the estimates. [Pg.104]

This study demonstrates that the nonlinear optimization approach to parameter estimation is a flexible and effective method. Although computationally intensive, this method lends itself to a wide variety of process model formulations and can provide an assessment of the uncertainty of the parameter estimates. Other factors, such as measurement error distributions and instrumentation reliability can also be integrated into the estimation procedure if they are known. The methods presented in the crystallization literature do not have this flexibility in model formulation and typically do not address the parameter reliability issue. [Pg.113]

Note: Bubble column mass transfer parameters are difficult to estimate reliably. The above figures are based on results from the sulphite oxidation method at a higher electrolyte concentration than the 0.05 M solution to be used in the present example, and therefore the values for kL and a may be overestimates. [Pg.221]

A limitation of this method is that the pitch period must be known exactly to obtain reliable estimates. The running DFT can be viewed as a filter bank where each filter is a cosine modulated version of a prototype filter given by a rectangular window of length N over the interval 0 < n < N. Based on this interpretation, an improvement in sine-wave parameter estimation can be made by generalizing the window shape as described in the following section. [Pg.474]
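The effect of generalizing the window shape can be sketched numerically; the signal length, the half-bin pitch error, and the choice of a Hann window are illustrative assumptions:

```python
import numpy as np

# Amplitude of a sinusoid estimated by projecting onto a complex exponential at
# an *assumed* frequency half a DFT bin away from the true one (imperfect pitch).
N = 256
n = np.arange(N)
true_bin, assumed_bin = 50.5, 50.0
x = np.cos(2 * np.pi * true_bin * n / N)          # unit-amplitude sinusoid

def amp_estimate(window):
    # Windowed inner product, normalized so an exactly on-frequency
    # sinusoid yields amplitude 1.
    c = np.sum(x * window * np.exp(-2j * np.pi * assumed_bin * n / N))
    return 2.0 * np.abs(c) / np.sum(window)

amp_rect = amp_estimate(np.ones(N))       # rectangular prototype window
amp_hann = amp_estimate(np.hanning(N))    # smoother generalized window
```

With the half-bin frequency error, the rectangular window's narrow, rapidly falling mainlobe biases the amplitude estimate well below 1, while the wider, flatter mainlobe of the Hann window keeps the estimate closer to the true value, which is the motivation for generalizing the window shape.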

An assumption concerning the number of compartments is, by nature, not required. For reliable results and precise parameter estimates, however, a relatively large number of data points per individual are required. Phase 1 studies of mAbs usually provide sufficient data for a noncompartmental analysis, but the assumption of linear pharmacokinetics is not valid for most mAbs. This prerequisite, however, was frequently neglected during the early years of therapeutic mAb development, and an overall estimate for CL, for example, was frequently reported in the literature. In dose-escalating studies, however, the concentration-time plots of the raw data clearly indicate that the slope of the terminal phase is not parallel for the different doses, but increases with increasing dose (Fig. 3.10). As a result, different clearance values are listed for different doses. For example, the clearance of trastuzumab was reported to be 88.3 mL/h for a 10-mg dose, 34.3 mL/h for a 50-mg dose, 25.0 mL/h for a 100-mg dose, 19.0 mL/h for a 250-mg dose, and 16.7 mL/h for a 300-mg dose. [Pg.79]

Approaches based on parameter estimation assume that the faults lead to detectable changes of physical system parameters. Therefore, FD can be pursued by comparing the estimates of the system parameters with the nominal values obtained in healthy conditions. The operative procedure, originally established in [23], requires an accurate model of the process (including a reliable nominal estimate of the model parameters) and the determination of the relationship between model parameters and physical parameters. Then, an online estimation of the process parameters is performed on the basis of available measures. This approach, of course, may prove ineffective when the parameter estimation technique requires the solution of a nonlinear optimization problem. In such cases, reduced-order or simplified mathematical models may be used, at the expense of accuracy and robustness. Moreover, fault isolation can be difficult to achieve, since model parameters cannot always be converted back into corresponding physical parameters, and thus the influence of each physical parameter on the residuals cannot easily be determined. [Pg.127]
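A toy sketch of this approach, using scalar recursive least squares with exponential forgetting; the model, the threshold, and the fault scenario are hypothetical, not from [23]:

```python
import numpy as np

# Fault detection by online parameter estimation: recursively estimate the gain
# theta in y = theta * u and flag a fault when it drifts from the nominal value.
rng = np.random.default_rng(3)
theta_nominal = 2.0
theta_hat, P = 0.0, 100.0      # RLS state: estimate and its "covariance"
lam = 0.95                     # forgetting factor so the estimator tracks changes
alarm_at = None

for k in range(400):
    u = rng.uniform(0.5, 1.5)
    theta_true = theta_nominal if k < 200 else 1.2   # fault injected at k = 200
    y = theta_true * u + rng.normal(0.0, 0.01)

    # Scalar recursive least squares with exponential forgetting.
    gain = P * u / (lam + u * P * u)
    theta_hat += gain * (y - theta_hat * u)
    P = (P - gain * u * P) / lam

    # Residual on the parameter itself, compared with the healthy nominal value.
    if alarm_at is None and k > 20 and abs(theta_hat - theta_nominal) > 0.3:
        alarm_at = k
```

The forgetting factor trades off noise rejection against detection delay: values closer to 1 smooth the estimate but make the alarm slower to fire after the parameter change.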

Consider all alternatives for estimation in terms of reliability, accuracy, time required, and cost efficiency. Develop predictive models that allow for in silico screening, rather than necessitating prior synthesis of the compound. Analyze the literature for both pharmacokinetic and toxicokinetic parameter estimation, to identify models that already exist or ones that could be suitably modified for the parameter of interest... [Pg.263]

Despite highly developed computer technologies and numerical methods, the application of new-generation rate-based models requires a high computational effort, which is often related to numerical difficulties. This is one reason for the relatively limited application of the modeling methods described above to industrial problems. Therefore, further study in this field - as well as in the area of model parameter estimation - is required in order to bridge this gap and to provide process engineers with reliable, consistent, robust and user-friendly simulation tools for reactive absorption operations. [Pg.305]

Again, the parameter estimates obtained are not stable, except for n. These studies suggest that examination of D as the dependent variable will not provide reliable models. [Pg.561]

To check the reliability of the results, the parameter estimation was also performed using the full data set. It can be seen from Table 4 that there is no loss of fitting quality for the CO2-diphenyl system, and that the parameters are approximately the same when the conventional and discrimination procedures are compared. [Pg.383]

The extension of ideal phase analysis of the Maxwell-Stefan equations to nonideal liquid mixtures requires the sufficiently accurate estimation of composition-dependent mutual diffusion coefficients and the matrix of thermodynamic factors. However, experimental data on mutual diffusion coefficients are rare, and prediction methods are satisfactory only for certain types of liquid mixtures. The thermodynamic factor may be calculated from activity coefficient models such as NRTL or UNIQUAC, which have adjustable parameters estimated from experimental phase equilibrium data. The group contribution method of UNIFAC may also be helpful, as it has a readily available parameter table consisting of many species. If, however, reliable data are not available, then the averaged values of the generalized Maxwell-Stefan diffusion coefficients and the matrix of thermodynamic factors are calculated at some mean composition between x0i and xzi. Hence, the matrix of zero flux mass transfer coefficients [k ] is estimated by... [Pg.335]
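As a sketch of the thermodynamic-factor calculation for a binary mixture, a one-parameter Margules model is used below as a simpler stand-in for NRTL or UNIQUAC; the parameter value is hypothetical and would in practice come from phase equilibrium data:

```python
import numpy as np

# Thermodynamic factor for a binary mixture, Gamma = 1 + x1 * d(ln gamma1)/dx1,
# using the one-parameter Margules model ln gamma1 = A * x2^2 as a stand-in.
A = 0.8          # hypothetical Margules parameter

def ln_gamma1(x1):
    return A * (1.0 - x1) ** 2

def thermodynamic_factor(x1, h=1e-6):
    # Central finite difference of ln gamma1 with respect to x1.
    dlng = (ln_gamma1(x1 + h) - ln_gamma1(x1 - h)) / (2 * h)
    return 1.0 + x1 * dlng

x1 = 0.4
gamma_fd = thermodynamic_factor(x1)
gamma_analytic = 1.0 - 2.0 * A * x1 * (1.0 - x1)   # closed form for this model
```

For a positive-deviation mixture the factor falls below unity at intermediate compositions, which directly attenuates the Maxwell-Stefan driving forces relative to the ideal case.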

Analysis of the shape of error surfaces. To conclude this section, we consider a more quantitative approach to error estimation. The first step is to estimate the accuracy of the individual data points; this can be done either by analysis of the variability of replicate measurements, or from the variation of the fitted result. From that, one can assess the shape of the error surface in the region of the minimum. The procedure is straightforward: the square root of the error, defined as the SSD, is taken as a measure of the quality of the fit. A maximum allowed error is defined which depends on the reliability of the individual points, for example, 30% more than with the best fit if the points are scattered by about 30%. Then each variable (not the SSD as before) is minimised and also maximised, with the further condition that the sum of errors squared (SSD) should not increase by more than the fraction defined above. This method allows good estimates to be made of the different accuracies of the component variables, and enables accuracy to be estimated reliably even in complex analyses. Finally, it reveals whether parameters are correlated. This is an important matter since it happens often, and in some extreme cases where parameters are tightly correlated it leads to situations where individual constants are effectively not defined at all, merely their products or quotients. Correlations can also occur between global and local parameters. [Pg.330]
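The procedure can be sketched for a single parameter of a hypothetical straight-line fit; the data are invented, while the 30% acceptance factor follows the example in the text:

```python
import numpy as np

# Error-surface scan for one parameter: accept values whose fit error (the
# square root of the SSD) is at most 30% worse than the best fit.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.2, 1.1, 1.8, 3.3, 3.9, 5.2])   # roughly y = x with scatter

def ssd(slope, intercept):
    return np.sum((y - (slope * x + intercept)) ** 2)

# Best fit by ordinary least squares.
slope_best, intercept_best = np.polyfit(x, y, 1)
err_best = np.sqrt(ssd(slope_best, intercept_best))

# Scan the slope; at each trial value re-optimize the other parameter, and
# keep the trial if sqrt(SSD) <= 1.3 * best (the "30% worse" criterion).
accepted = []
for s in np.linspace(slope_best - 1.0, slope_best + 1.0, 401):
    b = np.mean(y - s * x)                 # optimal intercept for fixed slope
    if np.sqrt(ssd(s, b)) <= 1.3 * err_best:
        accepted.append(s)

slope_low, slope_high = min(accepted), max(accepted)
```

The interval [slope_low, slope_high] is the minimised and maximised value of the variable under the SSD constraint; repeating the scan for each parameter, and inspecting how the re-optimized values of the other parameters move during the scan, reveals the correlations discussed above.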

During the parameter estimation, it was noted that the model is relatively insensitive to changes in k, and therefore it should not be taken as a reliable prediction of the Diels-Alder reaction rate constant. Effectively, the rate R is very large compared with the rates of initiation and chain transfer, and therefore the DH ratio calculation could be simplified without significantly changing the model results ... [Pg.143]

Rheological model: (a) Is it appropriate for the experimental data? (b) How reliable is the model parameter estimation software? Has the reliability of the software been checked? (c) What are you looking for in the data (e.g., effect of temperature)? (d) Compare experimental data with a model's predictions, because values often do not indicate the appropriateness of the model for the selected data... [Pg.55]

The statistical measure of the quality of the regression is used to determine whether the model provides a meaningful representation of the data. The parameter estimates are reliable only if the model provides a statistically adequate representation of the data. The evaluation of the quality of the regression requires an independent assessment of the stochastic errors in the data, information that may not be available. In such cases, visual inspection of the fitting results may be useful. Issues associated with assessment of regression quality are discussed further in Section 19.7.2 and Chapter 20. [Pg.381]

This gave much more reliable results. However, we had not implemented a variable step-size algorithm, and therefore obtained fit parameters that were only in the neighborhood of the best fit parameters. Given our previous experience with simplex searching, we combined the two techniques in order to obtain more precise parameter estimates. [Pg.240]
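A sketch of such a two-stage strategy; the Rosenbrock objective stands in for an actual fitting problem, and a gradient-based polish stands in for the variable step-size refinement:

```python
import numpy as np
from scipy.optimize import minimize

# Rosenbrock function as a stand-in for a difficult fitting objective.
def f(p):
    return (1.0 - p[0]) ** 2 + 100.0 * (p[1] - p[0] ** 2) ** 2

# Stage 1: derivative-free simplex search from a poor starting point,
# stopped with loose tolerances (only the neighborhood of the optimum).
coarse = minimize(f, x0=[-1.2, 1.0], method="Nelder-Mead",
                  options={"xatol": 1e-2, "fatol": 1e-2})

# Stage 2: gradient-based polish started from the simplex result.
fine = minimize(f, x0=coarse.x, method="BFGS")
```

The simplex stage is robust far from the optimum, where gradient methods can fail, while the second stage converges quickly once started in the right neighborhood, giving the more precise parameter estimates.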

