
Parameter estimation approach

The Geothermal Response Test, as developed by us and others, has proven important for obtaining accurate information on ground thermal properties for Borehole Heat Exchanger design. In addition to the classical line-source approach used for the analysis of the response data, parameter estimation techniques have been developed that employ a numerical model to calculate the temperature response of the borehole. The main use of these models has been to obtain estimates in the case of non-constant heat flux. The parameter estimation approach also allows additional parameters, such as heat capacity or shank spacing, to be estimated. [Pg.190]
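As an illustration of the classical line-source analysis mentioned above, the sketch below estimates the ground thermal conductivity from the late-time slope of the mean fluid temperature plotted against ln(t), using the approximation T_f ≈ Q ln(t)/(4πλH) + const. The heat injection rate, borehole depth, and temperature data are hypothetical; this is a minimal example, not the authors' actual analysis procedure.

```python
import numpy as np

def line_source_conductivity(t_s, T_fluid, Q_watts, H_m):
    """Estimate ground thermal conductivity [W/(m K)] from response-test data.

    Classical infinite line-source approximation: at late times the mean fluid
    temperature rises linearly in ln(t) with slope Q / (4 * pi * lambda * H).
    """
    slope, _intercept = np.polyfit(np.log(t_s), T_fluid, 1)
    return Q_watts / (4.0 * np.pi * H_m * slope)

# Hypothetical test: 6 kW constant injection, 100 m borehole, late-time window.
lam_true, H, Q = 2.5, 100.0, 6000.0
t = np.linspace(10 * 3600, 50 * 3600, 200)                   # seconds
T_f = Q / (4 * np.pi * lam_true * H) * np.log(t) + 12.0      # synthetic response

print(line_source_conductivity(t, T_f, Q, H))                # ~2.5 W/(m K)
```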

The parameter estimation approach is important in judging the reliability and accuracy of a model. If the confidence intervals for a set of estimated parameters are as large as the parameters themselves, one would place little reliability in the model's predictions. However, if the parameters are identified with high precision (i.e., small confidence intervals), one would tend to trust the model's predictions. The nonlinear optimization approach to parameter estimation allows the confidence interval of each estimated parameter to be approximated, so it is possible to evaluate whether a parameter is identifiable from a particular set of measurements and with what reliability. [Pg.104]
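A minimal sketch of how approximate confidence intervals fall out of a nonlinear least-squares fit: the parameter covariance matrix returned by the optimizer is combined with a t-statistic to give interval half-widths. The first-order decay model and the synthetic data are purely illustrative, not from the source.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import t as t_dist

# Hypothetical two-parameter model: first-order decay y = A * exp(-k * t).
def model(time, A, k):
    return A * np.exp(-k * time)

rng = np.random.default_rng(0)
time = np.linspace(0, 10, 25)
y_obs = model(time, 5.0, 0.4) + rng.normal(0, 0.1, time.size)

popt, pcov = curve_fit(model, time, y_obs, p0=[1.0, 0.1])

# Approximate 95 % confidence intervals from the parameter covariance matrix.
dof = time.size - popt.size
half_widths = t_dist.ppf(0.975, dof) * np.sqrt(np.diag(pcov))
for name, value, hw in zip(("A", "k"), popt, half_widths):
    print(f"{name} = {value:.3f} +/- {hw:.3f}")
```

Small confidence intervals relative to the parameter values indicate an identifiable parameter; intervals comparable to the parameters themselves signal the low-reliability situation described above.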

Very often a mixture of these two approaches is used to determine the values of the parameters. Good examples are Dano et al. [78] and Chassagnole et al. [79]. In these studies many parameters were taken from the literature and, in a parameter estimation approach, were allowed to vary within experimental error while the unknown parameters were fitted. When considering dynamics, the boundary conditions of the network have to be supplied as explicit functions of time, and therefore they have to be measured in order to obtain good parameter values from the estimation [74, 79]. Many detailed and core models can be interrogated online at JWS online (www.jjj.bio.vu.nl) [80]. [Pg.409]
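One compact way to express "literature values allowed to vary within experimental error" is a bounded least-squares fit, where each literature parameter is given box constraints of plus/minus its reported uncertainty while the genuinely unknown parameters are left free. The two-parameter toy model, the literature value, and the uncertainty below are hypothetical, not taken from the cited studies.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical model with one literature parameter (k_lit) and one unknown (k_unk).
def model(t, k_lit, k_unk):
    return k_unk * (1.0 - np.exp(-k_lit * t))

t = np.linspace(0, 10, 30)
y_obs = model(t, 0.52, 3.1) + np.random.default_rng(4).normal(0, 0.03, t.size)

k_lit_value, k_lit_error = 0.50, 0.05     # literature value +/- experimental error

residuals = lambda p: model(t, p[0], p[1]) - y_obs
fit = least_squares(
    residuals,
    x0=[k_lit_value, 1.0],
    bounds=([k_lit_value - k_lit_error, 0.0],       # k_lit confined to its error band
            [k_lit_value + k_lit_error, np.inf]),   # k_unk essentially unconstrained
)
print("k_lit, k_unk:", fit.x)
```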

Thus, a list of 15 descriptors was calculated for these purposes, as described below. The partition coefficient log P (calculated by a method based on the Ghose/Crippen approach [11]; see also Chapter X, Section 1.1 in the Handbook) was calculated because it affects the solubility dramatically [17, 18]. All the other descriptors were calculated with the program PETRA (Parameter Estimation for the Treatment of Reactivity Applications) [28]. ... [Pg.498]

Because the technical barriers previously outlined increase uncertainty in the data, plant-performance analysts must approach the data analysis with an unprejudiced eye. Significant technical judgment is required to evaluate each measurement and its uncertainty with respect to the intended purpose, the model development, and the conclusions. If there is any bias on the analyst's part, it is likely that this bias will be built into the subsequent model and parameter estimates. Since engineers rely upon the model to extrapolate from current operation, the bias can be amplified and lead to decisions that are inaccurate, unwarranted, and potentially dangerous. [Pg.2550]

When estimates of k°, k′, k″, K1, and K2 have been obtained, a calculated pH-rate curve is developed with Eq. (6-80). If the experimental points follow the calculated curve closely, it may be concluded that the data are consistent with the assumed rate equation. The constants may be considered adjustable parameters that are modified to achieve the best possible fit, and one approach is to use these initial parameter estimates in an iterative nonlinear regression program. The dissociation constants K1 and K2 derived from kinetic data should be in reasonable agreement with the dissociation constants obtained (under the same experimental conditions) by other means. [Pg.290]
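Eq. (6-80) is not reproduced in this excerpt, so the sketch below assumes a generic diprotic pH-rate expression of the common form k_obs = (k°[H+]² + k′K1[H+] + k″K1K2)/([H+]² + K1[H+] + K1K2) purely as a stand-in; the "experimental" data and the initial estimates are hypothetical. It illustrates the step described above: feeding initial parameter estimates into an iterative nonlinear regression.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical stand-in for Eq. (6-80): a common diprotic pH-rate expression.
def k_obs(pH, k0, k1, k2, pK1, pK2):
    h = 10.0 ** (-pH)
    K1, K2 = 10.0 ** (-pK1), 10.0 ** (-pK2)
    return (k0 * h**2 + k1 * K1 * h + k2 * K1 * K2) / (h**2 + K1 * h + K1 * K2)

# Synthetic "experimental" pH-rate data with 2 % multiplicative noise.
rng = np.random.default_rng(1)
pH = np.linspace(2, 10, 30)
k_exp = k_obs(pH, 0.02, 1.5, 0.10, 4.0, 7.5) * (1 + rng.normal(0, 0.02, pH.size))

# Initial (e.g., graphical) estimates refined by iterative nonlinear regression.
p0 = [0.05, 1.0, 0.05, 4.5, 7.0]
popt, pcov = curve_fit(k_obs, pH, k_exp, p0=p0)
print(dict(zip(("k0", "k1", "k2", "pK1", "pK2"), np.round(popt, 3))))
```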

A general method has been developed for the estimation of model parameters from experimental observations when the model relating the parameters and input variables to the output responses is a Monte Carlo simulation. The method provides point estimates as well as joint probability regions of the parameters. In comparison to methods based on analytical models, this approach can prove to be more flexible and gives the investigator a more quantitative insight into the effects of parameter values on the model. The parameter estimation technique has been applied to three examples in polymer science, all of which concern sequence distributions in polymer chains. The first is the estimation of binary reactivity ratios for the terminal or Mayo-Lewis copolymerization model from both composition and sequence distribution data. Next a procedure for discriminating between the penultimate and the terminal copolymerization models on the basis of sequence distribution data is described. Finally, the estimation of a parameter required to model the epimerization of isotactic polystyrene is discussed. [Pg.282]
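A toy version of the first example (terminal-model reactivity ratios from composition data) conveys how a Monte Carlo simulation sits inside the estimation loop. The feed fractions and "observed" copolymer compositions below are hypothetical, the chain length and optimizer settings are arbitrary, and a fixed random seed (common random numbers) keeps the noisy simulation-based objective smooth enough for a derivative-free search; the original method additionally provides joint probability regions, which this sketch omits.

```python
import numpy as np
from scipy.optimize import minimize

def simulate_composition(r1, r2, f1, n_units=5000, seed=0):
    """Monte Carlo terminal (Mayo-Lewis) model: fraction of monomer 1 in a
    chain grown at fixed feed mole fraction f1 (low-conversion assumption)."""
    rng = np.random.default_rng(seed)            # fixed seed: common random numbers
    f2 = 1.0 - f1
    p11 = r1 * f1 / (r1 * f1 + f2)               # add M1 given chain ends in M1
    p22 = r2 * f2 / (r2 * f2 + f1)               # add M2 given chain ends in M2
    last, count1 = 1, 0
    for u in rng.random(n_units):
        if last == 1:
            last = 1 if u < p11 else 2
        else:
            last = 2 if u < p22 else 1
        count1 += (last == 1)
    return count1 / n_units

# Hypothetical "observed" copolymer compositions at several monomer feeds.
feeds = np.array([0.2, 0.4, 0.6, 0.8])
F1_obs = np.array([0.31, 0.48, 0.62, 0.78])

def objective(params):
    r1, r2 = np.abs(params)                      # keep reactivity ratios positive
    F1_sim = [simulate_composition(r1, r2, f1) for f1 in feeds]
    return np.sum((np.asarray(F1_sim) - F1_obs) ** 2)

# Derivative-free simplex search, since the simulation-based objective is noisy.
result = minimize(objective, x0=[1.0, 1.0], method="Nelder-Mead",
                  options={"xatol": 1e-3, "fatol": 1e-6})
print("estimated r1, r2:", np.abs(result.x))
```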

We have presented applications of a parameter estimation technique based on Monte Carlo simulation to problems in polymer science involving sequence distribution data. In comparison to approaches involving analytic functions, Monte Carlo simulation often leads to a simpler solution of a model, particularly when the process being modelled involves a prominent stochastic component. [Pg.293]

Topaz was used to calculate the time response of the model to step changes in the heater output values. One of the advantages of mathematical simulation over experimentation is the ease of starting the experiment from an initial steady state. The parameter estimation routines to follow require a value for the initial state of the system, and it is often difficult to hold the extruder conditions constant long enough to approach steady state and be assured that the temperature gradients within the barrel are known. The values from the Topaz simulation were used as data for fitting a reduced-order model of the dynamic system. [Pg.496]

Basically, two search procedures apply to non-linear parameter estimation applications (Nash and Walker-Smith, 1987). The first is derived from Newton's gradient method, and numerous improvements on this method have been developed. The second uses direct search techniques, one of which, the Nelder-Mead search algorithm, is derived from a simplex-like approach. Many of these methods are part of important mathematical computer-based program packages (e.g., IMSL, BMDP, MATLAB) or are available through other mathematical program packages. [Pg.108]
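Both families are available in standard libraries. The sketch below fits the same two-parameter model with a gradient-based method of the Newton/Gauss-Newton family (Levenberg-Marquardt) and with the Nelder-Mead simplex direct search; the power-law model and synthetic data are only illustrative.

```python
import numpy as np
from scipy.optimize import least_squares, minimize

# Hypothetical power-law model y = p0 * x**p1 fitted to synthetic data.
rng = np.random.default_rng(2)
x = np.linspace(0.1, 5.0, 40)
y = 2.0 * x ** 0.7 + rng.normal(0, 0.02, x.size)

residuals = lambda p: p[0] * x ** p[1] - y
sse = lambda p: np.sum(residuals(p) ** 2)

# 1) Gradient (Newton) family: Levenberg-Marquardt, a damped Gauss-Newton method.
fit_lm = least_squares(residuals, x0=[1.0, 1.0], method="lm")

# 2) Direct-search family: Nelder-Mead simplex, no derivatives required.
fit_nm = minimize(sse, x0=[1.0, 1.0], method="Nelder-Mead")

print("Levenberg-Marquardt:", fit_lm.x)
print("Nelder-Mead simplex:", fit_nm.x)
```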

The user-supplied weighting constant (> 0) should have a large value during the early iterations of the Gauss-Newton method, when the parameters are far from their optimal values. As the parameters approach the optimum, this constant should be reduced so that the contribution of the penalty function becomes essentially negligible (and hence no bias is introduced in the parameter estimates). [Pg.164]
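A minimal sketch of the idea, not the specific algorithm of the source: the penalty terms are appended to the residual vector with weight √ω, a damped Gauss-Newton step is taken on the augmented problem, and ω is shrunk after each outer pass so the penalty contribution fades as the optimum is approached. The symbol ω (standing in for the weighting constant, whose symbol is lost in this excerpt), the numerical Jacobian, the step-halving safeguard, and the exponential-decay usage example are all assumptions of this sketch.

```python
import numpy as np

def gauss_newton_penalized(residuals, penalty, k0, omega0=1e3, shrink=0.1,
                           n_outer=6, n_inner=20, h=1e-6):
    """Gauss-Newton on the augmented residual vector [r(k); sqrt(omega)*p(k)],
    with the penalty weight omega reduced after each outer pass so that its
    contribution becomes negligible as the optimum is approached."""
    k = np.asarray(k0, dtype=float)
    omega = omega0
    for _ in range(n_outer):
        def r_aug(kk):
            return np.concatenate([np.atleast_1d(residuals(kk)),
                                   np.sqrt(omega) * np.atleast_1d(penalty(kk))])
        for _ in range(n_inner):
            r = r_aug(k)
            J = np.empty((r.size, k.size))           # forward-difference Jacobian
            for j in range(k.size):
                dk = np.zeros_like(k)
                dk[j] = h
                J[:, j] = (r_aug(k + dk) - r) / h
            step, *_ = np.linalg.lstsq(J, -r, rcond=None)
            alpha = 1.0                              # halve the step if the SSE grows
            while np.sum(r_aug(k + alpha * step) ** 2) > np.sum(r ** 2) and alpha > 1e-4:
                alpha *= 0.5
            k = k + alpha * step
            if np.linalg.norm(alpha * step) < 1e-10:
                break
        omega *= shrink                              # penalty fades near the optimum
    return k

# Hypothetical usage: fit y = k1 * exp(-k2 * t), softly discouraging k2 > 1.
t = np.linspace(0, 5, 30)
y = 3.0 * np.exp(-0.6 * t)
res = lambda k: k[0] * np.exp(-k[1] * t) - y
pen = lambda k: np.array([max(0.0, k[1] - 1.0)])     # zero while the bound holds
print(gauss_newton_penalized(res, pen, k0=[1.0, 0.2]))
```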

Over the years two ML estimation approaches have evolved: (a) parameter estimation based on an implicit formulation of the objective function, and (b) parameter and state estimation, or the "error in variables" method, based on an explicit formulation of the objective function. In the first approach only the parameters are estimated, whereas in the second the true values of the state variables as well as the values of the parameters are estimated. In this section we are concerned with the latter approach. [Pg.232]
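The error-in-variables idea, in which the "true" values of the measured variables are estimated along with the parameters, can be illustrated with orthogonal distance regression from SciPy's odr module. This is a weighted-least-squares flavour of the approach rather than the ML formulation discussed in the source, and the straight-line model, data, and error levels below are hypothetical.

```python
import numpy as np
from scipy import odr

# Hypothetical straight-line model y = b0 * x + b1, with errors in both x and y.
def linear(beta, x):
    return beta[0] * x + beta[1]

rng = np.random.default_rng(3)
x_true = np.linspace(0, 10, 20)
x_obs = x_true + rng.normal(0, 0.15, x_true.size)                # error in x
y_obs = 2.0 * x_true + 1.0 + rng.normal(0, 0.30, x_true.size)    # error in y

data = odr.RealData(x_obs, y_obs, sx=0.15, sy=0.30)   # both error levels supplied
fit = odr.ODR(data, odr.Model(linear), beta0=[1.0, 0.0]).run()

print("parameters:", fit.beta)       # estimates of b0, b1
print("std errors:", fit.sd_beta)    # their approximate standard errors
# fit.xplus contains the adjusted x values estimated together with the parameters.
```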

The implicit LS, ML, and Constrained LS (CLS) estimation methods are now used to synthesize a systematic approach to the parameter estimation problem when no prior knowledge regarding the adequacy of the thermodynamic model is available. Given the availability of methods to estimate the interaction parameters in equations of state, there is a need for a systematic and computationally efficient approach that deals with all possible cases encountered during the regression of binary VLE data. The following step-by-step systematic approach is proposed (Englezos et al. 1993)... [Pg.242]

While prior information may be used to influence the parameter estimates towards realistic values, there is no guarantee that the final estimates will not reach extreme values, particularly when the postulated grid cell model is incorrect and a large amount of data is available. A simple way to impose inequality constraints on the parameters is through the incorporation of a penalty function, as already discussed in Chapter 9 (Section 9.2.1.2). In this approach extra terms are added to the objective function that tend to explode when the parameters approach the boundary and become negligible when the parameters are far from it. One can easily construct such penalty functions. For example, a simple and yet very effective penalty function that keeps the parameters in the interval (k_min,i, k_max,i) is... [Pg.383]
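The specific penalty function proposed in the source is cut off in this excerpt; the sketch below shows one common choice, a reciprocal barrier that is negligible well inside (k_min,i, k_max,i) and blows up as any parameter approaches a bound. The weight ω and the example bounds are arbitrary assumptions of this sketch.

```python
import numpy as np

def barrier_penalty(k, k_min, k_max, omega=1e-4):
    """Reciprocal barrier: negligible well inside (k_min_i, k_max_i), but it
    explodes as any parameter approaches either bound."""
    k, k_min, k_max = map(np.asarray, (k, k_min, k_max))
    return omega * np.sum(1.0 / (k - k_min) + 1.0 / (k_max - k))

# Added to the least-squares objective before minimization:
#   S_total(k) = S_LS(k) + barrier_penalty(k, k_min, k_max)
print(barrier_penalty([0.5], [0.0], [1.0]))     # small in the interior
print(barrier_penalty([0.999], [0.0], [1.0]))   # large near the upper bound
```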

To offer more flexibility, we adopt an approach based on the transient simulation model TRNSYS (Klein et al., 1976), making use of the Lund DST borehole model (Hellstrom, 1989). The parameter estimation procedure is carried out with the GenOPT package (Wetter, 2004), using either the Nelder and Mead simplex minimization algorithm (Nelder and Mead, 1965) or the Hooke and Jeeves minimization algorithm (Hooke and Jeeves, 1961). [Pg.185]

PBPK and classical pharmacokinetic models both have valid applications in lead risk assessment. Both approaches can incorporate capacity-limited or nonlinear kinetic behavior in parameter estimates. An advantage of classical pharmacokinetic models is that, because the kinetic characteristics of the compartments of which they are composed are not constrained, a best possible fit to empirical data can be arrived at by varying the values of the parameters (O'Flaherty 1987). However, such models are not readily extrapolated to other species because the parameters do not have precise physiological correlates. Compartmental models developed to date also do not simulate changes in bone metabolism, tissue volumes, blood flow rates, and enzyme activities associated with pregnancy, adverse nutritional states, aging, or osteoporotic diseases. Therefore, extrapolation of classical compartmental model simulations... [Pg.233]

Fowle and Fein (1999) measured the sorption of Cd, Cu, and Pb by B. subtilis and B. licheniformis using the batch technique with single or mixed metals and one or both bacterial species. The sorption parameters estimated from the model were in excellent agreement with those measured experimentally, indicating that chemical equilibrium modeling of aqueous metal sorption by bacterial surfaces could accurately predict the distribution of metals in complex multicomponent systems. Fein and Delea (1999) also tested the applicability of a chemical equilibrium approach to describing aqueous and surface complexation reactions in a Cd-EDTA-B. subtilis system. The experimental values were consistent with those derived from chemical modeling. [Pg.83]

A major limitation of the linearized forms of the Michaelis-Menten equation is that none provides accurate estimates of both Km and Vmax. Furthermore, it is impossible to obtain meaningful error estimates for the parameters, since linear regression is not strictly appropriate. With the advent of more sophisticated computer tools, there is an increasing trend toward using the integrated rate equation and nonlinear regression analysis to estimate Km and Vmax. While this type of analysis is more complex than the linear approaches, it has several benefits. First, accurate nonbiased estimates of Km and Vmax can be obtained. Second, nonlinear regression may allow the errors (or confidence intervals) of the parameter estimates to be determined. [Pg.269]
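A brief sketch of the comparison, using hypothetical initial-rate data and the initial-rate (rather than integrated) form of the Michaelis-Menten equation for compactness: a Lineweaver-Burk line supplies starting values, and nonlinear regression on the untransformed data then returns Km and Vmax together with approximate standard errors from the parameter covariance matrix.

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(S, Vmax, Km):
    return Vmax * S / (Km + S)

# Hypothetical initial-rate data (substrate concentration vs. initial rate).
S = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
v = np.array([0.40, 0.72, 1.08, 1.53, 1.92, 2.16, 2.30])

# Linearized (Lineweaver-Burk) estimate: 1/v = (Km/Vmax)(1/S) + 1/Vmax.
slope, intercept = np.polyfit(1.0 / S, 1.0 / v, 1)
Vmax_lb, Km_lb = 1.0 / intercept, slope / intercept

# Nonlinear regression on the untransformed data, with approximate errors.
popt, pcov = curve_fit(michaelis_menten, S, v, p0=[Vmax_lb, Km_lb])
perr = np.sqrt(np.diag(pcov))

print(f"Lineweaver-Burk:      Vmax = {Vmax_lb:.2f}, Km = {Km_lb:.2f}")
print(f"Nonlinear regression: Vmax = {popt[0]:.2f} +/- {perr[0]:.2f}, "
      f"Km = {popt[1]:.2f} +/- {perr[1]:.2f}")
```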

Obtaining E(t), t̄, and σt² from experimental tracer data involves determining areas under curves defined continuously or by discrete data. The most sophisticated approach involves the use of E-Z Solve or equivalent software to estimate parameters by nonlinear regression. In this case, standard techniques are required to transform experimental concentration-versus-time data into E(t) or F(t) data; the subsequent parameter estimation is based on nonlinear regression of these data using known expressions for E(t) and F(t) (developed in Section 19.4). In the least sophisticated approach, discrete data, generated directly from experiment or obtained from a continuous response curve, are... [Pg.459]
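As an illustration of the nonlinear-regression route, the sketch below fits a tanks-in-series expression for E(t) to hypothetical discrete tracer data; the functional form (with a non-integer number of tanks via the gamma function), the data, and the starting values are assumptions of this example, not the expressions of Section 19.4.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import gamma

# Tanks-in-series residence-time distribution, N allowed to be non-integer.
def E_tanks(t, tau, N):
    return (N / tau) ** N * t ** (N - 1) * np.exp(-N * t / tau) / gamma(N)

# Hypothetical tracer data: E(t) [1/min] sampled at discrete times [min].
t_data = np.array([0.5, 1, 2, 3, 4, 5, 6, 8, 10, 12])
E_data = np.array([0.04, 0.10, 0.19, 0.20, 0.17, 0.12, 0.085, 0.034, 0.012, 0.004])

popt, _ = curve_fit(E_tanks, t_data, E_data, p0=[4.0, 3.0])
tau_hat, N_hat = popt
print(f"mean residence time tau = {tau_hat:.2f} min, N = {N_hat:.2f} tanks")
```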

