The Parameter Estimation Problem

Parameter estimation problems result when we attempt to match a model of known form to experimental data by an optimal determination of unknown model parameters. The exact nature of the parameter estimation problem will depend on the mathematical model. An important distinction has to be made at this point. A model will contain both state variables (concentrations, temperatures, pressures, etc.) and parameters (rate constants, dispersion coefficients, activation energies, etc.). [Pg.160]

A further distinction is possible by decomposing the state variables into two groups: independent and dependent variables. This decomposition leads us to two different problems, as we will discuss later. [Pg.161]

Let us now specify the model we will be considering. The following variables are defined: [Pg.161]

θ: n-dimensional column vector of parameters whose numerical values are unknown, θ = [θ₁, θ₂, ..., θₙ]ᵀ. [Pg.161]

A single experiment consists of the measurement of each of the g observed variables for a given set of state variables (dependent, independent). Now if the independent state variables are error-free (explicit models), the optimization need only be performed in the parameter space, which is usually small. [Pg.161]
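
As a rough sketch of this setup (a generic illustration, not the book's notation or code): with an explicit algebraic model the dependent variables are computed from the error-free independent variables and the unknown parameters, so the optimization takes place only over the small parameter space.

```python
import numpy as np
from scipy.optimize import least_squares

def explicit_model(x, theta):
    """Hypothetical explicit model: the dependent variable is computed from
    the error-free independent variable x and the unknown parameters theta."""
    return theta[0] * x / (1.0 + theta[1] * x)

def residuals(theta, x, y_meas):
    """Difference between model predictions and observed values."""
    return explicit_model(x, theta) - y_meas

# Simulated measurements of the observed variable at 25 experiments
rng = np.random.default_rng(0)
x = np.linspace(0.1, 5.0, 25)
y_meas = explicit_model(x, np.array([2.0, 0.8])) + 0.01 * rng.normal(size=x.size)

# The search is performed only in the (2-dimensional) parameter space
fit = least_squares(residuals, x0=[1.0, 1.0], args=(x, y_meas))
print(fit.x)
```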


This book deals primarily with the parameter estimation problem. Our focus will be on the estimation of adjustable parameters in nonlinear models described by algebraic or ordinary differential equations. The models describe processes and thus explain the behavior of the observed data. It is assumed that the structure of the model is known. The best parameters are estimated so that the model can be used for predictive purposes at other conditions, where it is called upon to describe process behavior. [Pg.2]

The formulation of the parameter estimation problem is as important as the actual solution of the problem (i.e., the determination of the unknown parameters). In formulating the parameter estimation problem we must answer two questions: (a) what type of mathematical model do we have, and (b) what type of objective function should we minimize? In this chapter we address both of these questions. Although the primary focus of this book is the treatment of mathematical models that are nonlinear with respect to the parameters (nonlinear regression), consideration will also be given to linear models (linear regression). [Pg.7]

In practice, the solution of Equation 3.16 for the estimation of the parameters is not done by computing the inverse of matrix A. Instead, any good linear equation solver should be employed. Our preference is to first perform an eigenvalue decomposition of the real symmetric matrix A, which provides significant additional information about potential ill-conditioning of the parameter estimation problem (see Chapter 8). [Pg.29]
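
A minimal NumPy sketch of this preference (not the book's own program): the normal equations A·θ = b are solved through an eigenvalue decomposition of the symmetric matrix A, and the same decomposition yields the condition number used in Chapter 8 to diagnose ill-conditioning. The names A and b stand in for the quantities appearing in Equation 3.16 and are assumptions here.

```python
import numpy as np

def solve_normal_equations(A, b):
    """Solve A theta = b for a real symmetric positive definite A via the
    eigenvalue decomposition A = V diag(lam) V^T."""
    lam, V = np.linalg.eigh(A)
    cond = lam.max() / lam.min()          # condition number of A
    if cond > 1e10:
        print(f"Warning: A is ill-conditioned, cond(A) = {cond:.2e}")
    theta = V @ ((V.T @ b) / lam)         # theta = V diag(1/lam) V^T b
    return theta, cond

# Example with a small linear least-squares problem (hypothetical data)
X = np.vander(np.linspace(0.1, 1.0, 20), 3)
y = X @ np.array([1.0, -2.0, 0.5])
theta, cond = solve_normal_equations(X.T @ X, X.T @ y)
print(theta, cond)
```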

Generally speaking, for condition numbers less than 10³ the parameter estimation problem is well-posed. For condition numbers greater than 10¹⁰ the problem is relatively ill-conditioned, whereas for condition numbers of 10³⁰ or greater the problem is very ill-conditioned and we may encounter computer overflow problems. [Pg.142]

When the parameters differ by more than one order of magnitude, matrix A may appear to be ill-conditioned even if the parameter estimation problem is well-posed. The best way to overcome this problem is by introducing the reduced sensitivity coefficients, defined as... [Pg.145]

With this modification the conditioning of matrix A is significantly improved, and cond(A_R) gives a more reliable measure of the ill-conditioning of the parameter estimation problem. This modification has been implemented in all computer programs provided with this book. [Pg.146]
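
A sketch of the effect of this scaling (illustrative values, not the book's program): the normalized matrix is formed as A_R = K·A·K with K = diag(θ₁, ..., θₚ), which amounts to using sensitivity coefficients multiplied by the corresponding parameter values.

```python
import numpy as np

def normalized_matrix(A, theta):
    """Form A_R = K A K with K = diag(theta), i.e. use reduced sensitivity
    coefficients (dy/dtheta_j multiplied by theta_j)."""
    K = np.diag(theta)
    return K @ A @ K

# Parameters differing by several orders of magnitude (illustrative values)
theta = np.array([1.0e-3, 5.0, 2.0e4])
S = np.random.default_rng(1).normal(size=(50, 3))   # similar-magnitude relative sensitivities
G = S / theta                                       # raw sensitivities dy/dtheta_j
A = G.T @ G
print(f"cond(A)   = {np.linalg.cond(A):.2e}")
print(f"cond(A_R) = {np.linalg.cond(normalized_matrix(A, theta)):.2e}")
```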

Practical experience has shown that (i) if we have a relatively large number of data points, the prior has an insignificant effect on the parameter estimates, and (ii) if the parameter estimation problem is ill-posed, use of prior information has a stabilizing effect. As seen from Equation 8.48, all the eigenvalues of matrix A are increased by the addition of positive terms to its diagonal. It acts almost like Marquardt's modification as far as convergence characteristics are concerned. [Pg.147]
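
A minimal sketch of this stabilizing effect, assuming a Gaussian prior with mean theta_prior and a diagonal covariance (the variable names below are placeholders, not the notation of Equation 8.48):

```python
import numpy as np

def add_prior(A, b, theta_prior, prior_variance):
    """Augment the normal equations with prior information on the parameters.

    Adding the prior precision 1/prior_variance_j to the diagonal of A
    increases all of its eigenvalues, which stabilizes an ill-posed problem
    much like Marquardt's modification does."""
    P = np.diag(1.0 / np.asarray(prior_variance))   # prior precision (diagonal)
    return A + P, b + P @ np.asarray(theta_prior)
```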

Equality constraints are rather rare in parameter estimation. If there is an equality constraint among the parameters, one should first attempt to eliminate one of the unknown parameters by solving the constraint explicitly for that parameter and substituting the resulting relationship into the model equations, as sketched below. Such an action reduces the dimensionality of the parameter estimation problem, which aids significantly in achieving convergence. [Pg.158]
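
For instance (a hypothetical two-parameter model, not an example from the book), if the parameters are linked by θ₁ + θ₂ = 1, solving the constraint for θ₂ and substituting it into the model leaves only θ₁ to be estimated:

```python
def model_unconstrained(x, theta1, theta2):
    """Original model with two parameters subject to theta1 + theta2 = 1."""
    return theta1 * x + theta2 * x**2

def model_reduced(x, theta1):
    """Same model after eliminating theta2 = 1 - theta1; only theta1 is estimated."""
    return model_unconstrained(x, theta1, 1.0 - theta1)
```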

When the parameters differ from one another by several orders of magnitude, the joint confidence region will have a long and narrow shape even if the parameter estimation problem is well-posed. To avoid unnecessary use of the shape criterion, instead of investigating the properties of matrix A given by Equation 12.2, it is better to use the normalized form of matrix A, denoted A_R, given below (Kalogerakis and Luus, 1984). [Pg.189]

Given an EoS, the objective of the parameter estimation problem is to compute optimal values for the interaction parameter vector, k, in a statistically correct and computationally efficient manner. Those values are expected to enhance the correlational ability of the EoS without compromising its ability to predict the correct phase behavior. [Pg.229]

The implicit LS, ML, and Constrained LS (CLS) estimation methods are now used to synthesize a systematic approach to the parameter estimation problem when no prior knowledge regarding the adequacy of the thermodynamic model is available. Given the availability of methods to estimate the interaction parameters in equations of state, there is a need for a systematic and computationally efficient approach to deal with all possible cases that could be encountered during the regression of binary VLE data. The following step-by-step systematic approach is proposed (Englezos et al., 1993)... [Pg.242]

The formulation of the parameter estimation problem for the next three examples was given in Chapter 6. These examples were formulated with data from the literature, and hence the reader is strongly encouraged to consult the original papers for a thorough understanding of the relevant physical and chemical phenomena. [Pg.302]

In this chapter, the general problem of joint parameter estimation and data reconciliation will be discussed. The more general formulation, in terms of the error-in-variable method (EVM), where measurement errors in all variables are considered in the parameter estimation problem, will be stated. Finally, joint parameter and state estimation in dynamic processes will be considered. [Pg.178]

The first phase of the parameter estimation problem consists of choosing the measurements in such a way that the necessary conditions for estimability are satisfied. This means that we have to design the experiment such that if the measurements were totally without error, it would be possible to recover the desired parameters. These conditions were defined in Chapter 2. For instance, in the previous example, it would... [Pg.181]

In the error-in-variable method (EVM), measurement errors in all variables are treated in the parameter estimation problem. EVM provides both parameter estimates and reconciled data estimates that are consistent with respect to the model. The regression models are often implicit and undetermined (Tjoa and Biegler, 1992), that is,... [Pg.185]

Assuming that the errors ε_j are normally distributed and uncorrelated, with zero mean and a known positive definite covariance matrix, the parameter estimation problem can be formulated as minimizing, with respect to z_j and θ, ... [Pg.186]
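
The excerpt stops short of the objective function itself. In the usual EVM notation (our assumption: z̃_j denotes the j-th measured vector, z_j its reconciled value, Σ_j the known covariance matrix, and f the implicit model), it takes the weighted least-squares form

$$\min_{\theta,\,z_1,\dots,z_N}\ \sum_{j=1}^{N}\left(\tilde{z}_j - z_j\right)^{\mathrm{T}}\Sigma_j^{-1}\left(\tilde{z}_j - z_j\right)
\quad\text{subject to}\quad f(z_j,\theta)=0,\qquad j=1,\dots,N .$$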

Kim et al. (1990) proposed a nested, nonlinear EVM, following ideas similar to those of Reilly and Patino-Leal (1981). In this approach, the parameter estimation is decoupled from the data reconciliation problem; however, the reconciliation problem is optimized at each iteration of the parameter estimation problem. [Pg.187]
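
A schematic of such a nested, two-level structure (a sketch under simplifying assumptions: a scalar explicit model, uncorrelated errors with known variances, and placeholder function names; this is not the algorithm of Kim et al.):

```python
import numpy as np
from scipy.optimize import minimize

def reconcile(theta, x_meas, y_meas, f, sx2, sy2):
    """Inner problem: for fixed parameters theta, reconcile each data point
    by minimizing its weighted squared deviations in both x and y."""
    total = 0.0
    for xm, ym in zip(x_meas, y_meas):
        obj = lambda x: ((x[0] - xm) ** 2 / sx2
                         + (f(x[0], theta) - ym) ** 2 / sy2)
        total += minimize(obj, [xm]).fun
    return total

# Outer problem: the reconciliation is re-solved at every parameter iteration
f = lambda x, th: th[0] * x / (1.0 + th[1] * x)          # hypothetical model
rng = np.random.default_rng(0)
x_true = np.linspace(0.5, 5.0, 15)
x_meas = x_true + rng.normal(scale=0.05, size=x_true.size)
y_meas = f(x_true, [2.0, 0.5]) + rng.normal(scale=0.05, size=x_true.size)

result = minimize(lambda th: reconcile(th, x_meas, y_meas, f, 0.05**2, 0.05**2),
                  x0=[1.0, 1.0], method="Nelder-Mead")
print(result.x)
```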

The performance index can be changed to include another term (there is no sense in doing this in the parameter estimation problem, but as we will see in the next section, there is a good reason for doing it in the DMC problem). If the magnitudes of the parameter values are included in the performance index J... [Pg.283]

Numerical Analysis. It is difficult to determine which measurements contain sufficient information to allow the independent determination of all model parameters. This issue can be studied by assessing the impact of the use of various measurements on the parameter estimation problem using pseudo-experimental data. [Pg.106]

Pseudo-experimental data can be generated by solving the model, Equations 1-4, for a chosen set of parameters and initial conditions, and then adding random noise to the model solution. For a given choice of measurement variables, the simulated data are then used in the parameter estimation problem. This procedure provides a means to evaluate which measurements are required and how much measurement noise is tolerable for parameter identification. [Pg.106]
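
A minimal sketch of this procedure for a hypothetical first-order model (the ODE below merely stands in for Equations 1-4 of the original reference):

```python
import numpy as np
from scipy.integrate import solve_ivp

def model(t, y, k):
    """Hypothetical first-order decay, dy/dt = -k*y."""
    return -k * y

# Solve the model for a chosen parameter value and initial condition
k_true, y0 = 0.7, [1.0]
t_meas = np.linspace(0.0, 5.0, 20)
sol = solve_ivp(model, (0.0, 5.0), y0, t_eval=t_meas, args=(k_true,))

# Add random noise to the model solution to obtain pseudo-experimental data
rng = np.random.default_rng(42)
noise_level = 0.02            # vary this to probe how much noise is tolerable
y_pseudo = sol.y[0] + rng.normal(scale=noise_level, size=t_meas.size)
```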

Since both x and y contain experimental error, the parameter estimation problem should be treated as an error-in-variables problem (3,32). The objective function to be minimized is given by... [Pg.236]

The parameter estimation problem remains essentially the same as before, since this quantity could be determined separately from hold-up measurements. [Pg.262]

Parameter identification is complicated by several factors: (i) the complexity of the models and the nonconvexity of the parameter estimation problems, and (ii) the need for the model parameters to be identifiable from the available measurements. Moreover, in the presence of structural plant-model mismatch, parameter identification does not necessarily lead to model improvement. In order to avoid the task of identifying a model on-line, fixed-model methods have been proposed. The idea therein is to utilize both the available measurements and a (possibly inaccurate) steady-state model to drive the process towards a desirable operating point. In constraint-adaptation schemes (Forbes and Marlin, 1994; Chachuat et al., 2007), for instance, the measurements are used to correct the constraint functions in the RTO problem, whereas a process model is used to... [Pg.393]

You might think at this point that the correlation is complete. It is not, though, because the data were transformed to make the parameter estimation problem linear. Thus, the statistics are in terms of the transformed problem. It is always a good idea to calculate the curve fit using the original variables. You can do this most conveniently by duplicating some columns so they are adjacent for plotting purposes, as shown in Table E.6. [Pg.302]
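
A small, generic illustration of this point (not the spreadsheet of Table E.6): an exponential model fitted by linear regression on the logarithm, with the fit then re-evaluated in the original variables.

```python
import numpy as np

# Hypothetical data following y = a * exp(b * x), with multiplicative noise
rng = np.random.default_rng(3)
x = np.linspace(0.0, 2.0, 15)
y = 2.0 * np.exp(1.3 * x) * (1.0 + rng.normal(scale=0.05, size=x.size))

# Transformed (linear) problem: ln y = ln a + b * x
b, ln_a = np.polyfit(x, np.log(y), 1)
a = np.exp(ln_a)

# Re-evaluate the fit in the ORIGINAL variables, not the transformed ones
y_fit = a * np.exp(b * x)
ss_res = np.sum((y - y_fit) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
print(f"a = {a:.3f}, b = {b:.3f}, R^2 in original variables = {1 - ss_res/ss_tot:.4f}")
```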

