Big Chemical Encyclopedia

Optimal estimate

Bryson, A. E., Ho, Y. C., "Applied Optimal Control: Optimization, Estimation and Control," Blaisdell Publishing, Waltham, Mass. (1969). ... [Pg.80]

OPTIMIZATION, ESTIMATION, AND CONTROL, BLAISDELL PUBL., WALTHAM, MASS., 1969. [Pg.241]

In this figure the following definitions are used: A - projection operator, B - pseudo-inverse operator for the image parameters a(n), C - empirical posterior restoration of the PDD function w(a, n), E - optimal estimator. The projection operator A is non-observable according to the Kalman criterion [10], which is the main singularity of this problem. This leads to the use of a two-step estimation procedure. First, the pseudo-inverse operator B has to be found among the regularization techniques in the class of linear filters. In the second step, the optimal estimate of the pseudo-inverse image parameters a(n) has to be obtained in the presence of the transformed noise. [Pg.122]
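The excerpt does not specify which regularization is used for the pseudo-inverse operator B; the following is a minimal sketch of the first step only, assuming a Tikhonov (ridge) regularized pseudo-inverse as the linear filter. All names and the toy data are illustrative, not taken from the source.

```python
import numpy as np

def tikhonov_pseudo_inverse(A, y, alpha=1e-2):
    """Step 1: regularized pseudo-inverse of an ill-conditioned operator A.

    Solves min_a ||A a - y||^2 + alpha ||a||^2, i.e. a = (A^T A + alpha I)^-1 A^T y.
    alpha > 0 keeps the inversion stable when A has non-observable directions.
    """
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

# toy usage: a rank-deficient "projection" operator and noisy image data
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 10))
A[:, -1] = A[:, 0]                      # introduce a non-observable direction
a_true = rng.normal(size=10)
y = A @ a_true + 0.05 * rng.normal(size=50)
a_hat = tikhonov_pseudo_inverse(A, y, alpha=0.1)
```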

The adaptive estimation of the pseudo-inverse parameters a(n) consists of the blocks C and E (Fig. 1) if the transformed noise has unknown properties. Block C performs the restoration of the posterior PDD function w(a, n) from the noise-corrupted data a(n). It includes methods and algorithms for PDD function restoration from empirical data [8], which are based on empirical averaging. Because the noise is assumed to be a stationary process with zero mean and the image parameters are constant, the PDD function w(a, n) converges, at least, to the real distribution. The posterior PDD function is used to build a feedback loop to block B and serves as a direct input to the estimator E. For a given estimation criterion f(a, a), an optimal estimate a(n) can be found from the expression... [Pg.123]

For the iteration algorithm (5), the optimal estimates (6) are used directly via a second feedback loop to block B (long dashed line in Fig. 1). [Pg.123]

Note: The segmentation operation yields a near-optimal estimate x that may be used as the initialization point for an optimization algorithm that has to find the global minimum of the criterion. Because of its nonlinear nature, we prefer to minimize it by using a stochastic optimization algorithm (a version of the Simulated Annealing algorithm [3]). [Pg.175]
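A generic sketch of this refinement step (not the authors' implementation): start from the near-optimal segmentation estimate and refine it with a Metropolis-type annealing loop. The criterion, neighbour move, and cooling schedule below are illustrative placeholders.

```python
import math
import random

def simulated_annealing(criterion, x0, neighbour, t0=1.0, t_min=1e-3,
                        cooling=0.95, iters_per_t=100):
    """Stochastic minimization of `criterion`, initialized at the estimate x0."""
    x, fx = x0, criterion(x0)
    best, fbest = x, fx
    t = t0
    while t > t_min:
        for _ in range(iters_per_t):
            y = neighbour(x)
            fy = criterion(y)
            # always accept downhill moves; accept uphill moves with Boltzmann probability
            if fy < fx or random.random() < math.exp(-(fy - fx) / t):
                x, fx = y, fy
                if fx < fbest:
                    best, fbest = x, fx
        t *= cooling                      # geometric cooling schedule
    return best, fbest

# toy usage: a multimodal 1-D criterion, started from a rough estimate x0 = 2.0
J = lambda x: (x - 1.0) ** 2 + 0.5 * math.sin(8 * x)
x_opt, J_opt = simulated_annealing(J, 2.0, lambda x: x + random.gauss(0.0, 0.3))
```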

Fig. 9.5 Integration of two measurement systems to obtain optimal estimate.
As expected, the optimal estimator is indeed a function of both the covariance of the atmospheric turbulence and the measurement noise. A matrix identity can be used to derive an equivalent form of Eq. 16 (Law and Lane, 1996)... [Pg.380]
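Eq. 16 itself is not reproduced in the excerpt; as a generic illustration of how such an estimator weighs the two ingredients, the sketch below fuses two measurement systems (cf. Fig. 9.5) by inverse-covariance weighting, the standard minimum-variance linear combination. Names and shapes are illustrative, not the source's notation.

```python
import numpy as np

def fuse_measurements(y1, R1, y2, R2):
    """Minimum-variance linear fusion of two unbiased measurements of the same
    quantity, with measurement covariances R1 and R2:
        x_hat = (R1^-1 + R2^-1)^-1 (R1^-1 y1 + R2^-1 y2)
    The fused covariance is P = (R1^-1 + R2^-1)^-1.
    """
    W1, W2 = np.linalg.inv(R1), np.linalg.inv(R2)
    P = np.linalg.inv(W1 + W2)
    return P @ (W1 @ y1 + W2 @ y2), P

# toy usage: scalar case, one precise and one noisy sensor reading
x_hat, P = fuse_measurements(np.array([1.0]), np.array([[0.1]]),
                             np.array([1.4]), np.array([[0.9]]))
```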

Many more measurements are necessary, and these have to be carefully distributed over the x-range to ensure optimally estimated coefficients. [Pg.128]

More generally, the loss function need not be symmetric, L(e) ≠ L(−e). Indeed, underestimation of a pollutant concentration may lead to not cleaning a hazardous area, with the resulting health hazards. These health hazards may be weighted more heavily than the cost of cleaning unduly due to an overestimation of the pollutant concentration. The optimal estimators linked to asymmetric linear loss functions are given in Journel (3). [Pg.113]
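Journel's estimators are not reproduced here, but a standard consequence of an asymmetric linear loss is that the optimal estimate becomes a quantile of the distribution of the unknown, with the quantile level set by the ratio of the two penalty slopes. A minimal numerical illustration with hypothetical costs:

```python
import numpy as np

def asymmetric_loss_estimate(samples, c_under, c_over):
    """Optimal estimate under piecewise-linear loss with error e = z - z_hat:
        L(e) = c_under * e     for e > 0  (underestimation)
        L(e) = c_over * (-e)   for e < 0  (overestimation)
    The minimizer of the expected loss is the quantile of level
    c_under / (c_under + c_over) of the distribution of z.
    """
    p = c_under / (c_under + c_over)
    return np.quantile(samples, p)

# toy usage: posterior samples of a pollutant concentration; underestimation is
# penalized 9x more than overestimation, so the 90th percentile is reported
z = np.random.default_rng(1).lognormal(mean=0.0, sigma=0.5, size=10_000)
z_hat = asymmetric_loss_estimate(z, c_under=9.0, c_over=1.0)
```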

Confidence limits for the parameter estimates define the region where values of bj are not significantly different from the optimal value at a certain probability level 1−α, with all other parameters kept at their estimated optimal values. The confidence limits are a measure of the uncertainty in the optimal estimates: the broader the confidence limits, the more uncertain the estimates. These intervals for linear models are given by... [Pg.547]

For linear models the joint confidence region is a p-dimensional ellipsoid. All parameters encapsulated within this hyperellipsoid do not differ significantly from the optimal estimates at the probability level 1−α. [Pg.548]
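The excerpts stop short of the actual expressions; for reference only, and assuming the usual linear least-squares notation (n observations, p parameters, residual variance s², design matrix X), the standard forms are

b_j ± t_{1−α/2, n−p} · s(b_j)

for an individual parameter interval, and

(b − b̂)ᵀ XᵀX (b − b̂) ≤ p s² F_{1−α}(p, n−p)

for the joint confidence ellipsoid. The cited source may use a slightly different formulation.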

A. Gelb (Ed.), Applied Optimal Estimation. MIT Press, Cambridge, MA, 1974. [Pg.603]

For computer simulations, (5.35) leads to accurate estimates of free energies. It is also the basis for higher-order cumulant expansions [20] and applications of Bennett's optimal estimator [21-23]. We note that Jarzynski's identity (5.8) follows from (5.35) simply by integration over w, because the probability densities are normalized to 1... [Pg.181]
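Assuming (5.35) is the Crooks work relation P_F(w) = P_R(−w) e^{β(w − ΔF)}, which is the usual identity at this point in such treatments, the integration step the text refers to is

⟨e^{−βw}⟩_F = ∫ P_F(w) e^{−βw} dw = e^{−βΔF} ∫ P_R(−w) dw = e^{−βΔF},

where the last equality uses the normalization of P_R; this is Jarzynski's identity (5.8).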

Shown in Fig. 5.3d are free energies estimated from the same forward and backward simulation runs using Bennett's optimal estimator, obtained by solving (5.50) using a Newton-Raphson method. Unlike the direct exponential estimator (which... [Pg.189]
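Equation (5.50) is not reproduced in the excerpt; as an illustration of the procedure described, the sketch below solves the standard Bennett/maximum-likelihood acceptance-ratio equation (all work values in units of kT, with the usual sample-size term M = ln(n_F/n_R)) for ΔF by Newton-Raphson. The toy data are illustrative.

```python
import numpy as np

def bar_free_energy(w_f, w_r, tol=1e-10, max_iter=100):
    """Bennett acceptance-ratio estimate of Delta F (in kT) from forward work
    values w_f (A->B) and reverse work values w_r (B->A), both in kT.

    Solves  sum_i s(dF - w_f[i] - M) - sum_j s(M - w_r[j] - dF) = 0,
    where s is the logistic function and M = ln(n_f / n_r), by Newton-Raphson.
    The left-hand side is monotonically increasing in dF, so the root is unique.
    """
    w_f, w_r = np.asarray(w_f, float), np.asarray(w_r, float)
    M = np.log(len(w_f) / len(w_r))
    s = lambda x: 1.0 / (1.0 + np.exp(-x))
    dF = 0.5 * (np.mean(w_f) - np.mean(w_r))        # simple starting guess
    for _ in range(max_iter):
        a, b = s(dF - w_f - M), s(M - w_r - dF)
        g = a.sum() - b.sum()                        # objective
        dg = (a * (1 - a)).sum() + (b * (1 - b)).sum()   # derivative (> 0)
        step = g / dg
        dF -= step
        if abs(step) < tol:
            break
    return dF

# toy usage: Gaussian work distributions consistent with Crooks' theorem
# (dF = 1 kT, work variance 2 kT^2 -> forward mean dF + 1, reverse mean -dF + 1)
rng = np.random.default_rng(2)
dF_true, var = 1.0, 2.0
w_f = rng.normal(dF_true + var / 2, np.sqrt(var), size=5000)
w_r = rng.normal(-dF_true + var / 2, np.sqrt(var), size=5000)
print(bar_free_energy(w_f, w_r))   # close to 1.0
```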

Fig. 5.3. Comparison of different free energy estimators. Plotted are distributions of estimated free energies using sample sizes (i.e., number of independent simulation runs) of N = 100 simulations (solid lines), as well as N = 1,000 (long dashed) and N = 10,000 simulations (short dashed lines). (a) Exponential estimator, (5.44). (b) Cumulant estimator using averages from forward and backward paths, (5.47). (c) Cumulant estimator using averages and variances from forward and backward paths, (5.48). (d) Bennett's optimal estimator, (5.50).
In summary, using work collected from forward and backward paths greatly improves the accuracy of the estimates, and for the symmetric system studied here eliminates the bias. In our particular example, the cumulant estimators using forward and backward work data produce the most precise free energy estimates, followed by Bennett's optimal estimator. However, this somewhat poorer performance of the optimal estimator is caused in part by the high degree of symmetry of the system studied. [Pg.190]

Fig. 5.4. Comparison of different free energy estimators for asymmetric perturbation from λ = 0 to 2 within t = 1. Shown are distributions of free energies estimated using the direct exponential average, (5.44), averaged over forward and backward perturbations (solid line); averages (5.47) from forward and backward paths (long dashed line); averages and variances (5.48) from forward and backward paths (short dashed line); and Bennett's optimal estimator, (5.50) (dotted line). In all cases, free energies were estimated from N = 1,000 simulations. The vertical arrow indicates the actual free energy difference of βΔA = −6.6.
However, mathematical justification of such a definition of the characteristic timescale has been obtained only recently, in connection with optimal estimates [54]. As an example we will consider the evolution of the probability, but the consideration may be performed for any observable. We will speak about the transition time, implying that it describes the change of the evolution of the transition probability from one state to another. [Pg.376]

Alternatively, the definition of the mean transition time (5.4) may be obtained on the basis of consideration of optimal estimates [54]. Let us define the transition time ϑ as the interval between the moment of the initial state of the system and the moment of an abrupt change of the function approximating the evolution of its probability Q(t, x0) with minimal error. As an approximation consider the following function: ψ(t, x0, ϑ) = a0(x0) + a1(x0)[1(t) − 1(t − ϑ(x0))], where 1(t) is the unit step function. In the following we will drop the arguments of a0, a1, and the relaxation time ϑ, assuming their dependence on the coordinates c and d of the considered interval and on the initial coordinate x0. Optimal values of the parameters of such an approximating function satisfy the condition of a minimum of the functional... [Pg.378]

When considering an analytic description, asymptotically optimal estimates are of importance. Asymptotically optimal estimates assume infinite duration of the observation process, tN → ∞. For these estimates an additional condition on the amplitude of the jump is imposed: the amplitude is assumed to be equal to the difference between the asymptotic and initial values of the approximating function, a1 = Q(0, x0) − Q(∞, x0). Only the moment of the abrupt change of the function remains to be determined. In such an approach the required quantity may be obtained by the solution of a system of linear equations and represents a linear estimate of a parameter of the evolution of the process. [Pg.379]
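A toy numerical sketch of the idea in the two preceding excerpts, not the source's exact functional or its asymptotic solution: fit the step approximation ψ(t) = a0 + a1[1(t) − 1(t − ϑ)] to a sampled decay curve, with a0 and a1 fixed by the asymptotic and initial values, and pick the jump moment ϑ that minimizes the integrated squared error.

```python
import numpy as np

def step_fit_transition_time(t, Q):
    """Fit psi(t) = a0 + a1 * [1(t) - 1(t - theta)] to a sampled decay Q(t),
    with a0 = Q(inf) (last sample) and a1 = Q(0) - Q(inf), and return the jump
    moment theta minimizing the integrated squared error."""
    a0, a1 = Q[-1], Q[0] - Q[-1]
    dt = np.gradient(t)
    errs = []
    for theta in t:
        psi = np.where(t < theta, a0 + a1, a0)
        errs.append(np.sum((Q - psi) ** 2 * dt))
    return t[int(np.argmin(errs))]

# toy usage: exponential relaxation Q(t) = exp(-t/tau); the fitted jump moment is
# of the order of tau (for a pure exponential this simple L2 fit gives tau*ln 2)
t = np.linspace(0.0, 20.0, 4001)
tau = 2.0
theta_hat = step_fit_transition_time(t, np.exp(-t / tau))
```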

Therefore, we have again arrived at (5.4), which, as follows from the above, is an asymptotically optimal estimate. From the expression (5.9), another well-known... [Pg.379]

Bryson AE, Ho Y-C (1975) Applied optimal control: optimization, estimation, and control. Hemisphere Publishing, distributed by Halsted Press, Washington [Pg.44]

In this chapter, the mathematical tools and fundamental concepts utilized in the development and application of modern estimation theory are considered. This includes the mathematical formulation of the problem and the important concepts of redundancy and estimability, in particular their usefulness in the decomposition of the general optimal estimation problem. A brief discussion of the structural aspects of these concepts is included. [Pg.28]

Throughout this chapter, we will refer to estimation in a very general sense. We will see later that data reconciliation is only a particular case within the framework of optimal estimation theory. [Pg.28]

As in the classical steady-state data reconciliation formulation, the optimal estimates are those that are as close as possible (in the least squares sense) to the measurements, such that the model equations are satisfied exactly. [Pg.169]
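The chapter's notation is not reproduced in the excerpt; for the linear steady-state case, the reconciliation just described has a well-known closed form: minimize (y − x)ᵀΣ⁻¹(y − x) subject to A x = 0, giving x̂ = y − Σ Aᵀ (A Σ Aᵀ)⁻¹ A y. A minimal sketch with illustrative variable names:

```python
import numpy as np

def reconcile(y, Sigma, A):
    """Least-squares data reconciliation: adjust the measurements y as little as
    possible (weighted by the measurement covariance Sigma) so that the linear
    balance equations A x = 0 are satisfied exactly:
        x_hat = y - Sigma A^T (A Sigma A^T)^-1 A y
    """
    return y - Sigma @ A.T @ np.linalg.solve(A @ Sigma @ A.T, A @ y)

# toy usage: a splitter balance flow1 = flow2 + flow3 with noisy flow measurements
A = np.array([[1.0, -1.0, -1.0]])       # balance: x1 - x2 - x3 = 0
Sigma = np.diag([0.1, 0.2, 0.2])        # measurement variances
y = np.array([10.3, 6.1, 4.4])          # raw measurements (slightly inconsistent)
x_hat = reconcile(y, Sigma, A)          # reconciled values satisfy the balance
```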

Friedland, B. (1969). Treatment of bias in recursive filtering. IEEE Trans. Autom. Control AC-14, 359-367.
Gelb, A. (1974). Applied Optimal Estimation. MIT Press, Cambridge, MA. [Pg.176]

When a non-constant drift is present, the estimation of the semi-variogram model is confounded with the estimation of the drift. That is, to find the optimal estimator of the semi-variogram, it is necessary to know the drift function, but it is unknown. David (14) recommended an estimator of the drift, m(x), derived from least-squares methods of trend surface analysis (18). Then at every data point a residual is given by... [Pg.215]
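A minimal sketch of the residual computation the excerpt describes, using an ordinary least-squares polynomial trend surface as the drift estimate m(x, y); the actual trend order and estimator in (14, 18) may differ, and the data below are illustrative.

```python
import numpy as np

def drift_residuals(coords, z, degree=1):
    """Fit a polynomial trend surface m_hat(x, y) by least squares and return the
    residuals r_i = z_i - m_hat(x_i, y_i), which can then be used for
    semi-variogram estimation."""
    x, y = coords[:, 0], coords[:, 1]
    # design matrix of monomials x^p * y^q with p + q <= degree
    cols = [x ** p * y ** q
            for p in range(degree + 1) for q in range(degree + 1 - p)]
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    return z - X @ beta

# toy usage: data with a linear drift plus noise
rng = np.random.default_rng(3)
coords = rng.uniform(0, 10, size=(200, 2))
z = 2.0 + 0.5 * coords[:, 0] - 0.3 * coords[:, 1] + rng.normal(0, 0.2, 200)
r = drift_residuals(coords, z)
```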

In other words, the optimal estimate may be calculated through ... [Pg.291]

P. Maragakis, M. Spichty, and M. Karplus, Optimal estimates of free energies from multistate nonequilibrium work data. Phys. Rev. Lett. 96, 100602 (2006). [Pg.120]

Larkin, F. M., Optimal Estimation of Bounded Linear Functionals from Noisy Data, ... [Pg.31]

