The theoretical grounds for parameter estimation are based on the principle of maximum likelihood, which states that if an event has occurred, its realization should have corresponded to the maximum probability. As a consequence, estimators that maximize the likelihood function possess certain optimal properties (Johnson and Leone, 1977, Chap. 7; Linnik, 1961, Chap. 3). The likelihood function is composed of the distribution functions of the errors. When no information on the type of distribution is available, it is usually assumed to be normal, a choice justified by the central limit theorem (Box et al., 1978, p. 44; Linnik, 1961, pp. 71-74). For normally distributed errors, maximization of the likelihood function reduces to minimization of the sum of squared residuals, i.e., to the least-squares method. [Pg.429]
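The equivalence between maximum likelihood and least squares for normal errors can be checked numerically. The following is a minimal sketch (the linear model, data, and variable names are illustrative, not from the source): minimizing the negative log-likelihood of a unit-variance normal error model gives the same fit as ordinary least squares.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative linear model y = a*x + b with normally distributed errors.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=x.size)

def neg_log_likelihood(params):
    """Negative log-likelihood for normal errors (unit variance): up to
    additive constants, this is half the residual sum of squares."""
    a, b = params
    resid = y - (a * x + b)
    return 0.5 * np.sum(resid ** 2)

ml_fit = minimize(neg_log_likelihood, x0=[0.0, 0.0]).x  # [a, b]
ls_fit = np.polyfit(x, y, 1)                            # OLS: [slope, intercept]

# The two solutions coincide up to numerical tolerance.
print(np.allclose(ml_fit, ls_fit, atol=1e-4))
```

Because the negative log-likelihood is exactly the residual sum of squares (up to constants), any optimizer applied to it lands on the least-squares solution.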

The training problem is to determine the set of model parameters given above from an observed set of wavelet coefficients. In other words, one first obtains the wavelet coefficients for the time series data of interest; the model parameters that best explain the observed data are then found using the maximum likelihood principle. The expectation-maximization (EM) approach, which jointly estimates the model parameters and the hidden state probabilities, is used. This is essentially an upward-downward EM method, extended from the Baum-Welch algorithm developed for chain-structured HMMs [43, 286]. [Pg.147]
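The E-step/M-step cycle underlying this training procedure can be illustrated on a much simpler model. The sketch below runs EM on a two-component Gaussian mixture — a deliberately simplified analogue of the tree-structured wavelet-domain HMM in the text, not the author's algorithm; the data and names are invented for illustration. The E-step computes hidden-state posteriors, and the M-step re-estimates parameters from them, just as in the upward-downward method.

```python
import numpy as np

# Toy data: two hidden "states" with means -2 and 3 (illustrative values).
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-2.0, 0.5, 200), rng.normal(3.0, 1.0, 300)])

mu = np.array([-1.0, 1.0])      # initial component means
sigma = np.array([1.0, 1.0])    # initial component std deviations
pi = np.array([0.5, 0.5])       # initial state probabilities

for _ in range(100):
    # E-step: posterior probability of each hidden state for each point.
    dens = (pi / (sigma * np.sqrt(2 * np.pi))) * \
           np.exp(-0.5 * ((data[:, None] - mu) / sigma) ** 2)
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters from the expected state assignments.
    nk = resp.sum(axis=0)
    mu = (resp * data[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (data[:, None] - mu) ** 2).sum(axis=0) / nk)
    pi = nk / data.size

print(np.sort(mu))  # recovered means, close to the true values -2 and 3
```

The full wavelet-domain HMM replaces the independent E-step here with upward and downward probability passes over the wavelet tree, but the alternating structure is the same.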

Also under OLS assumptions, the regression parameter estimates have a number of optimal properties. First, θ̂ is an unbiased estimator of θ. Second, the standard errors of the estimates are at a minimum, i.e., any other linear unbiased estimator will have larger standard errors than the OLS estimator. Third, assuming the errors to be normally distributed, the OLS estimates are also the maximum likelihood (ML) estimates of θ (see below). It is often stated that the OLS parameter estimates are BLUE (Best Linear Unbiased Estimators), where best means minimum variance. Fourth, OLS estimates are consistent, which in simple terms means that as the sample size increases, both the standard error and the bias of the parameter estimates decrease. [Pg.59]
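The unbiasedness property can be demonstrated by simulation. The sketch below (model, true parameter values, and sample sizes are all illustrative assumptions) repeatedly draws data from a known linear model with normal errors, computes the OLS estimate θ̂ = (XᵀX)⁻¹Xᵀy each time, and checks that the estimates average out to the true θ.

```python
import numpy as np

# Illustrative simulation check of OLS unbiasedness.
rng = np.random.default_rng(2)
theta_true = np.array([1.5, -0.7])                       # [intercept, slope]
X = np.column_stack([np.ones(40), np.linspace(0, 5, 40)])  # design matrix

estimates = []
for _ in range(2000):
    y = X @ theta_true + rng.normal(scale=1.0, size=40)
    # OLS (and, for normal errors, ML) estimate: (X'X)^{-1} X'y
    theta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    estimates.append(theta_hat)

mean_est = np.mean(estimates, axis=0)
print(mean_est)  # close to theta_true, illustrating unbiasedness
```

Averaging over many replications, the estimates cluster around the true parameter vector; the spread of the individual estimates is what the minimum-variance (BLUE) property bounds.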

The problem considered here is the estimation of the state vector Xk (which contains the unknown parameters) from the observations Yk = [y0, y1, ..., yk]. Because the collection of variables (Xk, y0, y1, ..., yk) is jointly Gaussian, we can estimate Xk by maximizing the conditional probability distribution p(Xk | Yk), evaluated at the observed values of the conditioning variables. Alternatively, we can seek the estimate X̂k that minimizes the mean-square error ek = Xk − X̂k. In both cases (maximum likelihood or least squares), the optimal estimate for jointly Gaussian variables is the conditional mean, and the error in the estimate is given by the conditional covariance. [Pg.179]
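This key fact — that the conditional mean is optimal in both senses — can be verified in the simplest scalar case. Below is a minimal sketch with invented numbers (a single Gaussian pair (X, Y) rather than the full state-vector problem): the estimate E[X | Y] = μx + (cov(X,Y)/var(Y))(y − μy) achieves a smaller mean-square error than ignoring the observation.

```python
import numpy as np

# Illustrative joint Gaussian pair (X, Y); values are not from the source.
mu_x, mu_y = 0.0, 0.0
cov = np.array([[2.0, 1.2],    # [[var(X),   cov(X,Y)],
                [1.2, 1.5]])   #  [cov(X,Y), var(Y) ]]

def conditional_mean(y_obs):
    """E[X | Y = y_obs] = mu_x + cov_xy / var_y * (y_obs - mu_y)."""
    return mu_x + cov[0, 1] / cov[1, 1] * (y_obs - mu_y)

# Empirical check that the conditional mean minimizes mean-square error.
rng = np.random.default_rng(3)
samples = rng.multivariate_normal([mu_x, mu_y], cov, size=100_000)
x_s, y_s = samples[:, 0], samples[:, 1]

mse_cond = np.mean((x_s - conditional_mean(y_s)) ** 2)
mse_naive = np.mean((x_s - mu_x) ** 2)   # estimate that ignores Y
print(mse_cond < mse_naive)              # conditioning on Y reduces the error
```

The residual error mse_cond approaches the conditional variance var(X) − cov(X,Y)²/var(Y), which is exactly the "error in the estimate" the text refers to.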

In selecting the model, a practitioner will select the market variables that are incorporated in the model; these can be directly observable, such as zero-coupon rates, forward rates, or swap rates, or they can be indeterminate, such as the mean of the short rate. The practitioner will then decide the dynamics of these market or state variables; for example, the short rate may be assumed to be mean reverting. Finally, the model must be calibrated to market prices, so the parameter values input to the model must be those that reproduce market prices as accurately as possible. There are a number of ways that parameters can be estimated; the most common techniques for calibrating to time series data, such as interest rate data, are the generalized method of moments (GMM) and the maximum likelihood method. For information on these estimation methods, refer to the bibliography. [Pg.81]
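As a concrete example of maximum likelihood calibration to a time series, the sketch below estimates a mean-reverting (Ornstein-Uhlenbeck / Vasicek-type) short rate from a simulated path. All parameter values, the sample length, and the variable names are illustrative assumptions, not from the source. For Gaussian transitions r(t+1) = c + φ·r(t) + ε, the conditional ML estimates reduce to a least-squares regression of r(t+1) on r(t), from which the mean-reversion speed and long-run mean are recovered.

```python
import numpy as np

# Illustrative true parameters for a mean-reverting short rate.
rng = np.random.default_rng(4)
kappa, theta, sigma, dt = 0.8, 0.03, 0.01, 1.0 / 252  # speed, mean, vol, daily step

# Simulate a path using the exact discrete-time transition of the OU process.
phi = np.exp(-kappa * dt)
c = theta * (1.0 - phi)
noise_sd = sigma * np.sqrt((1.0 - phi ** 2) / (2.0 * kappa))
r = np.empty(5000)
r[0] = theta
for t in range(r.size - 1):
    r[t + 1] = c + phi * r[t] + noise_sd * rng.normal()

# Conditional ML estimation: regress r(t+1) on r(t) (least squares again).
A = np.column_stack([np.ones(r.size - 1), r[:-1]])
(c_hat, phi_hat), *_ = np.linalg.lstsq(A, r[1:], rcond=None)
kappa_hat = -np.log(phi_hat) / dt        # recovered mean-reversion speed
theta_hat = c_hat / (1.0 - phi_hat)      # recovered long-run mean
print(theta_hat)  # close to the true long-run mean 0.03
```

In practice the calibration target would be observed market rates rather than a simulated path, and the mean-reversion speed is estimated much less precisely than the long-run mean on samples of this length.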

© 2019 chempedia.info