
Maximum likelihood estimation, optimal

The optimal parameter p can be found by maximum-likelihood estimation, but even the optimal p will not guarantee that the Box-Cox transformed values are symmetric. Note that all these transformations are only defined for positive data values; in the case of negative values, a constant has to be added to make them positive. Within R, the Box-Cox transformation can be performed on the data of a vector x as follows ... [Pg.48]
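The original R snippet is truncated above. As an illustration only, the same maximum-likelihood choice of the transformation parameter (usually called lambda) can be sketched in Python with scipy.stats.boxcox, which selects lambda by maximizing the log-likelihood; the shift constant for non-positive data is an assumption mirroring the text, not part of the original example.

```python
import numpy as np
from scipy import stats

def boxcox_ml(values):
    """Box-Cox transform with the parameter chosen by maximum likelihood.

    Non-positive data are shifted by a constant first, since the
    transformation is only defined for positive values.
    """
    x = np.asarray(values, dtype=float)
    shift = 0.0
    if x.min() <= 0:
        shift = 1.0 - x.min()                      # hypothetical choice of constant
    transformed, lam = stats.boxcox(x + shift)     # lam maximizes the log-likelihood
    return transformed, lam, shift

# usage sketch with skewed synthetic data
rng = np.random.default_rng(0)
sample = rng.lognormal(mean=0.0, sigma=1.0, size=200)
z, lam, shift = boxcox_ml(sample)
print(f"ML estimate of lambda: {lam:.3f} (shift applied: {shift})")
```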

Another optimization approach was followed by Wagner [68]. Wagner developed a methodology for performing simultaneous model parameter estimation and source characterization, in which the inverse model is posed as a non-linear maximum likelihood estimation problem. The hydrogeologic and source parameters were estimated based on hydraulic head and contaminant concentration measurements. In essence, this method minimizes the following ... [Pg.77]

Haines et al. (47) suggested including the criterion of Bayesian D-optimality, which maximizes a concave function of the information matrix and, in essence, minimizes the generalized variance of the maximum likelihood estimators of the two parameters of the logistic regression. The authors underline that toxicity is recorded as an ordinal variable and not a simple binary variable, and that the present design needs to be extended to proportional odds models. [Pg.792]
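As a hedged illustration of the criterion itself (not of the authors' specific design algorithm), the sketch below scores a candidate set of dose levels under a two-parameter logistic model: for each prior draw of the intercept and slope it forms the Fisher information matrix and averages the log-determinant. The prior and dose grids are assumed purely for the example.

```python
import numpy as np

def logistic_information(doses, alpha, beta):
    """Fisher information matrix of a two-parameter logistic model
    p(x) = 1 / (1 + exp(-(alpha + beta * x))) at the given design points."""
    x = np.asarray(doses, dtype=float)
    p = 1.0 / (1.0 + np.exp(-(alpha + beta * x)))
    w = p * (1.0 - p)                              # per-observation weights
    info = np.zeros((2, 2))
    info[0, 0] = np.sum(w)
    info[0, 1] = info[1, 0] = np.sum(w * x)
    info[1, 1] = np.sum(w * x**2)
    return info

def bayesian_d_score(doses, prior_draws):
    """Bayesian D-optimality: expected log-determinant of the information
    matrix over prior draws of (alpha, beta)."""
    logdets = [np.linalg.slogdet(logistic_information(doses, a, b))[1]
               for a, b in prior_draws]
    return float(np.mean(logdets))

# usage sketch: compare two candidate designs under an assumed prior
rng = np.random.default_rng(1)
prior = np.column_stack([rng.normal(-2.0, 0.5, 500),   # alpha draws (assumed)
                         rng.normal(1.0, 0.2, 500)])   # beta draws (assumed)
design_a = [0.5, 1.5, 2.5, 3.5]
design_b = [2.0, 2.0, 2.0, 2.0]
print(bayesian_d_score(design_a, prior), bayesian_d_score(design_b, prior))
```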

There are several concepts for the description of phase in quantum theory at present. Some of them accent the theoretical aspects, others the experimental ones. Quantization based on the correspondence principle leads to the formulation of operational quantum phase concepts. For example, the well-known operational approach formulated by Noh et al. [63,64] is motivated by the correspondence principle in classical wave theory. Further generalization may be given in the framework of quantum estimation theory. The prediction may be improved using maximum-likelihood estimation. The optimization of phase inference will be pursued in the following. [Pg.528]

The established method for experimentally estimating the diffusion coefficient of a particle makes use of the linear dependence of its mean squared displacement (MSD) with respect to time (another method, involving a maximum likelihood estimate (MLE) that is optimal with respect to an information-theoretic limit, has been proposed recently [3]). However, noise in the measurement of the displacements, due to optical and instrument constraints, complicates this estimation. Frequently, though, the mean squared measurement noise ⟨x²_meas(t)⟩ is well approximated as additive [4], so that ⟨x²(t)⟩ = 2Dt + ⟨x²_meas(t)⟩. Thus, a linear regression model is... [Pg.216]
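The regression itself is straightforward: the slope of the MSD versus lag time gives 2D (in one dimension) and the intercept absorbs the additive measurement-noise term. A minimal Python sketch, with the simulated trajectory and noise level purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# simulate a 1-D Brownian trajectory observed with additive localization noise
D_true, dt, n_steps, noise_sd = 0.5, 0.1, 5000, 0.05   # assumed values
true_pos = np.cumsum(rng.normal(0.0, np.sqrt(2 * D_true * dt), n_steps))
observed = true_pos + rng.normal(0.0, noise_sd, n_steps)

# empirical MSD for the first few lag times
lags = np.arange(1, 11)
msd = np.array([np.mean((observed[k:] - observed[:-k]) ** 2) for k in lags])
times = lags * dt

# linear regression: slope = 2D, intercept = additive measurement-noise term
slope, intercept = np.polyfit(times, msd, 1)
print(f"estimated D = {slope / 2:.3f} (true {D_true}), noise offset = {intercept:.4f}")
```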

Nonparametric analysis provides powerful results since the reliability calculation is not constrained to fit any particular pre-defined lifetime distribution. However, this flexibility makes nonparametric results neither easy nor convenient to use for different purposes as often encountered in engineering design (e.g., optimization). In addition, some trends and patterns are more clearly identified and recognizable with parametric analysis. Several possible methods can be used to fit a parametric distribution to the nonparametric estimated reliability functions (as provided by the Kaplan-Meier estimator), such as graphical procedures or inference procedures; see Lawless (2003) for details. We choose in this paper the maximum likelihood estimation (MLE) technique, assuming that the satellite subsystem failure data arise from a Weibull probability distribution, as expressed in Equations 1-2. [Pg.868]
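For illustration only (the satellite data and the exact form of Equations 1-2 are not reproduced here), a Weibull maximum-likelihood fit to right-censored failure times can be sketched in Python by maximizing the censored log-likelihood directly:

```python
import numpy as np
from scipy.optimize import minimize

def weibull_neg_loglik(params, failures, censored):
    """Negative log-likelihood of a Weibull(shape beta, scale eta) model
    with exact failure times and right-censored observations."""
    log_beta, log_eta = params                    # log scale keeps parameters positive
    beta, eta = np.exp(log_beta), np.exp(log_eta)
    f = np.asarray(failures, dtype=float)
    c = np.asarray(censored, dtype=float)
    loglik = np.sum(np.log(beta / eta) + (beta - 1) * np.log(f / eta) - (f / eta) ** beta)
    loglik += np.sum(-(c / eta) ** beta)          # survivor contribution of censored units
    return -loglik

# usage sketch with made-up failure and censoring times (hours)
failures = [1200.0, 1850.0, 2300.0, 3100.0, 4200.0]
censored = [5000.0, 5000.0, 5000.0]               # still operating at end of observation
res = minimize(weibull_neg_loglik, x0=[0.0, np.log(3000.0)],
               args=(failures, censored), method="Nelder-Mead")
beta_hat, eta_hat = np.exp(res.x)
print(f"MLE shape = {beta_hat:.2f}, scale = {eta_hat:.0f} h")
```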

Although it sounds reasonable to use the maximum likelihood to define our estimate of the displacement, two questions remain. Firstly, what is the variance of the error associated with this estimate? This defines N, which was used in Eq. 22 to determine the error in the wavefront reconstruction. Secondly, is it possible to do better than the centroid? In other words, is it optimal? ... [Pg.387]

While it is perfectly permissible to estimate a and b on this basis, the calculation can only be done in an iterative fashion: both a and b are varied in increasingly smaller steps (see Optimization Techniques, Section 3.5) and each time the squared residuals are calculated and summed. The combination of a and b that yields the smallest of such sums represents the solution. Despite digital computers, Adcock's solution, a special case of the maximum likelihood method, is not widely used: the additional computational effort and the more complicated software are not justified by the improved (a debatable notion) results, and the process is not at all transparent, i.e., not amenable to manual verification. [Pg.96]
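A minimal sketch of that iterative search, assuming a straight-line model y = a + b·x and using perpendicular (orthogonal) residuals in the spirit of Adcock's approach; the data are invented and a general-purpose optimizer stands in for the hand-tuned "increasingly smaller steps":

```python
import numpy as np
from scipy.optimize import minimize

def sum_sq_orthogonal_residuals(params, x, y):
    """Sum of squared perpendicular distances from the points to the
    line y = a + b*x (the kind of residual used in Adcock's approach)."""
    a, b = params
    d = (y - a - b * x) / np.sqrt(1.0 + b**2)
    return np.sum(d**2)

# usage sketch with made-up calibration data
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

start = np.polyfit(x, y, 1)[::-1]          # ordinary least squares as a starting guess
res = minimize(sum_sq_orthogonal_residuals, start, args=(x, y), method="Nelder-Mead")
a_hat, b_hat = res.x
print(f"a = {a_hat:.3f}, b = {b_hat:.3f}")
```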

Table 2.3 is used to classify the differing systems of equations encountered in chemical reactor applications and the normal method of parameter identification. As shown, the optimal values of the system parameters can be estimated using a suitable error criterion, such as the methods of least squares, maximum likelihood or probability density function. [Pg.112]
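As a concrete, hedged example of least-squares parameter identification for a reactor-type model (the model, data and rate constant below are assumed for illustration and are not taken from Table 2.3), a first-order batch decay C(t) = C0·exp(−k·t) can be fitted to concentration measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def first_order_decay(t, c0, k):
    """Concentration profile of a first-order batch reaction A -> products."""
    return c0 * np.exp(-k * t)

# made-up concentration-time data (mol/L vs. min)
t_data = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
c_data = np.array([1.00, 0.74, 0.55, 0.41, 0.30, 0.22])

# least-squares estimate of C0 and k; with normal, equal-variance errors
# this coincides with the maximum likelihood estimate
popt, pcov = curve_fit(first_order_decay, t_data, c_data, p0=[1.0, 0.1])
c0_hat, k_hat = popt
print(f"C0 = {c0_hat:.3f} mol/L, k = {k_hat:.3f} 1/min")
```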

These considerations raise a question: how can we determine the optimal value of n and the coefficients i < n in (2.54) and (2.56)? Clearly, if the expansion is truncated too early, some terms that contribute importantly to P0(ΔU) will be lost. On the other hand, terms above some threshold carry no information and, instead, only add statistical noise to the probability distribution. One solution to this problem is to use physical intuition [40]. Perhaps a better approach is one based on the maximum likelihood (ML) method, in which we determine the maximum number of terms supported by the provided information. For the expansion in (2.54), calculating the number of Gaussian functions, their mean values and variances using ML is a standard problem solved in many textbooks on Bayesian inference [43]. For the expansion in (2.56), the ML solution for n and the coefficients also exists. Just like in the case of the multistate Gaussian model, this expansion appears to improve the free energy estimates considerably when P0(ΔU) is a broad function. [Pg.65]
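The following Python sketch illustrates the generic idea rather than the specific expansions (2.54) and (2.56): Gaussian mixtures with increasing numbers of components are fitted by maximum likelihood, and terms stop being added once a penalized likelihood score (here BIC, one common proxy for "the number of terms supported by the data") stops improving. The data are synthetic.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
# synthetic "energy difference" samples drawn from two overlapping Gaussians
samples = np.concatenate([rng.normal(-2.0, 1.0, 400),
                          rng.normal(1.5, 0.7, 200)]).reshape(-1, 1)

best_n, best_bic, best_model = None, np.inf, None
for n in range(1, 6):
    gm = GaussianMixture(n_components=n, random_state=0).fit(samples)
    bic = gm.bic(samples)               # penalized (negative) log-likelihood
    if bic < best_bic:
        best_n, best_bic, best_model = n, bic, gm

print(f"selected {best_n} Gaussian terms")
print("means:", best_model.means_.ravel())
print("variances:", best_model.covariances_.ravel())
```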

The most popular approach is supervised because a region of interest has to be defined in the background of the images in order to extract n samples. Afterward, the choice of the estimator depends on what kind of data is available. For complex images, the optimal maximum likelihood (ML) estimator of σ² is given by [28] ... [Pg.218]

The problem considered here is the estimation of the state vector Xk (which contains the unknown parameters) from the observations of the vectors Yk = [y0, y1, ..., yk]. Because the collection of variables Yk = (y0, y1, ..., yk) is jointly Gaussian, we can estimate Xk by maximizing the likelihood of the conditional probability distribution p(Xk | Yk), which is given by the values of the conditional variables. Moreover, we can also search for the estimate X̂k that minimizes the mean square of the error X̃k = Xk − X̂k. In both cases (maximum likelihood or least squares), the optimal estimate for jointly Gaussian variables is the conditional mean, and the error in the estimate is the conditional covariance. [Pg.179]
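For jointly Gaussian variables the conditional mean has the closed form x̂ = μx + Σxy Σyy⁻¹ (y − μy), with error covariance Σxx − Σxy Σyy⁻¹ Σyx. A small self-contained Python sketch, using an assumed joint covariance rather than the state-space model of the source:

```python
import numpy as np

def gaussian_conditional(mu_x, mu_y, S_xx, S_xy, S_yy, y_obs):
    """Conditional mean and covariance of x given y for jointly Gaussian (x, y).
    The conditional mean is simultaneously the ML and the minimum-mean-square-error
    estimate, as stated in the text."""
    gain = S_xy @ np.linalg.inv(S_yy)          # "Kalman-like" gain
    x_hat = mu_x + gain @ (y_obs - mu_y)
    cov_err = S_xx - gain @ S_xy.T
    return x_hat, cov_err

# usage sketch with an assumed joint distribution
mu_x = np.array([0.0, 0.0])
mu_y = np.array([0.0])
S_xx = np.array([[1.0, 0.3], [0.3, 2.0]])
S_xy = np.array([[0.8], [0.5]])
S_yy = np.array([[1.5]])

x_hat, P = gaussian_conditional(mu_x, mu_y, S_xx, S_xy, S_yy, y_obs=np.array([1.2]))
print("estimate:", x_hat)
print("error covariance:\n", P)
```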

Maximum likelihood (ML) estimation can be performed if the statistics of the measurement noise εj are known. This estimate is the value of the parameters for which the observation of the vector yj is the most probable. If we assume the probability density function (pdf) of εj to be normal, with zero mean and uniform variance, ML estimation reduces to ordinary least squares estimation. An estimate, φ̂j, of the true jth individual's parameters φj can be obtained through optimization of some objective function O(φj). Such a model is a natural choice if each measurement is assumed to be equally precise for all values of yj. This is usually the case in concentration-effect modeling. Considering the multiplicative log-normal error model, the observed concentration y is given by ... [Pg.2948]
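Under a multiplicative log-normal error model the noise becomes additive and normal on the log scale, so the ML fit can be carried out as least squares on log-transformed concentrations. A hedged Python sketch with an assumed mono-exponential structural model (the actual model and data of the source are not reproduced):

```python
import numpy as np
from scipy.optimize import curve_fit

def log_concentration(t, log_c0, k_el):
    """Log of a mono-exponential concentration profile C(t) = C0 * exp(-k_el * t)."""
    return log_c0 - k_el * t

# made-up concentration-time observations for one individual
t_obs = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 12.0])
c_obs = np.array([9.1, 8.0, 6.4, 4.2, 1.9, 0.85])

# multiplicative log-normal error  =>  additive normal error on log(C),
# so ordinary least squares on the log scale is the ML estimate
popt, _ = curve_fit(log_concentration, t_obs, np.log(c_obs), p0=[np.log(10.0), 0.2])
c0_hat, k_hat = np.exp(popt[0]), popt[1]
print(f"C0 = {c0_hat:.2f}, k_el = {k_hat:.3f} 1/h")
```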

Also under OLS assumptions, the regression parameter estimates have a number of optimal properties. First, θ̂ is an unbiased estimator for θ. Second, the standard errors of the estimates are at a minimum, i.e., the standard errors obtained under any other set of assumptions will be larger than those of the OLS estimates. Third, assuming the errors to be normally distributed, the OLS estimates are also the maximum likelihood (ML) estimates for θ (see below). It is often stated that the OLS parameter estimates are BLUE (Best Linear Unbiased Estimators), in the sense that best means minimum variance. Fourth, OLS estimates are consistent, which in simple terms means that as the sample size increases the standard error of the estimate decreases and the bias of the parameter estimates themselves decreases. [Pg.59]

Maximum likelihood was first presented by R.A. Fisher (1921) (when he was 22 years old!) and is the backbone of statistical estimation. The object of maximum likelihood is to make inferences about the parameters of a distribution, θ, given a set of observed data. Maximum likelihood is an estimation procedure that finds an estimate of θ (an estimator called θ̂) such that the likelihood of actually observing the data is maximal. The Likelihood Principle holds that all the information contained in the data can be summarized by a likelihood function. The standard approach (when a closed-form solution can be obtained) is to derive the likelihood function, differentiate it with respect to the model parameters, set the resulting equations equal to zero, and then solve for the model parameters. Often, however, a closed-form solution cannot be obtained, in which case optimization is done to find the set of parameter values that maximizes the likelihood (hence the name). [Pg.351]
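A minimal numerical illustration of that last step: when no closed form exists, one typically minimizes the negative log-likelihood with a general-purpose optimizer. The gamma-distributed sample below is assumed purely for the example.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(4)
data = rng.gamma(shape=2.5, scale=1.8, size=500)   # synthetic observations

def neg_log_likelihood(params):
    """Negative log-likelihood of a gamma(shape, scale) model for the data."""
    log_shape, log_scale = params                  # log-parameterization keeps both positive
    shape, scale = np.exp(log_shape), np.exp(log_scale)
    return -np.sum(stats.gamma.logpdf(data, a=shape, scale=scale))

res = minimize(neg_log_likelihood, x0=[0.0, 0.0], method="Nelder-Mead")
shape_hat, scale_hat = np.exp(res.x)
print(f"ML estimates: shape = {shape_hat:.2f}, scale = {scale_hat:.2f}")
```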

Whereas standard linearized localization methods give a single point solution and uncertainty estimates (Schechinger and Vogel 2005), the result of the nonlinear method is a probability density function (PDF) over the unknown source coordinates. The optimal location is taken as the maximum likelihood point of the PDF. The PDF explicitly accounts for a priori known data errors, which are assumed to be Gaussian. [Pg.140]
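A hedged sketch of that idea: on a grid of candidate source locations, evaluate a PDF proportional to exp(−½ rᵀC⁻¹r), where r is the misfit between observed and predicted arrival times and C is the assumed Gaussian data covariance, then take the grid point with the highest value as the ML location. The sensor geometry, wave speed, and noise level below are invented for the example.

```python
import numpy as np

# assumed 2-D sensor layout (m), wave speed (m/s), and data standard deviation (s)
sensors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
v, sigma = 3000.0, 2e-6
true_src = np.array([0.32, 0.71])

def travel_times(src):
    return np.linalg.norm(sensors - src, axis=1) / v

rng = np.random.default_rng(5)
observed = travel_times(true_src) + rng.normal(0.0, sigma, len(sensors))

# evaluate the (unnormalized) Gaussian-misfit log-PDF on a grid of candidate locations
xs = np.linspace(0.0, 1.0, 201)
ys = np.linspace(0.0, 1.0, 201)
log_pdf = np.empty((len(ys), len(xs)))
for i, yy in enumerate(ys):
    for j, xx in enumerate(xs):
        r = observed - travel_times(np.array([xx, yy]))
        r = r - r.mean()                 # remove the unknown origin time (common shift)
        log_pdf[i, j] = -0.5 * np.sum(r**2) / sigma**2

i_best, j_best = np.unravel_index(np.argmax(log_pdf), log_pdf.shape)
print(f"ML source location: x = {xs[j_best]:.3f} m, y = {ys[i_best]:.3f} m")
```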

The models built in the previous steps can be parameterized based on physiogenomic data. The maximum likelihood method is used, which is a well-established method for obtaining optimal estimates of parameters. S-plus provides very good support for algorithms that provide these estimates for the initial linear regression models, as well as for other generalized linear models that we may use when the error distribution is not normal. [Pg.456]

However, PCA estimation of this subspace is optimal in a maximum likelihood sense only when the measurement errors are independent and follow a normal distribution with uniform variance (homoscedastic noise) [87]. On the contrary, if the measurement error variances are non-uniform (heteroscedastic noise) and also not independent, the PCA projection may represent the samples incorrectly or non-optimally. Non-uniform measurement errors in a data set may arise from error sources inherent to a given type of instrumentation or experimental setting, for example, variations in noise across measurement channels in spectrophotometers, or due to the presence of missing information. [Pg.121]

The theoretical grounds for parameter estimation are based on the principle of maximum likelihood, which states that if an event has occurred, the maximum probability should have corresponded to its realization. As a consequence, the estimators maximizing the likelihood function possess certain optimal properties (Johnson and Leone, 1977, Chap. 7; Linnik, 1961, Chap. 3). The likelihood function is composed of the distribution functions of the errors. When no information on the type of distribution is available, it is usually assumed to be normal, as justified by the results of the central limit theorem (Box et al., 1978, p. 44; Linnik, 1961, pp. 71-74). For normally distributed errors, maximization of the likelihood function reduces to minimization of the sum of squared residuals, i.e., to the least-squares method. [Pg.429]

