Big Chemical Encyclopedia


Full covariance method

While this method also uses a variance-covariance matrix, much like the full covariance method, the matrix itself is far more condensed. As an example, the matrix used in a 20-factor model would have a size of 20 X 20 = 400 cells, modest compared with the one-million-cell matrix mentioned previously for the full variance-covariance model. The advantages of using a multifactor model are that it easily allows a new issue to be mapped into past data for similar bonds through its descriptive characteristics, and that it can be inverted for use in a portfolio optimizer without much effort. The multifactor model is also more tolerant of pricing errors in individual securities, since prices are averaged within each factor bucket. [Pg.784]
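As a rough sketch of the arithmetic, the snippet below builds a 20 X 20 factor covariance matrix and computes a tracking error from active factor exposures. The matrix values, the exposures, and the random seed are all invented for illustration; only the matrix dimensions and the quadratic-form calculation come from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

n_factors = 20
# Hypothetical 20 x 20 factor covariance matrix (400 cells), built as
# A A^T / n so it is symmetric positive semi-definite.
A = rng.normal(size=(n_factors, n_factors))
factor_cov = A @ A.T / n_factors

# Hypothetical factor exposures of the portfolio and of the benchmark index.
port_expo = rng.normal(size=n_factors)
bench_expo = rng.normal(size=n_factors)

# Tracking-error variance is the quadratic form of the active
# (portfolio minus benchmark) exposures with the factor covariance matrix.
active = port_expo - bench_expo
te_variance = active @ factor_cov @ active
tracking_error = np.sqrt(te_variance)
print(factor_cov.size)  # 400 cells, vs. ~1e6 for a full 1000-asset matrix
```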

The first and most straightforward way to calculate tracking error is by using the full covariance model. This method depends heavily on past data for every single instrument in the index and portfolio. Using matrix algebra, the covariance of daily total returns between every pair of assets in the index can be calculated, as well as the variance of total returns for every single asset in the index. A variance-covariance matrix of the bond returns is then constructed based on these calculations. The variance-covariance matrix is then multiplied by the exposures vector, as shown in the equation below ... [Pg.782]
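The standard way to write the calculation described is the quadratic form TE^2 = v^T S v, where v is the exposures vector and S the variance-covariance matrix of returns. The sketch below illustrates it with a hypothetical five-asset index; the asset count, weights, and simulated returns are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

n_assets, n_days = 5, 250
# Hypothetical daily total returns for every asset in the index.
returns = rng.normal(0.0, 0.01, size=(n_days, n_assets))

# Variance-covariance matrix of the asset returns
# (columns are assets, so rowvar=False).
cov = np.cov(returns, rowvar=False)

# Exposures vector: portfolio weights minus index weights.
port_w = np.array([0.30, 0.25, 0.20, 0.15, 0.10])
index_w = np.full(n_assets, 0.20)
exposures = port_w - index_w

# Multiply the variance-covariance matrix by the exposures vector and
# close the quadratic form to get tracking-error variance.
te_var = exposures @ cov @ exposures
te = np.sqrt(te_var)
```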

Furthermore, the general method presented in this chapter applies directly to solving the full Maxwell equations with currents. It can also be used to construct exact classical solutions of Yang-Mills equations with Higgs fields and their generalizations. More generally, the method developed in this chapter can be applied efficiently to any conformally invariant wave equation on whose solution set a covariant representation of the conformal algebra in Eq. (15) is realized. [Pg.349]

New techniques for data analysis abound in the statistical literature. GAM is a powerful technique, and a full historical account of GAM with ample references can be found in the research monograph of Hastie and Tibshirani (15). GAM is closer to a reparameterization of the model than to a reexpression of the response. Once an additive model is fitted to the data, its p coordinate functions can be plotted separately to examine the role of each predictor in modeling the response. With the GAM approach, the dependence of a parameter (P) on covariates (predictors) X1,..., Xp is modeled. Usually, the multiple linear regression (MLR) approach is the method of choice for this type of problem. The MLR model is expressed in the following form ... [Pg.388]
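The standard MLR form referred to, P = b0 + b1*X1 + ... + bp*Xp + error, can be fitted by ordinary least squares. The sketch below uses simulated data; the parameter values, covariates, and noise level are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical data: a parameter P observed on n subjects with p = 2
# covariates (predictors) X1, X2.
n, p = 50, 2
X = rng.normal(size=(n, p))
beta_true = np.array([1.5, -0.7])
P = 10.0 + X @ beta_true + rng.normal(0.0, 0.1, size=n)

# MLR: P = b0 + b1*X1 + b2*X2 + error, fitted by ordinary least squares
# on a design matrix with an intercept column.
design = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(design, P, rcond=None)
# coef[0] estimates the intercept; coef[1:] estimate the covariate effects.
```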

A new type of covariate screening method is to use partially linear mixed effects models (Bonate, 2005). Briefly, the time component in a structural model is modeled using a penalized spline basis function with knots, usually at equally spaced time intervals. Under this approach, the knots are treated as random effects, and linear mixed effects models can be used to find the optimal smoothing parameter. Further, covariates can be introduced into the model to improve the goodness of fit. The LRT between a full and a reduced model, with and without the covariate of interest, can be used to test for the inclusion of a covariate. The advantages of this method are that the exact structural model (i.e., a 1-compartment or 2-compartment model with absorption) does not have to be determined, and that it is fast and efficient at covariate identification. [Pg.236]
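A minimal sketch of the spline machinery described, assuming a truncated-linear basis with equally spaced knots; the basis choice, time grid, and knot positions are illustrative assumptions, since the text does not specify them.

```python
import numpy as np

def linear_spline_basis(t, knots):
    """Design matrix [1, t, (t-k1)+, ..., (t-kK)+] for a penalized
    linear spline with truncated-power basis functions."""
    cols = [np.ones_like(t), t]
    for k in knots:
        cols.append(np.clip(t - k, 0.0, None))  # the (t - k)+ term
    return np.column_stack(cols)

t = np.linspace(0.0, 24.0, 13)           # hypothetical sampling times (h)
knots = np.array([6.0, 12.0, 18.0])      # equally spaced knots
X = linear_spline_basis(t, knots)
# In the mixed-model formulation, the [1, t] columns are fixed effects and
# the (t - k)+ columns are random effects; the variance of those random
# effects plays the role of the smoothing parameter.
```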

Bies et al. (2003) compared the stepwise model building approach to the GA approach. Three different stepwise approaches were used: forward stepwise selection starting from a base model with no covariates; backwards elimination starting from a full model with all covariates; and forward addition to a full model followed by backwards elimination to retain only the most important covariates. Bies et al. found that the GA approach identified a model a full 30 points lower in objective function value than the other three methods, and that the GA algorithm identified important...
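To make the forward stepwise procedure concrete, here is a sketch of greedy forward addition driven by drops in the objective function value (OFV). The `fit_ofv` callable, the covariate names, and the OFV drops are all hypothetical; the 3.84-point cutoff is the standard chi-square (1 df, alpha = 0.05) threshold commonly used with the LRT, not a value from the text.

```python
def forward_stepwise(candidates, fit_ofv, cutoff=3.84):
    """Greedy forward addition: at each pass, add the covariate giving the
    largest OFV drop, provided the drop exceeds the LRT cutoff
    (3.84 = chi-square critical value, 1 df, alpha = 0.05)."""
    selected = []
    current = fit_ofv(selected)
    improved = True
    while improved:
        improved = False
        drops = {c: current - fit_ofv(selected + [c])
                 for c in candidates if c not in selected}
        if drops:
            best = max(drops, key=drops.get)
            if drops[best] > cutoff:
                selected.append(best)
                current -= drops[best]
                improved = True
    return selected

# Toy OFV surface: weight and age each lower the OFV meaningfully; sex
# does not. All numbers are invented.
true_drop = {"WT": 20.0, "AGE": 8.0, "SEX": 1.0}
ofv = lambda sel: 1000.0 - sum(true_drop[c] for c in sel)
print(forward_stepwise(["WT", "AGE", "SEX"], ofv))  # ['WT', 'AGE']
```

Backwards elimination is the mirror image: start from the full model and repeatedly remove the covariate whose deletion raises the OFV the least, until every remaining deletion exceeds the cutoff.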

In this account, we have presented both the basic theory of two-component methods in quantum chemistry and new developments achieved within the framework of the generalised DK transformation, which have not yet been described elsewhere. Two-component methods have several advantages over the traditional four-component formulation, which is the well-established covariant description of relativistic quantum mechanics. On the one hand, the computational requirements are significantly reduced by the transition to a two-component formulation. On the other hand, while the full four-component machinery is primarily necessary to describe both electronic and positronic degrees of freedom, only the electronic degrees of freedom need to be treated explicitly for chemical purposes. It therefore appears justified to claim that two-component formulations are the natural description for relativistic quantum chemistry. [Pg.659]

Solution of the system of equations. The system of Eq. (3), whose equations combine numerical values, theoretical expressions, and covariances, can be solved for the adjusted variables Z; best estimates of their values can thus be calculated. The method used in [2,3] consists in using a sequence of linear approximations to system (3), around a numerical vector Z that converges toward the solution of the full, non-linear system (this is akin to Newton's method; see, e.g., [23]). Each of the successive linear approximations to system (3) is solved through the Moore-Penrose pseudo-inverse [20] (see also Ref. [2, App. E]). The numerical solution for Z as found in CODATA 2002 can be found on the web. These values are such that the equations in system (3) are satisfied, as a whole, as well as possible [3, App. E]. [Pg.264]
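A minimal sketch of the iteration described: successive linearizations of F(Z) = 0, each solved with the Moore-Penrose pseudo-inverse. The toy system here is linear and over-determined purely for illustration; the actual CODATA system (3) is non-linear and far larger.

```python
import numpy as np

def solve_adjustment(residual, jacobian, z0, tol=1e-12, max_iter=50):
    """Iterate Z <- Z - pinv(J(Z)) F(Z): each step solves the current
    linear approximation to F(Z) = 0 in the least-squares sense via the
    Moore-Penrose pseudo-inverse (akin to Newton's method)."""
    z = np.asarray(z0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.pinv(jacobian(z)) @ residual(z)
        z = z - step
        if np.linalg.norm(step) < tol:
            break
    return z

# Toy over-determined system: three consistent equations in two
# adjusted variables, with exact solution x = 2, y = 1.
def residual(z):
    x, y = z
    return np.array([x + y - 3.0, x - y - 1.0, 2.0 * x + y - 5.0])

def jacobian(z):
    return np.array([[1.0, 1.0], [1.0, -1.0], [2.0, 1.0]])

z = solve_adjustment(residual, jacobian, [0.0, 0.0])
print(np.round(z, 6))  # [2. 1.]
```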

As discussed in Sect. 5.6.2, a full evaluation of the input uncertainties to a model should, where relevant, provide information on the correlations between input parameters. This can be represented through the joint probability distribution of the parameters or through a covariance matrix Sp such as that shown in Eq. (5.68). The joint probability distribution of model parameters can be determined from experimental data using the Bayes method (Berger 1985). Kraft et al. (Smallbone et al. 2010; Mosbach et al. 2014), Braman et al. (2013) and Miki et al. (Panes et al. 2012; Miki et al. 2013) have calculated the pdf of rate parameters from experimental data. The covariance matrix of the rate parameters was calculated from the back propagation of experimental errors to the uncertainty of parameters by Sheen et al. (Sheen et al. 2009, 2013; Sheen and Wang 2011a, b) and by [Turanyi... [Pg.123]
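One common way to back-propagate experimental errors to a parameter covariance matrix is the generalized-least-squares formula Sp = (J^T Se^-1 J)^-1, where J holds the sensitivities of the measured targets to the parameters and Se is the covariance of the experimental errors. Whether this matches the cited papers' exact procedure is an assumption, and all numbers below are invented.

```python
import numpy as np

# Hypothetical sensitivity matrix J: d(model outputs) / d(rate parameters),
# with 3 experimental targets and 2 rate parameters.
J = np.array([[1.0, 0.2],
              [0.5, 1.0],
              [0.3, 0.8]])

# Diagonal covariance of the experimental errors (squared 1-sigma values).
sigma_e = np.diag([0.1**2, 0.2**2, 0.15**2])

# Back-propagate the experimental uncertainty to the parameters:
# Sigma_p = (J^T Sigma_e^-1 J)^-1.
W = np.linalg.inv(sigma_e)
sigma_p = np.linalg.inv(J.T @ W @ J)

# The off-diagonal element of Sigma_p carries the parameter correlation
# that Eq. (5.68)-style covariance matrices are meant to capture.
corr = sigma_p[0, 1] / np.sqrt(sigma_p[0, 0] * sigma_p[1, 1])
```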


See other pages where Full covariance method is mentioned: [Pg.368]    [Pg.118]    [Pg.6432]    [Pg.311]    [Pg.218]    [Pg.221]    [Pg.176]    [Pg.269]    [Pg.274]    [Pg.293]    [Pg.77]    [Pg.6431]    [Pg.117]    [Pg.336]    [Pg.9]    [Pg.79]    [Pg.576]    [Pg.1]   
See also in source #XX -- [ Pg.782 ]







Covariance

Covariance method

Covariant

Covariates

Covariation

© 2024 chempedia.info