Big Chemical Encyclopedia

Variance-covariance model

The variance-covariance model approach extracts volatility information from historical returns and builds a model intended to predict divergence in performance. For this type of model, the underlying data are typically a time series of yields, spreads or returns. The model relies heavily on historical data and assumes both stable correlations and a normal distribution of returns. [Pg.781]
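
A minimal sketch of how such a model is applied, assuming normally distributed returns with stable correlations; the return history, portfolio weights, and confidence level below are hypothetical (numpy is used throughout these sketches).

```python
import numpy as np

# Hypothetical history: daily returns for three bonds (rows = days).
rng = np.random.default_rng(0)
returns = rng.normal(0.0, 0.002, size=(500, 3))

weights = np.array([0.5, 0.3, 0.2])      # hypothetical portfolio weights
cov = np.cov(returns, rowvar=False)      # sample variance-covariance matrix

# Portfolio standard deviation from w' * Sigma * w.
port_sd = np.sqrt(weights @ cov @ weights)

# One-day 95% VaR under the normality assumption (z = 1.645).
z_95 = 1.645
var_95 = z_95 * port_sd
print(f"one-day 95% VaR as a fraction of portfolio value: {var_95:.5f}")
```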

Another form of the variance-covariance model incorporates a multifactor approach. Instead of looking at covariances between individual bonds, the multifactor model aggregates the information into common factors. These so-called principal factors can be obtained either by regression... [Pg.783]

While this also uses a variance-covariance matrix much like the full covariance method, the actual matrix is much more condensed. As an example, the matrix used in a 20-factor model would have a size of (20 × 20) = 400 cells, which is moderate compared with the one-million-cell matrix mentioned previously for the full variance-covariance model. The advantages of using a multifactor model are that it easily allows a new issue to be mapped onto past data for similar bonds by looking at its descriptive characteristics, and it can be inverted for use in a portfolio optimizer without too much effort. The multifactor model is also more tolerant of pricing errors in individual securities, since prices are averaged within each factor bucket. [Pg.784]
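
The condensation can be illustrated with a principal-component decomposition: the full n × n return covariance matrix is replaced by a small k × k factor covariance matrix plus a loading matrix. The simulated returns, the number of bonds, and the choice of k = 5 factors are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n_bonds, n_obs, k = 50, 1000, 5        # illustrative sizes

# Simulate returns driven by a few common factors plus idiosyncratic noise.
loadings = rng.normal(size=(n_bonds, k))
factors = rng.normal(scale=0.01, size=(n_obs, k))
returns = factors @ loadings.T + rng.normal(scale=0.002, size=(n_obs, n_bonds))

# Principal factors from the eigendecomposition of the full covariance matrix.
full_cov = np.cov(returns, rowvar=False)            # n_bonds x n_bonds
eigval, eigvec = np.linalg.eigh(full_cov)
order = np.argsort(eigval)[::-1][:k]                # keep the k leading factors
B = eigvec[:, order]                                # factor loadings
factor_cov = np.diag(eigval[order])                 # condensed k x k matrix

# Approximate the full matrix from the condensed representation.
approx_cov = B @ factor_cov @ B.T
print("condensed matrix shape:", factor_cov.shape)
print("relative reconstruction error:",
      np.linalg.norm(full_cov - approx_cov) / np.linalg.norm(full_cov))
```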

Unlike the variance-covariance model, it is suitable for options and other derivatives. [Pg.793]

The primary purpose for expressing experimental data through model equations is to obtain a representation that can be used confidently for systematic interpolations and extrapolations, especially to multicomponent systems. The confidence placed in the calculations depends on the confidence placed in the data and in the model. Therefore, the method of parameter estimation should also provide measures of reliability for the calculated results. This reliability depends on the uncertainties in the parameters, which, with the statistical method of data reduction used here, are estimated from the parameter variance-covariance matrix. This matrix is obtained as a last step in the iterative calculation of the parameters. [Pg.102]
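
A sketch of that last step for an iterative (Gauss-Newton style) fit: at convergence the parameter variance-covariance matrix is approximated as V ≈ s²(JᵀJ)⁻¹ from the Jacobian J of the model. The exponential model, data, and starting values are invented for illustration.

```python
import numpy as np

# Hypothetical data following y = a * exp(b * x) + noise.
x = np.linspace(0.0, 2.0, 15)
rng = np.random.default_rng(2)
y = 2.0 * np.exp(-1.3 * x) + rng.normal(scale=0.02, size=x.size)

def model(p, x):
    a, b = p
    return a * np.exp(b * x)

def jacobian(p, x):
    a, b = p
    return np.column_stack([np.exp(b * x), a * x * np.exp(b * x)])

p = np.array([1.5, -1.0])                  # starting estimates
for _ in range(50):                        # Gauss-Newton iterations
    r = y - model(p, x)
    J = jacobian(p, x)
    p = p + np.linalg.solve(J.T @ J, J.T @ r)

# Last step: estimated parameter variance-covariance matrix.
r = y - model(p, x)
J = jacobian(p, x)
s2 = (r @ r) / (x.size - p.size)           # residual variance
cov_p = s2 * np.linalg.inv(J.T @ J)
print("parameters:", p)
print("standard uncertainties:", np.sqrt(np.diag(cov_p)))
```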

Equations (41.15) and (41.19) for the extrapolation and update of system states form the so-called state-space model. The solution of the state-space model was derived by Kalman and is known as the Kalman filter. The assumptions are that the measurement noise v(j) and the system noise w(j) are random and independent, normally distributed, white, and uncorrelated. This leads to the general formulation of a Kalman filter given in Table 41.10. Equations (41.15) and (41.19) account for the time dependence of the system. Eq. (41.15) is the system equation, which tells us how the system behaves in time (here in j units). Equation (41.16) expresses how the uncertainty in the system state would grow as a function of time (here in j units) if no observations were made. Q(j − 1) is the variance-covariance matrix of the system noise, which contains the variance of w. [Pg.595]
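
The predict/update cycle this describes can be sketched for a simple scalar state; the system matrix, the noise variances Q and R, and the simulated measurements are hypothetical, but the role of Q in growing the state uncertainty between observations is the one described above.

```python
import numpy as np

# Hypothetical scalar state-space model: x(j) = F x(j-1) + w, z(j) = H x(j) + v.
F, H = 1.0, 1.0
Q, R = 0.01, 0.25          # variances of system noise w and measurement noise v

x_est, P = 0.0, 1.0        # initial state estimate and its variance
rng = np.random.default_rng(3)
true_x = 1.0
for j in range(20):
    z = H * true_x + rng.normal(scale=np.sqrt(R))    # noisy observation

    # Extrapolation (system equation): uncertainty grows by Q.
    x_pred = F * x_est
    P_pred = F * P * F + Q

    # Update with the new observation.
    K = P_pred * H / (H * P_pred * H + R)            # Kalman gain
    x_est = x_pred + K * (z - H * x_pred)
    P = (1.0 - K * H) * P_pred

print("final state estimate:", round(x_est, 3), "variance:", round(P, 4))
```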

Thus, for a single-parameter model such as yᵢ = β₀ + rᵢ, the estimated variance-covariance matrix contains no covariance elements; the square root of the single variance element corresponds to the standard uncertainty of the single parameter estimate. [Pg.119]
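
For this single-parameter model the design matrix is just a column of ones, so the estimated variance-covariance matrix s²(XᵀX)⁻¹ is 1 × 1 and its square root is the standard uncertainty of the estimate; the replicate data in the sketch below are made up.

```python
import numpy as np

y = np.array([5.1, 4.8, 5.3, 5.0, 4.9])          # hypothetical replicate responses
X = np.ones((y.size, 1))                          # design matrix for y = b0 + r

b = np.linalg.solve(X.T @ X, X.T @ y)             # least-squares estimate (the mean)
r = y - X @ b
s2 = (r @ r) / (y.size - 1)                       # residual variance
V = s2 * np.linalg.inv(X.T @ X)                   # 1 x 1 variance-covariance matrix

print("estimate:", b[0])
print("standard uncertainty:", np.sqrt(V[0, 0]))  # equals s / sqrt(n)
```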

In this chapter, we will examine the variance-covariance matrix to see how the location of experiments in factor space (i.e., the experimental design) affects the individual variances and covariances of the parameter estimates. Throughout this section we will be dealing with the specific two-parameter first-order model yᵢ = β₀ + β₁x₁ᵢ + rᵢ only; the resulting principles are entirely general, however, and can be... [Pg.119]

The effect on the variance-covariance matrix of two experiments located at different positions in factor space can be investigated by locating one experiment at x₁ = 1 and varying the location of the second experiment. The first row of the matrix of parameter coefficients for the model yᵢ = β₀ + β₁x₁ᵢ + rᵢ can be made to... [Pg.120]
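
A numerical sketch of this investigation (with the response variance taken as unity so that (XᵀX)⁻¹ can be compared directly): the first experiment is fixed at x₁ = 1 and the second is moved to a few hypothetical locations.

```python
import numpy as np

x1 = 1.0                                 # first experiment fixed at x = 1
for x2 in [1.5, 2.0, 5.0, 10.0]:         # candidate locations of the second experiment
    X = np.array([[1.0, x1],
                  [1.0, x2]])            # model y = b0 + b1*x + r
    XtX_inv = np.linalg.inv(X.T @ X)     # proportional to var-cov of (b0, b1)
    print(f"x2 = {x2:5.1f}  var(b0) ~ {XtX_inv[0, 0]:8.3f}  "
          f"var(b1) ~ {XtX_inv[1, 1]:8.3f}  cov ~ {XtX_inv[0, 1]:8.3f}")
```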

Consideration of the effect of experimental design on the elements of the variance-covariance matrix leads naturally to the area of optimal design [Box, Hunter, and Hunter (1978), Evans (1979), and Wolters and Kateman (1990)]. Let us suppose that our purpose in carrying out two experiments is to obtain good estimates of the intercept and slope for the model yᵢ = β₀ + β₁x₁ᵢ + rᵢ. We might want to know what levels of the factor x₁ we should use to obtain the most precise estimates of β₀ and β₁. [Pg.126]
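
One common criterion is to choose the levels that maximize |XᵀX|, which minimizes the generalized variance of the parameter estimates; the sketch below searches a hypothetical allowed range of factor levels for the best pair of two experiments.

```python
import numpy as np
from itertools import combinations

levels = np.linspace(-1.0, 1.0, 21)      # hypothetical allowed factor range

def det_xtx(x_a, x_b):
    X = np.array([[1.0, x_a], [1.0, x_b]])
    return np.linalg.det(X.T @ X)        # equals (x_b - x_a)**2 for this model

best = max(combinations(levels, 2), key=lambda pair: det_xtx(*pair))
print("levels maximizing |X'X|:", best[0], best[1])   # the two extremes of the range
```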

Generalized Covariance Models. When Z(x) is an intrinsic random function of order k, an alternative to the semi-variogram is the generalized covariance (GC) function of order k. Like the semi-variogram model, the GC model must be a conditionally positive definite function so that the variance of any linear functional of Z(x) is greater than or equal to zero. The family of polynomial GC functions satisfies this requirement. The polynomial GC of order k is... [Pg.216]
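
To illustrate the non-negativity requirement, the sketch below assumes an order-1 polynomial GC of the commonly quoted form K(h) = C₀δ(h) − a₁|h| + a₃|h|³ (the exact coefficient constraints are not reproduced here) and evaluates the variance of an allowable linear combination, one whose weights annihilate polynomials up to order k = 1. The coefficients, sample locations, and weights are all hypothetical.

```python
import numpy as np

def gc_order1(h, c0=0.1, a1=0.5, a3=0.02):
    """Assumed order-1 polynomial generalized covariance K(h)."""
    h = np.abs(h)
    return np.where(h == 0.0, c0, 0.0) - a1 * h + a3 * h ** 3

# Sample locations and weights of an allowable linear combination for k = 1:
# the weights sum to zero and also annihilate the linear term (sum of w*x = 0).
x = np.array([0.0, 1.0, 2.0])
w = np.array([1.0, -2.0, 1.0])
assert abs(w.sum()) < 1e-12 and abs((w * x).sum()) < 1e-12

K = gc_order1(x[:, None] - x[None, :])          # matrix of K(x_i - x_j)
variance = w @ K @ w
print("variance of the increment:", variance)   # must be >= 0 for a valid GC
```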

Figure 12.18 shows a sums-of-squares and degrees-of-freedom tree for the data of Table 12.4 and the model of Equation 12.32. The significance of the parameter estimates may be obtained from Equation 10.66 using sᵣ² and (XᵀX)⁻¹ to obtain the variance-covariance matrix. The (XᵀX)⁻¹ matrix for the present example is... [Pg.244]
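
In general form, that calculation amounts to taking the diagonal of sᵣ²(XᵀX)⁻¹ and comparing each parameter estimate with its standard uncertainty; the two-factor data and model in the sketch below are invented and are not those of Table 12.4.

```python
import numpy as np

# Hypothetical two-factor data and the model y = b0 + b1*x1 + b2*x2 + r.
x1 = np.array([-1.0, 1.0, -1.0, 1.0, 0.0, 0.0])
x2 = np.array([-1.0, -1.0, 1.0, 1.0, 0.0, 0.0])
y = np.array([3.1, 5.2, 4.0, 6.3, 4.6, 4.7])

X = np.column_stack([np.ones_like(x1), x1, x2])
b = np.linalg.solve(X.T @ X, X.T @ y)
r = y - X @ b
s2_r = (r @ r) / (y.size - X.shape[1])            # residual variance, sr^2
V = s2_r * np.linalg.inv(X.T @ X)                 # variance-covariance matrix

t = b / np.sqrt(np.diag(V))                       # ratio used to judge significance
for name, est, tval in zip(["b0", "b1", "b2"], b, t):
    print(f"{name}: estimate = {est:6.3f}, t = {tval:6.2f}")
```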

It first introduces the reader to the fundamentals of experimental design. Systems theory, response surface concepts, and basic statistics serve as a basis for the further development of matrix least squares and hypothesis testing. The effects of different experimental designs and different models on the variance-covariance matrix and on the analysis of variance (ANOVA) are extensively discussed. Applications and advanced topics such as confidence bands, rotatability, and confounding complete the text. Numerous worked examples are presented. [Pg.214]

Response Surfaces. 3. Basic Statistics. 4. One Experiment. 5. Two Experiments. 6. Hypothesis Testing. 7. The Variance-Covariance Matrix. 8. Three Experiments. 9. Analysis of Variance (ANOVA) for Linear Models. 10. A Ten-Experiment Example. 11. Approximating a Region of a Multifactor Response Surface. 12. Additional Multifactor Concepts and Experimental Designs. Appendices: Matrix Algebra. Critical Values of t. Critical Values of F, α = 0.05. Index. [Pg.214]

If the original model is sufficiently perfect, the linearization of the problem adequate, the measurements unbiased (no systematic error), and the covariance matrix of the observations, Θy, a true representation of the experimental errors and their correlations, then σ² (Eq. 21c) should be near unity [34]. If Θy is indeed an honest assessment of the experimental errors, but σ² is nonetheless (much) larger than unity, model deficiencies are the most frequent source of this discrepancy. Relevant variables probably exist that have not been included in the model, and the experimental precision is hence better than can be utilized by the available model. Model errors have then been treated as if they were experimental random errors, and the results must be interpreted with great caution. In this often unavoidable case, it would clearly be meaningless to distinguish between a measurement with a small experimental error (below the useful limit of precision) and another measurement with an even smaller error (see ref. [41]). A deliberate modification of the variance-covariance matrix Θy towards larger and more equal variances might then be indicated, which results in a more equally weighted and less correlated matrix. [Pg.75]
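
A compact sketch of this diagnostic for a weighted straight-line fit: compute σ² from the weighted residuals and, if it is much larger than unity, inflate the parameter variance-covariance matrix accordingly so that model error is treated like (additional) random error. The data, the claimed observation errors, and the model are all invented.

```python
import numpy as np

# Hypothetical observations with (optimistically) stated standard errors.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.05, 1.30, 1.85, 3.40, 3.75, 5.35])
sigma_y = np.full_like(y, 0.05)              # claimed errors (too small here)

X = np.column_stack([np.ones_like(x), x])    # straight-line model y = b0 + b1*x
W = np.diag(1.0 / sigma_y**2)                # weights from the claimed covariance

b = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
r = y - X @ b
dof = y.size - X.shape[1]
sigma2 = (r @ W @ r) / dof                   # should be near unity for honest errors
print("goodness-of-fit variance:", round(sigma2, 2))

cov_b = np.linalg.inv(X.T @ W @ X)
if sigma2 > 1.0:
    cov_b = sigma2 * cov_b                   # inflate: treat model error like random error
print("parameter standard uncertainties:", np.sqrt(np.diag(cov_b)))
```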

The first attempt at estimating interindividual pharmacokinetic variability without neglecting the difficulties (data imbalance, sparse data, subject-specific dosing history, etc.) associated with data from patients undergoing drug therapy was made by Sheiner et al. using the Non-linear Mixed-effects Model Approach. The vector θ of population characteristics is composed of all quantities of the first two moments of the distribution of the parameters: the mean values (fixed effects) and the elements of the variance-covariance matrix that characterize the random effects. [Pg.2951]
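
A minimal illustration of what θ contains for a hypothetical two-parameter pharmacokinetic example (clearance CL and volume V): the population means (fixed effects) and the distinct elements of the variance-covariance matrix Ω of the random effects. The numbers are invented and no mixed-effects estimation is actually performed.

```python
import numpy as np

# Hypothetical population characteristics for the parameters (CL, V).
fixed_effects = np.array([5.0, 40.0])            # population mean CL (L/h) and V (L)
omega = np.array([[0.09, 0.02],                  # variance-covariance matrix of the
                  [0.02, 0.04]])                 # random inter-individual effects

# The vector theta of population characteristics: means plus the distinct
# elements of the (symmetric) variance-covariance matrix.
theta = np.concatenate([fixed_effects, omega[np.triu_indices(2)]])
print("theta =", theta)

# Simulate individual parameters to show what these two moments describe.
rng = np.random.default_rng(4)
eta = rng.multivariate_normal(np.zeros(2), omega, size=1000)   # random effects
individual = fixed_effects * np.exp(eta)                        # lognormal model
print("simulated mean CL, V:", individual.mean(axis=0).round(2))
```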



© 2024 chempedia.info