Big Chemical Encyclopedia

Estimated covariance between parameter estimates

Classic parameter estimation techniques involve using experimental data to estimate all parameters at once. This allows an estimate of central tendency and a confidence interval for each parameter, but it also allows determination of a matrix of covariances between parameters. To determine parameters and confidence intervals at some level, the requirements for data increase more than proportionally with the number of parameters in the model. Above some number of parameters, simultaneous estimation becomes impractical, and the experiments required to generate the data become impossible or unethical. For models at this level of complexity parameters and covariances can be estimated for each subsection of the model. This assumes that the covariance between parameters in different subsections is zero. This is unsatisfactory to some practitioners, and this (and the complexity of such models and the difficulty and cost of building them) has been a criticism of highly parameterized PBPK and PBPD models. An alternate view assumes that decisions will be made that should be informed by as much information about the system as possible, that the assumption of zero covariance between parameters in differ-... [Pg.543]
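As a concrete illustration of simultaneous estimation, the sketch below (a hypothetical two-parameter exponential model fitted with scipy, not taken from the source) shows how a single fit returns both the parameter estimates and the full matrix of covariances between them.

```python
# A minimal sketch (not from the source) of simultaneous estimation of all
# parameters of a simple two-parameter model, illustrating how the fit also
# yields a full covariance matrix between the parameter estimates.
import numpy as np
from scipy.optimize import curve_fit

def model(t, a, k):
    """Hypothetical mono-exponential model y = a * exp(-k * t)."""
    return a * np.exp(-k * t)

# Simulated experimental data (purely illustrative).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 25)
y = model(t, 5.0, 0.3) + rng.normal(scale=0.2, size=t.size)

# curve_fit returns the point estimates and the estimated covariance matrix.
popt, pcov = curve_fit(model, t, y, p0=[1.0, 0.1])

# Diagonal: variances of each estimate; off-diagonal: covariance between them.
se = np.sqrt(np.diag(pcov))
print("estimates:", popt)
print("standard errors:", se)
print("covariance between a and k:", pcov[0, 1])
```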

When it comes to the covariance structure, however, problems become acute. Total inversion requires that a joint probability distribution be known for the observations and the parameters. This is usually not a problem for the observations. The covariance structure among the parameters of the model is more obscure: how do we estimate the a priori correlation coefficient between age and initial Sr ratio in our isochron example without seriously infringing on the objectivity of the error assessment? When the a priori covariance structure between the observations and the model parameters is estimated, the chances that we actually resort to unsupported and unjustified speculation become immense. Total inversion must be well understood in order for it not to end up as a formal exercise of consistency between a priori and a posteriori estimates. [Pg.310]

The value of the proportionality constant β can be determined experimentally. In the absence of any other experimental data, an acceptable model would be based on any combination of the adjustable parameters C, K, and Ns that yields the correct value of β, according to Equation 29. Since three adjustable parameters are available to define the value of one experimentally observable quantity, covariability among these parameters is expected. In reality an independent estimate of Ns might be available, and curvature of the σ0 vs. log a+ plot might reduce some of the covariability, but Equation 29 provides an initial step in understanding the relationship between covarying adjustable parameters. [Pg.72]

Figure 7.4 plots the value of either of the off-diagonal elements (they are theoretically and numerically identical) of the (X′X)⁻¹ matrix as a function of the location of the second experiment. As stated previously, the off-diagonal elements of the (X′X)⁻¹ matrix are associated with the covariance of the parameter estimates; for this model, the off-diagonals of the matrix represent the covariance between ... [Pg.124]
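The sketch below is a hedged illustration of this idea (it does not reproduce Figure 7.4): for a straight-line model with one experiment fixed at x = 0, it evaluates the off-diagonal element of (X′X)⁻¹ as the location of the second experiment is varied.

```python
# A hedged sketch (not the source's Figure 7.4): for a straight-line model
# y = b0 + b1*x with one experiment fixed at x = 0, compute the off-diagonal
# element of (X'X)^-1 as the location x2 of a second experiment is varied.
import numpy as np

def offdiag_xtx_inv(x2, x1=0.0):
    """Off-diagonal element of (X'X)^-1 for the two-point design {x1, x2}."""
    X = np.array([[1.0, x1],
                  [1.0, x2]])
    xtx_inv = np.linalg.inv(X.T @ X)
    return xtx_inv[0, 1]  # proportional to cov(b0, b1)

for x2 in [0.5, 1.0, 2.0, 5.0]:
    print(x2, offdiag_xtx_inv(x2))
```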

An alternative to treating time as a categorical variable is to treat time as a continuous variable and to model it using a low-order polynomial, thereby reducing the number of estimable parameters in the model. In this case, rather than modeling the within-subject covariance, the between-subject covariance is manipulated. Subjects are treated as random effects, as are the model parameters associated with time. The within-subject covariance matrix was treated as a simple covariance structure. In this example, time was modeled as a quadratic polynomial. Also included in the model were the interactions associated with the quadratic term for time. [Pg.199]
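A minimal sketch of this kind of model is given below, using simulated data and the statsmodels mixed-effects API; the column names, effect sizes, and number of subjects are assumptions, not the source's study.

```python
# A minimal sketch (hypothetical data, not the source's analysis): time is
# treated as a continuous quadratic term, subjects contribute random effects
# on the intercept and the time terms, the residual (within-subject)
# covariance is kept simple (iid), and treatment-by-time interactions are
# included among the fixed effects.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_subj, n_times = 30, 5
subj = np.repeat(np.arange(n_subj), n_times)
time = np.tile(np.arange(n_times, dtype=float), n_subj)
treat = np.repeat(rng.integers(0, 2, n_subj), n_times)

# Subject-specific random intercept, slope, and curvature (illustrative).
b0 = rng.normal(scale=1.0, size=n_subj)
b1 = rng.normal(scale=0.3, size=n_subj)
b2 = rng.normal(scale=0.05, size=n_subj)
y = (10 + b0[subj] + (0.8 + b1[subj]) * time + (-0.05 + b2[subj]) * time**2
     + 0.5 * treat * time + rng.normal(scale=0.5, size=subj.size))

df = pd.DataFrame({"y": y, "time": time, "treat": treat, "subject": subj})

# Fixed effects include treatment interactions with the linear and quadratic
# time terms; random effects are placed on the intercept and the time terms.
model = smf.mixedlm("y ~ treat * (time + I(time**2))", data=df,
                    groups=df["subject"], re_formula="~time + I(time**2)")
result = model.fit()
print(result.summary())
```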

In their study, Ribbing and Jonsson found that when only one covariate was in the model, 80% power was achieved with (i) 20 subjects having three samples per subject and high correlation (0.85) between covariate and pharmacokinetic parameter, (ii) 100 subjects having three samples per subject and medium correlation (0.50) between covariate and pharmacokinetic parameter, and (iii) 300 subjects having three samples per subject and low correlation (0.15) between covariate and pharmacokinetic parameter. They also found that selection bias increased when the number of subjects decreased. In other words, the estimated value of the parameter relating the covariate to the pharmacokinetic parameter of interest was overestimated as the... [Pg.237]

We should be aware that estimating the confidence intervals in the given way is only valid if the parameters are independent of each other; all elements in the off-diagonals of the covariance matrix in Eq. (6.23) need to be zero. In the case of two parameters, the confidence intervals then describe a square (cf. figure in the margin). If dependences exist between the parameters, an ellipse is obtained for the joint confidence region of the parameters. The larger the off-diagonal elements in Eq. (6.23), the more pronounced the expected deviation from the square shape of the confidence region. [Pg.224]
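The sketch below illustrates the point with made-up numbers (it does not use the actual covariance matrix of Eq. (6.23)): the marginal intervals define a rectangle, while the eigendecomposition of the 2 × 2 covariance matrix gives the axes and tilt of the joint confidence ellipse produced by a nonzero off-diagonal element.

```python
# A hedged sketch (illustrative numbers, not the source's Eq. 6.23): the joint
# confidence region of two parameter estimates follows from their 2x2
# covariance matrix; a nonzero off-diagonal element tilts and elongates it
# relative to the rectangle implied by the individual intervals.
import numpy as np

cov = np.array([[0.04, 0.03],   # var(b1), cov(b1, b2)
                [0.03, 0.09]])  # cov(b1, b2), var(b2)

# Individual (marginal) confidence half-widths at ~95% for a normal approx.
z = 1.96
half_widths = z * np.sqrt(np.diag(cov))
print("marginal half-widths:", half_widths)

# Joint region: eigendecomposition gives the ellipse axes and orientation.
eigvals, eigvecs = np.linalg.eigh(cov)
chi2_95_2df = 5.991  # chi-square quantile, 2 degrees of freedom, 95%
axis_lengths = np.sqrt(chi2_95_2df * eigvals)
angle = np.degrees(np.arctan2(eigvecs[1, -1], eigvecs[0, -1]))
print("ellipse semi-axes:", axis_lengths, "orientation (deg):", angle)
```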

The covariances between the parameters are the off-diagonal elements of the covariance matrix. The covariance indicates how closely two parameters are correlated: a large value for the covariance between two parameter estimates indicates a very close correlation. Practically, this means that it may not be possible to estimate these two parameters separately. This is shown more clearly through the correlation matrix. The correlation matrix, R, is obtained by transforming the covariance matrix as follows... [Pg.377]
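The transformation itself is not reproduced in this excerpt; the sketch below applies the standard scaling R_ij = cov_ij / sqrt(cov_ii · cov_jj), which is presumably the form the source's equation takes, to an illustrative 2 × 2 covariance matrix.

```python
# A short sketch of the standard covariance-to-correlation transformation,
# R_ij = cov_ij / sqrt(cov_ii * cov_jj); assumed to match the omitted equation.
import numpy as np

def correlation_matrix(cov):
    """Scale a covariance matrix to the corresponding correlation matrix."""
    d = np.sqrt(np.diag(cov))
    return cov / np.outer(d, d)

cov = np.array([[0.04, 0.03],
                [0.03, 0.09]])
print(correlation_matrix(cov))  # off-diagonal = 0.5: moderately correlated
```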

Let us review what we did with the depression example so far. First, we conjectured a taxon and three indicators. Next, we selected one of these indicators (anhedonia) as the input variable and the two other indicators (sadness and suicidality) as the output variables. Input and output are labels that refer to the role of an indicator in a given subanalysis. We cut the input indicator into intervals, hence the word "Cut" in the name of the method (Coherent Cut Kinetics), and we looked at the relationship between the output indicators. Specifically, we calculated covariances of the output indicators in each interval, hence the word "Kinetics": we moved the calculations from interval to interval. Suppose that after all that was completed, we find a clear peak in the covariance of sadness and suicidality, which allows us to estimate the position of the hitmax and the taxon base rate. What next? Now we need to get multiple estimates of these parameters. To achieve this, we change the... [Pg.42]
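The sketch below is a rough, hedged rendering of that interval-by-interval covariance calculation on simulated indicators (the sample sizes, distributions, and cut points are all assumptions): a peak in the within-interval covariance of the two output indicators marks the candidate hitmax region.

```python
# A hedged sketch (simulated indicators, not the source's data) of the
# interval-by-interval covariance calculation described above: cut the input
# indicator into intervals and compute the covariance of the two output
# indicators within each interval; a peak suggests the hitmax region.
import numpy as np

rng = np.random.default_rng(2)
n_taxon, n_complement = 300, 700

def simulate(n, shift):
    """Three illustrative indicators (anhedonia, sadness, suicidality)."""
    return rng.normal(loc=shift, scale=1.0, size=(n, 3))

data = np.vstack([simulate(n_taxon, 2.0), simulate(n_complement, 0.0)])
anhedonia, sadness, suicidality = data[:, 0], data[:, 1], data[:, 2]

# Cut the input indicator into intervals and move the covariance calculation
# from interval to interval ("coherent cut kinetics").
edges = np.quantile(anhedonia, np.linspace(0, 1, 11))
for lo, hi in zip(edges[:-1], edges[1:]):
    in_interval = (anhedonia >= lo) & (anhedonia < hi)
    if in_interval.sum() > 2:
        c = np.cov(sadness[in_interval], suicidality[in_interval])[0, 1]
        print(f"interval [{lo:.2f}, {hi:.2f}): cov = {c:.3f}")
```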

The errors in the fitting parameters may be obtained from the covariance matrix of the fit if it is available, but they are more commonly estimated by varying one parameter away from its optimal value, while optimizing all other parameters, until a defined increase in the statistical function is obtained. However, the statistical error values obtained do not represent the true accuracies of the parameters. In fact, it is difficult to determine coordination numbers to much better than 5%, and 20% is more realistic when the data are collected at room temperature, taking into account the strong coupling between the coordination number and the Debye-Waller terms; the error in the latter may be 30%. [Pg.378]
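A generic, hedged sketch of that error-scan procedure is shown below for a simple exponential model (the model, the data, and the threshold of +1 on the sum of squares are illustrative choices, not EXAFS-specific values): one parameter is stepped away from its optimum while the remaining parameter is re-optimized, until the fit statistic exceeds its minimum by the chosen amount.

```python
# A generic, hedged sketch of the error-scan procedure described above: step
# one parameter away from its best-fit value, re-optimize the others, and
# report the step at which the fit statistic exceeds its minimum by a chosen
# threshold (here delta = 1, an illustrative criterion).
import numpy as np
from scipy.optimize import minimize

def chisq(params, x, y):
    a, k = params
    return np.sum((y - a * np.exp(-k * x)) ** 2)

rng = np.random.default_rng(3)
x = np.linspace(0, 10, 40)
y = 3.0 * np.exp(-0.4 * x) + rng.normal(scale=0.05, size=x.size)

best = minimize(chisq, x0=[1.0, 0.1], args=(x, y))
chisq_min = best.fun
a_best, k_best = best.x

delta, step = 1.0, 0.001
k_fixed = k_best
while True:
    k_fixed += step
    # Re-optimize the remaining parameter (a) with k held fixed.
    profile = minimize(lambda p: chisq([p[0], k_fixed], x, y), x0=[a_best])
    if profile.fun > chisq_min + delta:
        break
print("approximate upper error on k:", k_fixed - k_best)
```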

Each of the upper-left to lower-right diagonal elements of V is an estimated variance of a parameter estimate, s_i²; these elements correspond to the parameters as they appear in the model from left to right. Each of the off-diagonal elements is an estimated covariance between two of the parameter estimates [Dunn and Clark (1987)]. [Pg.119]

Let s²_b0 be the estimated variance associated with the parameter estimate b0, let s²_b1 be the estimated variance associated with b1, and let s_b0b1 (or s_b1b0) represent the estimated covariance between b0 and b1. Then... [Pg.120]
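A numerical sketch of these quantities for a straight-line model y = b0 + b1x is given below (simulated data; the notation follows the paragraph above): the matrix V = s²(X′X)⁻¹ carries s²_b0 and s²_b1 on its diagonal and s_b0b1 off the diagonal.

```python
# A hedged numerical sketch of the quantities defined above for a straight-line
# model y = b0 + b1*x: the matrix V = s^2 (X'X)^-1 holds the estimated variance
# of b0 and b1 on its diagonal and their estimated covariance off the diagonal.
import numpy as np

rng = np.random.default_rng(4)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = 2.0 + 0.5 * x + rng.normal(scale=0.1, size=x.size)

X = np.column_stack([np.ones_like(x), x])
b, residuals, *_ = np.linalg.lstsq(X, y, rcond=None)
dof = len(y) - X.shape[1]
s2 = residuals[0] / dof                # estimate of the residual variance
V = s2 * np.linalg.inv(X.T @ X)        # variance-covariance matrix of (b0, b1)

s2_b0, s2_b1 = V[0, 0], V[1, 1]        # estimated variances of b0 and b1
s_b0b1 = V[0, 1]                       # estimated covariance between b0 and b1
print(b, s2_b0, s2_b1, s_b0b1)
```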

The uncertainty in the estimate of β1 is smaller (s_b1 = s × 5.04), and the uncertainty in the estimate of β11 is smaller still (s_b11 = s × 0.04). The geometric interpretation of the parameters β1 and β11 in this model is not straightforward, but β1 essentially moves the apex of the parabola away from x1 = 0, and β11 is a measure of the steepness of curvature. The geometric interpretation of the associated uncertainties in the parameter estimates is also not straightforward (for example, β0, β1, and β11 are expressed in different units). We will simply note that such uncertainties do exist, and note also that there is covariance between b0 and b1, between b0 and b11, and between b1 and b11. [Pg.145]


Sometimes the usefulness of such a choice can be estimated from the corresponding covariance matrix. This problem has been partially solved by Dimitrov for the AsB case. The solution has been employed in a computational algorithm for lineshape fitting. (75) However, the author has admitted that the problem of strong correlations between the parameters is not yet overcome. Some limitation on the choice of parameters to be fitted results from difficulties in the extrapolation of... [Pg.277]

Based on the (symmetric) variance-covariance matrix of the parameter estimates V(b), one can determine confidence limits of the parameter estimates. The diagonal elements v_ii contain the parameter estimate variances, and the off-diagonal elements the covariances between the parameter estimates. The interval of parameter values that are statistically not significantly different from the estimated value b_i at a selected probability level (1 − α) is defined by [Pg.315]
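The equation itself is not reproduced in this excerpt; the standard form of such an interval, assumed here to match the source, uses the Student t quantile with the residual degrees of freedom ν:

```latex
% Assumed standard form of the (1 - \alpha) confidence interval for b_i;
% v_{ii} is the i-th diagonal element of V(b).
b_i \pm t_{1-\alpha/2,\;\nu}\,\sqrt{v_{ii}}
```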

In practical cases, it will probably be difficult to estimate the covariances within the Bw and even more difficult to estimate any correlations between the parameters θ and Bw. If the latter are neglected, the covariance matrix of the parameters determined by the fit, which also includes the errors due to the fixed parameters, will then be the sum of Eqs. 21b and 30 ... [Pg.77]

Another factor to be taken into account is the degree of overdetermination, that is, the ratio between the number of observations and the number of variable parameters in the least-squares problem. The number of observations depends on many factors, such as the X-ray wavelength, crystal quality and size, X-ray flux, temperature, and experimental details like counting time, crystal alignment and detector characteristics. The number of parameters is likewise not fixed solely by the size of the asymmetric unit and can be manipulated in many ways, for example by adding parameters to describe complicated modes of atomic displacement from the equilibrium positions. Estimated standard deviations on derived bond parameters are obtained from the least-squares covariance matrix as a measure of internal consistency. These quantities do not relate to the absolute values of bond lengths or angles, since no physical factors feature in their derivation. [Pg.190]

The dispersion matrix is not a diagonal matrix: there are correlations between the model parameters, so they are not estimated independently. Through a D-optimal design the parameters are estimated as independently as possible. This is the sacrifice that must be made when the number of experiments does not permit an orthogonal design. The covariances of the model parameters are rather small and the correlations are weak, and will, hopefully, not lead to erroneous conclusions as to the influence of the variables. The estimated model parameters are summarized in Table 7.4. [Pg.188]

Plotted are the individual estimates of clearance, the difference between the individual estimates of CL and the typical individual estimate of clearance, and the η for clearance. Without any covariates in the model it does not matter much which variable is used. With covariates, on the other hand, we should not use the individual estimates of the parameter. Equation (7.7) explains why. [Pg.201]

The individual parameter estimates in Figure 7.16 were obtained from a model similar to Eq. (7.7), and we can see that the individual estimates of clearance show a clear relation to creatinine clearance, while the other two measures of unexplained variability are reduced in comparison. To summarize, once the model includes covariates, we should not plot the individual estimates versus covariates but rather something like the η values, if the reason for creating the graph is to visualize potential relations between the unexplained variability and the covariates. [Pg.202]
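A small hedged sketch of that diagnostic is shown below; the arrays are placeholders for the post hoc η values and the covariate exported from a fitted model (the variable names and numbers are assumptions, not values from Figure 7.16).

```python
# A small hedged sketch of the diagnostic suggested above: once covariates are
# in the model, plot the individual eta values (not the individual parameter
# estimates) against a candidate covariate such as creatinine clearance.
# The arrays below are placeholders for values exported from a fitted model.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(5)
creatinine_clearance = rng.uniform(30, 130, size=50)   # mL/min, hypothetical
eta_cl = rng.normal(scale=0.2, size=50)                # post hoc eta for CL

plt.scatter(creatinine_clearance, eta_cl)
plt.axhline(0.0, linestyle="--")
plt.xlabel("Creatinine clearance (mL/min)")
plt.ylabel("eta(CL)")
plt.title("Remaining unexplained variability vs. covariate")
plt.show()
```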

The NPML approach provides an estimate of the whole probability distribution of the PK parameters on a nonparametric basis (67). The method relies on maximization of the likelihood of the set of observations of all individuals to estimate the distribution of the parameters. The basic conceptual framework is similar to that described for NONMEM above. The difference is that no specific model for the relationship between PK parameters and patient-specific covariates is specified. The individual parameters φi are assumed to be independent realizations of a given random variable Φ with probability distribution F... [Pg.278]

