Big Chemical Encyclopedia

Homoscedastic error

The several variants derived from items 1 to 4 are represented in the flow sheet given in Fig. 6.6. Common calibration by Gaussian (ordinary) least squares estimation (OLS) can only be applied if the measured values are independent, normally distributed, free from outliers and leverage points, and characterized by homoscedastic errors. Additionally, the error in the values of the analytical quantity x (measurand) must be negligible compared with the errors in the measured values y. [Pg.159]
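As a minimal illustration of these conditions, the following Python sketch fits a calibration line by ordinary least squares and inspects the residuals. The data and variable names are hypothetical, not taken from the cited source.

```python
# Minimal OLS calibration sketch (hypothetical data): fits y = a + b*x by
# ordinary least squares, which presumes independent, normally distributed,
# homoscedastic errors in y and negligible error in x.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0, 10.0])          # analyte amounts (measurand)
y = np.array([0.02, 1.05, 2.11, 3.98, 6.10, 7.95, 10.04])   # measured responses

b, a = np.polyfit(x, y, deg=1)       # slope, intercept
residuals = y - (a + b * x)

print(f"intercept a = {a:.3f}, slope b = {b:.3f}")
print("residuals:", np.round(residuals, 3))
```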

Figure 2 Example of graphical presentation of a % dissolved vs. time simulated data set obtained by using Eq. (2) (W0 = 100, b = 1, c = 3), assuming a specific sampling scheme (indicated in the text) and perturbing the data with homoscedastic error with a mean of 0 and SD = 4 (dotted line), and the corresponding fitted line obtained by fitting Eq. (2) to the specific data set (continuous line).
A suitable response variable is selected. This variable should be chosen such that it has a homoscedastic error and results in simple models. For reasons stated below, ... is chosen (see Section 6.2.10). [Pg.246]

In the previous sections it has been stipulated that there are several response variables which can be modeled. The success of the optimization procedure depends on the selection of the response variable(s). There are several criteria which can be used to select a response variable [12,17]. The response variable should have a homoscedastic error structure and should change continuously and smoothly. Both experimental data and chromatographic theory can be used to check these properties. [Pg.248]

From chromatographic theory [2] it is clear that the R value should result in simple models. For this reason it is preferred over the k or the Rj values. These latter response values can be calculated from predicted R values. It is more difficult to determine the error structure of the R value. It is believed, however, that logarithmic transformation of the k values should result in homoscedastic error structures [3]. [Pg.249]

If there is no theory available to determine a suitable transformation, statistical methods can be used instead. The Box-Cox transformation [18] is a common approach to determine whether a transformation of a response is needed. With the Box-Cox transformation the response, y, is raised to different powers λ (e.g. from −2 to 2), and the transformed response is fitted by a predefined (simple) model. Both an optimal value and a confidence interval for λ can be estimated. The transformation which results in the lowest value of the residual variance is the optimal one; it should give a combination of a homoscedastic error structure and suitability for the predefined model. When λ = 0 the transformation is logarithmic. [Pg.249]
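The maximum-likelihood estimate of λ, together with a confidence interval, can be obtained in Python; the sketch below uses scipy.stats.boxcox on a hypothetical, strictly positive response vector (the data and variable names are assumptions, not taken from the cited study).

```python
# Sketch of a Box-Cox transformation with scipy (hypothetical positive responses).
# scipy.stats.boxcox estimates lambda by maximum likelihood; with alpha it also
# returns a confidence interval. lambda = 0 corresponds to the log transformation.
import numpy as np
from scipy import stats

y = np.array([1.2, 2.5, 3.1, 6.8, 9.5, 14.2, 22.0, 35.5])   # must be strictly positive

y_transformed, lam, ci = stats.boxcox(y, alpha=0.05)
print(f"estimated lambda = {lam:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```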

Assumes a homoscedastic error structure (common or homogeneous variance regardless of response), i.e. the random error variance is the same for all observations. [Pg.319]

The errors are only, or essentially, in the measured values y as the dependent variable (b·sx ≪ sy) and, in addition, the errors sy are constant at the several calibration points (homoscedasticity) ... [Pg.157]

Commercial software packages are usually able to represent graphically the residual errors (deviations) of a given calibration model, which can then be examined visually. Typical plots, as shown in Fig. 6.8, may give information on the character of the residuals and therefore on the tests that have to be carried out, such as tests of randomness, normality, linearity, homoscedasticity, etc. [Pg.167]
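A residual plot of this kind is straightforward to produce; the following sketch (hypothetical simulated data, not Fig. 6.8 itself) plots residuals against fitted values so that randomness and homoscedasticity can be judged visually.

```python
# Sketch of a residual plot for visual inspection (hypothetical calibration data):
# residuals scattered randomly in a constant band around zero suggest a
# homoscedastic, adequate model; a funnel shape suggests heteroscedasticity.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(1, 10, 20)
y = 2.0 * x + 1.0 + np.random.normal(0, 0.5, size=x.size)   # simulated measurements

slope, intercept = np.polyfit(x, y, 1)
fitted = intercept + slope * x
residuals = y - fitted

plt.scatter(fitted, residuals)
plt.axhline(0.0, linestyle="--")
plt.xlabel("fitted value")
plt.ylabel("residual")
plt.title("Residuals vs. fitted values")
plt.show()
```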

As noted above, the variations in the data representing the error must meet the usual conditions for statistical validity: they must be random and statistically independent, and it is highly desirable that they be homoscedastic and Normally distributed. The data should be a representative sampling of the populations that the experiment is supposed... [Pg.54]

To model the relationship between PLA and PLR, we used each of these in ordinary least squares (OLS) multiple regression to explore the relationship between the dependent variables Mean PLR or Mean PLA and the independent variables (Berry and Feldman, 1985). OLS regression was used because the data satisfied the OLS assumptions for the model as the best linear unbiased estimator (BLUE): the distribution of errors (residuals) is normal, they are uncorrelated with each other, and homoscedastic (constant variance among residuals), with a mean of 0. We also analyzed predicted values plotted against residuals, as they are a better indicator of non-normality in aggregated data, and found them also to be homoscedastic and independent of one another. [Pg.152]
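A check along these lines can be sketched in Python (hypothetical predictors and response, not the PLA/PLR data): a Shapiro-Wilk test for normality of the residuals combined with a Breusch-Pagan test for constant variance, with statsmodels providing the OLS fit.

```python
# Sketch of checking OLS residuals for normality and constant variance
# (hypothetical data). Shapiro-Wilk tests normality of the residuals;
# Breusch-Pagan tests for heteroscedasticity against the regressors.
import numpy as np
import statsmodels.api as sm
from scipy import stats
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(50, 2))          # two hypothetical predictors
y = 1.0 + 0.8 * X[:, 0] - 0.3 * X[:, 1] + rng.normal(0, 1.0, size=50)

X_design = sm.add_constant(X)
fit = sm.OLS(y, X_design).fit()

w_stat, p_normal = stats.shapiro(fit.resid)
lm_stat, p_bp, f_stat, p_f = het_breuschpagan(fit.resid, X_design)

print(f"Shapiro-Wilk p = {p_normal:.3f} (normality of residuals)")
print(f"Breusch-Pagan p = {p_bp:.3f} (constant residual variance)")
```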

In Eq. 13.15, the squared standard deviations (variances) act as weights of the squared residuals. The standard deviations of the measurements are usually not known, and therefore an arbitrary choice is necessary. It should be stressed that this choice may have a large influence on the final best set of parameters. The scheme for appropriate weighting and, if appropriate, transformation of the data (for example logarithmic transformation to fulfil the requirement of homoscedastic variance) should be based on reasonable assumptions with respect to the error distribution in the data, for example as obtained during validation of the plasma concentration assay. The choice should be checked afterwards, according to the procedures for the evaluation of goodness-of-fit (Section 13.2.8.5). [Pg.346]
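One way to apply such variance weighting in practice is sketched below, assuming a hypothetical one-compartment concentration-time model and an assumed proportional error model; scipy's curve_fit divides each residual by the supplied sigma, which corresponds to weighting the squared residuals by the variances.

```python
# Sketch of variance weighting in a nonlinear fit (hypothetical one-compartment
# model C(t) = C0 * exp(-k*t)). Assumed standard deviations enter via sigma, so
# each squared residual is divided by its variance in the least-squares objective.
import numpy as np
from scipy.optimize import curve_fit

def one_compartment(t, c0, k):
    return c0 * np.exp(-k * t)

t = np.array([0.5, 1, 2, 4, 6, 8, 12], dtype=float)
conc = np.array([9.1, 8.0, 6.4, 4.2, 2.6, 1.8, 0.8])
sd = 0.10 * conc                     # assumed proportional (heteroscedastic) error

popt, pcov = curve_fit(one_compartment, t, conc, p0=[10.0, 0.2],
                       sigma=sd, absolute_sigma=True)
print("C0 = %.2f, k = %.3f" % tuple(popt))
```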

The variance of y at each point xi should be equal, i.e. constant over the whole working range of x; in other words, the errors in measuring y are independent of the values of x. This property is called homoscedasticity and can be tested by the Cochran test or by other tests (see [ISO 5725, clause 12]). If this condition is not met, weighted regression models may be considered. [Pg.52]
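Cochran's test statistic itself is simple to compute from replicate measurements at each calibration level; the sketch below (hypothetical replicates) calculates C = max(s_i^2) / Σ s_i^2, which must then be compared with a tabulated critical value — no critical-value lookup is included here.

```python
# Minimal sketch of the Cochran C statistic for homoscedasticity across
# calibration levels (hypothetical replicate data): C = max(s_i^2) / sum(s_i^2).
# The computed C must be compared with a tabulated critical value
# (e.g. from ISO 5725) for the given number of levels and replicates.
import numpy as np

replicates = [                       # hypothetical replicate responses per level
    [1.01, 0.98, 1.03],
    [2.02, 1.97, 2.05],
    [4.10, 3.95, 4.02],
    [8.30, 7.90, 8.15],
]

variances = np.array([np.var(r, ddof=1) for r in replicates])
C = variances.max() / variances.sum()
print(f"Cochran C = {C:.3f} (reject homoscedasticity if C exceeds the tabulated value)")
```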

In all types of data analysis there are assumptions made. In a parametric approach, like the one in NONMEM, many assumptions concern the handling of the residual error (9,12) and, in a sense, the validity of the whole analysis rests on the degree to which we have accounted for the residual variability appropriately. The two most important assumptions in this respect are (a) that the residual variability is homoscedastic and (b) that the residuals are symmetrically distributed. [Pg.198]

The assumption of homoscedasticity means that the residual variability should be constant over all available data dimensions (predictions, covariates, time, etc). If we observe heteroscedasticity, then we need to change the residual error model to account for this. In practice, this means that we should weight the data differently by using a different model for the residual variability. [Pg.198]

The data are known as homoscedastic, which means that the errors in y are independent of the concentration. Data for which the uncertainty, for example, grows with the concentration are heteroscedastic data. [Pg.131]

Long and Ervin (2000) used Monte Carlo simulation to compare the four estimators under homoscedastic and heteroscedastic linear models. The usually reported standard error estimator [Eq. (4.10)] was not studied. All heteroscedasticity-consistent estimators performed well even when heteroscedasticity was not present. When heteroscedasticity was present, Eq. (4.11) resulted in incorrect inferences when the sample size was less than 250 and was also more likely to result in a Type I error than the other estimators. When more than 250 observations were present, all estimators performed equally. Long and Ervin suggest that when the sample size is less than 250, Eq. (4.16) be used. Unfortunately, no pharmacokinetic software packages, and few statistical software packages, use these heteroscedasticity-consistent standard error estimators. [Pg.130]
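In Python, statsmodels does expose heteroscedasticity-consistent (sandwich) covariance estimators; the sketch below (hypothetical heteroscedastic data) contrasts classical and HC3 standard errors, on the assumption that Eq. (4.16) corresponds to the HC3 small-sample variant recommended by Long and Ervin.

```python
# Sketch of heteroscedasticity-consistent (sandwich) standard errors with
# statsmodels (hypothetical data). HC0-HC3 are available via cov_type; HC3 is
# the small-sample variant often recommended for fewer than ~250 observations.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=60)
y = 2.0 + 1.5 * x + rng.normal(0, 0.3 * x)       # error SD grows with x

X = sm.add_constant(x)
ols_fit = sm.OLS(y, X).fit()                     # classical standard errors
robust_fit = sm.OLS(y, X).fit(cov_type="HC3")    # heteroscedasticity-consistent

print("classical SE:", np.round(ols_fit.bse, 3))
print("HC3 SE:      ", np.round(robust_fit.bse, 3))
```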

Normally distributed homoscedastic random error with a standard deviation of 10. [Pg.135]

Sometimes it is useful to transform a nonlinear model into a linear one when the distribution of the error terms is approximately normal and homoscedastic. Such a case might arise when a suitable nonlinear function cannot be found to model the data. One might then try to change the relationship between x and Y so that a model can be found. One way to do this is to change the model... [Pg.139]
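A common hypothetical example is the exponential model Y = a·exp(b·x), which becomes linear in ln(Y); the sketch below fits it by ordinary least squares after the log transformation (data and parameter values are invented for illustration).

```python
# Hypothetical illustration of linearizing a nonlinear model: Y = a * exp(b * x)
# becomes ln(Y) = ln(a) + b * x, which can be fitted by ordinary least squares.
# The transformation also changes the error structure, so it is only appropriate
# if the errors on the log scale are roughly normal and homoscedastic.
import numpy as np

x = np.array([0, 1, 2, 3, 4, 5], dtype=float)
y = np.array([10.2, 6.1, 3.6, 2.3, 1.3, 0.8])    # hypothetical measurements

slope, intercept = np.polyfit(x, np.log(y), 1)
a, b = np.exp(intercept), slope
print(f"a = {a:.2f}, b = {b:.3f}")
```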

At first glance, the data suggest that an Emax model might best describe the increase in percent inhibition with increasing concentration, eventually plateauing at a maximal value of 70% inhibition. See Mager, Wyska, and Jusko (2003) for a useful review of pharmacodynamic models. The first model examined was an Emax model with additive (homoscedastic) residual error... [Pg.309]
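A fit of this kind can be sketched with scipy's curve_fit, using invented concentration/percent-inhibition values that plateau near 70%; the additive (homoscedastic) residual error corresponds to unweighted least squares.

```python
# Sketch of fitting an Emax model with additive (homoscedastic) residual error
# using scipy (hypothetical concentration / percent-inhibition data chosen to
# plateau near 70%): E = Emax * C / (EC50 + C) + eps, eps ~ N(0, sigma^2).
import numpy as np
from scipy.optimize import curve_fit

def emax_model(conc, emax, ec50):
    return emax * conc / (ec50 + conc)

conc = np.array([0.1, 0.3, 1, 3, 10, 30, 100], dtype=float)
inhib = np.array([5.0, 14.0, 30.0, 48.0, 60.0, 66.0, 69.0])   # % inhibition

popt, pcov = curve_fit(emax_model, conc, inhib, p0=[70.0, 3.0])
print("Emax = %.1f%%, EC50 = %.2f" % tuple(popt))
```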

The residuals may be distributed in various ways. First of all, they may be scattered more or less symmetrically about zero. This dispersion can be described by a standard deviation of random experimental error. If this is (approximately) constant over the experimental region, the system is homoscedastic, as has been assumed up to now. However, the analysis of residuals may show that the standard deviation varies within the domain, in which case the system is heteroscedastic. On the other hand, it may reveal systematic errors, where the residuals are not distributed symmetrically about zero but show trends which indicate model inadequacy. [Pg.308]

In determining a mathematical model, whether by linear combinations or by multilinear regression, we have assumed the standard deviation of random experimental error to be (approximately) constant (homoscedastic) over the experimental region. Mathematical models were fitted to the data and their statistical significance or that of their coefficients was calculated on the basis of this constant experimental variance. Now the standard deviation is often approximately constant. All experiments may then be assumed equally reliable and so their usefulness depends solely on their positions within the domain. [Pg.312]

The error variance is constant over the whole investigated range of X and is equal to a certain value. This hypothesis is often made by stating that the observed responses are homoscedastic. [Pg.214]

If the experimental error is of comparable size for all factor combinations, the data are termed homoscedastic. In the case of heteroscedastic data, the errors differ at different factor combinations. [Pg.221]

A second objection to using the line of regression of y on x, as calculated in Sections 5.4 and 5.5, in the comparison of two analytical methods is that it also assumes that the error in the y-values is constant. Such data are said to be homoscedastic. As previously noted, this means that all the points have equal weight when the slope and intercept of the line are calculated. This assumption is obviously likely to be invalid in practice. In many analyses, the data are heteroscedastic, i.e. the standard deviation of the y-values increases with the concentration. [Pg.130]
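Weighted regression addresses exactly this situation; the sketch below (hypothetical method-comparison data with an assumed standard-deviation model for y) uses statsmodels WLS with weights 1/s_y^2 so that the less precise points carry less weight.

```python
# Sketch of a weighted least-squares fit for heteroscedastic data
# (hypothetical method-comparison data where the SD of y grows with x):
# each point is weighted by 1 / s_y^2 so imprecise points count less.
import numpy as np
import statsmodels.api as sm

x = np.array([1, 2, 5, 10, 20, 50, 100], dtype=float)     # method A results
y = np.array([1.1, 2.1, 4.8, 10.4, 19.2, 51.5, 97.0])     # method B results
sd_y = 0.03 * y + 0.1                                      # assumed SD model for y

X = sm.add_constant(x)
wls_fit = sm.WLS(y, X, weights=1.0 / sd_y**2).fit()
print(wls_fit.params)          # intercept, slope
```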

