Big Chemical Encyclopedia


Backward elimination

Backward elimination works along similar principles to forward inclusion, except that we start with all variables in the model and then eliminate... [Pg.323]

In backward elimination, all x_i variables are initially entered into the model and are then eliminated if their partial F value is not greater than the F-to-Remove value, F_T. Table 10.7 presents the backward elimination process. Note that Step 1 included the entire model, and Step 2 provides the finished model, this time with both temperature and the interaction included in the model. The model is... [Pg.417]
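The F-to-Remove statistic above can be illustrated with a short sketch. The following is a minimal, hedged example (the simulated data and variable names are our own assumptions, not those of the textbook example) that computes each predictor's partial F value by the extra-sum-of-squares method:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30
X = rng.normal(size=(n, 3))                  # three candidate predictors
y = 2.0 + 1.5 * X[:, 0] + rng.normal(scale=0.5, size=n)  # only x1 truly matters

def sse(Xmat, yvec):
    """Residual sum of squares of an OLS fit with an intercept."""
    A = np.column_stack([np.ones(len(yvec)), Xmat])
    beta, *_ = np.linalg.lstsq(A, yvec, rcond=None)
    r = yvec - A @ beta
    return float(r @ r)

sse_full = sse(X, y)
df_error = n - X.shape[1] - 1                # n - k - 1 (intercept included)
Fs = []
for j in range(X.shape[1]):
    # extra sum of squares from dropping predictor j, scaled by the error MS
    sse_reduced = sse(np.delete(X, j, axis=1), y)
    Fs.append((sse_reduced - sse_full) / (sse_full / df_error))
    print(f"x{j + 1}: F-to-remove = {Fs[-1]:.2f}")
```

A variable whose F-to-remove falls below the chosen F_T would be the candidate for elimination at that step.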

This procedure drops the most important x_i variable identified through forward selection, the media concentration, but then retains the interaction term, which is a meaningless variable without the media concentration value. Nevertheless, the regression equation is presented in Table 10.8. In practice, the researcher would undoubtedly drop the interaction term, because... [Pg.417]

So what does one need to do? First, the three methods obviously may not provide the researcher with the same resultant model. To pick the best model requires experience in the field of study. In this case, using the backward elimination method, in which all x_i variables begin in the model and those less significant than F-to-Leave are removed, the media concentration was rejected. In some respects, the model was attractive in that its fit statistics, such as s, were more favorable. Yet a smaller, more parsimonious model usually is more useful across studies than a larger, more complex one. The fact that the interaction term was left in the model when x2 was rejected makes the x1 x x2 interaction a moot point. Hence, the model to select seems to be the one detected by both stepwise and forward selection, y = b0 + b2x2, as presented in Table 10.5. [Pg.418]

Regression Equation, Double Independent Variable, Example 10.1 [Pg.418]

Original Data and Predicted and Error Values, Reduced Model, Example 10.1 [Pg.419]


The computational formulae (10) and (15) constitute what is called the backward elimination path. [Pg.10]

In some cases, the first variable deleted in backward elimination is the first one inserted in forward selection. [Pg.137]

A backward-elimination analogue of the Efroymson procedure is possible. [Pg.137]

Both forward selection and backward elimination can fare arbitrarily poorly in finding the best-fitting subsets. [Pg.137]

The initial selection of variables can be further reduced automatically using a selection algorithm (often backward elimination or forward selection). Such an automated procedure sounds as though it should produce the optimal choice of predictive variables, but in practice it is often necessary to use clinical knowledge to override the statistical process: either to ensure inclusion of a variable that is known from previous studies to be highly predictive, or to eliminate variables that might lead to overfitting (i.e., overestimation of the predictive value of the model through inclusion of variables that appear predictive in the derivation cohort, probably by chance, but are unlikely to be predictive in other cohorts). [Pg.187]
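The clinical-override idea can be sketched as an automated backward elimination that simply refuses to consider a forced-in variable for removal. This is an illustrative sketch only: the forced_in set, ALPHA, covariate names, and simulated data are all assumptions, not from the source.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 40
names = ["age", "dose", "noise1", "noise2"]
X = rng.normal(size=(n, 4))
y = 1.0 + 0.8 * X[:, 1] + rng.normal(scale=0.7, size=n)  # only "dose" predicts

def sse(cols):
    """Residual sum of squares for the model using the given column indices."""
    A = np.column_stack([np.ones(n)] + [X[:, c] for c in cols])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    r = y - A @ beta
    return float(r @ r)

ALPHA = 0.10
forced_in = {"age"}                 # kept on (hypothetical) clinical grounds
current = list(range(4))
while True:
    full = sse(current)
    dfe = n - len(current) - 1
    # p-value of the partial F test for each variable that is allowed to leave
    cand = [(stats.f.sf((sse([c for c in current if c != j]) - full)
                        / (full / dfe), 1, dfe), j)
            for j in current if names[j] not in forced_in]
    if not cand:
        break
    p_max, j_out = max(cand)
    if p_max <= ALPHA:              # every removable variable is significant
        break
    current.remove(j_out)

kept = [names[j] for j in current]
print(kept)
```

Whatever the data say, "age" survives the elimination loop because it is never offered as a removal candidate.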

Backward elimination (BE-SWR) is a stepwise technique starting with a model in which all the variables are included and then deleting one variable at a time. At any step, the variable with the smallest F-ratio is eliminated if this F-ratio does not exceed a specified value F_out. At each step, the jth variable is eliminated from a k-size model if... [Pg.468]
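The BE-SWR loop described above can be sketched as follows; the simulated data and the F_out threshold are illustrative assumptions, not from the source.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 50, 4
X = rng.normal(size=(n, k))
y = 0.5 + 1.2 * X[:, 0] - 0.9 * X[:, 2] + rng.normal(scale=0.6, size=n)

def sse(cols):
    """Residual sum of squares for the model using the given column indices."""
    A = np.column_stack([np.ones(n)] + [X[:, c] for c in cols])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    r = y - A @ beta
    return float(r @ r)

F_OUT = 4.0                          # illustrative F-to-remove threshold
model = list(range(k))
while len(model) > 1:
    full = sse(model)
    dfe = n - len(model) - 1
    # partial F-ratio for removing each variable still in the model
    ratios = [((sse([c for c in model if c != j]) - full) / (full / dfe), j)
              for j in model]
    f_min, j_min = min(ratios)
    if f_min > F_OUT:                # all survivors are justified; stop
        break
    model.remove(j_min)

print(sorted(model))
```

Each pass deletes the single weakest variable, exactly one at a time, until the smallest remaining F-ratio exceeds F_out.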

Backward elimination (BE) was first applied to the data set using both selectivity criteria. In order to reduce the initial 76 wavelengths to a set of 5 wavelengths, 76 + 75 + ... + 6 = 2911 combinations must be calculated. For the GSEL criterion, the combination 37/ 51/ 5 76 with GSEL = 0.3254 was identified as best, while BE found the subset 10/ 37/ 50/ 51/ 76 with SEL = 0.5747 to be best. These values... [Pg.37]
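The count of model evaluations can be checked directly: each elimination step with m wavelengths evaluates m candidate removals, so reducing 76 wavelengths to 5 requires 76 + 75 + ... + 6 subset fits.

```python
# Sum of candidate-removal counts from 76 variables down to the step
# that leaves 5 (which evaluates 6 candidates).
total = sum(range(6, 77))
print(total)   # 2911
```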

Unlike the K-matrix example, backward elimination does not deliver acceptable... [Pg.42]

An alternative method is backward elimination. This technique starts with a full equation containing every measured variate and successively deletes one variable at each step. Variables are dropped from the equation on the basis of testing the significance of the regression coefficients, i.e., for each variable: is the coefficient zero? The F-statistic used is referred to as the computed F-to-remove. The procedure is terminated when all variables remaining in the model are considered significant. [Pg.186]


There are several approaches to population model development that have been discussed in the literature (7, 9, 15-17). The traditional approach has been to make scatterplots of weighted residuals versus covariates and to look for trends in the plots that suggest a relationship. The covariates identified in the scatterplots are then tested against each of the parameters in a population model, one covariate at a time. Covariates so identified are used to create a full model, and the final irreducible model, given the data, is obtained by backward elimination. The drawback of this approach is that it is only valid for covariates that act independently on the pharmacokinetic (PK) or pharmacokinetic/pharmacodynamic (PK/PD) parameters, and the dimensionality of the covariate data is not taken into account. [Pg.229]

Maitre et al. (15) proposed an improvement on the traditional approach. It consists of obtaining individual Bayesian post hoc PK or PK/PD parameters from population modeling software such as NONMEM and plotting these parameter estimates against covariates to look for possible parameter-covariate relationships. The individual parameter estimates are obtained using a base model (a model without covariates). The covariates are in turn tested to determine individually significant covariate predictors, which are then used to form a full model. The final irreducible model is obtained by backward elimination. The drawback of this approach is the same as that of the traditional approach. [Pg.230]
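The screening step in this style of analysis can be sketched as follows, using simulated post hoc clearance estimates and two hypothetical covariates (WT and CRCL); all names and data here are assumptions for illustration, not from the cited study.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 80
wt = rng.normal(70, 12, n)                 # body weight covariate
crcl = rng.normal(100, 20, n)              # creatinine clearance covariate
cl_i = 0.05 * wt + rng.normal(0, 0.5, n)   # post hoc CL estimates driven by WT

# screen each covariate against the individual parameter estimates
rs = {}
for name, cov in [("WT", wt), ("CRCL", crcl)]:
    rs[name] = float(np.corrcoef(cov, cl_i)[0, 1])
    print(f"{name}: r = {rs[name]:.2f}")
```

In practice this screening is usually done graphically (scatterplots of the estimates against each covariate); the correlation here just summarizes the same trend numerically.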

Step 6. With the appropriate pharmacostatistical models, population model building is performed using covariates retained in Step 5, with the covariate selection level set at α = 0.005. Backward elimination for covariate selection is applied to each of the 100 bootstrap samples. The covariates found to be important in explaining the variability in the parameter of interest are used to build the final population PM model. [Pg.231]

The fully parameterized model elucidated thus far was subjected to the backward elimination procedure to achieve parsimony. When a parameter was removed from the model, an objective function value increase of at least 7.88, corresponding to a nominal significance level of 0.005, was required for retention of the covariate relationship quantified by the parameter. Ultimately, the effect of RACE on Kant was removed, as was the effect of gestational age on PRE and ... The influence of bronchopulmonary dysplasia was also determined to be insignificant. Removal of these covariates resulted in the final PD model as follows ... [Pg.711]
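The 7.88 threshold is the chi-square critical value with one degree of freedom at α = 0.005: the NONMEM objective function is proportional to -2 log likelihood, so an objective-function difference between nested models behaves asymptotically as a likelihood-ratio statistic. A quick check (SciPy is an assumption about available tooling):

```python
from scipy import stats

# Chi-square critical value, 1 degree of freedom, alpha = 0.005
crit = stats.chi2.ppf(1 - 0.005, df=1)
print(round(crit, 2))   # 7.88
```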

We present a pediatric population PK (PPK) model development example to illustrate the impact that the approach to scaling parameters by size can have on pediatric PPK analyses; a typical pediatric study is included. It is intuitive that patient size will affect PK parameters such as clearance, apparent volume, and intercompartmental clearance, and that the range of patient size in most pediatric PPK data sets is large. Thus, it is expected that in most pediatric PPK studies subject size will affect multiple PK parameters. However, because there are complex interactions between covariates and parameters in pediatric populations, there are also intrinsic pitfalls to stepwise forward covariate inclusion. Selection of significant covariates via backward elimination has appeal in nonlinear model building; however, it requires knowledge of the relationship between the covariate and the model parameters (linear vs. nonlinear impact) and can encounter numerical difficulties with complex models and the limited volume of data often available from pediatric studies. Thus, there is a need for PK analysis of pediatric data to treat size as a special covariate. Specifically, it is important to incorporate it into the model, in a mechanistically appropriate manner, prior to evaluation of other covariates. [Pg.970]

Although simultaneous inclusion of size on all parameters improved the model, some of the size covariates did not remain statistically significant for all parameters when evaluated by stepwise backward elimination. While mechanistically this is not a plausible reflection of the true nature of cyclosporine's pediatric PK disposition, it is not totally unexpected that these covariates do not all reach statistical... [Pg.971]

Hierarchical terms added after backward elimination regression... [Pg.104]

Backwards elimination is similar to forward selection, except that the initial model contains all the covariates and removal from the model starts with the covariate of least significance. Removal then proceeds one variable at a time until no covariates meet the criterion for removal (F_out). Stepwise regression is a blend of forward and backwards selection in that variables can be added or removed from the model at each stage; thus, a variable may be added and another removed in the same step. [Pg.64]

To help identify a final model, sequential variable selection using backwards elimination was done with SAS. A p-value of 0.05 was required to stay in the model. The final model is shown in Table 5.7. Although the final model identified the 4-h time point as a significant covariate, it did not identify the 3-h time point as... [Pg.157]

Bies et al. (2003) compared the stepwise model building approach to the GA approach. Three different stepwise approaches were used: forward stepwise selection starting from a base model with no covariates; backwards elimination starting from a full model with all covariates; and forward addition followed by backwards elimination to retain only the most important covariates. Bies et al. found that the GA approach identified a model a full 30 points lower in objective function value than the other three methods, and that the GA algorithm identified important... [Pg.239]

Backward elimination starts from all variables and eliminates the one that contributes least to the model (i.e., the one that produces the lowest partial F-test value). This procedure is repeated until no more x variables can be eliminated, because the remaining ones are all justified by their sequential F values. However, this procedure often ends up in a local optimum; in addition, it cannot be applied if the number of tested variables is larger than the number of objects. Stepwise regression avoids some of these problems. It is a forward selection procedure where, after every addition of a new variable, the possible elimination of any other one is checked by a sequential F test [41]. [Pg.547]
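The forward-with-elimination-check procedure can be sketched as follows; the data, the F_IN/F_OUT thresholds, and all names are illustrative assumptions. Setting F_OUT slightly below F_IN helps prevent the procedure from adding and immediately removing the same variable.

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 60, 5
X = rng.normal(size=(n, k))
y = 1.0 + 1.1 * X[:, 1] + 0.9 * X[:, 3] + rng.normal(scale=0.7, size=n)

def sse(cols):
    """Residual sum of squares for the model using the given column indices."""
    A = np.column_stack([np.ones(n)] + [X[:, c] for c in cols])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    r = y - A @ beta
    return float(r @ r)

def partial_f(base, j):
    """Sequential F for adding variable j to base (or removing it, in reverse)."""
    with_j = sorted(set(base) | {j})
    full, red = sse(with_j), sse([c for c in with_j if c != j])
    dfe = n - len(with_j) - 1
    return (red - full) / (full / dfe)

F_IN, F_OUT, model = 4.0, 3.9, []
while True:
    # forward step: try to add the best currently excluded variable
    out = [j for j in range(k) if j not in model]
    if not out:
        break
    f_best, j_best = max((partial_f(model, j), j) for j in out)
    if f_best <= F_IN:
        break
    model.append(j_best)
    # backward check: drop any variable no longer justified by its F value
    while len(model) > 1:
        f_worst, j_worst = min((partial_f([c for c in model if c != j], j), j)
                               for j in model)
        if f_worst >= F_OUT:
            break
        model.remove(j_worst)

print(sorted(model))
```

The backward check after each addition is what distinguishes stepwise regression from plain forward selection: an early entrant can be dropped once later variables make it redundant.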

