Big Chemical Encyclopedia


Testing the Model Assumptions

Karlsson et al. (1998) present a comprehensive list of assumptions made during a PopPK analysis and how to test them. The assumptions can be classified into the following groups  [Pg.240]

Each assumption will be dealt with specifically below; the assumptions are presented in a slightly different manner than in Karlsson et al. (1998). [Pg.240]

At the heart of any analysis lies the question of whether the structural model was adequate. Notice that it was not said that the model was correct. No model is correct. The question is whether the model adequately characterizes the data (a descriptive model) and is useful for predictive purposes (a predictive model). Adequacy of the structural model for descriptive purposes is typically assessed through goodness-of-fit plots, particularly observed versus predicted plots, residual plots, histograms of the distribution of the random effects, and histograms of the distribution of the residuals. Adequacy of the model for predictive purposes is assessed using simulation and predictive checks. [Pg.241]
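As a sketch of the numbers behind such goodness-of-fit plots (all model parameters and data below are hypothetical, not from the source), one can compare observed and predicted concentrations and summarize the residuals:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical one-compartment model: C(t) = (Dose/V) * exp(-k*t)
dose, V, k = 100.0, 20.0, 0.3
t = np.linspace(0.5, 12, 24)
pred = (dose / V) * np.exp(-k * t)

# Simulated "observations": predictions perturbed by 10% proportional noise
obs = pred * (1 + 0.1 * rng.standard_normal(t.size))

# Quantities behind the usual goodness-of-fit plots: observed-vs-predicted
# agreement and the residuals that would go into a residual plot
residuals = obs - pred
r = float(np.corrcoef(obs, pred)[0, 1])
print(round(r, 3), round(float(residuals.mean()), 3))
```

A high observed-versus-predicted correlation and residuals centered near zero are necessary (though not sufficient) signs of descriptive adequacy.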

The same structural model applies to all subjects at all occasions. [Pg.241]

An adequate covariate submodel building strategy was used. [Pg.241]


After the series of metabolic pathways had been elucidated for the three model compounds 1-3, these data were implemented into the mathematical model PharmBiosim. The nonlinear system's response to varying ketone exposure was studied. The predicted vanishing of oscillatory behavior with increasing ketone concentration can be used to experimentally test the model assumptions about the reduction of the xenobiotic ketone. To generate such predictions, we employed, as a convenient tool, continuation of the nonlinear system's behavior in the control parameters. This strategy is applicable to large systems of coupled, nonlinear, ordinary differential equations and shall, together with direct numerical simulations, be used to extend PharmBiosim further than was sketched here. The model already allows more detailed predictions of stereoisomer distribution in the products. [Pg.83]

At the second step, the model assumptions must be examined and confirmed. The reader is referred to the section on Testing the Model Assumptions for details. Briefly, informative graphics are essential (Ette and Ludden, 1995). Scatter plots of observed versus predicted concentrations, weighted residuals versus predicted concentrations, and weighted residuals versus time provide evidence of the goodness of fit of the model. Histograms and possibly QQ plots of the distribution of the residuals, the ηs (deviations from the mean), and the EBEs of the random effects are used to examine the assumptions of normality. Further, sensitivity analysis can be done to assess the stability of the model. [Pg.251]
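A QQ plot compares sorted residuals against theoretical normal quantiles; the sketch below shows the numerical comparison behind such a plot (the "weighted residuals" here are simulated stand-ins, not data from the source):

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(1)
wres = rng.standard_normal(200)  # stand-in for weighted residuals

# QQ comparison: sorted residuals vs. theoretical N(0,1) quantiles
probs = (np.arange(1, wres.size + 1) - 0.5) / wres.size
empirical = np.sort(wres)
theoretical = np.array([NormalDist().inv_cdf(p) for p in probs])

# On a QQ plot these pairs should hug the identity line; numerically,
# their correlation is close to 1 when the normality assumption holds
qq_corr = float(np.corrcoef(empirical, theoretical)[0, 1])
print(round(qq_corr, 3))
```

Systematic curvature in the pairs (a QQ correlation noticeably below 1) would flag a violated normality assumption.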

Computer simulations are ideally suited to this sort of problem; indeed, validating approximate theories is one of the most powerful applications of computer simulations. Computer simulations based on the Monte Carlo (MC) and Molecular Dynamics (MD) techniques are exact implementations of statistical mechanics for a model system [28]. Since the intermolecular potentials are known exactly within the simulations, it is possible to generate exact "experimental" results for the model system. Further, since the simulations provide complete information about the system, it is possible to test the model assumptions directly. In this way it is possible to expedite improvements in the theoretical models. [Pg.249]
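A minimal illustration of this idea, under assumed parameters: a Metropolis Monte Carlo sampler for a one-dimensional harmonic model system, whose exact statistical-mechanics result (<x²> = kT for U(x) = x²/2) is known and can be checked directly against the simulation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Metropolis MC for a 1-D harmonic model system, U(x) = x**2 / 2 at kT = 1.
# The exact result <x**2> = kT makes the simulation directly checkable.
def metropolis(n_steps=100_000, step=1.0, kT=1.0):
    x = 0.0
    samples = np.empty(n_steps)
    for i in range(n_steps):
        x_new = x + step * rng.uniform(-1.0, 1.0)
        dU = 0.5 * (x_new**2 - x**2)
        if dU <= 0.0 or rng.random() < np.exp(-dU / kT):
            x = x_new  # accept the trial move
        samples[i] = x
    return samples

xs = metropolis()
msd = float(np.mean(xs**2))
print(round(msd, 2))  # should land near the exact value 1.0
```

Because the potential is known exactly, any discrepancy between the sampled <x²> and the exact value exposes an error in the sampler, not in the "experiment".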

Testing the model assumptions used to develop this description is also an aim of the FEBEX experiment mentioned above. [Pg.50]

Watanabe and Ohnishi [39] have proposed another model for the polymer consumption rate (in place of Eq. 2) and have also integrated their model to obtain the time dependence of the oxide thickness. Time-dependent oxide thickness measurement in the transient regime is the clearest way to test the kinetic assumptions in these models; however, neither model has been subjected to experimental verification in the transient regime. Equation 9 may be used to obtain time-dependent oxide thickness estimates from the time dependence of the total thickness loss, but such results have not been published. Hartney et al. [42] have recently used variable-angle XPS to determine the time dependence of the oxide thickness for two organosilicon polymers and several etching conditions. They did not present kinetic model fits to their results, nor did they compare their results to time-dependent thickness estimates from the material balance (Eq. 9). More research on the transient regime is needed to determine the validity of Eq. 10 or the comparable result for the kinetic model presented by Watanabe and Ohnishi [39]. [Pg.224]

The following criteria are usually directly applied to the calibration set to enable a fast comparison of many models, as is necessary in variable selection. The criteria characterize the fit, and therefore the (usually only few) resulting models have to be tested carefully for their prediction performance on new cases. The measures are reliable only if the model assumptions are fulfilled (independent, normally distributed errors). They can be used to select an appropriate model by comparing the measures for models with various values of m. [Pg.128]
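One common fit criterion of this kind is AIC computed under the independent-normal-error assumption; the sketch below (with entirely hypothetical data) compares candidate variable subsets on a calibration set:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical calibration set: y depends on x1 only; x2 is irrelevant
n = 40
X = rng.standard_normal((n, 2))
y = 2.0 * X[:, 0] + 0.3 * rng.standard_normal(n)

def aic(y, X_sub):
    """AIC for an OLS fit, valid under independent, normally distributed errors."""
    X1 = np.column_stack([np.ones(len(y)), X_sub])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    rss = float(np.sum((y - X1 @ beta) ** 2))
    return len(y) * np.log(rss / len(y)) + 2 * X1.shape[1]

aic_0 = aic(y, np.empty((n, 0)))  # intercept-only baseline
aic_1 = aic(y, X[:, :1])          # the informative variable
aic_2 = aic(y, X)                 # informative plus irrelevant variable
print(aic_0 > aic_1)              # the informative variable improves the criterion
```

Because such criteria only characterize the fit on the calibration set, the few models they shortlist still need validation on new cases, as the paragraph above stresses.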

Ubr1p in the Rad6p/Ubr1p-mediated formation of a substrate-linked multi-Ub chain (31). A similar approach could be used to test the model's assumption in regard to the demonstrated Ufd4p-Rpt6p interaction. [Pg.23]

Neither one-point nor two-point calibrations have room to test the model or statistical assumptions, but as long as the model has been rigorously validated and its use in the laboratory has been verified, these methods can work well. Typical use of this calibration is in process control in pharmaceutical companies, where the system is very well known and controlled and there are sufficient quality control samples to ensure on-going performance. [Pg.64]
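A two-point calibration is just a straight line through two standards; the sketch below (with made-up standard concentrations and signals) shows why no residual degrees of freedom remain for testing the linear-model assumption:

```python
# Two-point calibration: a straight line through two standards. With only two
# points there are no residual degrees of freedom, so linearity cannot be
# tested here; the model must have been validated beforehand.
def two_point_calibration(c1, s1, c2, s2):
    """Return a function mapping an instrument signal to concentration."""
    slope = (s2 - s1) / (c2 - c1)
    intercept = s1 - slope * c1
    return lambda signal: (signal - intercept) / slope

# Hypothetical standards at concentrations 1.0 and 10.0 (signal units made up)
to_conc = two_point_calibration(1.0, 0.21, 10.0, 2.01)
print(round(to_conc(1.11), 2))  # → 5.5
```

The line fits both standards exactly by construction, so the quality control samples mentioned above are the only check on on-going performance.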

In order to estimate the systematic errors introduced by the model assumptions, we perform some test calculations. Instead of the velocity-law exponent β=1, another fit is obtained with β=0.5 (Fig. 1). This fit yields an effective temperature about 3000 K higher than with β=1. In order to simulate the effect of the suspected hydrogen content, a further fit (Fig. 2) is made in which one free electron per helium atom is added artificially. This has only a marginal influence on the derived temperature (+100 K). Thus, we conclude that our model assumptions may introduce a systematic error of the order of 5000 K. [Pg.143]

In order to test the small χ assumptions in our calculations of condensed phase vibrational transition probabilities and rates, we have performed model calculations [88, 101, 102] for a collinear system with one molecule moving between two solvent particles. The positions of the solvent particles are held fixed. The center of mass position of the solute molecule is the only slow variable coordinate in the system. This allows for the comparison of surface hopping calculations based on small χ approximations with calculations without these approximations. In the model calculations discussed here, and in the calculations from many particle simulations reported in Table II, the approximations made for each trajectory are that the nonadiabatic coupling is constant and that the slopes of the initial and final... [Pg.199]

For the test case being considered, the optimum coal feed size given by Eq. 22 is 1.6 mm. Clearly, the optimum depends upon the operating conditions, and also on the model assumptions. [Pg.92]

These models [also] have been applied successfully to soil colloids to bring matters full circle but, like their predecessors, they rely solely on prior molecular concepts and are tested only by goodness-of-fit to adsorption data. Since the model assumptions are so different and the models so plausible, one is left to wonder what physical truth they bear. One fears that the fog will lift only to reveal a Tower of Babel. [Pg.44]

Analyze, test, and revise the model. This task, analyzing a model and learning from it, should be the most time-consuming and demanding one. We have to make sure that the model is implemented correctly, observe model output, compare it to data, and test how changes in the model assumptions affect the model's behavior. Finally, we can also try to deduce new predictions for validation: Does the model predict phenomena or patterns that we did not know and use in some way for model development and calibration? [Pg.46]
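As an illustration of testing how a changed assumption alters model behavior (all rate constants and concentrations below are hypothetical), one can run the same model under two alternative assumptions and quantify the difference:

```python
import numpy as np

# Two alternative elimination assumptions for the same hypothetical substance
t = np.linspace(0.0, 24.0, 49)   # hours, step 0.5
C0 = 10.0                        # initial concentration

# Assumption A: first-order elimination, k = 0.2 per hour (closed form)
linear = C0 * np.exp(-0.2 * t)

# Assumption B: saturable (Michaelis-Menten) elimination, Euler-integrated
Vmax, Km, dt = 2.0, 5.0, 0.5
sat = [C0]
for _ in t[1:]:
    C = sat[-1]
    sat.append(C - dt * Vmax * C / (Km + C))
sat = np.array(sat)

# How much does the choice of assumption matter over the simulated interval?
max_diff = float(np.max(np.abs(linear - sat)))
print(round(max_diff, 2))
```

A large divergence between the two trajectories tells us the assumption matters and deserves a targeted test against data; a negligible one tells us the model is insensitive to that choice.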

Within the limits of the assumptions on which the model is predicated, the results are reasonable, and the model could be used as a preliminary tool in the evaluation of discharge episodes. However, there is a need to improve capabilities and to test the model rigorously. Among the areas where future development and research are needed, the following appear to be the most pressing... [Pg.210]

The prediction of multicomponent equilibria based on the information derived from the analysis of single-component adsorption data is an important issue, particularly in the domain of liquid chromatography. To solve the general adsorption isotherm, Equation (27.2), Quinones et al. [156] have proposed an extension of the Jovanovic-Freundlich isotherm for each component of the mixture as local adsorption isotherms. They tested the model with experimental data on the system of 2-phenylethanol and 3-phenylpropanol mixtures adsorbed on silica. The experimental data were published elsewhere [157]. The local isotherm employed to solve Equation (27.2) includes lateral interactions, which is a step forward with respect to, for example, the Langmuir equation. The results obtained account better for competitive data. One drawback of the model concerns the computational time needed to invert Equation (27.2); nevertheless, the authors proposed a method to minimize it. The success of this model compared to others resides in the fact that it takes into account the two main sources of nonideal behavior: surface heterogeneity and adsorbate-adsorbate interactions. The authors pointed out that there is some degree of thermodynamic inconsistency in this and other models based on similar assumptions. These inconsistencies could arise from the simplifications included in their derivation, the main one being related to the monolayer capacity of each component [156]. [Pg.325]

According to the model assumption, the success data of the time-terminated test stage si follow a Binomial distribution. [Pg.1617]
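A small sketch of that assumption (the reliability value and test sizes below are hypothetical): under a Binomial(n, R) model, the probability of any success count in the test stage follows directly from the probability mass function:

```python
from math import comb

# Binomial model for success data of a time-terminated test stage:
# s successes in n trials with (hypothetical) per-trial reliability R.
def binom_pmf(s, n, R):
    return comb(n, s) * R**s * (1.0 - R) ** (n - s)

# Probability of at least 9 successes in 10 trials when R = 0.9
p = binom_pmf(9, 10, 0.9) + binom_pmf(10, 10, 0.9)
print(round(p, 4))  # → 0.7361
```

Comparing such model probabilities against the observed success counts is one direct way to test the Binomial assumption itself.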

In testing the model, two major assumptions were made: the plug-flow assumption (the absence of radial concentration gradients) and an isothermal profile throughout the reactor. Calculations by Dunkleman (8) confirmed that these assumptions were closely approached in the reactor. [Pg.255]

Another example of a process in which a charge is moved across an interface is interfacial electron transfer reactions. As in the case of ion transfer, experimental data on electron transfer across liquid-liquid interfaces are very limited. For this process, however, there exists a theoretical framework developed within a dielectric continuum model, which built on the fundamental theory of electron transfer in bulk media. Computer simulations, which complement experiments and theory, have not yet dealt with chemically realistic systems but, instead, considered idealized molecules to test the basic assumptions of the continuum model. [Pg.42]

