Big Chemical Encyclopedia


Effect models methods

Nonlinear mixed-effects modeling methods as applied to pharmacokinetic-dynamic data are operational tools able to perform population analyses [461]. In the basic formulation of the model, it is recognized that the overall variability in the measured response in a sample of individuals, which cannot be explained by the pharmacokinetic-dynamic model, reflects both interindividual dispersion in kinetics and residual variation, the latter including intraindividual variability and measurement error. The observed response of an individual within the framework of a population nonlinear mixed-effects regression model can be described as... [Pg.311]
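A common way to write this formulation explicitly (the symbols below are supplied for illustration, since the excerpt's own equation is cut off) is:

```latex
y_{ij} = f(x_{ij}, \theta_i) + \varepsilon_{ij}, \qquad
\theta_i = \theta + \eta_i, \qquad
\eta_i \sim N(0, \Omega), \qquad \varepsilon_{ij} \sim N(0, \sigma^2)
```

Here $y_{ij}$ is the $j$th observation on individual $i$, $\theta$ holds the population (fixed-effect) parameters, $\eta_i$ captures interindividual dispersion in kinetics, and $\varepsilon_{ij}$ is the residual variation (intraindividual variability plus measurement error).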

Most of the non-linear mixed-effects modeling methods estimate the parameters by means of ML. The probability of the data under the model is written as a function of the model parameters, and parameter estimates are chosen to maximize this probability. This amounts to asserting that the best parameter estimates are those that render the observed data more probable than they would be under any other set of parameters. [Pg.2951]
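As a minimal sketch of this ML principle, the following fits a mono-exponential decay with additive normal error by minimizing the negative log-likelihood. The model, parameter names, and simulated data are illustrative assumptions, not taken from the excerpt:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical data: C(t) = C0 * exp(-k*t) plus additive normal noise
rng = np.random.default_rng(0)
t = np.linspace(0.5, 12, 10)
conc = 10.0 * np.exp(-0.3 * t) + rng.normal(0, 0.2, t.size)

def neg_log_likelihood(params):
    c0, k, sigma = params
    pred = c0 * np.exp(-k * t)
    resid = conc - pred
    # Normal log-likelihood (up to a constant), negated for minimization
    return t.size * np.log(sigma) + np.sum(resid**2) / (2 * sigma**2)

fit = minimize(neg_log_likelihood, x0=[5.0, 0.1, 1.0],
               bounds=[(1e-6, None)] * 3, method="L-BFGS-B")
c0_hat, k_hat, sigma_hat = fit.x
```

The estimates returned are exactly those that make the observed concentrations most probable under the assumed model.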

ON THE DEVELOPMENT OF SOLVENT EFFECT MODELS: METHOD DEVELOPMENT AND INITIAL APPLICATIONS [Pg.18]

Special methods tailored to these phenomena have been developed for modeling such effects. These methods consist of a collection of experimental data framed in graphs or semiempirical expressions. [Pg.134]

Because physicochemical cause-and-effect models are the basis of all measurements, statistics are used to optimize, validate, and calibrate the analytical method, and then to interpolate the obtained measurements; the models tend to be very simple (i.e., linear) over the concentration interval used. [Pg.10]
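A minimal sketch of such a simple linear calibration, with made-up standards and responses, shows the optimize-then-interpolate pattern:

```python
import numpy as np

# Standards of known concentration give measured instrument responses
conc_std = np.array([0.0, 1.0, 2.0, 4.0, 8.0])       # known concentrations
resp_std = np.array([0.02, 1.05, 1.98, 4.10, 7.95])  # measured responses

# Fit the linear cause-and-effect model response = slope*conc + intercept
slope, intercept = np.polyfit(conc_std, resp_std, 1)

def to_concentration(response):
    # Invert the calibration line to interpolate an unknown's concentration
    return (response - intercept) / slope

unknown = to_concentration(3.0)
```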

FD-MS is also an effective analytical method for direct analysis of many rubber and plastic additives. Lattimer and Welch [113,114] showed that FD-MS gives excellent molecular ion spectra for a variety of polymer additives, including rubber accelerators (dithiocarbamates, guanidines, benzothiazyl, and thiuram derivatives), antioxidants (hindered phenols, aromatic amines), p-phenylenediamine-based antiozonants, processing oils, and phthalate plasticisers. Alkylphenol ethoxylate surfactants have been characterised by FD-MS [115]. Jackson et al. [116] analysed some plastic additives (hindered phenol AOs and benzotriazole UVA) by FD-MS. Reaction products of a p-phenylenediamine antiozonant and cis-9-tricosene (a model olefin) were assessed by FD-MS [117]. [Pg.375]

Perhaps the biggest gap in terms of effective models is the capability of simultaneously handling changeovers, inventories, and resource constraints. Sequential methods handle the first well, while discrete-time models (e.g., STN, RTN) handle the last two well. While continuous-time models with global time intervals can in principle handle all three issues, they are at this point still much less efficient than discrete-time models, and therefore require further research. [Pg.182]

Despite advances in MILP solution methods, problem size is still a major issue since scheduling problems are known to be NP-hard (i.e., exponential increase of computation time with size in worst case). While effective modeling can help to overcome to some extent the issue of computational efficiency, special solution strategies such as decomposition and aggregation are needed in order to address the ever increasing sizes of real-world problems. [Pg.182]

There are two common methods for obtaining estimates of the fixed effects (the mean) and the variability: the two-stage approach and the nonlinear mixed-effects modeling approach. The two-stage approach involves multiple measurements on each subject. The nonlinear mixed-effects model can be used in situations where extensive measurements cannot or will not be made on all or any of the subjects. [Pg.356]
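The two-stage approach can be sketched as follows: stage 1 fits each subject separately (here by log-linear regression of a mono-exponential), and stage 2 summarizes the individual estimates. The data, subject count, and parameter values are simulated purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.array([1.0, 2.0, 4.0, 8.0])                # sampling times
true_k = rng.normal(0.25, 0.05, size=6)           # interindividual variability

k_hat = []
for k_i in true_k:
    # Simulated concentrations for one subject, with multiplicative noise
    conc = 10.0 * np.exp(-k_i * t) * np.exp(rng.normal(0, 0.05, t.size))
    slope, _ = np.polyfit(t, np.log(conc), 1)     # stage 1: per-subject fit
    k_hat.append(-slope)

k_mean = np.mean(k_hat)                            # stage 2: fixed effect
k_var = np.var(k_hat, ddof=1)                      # stage 2: variability
```

Note that this requires multiple measurements per subject, which is exactly the limitation the nonlinear mixed-effects approach relaxes.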

The mean-centering operation effectively removes the absolute intensity information from each of the variables, thus enabling subsequent modeling methods to focus on the response variations about the mean. In PAT instrument calibration applications, mean-centering is almost always useful, because it is almost always the case that relevant analyzer signal is represented by variation in responses at different variables, and that the absolute values of the responses at those variables are not relevant to the problem at hand. [Pg.370]
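The operation itself is a one-liner; for a data matrix X with samples as rows and variables as columns (a toy matrix here):

```python
import numpy as np

# Toy data matrix: rows = samples, columns = variables
X = np.array([[1.0, 10.0],
              [2.0, 12.0],
              [3.0, 14.0]])

# Subtract each column's mean so models see only variation about the mean
X_centered = X - X.mean(axis=0)
# Each column of X_centered now sums to zero; absolute intensity is gone.
```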

Although the MCR method can be a very effective exploratory method, several warnings are appropriate. One must be careful interpreting MCR-determined K and C as absolute pure component spectra and pure component time profiles, respectively, due to the possible presence of intercorrelations between chemical components and spectral features, as well as nonlinear spectral interaction effects. Furthermore, the optimal number of components to be determined (A) must be specified by the user, and is usually not readily apparent a priori. In practice, A is determined by trial and error, and any attempts to overfit an MCR model will often result in K or C profiles that either contain anomalous artifacts, or are not physically meaningful. The application of inaccurate or inappropriate constraints can also lead to misleading or anomalous results with limited or no information content. [Pg.403]
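A bare-bones sketch of the alternating-least-squares iteration underlying MCR, with a non-negativity constraint and a user-chosen component count A, looks like this (the simulated data and iteration count are assumptions for illustration, not a production algorithm):

```python
import numpy as np

# Simulated rank-2 data: D = C_true @ K_true (concentrations x spectra)
rng = np.random.default_rng(2)
C_true = np.abs(rng.random((20, 2)))   # concentration-time profiles
K_true = np.abs(rng.random((2, 15)))   # pure-component spectra
D = C_true @ K_true

A = 2                                  # number of components (user-specified)
C = np.abs(rng.random((20, A)))        # random initial guess
for _ in range(200):
    # Alternately solve D ~ C @ K for K, then for C, clipping to enforce
    # non-negativity (a crude but common constraint)
    K = np.clip(np.linalg.lstsq(C, D, rcond=None)[0], 0, None)
    C = np.clip(np.linalg.lstsq(K.T, D.T, rcond=None)[0].T, 0, None)

residual = np.linalg.norm(D - C @ K) / np.linalg.norm(D)
```

The warning in the text applies directly: the recovered C and K fit the data but need not equal C_true and K_true, and choosing A too large typically produces artifact-laden profiles.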

Improved x-data pretreatment: Newly discovered effects in the on-line analyzer data could be filtered out more effectively using a different pretreatment method. More effective pretreatment would then reduce the burden on the modeling method to generate an effective model. [Pg.426]
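As one illustrative pretreatment (chosen here as an example; the excerpt does not name a specific filter), a Savitzky-Golay filter can smooth high-frequency noise out of analyzer data before modeling:

```python
import numpy as np
from scipy.signal import savgol_filter

# Synthetic analyzer trace: smooth signal plus noise
rng = np.random.default_rng(3)
x = np.linspace(0, 4 * np.pi, 200)
signal = np.sin(x)
noisy = signal + rng.normal(0, 0.3, x.size)

# Savitzky-Golay smoothing: local cubic fits over a 21-point window
smoothed = savgol_filter(noisy, window_length=21, polyorder=3)

rmse_before = np.sqrt(np.mean((noisy - signal) ** 2))
rmse_after = np.sqrt(np.mean((smoothed - signal) ** 2))
```

With the noise suppressed before modeling, the downstream model no longer has to absorb it.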

Methods of statistical meta-analysis may be useful for combining information across studies. There are two principal varieties of meta-analytic estimation (Normand 1995). In a fixed-effects analysis the observed variation among estimates is attributable to the statistical error associated with the individual estimates. An important step is to compute a weighted average of unbiased estimates, where the weight for an estimate is computed by means of its standard error estimate. In a random-effects analysis one allows for additional variation, beyond statistical error, making use of a fitted random-effects model. [Pg.47]
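The fixed-effects weighted average described above is the classical inverse-variance pooling; a sketch with invented study estimates and standard errors:

```python
import numpy as np

# Hypothetical per-study effect estimates and their standard errors
estimates = np.array([1.2, 0.9, 1.1, 1.4])
std_errs = np.array([0.3, 0.2, 0.25, 0.4])

# Weight each study by the inverse of its variance
weights = 1.0 / std_errs**2
pooled = np.sum(weights * estimates) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))
```

The pooled standard error is smaller than any single study's, which is the point of combining information across studies.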

Bayesian statistics are applicable to analyzing uncertainty in all phases of a risk assessment. Bayesian or probabilistic induction provides a quantitative way to estimate the plausibility of a proposed causality model (Howson and Urbach 1989), including the causal (conceptual) models central to chemical risk assessment (Newman and Evans 2002). Bayesian inductive methods quantify the plausibility of a conceptual model based on existing data and can accommodate a process of data augmentation (or pooling) until sufficient belief (or disbelief) has been accumulated about the proposed cause-effect model. Once a plausible conceptual model is defined, Bayesian methods can quantify uncertainties in parameter estimation or model predictions (predictive inferences). Relevant methods can be found in numerous textbooks, e.g., Carlin and Louis (2000) and Gelman et al. (1997). [Pg.71]
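The data-augmentation process described above can be sketched with the simplest conjugate case, a Beta-Binomial update of belief about an effect probability; the batch counts below are invented for illustration:

```python
# Uniform Beta(1, 1) prior on the probability that the proposed
# cause-effect relation produces the effect
alpha, beta = 1.0, 1.0
batches = [(7, 3), (8, 2), (9, 1)]     # (effects observed, not observed)

for effects, non_effects in batches:
    # The posterior after one batch becomes the prior for the next,
    # i.e., evidence is pooled until sufficient belief accumulates
    alpha += effects
    beta += non_effects

posterior_mean = alpha / (alpha + beta)   # = 25/32 here
```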

In projecting results of short-term trials over patients' lifetimes, it is typical to present at least two of the many potential projections of lifetime treatment benefit. A one-time effect model assumes that the clinical benefit observed in the trial is the only clinical benefit received by patients. Under this model, after the trial has ended, the conditional probability of disease progression is the same in both arms of the trial. Given that it is unlikely that a therapy will lose all benefit as soon as one stops measuring it, this projection method is generally pessimistic compared with the actual outcome. A continuous-benefit effect model assumes that the clinical benefit observed in the trial continues throughout the patients' lifetimes. Under this model, the conditional probability of disease progression for treatment and control patients continues at the rate measured in the clinical trial. In contrast to the one-time model, this projection of treatment benefit is most likely optimistic compared with the treatment outcome. [Pg.48]
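The contrast between the two projections can be made concrete with a toy discrete-time calculation; the monthly hazards, trial length, and horizon below are invented for illustration:

```python
# Suppose the trial measured monthly progression hazards of 0.05 (treatment)
# and 0.08 (control) over 12 months, projected to a 60-month horizon
h_trt, h_ctl, trial_months, horizon = 0.05, 0.08, 12, 60

def prog_free(h_during, h_after, months_during, total):
    # Probability of remaining progression-free: trial period, then after
    p = (1 - h_during) ** months_during
    return p * (1 - h_after) ** (total - months_during)

# One-time effect: after the trial, treatment hazard reverts to control's
one_time = prog_free(h_trt, h_ctl, trial_months, horizon)
# Continuous benefit: the trial hazard persists for the whole horizon
continuous = prog_free(h_trt, h_trt, trial_months, horizon)
control = prog_free(h_ctl, h_ctl, trial_months, horizon)
```

The two projections bracket the plausible outcomes: control < one_time < continuous.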

The method provides a model for the hazard function. As in Section 6.6, let z be an indicator variable for treatment, taking the value one for patients in the active group and zero for patients in the control group, and let x1, x2, etc. denote the covariates. If we let h(t) denote the hazard rate as a function of t (time), the main effects model takes the form ... [Pg.204]
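The equation itself is cut off in this excerpt; a plausible reconstruction, using the standard proportional-hazards main-effects form (the coefficient symbols are assumed, not recovered from the excerpt), is:

```latex
h(t) = h_0(t)\,\exp\!\left(\beta z + \gamma_1 x_1 + \gamma_2 x_2 + \cdots\right)
```

where $h_0(t)$ is the baseline hazard, $\beta$ the treatment effect, and $\gamma_1, \gamma_2, \ldots$ the covariate coefficients.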

Prior to considering semiempirical methods designed on the basis of HF theory, it is instructive to revisit one-electron effective Hamiltonian methods like the Hückel model described in Section 4.4. Such models tend to involve the most drastic approximations, but as a result their rationale is tied closely to experimental concepts and they tend to be intuitive. One such model that continues to see extensive use today is the so-called extended Hückel theory (EHT). Recall that the key step in finding the MOs for an effective Hamiltonian is the formation of the secular determinant for the secular equation... [Pg.134]
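The secular equation referred to above (its display is cut off in this excerpt) takes the standard form, writing $H_{\mu\nu}$ for the effective Hamiltonian matrix elements and $S_{\mu\nu}$ for the overlaps of the basis functions:

```latex
\left| H_{\mu\nu} - E\, S_{\mu\nu} \right| = 0
```

Its roots $E$ are the orbital energies, and back-substitution gives the MO coefficients.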

Model potential methods and their utilization in atomic structure calculations are reviewed in [139], main attention being paid to analytic effective model potentials in the Coulomb and non-Coulomb approximations, to effective model potentials based on the Thomas-Fermi statistical model of the atom, as well as employing a self-consistent field core potential. Relativistic effects in model potential calculations are discussed there, too. Paper [140] has examples of numerous model potential calculations of various atomic spectroscopic properties. [Pg.260]

The main advantage of the effective potential method is the relative simplicity of the calculations, owing to the comparatively small number of semi-empirical parameters as well as the analytical form of the potential and wave functions; such methods usually ensure fairly high accuracy in the calculated energy levels and oscillator strengths. However, these methods, as a rule, can be successfully applied only to one- and two-valent atoms and ions. Therefore, the semi-empirical approach of least squares fitting is much more universal and powerful than model potential methods; it naturally and easily combines the accounting for relativistic and correlation effects. [Pg.260]

