Model covariate distribution

Realistic predictions of study results based on simulations can be made only with realistic simulation models. Three types of models are necessary to mimic real study observations: system (drug-disease) models, covariate distribution models, and study execution models. Often, these models can be developed from previous data sets or obtained from the literature on compounds with similar indications or mechanisms of action. To closely mimic the intended studies for which simulations are performed, the values of the model parameters (both structural and statistical elements) and the design used in the simulation of a proposed trial may differ from those originally derived from an analysis of previous data or other literature. Therefore, before models are used, their appropriateness as simulation tools must be evaluated to ensure that they capture observed data reasonably well [19-21]. However, in some circumstances it is not feasible to develop simulation models from prior data or by extrapolation from similar drugs. In these circumstances, what-if scenarios or sensitivity analyses can be performed to evaluate the impact of model uncertainty and study design on the trial outcome [22, 23]. [Pg.10]

Mould, D. R. Defining covariate distribution models for clinical trial simulation. In Kimko, H. C., Duffull, S. B., eds. Simulation for designing clinical trials. A pharmacokinetic-pharmacodynamic modeling perspective. (Drugs and the pharmaceutical sciences, volume 127) Marcel Dekker, New York, 2003. [Pg.28]

The covariate distribution model defines the distribution and correlation of covariates in the population to be studied. Its aim is to create a virtual patient population, including patient covariates, that reflects the target population for the simulations. This model is of great importance for realistic simulation of clinical trials. [Pg.477]

In general, a covariate distribution model considers only the covariates influencing the PK and/or PD of the compound of interest. For example, if age, sex, and weight are identified as the important covariates, then correlated covariates such as height and body mass index might not be incorporated. [Pg.477]
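As a rough illustration, the following Python sketch draws a virtual population with correlated covariates. All distribution parameters (means, variances, the age-weight covariance, and the sex effect on weight) are hypothetical placeholders; in practice they would be estimated from a clinical database.

```python
import numpy as np

rng = np.random.default_rng(42)
n_subjects = 1000

# Hypothetical joint parameters for age (years) and weight (kg);
# real values would be estimated from clinical data.
mean = np.array([55.0, 75.0])            # mean age, mean weight
cov = np.array([[144.0, 18.0],           # var(age) = 12^2, cov(age, weight)
                [18.0, 225.0]])          # var(weight) = 15^2

age, weight = rng.multivariate_normal(mean, cov, size=n_subjects).T

# Sex drawn as a Bernoulli variable; weight shifted by sex to preserve
# the joint distribution of sex and size.
sex = rng.binomial(1, 0.5, size=n_subjects)   # 1 = male, 0 = female
weight = weight + np.where(sex == 1, 5.0, -5.0)

population = {"age": age, "weight": weight, "sex": sex}
print({k: v[:3] for k, v in population.items()})
```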

For the development of appropriate covariate distribution models, pharmaceutical companies have huge amounts of data in their clinical databases. In addition, public databases are available, such as the Congestive Heart Failure Database (http://www.physionet.org/), derived from patients undergoing cardiac catheterization at Duke Medical Center during 1990-1996 (about 4000 patients, with data on demographics, risk factor histories, cardiac catheterization, EKG, cardiac scores, and follow-up). [Pg.477]

The Trial Simulator (Pharsight Corp., http://www.pharsight.com) is a comprehensive and powerful tool for the simulation of clinical trials. Population PK/PD models developed with the tools mentioned in Section 17.10.3 can be implemented in the Trial Simulator. In addition, treatment protocols, inclusion criteria, and observations can be specified, as can covariate distribution models, compliance models, and drop-out models. All of these models can be implemented via a graphical user interface. For the analysis of simulation results a special version of S-Plus is included, and results can also be exported in different formats, such as SAS. [Pg.481]

Each simulation included 100 hypothetical subjects. The model parameters used were derived from an adult population, and there were no covariate distribution models for the virtual trial population. Subjects were assumed to be healthy and on valproate monotherapy (31). The simulations assumed that the extended release (ER) formulation was administered once daily and the delayed release (DR) preparation was administered twice daily. Unbound and total valproic acid concentrations were simulated from the time of dose administration to 280 h, and the simulations were based on the administration of 1000 mg ER once daily, 500 mg DR twice daily, 2500 mg ER once daily, and 1000 mg DR twice daily. For once-daily regimens, simulation scenarios included doses taken 6, 12, 18, and 24 h later than scheduled and then two doses taken 24 h late (a replacement dose for the missed dose). For the twice-daily regimens, doses were simulated 3, 6, 9, and 12 h later than the scheduled times, and then two doses were simulated 12 h later than scheduled to mimic replacement dosing for a missed dose. More extreme cases, where two doses were delayed at various times or missed, were also simulated. [Pg.173]
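Such delayed-dose scenarios can be mimicked with a simple superposition scheme. The sketch below uses a generic one-compartment model with first-order absorption and made-up parameters (ka, CL, V), not the published valproate model, to show how a single delayed dose is introduced into a once-daily schedule.

```python
import numpy as np

# Hypothetical one-compartment, first-order absorption parameters.
ka, CL, V = 1.0, 0.5, 20.0          # 1/h, L/h, L
ke = CL / V

def conc(t_grid, dose_times, dose=1000.0):
    """Superpose single-dose profiles for a list of dose times (h)."""
    c = np.zeros_like(t_grid)
    for td in dose_times:
        dt = np.clip(t_grid - td, 0.0, None)
        c += (dose * ka / (V * (ka - ke))) * (np.exp(-ke * dt) - np.exp(-ka * dt))
    return c

t = np.linspace(0, 280, 2801)
on_schedule = conc(t, dose_times=np.arange(0, 280, 24.0))   # q24h dosing

# One dose taken 12 h late: shift a single scheduled dose time.
delayed = list(np.arange(0, 280, 24.0))
delayed[5] += 12.0
late_dose = conc(t, dose_times=delayed)
print(on_schedule.max(), late_dose.max())
```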

The second task involves defining the boundaries within which the model was derived and can be expected, without further motivation, to be an adequate description of the data. Within these bounds the model can replace the raw data (after all, the model is supposed to capture all salient features of the data), which is useful when presenting the knowledge summarized by the developed model to nonpharmacometricians. The definition of these bounds can be based on the inclusion/exclusion criteria of the study or on the realized covariate distribution. In the latter case, some of the exploratory plots from the before-analysis phase are useful. [Pg.210]

Complex pharmacokinetic/pharmacodynamic (PK/PD) simulations are usually developed in a modular manner. Each component or subsystem of the overall simulation is developed one-by-one and then each component is linked to run in a continuous manner (see Figure 33.2). Simulation of clinical trials consists of a covariate model and input-output model coupled to a trial execution model (10). The covariate model defines patient-specific characteristics (e.g., age, weight, clearance, volume of distribution). The input-output model consists of all those elements that link the known inputs into the system (e.g., dose, dosing regimen, PK model, PK/PD model, covariate-PK/PD relationships, disease progression) to the outputs of the system (e.g., exposure, PD response, outcome, or survival). In a stochastic simulation, random error is introduced into the appropriate subsystems. For example, between-subject variability may be introduced among the PK parameters, like clearance. The outputs of the system are driven by the inputs... [Pg.854]
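A minimal sketch of this modular structure in Python, with each submodel as a separate function: the covariate distributions, the allometric clearance model, and the error magnitudes are all assumptions for illustration, not the cited method.

```python
import numpy as np

rng = np.random.default_rng(1)

def covariate_model(n):
    """Patient-specific characteristics (hypothetical distributions)."""
    return {"wt": rng.normal(75, 15, n), "age": rng.normal(55, 12, n)}

def input_output_model(cov, dose=100.0):
    """Dose -> exposure, with between-subject variability on clearance."""
    cl = 5.0 * (cov["wt"] / 75.0) ** 0.75 * np.exp(rng.normal(0, 0.3, cov["wt"].size))
    return dose / cl          # AUC-like exposure metric

def trial_execution_model(exposure):
    """Add residual (observation) error to the simulated outputs."""
    return exposure * np.exp(rng.normal(0, 0.1, exposure.size))

cov = covariate_model(100)
obs = trial_execution_model(input_output_model(cov))
print(obs.mean(), obs.std())
```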

The covariate distribution models, which describe the characteristics of the population (weight, height, sex, race, etc.), must be determined and used for the creation of the study population. The virtual subjects are drawn from a probability distribution that can be one of many types (normal, lognormal, binomial, uniform) but that needs to be described in the study plan. For the assignment of sex, one must account for the proportion of patients who will be female versus male. Furthermore, when creating this population the joint distribution of variables such as height and weight or sex and size must be accounted for. This then leads to the execution model. [Pg.878]

In addition to the Markov model, compliance may be modeled more simply as a mixture (fraction) of patients who are either compliant or noncompliant (all-or-none) (24). Alternatively, similar to drawing covariate distributions from databases of representative populations, a nonmodel-based option for compliance is to draw from prior compliance data collected from a representative patient population. [Pg.886]
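Both model-based options can be sketched in a few lines. The transition probabilities and the compliant fraction below are hypothetical; a real application would estimate them from observed compliance data.

```python
import numpy as np

rng = np.random.default_rng(7)
n_doses = 60

# Two-state Markov compliance model: taking today's dose depends on
# whether yesterday's was taken (hypothetical transition probabilities).
p_take_given_took, p_take_given_missed = 0.95, 0.70

def markov_compliance(n):
    taken = np.empty(n, dtype=bool)
    taken[0] = True
    for i in range(1, n):
        p = p_take_given_took if taken[i - 1] else p_take_given_missed
        taken[i] = rng.random() < p
    return taken

# All-or-none mixture model: a fraction of patients is fully compliant.
def mixture_compliance(n, p_compliant=0.8):
    return np.full(n, rng.random() < p_compliant)

print(markov_compliance(n_doses).mean(), mixture_compliance(n_doses).mean())
```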

Model equations can be augmented with expressions accounting for covariates such as subject age, sex, weight, disease state, therapy history, and lifestyle (smoker or nonsmoker, IV drug user or not, therapy compliance, and others). If sufficient data exist, the parameters of these augmented models (or a distribution of the parameters consistent with the data) may be determined. Multiple simulations for prospective experiments or trials, with different parameter values generated from the distributions, can then be used to predict a range of outcomes and the related likelihood of each outcome. Such dose-exposure, exposure-response, or dose-response models can be classified as steady state, stochastic, of low to moderate complexity, predictive, and quantitative. A case study is described in Section 22.6. [Pg.536]
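A hedged sketch of such a stochastic simulation: on each replicate trial, fixed-effect values are drawn from an assumed estimation distribution, and a trial-level outcome is summarized across replicates to give a range of outcomes. The clearance model, the smoking effect, and the exposure target are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical augmented clearance model: CL depends on weight and
# smoking status, with parameter uncertainty represented by sampling
# the fixed effects from their (assumed) estimation distribution.
n_trials, n_subj = 200, 50
outcomes = []
for _ in range(n_trials):
    theta_cl = rng.normal(5.0, 0.25)           # uncertainty in typical CL
    theta_smoke = rng.normal(1.3, 0.05)        # uncertainty in smoking effect
    wt = rng.normal(75, 15, n_subj)
    smoker = rng.binomial(1, 0.3, n_subj)
    cl = theta_cl * (wt / 75) ** 0.75 * theta_smoke ** smoker
    auc = 100.0 / cl                           # dose/CL exposure metric
    outcomes.append((auc > 15).mean())         # fraction above a target

print(np.percentile(outcomes, [5, 50, 95]))    # range of trial outcomes
```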

These various covariance models are inferred directly from the corresponding indicator data i(x_i; z_c), i = 1, ..., N. The indicator kriging approach is said to be "non-parametric" in the sense that it draws solely from the data, not from any multivariate distribution hypothesis, as was the case for the multinormal approach. [Pg.117]

Equations (41.15) and (41.19) for the extrapolation and update of system states form the so-called state-space model. The solution of the state-space model was derived by Kalman and is known as the Kalman filter. The assumptions are that the measurement noise v(j) and the system noise w(j) are random and independent, normally distributed, white, and uncorrelated. This leads to the general formulation of a Kalman filter given in Table 41.10. Equations (41.15) and (41.19) account for the time dependence of the system. Equation (41.15) is the system equation, which tells us how the system behaves in time (here in j units). Equation (41.16) expresses how the uncertainty in the system state grows as a function of time (here in j units) if no observations were made. Q(j - 1) is the variance-covariance matrix of the system noise, which contains the variance of w. [Pg.595]
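The scalar special case below shows the two Kalman steps the text describes: the extrapolation that grows the state uncertainty by Q, and the update that weights the innovation by the gain. F, H, Q, and R are arbitrary demonstration values, not taken from the cited tables.

```python
import numpy as np

# Scalar state-space sketch: x(j) = F x(j-1) + w(j-1),  z(j) = H x(j) + v(j)
F, H = 1.0, 1.0
Q, R = 0.01, 0.25          # system-noise and measurement-noise variances

def kalman_filter(z):
    x, P = 0.0, 1.0        # initial state estimate and its variance
    estimates = []
    for zj in z:
        # Extrapolation: propagate the state and grow its uncertainty by Q.
        x, P = F * x, F * P * F + Q
        # Update: weight the innovation by the Kalman gain.
        K = P * H / (H * P * H + R)
        x, P = x + K * (zj - H * x), (1 - K * H) * P
        estimates.append(x)
    return np.array(estimates)

rng = np.random.default_rng(0)
truth = np.cumsum(rng.normal(0, 0.1, 100))    # hidden random walk
z = truth + rng.normal(0, 0.5, 100)           # noisy observations
print(np.abs(kalman_filter(z) - truth).mean())
```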

When the Gauss-Newton method is used to estimate the unknown parameters, we linearize the model equations and at each iteration solve the corresponding linear least squares problem. As a result, the estimated parameter values have linear least squares properties. Namely, the parameter estimates are normally distributed, unbiased (i.e., E(k̂) = k), and their covariance matrix is given by... [Pg.177]
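A small worked example of the Gauss-Newton iteration and the linearized covariance of the estimates, sigma^2 (J^T J)^(-1); the two-parameter exponential model and the noise level are chosen only for demonstration.

```python
import numpy as np

# Gauss-Newton sketch for y = k1 * (1 - exp(-k2 * t)) + noise.
def model(k, t):
    return k[0] * (1 - np.exp(-k[1] * t))

def jacobian(k, t):
    return np.column_stack([1 - np.exp(-k[1] * t),
                            k[0] * t * np.exp(-k[1] * t)])

rng = np.random.default_rng(5)
t = np.linspace(0.1, 10, 30)
y = model([2.0, 0.8], t) + rng.normal(0, 0.05, t.size)

k = np.array([1.0, 0.5])                           # initial guess
for _ in range(20):
    r = y - model(k, t)                            # residuals
    J = jacobian(k, t)
    k = k + np.linalg.lstsq(J, r, rcond=None)[0]   # linearized LS step

# Linearized covariance of the estimates: sigma^2 * (J^T J)^(-1)
J = jacobian(k, t)
sigma2 = np.sum((y - model(k, t)) ** 2) / (t.size - k.size)
cov_k = sigma2 * np.linalg.inv(J.T @ J)
print(k, np.sqrt(np.diag(cov_k)))                  # estimates, std errors
```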

It should be emphasized that for Markovian copolymers a knowledge of the values of structural parameters of this kind suffices to find the probability of any sequence U_k, i.e., to provide an exhaustive description of the microstructure of the chains of these copolymers with a given average composition. As for the composition distribution of Markovian copolymers, for any fraction of l-mers it obeys the Gaussian formula, whose covariance matrix elements are D_αβ/l, where the D_αβ depend solely on the values of the structural parameters [2]. The calculation of their dependence on time and on the stoichiometric and kinetic parameters of the reaction system permits a complete statistical description of the chemical structure of Markovian copolymers. The above reasoning reveals the extent to which mathematical modeling of copolymer synthesis is easier to perform when the alternation of units in macromolecules is known to obey Markovian statistics. [Pg.167]
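As a concrete illustration of why Markovian statistics make such calculations easy, the sketch below computes the probability of an arbitrary unit sequence from a transition matrix and its stationary composition. The two-unit transition matrix is hypothetical.

```python
import numpy as np

# Transition matrix v[a, b] = P(next unit is b | current unit is a)
# for a hypothetical two-unit Markovian copolymer.
v = np.array([[0.7, 0.3],
              [0.4, 0.6]])

# Stationary unit fractions: left eigenvector of v with eigenvalue 1.
w, vec = np.linalg.eig(v.T)
pi = np.real(vec[:, np.argmin(np.abs(w - 1))])
pi = pi / pi.sum()

def seq_prob(seq):
    """P(U_k) = pi[u1] * prod v[u_i, u_(i+1)] over the sequence."""
    p = pi[seq[0]]
    for a, b in zip(seq, seq[1:]):
        p *= v[a, b]
    return p

print(seq_prob([0, 0, 1, 0]))   # probability of the tetrad 0-0-1-0
```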

We also use a linearized covariance analysis [34, 36] to evaluate the accuracy of estimates, taking the measurement errors to be normally distributed with zero mean and known covariance matrix. Assuming that the mathematical model is correct and that our selected partitions can represent the true multiphase flow functions, the mean of the error in the estimates is zero and the covariance matrix of the errors in the parameter estimates is ... [Pg.378]

Time-to-event analysis in clinical trials is concerned with comparing the distributions of time to some event under various treatment regimens. Two standard approaches for comparing such distributions are the log-rank test and the Cox proportional hazards model. The Cox proportional hazards model is the more useful when you need to adjust the comparison for covariates. [Pg.259]
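Both analyses are routine in, for example, the Python lifelines package. The sketch below runs a log-rank test and a covariate-adjusted Cox fit on simulated data; the data-generating model, effect sizes, and censoring time are invented for illustration.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(2)
n = 200
df = pd.DataFrame({
    "treat": rng.binomial(1, 0.5, n),
    "age": rng.normal(60, 10, n),
})
# Hypothetical exponential event times with a treatment effect;
# administrative censoring at 5 years.
t_event = rng.exponential(4.0 / np.exp(-0.5 * df["treat"]))
df["time"] = np.minimum(t_event, 5.0)
df["event"] = (t_event <= 5.0).astype(int)

# Log-rank test: compares the two survival distributions directly.
a, b = df[df.treat == 1], df[df.treat == 0]
print(logrank_test(a.time, b.time, a.event, b.event).p_value)

# Cox model: adjusts the treatment comparison for covariates such as age.
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
cph.print_summary()
```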

In the previous development it was assumed that only random, normally distributed measurement errors, with zero mean and known covariance, are present in the data. In practice, process data may also contain other types of errors, caused by nonrandom events. For instance, instruments may not be adequately compensated, measuring devices may malfunction, or process leaks may be present. These biases are usually referred to as gross errors. The presence of gross errors invalidates the statistical basis of data reconciliation procedures. It is also impossible, for example, to build an adequate process model on the basis of erroneous measurements or to assess production accounting correctly. To avoid these shortcomings, we need to check for the presence of gross systematic errors in the measurement data. [Pg.128]

As was shown, the conventional method for data reconciliation is weighted least squares, in which the adjustments to the data are weighted by the inverse of the measurement noise covariance matrix so that the model constraints are satisfied. The main assumption of the conventional approach is that the errors follow a normal (Gaussian) distribution. When this assumption is satisfied, conventional approaches provide unbiased estimates of the plant states. The presence of gross errors violates the assumptions of the conventional approach and makes the results invalid. [Pg.218]
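A minimal reconciliation sketch for a linear constraint, assuming a hypothetical splitter balance x1 = x2 + x3; it also shows the standardized constraint residual often used as a first screen for gross errors.

```python
import numpy as np

# Weighted least squares data reconciliation: adjust measurements y so
# the linear constraints A x = 0 hold exactly, weighting adjustments by
# the inverse measurement covariance. Hypothetical flowsheet: stream 1
# splits into streams 2 and 3, so x1 - x2 - x3 = 0.
A = np.array([[1.0, -1.0, -1.0]])
y = np.array([100.0, 61.0, 42.0])       # raw (inconsistent) flows
Sigma = np.diag([4.0, 1.0, 1.0])        # measurement noise covariance

# Closed-form solution for linear constraints:
#   x_hat = y - Sigma A^T (A Sigma A^T)^(-1) (A y)
lam = np.linalg.solve(A @ Sigma @ A.T, A @ y)
x_hat = y - Sigma @ A.T @ lam
print(x_hat, A @ x_hat)                 # reconciled flows, ~0 residual

# A large standardized constraint residual flags possible gross errors.
r = A @ y
z = r / np.sqrt(np.diag(A @ Sigma @ A.T))
print(z)                                # compare with N(0,1) quantiles
```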

Couple it with a model for the joint scalar dissipation rate that predicts the correct scalar covariance matrix, including the effect of the initial scalar length-scale distribution. [Pg.284]

For higher-order reactions, a model must be provided to close the covariance source terms. One possible approach to developing such a model is to extend the FP model to account for scalar fluctuations in each wavenumber band (instead of accounting only for the total fluctuations). In any case, correctly accounting for the spectral distribution of the scalar covariance chemical source term is a key requirement for extending the LSR model to reacting scalars. [Pg.345]

When it comes to the covariance structure, however, the problems become acute. Total inversion requires that a joint probability distribution be known for observations and parameters. This is usually not a problem for the observations. The covariance structure among the parameters of the model is more obscure: how do we estimate the a priori correlation coefficient between age and initial Sr ratio in our isochron example without seriously infringing on the objectivity of the error assessment? When the a priori covariance structure between the observations and the model parameters is estimated, the chances that we actually resort to unsupported and unjustified speculation become immense. Total inversion must be well understood in order for it not to end up as a formal exercise of consistency between a priori and a posteriori estimates. [Pg.310]

We mentioned earlier, in Section 13.1, that if we did not have censoring then an analysis would probably proceed by taking the log of survival time and undertaking the unpaired t-test. The above model simply develops that idea by incorporating covariates etc. through a standard analysis of covariance. If we assume that ln T is also normally distributed, then the coefficient c represents the (adjusted) difference in the mean (or median) survival times on the log scale. Note that for the normal distribution the mean and the median are the same; it is more convenient to think in terms of medians. To return to the original scale for survival time we then anti-log c, i.e., compute e^c, and this quantity is the ratio (active divided by control) of the median survival times. Confidence intervals can be obtained in a straightforward way for this ratio. [Pg.207]
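A small numerical sketch of this analysis, assuming no censoring: ordinary least squares on log survival time with treatment and a covariate, followed by exponentiating the treatment coefficient to recover the ratio of medians. All simulated values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100
treat = rng.binomial(1, 0.5, n)
age = rng.normal(60, 10, n)

# Hypothetical lognormal survival times (no censoring, as in the text):
# log T = b0 + c * treat + b_age * (age - 60) + normal error
log_t = 1.0 + 0.4 * treat + 0.01 * (age - 60) + rng.normal(0, 0.5, n)

# Analysis of covariance on log(T) via ordinary least squares.
X = np.column_stack([np.ones(n), treat, age - 60])
coef, *_ = np.linalg.lstsq(X, log_t, rcond=None)
c = coef[1]
print(c, np.exp(c))   # e^c: ratio of median survival times (active/control)
```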

