Sparse data situation

The FO method was the first algorithm available in NONMEM and has been evaluated by simulation and used for PK and PD analysis [9]. Overall, the FO method showed good performance in sparse data situations. However, there are situations where the FO method does not yield adequate results, especially in data-rich situations. For these situations, improved approximation methods such as first-order conditional estimation (FOCE) and the Laplacian method became available in NONMEM. The difference between these methods and the FO method lies in the way the linearization is performed. [Pg.460]
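As a sketch of that difference (standard nonlinear mixed effects notation, not specific to any NONMEM version): write the j-th observation of individual i as y_ij = f(θ, η_i, x_ij) + ε_ij. The FO method expands f to first order in the random effects η_i about their population mean of zero, whereas FOCE expands about the conditional (empirical Bayes) estimates η̂_i:

```latex
% FO: first-order expansion about \eta_i = 0
y_{ij} \approx f(\theta, 0, x_{ij})
  + \left.\frac{\partial f}{\partial \eta_i}\right|_{\eta_i = 0}\,\eta_i
  + \epsilon_{ij}

% FOCE: first-order expansion about the conditional estimate \hat{\eta}_i
y_{ij} \approx f(\theta, \hat{\eta}_i, x_{ij})
  + \left.\frac{\partial f}{\partial \eta_i}\right|_{\eta_i = \hat{\eta}_i}\,
    (\eta_i - \hat{\eta}_i)
  + \epsilon_{ij}
```

Roughly speaking, the Laplacian method goes one step further and also retains the second-order term of the same expansion.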

Whether to model a pharmacodynamic model parameter on an arithmetic or an exponential scale is largely up to the analyst. Ideally, theory would help guide the choice, and there are certainly cases when an arithmetic scale is more appropriate than an exponential scale, such as when the baseline pharmacodynamic parameter has no constraint on individual values. More often than not, however, the choice is left to the analyst and is made somewhat arbitrarily. In a data-rich situation where each subject can be fit individually, one can examine the distribution of the fitted parameter estimates and see whether a histogram of the model parameter follows an approximately normal or log-normal distribution. If the distribution is approximately normal, an arithmetic scale seems more appropriate; if it is approximately log-normal, an exponential scale seems more appropriate. In the sparse data situation, one may fit both an arithmetic-scale and an exponential-scale model and compare the objective function values; the scale of the model with the smaller objective function value is then used. [Pg.212]
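In the data-rich case, that distributional check is easy to script. A minimal sketch, using hypothetical individual estimates and scipy's Shapiro-Wilk test as one possible normality check:

```python
import numpy as np
from scipy import stats

# Hypothetical individual estimates of a baseline PD parameter, one per
# subject, obtained by fitting each subject separately (data-rich case).
rng = np.random.default_rng(0)
estimates = rng.lognormal(mean=np.log(100.0), sigma=0.3, size=60)

# Shapiro-Wilk normality test on the raw and on the log-transformed
# values; a clearly better fit on the log scale points to a log-normal
# distribution, i.e. an exponential-scale random effect.
p_raw = stats.shapiro(estimates).pvalue
p_log = stats.shapiro(np.log(estimates)).pvalue

scale = "exponential" if p_log > p_raw else "arithmetic"
print(f"p(raw) = {p_raw:.3f}, p(log) = {p_log:.3f} -> {scale} scale")
```

In practice one would inspect the histograms directly as well; the test p-values are only a convenient summary.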

In the field of pharmacokinetics, there has been much recent work on developing methods for estimating interindividual variation in kinetic model parameters, particularly in sparse data situations where there are... [Pg.265]

Mixed effect modeling deals with the situation in between. Inter- and intraindividual variability are separated and calculated within the same step. Interindividual random effects are calculated for those parameters for which this information can be drawn from the data set. In general, only one residual error is calculated. The method is very well suited to sparse and unbalanced data situations. [Pg.749]
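In the usual notation (a generic sketch, not tied to any particular software), the two levels of variability enter the model as

```latex
y_{ij} = f(\theta, \eta_i, x_{ij}) + \epsilon_{ij},
\qquad
\eta_i \sim \mathcal{N}(0, \Omega),
\qquad
\epsilon_{ij} \sim \mathcal{N}(0, \sigma^2)
```

where the fixed effects θ describe the typical individual, the random effects η_i carry the interindividual variability, and ε_ij carries the intraindividual (residual) variability; the single residual error mentioned above corresponds to the single variance σ².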

Mixed effect modeling is a very flexible one-step method. It can cope with many situations. It is the only method that can deal with sparse data and... [Pg.749]

Worldwide data are not readily available, as many nations do not publish the results of their animal residue monitoring programs. The best available data are those published regularly by the Food Safety and Inspection Service (FSIS) of the U.S. Department of Agriculture (USDA). It is possible to go back over data for many years and demonstrate improvements in the residue situation; however, the records for the past few years are the important ones, as they are representative of current or recent events. Since the publication of worldwide residue data is at best sparse and inconsistent, this chapter has made use of the regularly published residue data from the FSIS/USDA surveys, which are available on the Internet. The assumption made in this chapter, perhaps with a certain naiveté, is that international residue occurrence is similar to that found by the FSIS/USDA. This assumption is based upon the frequency of residues found in meat products imported into the U.S. [Pg.272]

After merging of the single zones, data sets of approx. 100-300 independent reflections can be obtained as described in chapter 2.5. In a first step, a kinematical structure refinement should be performed using the program SHELXL [13]. The temperature factors for FAPPO were chosen as U = 0.06 Ų for C, N and O, and U = 0.10 Ų for H atoms, apart from H atoms situated at N, with U = 0.12 Ų (electron scattering factors [20]). To prevent the molecules from being distorted, a refinement in which the whole molecule was kept rigid was performed. This also improves the usually poor parameter/reflection ratio. In the case of modification I we obtained R-values of 31% (481 unique reflections with I > 2σ) for the 100 kV data and of 25% (385 unique reflections with I > 2σ). The sparse 100 kV data of modification II were not analysed quantitatively. From the 300 kV data we obtained an R-value of 23% (226 unique reflections with I > 2σ). [Pg.418]

Using the PPK approach in the development of a new drug has the advantage that the relevant pharmacokinetic parameters for a reasonably large population can be obtained from only a few blood samples per subject. The PPK approach is the method of choice in all situations where only sparse and unbalanced data can be obtained. This situation exists when the PK needs to be studied in elderly, critically ill, and pediatric patients, but also very often in preclinical studies investigating the effects of the drug in animals. [Pg.747]

There is now a large amount of data on environmental levels and trends for BDEs. To develop a fuller picture of the situation regarding the full range of BFRs currently in use, some of this effort should in future be diverted to the study of novel compounds for which data are sparse or nonexistent. [Pg.18]

From previous chapters it is clear that the evaluation of pharmacokinetic parameters is an essential part of understanding how drugs function in the body. To estimate these parameters, studies are undertaken in which transient data are collected. These studies can be conducted in animals at the preclinical level and through all stages of clinical trials, and can be data rich or sparse. No matter what the situation, there must be some common means by which to communicate the results of the experiments. Pharmacokinetic parameters serve this purpose. Thus, in the field of pharmacokinetics, the definitions and formulas for the parameters must be agreed upon, and the methods used to calculate them understood. This understanding includes assumptions and domains of validity, for the utility of the parameter values depends upon them. This chapter focuses on the assumptions and domains of validity of the two commonly used methods: noncompartmental and compartmental analysis. Compartmental models have been presented in earlier chapters. This chapter expands upon them and presents a comparison of the two methods. [Pg.89]
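As a concrete illustration of the noncompartmental side, the sketch below computes two of the most commonly reported parameters, AUC by the linear trapezoidal rule and terminal half-life from a log-linear fit of the last points. The data, the number of terminal points, and the units are hypothetical:

```python
import numpy as np

# Hypothetical concentration-time profile after a single dose.
t = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 12.0, 24.0])    # time (h)
c = np.array([12.0, 18.0, 15.0, 9.0, 4.1, 2.0, 0.5])   # conc (mg/L)

# AUC(0 -> tlast) by the linear trapezoidal rule.
auc = np.sum(np.diff(t) * (c[1:] + c[:-1]) / 2.0)

# Terminal rate constant lambda_z from a log-linear regression of the
# last points; the terminal half-life is ln(2) / lambda_z.
n_terminal = 3
slope, _ = np.polyfit(t[-n_terminal:], np.log(c[-n_terminal:]), 1)
half_life = np.log(2) / -slope

print(f"AUC(0-tlast) = {auc:.1f} mg*h/L, terminal t1/2 = {half_life:.1f} h")
```

A real analysis would, as the excerpt stresses, check the assumptions behind these formulas (e.g. the choice of terminal points and linear versus log-linear interpolation) before reporting the values.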

Not much is known about the thermophysical properties of liquid metals, especially transport properties such as chemical and thermal diffusivities. The existing data are sparse, and the scatter makes it difficult to determine accurately the temperature dependence of these properties. This situation was the motivation for Froberg's experiment on Spacelab-1, in which he measured the temperature dependence of the self-diffusion of Sn from 240°C to 1250°C. He found that the diffusion coefficients were 30-50% lower than the accepted values and seemed to follow a T² dependence, as opposed to the Arrhenius behavior observed in solid-state diffusion. [Pg.1636]
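For reference, the two temperature laws being contrasted are, in standard notation (D₀ a pre-exponential factor, Q an activation energy):

```latex
% Arrhenius behavior, as observed in solid-state diffusion
D(T) = D_0 \exp\!\left(-\frac{Q}{RT}\right)

% Power-law behavior suggested by the microgravity data
D(T) \propto T^{2}
```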

These blind predictions of the FEBEX data do not make a strong case that, for this particular geomechanical situation, a coupled analysis is entirely necessary. The granite in this case is sparsely fractured, and most of the inflow occurs at the lamprophyre and other more fractured areas. Also, the rock mass is sufficiently nonporous and saturated that inelastic deformation of the rock matrix is not a significant issue for repository performance. However, the exercise was very valuable for developing rationale for modeling the more complex coupled problems associated with the introduction of the bentonite barrier and the heat of the simulated waste. [Pg.130]

Identify the resources available. What computational methods can be applied, and what parameters and data are needed to implement a particular method? Critical properties? Heat capacities? Vapor pressures? Parameters for a PvTx equation of state? Parameters in models for excess properties? When available data are sparse (the usual situation), unreliable, or conflicting, then set upper and lower bounds on the property and do a sensitivity analysis (which input data have the largest impact on the calculated property?). Consideration should also be given to the resources needed to set up the calculation (pencil and paper, calculator commands, computer software, original computer codes) and the hardware needed to carry them out (brain, fingers, calculator, PC, workstation). [Pg.469]
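A minimal sketch of the bounding and sensitivity step, using the Antoine vapor-pressure equation as the computational method; the coefficient bounds, temperature, and units here are purely illustrative:

```python
import itertools

def antoine_p_sat(T, A, B, C):
    """Antoine equation: log10(P_sat) = A - B / (T + C)."""
    return 10.0 ** (A - B / (T + C))

# Hypothetical lower/upper bounds on each Antoine coefficient,
# e.g. reflecting conflicting literature values.
bounds = {"A": (3.9, 4.1), "B": (1170.0, 1210.0), "C": (-72.0, -68.0)}
T = 350.0  # K (illustrative)

# Evaluate the property at every corner of the parameter box to get
# upper and lower bounds on the calculated vapor pressure.
corners = [antoine_p_sat(T, a, b, c)
           for a, b, c in itertools.product(*bounds.values())]
print(f"P_sat bounds at {T} K: {min(corners):.3f} .. {max(corners):.3f} bar")

# One-at-a-time sensitivity: move each coefficient between its bounds
# while holding the others at their midpoints, and record the swing.
mid = {k: sum(v) / 2.0 for k, v in bounds.items()}
for name, (lo, hi) in bounds.items():
    span = abs(antoine_p_sat(T, **{**mid, name: hi})
               - antoine_p_sat(T, **{**mid, name: lo}))
    print(f"sensitivity to {name}: {span:.3f} bar")
```

The coefficient with the largest swing is the one whose data most deserve further scrutiny or measurement.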

There is one immediate difficulty, however. What happens if there are not sufficient data available with which to draw the probability distribution? This is, in fact, just the situation which has faced standards committees rewriting codes of practice into the limit state format. Firstly, it must be forcibly argued that surveys of the various classes of structural type must be undertaken in order to remedy the situation and obtain some data, but inevitably this takes time and money and competes with other demands upon limited resources. Surveys have been undertaken, but the information is still rather sparse. The British codes of practice for buildings, written in the limit state format, have in fact been written to take as characteristic loads the same mixture of median, maximal, and statistical estimates of dead loads, imposed loads, and wind loads as used for the limiting stress and load factor methods. This is plainly inconsistent and has led to some confusion where the basis of the method has not been clearly understood. [Pg.64]
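For context, a truly statistical characteristic load in the limit state format would be defined as a fractile of the measured distribution, which is exactly what cannot be done without survey data. A common definition (a sketch, assuming a normal distribution; the factor 1.64 corresponds to the 95% fractile):

```latex
x_k = \mu + 1.64\,\sigma
```

where μ and σ are the mean and standard deviation of the surveyed load distribution.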

Finally, mention must be made of the situation in which data needed to support requirement estimates are sparse. For several of the nutrients we continue to lack good markers of satiation of need and, as a result,... [Pg.105]

The agreement between optical, muonic, and electron scattering measurements tends to be quite satisfactory. We have said nothing so far about isotope shift measurements by electron K X-ray measurements. These shifts, very small relative to the line widths, require great precision to yield reliable data. Although a number of experiments have been done, the additional information has been sparse and at times in disaccord with the optical data. Recent high-precision experiments in lead appear to be changing this situation, and the results may supplement isotope shift data obtained by other techniques. In Table 2 we show the RIS (Eq. 14) obtained in a number of different optical and K X-ray transitions. It is seen that... [Pg.529]

