Big Chemical Encyclopedia


Intensive sampling data

The ability to estimate population models is available in many software programs. NONMEM (Icon, Ellicott City, MD) has been widely used to estimate population models arising from both sparse and intensely sampled data. Other programs include WinNonMix (Pharsight Corp., Palo Alto, CA), Kinetica 2000 (InnaPhase Corp., Philadelphia, PA), and PopKinetics (SAAM Institute, Seattle, WA). ADAPT II and WinNonlin have focused on PK/PD models and have been combined with Bayesian approaches to estimate population models. [Pg.467]

The annual dry depositional fluxes of RGHg and Hg-P were estimated using the intensive sampling data (Table III). The dry depositional flux was greater than the wet flux at SC and was about half of the wet flux at STP. Therefore, it is clear that the dry deposition is an important pathway for Hg to enter the Chesapeake Bay, with higher fluxes associated with urban air. [Pg.239]
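The flux estimate described above can be illustrated with a minimal sketch. A dry depositional flux is commonly computed as air concentration times an assumed deposition velocity, F = C x v_d, then annualized; the numbers below are hypothetical and are not the values from the Chesapeake Bay study.

```python
# Illustrative sketch (not the study's actual calculation): dry flux as
# concentration x deposition velocity, annualized.  All values hypothetical.

SECONDS_PER_YEAR = 365.25 * 24 * 3600

def annual_dry_flux(conc_ng_m3: float, v_d_cm_s: float) -> float:
    """Annual dry deposition flux in ug m^-2 yr^-1.

    conc_ng_m3 : air concentration of the Hg species (ng/m^3)
    v_d_cm_s   : assumed deposition velocity (cm/s)
    """
    flux_ng_m2_s = conc_ng_m3 * (v_d_cm_s / 100.0)   # ng m^-2 s^-1
    return flux_ng_m2_s * SECONDS_PER_YEAR / 1000.0  # -> ug m^-2 yr^-1

# e.g. RGHg at 0.05 ng/m^3 with v_d = 0.5 cm/s (hypothetical inputs)
print(round(annual_dry_flux(0.05, 0.5), 2))  # ~7.9 ug m^-2 yr^-1
```

The deposition velocity is the dominant uncertainty in such estimates; it depends on the Hg species and on surface and meteorological conditions.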

At 24 hours, a strong decrease in the number of topo I-positive cells and in topo I staining intensity appeared in tumors treated with irinotecan, whereas there was almost complete disappearance of topo I staining in tumors treated with edotecarin (Figure 11D). Morphologic analysis of treated samples showed an absence of mitotic figures after both treatments (mitotic index: 2% in controls, 0% in treated samples; data not shown). [Pg.90]

The objectives were met by a study plan designed to acquire data for seventeen weeks at seven sites throughout the valley. This included both weekly monitoring and daily intensive sampling of aerosols. The samples were weighed and then analyzed for elemental composition on the UC Davis cyclotron. [Pg.328]

When the antioxidants were used in the cooked/stored samples, data indicated that they were very effective in inhibiting lipid oxidation and MFD. The chemical and off-flavor indicators were reduced and the on-flavor notes were increased. Thus, phenolic-type primary antioxidants that function as free radical scavengers are very effective tools for preventing lipid oxidation and MFD in ground beef. It should also be noted that the intensity of the desirable flavor notes remained at very high levels, which meant that the patties retained their beefy tastes. Therefore, for an antioxidant to be highly effective, it should not only prevent lipid oxidation, but it should also retain the desirable flavor properties of the food commodity. [Pg.65]

Because of the very low scattered intensity, the data at the shortest sampling interval is usually the poorest in quality. Arbitrary renormalization of the data followed by the graphical representation outlined above is most likely to amplify errors in the data analysis, focus attention on the inherent errors in the construction of the composite relaxation function, and give undue importance to the worst data. When the data is as limited in quality as it is for this problem, any method of analysis should be as numerically stable as possible and the maximum allowable smoothing of the data should be employed. This procedure may obscure subtle features, but only very high quality data could reliably demonstrate their presence anyway. At the present time a conservative approach seems more sensible. [Pg.138]

Kennedy, B. M., Lynch, M. A., Reynolds, J. H., Smith, S. R. (1985) Intensive sampling of noble gases in fluids at Yellowstone: I. Early overview of the data; regional patterns. Geochim. Cosmochim. Acta, 49, 1251-61. [Pg.264]

Initial PXD patterns of phase B materials (encountered, as above, when fumed silica was used as the SiO2 source, or when products were crystallized in the presence of LiOH [47]) were clearly multiphasic, but by cross-comparing reflection appearances and intensities in data from three different samples, it proved possible to assign the main peaks in the PXD patterns to one of three phases, of which the predominant was labelled B. Attempts at indexing using the treor program [55] based on the positions of a number of the peaks ascribed to phase B yielded two possible unit cells, one tetragonal (with a = 8.812(1) Å and c = 12.460(2) Å) and the other monoclinic (with a = 6.950(3) Å, b = 12.467(5) Å, c = 4.911(1) Å and β = 116.24(2)°). A repeated synthesis of phase B yielded a much cleaner PXD pattern that showed a well-defined (if weak) peak at 2θ ≈ 12.3°, which indicated the former cell to... [Pg.612]
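A quick consistency check of the kind used in such indexing work can be sketched as follows: the observed peak near 2θ ≈ 12.3° matches the (101) reflection of the tetragonal cell. Cu Kα radiation (λ = 1.5406 Å) is assumed here, since the excerpt does not state the wavelength.

```python
import math

# Sketch: is a peak near 2-theta ~ 12.3 deg consistent with the tetragonal
# cell a = 8.812 A, c = 12.460 A reported for phase B?
# Cu K-alpha1 radiation is assumed; the source does not state the wavelength.

WAVELENGTH = 1.5406  # Angstrom (assumed Cu K-alpha1)

def d_tetragonal(h, k, l, a, c):
    """d-spacing (A) for reflection (hkl) of a tetragonal cell."""
    inv_d2 = (h * h + k * k) / a**2 + l * l / c**2
    return 1.0 / math.sqrt(inv_d2)

def two_theta_deg(d, wavelength=WAVELENGTH):
    """Bragg angle 2-theta (degrees) for spacing d."""
    return 2.0 * math.degrees(math.asin(wavelength / (2.0 * d)))

d101 = d_tetragonal(1, 0, 1, a=8.812, c=12.460)
print(round(d101, 3), round(two_theta_deg(d101), 2))  # d ~ 7.19 A, 2-theta ~ 12.3 deg
```

Under the Cu Kα assumption, the (101) spacing of about 7.19 Å falls at 2θ ≈ 12.3°, consistent with the observed peak.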

WATOX-84, the third Intensive, used data from samples collected onboard the RV Knorr as it traveled between North America and Africa. These data were used to support the gas and aerosol data from WATOX-82 and WATOX-83 and to test new shipboard precipitation-collection instruments. [Pg.47]

WATOX-82, -83, and -84 sampled air in the marine boundary layer. These sea-level measurements gave no information about upper-air transport. To overcome this deficiency, the fourth through the seventh Intensives incorporated data collected onboard two NOAA research aircraft, the NOAA WP-3D and the NOAA KingAir. (See Table III for the specific species measured.) Both aircraft carried sampling and analytical equipment designed to determine the vertical and horizontal chemical structure of the atmosphere. [Pg.47]

The XRD patterns of the calcosilicate zeolite-like crystalline material CAS-1 have been indexed (Figure 1). The sample is highly crystalline, giving sharp, intense reflections. The indexed XRD data for CAS-1 are given in Table 2. The data suggest that CAS-1 may be a crystalline material with a new framework topology. [Pg.236]

Another approach to data based on low-level counting uses the method of maximum likelihood. The likelihood of a set of data is the probability of obtaining that particular set, given the chosen probability distribution model. The idea is to determine the parameters that maximize the likelihood of the sample data. The methodology is simple, but the implementation may require intensive mathematics [12]. The method has been used, for instance, to treat data on production rates [12] and... [Pg.196]
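The idea can be shown with a minimal sketch, assuming the counts follow a Poisson distribution (an illustrative model choice, not the specific treatment of ref. [12]): the log-likelihood of the observed counts is maximized over the rate parameter, and for Poisson data the maximum sits at the sample mean.

```python
import math

# Minimal maximum-likelihood sketch for counting data, assuming Poisson-
# distributed counts (hypothetical low-level counts below).

counts = [3, 0, 2, 1, 4, 2, 1, 3]

def log_likelihood(lam, data):
    """Poisson log-likelihood: sum over n of  n*log(lam) - lam - log(n!)."""
    return sum(n * math.log(lam) - lam - math.lgamma(n + 1) for n in data)

# Crude 1-D search over a grid of candidate rates.
grid = [0.01 * i for i in range(1, 1001)]
mle = max(grid, key=lambda lam: log_likelihood(lam, counts))

print(mle, sum(counts) / len(counts))  # grid MLE vs the analytic answer (the mean)
```

For realistic problems (background subtraction, detector efficiency, decay during counting) the likelihood is no longer maximized in closed form, which is where the "intensive mathematics" of the excerpt comes in.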

PK and PD have been linked by many models, sometimes mechanistic and at other times empirical. These models are especially useful for better understanding the relationship between dose strategy and response, especially when applied by stochastic simulation. The population approach can be applied to multiple types of data (for example, both intensely and sparsely sampled data, and preclinical through Phase 4 clinical data) and therefore has found great utility when applied to PK/PD modeling. [Pg.6]

The PPK approach allows one to combine heterogeneous types of data from varying sources. For example, one could pool data from several different studies, study centers, variable biomatrices (plasma plus serum), intensely plus sparsely sampled data, or experimental plus observational data. Combining differing data sets often increases the power to identify multicompartment or nonlinear models, to incorporate additional covariates, or to gain precision in the estimation of the model. [Pg.266]

Traditional approaches used in the estimation of TPAR have been compared with the PpbB approach and the recently proposed RS approach. The traditional approaches (independent time points and naive data averaging) are inferior to the sampling/resampling approaches. The RS approach performed better than the PpbB approach because of its unique algorithm, and fewer replications are required for robust estimation of TPAR. The computer-intensive methods provide estimates of TPAR with measures of dispersion and uncertainty. The RS approach is the method of choice for obtaining robust estimates of TPAR when analyzing extremely sparsely sampled data. [Pg.1049]
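The general resampling idea behind such methods can be sketched as follows: with destructive, one-sample-per-animal designs, profiles are rebuilt by resampling animals within each time point and recomputing the AUC, yielding a measure of dispersion. This is a generic bootstrap with hypothetical data, not the specific PpbB or RS algorithms of the excerpt.

```python
import random

# Generic bootstrap sketch for a sparsely (destructively) sampled design:
# resample animals within each time point, rebuild the mean profile, and
# recompute the trapezoidal AUC.  Concentrations below are hypothetical.

random.seed(1)

times = [0.5, 1, 2, 4, 8]   # h
conc = {                    # 4 animals per time point, one sample each
    0.5: [8.1, 7.4, 9.0, 7.9],
    1:   [6.2, 5.8, 6.9, 6.0],
    2:   [4.1, 3.6, 4.5, 3.9],
    4:   [2.0, 1.7, 2.3, 1.9],
    8:   [0.6, 0.4, 0.7, 0.5],
}

def trapezoid_auc(ts, cs):
    return sum((cs[i] + cs[i + 1]) / 2 * (ts[i + 1] - ts[i])
               for i in range(len(ts) - 1))

def bootstrap_auc(n_boot=2000):
    aucs = []
    for _ in range(n_boot):
        means = [sum(random.choices(conc[t], k=len(conc[t]))) / len(conc[t])
                 for t in times]
        aucs.append(trapezoid_auc(times, means))
    return aucs

aucs = sorted(bootstrap_auc())
lo, hi = aucs[int(0.025 * len(aucs))], aucs[int(0.975 * len(aucs))]
print(round(lo, 1), round(hi, 1))  # approximate 95% bootstrap interval for AUC(0.5-8 h)
```

The point here is only that resampling yields an uncertainty measure for the AUC that the naive-averaging approach cannot provide from a single composite profile.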

Sheiner and Beal (1983) presented the first study on the role of experimental design in one of their seminal papers on nonlinear mixed effects models. They showed that increasing the number of subjects improves parameter estimation accuracy, but that increasing the number of samples per subject does not improve estimation to the same degree, when the data were simulated from a 1-compartment model. Hence, it is better to have sparse data from more subjects than intensive pharmacokinetic data from fewer subjects. They also showed that relatively accurate and precise parameter estimates (except for residual variance) can be obtained using FO-approximation with as few as 50 subjects having a single sample collected per subject. Keep in mind, however, that this was a very simple pharmacokinetic model with only two estimable parameters. [Pg.290]
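The sparse-versus-rich trade-off can be illustrated with a deliberately simplified simulation: a 1-compartment IV-bolus model with between-subject variability, fit by a naive pooled log-linear regression rather than the FO-approximation mixed-effects method the original study used. All parameter values are hypothetical.

```python
import math
import random

# Simplified illustration of the design trade-off: many subjects with one
# sample each vs few subjects with five samples each, for a 1-compartment
# IV-bolus model.  Naive pooled log-linear regression stands in for the
# FO-approximation method of the original study; all values hypothetical.

random.seed(7)
DOSE_OVER_V, K_POP = 100.0, 0.1   # true typical values

def simulate(n_subjects, samples_per_subject):
    ts, ys = [], []
    for _ in range(n_subjects):
        k_i = K_POP * math.exp(random.gauss(0, 0.2))   # between-subject variability
        for _ in range(samples_per_subject):
            t = random.uniform(1, 20)
            c = DOSE_OVER_V * math.exp(-k_i * t) * math.exp(random.gauss(0, 0.1))
            ts.append(t)
            ys.append(math.log(c))
    return ts, ys

def pooled_k(ts, ys):
    """Slope of ln C vs t over all pooled observations -> estimate of -k."""
    n = len(ts)
    tbar, ybar = sum(ts) / n, sum(ys) / n
    sxy = sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys))
    sxx = sum((t - tbar) ** 2 for t in ts)
    return -sxy / sxx

k_sparse = pooled_k(*simulate(60, 1))   # many subjects, one sample each
k_rich = pooled_k(*simulate(12, 5))     # few subjects, five samples each
print(round(k_sparse, 3), round(k_rich, 3))  # both should land near 0.1
```

With equal total sample counts both designs recover the typical elimination rate here, but only the many-subject design would let a mixed-effects analysis estimate between-subject variability well, which is the point of the excerpt.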

The Laue method therefore appears very effective at sampling data between d*max and d*max/2. These RLPs are largely recorded as singles but, of course, their intensities need wavelength normalisation if they are to be used for structure analysis. [Pg.288]

RNA isolated from the cells or tissue of interest and from control samples is labeled with a fluorescent dye and allowed to bind in a quantitative manner to complementary sequences on the microarray. The relative fold-difference in expression of the sequences in the test samples can be estimated by comparing their fluorescence intensities, measured by a laser scanner, with those of the control samples. Data management and mining methods applied to microarray data analysis have essentially been correlation-based approaches, applying methods developed for the analysis of data that are more highly constrained than data at the transcriptional level. [Pg.556]
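The comparison step can be sketched in a few lines: relative expression is estimated per probe as the ratio of test to control fluorescence intensity, usually reported on a log2 scale. The gene names and intensities below are hypothetical and assumed already background-corrected and normalized.

```python
import math

# Sketch of the intensity comparison: per-gene log2 fold change between
# test and control channels.  All names and values are hypothetical, and
# intensities are assumed background-corrected and normalized.

test_intensity = {"geneA": 5200.0, "geneB": 480.0, "geneC": 1500.0}
control_intensity = {"geneA": 1300.0, "geneB": 1900.0, "geneC": 1480.0}

def log2_fold_change(test, control):
    return {g: math.log2(test[g] / control[g]) for g in test}

fc = log2_fold_change(test_intensity, control_intensity)
for gene, lfc in sorted(fc.items()):
    # a common (arbitrary) call threshold of |log2 FC| > 1, i.e. 2-fold
    status = "up" if lfc > 1 else "down" if lfc < -1 else "unchanged"
    print(f"{gene}: log2 FC = {lfc:+.2f} ({status})")
```

Real pipelines precede this step with background subtraction and between-array normalization, which the excerpt's "quantitative manner" glosses over.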

