Process development compile data

Thus, it is clear that the safety surveillance process is an iterative one. It draws on multiple data sources, whether screening large regulatory databases, company databases, or manufacturing lot-related AEs for potential problems. The surveillance process screens the data using both intraproduct and interproduct methods. The objective is to identify topics for further review, to develop a case definition, to compile a case series, and then to characterize that case series. [Pg.548]
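Screening of AE databases of this kind is commonly done with disproportionality statistics. The sketch below uses the proportional reporting ratio (PRR) as one such example; the excerpt does not name a specific statistic, so the choice of PRR and all of the counts shown are illustrative assumptions, not details from the source.

```python
# Minimal sketch of an interproduct disproportionality screen using the
# proportional reporting ratio (PRR). Both the statistic and the counts
# are illustrative assumptions, not details from the excerpt above.

def prr(a: int, b: int, c: int, d: int) -> float:
    """Proportional reporting ratio from a 2x2 contingency table.

    a: reports of the AE of interest for the product of interest
    b: reports of all other AEs for the product of interest
    c: reports of the AE of interest for all other products
    d: reports of all other AEs for all other products
    """
    rate_product = a / (a + b)
    rate_others = c / (c + d)
    return rate_product / rate_others

# Hypothetical counts pulled from a large regulatory AE database.
signal = prr(a=30, b=970, c=120, d=98880)
print(f"PRR = {signal:.1f}")  # values well above ~2 are often flagged for review
```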

Process validation starts with the identification of product quality attributes and justification of acceptance criteria, followed by a review of the risk analysis, execution of process development runs, and compilation of clinical material manufacturing data to set specifications that account for process variability [11]. There is a greater focus on process validation for downstream steps than for upstream steps because the downstream steps are associated with virus removal. Process validation is just one approach used to control virus contamination; others include cell bank characterization, in-process testing, inactivation procedures, control of raw materials, containment, and postmarket surveillance [6]. [Pg.332]

The basis for all CAT models is a fundamental understanding of the transit flow of drugs in the gastrointestinal tract. Yu et al. [61] compiled published human intestinal transit flow data from more than 400 subjects, and their work showed the human mean small intestinal transit time to be 199 min and that seven compartments were optimal for describing the small intestinal transit process with a compartmental approach. In later work, Yu et al. [58] showed that between 1 and 14 compartments were needed to optimally describe the individual small intestinal transit times in six subjects, but in agreement with the earlier study, the mean number of compartments was found to be seven. This compartmental transit model was further developed into a compartmental absorption and transit (CAT) model [60, 63]. The assumptions made for this CAT model were that no absorption occurs in the stomach or in the colon and that dissolution is instantaneous. Yu et al. [59] extended the CAT model... [Pg.496]
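The transit chain described above can be sketched as a series of first-order compartments with rate constant k_t = n/T_si = 7/199 min^-1. In the sketch below, the absorption rate constant and the direct dosing of dissolved drug into the first compartment are simplifying assumptions for illustration; they are not values from Yu et al.

```python
# Minimal sketch of the seven-compartment small-intestinal transit chain.
# Transit between compartments is first order with k_t = 7/199 min^-1 from
# the compiled transit data; the absorption constant K_A is a hypothetical
# placeholder. Mass leaving compartment 7 passes to the colon, where, per
# the stated assumptions, no absorption occurs, so it is not tracked.
import numpy as np
from scipy.integrate import solve_ivp

N = 7                  # optimal number of SI compartments (Yu et al.)
K_T = N / 199.0        # transit rate constant, 1/min
K_A = 0.05             # hypothetical first-order absorption constant, 1/min

def cat_rhs(t, m):
    """m[0..6]: drug mass in SI compartments; m[7]: cumulative mass absorbed."""
    dm = np.zeros_like(m)
    dm[0] = -(K_T + K_A) * m[0]              # dose enters compartment 1 directly
    for i in range(1, N):
        dm[i] = K_T * m[i - 1] - (K_T + K_A) * m[i]
    dm[N] = K_A * m[:N].sum()                # absorption from all SI compartments
    return dm

m0 = np.zeros(N + 1)
m0[0] = 1.0             # unit dose; instantaneous dissolution assumed
sol = solve_ivp(cat_rhs, (0.0, 600.0), m0)
print(f"fraction absorbed at 600 min: {sol.y[N, -1]:.3f}")
```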

Palm's group has continued to develop statistical procedures for treating solvent effects. In a previous paper, a set of nine basic solvent parameter scales was proposed. Six of them were then purified via subtraction of contributions dependent on other scales. This set of solvent parameters has now been applied to an extended compilation of experimental data for solvent effects on individual processes. Overall, the new procedure gives a significantly better fit than the well-known equations of Kamlet, Abboud, and Taft, or of Koppel and Palm. [Pg.338]
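Correlation analyses of this kind amount to multiparameter linear regression of a solvent-dependent quantity on several solvent parameter scales at once. A minimal sketch follows; the parameter matrix and response vector are wholly hypothetical, and the actual nine-scale set of Palm's group is not reproduced here.

```python
# Minimal sketch of a Koppel-Palm / Kamlet-Taft style multiparameter
# correlation: a measured solvent-dependent quantity is regressed on
# several solvent parameter scales simultaneously by least squares.
# All numbers below are hypothetical placeholders.
import numpy as np

# rows = solvents, columns = solvent parameter scales
# (e.g., polarity, polarizability, acidity, basicity); illustrative values
X = np.array([
    [0.60, 0.21, 0.98, 0.66],
    [0.35, 0.22, 0.08, 0.48],
    [0.00, 0.26, 0.00, 0.00],
    [1.00, 0.21, 1.17, 0.47],
    [0.44, 0.28, 0.00, 0.76],
    [0.54, 0.20, 0.83, 0.75],
])
y = np.array([2.1, 1.3, 0.2, 3.4, 1.8, 2.0])   # hypothetical log k values

# augment with an intercept column and solve the overdetermined system
A = np.column_stack([np.ones(len(y)), X])
coef, residuals, rank, _ = np.linalg.lstsq(A, y, rcond=None)
print("intercept and scale coefficients:", np.round(coef, 3))
```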

In Fig. 1, the various elements involved in the development of detailed chemical kinetic mechanisms are illustrated. Generally, the objective of this effort is to predict macroscopic phenomena, e.g., species concentration profiles and heat release in a chemical reactor, from knowledge of fundamental chemical and physical parameters, together with a mathematical model of the process. Some of the fundamental chemical parameters of interest are the thermochemistry of species, i.e., standard-state heats of formation (ΔHf(T₀)), absolute entropies (S(T₀)), and temperature-dependent specific heats (Cp(T)), along with the rate parameters A, n, and E for the associated elementary reactions (see Eq. (1)). As noted above, evaluated compilations exist for the determination of these parameters. Fundamental physical parameters of interest may be the Lennard-Jones parameters (ε/k_B, σ), dipole moments (μ), polarizabilities (α), and rotational relaxation numbers (z_rot) that are necessary for the calculation of transport parameters such as the viscosity (μ) and thermal conductivity (k) of the mixture and the species diffusion coefficients (D_ij). These data, together with their associated uncertainties, are then used in modeling the macroscopic behavior of the chemically reacting system. The model is then subjected to sensitivity analysis to identify the elements that most strongly influence its predictions. [Pg.99]
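For reference, the rate parameters A, n, and E mentioned above enter through the modified Arrhenius expression k(T) = A T^n exp(-E/RT), presumably the form of Eq. (1). The parameter values in the sketch below are hypothetical placeholders, not evaluated data.

```python
# Minimal sketch of evaluating an elementary rate constant from the
# parameters A, n, and E in the modified Arrhenius form
#   k(T) = A * T**n * exp(-E / (R * T))
# The parameter values are hypothetical placeholders.
import math

R = 8.314  # gas constant, J/(mol K)

def rate_constant(A: float, n: float, E: float, T: float) -> float:
    """Modified Arrhenius rate constant at temperature T (K)."""
    return A * T**n * math.exp(-E / (R * T))

# hypothetical parameters for some elementary reaction
k_1000 = rate_constant(A=1.0e13, n=0.5, E=120e3, T=1000.0)
print(f"k(1000 K) = {k_1000:.3e}")
```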

To ensure comprehensive coverage of foods and relevant flavonoids, compilation of the flavonoid composition database followed a preset development profile (Figure 4.1). This was a multistage process that evolved from a review of two major food composition databases and from other early-stage nutrient databases such as those for vitamin and... [Pg.222]

In the first step of the partnership, BASF provided its eco-efficiency expertise (see Section 6.1.3 for a detailed discussion of BASF eco-efficiency analysis) to help Moroccan textile dye works operate in a more efficient and environmentally friendly way. Based on the eco-efficiency analysis and many years of experience with products and processes in the textile sector, BASF developed a software package, which was given to Moroccan companies free of charge. The software tool substantially simplifies the compilation of an eco-efficiency analysis, using key technical data to calculate how the manufacturing process can be improved. [Pg.420]

Validation is one of the most difficult aspects of environmental QSAR development because of the comparatively small size of the databases involved. Cross-validation has been useful in assessing the effectiveness of a model. In this method, one compound is removed from the database, the equation is recalculated, and the toxicity of the omitted compound is estimated. The process is repeated for all compounds in the dataset and the results are tabulated. In this manner, an estimate of the accuracy of prediction for continuous data and of the rate of misclassification for categorical data can be compiled. A more useful estimate of the validity of a QSAR model is its ability to predict the toxicity of new compounds. Generally, this is difficult to accomplish in a statistically significant way because of the slow accumulation of new data that meet the criteria used in the modeling process, and because of the associated expense. [Pg.140]
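The leave-one-out procedure described above can be sketched directly. In the sketch below, a plain least-squares model stands in for the QSAR, and the descriptors and toxicity values are randomly generated for illustration only.

```python
# Minimal sketch of leave-one-out cross-validation: each compound is
# removed in turn, the model is refit on the remainder, and the toxicity
# of the omitted compound is predicted. A linear least-squares model
# stands in for the QSAR; the data are synthetic.
import numpy as np

def loo_predictions(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Return the leave-one-out prediction for every compound."""
    n = len(y)
    preds = np.empty(n)
    A = np.column_stack([np.ones(n), X])        # descriptors + intercept
    for i in range(n):
        keep = np.arange(n) != i                # drop compound i
        coef, *_ = np.linalg.lstsq(A[keep], y[keep], rcond=None)
        preds[i] = A[i] @ coef                  # predict the omitted compound
    return preds

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))                    # hypothetical descriptors
y = X @ np.array([0.8, -0.5, 0.3]) + rng.normal(scale=0.1, size=20)
pred = loo_predictions(X, y)
q2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"cross-validated q^2 = {q2:.3f}")        # tabulated accuracy of prediction
```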

These studies also revealed the difficulty of transferring training sets between different NMR instruments without some type of standardization to minimize spectral variation. In many instances, large data sets may be compiled from individual NMR experiments run over several months (if not years), as well as from data collected on different instruments and by different groups. There is no guarantee that the performance of an NMR instrument over time, or of different NMR instruments, is equivalent. Gislason and co-workers have recently reported a study on the protocol for transferring PLS methods between low-field process NMR spectrometers, in which they found that a piecewise direct standardization method was required for accurate model transfer. This study appears to be one of the few concerning instrumental transfer of chemometric models in NMR. The development of efficient methods that allow for accurate transfer and combination of NMR spectral data from a variety of sources is an important area for future research. [Pg.54]
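A minimal sketch of how piecewise direct standardization works is given below: each channel of the master instrument is regressed on a small window of channels from the second (slave) instrument, using transfer samples measured on both. The synthetic spectra and the window width are illustrative assumptions, not details from the Gislason study.

```python
# Minimal sketch of piecewise direct standardization (PDS) for transferring
# spectra between instruments. The spectra are synthetic; a real application
# would use the same transfer samples measured on both spectrometers.
import numpy as np

def pds_transform(master: np.ndarray, slave: np.ndarray, window: int = 2) -> np.ndarray:
    """Return a matrix F such that slave @ F approximates master.

    master, slave: (n_samples, n_channels) spectra of the same transfer samples.
    window: half-width of the slave-channel window used per master channel.
    """
    n_chan = master.shape[1]
    F = np.zeros((n_chan, n_chan))
    for j in range(n_chan):
        lo, hi = max(0, j - window), min(n_chan, j + window + 1)
        b, *_ = np.linalg.lstsq(slave[:, lo:hi], master[:, j], rcond=None)
        F[lo:hi, j] = b                 # each local regression fills one band of F
    return F

rng = np.random.default_rng(1)
master = rng.normal(size=(15, 50))                        # synthetic master spectra
slave = 0.9 * master + 0.05 * rng.normal(size=(15, 50))   # distorted slave copies
F = pds_transform(master, slave, window=2)
err = np.linalg.norm(slave @ F - master) / np.linalg.norm(master)
print(f"relative reconstruction error: {err:.3f}")
```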

