
The Model Development Process

While clever, most books on modeling don't use this simple mnemonic and instead present a more formal process initially proposed by Box and Hill (1967). They stated that the process of model-building can be thought of as involving three stages  [Pg.10]

Although these stages have been repeatedly reported throughout the statistical literature, they are really only the middle part of the process. Chatfield (1988) expanded the number of stages to five, which include the original three by Box and Hill  [Pg.10]

Models are not static—they change over time as more data and experience with the drug are accumulated. Basic assumptions made about a model may later be shown to be inaccurate. Hence, a more comprehensive model development process is  [Pg.10]

The first step of the process should be to identify the problem, which has already been extensively discussed. The next step is to identify the relevant variables to collect. Data are usually not cheap to collect. There is ordinarily a fine line between the money available to perform an experiment and the cost of collecting the data. We want to collect as much data as possible, but if a variable is not needed then perhaps it should not be collected. Once the variables to be collected are identified, the accuracy and bias of the measurement methods should be examined, because collected data are of no value if they are biased or inaccurate. Sometimes, however, the modeler is not involved in choosing which variables to collect and is brought in to analyze data after an experiment is already completed. It may be that the data needed to solve the problem were not collected, or that only some of the data were collected, in which case some creative thinking may be needed to obtain a solution. [Pg.10]

Rarely, however, will the data be in a format suitable for analysis. The more complex the data, the more pre-processing will be needed to put the data in a format suitable for analysis. The next step, then, is to look at the data and clean them as needed. Check the quality of the data. Have the data been entered to suitable precision, for instance, two places behind the decimal? Perform descriptive statistics and look at histograms to examine for discordant results. It is not uncommon in large multi-national clinical trials for clinical chemistry data to be of different units between the United States [Pg.10]
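
As a minimal sketch of this data-scrubbing step (the column names, example values, and the mg/dL to mmol/L conversion for glucose are assumptions for illustration, not from the source), descriptive statistics, a histogram check for discordant results, and a unit-harmonization pass might look like this:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical clinical-chemistry data reported in mixed units
df = pd.DataFrame({
    "glucose":       [92.0, 101.0, 5.4, 6.1, 88.0, 4.9],
    "glucose_units": ["mg/dL", "mg/dL", "mmol/L", "mmol/L", "mg/dL", "mmol/L"],
})

# Descriptive statistics to screen for implausible values and precision problems
print(df["glucose"].describe())

# Histogram to look for discordant results (a bimodal shape can flag mixed units)
df["glucose"].hist(bins=20)
plt.xlabel("glucose")
plt.show()

# Harmonize units: convert US conventional units (mg/dL) to SI units (mmol/L)
mgdl = df["glucose_units"].str.lower() == "mg/dl"
df.loc[mgdl, "glucose"] = df.loc[mgdl, "glucose"] / 18.0
df.loc[mgdl, "glucose_units"] = "mmol/L"
print(df)
```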


Some factors or covariates may cause deviations from the population typical value generated from system models so that each individual patient may have different PK/PD/disease progression profiles. The relevant covariate effects on drug/disease model parameters are identified in the model development process. Clinical trial simulations should make use of input/output models incorporating... [Pg.10]

To compare and evaluate the different kinds of models created during the model development process, graphical and statistical methods should be applied. A good description of the model building process can be found elsewhere [14]. [Pg.461]
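
As one illustration of what such a statistical comparison might look like (the data are simulated and the mono- versus bi-exponential candidate models are assumptions, not taken from [14]), two candidate models can be fitted and ranked with a simple information criterion:

```python
import numpy as np
from scipy.optimize import curve_fit

# Simulated concentration-time data (illustration only)
rng = np.random.default_rng(0)
t = np.linspace(0.25, 24, 30)
y = 10 * np.exp(-0.3 * t) + 2 * np.exp(-0.05 * t) + rng.normal(0, 0.2, t.size)

def mono(t, a, k):                    # one-exponential candidate model
    return a * np.exp(-k * t)

def bi(t, a, ka, b, kb):              # two-exponential candidate model
    return a * np.exp(-ka * t) + b * np.exp(-kb * t)

def aic(y, pred, n_par):              # least-squares AIC: n*ln(RSS/n) + 2*p
    n = y.size
    rss = np.sum((y - pred) ** 2)
    return n * np.log(rss / n) + 2 * n_par

for name, f, p0 in [("mono-exponential", mono, [10, 0.1]),
                    ("bi-exponential", bi, [10, 0.3, 2, 0.05])]:
    popt, _ = curve_fit(f, t, y, p0=p0, maxfev=10000)
    print(f"{name}: AIC = {aic(y, f(t, *popt), len(popt)):.1f}")
```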

The modeling toolkit ModKit aims at simplifying the model development process by providing reusable model building blocks [52]. Further, ModKit provides interactive support for the user during the assembly phase of the model building blocks. [Pg.485]

Simulation has emerged as an important tool for extrapolating from the scenarios that generated the data for model development activities into scenarios of potential interest for the drug development program (15). As in the model development process, effectiveness in the case of simulations is the correspondence between the model and reality. Efficiency hinges on the applicability of the results as the target scenarios step outside the boundaries of the initial model. Here we need to ask about the extensibility of the simulation results and how broadly applicable the results are. [Pg.915]

One goal of population pharmacokinetic models is to relate subject-specific characteristics or covariates, e.g., age, weight, or race, to individual pharmacokinetic parameters, such as clearance. There are many different methods to determine whether such a relationship exists, some of which were discussed previously in the chapter, and they can be characterized as either manual or automated in nature. With manual methods, the user controls the model development process. In contrast, automated methods proceed based on an algorithm defined by the user a priori and a computer, not the user, controls the model development process. Consequently, the automated methods are generally considered somewhat less subjective than manual procedures. The advantage of the automated method is its supposed lack of bias and ability to rapidly test many different models. The advantage of the manual method is that the user... [Pg.231]
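
A minimal sketch of the automated idea, assuming a simple linear regression surrogate for the population model and an AIC-based forward-selection rule (the covariates, data, and criterion are illustrative assumptions, not any specific published algorithm):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data set: individual log-clearance values and candidate covariates
rng = np.random.default_rng(1)
n = 200
data = pd.DataFrame({
    "WT": rng.normal(70, 12, n),
    "AGE": rng.normal(45, 15, n),
    "CRCL": rng.normal(90, 25, n),
})
data["logCL"] = 1.0 + 0.01 * data["CRCL"] + rng.normal(0, 0.2, n)

candidates = ["WT", "AGE", "CRCL"]
selected = []
best_aic = smf.ols("logCL ~ 1", data).fit().aic     # intercept-only baseline

improved = True
while improved and candidates:
    improved = False
    # Try adding each remaining covariate and keep the one that helps most
    scores = {cov: smf.ols("logCL ~ " + " + ".join(selected + [cov]), data).fit().aic
              for cov in candidates}
    best_cov = min(scores, key=scores.get)
    if scores[best_cov] < best_aic:                  # accept only if the criterion improves
        best_aic = scores[best_cov]
        selected.append(best_cov)
        candidates.remove(best_cov)
        improved = True

print("Selected covariates:", selected)
```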

Today, good modeling practices dictate that a data analysis plan (DAP) be written prior to any modeling being conducted and prior to any unblinding of the data. The reason is that model credibility is increased in the eyes of outside reviewers when there is impartiality in the model development process. The DAP essentially provides a blueprint for the analysis. It should provide details... [Pg.267]

In summary, tobramycin pharmacokinetics were best characterized with a 2-compartment model where CL was proportional to CrCL and VI was proportional to body weight. BSV in CL was 29% with 13% variability across occasions. Residual variability was small, 14%, which compares well to assay variability of <7.5%. The model was robust to estimation algorithm and was shown to accurately predict an internally derived validation data set not used in the model development process. [Pg.336]
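
A minimal sketch of how individual parameters could be generated under the structure summarized above; the typical values, covariate distributions, and sample sizes are assumptions for illustration, not the published tobramycin estimates:

```python
import numpy as np

rng = np.random.default_rng(2)
n_subj, n_occ = 10, 2
theta_cl = 0.06      # assumed typical CL per unit CrCL (L/h per mL/min) -- illustrative
theta_v1 = 0.26      # assumed typical V1 per kg body weight (L/kg)      -- illustrative

crcl = rng.normal(90, 20, n_subj)     # creatinine clearance, mL/min
wt = rng.normal(70, 10, n_subj)       # body weight, kg

eta_bsv = rng.normal(0, 0.29, n_subj)             # ~29% between-subject variability on CL
eta_iov = rng.normal(0, 0.13, (n_subj, n_occ))    # ~13% between-occasion variability on CL

cl = theta_cl * crcl[:, None] * np.exp(eta_bsv[:, None] + eta_iov)   # L/h, per occasion
v1 = theta_v1 * wt                                                   # L

# ~14% proportional residual error applied to a hypothetical model prediction
pred = 5.0                                        # mg/L, illustrative prediction
obs = pred * (1 + rng.normal(0, 0.14, n_subj))

print("CL (L/h) by occasion:\n", cl.round(2))
print("V1 (L):", v1.round(1))
print("Observed concentrations (mg/L):", obs.round(2))
```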

Chapter 1 closely reviews the model development process. After a formal foundation of bond graphs, principles and fundamental concepts of port-based physical systems modelling in terms of bond graphs are pointed out and discussed in a clarifying way that may help model developers avoid falling into traps, overlooking assumptions and the context dependency of models, or mixing concepts that may give rise to confusion, e.g. with regard to ideal concepts and physical components. A key issue of Chapter 1 is that it emphasises the distinction between configuration structure, physical structure, and conceptual structure. [Pg.1]

In support of thorough and auditable testing of the models, a Test Plan was developed. Firstly, as part of the model development process, all models were reviewed by domain experts at different stages of the project. [Pg.99]

Analysts. The above is a formidable barrier. Analysts must use limited and uncertain measurements to operate and control the plant and understand the internal process. Multiple interpretations can result from analyzing limited, sparse, suboptimal data. Both intuitive and complex algorithmic analysis methods add bias. Expert and artificial intelligence systems may ultimately be developed to recognize and handle all of these limitations during the model development. However, the current state of the art requires the intervention of skilled analysts to draw accurate conclusions about plant operation. [Pg.2550]

Before setting about the task of developing such a model, the product development process requires definition, along with an indication of its key stages, so that the appropriate tools and techniques can be applied (Booker et al., 1997). In the approach presented here in Figure 5.11, the product development phases are activities generally defined in the automotive industry (Clark and Fujimoto, 1991). QFD Phase 1 is used to understand and quantify the importance of customer needs and requirements, and to support the definition of product and process requirements. The FMEA process is used to explore any potential failure modes and their likely Occurrence, Severity and Detectability. DFA/DFM techniques are used to minimize part count, facilitate ease of assembly and project component manufacturing and assembly costs, and are primarily aimed at cost reduction. [Pg.266]
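
As a brief illustration of the FMEA scoring mentioned above (the failure modes and the 1-10 ratings are invented for illustration), each mode is rated for Occurrence, Severity and Detectability and ranked by the resulting risk priority number:

```python
# Invented failure modes with assumed 1-10 ratings (illustration only)
failure_modes = [
    {"mode": "Fastener loosens under vibration", "O": 4, "S": 7, "D": 5},
    {"mode": "Seal leaks at low temperature",    "O": 2, "S": 8, "D": 6},
    {"mode": "Connector mis-assembled",          "O": 6, "S": 5, "D": 3},
]

for fm in failure_modes:
    fm["RPN"] = fm["O"] * fm["S"] * fm["D"]      # risk priority number

# Rank the failure modes from highest to lowest risk
for fm in sorted(failure_modes, key=lambda fm: fm["RPN"], reverse=True):
    print(f"RPN {fm['RPN']:>3}  {fm['mode']}")
```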

Most of the models developed to describe the electrochemical behavior of conducting polymers approach the problem through porous structure, percolation thresholds between oxidized and reduced regions, phase changes including nucleation processes, etc. (see Refs. 93, 94, 176, 177, and references therein). Most of them have been successful in describing some specific behavior of the system, but they fail when the... [Pg.372]

The tools for in silico toxicology are broadly applied in the drug development process. The particular use of these tools is clearly context dependent; the context includes the quality of the prediction and the applicability domain of the model. [Pg.475]
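
One common applicability-domain check, sketched here under assumed descriptors and a simple distance-to-centroid rule (neither is prescribed by the source), flags query compounds that fall outside the region covered by the training data:

```python
import numpy as np

rng = np.random.default_rng(3)
X_train = rng.normal(0, 1, (200, 5))      # descriptors of the model's training set
X_query = rng.normal(0, 2, (10, 5))       # descriptors of new compounds to be predicted

centroid = X_train.mean(axis=0)
train_dist = np.linalg.norm(X_train - centroid, axis=1)
threshold = train_dist.mean() + 2 * train_dist.std()    # simple distance-based cutoff

query_dist = np.linalg.norm(X_query - centroid, axis=1)
inside_domain = query_dist <= threshold
print("Within applicability domain:", inside_domain)
```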

Modelling is a process of continuous development, in which it is generally advisable to start off with the simplest conceptual representation of the process and to build in more and more complexity as the model develops. Starting off with the process in its most complex form often leads to confusion. A process of continuous validation is necessary, in which the model theory, data, equation formulation and model predictions must all be examined repeatedly. In formulating any model, it is therefore important to... [Pg.2]

To construct the reference model, the interpretation system required routine process data collected over a period of several months. Cross-validation was applied to detect and remove outliers. Only data corresponding to normal process operations (that is, when top-grade product is made) were used in the model development. As stated earlier, the system ultimately involved two analysis approaches, both reduced-order models that capture dominant directions of variability in the data. A PLS analysis using two loadings explained about 60% of the variance in the measurements. A subsequent PCA analysis on the residuals showed that five principal components explain 90% of the residual variability. [Pg.85]
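
A minimal sketch of this reduced-order modeling sequence, using simulated data in place of the plant measurements (which are not available here): a two-latent-variable PLS model followed by a PCA on the X-residuals:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA

# Simulated "routine process data": 20 measurements, one quality variable
rng = np.random.default_rng(4)
X = rng.normal(size=(500, 20))
y = X[:, :3] @ np.array([1.0, -0.5, 0.3]) + rng.normal(0, 0.1, 500)

# Step 1: PLS model with two latent variables for the dominant directions of variability
pls = PLSRegression(n_components=2).fit(X, y)
X_hat = pls.inverse_transform(pls.transform(X))     # reconstruction in X-space
residuals = X - X_hat

# Step 2: PCA on the residuals to capture the remaining variability
pca = PCA(n_components=5).fit(residuals)
print("X-variance explained by 2 PLS components:",
      round(1 - residuals.var() / X.var(), 2))
print("Residual variance explained by 5 PCs:",
      round(pca.explained_variance_ratio_.sum(), 2))
```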

As can be seen, the catalytic process over a zeolite-supported cation, or an oxide-supported cation, can be considered as supported homogeneous catalysis, insofar as adsorbed reactants and products behave like reactive ligands. The model developed for lean DeNOx catalysts over supported cations (function 3), as well as this supported homogeneous catalysis approach, is also suitable for stoichiometric mixtures (TWC) comprising CO and H2 as reductants over supported transition metal cations [20-22], ... [Pg.148]

Physiologically based pharmacokinetic models provide a format to analyze relationships between model parameters and physicochemical properties for a series of drug analogues. Quantitative structure-pharmacokinetic relationships based on PB-PK model parameters have been pursued [12,13] and may ultimately prove useful in the drug development process. In this venue, such relationships, through predictions of tissue distribution, could expedite drug design and discovery. [Pg.75]

Figure 1 presents an overview of the model testing/validation process as developed at the Pellston workshop. A distinction is drawn between validation of empirical versus theoretical models as discussed by Lassiter (4). In reality, many models are combinations of empiricism and theory, with empirical formulations providing process descriptions or interactions lacking a sound, well-developed theoretical basis. The importance of field data is shown in Figure 1 for each step in the model validation process; considerations in comparing field data with model predictions will be discussed in a later section. [Pg.154]

USEtox. Environmental concentrations can be obtained for the theoretical case of 1 kg emitted into the urban air (default USEtox) or considering the emissions obtained with the developed scenarios (Chap. 1) [51]. It is important to highlight that these concentration values are calculated by the model considering processes such as advection, transportation, and degradation among the different scales implemented by USEtox. [Pg.360]

Although the basic mechanisms are generally agreed on, the difficult part of the model development is to provide the model with the rate constants, physical properties and other model parameters needed for computation. For copolymerizations, there is only meager data available, particularly for cross-termination rate constants and Trommsdorff effects. In the development of our computer model, the considerable data available on relative homopolymerization rates of various monomers, relative propagation rates in copolymerization, and decomposition rates of many initiators were used. They were combined with various assumptions regarding Trommsdorff effects, cross termination constants and initiator efficiencies, to come up with a computer model flexible enough to treat quantitatively the polymerization processes of interest to us. [Pg.172]
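
One standard ingredient of such copolymerization models, shown here only as an illustrative sketch (the reactivity ratios are generic literature-style values, not taken from the source), is the Mayo-Lewis instantaneous copolymer composition equation relating monomer feed composition to copolymer composition:

```python
def instantaneous_composition(f1, r1, r2):
    """Mayo-Lewis equation: mole fraction F1 of monomer 1 incorporated into the
    copolymer, given the feed mole fraction f1 and reactivity ratios r1, r2."""
    f2 = 1.0 - f1
    return (r1 * f1**2 + f1 * f2) / (r1 * f1**2 + 2 * f1 * f2 + r2 * f2**2)

# Illustrative, literature-style reactivity ratios (roughly styrene / MMA)
r1, r2 = 0.52, 0.46
for f1 in (0.2, 0.5, 0.8):
    print(f"feed f1 = {f1:.1f}  ->  copolymer F1 = "
          f"{instantaneous_composition(f1, r1, r2):.3f}")
```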

