Big Chemical Encyclopedia


Validation data collection

To achieve a high degree of validity, data collected from several sources were compared and cross-checked [300, 301]. Answers from respondents were checked against ... [Pg.96]

In the context of validation and PAT, designed experiments are used throughout the development journey to find and quantify cause-and-effect relationships. The data collected to develop the product and its specifications then also support validation, so development and validation data collection proceed concurrently. Thus, the factors causing variability are identified, quantified, and controlled. Every point on Chapman's validation timeline benefits from DOE data collection, which is used most effectively in the design stage and in prospective performance qualification (4). The concept of a timeline appears in the PAT guidance and in the GMP revision as a lifecycle. [Pg.97]

A new one-dimensional microwave imaging approach based on successive reconstruction of dielectric interfaces is described. The reconstruction is obtained using the complex reflection coefficient data collected over some standard waveguide band. The problem is considered in terms of the optical path length to ensure better convergence of the iterative procedure. Then, the reverse coordinate transformation to the final profile is applied. The method is valid for highly contrasted discontinuous profiles and shows low sensitivity to the practical measurement error. Some numerical examples are presented. [Pg.127]

The raw data collected during the experiment are then analyzed. Frequently the data must be reduced or transformed to a more readily analyzable form. A statistical treatment of the data is used to evaluate the accuracy and precision of the analysis and to validate the procedure. These results are compared with the criteria established during the design of the experiment, and then the design is reconsidered, additional experimental trials are run, or a solution to the problem is proposed. When a solution is proposed, the results are subject to an external evaluation that may result in a new problem and the beginning of a new analytical cycle. [Pg.6]
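The statistical treatment mentioned above can be illustrated with a short sketch. The replicate results, accepted reference value, and significance level below are hypothetical; the point is only to show how accuracy (bias against a reference) and precision (spread of replicates) might be evaluated before comparison with the design criteria.

```python
import numpy as np
from scipy import stats

# Hypothetical replicate results from a validation run (arbitrary units)
results = np.array([10.12, 10.08, 10.15, 10.03, 10.11])
accepted_value = 10.00     # assumed reference value from the experiment design
alpha = 0.05               # assumed significance level

mean = results.mean()
s = results.std(ddof=1)                     # precision: sample standard deviation
sem = s / np.sqrt(results.size)
ci = stats.t.interval(1 - alpha, results.size - 1, loc=mean, scale=sem)

# Accuracy: does the mean differ significantly from the accepted value?
t_stat, p_value = stats.ttest_1samp(results, accepted_value)

print(f"mean = {mean:.3f}, s = {s:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
print("bias detected" if p_value < alpha else "no significant bias",
      f"(t = {t_stat:.2f}, p = {p_value:.3f})")
```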

It should be noted that the data collection and conversion effort is not trivial; it is company- and plant-specific and requires substantial effort and coordination between intracompany groups. No statistical treatment can make up for inaccurate or incomplete raw data. The keys to valid, high-quality data are thoroughness and quality of personnel training; comprehensive procedures for data collection, reduction, handling, and protection (from raw records to final failure rates); and the ability to audit and trace the origins of finished data. Finally, the system must be structured and the data must be coded so that they can be located within a well-designed failure rate taxonomy. When done properly, valuable and uniquely applicable failure rate data and equipment reliability information can be obtained. [Pg.213]

An analysis is only as good as the data; therefore, the equipment used to collect the data is critical and determines the success or failure of a predictive maintenance or reliability improvement program. The accuracy of the equipment, as well as its proper use and mounting, determines whether valid data are collected. [Pg.687]

In addition, vibration data collected with a microprocessor-based analyzer are filtered and conditioned to eliminate non-recurring events and their associated vibration profiles. Anti-aliasing filters are incorporated into the analyzers specifically to remove spurious signals such as impacts. While the intent behind the use of anti-aliasing filters is valid, their use can distort a machine's vibration profile. [Pg.699]
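A minimal numerical sketch, not the analyzer's actual signal chain, can show the effect being described: a low-pass stage standing in for an anti-aliasing filter attenuates a short impact and changes the apparent vibration profile. The sampling rate, cutoff frequency, and signal components below are all assumed for illustration.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 10_000                          # assumed sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)

# Hypothetical vibration profile: a running-speed component plus a short impact
shaft = np.sin(2 * np.pi * 29.5 * t)               # 29.5 Hz shaft vibration
impact = np.exp(-((t - 0.5) * 2_000) ** 2)         # narrow transient at t = 0.5 s
raw = shaft + 5 * impact

# Stand-in for the analyzer's anti-aliasing stage: 4th-order low-pass at 1 kHz
b, a = butter(4, 1_000 / (fs / 2), btype="low")
filtered = filtfilt(b, a, raw)

near_impact = (t > 0.49) & (t < 0.51)
print(f"raw peak near impact:      {np.abs(raw[near_impact]).max():.2f}")
print(f"filtered peak near impact: {np.abs(filtered[near_impact]).max():.2f}")
```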

The type of transducers and the data acquisition techniques that you will use are the final critical factors that can determine the success or failure of your program. Their accuracy, proper application, and mounting will determine whether valid data are collected. [Pg.812]

Prospective sources include encounter data (which may or may not be contained in EHRs), patient data input, and randomized, prospective clinical trials. Advantages of prospective sources to inform interactive software include the ability to control and monitor the circumstances of data collection; reduction (as a result of randomization) of sources of bias; potential minimization of missing data; the potential to modify the design of data collection; the ability to verify data accuracy; and the ability to validate and further test assumptions and modify existing programs. [Pg.581]

In the United States, the threshold mercury concentration for commercial sale of fish is determined by the Food and Drug Administration, whereas consumption advice for recreational (noncommercial) fish is developed by individual states and tribes. Mercury data collected for development of fish-consumption advisories are typically from analyses of filets (axial muscle tissue, with or without skin) for total mercury, with concentrations expressed on a wet-weight basis. Analysis of filets for total mercury yields a valid estimate of MeHg concentration (Grieb et al. 1990; Bloom 1992), whether the analyzed sample consists of a large filet or a small mass of tissue obtained with a biopsy needle (Cizdziel et al. 2002; Baker et al. 2004). [Pg.93]

The third and last phase of the trial is the analysis of the validation samples. All data collected are reported. No results are discarded unless a determinate error can be identified. Any request to repeat the assay of a sample should be approved by... [Pg.91]

Each individual method collection comprises a large number of methods, which often have different validation statuses. For instance, the most important Swedish multi-residue method (based on ethyl acetate extraction, GPC and GC) is validated for many pesticides by four laboratories, but other methods are presented with single-laboratory validation data. Some methods in the Dutch and German manuals were tested in inter-laboratory method validation studies, but others only by an independent laboratory or in a single laboratory. [Pg.116]

Obviously, a best or generally accepted documentation of performance data for validated multi-residue methods does not exist. Too many data are collected, and their detailed presentation may be confusing and impractical. Additionally, the validation of multi-residue methods is a continuous, ongoing process which started for many pesticides 20 years ago, when less comprehensive method requirements had to be fulfilled. For this reason, a complete and homogeneous documentation of method validation data cannot be achieved. [Pg.129]

The QA unit should have written procedures (SOPs) for the conduct of inspections and audits. These procedures should incorporate all considerations for the review of electronic data systems. The QA unit SOPs should address the role and responsibilities of the QA unit in software development, purchase, and validation activities, in-process audit procedures for data collected on line, procedures for on-line review of data (i.e., what will be verified and how much data will be reviewed), and the procedure for auditing reports using on-line data. [Pg.1048]

To construct the reference model, the interpretation system required routine process data collected over a period of several months. Cross-validation was applied to detect and remove outliers. Only data corresponding to normal process operations (that is, when top-grade product is made) were used in the model development. As stated earlier, the system ultimately involved two analysis approaches, both reduced-order models that capture dominant directions of variability in the data. A PLS analysis using two loadings explained about 60% of the variance in the measurements. A subsequent PCA analysis on the residuals showed that five principal components explain 90% of the residual variability. [Pg.85]
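The two-stage reduced-order modelling described above can be sketched with scikit-learn, using synthetic stand-in data rather than the plant's routine process data; the dimensions, variable names, and explained-variance figures are illustrative assumptions and will not reproduce the 60%/90% values quoted.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Synthetic stand-in for routine process data: 500 samples of 20 process
# measurements (X) and one quality variable (y); the real model used months
# of plant data restricted to normal (top-grade) operation.
X = rng.normal(size=(500, 20))
y = X[:, :3] @ np.array([0.8, -0.5, 0.3]) + 0.1 * rng.normal(size=500)

Xs = StandardScaler().fit_transform(X)

# Stage 1: PLS model with two latent variables (two loadings)
pls = PLSRegression(n_components=2, scale=False).fit(Xs, y)
scores = pls.transform(Xs)
recon = scores @ pls.x_loadings_.T
explained_x = 1 - np.linalg.norm(Xs - recon) ** 2 / np.linalg.norm(Xs) ** 2
print(f"PLS (2 LVs) explains {100 * explained_x:.1f}% of X variance")

# Stage 2: PCA on the X residuals left after the PLS model
residuals = Xs - recon
pca = PCA(n_components=5).fit(residuals)
print(f"First 5 PCs explain {100 * pca.explained_variance_ratio_.sum():.1f}% "
      "of residual variance")
```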

Future needs in support of model validation and performance testing must continue to be in the area of coordinated, well-designed field data collection programs supplemented with directed research on specific topics. The FAT workshop produced a listing of the field data collection and research needs for the air, streams/lakes/estuaries, and runoff/unsaturated/saturated soil media categories, as follows ... [Pg.169]

The Housekeeping Menu contains functions to format fresh discs to receive data, to verify that there is enough space to contain additional data on a disc, to print the directories of discs, and to print out selected data files. The Collect Free Space function squeezes the blocks containing valid data into the beginning of the disc so that the remaining space is available in one large block. [Pg.133]

Our specimen database also contains additional parameters that are used to control the data collection process and to provide archival information to each data file written by the collection process. The console display for editing the specimen database is of the "fill in the form" type and the user revises the parameters for each specimen position (including the zeroth) as required. New parameter values are checked for validity at the time they are entered. All other parameters retain the values they possessed during the previous set of analyses. Thus, only minor changes are needed to program for a set of samples similar to the previous ones. All records in the database can be cleared if the analytical conditions are markedly different. [Pg.134]
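A hypothetical sketch of the "fill in the form" behaviour described above: new parameter values are checked as they are entered, and fields that are not edited retain the values from the previous set of analyses. The field names and validation rules here are invented for illustration.

```python
# Per-field validity checks (hypothetical fields and limits)
VALIDATORS = {
    "sample_id":   lambda v: isinstance(v, str) and len(v) > 0,
    "position":    lambda v: isinstance(v, int) and 0 <= v <= 24,
    "flow_ml_min": lambda v: isinstance(v, (int, float)) and 0.1 <= v <= 5.0,
}

def edit_specimen(previous: dict, edits: dict) -> dict:
    """Return an updated parameter record, rejecting invalid new values."""
    record = dict(previous)                 # unedited fields carry over
    for key, value in edits.items():
        check = VALIDATORS.get(key)
        if check is None or not check(value):
            raise ValueError(f"invalid value for {key!r}: {value!r}")
        record[key] = value
    return record

previous_run = {"sample_id": "STD-0", "position": 0, "flow_ml_min": 1.0}
next_run = edit_specimen(previous_run, {"sample_id": "LOT-17A", "position": 3})
print(next_run)   # flow_ml_min retains its previous value
```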

The major problem in data reduction is to select the relevant range of data from a chromatogram such as the one shown in Figure 5. We do not limit our data collection process to the data that will actually be used to calculate the distribution parameters because we find that values outside of the range used in data reduction help to characterize baseline drift and provide indications of the validity of the results. [Pg.135]
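The idea of keeping data outside the integration window to characterize baseline drift can be sketched as follows; the synthetic chromatogram, window limits, and linear baseline model are assumptions for illustration only.

```python
import numpy as np

# Synthetic chromatogram: a Gaussian peak on a slowly drifting baseline
t = np.linspace(0, 20, 2001)                       # retention time, min
peak = 50 * np.exp(-0.5 * ((t - 10) / 0.4) ** 2)
drift = 0.3 * t                                    # baseline drift
signal = peak + drift + np.random.default_rng(1).normal(0, 0.2, t.size)

# Integration window actually used for the distribution parameters...
in_window = (t > 8) & (t < 12)
# ...while data outside that window are kept to characterize the baseline
outside = ~in_window
slope, intercept = np.polyfit(t[outside], signal[outside], 1)
baseline = slope * t + intercept

dt = t[1] - t[0]
corrected_area = np.sum(signal[in_window] - baseline[in_window]) * dt
print(f"baseline drift: {slope:.2f} units/min, "
      f"corrected peak area: {corrected_area:.1f}")
```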

The updating function for the run database is another editing program that is used to create an entry in the run database for each data collection task in a batch. The run database entry contains optional descriptive information as well as the parameters required to drive the task. The user edits the parameter values to suit his needs for the tasks to be performed. Parameters in the run database are validated before the run entry is made permanent. [Pg.145]

In order to make the problem solvable, a linearized process model has been derived. This enables the use of standard Mixed Integer Linear Programming (MILP) techniques, for which robust solvers are commercially available. In order to ensure the validity of the linearization approach, the process model was verified with a significant amount of real data, collected from production databases and production (shift) reports. [Pg.100]
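As a hedged illustration of the kind of formulation that becomes tractable once the process model is linearized, the following toy MILP (built with the PuLP modelling library; all coefficients are hypothetical, not the plant model) combines continuous production amounts with binary setup decisions under a shared capacity constraint.

```python
import pulp

# Illustrative MILP: two products, a shared capacity limit, binary setups
prob = pulp.LpProblem("linearized_production_plan", pulp.LpMaximize)

x = {p: pulp.LpVariable(f"amount_{p}", lowBound=0) for p in ("A", "B")}
make = {p: pulp.LpVariable(f"setup_{p}", cat="Binary") for p in ("A", "B")}

margin = {"A": 40, "B": 55}          # profit per tonne (linearized)
rate = {"A": 1.0, "B": 1.6}          # capacity usage per tonne
setup_cost = {"A": 200, "B": 350}
capacity = 1000                       # available capacity over the horizon
big_m = 2000                          # upper bound on any production amount

prob += pulp.lpSum(margin[p] * x[p] - setup_cost[p] * make[p] for p in x)
prob += pulp.lpSum(rate[p] * x[p] for p in x) <= capacity
for p in x:
    prob += x[p] <= big_m * make[p]   # no production without a setup

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for p in x:
    print(p, pulp.value(x[p]), "setup:", int(pulp.value(make[p])))
```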

