Performance-Based Approach


In the previous section we described several internal methods of quality assessment that provide quantitative estimates of the systematic and random errors present in an analytical system. Now we turn our attention to how this numerical information is incorporated into the written directives of a complete quality assurance program. Two approaches to developing quality assurance programs have been described: a prescriptive approach, in which an exact method of quality assessment is prescribed, and a performance-based approach, in which any form of quality assessment is acceptable, provided that an acceptable level of statistical control can be demonstrated.  [c.712]

In a performance-based approach to quality assurance, a laboratory is free to use its experience to determine the best way to gather and monitor quality assessment data. The quality assessment methods remain the same (duplicate samples, blanks, standards, and spike recoveries) since they provide the necessary information about precision and bias. What the laboratory can control, however, is the frequency with which quality assessment samples are analyzed, and the conditions indicating when an analytical system is no longer in a state of statistical control. Furthermore, a performance-based approach to quality assessment allows a laboratory to determine if an analytical system is in danger of drifting out of statistical control. Corrective measures are then taken before further problems develop.  [c.714]

Once a control chart is in use, new quality assessment data should be added at a rate sufficient to ensure that the system remains in statistical control. As with prescriptive approaches to quality assurance, when a quality assessment sample is found to be out of statistical control, all samples analyzed since the last successful verification of statistical control must be reanalyzed. The advantage of a performance-based approach to quality assurance is that a laboratory may use its experience, guided by control charts, to determine the frequency for collecting quality assessment samples. When the system is stable, quality assessment samples can be acquired less frequently.  [c.721]
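
As a hedged illustration of how such control-chart monitoring might be automated, the short Python sketch below establishes warning and control limits from an initial set of quality assessment results and flags any later result that falls outside the mean ± 3s control limits. The data values and helper names are hypothetical and not part of the original text.

```python
# Minimal control-chart sketch (hypothetical data); the limits follow the usual
# mean +/- 2s (warning) and mean +/- 3s (control) convention.
from statistics import mean, stdev

def control_limits(baseline):
    """Return (center, warning low, warning high, control low, control high)."""
    m, s = mean(baseline), stdev(baseline)
    return m, m - 2 * s, m + 2 * s, m - 3 * s, m + 3 * s

# Hypothetical spike-recovery results (%) used to establish the chart
baseline = [98.2, 101.5, 99.7, 100.8, 97.9, 102.1, 100.3, 99.1]
center, wlo, whi, clo, chi = control_limits(baseline)

# New quality assessment samples analyzed while the chart is in use
for i, r in enumerate([100.2, 103.4, 94.1, 99.8], start=1):
    if not clo <= r <= chi:
        print(f"sample {i}: {r}% -> out of statistical control; reanalyze batch")
    elif not wlo <= r <= whi:
        print(f"sample {i}: {r}% -> within control limits but past a warning limit")
    else:
        print(f"sample {i}: {r}% -> in statistical control")
```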

Hazards and their degrees vary from site to site. Over the years, hazardous waste guidelines have been used when dealing with the hazards of underground storage tank removals at the corner gas station, landfills, industrial sites, and large-scale mixed chemical or radiological sites. This hazard-based approach allows the remediation firm to use a performance-based approach when it comes to protecting workers. The greater the hazard, the more extensive the engineering controls, administrative controls, or levels of personal protective equipment (PPE) that will be necessary. Remedial actions and associated activities at hazardous waste sites can range from low-risk, short-term work to high-risk, full-scale, and long-term remediation activities [4].  [c.6]

The United States has moved to follow the approach established by the United Nations to performance-based specifications for the packaging and packaging materials used for hazardous materials. The U.S. DOT, through Rule Making Docket HM 181, defined the United States' position on performance-oriented packaging. Through many hearings and requests for industry comment, the DOT has now published in CFR 49 the changes necessary to bring U.S. packaging specifications for hazardous materials into compliance with the United Nations specifications.  [c.1944]

Essential for interpreting 3D protein models is the estimation of their accuracy, both the overall accuracy and the accuracy in the individual regions of a model. The errors in models arise from two main sources: the failure of the conformational search to find the optimal conformation and the failure of the scoring function to identify the optimal conformation. The 3D models are generally evaluated by relying on geometrical preferences of the amino acid residues or atoms that are derived from known protein structures. Empirical relationships between model errors and target-template sequence differences can also be used. It is convenient to approach the evaluation of a given model in a hierarchical manner [9]. It first needs to be assessed whether the model at least has the correct fold. The model will have a correct fold if the correct template is picked and if that template is aligned at least approximately correctly with the target sequence. Once the fold of a model is confirmed, a more detailed evaluation of the overall model accuracy can be performed based on the overall sequence similarity on which the model is based (Fig. 8). Finally, a variety of error profiles can be constructed to quantify the likely errors in the different regions of a model. A good strategy is to evaluate the models by using several different methods and identify the consensus between them. In addition, energy functions are in general designed to work at a certain level of detail and are not appropriate for judging the models at a finer or coarser level [197]. There are many model evaluation programs and servers [198,199] (Table 1).  [c.294]

A concurrent engineering framework allows a more efficient flow of information from the various tools and techniques used and effectively communicates the design through requirements-based performance measures. The primary advantage of employing concurrent principles in terms of the use of tools and techniques is that the overlap of the engineering activities, which is natural in any case, enhances a team-based approach. The application of the tools and techniques in practice has also been discussed together with a review of each, including their effective positioning in the product development process, implementation and management issues, and likely benefits from their usage.  [c.276]

The objective of this approach is to improve the reliability of the system without having to design acoustical filters. For many systems, this is all that is needed. API 618 contains a chart that recommends the type of analysis that should be performed, based on horsepower and pressure.  [c.85]

The amount and type of hazards will determine the performance standard specified in site-specific control plans. This includes the content, detail, and formality of review. The approval of the plans is based on risk and hazard potential. Using the hazard-based approach, levels of risk or methods to rank risk (degree) are standardized.  [c.38]

The chromatograms reported by Gordon et al. (23) shown in Figure 3.5 illustrate the huge complexity of even small heart-cuts made from the primary separation. Once again, a Deans-type switch was used for sample transfer. For the primary chromatogram, each cut is seen to contain only a handful of peaks, yet when a further secondary separation is performed (based on polarity rather than boiling point) a large number of extra species can be isolated. The huge complexity of even the second-dimension chromatogram required that the second column be temperature programmed, and a two-oven approach was therefore applied. In the case of the tobacco condensate it becomes questionable, even with a second separation with full temperature programming, whether the analytical system has sufficient capacity, and possibly a higher dimension was required to truly characterize the sample. In this  [c.59]

A novel approach for suppression of grain noise in ultrasonic signals, based on noncoherent detector statistics and signal entropy, is presented. The performance of the technique is demonstrated using ultrasonic B-scans from samples with coarse material structure.  [c.89]

A novel approach for suppression of material noise in ultrasonic signals, based on noncoherent detector statistics and signal entropy, has been presented. Experimental evaluation of the technique, using ultrasonic images from samples with coarse material structure, has proven its high performance.  [c.95]

Gasteiger and co-workers followed an approach based on models of the reactions taking place in the spectrometer [94, 95]. Automatic knowledge extraction is performed from a database of spectra and the corresponding structures, and rules are saved. The rules concern possible elementary reactions, and models relating the probability of these reactions to physicochemical parameters calculated for the structures. The knowledge can then be applied to chemical structures in order to predict a) the reactions that occur in the spectrometer, b) the resulting products, and c) the corresponding peaks in the mass spectrum.  [c.535]

As we have seen, a common route to calculating both the partition coefficient and molar refractivity is to combine in some way the contributions from the fragments or atoms in the molecule. The fragment contributions are often determined using multiple linear regression, which will be discussed below (Section 12.12.2). Such an approach can be applied to many other properties, of which we shall mention only one other here: solubility. Klopman and colleagues were able to derive a regression model for predicting aqueous solubility based upon the presence of groups, most of which corresponded to a single atom in a specific hybridisation state but which also included acid, ester and amide groups [Klopman et al. 1992]. This gave a reasonably general model that was able to predict the solubility of a test set to within about 1.3 log units. A more specific model which contained more groups performed better but was of less generic applicability.  [c.687]
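
As a hedged sketch of the group-contribution idea described above, the Python fragment below fits fragment contributions to log(aqueous solubility) by multiple linear regression and then predicts a new molecule from its group counts. The group definitions, counts, and measured values are hypothetical, not Klopman's published parameters.

```python
# Group-contribution regression sketch for log(aqueous solubility).
# All group definitions, counts and measured values below are hypothetical.
import numpy as np

groups = ["C_sp3", "C_aromatic", "OH", "ester"]

# Rows: molecules in a hypothetical training set; columns: group counts.
X = np.array([
    [4, 0, 1, 0],
    [2, 6, 0, 0],
    [6, 0, 0, 1],
    [1, 6, 1, 0],
    [8, 0, 0, 0],
    [3, 0, 2, 1],
])
y = np.array([-0.5, -2.1, -2.4, -1.1, -4.9, -0.9])   # hypothetical log S values

A = np.hstack([X, np.ones((X.shape[0], 1))])          # add an intercept column
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)        # fitted fragment contributions

print(dict(zip(groups + ["intercept"], np.round(coeffs, 3))))

# Predict log S for a new molecule from its group counts
new = np.array([3, 6, 1, 0, 1.0])                     # last entry multiplies the intercept
print("predicted log S:", round(float(new @ coeffs), 2))
```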

Efficient network synthesis principles and techniques relieve the process engineer of the burden of accepting designs based on art which cannot be shown to be superior. The opportunities for process improvement are best before the structure of the network is determined. Methods for rating individual heat exchangers are well developed and can be used to design new exchangers or simulate the performance of existing units. The overall approach described herein is not limited to new plants but can be used for modification of existing plants as well.  [c.518]

The primary goals in process control are to improve the efficiency and/or selectivity and to reduce the operating cost of each unit operation. Overall goals are also strongly influenced by economic factors prevailing in the market. Flexibility is therefore a key factor in process control. As for any process control system, the key elements are measurement or sensing, comparison to a target value, manipulation of the variable value, and feedback to the controller. Separate algorithms are developed for each control unit based on empirical factors and experimentation. Continuous improvements and corrections are often made as data are accumulated and as ore characteristics change. Control strategies become more effective when predictions can be made for any unit operation with a high degree of confidence. The more modern control systems are based on multivariable control, model-based concepts, and digital instrumentation (50). Present trends are toward knowledge-based and artificial intelligence (see Expert systems) controlled systems, which optimize overall performance rather than the performance of individual unit operations. These are rule-based systems that attempt to implement human expert knowledge or a rule-of-thumb approach, together with the uncertainty inherent in human decision making involving linguistic variables (fuzzy terms) and subjective interpretation. Expert system programs, or shells, have already been marketed and are used in many plants.  [c.416]
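
The control-loop elements listed above (measurement, comparison to a target value, manipulation of the variable, and feedback to the controller) can be sketched in a few lines of Python; the proportional-integral controller and first-order process model below are a hypothetical illustration, not a description of any particular plant system.

```python
# Minimal feedback-control sketch: measure, compare to the target, manipulate,
# and feed the new measurement back. The first-order process model and the
# proportional-integral (PI) gains below are hypothetical.

def simulate_pi_control(setpoint=70.0, kp=0.8, ki=0.2, steps=50, dt=1.0):
    value = 55.0        # measured process variable (e.g., concentrate grade)
    integral = 0.0
    history = []
    for _ in range(steps):
        error = setpoint - value                 # comparison to the target value
        integral += error * dt
        output = kp * error + ki * integral      # manipulation of the variable value
        # Hypothetical first-order process response to the manipulated variable
        value += dt * (0.3 * output - 0.1 * (value - 50.0))
        history.append(value)                    # feedback: the new measurement
    return history

trace = simulate_pi_control()
print(f"final value after {len(trace)} steps: {trace[-1]:.2f} (target 70.0)")
```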

Computerized optimization using the three-parameter description of solvent interaction can facilitate the solvent blend formulation process because numerous possibilities can be examined quickly and easily and other properties can also be considered. This approach is based on the premise that solvent blends with the same solvency and other properties have the same performance characteristics. For many solutes, the lowest-cost effective solvent blends have solvency that is at the border between adequate and inadequate solvency. In practice, this usually means that a solvent blend should contain the maximum amount of hydrocarbon the solute can tolerate while still remaining soluble.  [c.264]
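
A minimal sketch of such computerized blend screening is shown below, assuming a three-parameter (dispersion/polar/hydrogen-bonding) description in which blend parameters are volume-fraction-weighted averages of the component values; the solvent parameters, costs, target values, and tolerance are hypothetical.

```python
# Blend screening sketch with a three-parameter solvent description.
# All numerical values below are hypothetical.
from itertools import product

# (dispersion, polar, hydrogen-bonding) parameters and relative cost per litre
solvents = {
    "hydrocarbon": ((15.8, 0.1, 0.2), 0.40),
    "ketone":      ((15.8, 9.0, 5.1), 1.10),
    "alcohol":     ((15.8, 8.8, 19.4), 0.95),
}
target = (16.0, 6.0, 8.0)    # solvency the solute is assumed to require
tolerance = 2.5              # allowed distance from the target before solvency fails

def blend_params(fractions):
    return tuple(sum(f * p[i] for f, (p, _) in zip(fractions, solvents.values()))
                 for i in range(3))

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

best = None
steps = [i / 20 for i in range(21)]
for f_hc, f_ket in product(steps, steps):
    f_alc = round(1.0 - f_hc - f_ket, 10)
    if f_alc < 0:
        continue
    blend = (f_hc, f_ket, f_alc)
    if distance(blend_params(blend), target) > tolerance:
        continue                                  # blend falls outside adequate solvency
    cost = sum(f * c for f, (_, c) in zip(blend, solvents.values()))
    if best is None or cost < best[0]:
        best = (cost, blend)

print("cheapest acceptable blend (hydrocarbon, ketone, alcohol):", best)
```

Because the hydrocarbon is the cheapest component, the cheapest acceptable blend found this way is also the one with the maximum tolerable hydrocarbon content, in line with the rule of thumb stated above.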

Some countries have opted for a broader approach and have adopted regulations to minimise exposure of the general public to environmental asbestos fibers, i.e., by banning or restricting asbestos imports and types of applications. Because the environmental levels of airborne asbestos fibers are generally found to be three to four orders of magnitude lower than the maximum allowed workplace exposure levels, health risks from environmental asbestos are not immediately obvious. The outcome of ongoing reviews on these matters (in particular by the U.S. Health Effects Institute) will likely have a strong impact on future uses of asbestos fibers. However, it seems likely that, since safe technologies have been developed for industrial processing of fibers, asbestos fibers will remain an attractive option based on cost-performance ratio wherever mineral fibers are required in composite materials applications.  [c.357]

Low-pressure rhodium processes, which give higher n/iso-butyraldehyde ratios (e.g., 10:1), have gradually replaced cobalt processes, dramatically affecting the isobutyraldehyde supply. Supply restraints and strong demand for certain value-added derivatives will limit the overall growth of isobutyraldehyde to about 0.9% annually. The production of isobutyl alcohol, the least valued isobutyraldehyde derivative, should actually decline as strong growth for neopentyl glycol and isobutyraldehyde condensation products limits the availability of isobutyraldehyde for conversion to isobutyl alcohol. As isobutyl and n-butyl alcohol prices approach parity, some isobutyl alcohol consumers are expected to switch back to the normal isomer based on better solvency and perceived better performance.  [c.381]

The fast copper precipitation rates obtainable using sponge or particulate iron promise economic and processing advantages if particulate iron becomes cost-competitive with scrap iron. Another precipitant of potential importance is shredded automobile scrap using drum-type precipitators (42). Solvent Extraction. Solvent extraction in combination with electrowinning is the most common approach to recovering copper from acidic solutions. A variety of extractants based on aldoximes are available and show superior performance to the earlier ketoximes (43).  [c.206]

Neural Nets. Neural nets arose out of an alternative approach to solving the problems raised by AI. Instead of modeling high-level reasoning processes, neural networks attempt to produce intelligent behavior using the brain as the architectural model. The motivation behind neural network research is the modeling of cognitive activities such as perception, learning, and data interpretation, which symbolic AI approaches do not address adequately. From a practical standpoint, neural nets attempt to address many limitations of knowledge-based systems, including lack of adaptability (self-modification in response to a changing world); lack of robustness (tolerance for missing, bad, or incomplete information); problems of knowledge acquisition (extracting knowledge from experts); problems of storage capacity (the amount of knowledge that can be stuffed into a knowledge base); problems of scalability (performance of large systems in comparison to small ones); and problems of speed, especially with increases in the size of the system.  [c.539]

The Japanese program system AIPHOS has been developed by Funatsu's group at the Toyohashi Institute of Technology [40]. AIPHOS is an interactive system which performs the retrosynthetic analysis in a stepwise manner, determining at each step the synthesis precursors from the molecules of the preceding step. AIPHOS tries to combine the merits of a knowledge-based approach with those of a logic-centered approach.  [c.576]

The procedure for assessing health-based occupational exposure limits for chemical substances includes determination of the no-observed-adverse-effect level (NOAEL) for the critical toxic effect and application of an appropriate safety factor based on expert judgment (see Section 5.3). In principle, the same procedure could be used for assessing the TLs. However, the quantitative risk assessment procedure entails notable uncertainties at low-dose regions—say, below one-tenth of the current OELs. In addition, exposure limits are revised at certain intervals in the light of new research information and actual policy objectives. In most cases, the limits have been reduced over the years. In theory, one possibility for assessing a target level for desired air quality could be the determination of an exposure that cannot be distinguished from the biological monitoring values of the nonoccupational population. However, adequate data for this purpose exist only for a few substances in advanced industrialized countries, and for that reason a technology-based approach for target level assessment is considered in this paper. Similar control strategies, based on performance standards and risk assessment, have been proposed for some industries—for example, the pharmaceutical industry and technology transition in the defense sector.  [c.399]
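
As a hedged numerical illustration of the NOAEL/safety-factor procedure mentioned above, the short calculation below derives a health-based limit from a hypothetical NOAEL and a composite safety factor, and then a target level at one-tenth of that limit; all values are illustrative only.

```python
# Illustrative NOAEL / safety-factor calculation; all values are hypothetical.
noael_mg_per_m3 = 50.0      # no-observed-adverse-effect level for the critical effect
sf_interspecies = 10.0      # extrapolation from animal data to humans
sf_intraspecies = 10.0      # variability within the exposed population
safety_factor = sf_interspecies * sf_intraspecies

oel = noael_mg_per_m3 / safety_factor
print(f"health-based limit: {oel:.2f} mg/m^3")       # 0.50 mg/m^3

# A more ambitious target level might then be set at, say, one-tenth of the
# limit, the low-dose region discussed in the text.
target_level = oel / 10
print(f"target level: {target_level:.3f} mg/m^3")     # 0.050 mg/m^3
```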

Abstract At the Institut für Theoretische Nachrichtentechnik und Informationsverarbeitung at the University of Hannover, investigations were carried out in cooperation with the Institute of Nuclear Engineering and Non-Destructive Testing concerning 3D analysis of internal defects using stereoradioscopy based on camera modelling. A camera calibration approach is used to determine the 3D position and volume of internal defects using only two different X-ray images recorded from arbitrary directions. The volume of defects is calculated using intensity evaluation, taking into account the polychromatic radiation of microfocus X-ray tubes. The system performance was determined using test samples with different types of internal defects. Using magnifications between 1.1 and 1.4, the system achieves an accuracy of 0.5 mm when calculating the 3D positions of defects from samples rotated only 10° between two views, and an accuracy of 0.3 mm using a 25° rotation. During calibration the distortion inherent in the image detector system is reduced from a maximum of 3.8 mm to less than 0.1 mm (0.3 pixel). The defect volumes are calculated with an overall accuracy of 10%. Additional results will be presented using the system to analyse casting defects.  [c.484]

A major motivation for the study of conical intersections is their assumed importance in the dynamics of photoexcited molecules. Molecular dynamics methods are often used for this purpose, based on available potential energy surfaces [118-121]. We briefly survey some methods designed to deal with relatively large molecules (>5 atoms). Several authors combine the potential energy surface calculations with dynamic simulations. A relatively straightforward approach is illustrated by the work of Ohmine and co-workers [6,122]. Ab initio calculations of the ground and excited potential surfaces of polyatomic molecules (ethylene and butadiene) were performed. Several specific nuclear motions were chosen to inspect their importance in inducing curve crossing. These included torsion around C=C and C-C bonds, bending, stretching and hydrogen-atom migration. The ab initio potentials were parametrized into an analytic form in order to solve the dynamic equations of motion. In this way, Ohmine was able to show that hydrogen migration is important in the radiationless decay of ethylene.  [c.385]

This paper presents the theoretical background and some practical applications of a new conformational free energy simulation approach, aimed at correcting the above shortcomings. The new method, called Conformational Free energy Thermodynamic Integration (CFTI), is based on the observation that it is possible to calculate the conformational free energy gradient with respect to an arbitrary number of conformational coordinates from a single simulation with all coordinates in the set kept fixed [2, 8]. The availability of the conformational gradient makes possible novel techniques of multidimensional conformational free energy surface exploration, including locating free energy minima by free energy optimization and analysis of structural stability based on second derivatives of the free energy. Additionally, by performing simulations with all "soft" degrees of freedom of the system kept fixed, free energy averages converge very quickly, effectively overcoming the conformational sampling problem.  [c.164]

Neural networks have been applied to IR spectrum interpreting systems in many variations and applications. Anand [108] introduced a neural network approach to analyze the presence of amino acids in protein molecules with a reliability of nearly 90%. Robb and Munk [109] used a linear neural network model for interpreting IR spectra for routine analysis purposes, with a similar performance. Ehrentreich et al. [110] used a counterpropagation network based on a strategy of Novic and Zupan [111] to model the correlation of structures and IR spectra. Penchev and co-workers [112] compared three types of spectral features derived from IR peak tables for their ability to be used in automatic classification of IR spectra.  [c.536]

The Ewald method has been widely used to study highly polar or charged systems. Its use is considered routine for many types of solid-state materials. It is increasingly used for calculations on much larger molecular systems, such as proteins and DNA, due both to increases in computer performance and to the new methodological advances we have just discussed [Darden et al. 1999]. For example, an early application of the particle-mesh Ewald method was the molecular dynamics simulation of a crystal of the protein bovine pancreatic trypsin inhibitor [York et al. 1994]. The full crystal environment was reproduced with four protein molecules in the unit cell, together with associated water molecules and chloride counterions. Over the course of the 1 ns simulation the deviation of the simulated structures from the initial crystallographic structure was monitored. Once equilibrium was achieved this deviation (measured as the root-mean-square positional deviation) settled down to a value of 0.63 Å for all non-hydrogen atoms and 0.52 Å for the backbone atoms alone. By contrast, an equivalent simulation run with a 9 Å residue-based cutoff showed a deviation of more than 1.8 Å. In addition, the atomic fluctuations calculated from the Ewald simulation were in close agreement with those derived from the crystallographic temperature factors, unlike those from the non-Ewald simulation, which were significantly overestimated due to the use of the electrostatic cutoff. The highly charged nature of DNA makes it particularly important to deal properly with the electrostatic interactions, and simulations using the particle-mesh Ewald approach are often much more stable, with the trajectories remaining much closer to the experimental structures [Cheatham et al. 1995].  [c.353]

Once a protein model has been constructed, it is important to examine it for flaws. Much of this analysis can be performed automatically using computer programs that examine the structure and report any significant deviations from the norm. A simple test is to generate a Ramachandran map, in order to determine whether the amino acid residues occupy the energetically favourable regions. The conformations of side chains can also be examined to identify any significant deviations from the structures commonly observed in X-ray structures. More sophisticated tests can also be performed. One popular approach is Eisenberg's 3D profiles method [Bowie et al. 1991; Lüthy et al. 1992]. This calculates three properties for each amino acid in the proposed structure: the total surface area of the residue that is buried in the protein, the fraction of the side-chain area that is covered by polar atoms and the local secondary structure. These three parameters are then used to allocate the residue to one of eighteen environment classes. The buried surface area and fraction covered by polar atoms give six classes (Figure 10.25) for each of the three types of secondary structure (α-helix, β-sheet or coil). Each amino acid is given a score that reflects the compatibility of that amino acid for that environment, based upon a statistical analysis of known protein structures. Specifically, the score for a residue i in an environment is calculated using  [c.559]
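
A hedged sketch of the environment-class bookkeeping described above is given below; the class boundaries and the 3D-1D score table are hypothetical stand-ins for the statistically derived values used by the published method.

```python
# Hedged sketch of the environment-class assignment used in 3D-profile-style
# model evaluation. The class boundaries and 3D-1D score table below are
# hypothetical stand-ins for the statistically derived values of the real method.

def environment_class(buried_area, polar_fraction, ss):
    """Assign one of 18 classes: 6 burial/polarity categories x 3 SS types."""
    burial = "buried" if buried_area > 80.0 else "exposed"   # hypothetical cutoff (A^2)
    if polar_fraction < 0.3:
        polarity = "apolar"
    elif polar_fraction < 0.6:
        polarity = "mixed"
    else:
        polarity = "polar"
    return f"{burial}_{polarity}_{ss}"                       # ss: "helix", "sheet" or "coil"

# Hypothetical compatibility scores for (residue type, environment class) pairs
SCORES = {
    ("LEU", "buried_apolar_helix"): 1.2,
    ("LEU", "exposed_polar_coil"): -0.8,
    ("ASP", "exposed_polar_coil"): 0.9,
    ("ASP", "buried_apolar_helix"): -1.1,
}

def profile_score(model):
    """Sum per-residue compatibility scores; `model` holds
    (residue_name, buried_area, polar_fraction, secondary_structure) tuples."""
    return sum(SCORES.get((name, environment_class(area, polar, ss)), 0.0)
               for name, area, polar, ss in model)

print("3D-profile-style score:",
      profile_score([("LEU", 95.0, 0.1, "helix"), ("ASP", 20.0, 0.7, "coil")]))
```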

Ab initio molecular dynamics has been applied to many materials science problems. One interesting early application was the ab initio molecular dynamics simulation of the reaction between a chlorine molecule and a silicon surface [Stich et al. 1994]. This reaction is particularly important in silicon chip manufacture, where the dissociative chemisorption of chlorine (and other halogens) is widely used for processes such as dry etching and surface cleaning. A series of simulations was performed, in each of which a chlorine molecule was fired towards the silicon surface. The subsequent motion and reaction were then determined using the ab initio molecular dynamics approach based upon conjugate gradients minimisation. The motions of the nuclei were determined using the Verlet algorithm with a time step of approximately 0.5 fs, and each simulation was performed for a total time of between 200 and 400 fs.  [c.636]
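
For reference, the Verlet family of integrators mentioned above advances the nuclear positions with a fixed time step; the fragment below is a minimal velocity-Verlet sketch for a single particle in a harmonic potential, with a hypothetical mass and force constant standing in for the ab initio forces used in the study.

```python
# Minimal velocity-Verlet sketch; a 1D harmonic oscillator stands in for the
# ab initio forces used in the simulations described above (values hypothetical).
def force(x, k=1.0):
    return -k * x                                  # harmonic restoring force

def velocity_verlet(x, v, m=1.0, dt=0.5, steps=100):
    f = force(x)
    for _ in range(steps):
        x += v * dt + 0.5 * (f / m) * dt * dt      # advance positions
        f_new = force(x)
        v += 0.5 * (f + f_new) / m * dt            # advance velocities with averaged force
        f = f_new
    return x, v

print(velocity_verlet(x=1.0, v=0.0))
```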

The physical designs and operating conditions of spray-pond installations vary greatly, and it is difficult to develop exact rating data that can be used for determining cooling performance in all cases. However, Fig. 12-23 shows the performance that can be obtained with a well-designed spray pond, based on a 21.1°C (70°F) wet-bulb temperature and a 2.2-m/s (5-mi/h) wind. This curve shows that a 3.3°C (6°F) approach to the wet bulb is possible at a 2.2°C (4°F) range, but at higher ranges the obtainable approach increases. If it is necessary to cool water through a large temperature range to a reasonably close approach, the spray pond can be staged. With this method, the water is initially sprayed, collected, and then resprayed in another part of a sectionalized pond basin.  [c.1169]

The design of sorption systems is based on a few underlying principles. First, knowledge of sorption equilibrium is required. This equilibrium, between solutes in the fluid phase and the solute-enriched phase of the solid, supplants what in most chemical engineering separations is a fluid-fluid equilibrium. The selection of the sorbent material, with an understanding of its equilibrium properties (i.e., capacity and selectivity as a function of temperature and component concentrations), is of primary importance. Second, because sorption operations take place in batch, in fixed beds, or in simulated moving beds, the processes have a dynamical character. Such operations generally do not run at steady state, although such operation may be approached in a simulated moving bed. Fixed-bed processes often approach a periodic condition called a periodic state or cyclic steady state, with several different feed steps constituting a cycle. Thus, some knowledge of how transitions travel through a bed is required. This introduces both time and space into the analysis, in contrast to many chemical engineering operations that can be analyzed at steady state with only a spatial dependence. For good design, it is crucial to understand fixed-bed performance in relation to adsorption equilibrium and rate behavior. Finally, many practical aspects must be included in the design so that a process starts up and continues to perform well, and so that it is not so overdesigned that it is wasteful. While these aspects are process-specific, they include an understanding of dispersive phenomena at the bed scale and, for regenerative processes, knowledge of the aging characteristics of the sorbent material, with consequent changes in sorption equilibrium.  [c.1497]

Cut-Power Correlation. Another design method, also based on scrubber power consumption, is the cut-power method of Calvert [J. Air Pollut. Control Assoc., 24, 929 (1974); Chem. Eng., 84(18), 54 (1977)]. In this approach, the cut diameter (the particle diameter for which the collection efficiency is 50 percent) is given as a function of the gas pressure drop or of the power input per unit of volumetric gas flow rate. The functional relationship is presented as a log-log plot of the cut diameter versus the pressure drop (or power input). In principle, the function could be constructed by experimentally determining scrubber performance curves for discrete particle sizes and then plotting the particle sizes against the corresponding pressure drops necessary to give efficiencies of 50 percent. In practice, Calvert and coworkers evidently have in most cases constructed the cut-power functions for various scrubbers by modeling (Yung and Calvert, U.S. EPA 600/8-78-005b, 1978). They show a variety of curves, whereas empirical studies have indicated that different types of scrubbers generally have about the same performance at a given level of power consumption.  [c.1593]
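
A hedged sketch of how such a cut-power curve might be used is given below, assuming the log-log relationship can be represented as a power law d50 = a·ΔP^b; the coefficients are hypothetical and do not reproduce Calvert's published curves.

```python
# Sketch of using a cut-power curve: on a log-log plot the cut diameter is
# roughly a straight line against pressure drop, i.e. d50 = a * dP**b.
# The coefficients below are hypothetical and do not reproduce published curves.
a, b = 4.0, -0.45          # hypothetical curve for one scrubber type

def cut_diameter_um(pressure_drop_kpa):
    """Particle diameter (um) collected with 50% efficiency at a given pressure drop."""
    return a * pressure_drop_kpa ** b

def required_pressure_drop_kpa(d50_um):
    """Invert the curve: pressure drop needed to achieve a target cut diameter."""
    return (d50_um / a) ** (1.0 / b)

print(f"d50 at 2.5 kPa: {cut_diameter_um(2.5):.2f} um")
print(f"pressure drop for a 1.0 um cut: {required_pressure_drop_kpa(1.0):.1f} kPa")
```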

The holistic thermodynamic approach based on material (charge, concentration and electron) balances is a firm and valuable tool for choosing the best a priori conditions for chemical analyses performed in electrolytic systems. Such an approach has already been presented in a series of papers issued in recent years; see [1-4] and references cited therein. In this communication, the approach will be exemplified with electrolytic systems, with special emphasis put on complex systems where all particular types of chemical equilibria (acid-base, redox, complexation and precipitation) occur in parallel and/or sequentially. All attainable physicochemical knowledge can be involved in the calculations, and no simplifying assumptions are needed. All analytical prescriptions can be followed. The approach enables all reactions possible from the thermodynamic viewpoint to be included, and all effects resulting from activation barrier(s) and from an incomplete set of presumed equilibrium data can be tested. The problems involved are presented on some examples of analytical systems considered lately, concerning potentiometric titrations in complex titrand + titrant systems. All calculations were done with the use of iterative computer programs written in MATLAB and DELPHI.  [c.28]
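
As a hedged, much-simplified illustration of the balance-based approach, the Python fragment below solves the charge balance for the titration of a strong acid with a strong base; real applications couple acid-base, redox, complexation, and precipitation equilibria in the same way, and the concentrations used here are hypothetical.

```python
# Minimal charge-balance sketch for a simple electrolytic system: titration of
# a strong acid (HCl) with a strong base (NaOH). Concentrations are hypothetical.
import math

KW = 1.0e-14
C_ACID, V_ACID_ML = 0.10, 25.0      # titrand: 0.10 M HCl, 25.0 mL
C_BASE = 0.10                        # titrant: 0.10 M NaOH

def ph_at(v_base_ml):
    v_tot = V_ACID_ML + v_base_ml
    cl = C_ACID * V_ACID_ML / v_tot          # total chloride in the mixture
    na = C_BASE * v_base_ml / v_tot          # total sodium in the mixture
    # Charge balance: [H+] + [Na+] = [OH-] + [Cl-], with [OH-] = Kw/[H+]
    # => [H+]^2 + (na - cl)*[H+] - Kw = 0
    d = na - cl
    h = (-d + math.sqrt(d * d + 4 * KW)) / 2
    return -math.log10(h)

for v in (0.0, 12.5, 24.9, 25.0, 25.1, 30.0):
    print(f"{v:5.1f} mL NaOH -> pH {ph_at(v):5.2f}")
```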

Trace residue analysis of compounds in various matrices is an essential process for evaluating different exposures to such toxicants, and sample preparation is one of the most time-consuming and error-prone steps prior to chromatographic analysis. A comparative study of sample preparation was performed to preconcentrate urinary 1-hydroxypyrene (1-OHP), a major metabolite and biological indicator of overall exposure to the polycyclic aromatic hydrocarbons (PAHs) generated by various industrial and environmental processes. To perform this study, solid phase extraction (SPE) was optimized with regard to sample pH, sample concentration, loading flow rate, elution solvent, washing solvent, sample volume, elution volume, and sorbent mass. The present approach proved that 1-OHP could be efficiently retained on a C18 sorbent based on specific interaction. Further study employed methanol to extract the analyte from spiked urine. In parallel, a nonclassical form of liquid-liquid extraction (LLE) was also optimized with regard to solvent type, solvent volume, extraction temperature, mixing type, and mixing duration. The results showed that 1-OHP could be relatively well extracted by methanol at an optimum time of 2 minutes based on moderate specific interaction. Under the developed conditions, the recovery obtained by SPE was 99.96%, while the LLE recovery did not exceed 87.3%; also, based on the applied sample volume, the limit of detection (LOD) achieved by SPE was 0.02 µg/l, at least ten times lower than that of LLE. The procedures were validated with three different pools of spiked urine samples and showed good reproducibility over six consecutive days as well as six within-day experiments for both developed methods, with suitable results obtained for CV% (less than 3.1% for SPE and between 2.8% and 5.05% for LLE). In this study, high performance liquid chromatography (HPLC) with a reversed-phase column was used. The mobile phase was methanol/water at a constant flow rate of 0.8 ml/min, and a fluorescence detector was used, set at 242 nm and 388 nm. Although the recovery and LOD obtained for the SPE method show greater efficiency, the results for LLE are also relatively good and can be applied in the majority of similar studies. However, there is a significant difference between the recoveries obtained by SPE and LLE (P<0.05), showing that SPE is superior.  [c.378]

