All main aspects of analytical and bioanalytical science are covered by the conference program. ACCA-05 consists of 12 invited lectures and seven symposia: General Aspects of Analytical Chemistry; Analytical Methods; Objects of the Analysis; Sensors and Tests; Separation and Pre-concentration; Pharmaceutical and Biomedical Analysis; and History and Methodology of Analytical Chemistry. The conference program also includes two special symposia: a memorial symposium dedicated to Anatoly Babko, and the Analytical Russian-German-Ukrainian Symposium (ARGUS-9).  [c.3]

Variability prediction - A key objective of the analysis is predicting, in the early stages of the product development process, the likely levels of out-of-tolerance variation in production.  [c.76]

The choice of which of the many preprocessing methods to apply depends on the type of data involved and on the context and objectives of the problem. The importance of appropriate preprocessing cannot be overemphasized: improper treatment of the data at this stage may make the rest of the analysis meaningless.  [c.422]

An objective of statistical analysis is to serve as a guide in decision making in the context of normal variation. In the case of the production supervisor, it is to make a decision, with a high probability of being correct, that something in the operation has in fact changed.  [c.490]
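The supervisor's decision rule can be sketched as a simple control-limit test; a minimal illustration in which the process history, the 3-sigma threshold, and the helper name are assumptions made for the example:

```python
import statistics

# Hypothetical in-control history of a process variable (e.g. a fill
# weight); the values and the 3-sigma threshold are assumptions.
history = [10.1, 9.9, 10.0, 10.2, 9.8, 10.0, 10.1, 9.9, 10.0, 10.0]
mean = statistics.mean(history)
sigma = statistics.stdev(history)

def something_changed(x, k=3.0):
    """Decide, against normal variation, whether a new observation
    signals a real change: flag it only outside the k-sigma band."""
    return abs(x - mean) > k * sigma

print(something_changed(10.05))  # False: within normal variation
print(something_changed(11.00))  # True: almost certainly a real change
```

The k-sigma band is exactly the "high probability of being correct" in the text: values inside it are attributed to chance, values outside it to a real change in the operation.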

The design objectives of the analyst and the production line engineer are generally quite different. For analysis the primary concern is typically resolution. Hence operating conditions near the minimum value of the HETP or the HTU are desirable (see Fig. 16-13).  [c.1539]

The method for the analysis of CS described in the international standard ASTM D 820-93 is a lengthy, multi-stage procedure. The accuracy of AIST determination is low, since the surfactants are determined by an indirect subtraction method. Thus, the objective of our research was to develop an accurate and rapid method for AIST determination in CS.  [c.133]

Therefore the basic task of this scientific work is the development of sensitive, selective, and simple electrochemical methods for the quantitative determination of the total alkaloid content in forensic objects.  [c.383]

Features of analytical chemistry as a part of chemistry, a discipline lying between chemistry and metrology (K. Doerffel) that is in essence chemical metrology (N.P. Komar), and its close connections with many fields of physics and biology are considered (as is well known, physical and biological methods of obtaining and measuring analytical signals are used with increasing frequency). The applied aspect of analytical chemistry arises because all branches of the economy are interested in data on the chemical composition of various objects, as are the natural and engineering sciences, environmental protection, pharmacy, medicine, archeology, criminalistics, and even astronautics. It is shown by the example of functional materials that the tasks of analytical chemistry are not reduced to developing techniques for the analysis and control of material composition according to the requirements of other sciences and branches of technology. The discovery of new laws that allow the features of a researched material to be understood and its parameters to be optimized can result from the interaction of analysts and specialists in the field of materials science. Methods of sample preparation and separation of components realized in the analysis can serve as models for technologies of fine purification of substances, obtaining new materials, and researching their properties. Finally, the chemical analysis of functional materials in some cases becomes part of the technology of obtaining materials with tailored properties.  [c.410]

Any attempt to interpret QRA results must begin with a review of the analysis objective(s). If your objective was to identify the most important contributors to potential accidents, then the results may be completely unsuitable for presentation to zoning commissioners interested in the total risk of a toxic material release. It is essential that QRA results be interpreted only in the context of the study objective(s).  [c.50]

When used as an input to design, HTA allows functional objectives to be specified at the higher levels of the analysis prior to final decisions being made about the hardware. This is important when allocating functions between personnel and automatic systems.  [c.167]

As discussed in Chapter 4, task analysis is a very general term that encompasses a wide variety of techniques. In this context, the objective of task analysis is to provide a systematic and comprehensive description of the task structure and to give insights into how errors can arise. The structure produced by task analysis is combined with the results of the PIF analysis as part of the error prediction process.  [c.212]

The objective of consequence analysis is to evaluate the safety (or quality) consequences to the system of any human errors that may occur. Consequence Analysis obviously impacts on the overall risk assessment within which the human reliability analysis is embedded. In order to address this issue, it is necessary to consider the nature of the consequences of human error in more detail.  [c.216]

The main objective of the In-Plant Reliability Data System (IPRDS) was to develop a comprehensive and component-specific data base for PRA and other component-reliability statistical analysis. Data base personnel visited selected plants and copied all the plant maintenance work requests. They also gathered plant equipment lists and plant drawings, and in some cases interviewed plant personnel for information on component populations and duty cycles. Subsequently, the maintenance records were screened to separate out the cases of corrective maintenance applying to particular components; these were reviewed to determine such things as failure modes, severity, and, if possible, failure cause. The data from these reports were encoded into a computerized data base.  [c.78]

The objective of the profiling mode of LC-LC is to fractionate all components of the analysed mixture. This may be accomplished by so-called comprehensive two-dimensional LC, in which the entire chromatogram eluting from the primary column is submitted to the secondary column. The secondary instrument must operate fast enough to preserve the information contained in the primary signal. That is, it should be able to generate at least one chromatogram during the time required for a peak to elute from the primary column. Until now, a limited number of studies involving comprehensive LC-LC for the analysis of compounds of biological interest have been reported. Most of these studies were carried out within the group of Jorgenson and mainly deal with the design, construction and implementation of comprehensive LC-LC systems for the separation of either a complex protein mixture or an enzymatic digest of a protein (i.e. a mixture of peptides) (5-10). For these purposes, orthogonal on-line combinations of ion-exchange chromatography, size-exclusion chromatography (SEC) or reversed-phase (RP) LC are used. In a recent study, peptide fragments generated in the tryptic digests of ovalbumin and serum albumin are separated by SEC in the first dimension (run time, 160 min) and fast RPLC in the second (run time, 240 s) (7). Following RPLC, the peptides flow to an electrospray mass spectrometer for on-line identification. The complete LC system yields a peak capacity of almost 500, thereby maximizing the chance of completely resolving each peptide of the digest and, thus, permitting highly reliable peptide mapping. In addition, a comprehensive ion-exchange LC-RPLC-mass spectrometry (MS) system for the analysis of proteins was demonstrated in which a 120-min ion-exchange LC run is sampled by 48 RPLC runs of 150 s, leading to a peak capacity of over 2500 (8). The system was successfully applied to the screening of an Escherichia coli lysate without any prior knowledge of the characteristics (e.g. molecular weight, isoelectric point, hydrophobicity, etc.) of its individual components.  [c.253]
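The peak-capacity figures quoted above follow from the multiplicative rule for fully orthogonal comprehensive 2D separations; a minimal sketch, where the per-dimension capacities are illustrative assumptions, not values taken from the cited studies:

```python
def comprehensive_peak_capacity(n_first, n_second):
    """Ideal peak capacity of a fully orthogonal comprehensive 2D
    separation: the product of the two one-dimensional capacities."""
    return n_first * n_second

# Hypothetical per-dimension capacities chosen only to match the order
# of magnitude reported for the ion-exchange LC x RPLC system (> 2500).
print(comprehensive_peak_capacity(52, 50))  # 2600
```

The rule also explains the speed requirement in the text: the second dimension multiplies the capacity only if it completes at least one run per primary peak.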

Company compatibility requires the analysis of the engineering, manufacturing (with quality assurance), distribution, and sales capabilities of an organization to produce and realize a profit from a given engineering design or product. Often an engineering department within a company may be capable of designing a particular device or system, but the production and sales departments are not capable of carrying out their respective tasks. Also, a new product line under consideration may be beyond the scope of the overall business goals and objectives of the company.  [c.378]

The first step in any design is to identify the real need, and this is often the most difficult task. Without it, designs can be produced that do not satisfy the requirements, and the result is often unsatisfactory. It is essential to clearly define the objectives of the task and to re-confirm the objectives as time progresses. A useful aid is a value analysis at the end of the concept design stage. This assesses the design for value for money while meeting the defined project objectives. A good source document is A Study of Value Management and Quantity Surveying Practice, published by the Royal Institution of Chartered Surveyors.  [c.67]

Life-cycle analysis, in principle, allows an objective and complete view of the impact of processes and products on the environment. For a manufacturer, life-cycle analysis requires an acceptance of responsibility for the impact of manufacturing in total. This means not just the manufacturer's operations and the disposal of waste created by those operations but also those of raw-materials suppliers and product users.  [c.296]

EIA Preparation is the scientific and objective analysis of the scale, significance and importance of the impacts identified. Various methods have been developed, in relation to baseline studies, impact identification, prediction, evaluation and mitigation, to execute this task.  [c.72]

Quantitative RCT techniques allow the analysis of objects whose carbonaceous matrix is impregnated with heavy metals to varying degrees. This problem is of immediate importance for improving the manufacturing technology of space-vehicle engine nozzles, in particular for determining the distribution of heavy metals across a layer of the object and their total content in a product.  [c.600]

The specific character of NDT related to the quality assessment of safety-critical products and objects requires constant analysis and continuous improvement of processes and their interconnection. Sometimes the interaction of processes is very complicated (Figure 3); therefore the processes have to be systematized and simplified where possible to realize total quality management in NDT.  [c.954]

In this section we consider electromagnetic dispersion forces between macroscopic objects. There are two approaches to this problem. In the first, the microscopic model, one assumes pairwise additivity of the dispersion attraction between molecules from Eq. VI-15; this is best for surfaces that are near one another. The macroscopic approach considers the objects as continuous media having a dielectric response to electromagnetic radiation that can be measured through spectroscopic evaluation of the material. In this analysis, the retardation of the electromagnetic response from surfaces that are not in close proximity can be addressed. A more detailed derivation of these expressions is given in references such as the treatise by Russel et al. [3]; here we limit ourselves to a brief physical description of the phenomenon.  [c.232]

Another problem is to determine the optimal number of descriptors for the objects (patterns), such as for the structure of a molecule. A widespread observation is that one has to keep the number of descriptors below about 20 % of the number of objects in the dataset. However, this is correct only in the case of ordinary multilinear regression analysis. Some more advanced methods, such as Projection to Latent Structures (or Partial Least Squares, PLS), use so-called latent variables to achieve both modeling and prediction.  [c.205]
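The 20 % rule of thumb is easy to state as code; a minimal sketch in which the function name and the example counts are invented for illustration:

```python
def max_descriptors(n_objects, fraction=0.20):
    """Rule-of-thumb ceiling on the number of descriptors for ordinary
    multilinear regression analysis: about 20 % of the object count."""
    return int(n_objects * fraction)

print(max_descriptors(100))  # a 100-object dataset supports ~20 descriptors
```

Latent-variable methods such as PLS relax this ceiling because they regress on a few latent variables rather than on every descriptor directly.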

Automated, miniaturized, and parallelized synthesis and testing (combinatorial chemistry/high-throughput screening) are accelerating the development of a complex of methods for data mining and computer screening (virtual screening) of object libraries. Clusters of objects are recognized (cluster analysis) on the basis of the estimation of the distances in the descriptor space (dissimilarities). In the case of object selection, classes that are as diverse as possible are selected so that all the different types of properties (e.g., bioactivities) within a larger collection are sampled using as few objects as possible (diversity analysis). Key chemical features and the spatial relationships among them that are considered to be responsible for a desired biological activity may be identified (pharmacophore recognition) using local similarity, e.g., via common substructures in sets of active molecules; pharmacophore searching in 3D databases (see below) may be carried out using a pharmacophore as the query. Shape similarity of ligands to a receptor site (ligand docking) may be used for finding structures that fit into proteins.  [c.313]
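Diversity analysis as described above can be sketched with a greedy MaxMin selection over descriptor-space distances; a minimal illustration in which the descriptor values and the function name are invented for the example:

```python
import numpy as np

def maxmin_diverse_subset(X, k):
    """Greedy MaxMin diversity selection: start from the first object,
    then repeatedly add the object whose minimum Euclidean distance to
    the already-selected set is largest."""
    selected = [0]
    d = np.linalg.norm(X - X[0], axis=1)  # distance to current selection
    while len(selected) < k:
        nxt = int(np.argmax(d))
        selected.append(nxt)
        d = np.minimum(d, np.linalg.norm(X - X[nxt], axis=1))
    return selected

# Two tight clusters in a 2D descriptor space; a diverse pick of two
# objects should take one from each cluster.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
print(maxmin_diverse_subset(X, 2))  # [0, 3]
```

Selecting one representative per region of descriptor space is exactly the goal stated in the text: sampling all property types with as few objects as possible.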

It is possible to determine the metacide content by spectrophotometry, using ionic associates of metacide with BKM, BPR and CPR, and of polyguanidine with the azo dyes SB and MG. The monomers from which metacide and polyguanidine are synthesized, and which are present in actual objects of the analysis, do not react with the dyes. 0.01-0.20 mg of metacide can be determined in 25 ml of solution using BKM (0.01-0.10 mg using CPR). It is possible to determine 9-16 mg/l of polyguanidine at pH 4-5 and 35-400 mg/l at pH 11-12 using magneson.  [c.109]

As with plate buckling, plate vibration, or oscillation about a state of static equilibrium, is an eigenvalue problem. The objective of the analysis is to determine the natural frequencies and the mode shapes in which laminated plates vibrate. The magnitude of the deformations in a particular mode, however, is indeterminate because vibration is an eigenvalue problem. The governing vibration differential equations are obtained from the buckling differential equations by adding an acceleration term to the right-hand side of Equation (5.15) and reinterpreting all variations to occur during vibration about an equilibrium state (no difficulty is presented because the variations during buckling are also from the equilibrium state).  [c.288]
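The eigenvalue structure described here can be illustrated on a hypothetical two-degree-of-freedom stand-in for the plate: the generalized problem K x = omega^2 M x yields the natural frequencies, while the mode-shape scale stays indeterminate. The matrices below are invented numbers, not laminate stiffnesses:

```python
import numpy as np

# Hypothetical 2-DOF mass-spring stand-in for the vibrating plate.
M = np.diag([2.0, 1.0])                  # mass matrix
K = np.array([[6.0, -2.0],
              [-2.0, 4.0]])              # stiffness matrix

# Generalized eigenvalue problem K x = omega^2 M x, solved as a
# standard eigenproblem of inv(M) K.
eigvals, modes = np.linalg.eig(np.linalg.inv(M) @ K)
order = np.argsort(eigvals)
omegas = np.sqrt(eigvals[order])         # natural frequencies
modes = modes[:, order]                  # mode shapes, scale arbitrary
print(omegas)                            # approx [1.414, 2.236]
```

Note that `eig` returns each mode shape normalized to unit length; any rescaling of a column is an equally valid mode shape, which is the indeterminacy of deformation magnitude mentioned in the text.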


A detailed analysis of fast SAMM has shown [47] that the most time consuming tasks are task 2 and task 4 described above. In task 2, for each hierarchy level (except for level 0), a local Taylor expansion is calculated for each object. Note that here we refer to expansions which comprise only contributions from objects of the same hierarchy level which, in addition, fulfill the distance criterion given in Fig. 1. From each of these local expansions, approximated electrostatic forces acting on the atoms contained in the associated object could be computed and, in analogy to the exact forces F( ) used in the multiple time step scheme described above (see Fig. 2), the multipole-derived forces could be extrapolated by multiple time stepping. We further improved that obvious scheme, however, in that we applied multiple time step extrapolations to the coefficients of the local Taylor expansions instead. That strategy reduces memory requirements by a significant factor without loss of accuracy, since the number of local Taylor coefficients that have to be kept for the extrapolation is smaller than the number of forces acting on all atoms of the respective object.  [c.83]

From the outset of writing these articles, it was never intended that they should be exhaustively comprehensive, because this approach would have defeated the aims of the whole exercise, viz., the provision of short, quick explanations of major elements of mass spectrometry and closely allied topics. The traditional, more all-embracing approach to writing about mass spectrometry was left to the many excellent authors, who have provided impressive textbooks. In contrast, the major objective of the Back-to-Basics series was the provision of quick explanations of fundamental concepts in mass spectrometry, without overelaboration. As far as possible, descriptions of processes, applications, and underlying science were made with a minimum of text backed up by easily and rapidly understood pictures. Although some major equations of relevance to mass spectrometry have been introduced, the mathematical derivations of these equations were largely omitted. This was not an attempt to dumb down an important discipline. Rather, the intent was to make some of the esoteric aspects of an important area of analysis readily comprehensible to the many people who have to deal with mass spectrometers but who have not been trained specifically in this branch of science and engineering. The series began about ten years ago and encompasses recent and past developments in mass spectrometry. However, since it is the principles of mass spectrometry that are explained and not specific instrumentation, the information content remains as relevant now as it was ten years ago.  [c.475]

The accuracy of molecular mechanics and that of molecular dynamics simulations share an inexorable dependence on the rigor of the force field used to elaborate the properties of interest. This aspect of molecular modeling can easily fill a volume by itself. The topic of force field development, or force field parameterization, although primarily a mathematical fitting process, represents a rigorous and highly subjective aspect of the discipline (68). A perspective behind this high degree of rigor has been summarized (69). Briefly put, the different schools of thought regarding the development of force fields arose principally from the initial objectives of the developers. For example, in the late 1960s through the 1970s, the Allinger school targeted the computation of the structure and energetics of small organic and medicinal compounds (68,70,71). These efforts involved an incremental development of the force field, building up from hydrocarbons and adding new functional groups after certain performance criteria were met, e.g., reproduction of experimental structures, conformational energies, rotational barriers, and heats of formation. Unlike the consistent force field approach of Lifson and co-workers (59,62-63,65), the early Allinger force fields treated a dozen or more functional groups simultaneously, and were not derived by an analytical least squares fit to all the data (61). However, because the focus of Lifson was the analysis and prediction of the properties of hydrocarbons or peptides, it was not surprising that a consistent force field was possible. The number of variables to be optimized concurrently to permit calculation of all the structure elements, conformational energies, and vibrational spectra was, and still is, a massive quantity. However, the calculation for a limited number of functional groups could be accomplished, albeit slowly. If the goal is to reproduce and predict vibrational spectra, the full second derivative force  [c.164]

Example 5. There are six dynamometers available for engine testing. The test duration is set at 200 h, which is assumed to be equivalent to 20,000 km of customer use. Failed engines are removed from testing for analysis and replaced. The objective of the test is to analyze the emission-control system. Failure is defined as the time at which certain emission levels are exceeded.  [c.11]

Fundamental reaction kinetics and chemical and physical properties, where necessary, are best obtained with experimental reactors that are different from those used during process development or as part of commercial operations. It is important that the reactor be similar to one of the basic types, have known flow patterns and, in the case of multiphase reactors, known flow regimes, operate isothermally, and provide data over wide ranges of variables. The range of the conditions to be studied must be sufficiently large for adequately defining the effects of variables, but the rate equations need not accurately reflect true reaction mechanisms. However, extracting the intrinsic chemical kinetics leads to a better understanding of the process and more assured extrapolations of results to conditions that have not been studied. Recommended experimental reactor types are shown in Table 1. The best selection depends on careful analysis of the anticipated reactions, specific chemical and physical properties of the system, and appraisals of the objectives of the study (49).  [c.515]

The conducted research on complexing processes of noble metals on a sulfur-containing CMSG surface formed the basis for the development of sorption-photometric, sorption-luminescent, sorption-atomic-absorption, sorption-atomic-emission and sorption-nuclear-physical techniques for the analysis of noble metals in rocks, technological objects and environmental objects. Techniques for the separation and determination of noble metals in various oxidation states have been proposed in some cases.  [c.259]

Recently, test methods of analysis have come into wide use; they are distinguished by rapidity, cheapness and simplicity of determination, and do not require expensive equipment. These methods are used in manufacturing control, in diagnostic labs, under field and domestic conditions, etc. Test techniques have become especially widespread in the analysis of environmental objects: natural waters and sewage, soils, and air. The improvement of existing methods and the development of new methods and techniques for the test determination of elements is a pressing problem of modern analytical chemistry.  [c.330]

There are various types of charts that can be used to record an activity analysis. For tasks requiring continuous and precise adjustments of process variables, a chart displaying the graphs of these variables and the appropriate control settings will fulfill the objectives of the activity analysis. Figure 4.1 shows an activity chart of a subtask for a machine operator in a papermaking plant. This describes how to adjust the weight of a given area of paper to the desired value for each successive customer order and ensure that it remains within the specified limits until the order is completed.  [c.158]

Hierarchical task analysis is a systematic method of describing how work is organized in order to meet the overall objective of the job. It involves identifying, in a top-down fashion, the overall goal of the task, then the various subtasks and the conditions under which they should be carried out to achieve that goal. In this way, complex planning tasks can be represented as a hierarchy of operations (different things that people must do within a system) and plans (the conditions which are necessary to undertake these operations). HTA was developed by Annett et al. (1971) and further elaborated by Duncan (1974) and Shepherd (1985) as a general method of representing various  [c.162]
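The operations-and-plans hierarchy can be represented directly as a nested data structure; a minimal sketch in which the goal names and plan wording are invented for illustration:

```python
# A hypothetical fragment of an HTA as a nested structure of
# operations and plans; goal names and plans are invented examples.
hta = {
    "goal": "0 Fill tanker",
    "plan": "Do 1, then 2, then 3",
    "operations": [
        {"goal": "1 Prepare tanker", "plan": "", "operations": []},
        {"goal": "2 Transfer load", "plan": "Repeat 2.1 until full",
         "operations": [
             {"goal": "2.1 Monitor fill level", "plan": "",
              "operations": []},
         ]},
        {"goal": "3 Disconnect and secure", "plan": "", "operations": []},
    ],
}

def leaf_operations(node):
    """Walk the hierarchy top-down and collect the bottom-level
    operations, i.e. the actions actually performed."""
    if not node["operations"]:
        return [node["goal"]]
    leaves = []
    for child in node["operations"]:
        leaves.extend(leaf_operations(child))
    return leaves

print(leaf_operations(hta))
```

Each node pairs its sub-operations with the plan governing when they run, mirroring the HTA convention of redescribing a goal only to the level of detail the analysis requires.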

Figure 5.6 shows an extract from the HTA of the chlorine tanker filling operation, which will be used as an example. The first level (numbered 1, 2, 3, etc.) indicates the tasks that have to be carried out to achieve the overall objective. These tasks are then broken down to a further level of detail as required. As well as illustrating the hierarchical nature of the analysis, Figure 5.6 shows that plans, such as those associated with operation 3.2, can be quite complex. The term operation is used to indicate a task, subtask, or task step, depending on the level of detail of the analysis.  [c.212]

These images cannot be studied with the classical approach. The analysis of the matrix based on blocks B1 and B4 cannot be exploited. The objects can even be present in blocks B2 and B3.  [c.235]

For several reasons, such as ease of use, cost, and practicability, TEM today is the standard instrument for electron diffraction or the imaging of thin, electron-transparent objects. Especially for structural imaging at the atomic level (spatial resolution of about 1 Å), the modern, aberration-corrected TEM seems to be the best instrument. SEM provides the alternative for imaging the surface of thick bulk specimens. Analytical microscopy can either be performed using a scanning electron probe in STEM and SEM (as for electron probe micro-analysis (EPMA), energy-dispersive x-ray spectroscopy (EDX) and electron energy loss spectroscopy (EELS)) or energy-selective flood-beam imaging in EFTEM (as for image-EELS and electron spectroscopic imaging (ESI)). Analytical EM is mainly limited by the achievable probe size and the detection limits of the analytical signal (number of inelastically scattered electrons or produced characteristic x-ray quanta). The rest of this chapter will concentrate on the structural aspects of EM. Analytical aspects are discussed in more detail in specialized chapters (see, for example, B1.6).  [c.1625]

Light microscopy is of great importance for basic research, analysis in materials science and for the practical control of fabrication steps. When used conventionally, it serves to reveal structures of objects which are otherwise invisible to the eye or magnifying glass, such as micrometre-sized structures of microelectronic devices on silicon wafers. The lateral resolution of the technique is determined by the wavelength of the light  [c.1654]

The analysis (in the case of two structures) starts with a translational-rotational fit of the two structures and construction of the displacement vectors of all backbone atoms. Considering these as samples of a vector field, the curl of that vector field is computed, as sampled by each aminoacid. Thus a collection of rotation vectors is obtained. If a rigid body exists, the rotation vectors of all aminoacids in that body are equal, and different from the rotation vectors in other rigid bodies. A standard cluster analysis on the rotation vectors, using the ratio of external to internal motion as a discrimination criterion, is then carried out. This yields a subdivision of the protein into semirigid bodies (if they exist) and identifies the links between them. The type of motion of one rigid body with respect to another is then analysed in terms of a unique axis, and such (hinge-bending) motions can be objectively characterized as closing or twisting motions (Fig. 5).  [c.24]
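The cluster-analysis step on the rotation vectors can be sketched naively: vectors that nearly coincide are grouped, and each group marks a candidate semirigid body. The tolerance, the example vectors, and the function name are all assumptions of this illustration, not the cited method's actual criterion:

```python
import numpy as np

def group_rotation_vectors(R, tol=0.2):
    """Naive grouping: an aminoacid joins the first group whose
    representative rotation vector lies within `tol`; otherwise it
    starts a new group (a stand-in for the cluster analysis step)."""
    groups = []  # list of (representative_vector, member_indices)
    for i, r in enumerate(R):
        for rep, members in groups:
            if np.linalg.norm(r - rep) < tol:
                members.append(i)
                break
        else:
            groups.append((r.copy(), [i]))
    return [members for _, members in groups]

# Two hypothetical semirigid bodies: residues 0-2 share one rotation
# vector, residues 3-5 another.
R = np.array([[0.0, 0.0, 1.0], [0.01, 0.0, 1.0], [0.0, 0.02, 1.0],
              [1.0, 0.0, 0.0], [1.0, 0.01, 0.0], [1.02, 0.0, 0.0]])
print(group_rotation_vectors(R))  # [[0, 1, 2], [3, 4, 5]]
```

The full method additionally weighs external against internal motion before accepting a group as a rigid body; this sketch only shows why equal rotation vectors fall into one cluster.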

For example, the objects may be chemical compounds. The individual components of a data vector are called features and may, for example, be molecular descriptors (see Chapter 8) specifying the chemical structure of an object. For statistical data analysis, these objects and features are represented by a matrix X which has a row for each object and a column for each feature. In addition, each object will have one or more properties that are to be investigated, e.g., a biological activity of the structure or a class membership. This property or properties are merged into a matrix Y. Thus, the data matrix X contains the independent variables, whereas the matrix Y contains the dependent ones. Figure 9-3 shows a typical multivariate data matrix.  [c.443]
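The matrix layout described here can be made concrete with a tiny invented dataset; the feature values and activities below are illustrative only:

```python
import numpy as np

# Three hypothetical compounds (objects), each described by two
# molecular descriptors: rows = objects, columns = features.
X = np.array([[1.2, 0.5],    # compound A
              [0.9, 1.1],    # compound B
              [2.0, 0.3]])   # compound C

# One dependent property per object, e.g. a measured activity.
Y = np.array([[3.1],
              [2.8],
              [4.0]])

print(X.shape, Y.shape)  # (3, 2) (3, 1)
```

Keeping X and Y row-aligned (one row per object in both) is what lets regression or classification methods map descriptors to properties directly.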

See pages that mention the term OBJECTS OF THE ANALYSIS : [c.163]    [c.170]    [c.181]    [c.357]    [c.230]    [c.300]    [c.178]    [c.342]    [c.208]   