Objective meaning


Life-cycle analysis, in principle, allows an objective and complete view of the impact of processes and products on the environment. For a manufacturer, life-cycle analysis requires an acceptance of responsibility for the impact of manufacturing in total. This means not just the manufacturer's operations and the disposal of waste created by those operations but also those of raw materials suppliers and product users.  [c.296]

Oil and gas reservoirs are rarely as simple as early maps and sections imply. Even though this is often recognised, development proceeds with the limited data coverage available. As more wells are drilled and production information is generated, early geological models become more detailed and the reservoir becomes better understood. It may become possible to identify reserves which are not being drained effectively and which are therefore potential candidates for infill drilling. Infill drilling means drilling additional wells, often between the original development wells, with the objective of producing oil that has not yet been recovered.  [c.351]

Therefore an automatic method, meaning an objective and reproducible process, is necessary to determine the threshold value. The results of these investigations show that the threshold value can be determined reproducibly at the point of intersection of two normally distributed frequency approximations.  [c.14]
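As an illustration of this thresholding idea, the following sketch places the threshold at the intersection of two fitted normal densities, one for background grey values and one for indications. The means and standard deviations are invented example values, not taken from the study.

```python
import numpy as np

def gaussian_intersection(mu1, s1, mu2, s2):
    # Equate the two normal densities and solve the resulting quadratic
    # in x; the coefficients come from the log-density difference.
    a = 1.0 / (2 * s2**2) - 1.0 / (2 * s1**2)
    b = mu1 / s1**2 - mu2 / s2**2
    c = mu2**2 / (2 * s2**2) - mu1**2 / (2 * s1**2) + np.log(s2 / s1)
    roots = np.roots([a, b, c])
    # Keep the intersection that lies between the two means.
    return [r for r in roots if min(mu1, mu2) <= r <= max(mu1, mu2)][0]

# Invented example: background around grey value 60, indications around 140.
threshold = gaussian_intersection(mu1=60.0, s1=12.0, mu2=140.0, s2=20.0)
print(f"threshold grey value: {threshold:.1f}")   # about 91.5
```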

Another important issue for modern microscopy is three-dimensional information. Most existing microscopes can visualize either the object surface or a transmission image through a thin section. This means that the three-dimensional internal structure of an object can only be investigated destructively. Even with the most delicate preparation or cutting methods, the specimen structure can change dramatically. For living or exceptional objects, cutting is not possible at all.  [c.579]

In NDT the key question is the reliability of the methods, verified by validation. For visual methods, the reliability, that is, the probability of visual recognition, depends on the optical properties of the scene containing the objects. These properties were determined quantitatively with an image-processing system for fluorescent magnetic particle inspection. This system was used to determine the quality of detection media, following the type testing of the European standard prEN 9934-2. Another application was the determination of visibility as a function of inspection parameters such as magnetization, application of the detection medium, inclination of the test surface, surface conditions, and viewing conditions. For standardized testing procedures and realistic parameter variations, the probability of detection may be determined on the basis of physiological lighting as a function of the defect dimensions.  [c.669]

The arrangement consists of a standard video camera and a video-capturing system ("frame grabber") which transfers the video signal to the PC. The image-processing program (under WINDOWS 95) allows a flexible evaluation of the scene, that is, the determination of the visibility level or a proportional value. The key of the program is a contour-following algorithm which surrounds all objects (indications) above a predetermined luminance level. The luminances (grey values) of the surrounded object are summed up, which represents the light stream of the indication. For the valuation of the indication, the corresponding light stream of the surroundings may be subtracted (i.e., the difference of the luminances Lo - Ls).  [c.671]
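A minimal sketch of this valuation step follows, with connected-component labelling standing in for the contour-following algorithm; the synthetic image, the luminance level, and the three-pixel ring of surroundings are invented example values.

```python
import numpy as np
from scipy import ndimage

# Synthetic scene: two bright indications on a noisy background.
image = np.random.uniform(0, 40, (200, 200))
image[50:60, 50:58] += 220.0
image[120:128, 140:150] += 200.0
level = 120.0                               # predetermined luminance level

mask = image > level                        # pixels above the level
labels, n = ndimage.label(mask)             # segmented indications
for k in range(1, n + 1):
    obj = labels == k
    lo = image[obj].sum()                   # light stream of the indication
    ring = ndimage.binary_dilation(obj, iterations=3) & ~obj
    ls = image[ring].mean() * obj.sum()     # surroundings, scaled to same area
    print(f"indication {k}: Lo - Ls = {lo - ls:.0f}")
```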

The development and improvement of the scientific and technical level of NDT and TD means for safety requires additional investment, which must be taken into account at the design stage of new technogenic objects, when solving newly arising problems in social, economic, ecological and medical safety. It is not accidental that the expenses for safe nuclear power plant operation amount to 50% of the total capital investment in construction work. That is why the investment in NDT and TD should amount to 10% of the total cost of development and manufacturing of any product.  [c.915]

Historically the legal framework in the North Sea has been prescriptive in nature, that is, specifying through statute precisely what should be undertaken and when. Following Piper Alpha, a comprehensive review of all aspects of health and safety in the North Sea was undertaken by Lord Cullen and his team. The resulting Cullen Report (ref 1) included consideration of the way in which other industries in the UK were regulated and in particular the goal-setting philosophy advocated in the Robens Report (ref 2), which was published in 1974. In this report Robens recognised that the best interests of health and safety require the commitment and involvement of two key parties - those who create the risks and those who are affected by them. The role of the government in this approach should be to set the minimum objectives and enforce them, not dictate the detail and prescribe the means to meet the set objectives. Robens stated in his report:  [c.1010]

Unlike most words in a glossary of terms associated with the theoretical description of molecules, the word symmetry has a meaning in every-day life. Many objects look exactly like their mirror image, and we say that they are symmetrical or, more precisely, that they have reflection symmetry. In addition to having reflection symmetry, a pencil (for example) is such that if we rotate it through any angle about its long axis it  [c.136]

In this case, the scattering serves as a means for counting the number of molecules (or particles, or objects) per unit volume (N/V). It is seen that the polarizability, a, will be greater for larger molecules, which will scatter more. If we take the Clausius-Mosotti equation [16]  [c.1389]
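The equation itself is truncated in this excerpt; for reference, the Clausius-Mossotti relation in its usual (Gaussian-units) form is

$$\frac{\varepsilon - 1}{\varepsilon + 2} = \frac{4\pi}{3}\,\frac{N}{V}\,\alpha ,$$

where $\varepsilon$ is the relative dielectric constant (equal to the square of the refractive index at optical frequencies) and $\alpha$ is the molecular polarizability.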

In this chapter, we have reviewed the general principles of the scattering of light, neutrons and x-rays from matter and the data treatments for the different states of matter. The interaction between radiation and matter has the same formalism for all three cases of scattering, but the difference arises from the intrinsic property of each radiation. The major difference in data treatments results from the different states of matter. Although we have provided a broad overview of the different treatments, the content is by no means complete. Our objective in this chapter is to provide the reader a general background for the applications of scattering techniques to materials using light, neutrons and x-rays.  [c.1417]

The ordinary way to get acquainted with objects like the non-adiabatic coupling terms is to derive them from first principles, via ab initio calculations [4-6], and study their spatial structure—somewhat reminiscent of the way potential energy surfaces are studied. However, this approach is not satisfactory because the non-adiabatic coupling terms are frequently singular (in addition to being vectors), and therefore theoretical means should be applied in order to understand their role in molecular physics. During the last decade, we followed both courses but our main interest was directed toward studying their physical-mathematical features [7-13]. In this process, we revealed (1) the necessity to form sub-Hilbert spaces [9,10] in the region of interest in configuration space and (2) the fact that the non-adiabatic coupling matrix has to be quantized for this sub-space [7-10].  [c.636]

Another misleading feature of a dataset, as mentioned above, is redundancy. This means that the dataset contains too many similar objects contributing no new information.  [c.206]

Let us first define the information content per object. A (discrete) system can be split into classes of equivalence, whose number can vary from 1 to n, where n is the number of the elements (objects) in the system. No element can belong simultaneously to more than one class. Therefore, the information content (IC) of the system is additive, at least class-wise. This means that the information content of the system is the sum of the information contents of the classes. The IC of a class can be given by Eq. (2), where h_i is the number of elements in the i-th class. (Recall, also, that log(x) = -log(1/x).)  [c.212]
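Eq. (2) is not reproduced in this excerpt; the sketch below assumes the standard Shannon-type form IC_i = h_i log2(n/h_i) for the i-th class, which is consistent with the additivity and the log(x) = -log(1/x) remark, but the exact equation of the source may differ.

```python
import math

def information_content(class_sizes):
    # Class-wise information content, summed over all equivalence classes
    # (assumed form: h_i * log2(n / h_i) for a class of h_i elements).
    n = sum(class_sizes)
    return sum(h * math.log2(n / h) for h in class_sizes)

# Eight objects split into three equivalence classes of sizes 4, 2 and 2.
print(information_content([4, 2, 2]))   # 12.0 bits
```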

The first few principal components store most of the relevant information, the rest being merely noise. This means that one can use two or three principal components and plot the objects in two- or three-dimensional space without losing information.  [c.213]
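A minimal sketch of this projection, using scikit-learn on stand-in random data:

```python
import numpy as np
from sklearn.decomposition import PCA

X = np.random.rand(100, 20)             # 100 objects, 20 descriptors
pca = PCA(n_components=2)
scores = pca.fit_transform(X)           # 2D coordinates for plotting
print(pca.explained_variance_ratio_)    # variance captured per component
```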

The easiest way to extract a set of objects from the basic dataset, in order to compile a test set, is to do so randomly. This means that one selects a certain number of compounds from the initial (primary) dataset without considering the nature of these compounds. As mentioned above, this approach can lead to errors.  [c.223]
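For example, a purely random selection might look like the following sketch, where the split fraction is an arbitrary choice:

```python
import random

dataset = list(range(1000))              # stand-in for the primary dataset
test = set(random.sample(dataset, 200))  # 20% drawn purely at random
training = [c for c in dataset if c not in test]
```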

The characteristic of a relational database model is the organization of data in different tables that have relationships with each other. A table is a two-dimensional construction of rows and columns. All the entries in one column have an equivalent meaning (e.g., name, molecular weight, etc.) and represent a particular attribute of the objects (records) of the table (file) (Figure 5-9). The sequence of rows and columns in the table is irrelevant. Different tables (e.g., different objects with different attributes) in the same database can be related through at least one common attribute. Thus, it is possible to relate objects within tables indirectly by using a key. The range of values of an attribute is called the domain, which is defined by constraints. Schemas define and store the metadata of the database and the tables.  [c.235]
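A hedged sketch of these ideas in SQLite (the table and column names are invented): two tables share the common attribute compound_id, which acts as the key relating them.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE compound (id INTEGER PRIMARY KEY, name TEXT, mw REAL)")
con.execute("CREATE TABLE spectrum (id INTEGER PRIMARY KEY, "
            "compound_id INTEGER REFERENCES compound(id), technique TEXT)")
con.execute("INSERT INTO compound VALUES (1, 'benzene', 78.11)")
con.execute("INSERT INTO spectrum VALUES (10, 1, 'IR')")

# The common attribute (the key) relates records across the two tables.
for row in con.execute("SELECT c.name, s.technique FROM compound c "
                       "JOIN spectrum s ON s.compound_id = c.id"):
    print(row)   # ('benzene', 'IR')
```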

The facts, in particular the dependence of first-order rate upon the concentration of acetyl nitrate (Appendix), could not be accounted for if protonated acetyl nitrate were the reagent. The same objections apply to the free nitronium ion. It might be possible to devise a means of generating dinitrogen pentoxide which would account for the facts of zeroth- and first-order nitration, but the participation of this reagent could not be reconciled with the anticatalysis by nitrate of first-order nitration.  [c.104]

The foregoing is by no means a comprehensive list of the remarkable structures formed by the crystallization of polymers from solution. The primary objective of this brief summary is the verification that single crystals can be formed and characterized, not only by x-ray diffraction, but by direct electron-microscopic observation. These single crystals are mostly formed from dilute solution, however, and our concern throughout previous parts of this chapter has been the crystallization of polymer from the melt. What is the relationship between these idealized single crystals and the morphological forms which result from crystallization of bulk polymer? We begin to answer this question by considering the microscopic examination of material resulting from the crystallization of polymer melt.  [c.240]

A second important mode of electron microscopy is transmission electron microscopy (TEM) (11). An image of a sample in TEM is obtained using the transmission of electrons by a sample in a method analogous to optical microscopy using photons. Thus, this method provides a magnified image of a transparent sample with an electron beam using objective and projector lenses as shown in Figure 2. Obviously, the requirement for an electron beam-transparent sample is the most important limitation of this approach. For most materials, this means that the sample thickness must be on the order of ca 20-200 nm. Given the extremely small thickness of TEM samples, the surface area-to-bulk volume ratio is very large; therefore, TEM provides information which is essentially surface in nature.  [c.271]

Miniature Electrical Components. In the manufacture of miniature transformers and motor armatures, it is often necessary that the winding be electrically isolated from the bobbin. The insulating varnish of fine copper wire often does not survive the rigors of the winding operation, and, particularly in the case of the smaller devices, the bobbin requires insulation. However, a thicker insulation than is absolutely necessary takes up space that otherwise could be used for more turns of wire. Thus, thickness control in the coating of the bobbin means better performance of the finished device. Parylene is used in the manufacture of high quality miniature stepping motors, such as those used in wristwatches, and as a coating for the ferrite cores of pulse transformers, magnetic tape-recording heads, and miniature inductors, where the abrasiveness of the ferrite is particularly damaging. In the coating of complex, tiny objects such as these, the VDP process has an extra labor-saving advantage. It is possible to coat thousands of such articles simultaneously by tumbling them during the VDP operation (65).  [c.442]

In most leaching operations the maintenance of constant fluid flows, pressures, and temperatures is important. These, together with the need to provide a sufficient contact time between the solvent and the solids, usually indicate a need for continuous, multistage, countercurrent processes in which fresh solvent is fed to the final stage while the solids are fed to the first stage. The objective is to be able to operate at steady conditions, and to be able to avoid extraction of undesirable material while preventing loss of solvent for both economic and safety reasons. This is usually achieved through the use of the usual control equipment, and recording instruments provide a useful means of studying plant performance. There are other factors which must be taken into account in the early stages of a design, such as the particle size of the solid and the solvent employed.  [c.88]

Chemical analysis of the metal can serve various purposes. For the determination of the metal-alloy composition, a variety of techniques has been used. In the past, wet-chemical analysis was often employed, but the significant size of the sample needed was a primary drawback. Nondestructive, energy-dispersive x-ray fluorescence spectrometry is often used when no high precision is needed. However, this technique only allows a surface analysis, and significant surface phenomena such as preferential enrichments and depletions, which often occur in objects having a burial history, can cause serious errors. For more precise quantitative analyses samples have to be removed from below the surface to be analyzed by means of atomic absorption (82), spectrographic techniques (78,83), etc.  [c.421]

If the temperature of the space in which an object is placed were truly constant, a sealed case having a constant absolute humidity would also have a constant relative humidity. Because temperature is subject to some variations and totally leakproof cases are not easy to build, a second solution is often sought by placing the objects in reasonably well-sealed cases in which the relative humidity is kept at a constant value by means of a buffering agent.  [c.429]

Figure 2 shows the three yield levels in Table 2 together with the percentage of the U.S. area needed to supply SNG from biomass for any selected gas demand. Although relatively large areas are required, the use of land- or freshwater-based biomass for energy applications is still practical. The area distribution pattern of the United States (Table 3) shows selected areas or combinations of areas that might be utilized for biomass energy applications (9), ie, areas not used for productive purposes. It is possible that biomass for both energy and foodstuffs, or energy and forest products applications, can be grown simultaneously or sequentially in ways that would benefit both. Relatively small portions of the bordering oceans also might supply needed biomass growth areas, ie, marine plants would be grown and harvested. The steady-state carbon supplies in marine ecosystems can conceivably be increased under controlled conditions over 1993 low levels by means of marine biomass energy plantations in areas of the ocean dedicated to this objective.  [c.11]

Wells can be broadly divided into two groups in terms of how logging operations should be prioritised: information wells and development wells. Exploration and appraisal wells are drilled for information, and failure to acquire log data will compromise well objectives. Development wells are primarily drilled as production and injection conduits, and whilst information gathering is an important secondary objective it should normally remain subordinate to well integrity considerations. In practical terms this means that logging operations will be curtailed in development wells if hole conditions deteriorate. This need not rule out further data acquisition, as logging-through-casing options still exist.  [c.131]

For sensitivity detection the standard defectometers were used. A comparison of relative sensitivity in the radiographic inspection of steel objects with radiation energies of 25 and 45 MeV shows that the sensitivity minimum for the 45 MeV energy is strongly displaced towards greater thicknesses and has not yet reached its minimum value at a thickness of 500 mm.  [c.515]

As a rule, to provide the technogenic safety of complicated objects it is necessary to control a wide range of different parameters, such as kinematic (time, velocity, flow, etc.), static and dynamic (mass, force, pressure, energy, etc.), mechanical (specific weight, substance quantity, density, etc.), geometrical, electrical, thermal, magnetic, acoustic and other physical parameters. Nowadays, the role of methods and means for defectoscopy, introscopy, structuroscopy, dimensional measurements and monitoring of the physical-mechanical parameters of materials and units has been considerably extended. For the inspection of objects with a rotational principle of operation, such as turbines, generators, electric motors, compressor installations and others, the best results may be obtained with the help of vibrodiagnostic methods and means.  [c.911]

The main peculiarity of the development of NDT and TD means for safety is their continuous intellectualization. Objective, high-level information about the condition of the inspected object, the state of the environment and other events may be obtained with the help of complex systems. Such systems may be integrated on the basis of different methods corresponding to specific physical phenomena. Diagnostic systems may be combined in accordance with different physical investigation methods. Still, the main problem remains the establishment of a statistical database. In this database, information about defects, accidents or emergency situations must be kept, as well as the results of developing the scientific physical-mathematical basis of quality metering, that is, the quantitative estimation of the quality and condition of objects and media. At the same time, the task of processing multidimensional and multiparametric information must be solved, where the information is received from various physical fields and phenomena by means of improved models of transducers, measuring channels and algorithms.  [c.915]

Abstract. The paper presents basic concepts of a new type of algorithm for the numerical computation of what the authors call the essential dynamics of molecular systems. Mathematically speaking, such systems are described by Hamiltonian differential equations. In the bulk of applications, individual trajectories are of no specific interest. Rather, time averages of physical observables or relaxation times of conformational changes need to be actually computed. In the language of dynamical systems, such information is contained in the natural invariant measure (infinite relaxation time) or in almost invariant sets (large finite relaxation times). The paper suggests the direct computation of these objects via eigenmodes of the associated Frobenius-Perron operator by means of a multilevel subdivision algorithm. The advocated approach is different from both Monte-Carlo techniques on the one hand and long-term trajectory simulation on the other hand: in our setup long-term trajectories are replaced by short-term sub-trajectories, and Monte-Carlo techniques are connected via the underlying Frobenius-Perron structure. Numerical experiments with the suggested algorithm are included to illustrate certain distinguishing properties.  [c.98]
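To make the Frobenius-Perron idea concrete, here is a hedged sketch of the classical Ulam-type discretization, a simpler relative of the multilevel subdivision algorithm and not the authors' method itself: partition the state space into boxes, estimate transition probabilities from many short sub-trajectories, and read the invariant measure and almost invariant sets off the dominant eigenvectors. A one-dimensional toy map stands in for the molecular system.

```python
import numpy as np

def ulam_matrix(step, n_boxes=50, samples_per_box=200, seed=0):
    # Row-stochastic transition matrix between boxes of [0, 1).
    rng = np.random.default_rng(seed)
    edges = np.linspace(0.0, 1.0, n_boxes + 1)
    P = np.zeros((n_boxes, n_boxes))
    for i in range(n_boxes):
        x = rng.uniform(edges[i], edges[i + 1], samples_per_box)
        y = step(x)                                   # short-term propagation
        j = np.clip(np.searchsorted(edges, y) - 1, 0, n_boxes - 1)
        np.add.at(P[i], j, 1.0)
    return P / samples_per_box

# Toy system: a perturbed doubling map on [0, 1).
P = ulam_matrix(lambda x: (2.0 * x + 0.05 * np.sin(2 * np.pi * x)) % 1.0)
vals, vecs = np.linalg.eig(P.T)                       # left eigenvectors
order = np.argsort(-np.abs(vals))
density = np.real(vecs[:, order[0]])
density /= density.sum()          # eigenvalue 1: the invariant measure
# Real eigenvalues just below 1 signal almost invariant sets; the sign
# structure of their eigenvectors separates those sets.
```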

An example of how redundancy in a dataset influences the quality of learning follows. The problem involved the classification of objects in a dataset through their biological activities with the help of Kohonen Self-Organizing Maps. Three subgroups were detected. The first one contained highly active compounds, the second subgroup comprised compounds with low activity, and the rest, the intermediately active compounds, fell into the third subgroup. There were 91 highly active, 540 intermediately active, and 492 inactive compounds. As one can see, the dataset was not balanced, in the sense that the intermediately active and the inactive compounds outnumbered the active compounds. A first attempt at balancing the dataset by means of a mechanical (i.e., by chance) removal of the intermediately active and the inactive compounds decreased the quality of learning. To address this problem, an algorithm for finding and removing redundant compounds was elaborated.  [c.207]
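The algorithm itself is not given in this excerpt; one plausible redundancy filter, shown below purely as an assumption-laden sketch, greedily drops any compound whose descriptor vector lies closer than a cutoff to an already retained compound, so that near-duplicates of the over-represented classes are removed instead of randomly chosen compounds.

```python
import numpy as np

def remove_redundant(X, cutoff):
    # Keep a compound only if it is at least `cutoff` away (Euclidean
    # distance in descriptor space) from every compound kept so far.
    kept = []
    for i, x in enumerate(X):
        if all(np.linalg.norm(x - X[j]) >= cutoff for j in kept):
            kept.append(i)
    return kept

X = np.random.rand(540, 8)          # stand-in descriptors of one subgroup
kept = remove_redundant(X, cutoff=0.4)
print(f"{len(kept)} of {len(X)} compounds retained")
```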

Object-oriented programming combines abstract data types, which means types consisting of variables and operations on these variables, with the concept of inheritance. The prototype is a class, which is the basis of an arbitrary number of objects. Inheritance means that a new class can be derived from a class that already exists. The derived class owns the variables and operations of the class from which it originates, but further variables or operations can be added to them. This enables the adaptation of a basic class to what is actually needed. When the code is being compiled it is often clear which class within the class hierarchy has to be used. But this is not necessarily the case in a dynamic program flow. Imagine a molecule viewer that draws a molecule alternatively with van der Waals radii or as a ball-and-stick diagram, depending on the user's choice. The corresponding basic class molecule has two child classes with drawing functions: the van der Waals radii class and the ball-and-stick class. The goal is that, depending on the user's choice, the correct drawing function is used for the molecule, either the one for the van der Waals radii or the one for the ball-and-stick diagram. The technique that can be used to realize this flexible behavior is called dynamic polymorphism (late binding). It can be used to decide, at run time, which object has to be shown and which corresponding object function has to be used. Another advantage is that this principle eases expandability. In our example a new molecule class can be realized by adding a new child class of the basic class molecule, whereas most parts of the code do not have to be touched.  [c.628]
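A minimal late-binding sketch in Python (the class and method names are invented for illustration; the text describes the concept, not this code):

```python
class Molecule:
    """Basic class: every molecule object can draw itself."""
    def draw(self):
        raise NotImplementedError

class VanDerWaalsMolecule(Molecule):
    def draw(self):                 # overrides the base operation
        print("rendering van der Waals spheres")

class BallAndStickMolecule(Molecule):
    def draw(self):
        print("rendering ball-and-stick diagram")

def render(molecule):
    # Late binding: which draw() runs is decided at run time by the
    # actual class of the object passed in, not when the code is written.
    molecule.draw()

choice = "ball_and_stick"           # e.g., taken from the user interface
mol = BallAndStickMolecule() if choice == "ball_and_stick" else VanDerWaalsMolecule()
render(mol)                         # prints: rendering ball-and-stick diagram
```

Adding a new representation then only requires a further child class with its own draw(); render() itself stays untouched.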

Deciding on the main objective for buying a mass spectrometer is an obvious first step, but it is essential in achieving a satisfactory result. Before approaching suppliers of commercial mass spectrometers, it is wise to set out on paper the exact analytical requirements for both the immediate and near future. The speed of advance in science — especially in analysis and mass spectrometry — means that long-range prognostications of future requirements are likely to be highly speculative and therefore of little relevance.  [c.275]

The application of scientific techniques to the study of art objects is an interdisciplinary undertaking (1-7). The physical scientist is trained to approach a stated problem by analysing for the identification of measurable variables and devising means to obtain numerical values for these variables. On the other hand, the art historian relies on the trained eye, enabling visual recognition of stylistic characteristics and the more subjective comparison of these with observations about numerous other art works. Communication between these specialists has required mutual efforts. The development of scientific examinations of art objects has had a synergistic relation with the growth of a new profession, that of the art conservator, a specialist having both scientific and artistic training. The conservator consults and collaborates with both scientists and curators, providing appropriate care to objects in the collections to promote long-term preservation.  [c.416]

Examinations utilizing uv or ir radiation are frequently used. Ultraviolet light has been in use for a long time in the examination of paintings and other objects, especially for the detection of repairs and restorations (32). Because of the variations in fluorescent behavior of different materials having otherwise similar optical properties, areas of repaint, inpainting, or replacements of losses with color-matched filling materials often can be observed easily. Also, the fluorescence of many materials changes with time as a result of various chemical aging processes, thus providing a means of distinguishing fresh and older surfaces (2,33). Infrared irradiation is often used in the examination of paintings because, owing to the limited absorption by the organic medium, ir light can penetrate deeply into the paint layers. It is reflected or absorbed in varying degrees by different pigments. Study of the reflected ir image may enable the detection of changes in composition, pentimenti, ie, changes made by the artist to already painted areas, restorations, and, especially important, underdrawings, ie, working drawings applied by the artist on the prepared ground surface. Infrared illumination is also used frequently in the examination of paper artifacts, for example, writing or drawing in now faded ink can sometimes be legible under infrared light. Infrared-sensitive photographic films or vidicon cameras are used to transform the reflected ir image into a visible one (34,35). The vidicon camera with its wider spectral sensitivity extends the wavelength range beyond the limits imposed by the sensitivity of photographic emulsions and, when the image is digitized, allows for computer-based image manipulation (see Infrared technology and Raman spectroscopy; Spectroscopy, optical).  [c.417]

The first forming technique, hammering of the metal, evolved into many different and highly complex cold-working techniques, such as raising, sinking, spinning, turning, relief application by means of repoussé, and, for the production of coinage, striking. A second class of forming techniques was developed upon the ability to melt metals. Originally, casting was done in open stone molds to produce relatively simple forms such as ax heads. The invention of lost-wax casting was the greatest breakthrough in this area. In its simplest variation, a model of the object is made in wax. Subsequently, an outer layer or investment, made of refractory clay, is applied to the wax model. Heating of the assembly results in the molten wax running out, leaving a hollow space within the investment, into which metal can be poured. This method allows for the casting of solid objects only, requiring large amounts of expensive metal for the fabrication of sizable objects. A significant improvement was the development of casting around a core, in which the model of the object is made by forming the approximate shape of the object out of a clay and sand mixture, upon which a thin wax layer is applied. The final modeling is done in the wax. Again, an investment is applied and the wax removed by melting. A narrow space is left between the core and investment, into which metal is poured. The object as cast needs finishing work to remove traces of the casting process. Then, especially with sculpture, the surface is often embellished through polishing, gilding, or deliberate patination.  [c.421]

Examination. Examinations of metal objects generally include the characterization of the metal, determination of the techniques involved in the manufacture, and study of aging phenomena. Of the latter, the state of corrosion is especially important, both in the examination of an object with the purpose of determining its authenticity, as well as in making an assessment of its state if conservation is needed (78,79). The layer of corrosion products covering the surface of the metal, the so-called patina, can be studied for its composition and structure (see Corrosion and corrosion control). Identification of corrosion products is often performed by means of x-ray diffraction analysis. The structure of the patina is studied using the low power stereomicroscope, where attention is directed to the growth pattern, crystal size, layering, and adherence to the metal. Cross sections prepared from samples can be studied under higher power reflected light microscopes to determine the extent of penetration of corrosion processes into the interior of the metal, or the inter- and intragranular corrosion. The examination with the low power microscope also yields information regarding wear patterns, eg, sharp or rounded edges of incised lines, and manufacturing techniques, tool marks, mold marks, etc. When the structure of the metal is studied through examination of an etched sample under high magnification, conclusions can be drawn regarding metallurgical techniques used in the manufacture such as various types of cold working, annealing, casting with preheated or cold molds, etc (80,81).  [c.421]

This concept of restoration has been left behind. The modern conservator defines as admissible restoration the compensation of those losses which render the object unreadable, ie, disfigure the object to an extent that the intent of the artist is obscured. The normal visual effects of age are accepted as such and are not a priori subject to modification. In the twentieth century, it was realized that a change in approach to restoration was necessary. Expertise in chemistry, physics, and materials science has been brought to bear both to define the roots of the problems that plague a work of art and to devise a means by which to arrest the deterioration processes. A new type of specialist, the conservator, arose who possesses the necessary manual skills and talents, and, moreover, has a good grasp of scientific methodology and a sufficient knowledge of chemistry and physics to be able to understand the basic causes of deterioration mechanisms, to devise effective treatments of objects, and to interact with the scientific experts who are called upon to assist in the analysis of the problems or the testing of proposed treatments.  [c.424]

An alternative to macroclimate systems is the creation of microclimates. The objects are placed within smaller spaces, such as cases, in which an ideal environment is maintained. One possibility is to install equipment to control the climate in individual cases, or groups of cases with similar materials, by mechanical means.  [c.429]


See pages that mention the term Objective meaning: [c.710]    [c.171]    [c.178]    [c.912]    [c.1243]    [c.56]    [c.513]    [c.540]    [c.521]    [c.285]    [c.417]    [c.51]
Automotive quality systems handbook (2000) -- [ c.559 ]