Objectives measurement


J. J. Powers and H. R. Moskowitz, J. Soc. Test. Mater. Spec. Tech. Publ. 594, 35 (1974); R. A. Scanlan, ed., Flavor Quality: Objective Measurement, ACS Symposium Series No. 51, Washington, D.C., 1977.  [c.7]

Image Evaluation. The subjective quality of a developed silver image depends on the color tone of the developed silver, brightness reproduction of the original scene (35), and perceived graininess and sharpness. Certain objective measurements and analyses correlate with these subjective qualities of a developed image. For example, quantitative optical density measurements and the corresponding D-log H curve analysis are used to monitor tone and brightness reproduction. Quantitative and objective analysis techniques have also been developed for graininess and sharpness evaluation. The silver image is granular because it is composed of a random distribution of discrete specks and filamentary clumps of metallic silver. Consequently the photographic image is inherently inhomogeneous and nonuniform, which becomes increasingly apparent under increasing magnification. Such nonuniformity is referred to as graininess. If a given field of the silver image is optically scanned with a density measuring device, a microdensitometer.  [c.459]

The properties of the finished beer vary with the type of beer and place of origin. The figures in Table 1 do not, however, show much about the quality of the beer; this can only partly be expressed in figures based on objective measurements. The quality consists of aroma, taste, appearance (color, clarity), and formation and stability of foam. Of these, the first two are still inaccessible to objective measurement. Although the aroma of a product is determined by the quantity of volatile alcohols, etc., the quality of the product cannot be expressed in those terms. Appearance, foam formation, and foam stability can be evaluated more easily. For judgment on taste and aroma, taste-testing panels are the only method.  [c.13]

Objective, measurable criteria are always best, while subjective criteria are risky and subject to interpretation. There should be no room for doubt or ambiguity, although this is often difficult to achieve. It is also important to be clear about what the project output is expected to accomplish. For instance, these three outcomes may produce entirely different results: the project/product performs the specified functions; it was built according to approved design; or it solves the client's problem.  [c.839]

Whereas pattern (b) is intuitively the most complex of the three patterns, it has neither the highest entropy (which belongs to pattern (c)) nor the lowest (which belongs to pattern (a)). Indeed, were we to plot our intuitive sense of complexity as a function of the amount of order or disorder in a system, it would probably look something like that shown in figure 12.2. The problem is to find an objective measure of the complexity of a system that matches this intuition.  [c.614]

For binary vapor-liquid equilibrium measurements, the parameters sought are those that minimize the objective function  [c.98]
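To make the idea of such an objective function concrete, here is a hedged sketch: the model (a one-parameter Margules activity coefficient), the function names, and the crude grid search are all illustrative assumptions, not the procedure or equations of the source text. The fitted parameter is the one minimizing the sum of squared deviations between measured and calculated total pressures.

```python
import math

def margules_gamma(x1, A):
    """One-parameter Margules activity coefficients (illustrative model)."""
    x2 = 1.0 - x1
    return math.exp(A * x2 * x2), math.exp(A * x1 * x1)

def objective(A, data, p1_sat, p2_sat):
    """Sum of squared deviations between measured and calculated pressures.

    data: list of (x1, p_measured) pairs for the binary system.
    """
    sse = 0.0
    for x1, p_meas in data:
        g1, g2 = margules_gamma(x1, A)
        p_calc = x1 * g1 * p1_sat + (1.0 - x1) * g2 * p2_sat
        sse += (p_calc - p_meas) ** 2
    return sse

def fit_A(data, p1_sat, p2_sat):
    """Crude 1-D grid search for the minimizing parameter (illustrative only)."""
    return min((objective(a / 100.0, data, p1_sat, p2_sat), a / 100.0)
               for a in range(-300, 301))[1]
```

In practice a proper optimizer (e.g. a least-squares routine) replaces the grid search; the point here is only the structure of the minimized objective.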

An example of an application of CAO is its use in optimising the distribution of gas in a gas lift system (Fig. 11.3). Each well will have a particular optimum gas-liquid ratio (GLR), which would maximise the oil production from that well. A CAO system may be used to determine the optimum distribution of a fixed amount of compressed gas between the gas lifted wells, with the objective of maximising the overall oil production from the field. Measurement of the production rate of each well and its producing GOR (using the test separator) provides a CAO system with the information to calculate the optimum gas lift gas required by each well, and then to distribute the available gas lift gas (a limited resource) between the producing wells.  [c.282]
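The allocation step described above can be sketched as a greedy assignment of discrete gas increments to whichever well gains the most oil from the next unit. This is a minimal sketch assuming concave well response curves; the function names and curves are my illustration, not part of any CAO product.

```python
def allocate_gas(response_curves, total_units):
    """Greedily assign discrete gas-lift units to wells.

    response_curves: one function per well, oil rate vs. gas units
    (assumed concave, so the greedy choice is optimal).
    Returns the per-well allocation.
    """
    alloc = [0] * len(response_curves)
    for _ in range(total_units):
        # Marginal oil gain of one more gas unit for each well
        gains = [f(a + 1) - f(a) for f, a in zip(response_curves, alloc)]
        best = max(range(len(gains)), key=gains.__getitem__)
        if gains[best] <= 0:
            break  # extra gas would not raise production
        alloc[best] += 1
    return alloc
```

For concave curves this marginal-gain rule reproduces the optimum allocation; with non-concave responses a global search would be needed instead.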

A number of real objects with artificially made disbonds were tested using the Fokker Bond Tester, and spectra were stored in a PC for the classification. One of the objects, "Lower wing skin", is shown in Figure 4. As can be seen, the positions and sizes of flaws are marked. The same marks were also drawn on the actual objects to facilitate measurements.  [c.109]

The NSC was trained using labeled data acquired during inspection of objects with known defects. Examples of spectra for the object "Lower wing skin" are shown in Figure 5: the spectra measured for the flawless structures with different numbers of layers in the upper panel, and the spectra corresponding to 100% and 50% disbonds in the middle and lower panels, respectively. The size of the disbonds is given as a percent of the active surface of the probe used for the test.  [c.109]

The penetration of microwaves in various materials gives active microwave imaging a large potential for subsurface radar, civil engineering, etc. Several inverse-scattering theories have been proposed in the scientific literature. Among them, the simplest Born-type approach, which does not take into account multiple reflections, is valid for weakly scattering objects [1-2]. To improve the quality of reconstruction, a method based on the successive application of the perturbative algorithm was developed [3]. However, the inherent approximations of this approach are not overcome in the iterative scheme. Another class of algorithms aims to obtain the spatial distribution of permittivity by using numerical solutions of exact equations [4-6]. Unfortunately, the rate of convergence of the solution to the global minimum of the cost function depends on actual contrast values, measurement error, etc. That is why the importance of a priori knowledge about the object under investigation is usually emphasized. In general, existing inversion algorithms suffer from serious problems when discontinuous profiles of high contrast, which are often encountered in practical applications, are to be reconstructed. Moreover, the frequency-swept imaging methods usually utilize reflection coefficient data measured in a very broad frequency band starting from zero frequency [1-2, 4-5]. Such methods are inappropriate from an application point of view.  [c.127]

CTB 941.2-93 defines the laboratories subject to accreditation in the National system. Among others, it notes laboratories with legal status whose testing and measurement results are used in assessing the safety of products, works, and services, and in diagnosing the technical state of safety-critical objects and vehicles. These laboratories use different NDT methods in their activities.  [c.957]

When it was first discovered that a corrosion problem was occurring on near-drum generator tubes of black liquor recovery boilers, manually manipulated ultrasonic thickness transducers were used to measure the remaining tube wall thickness. Not only was this method time consuming, it also required complete operator attention during the tedious and tiring process. The objective of manual ultrasonic thickness measurement is to obtain the thinnest wall measurement for each tube. It is not possible for a technician performing a manual inspection to perceive intricate patterns of different wastage profiles or tube characteristics. The mill is provided with and accepts a single reading under the assumption that a full-coverage manual scan had been performed to the best ability of the technician. The mill does not expect any further information.  [c.1032]

In this section we consider electromagnetic dispersion forces between macroscopic objects. There are two approaches to this problem: in the first, the microscopic model, one assumes pairwise additivity of the dispersion attraction between molecules from Eq. VI-15. This is best for surfaces that are near one another. The macroscopic approach considers the objects as continuous media having a dielectric response to electromagnetic radiation that can be measured through spectroscopic evaluation of the material. In this analysis, the retardation of the electromagnetic response from surfaces that are not in close proximity can be addressed. A more detailed derivation of these expressions is given in references such as the treatise by Russel et al. [3]; here we limit ourselves to a brief physical description of the phenomenon.  [c.232]

The classic techniques for measuring contact angle have been reviewed in detail by Neumann and Good [95]. The most commonly used method involves directly measuring the contact angle for a drop of liquid resting on a horizontal solid surface (a sessile drop) as illustrated in Fig. X-8. Commercial contact angle goniometers employ a microscope objective to view the angle directly much as Zisman and co-workers did 50 years ago [96]. More sophisticated approaches involve a photograph or digital image of the droplet [46, 97-100]. An entirely analogous measurement can be made on a sessile bubble captured at a solid-liquid interface as illustrated in Fig. X-8 [101, 102]. The use of bub-  [c.362]

S-S annihilation phenomena can be considered as a powerful tool for investigating the exciton dynamics in molecular complexes [26]. However, in systems where that is not the objective it can be a complication one would prefer to avoid. To this end, a measure of suitably conservative excitation conditions is to keep the relevant excitation parameter below 0.01. Here x is the effective rate of intrinsic energy dissipation in the ensemble if the excitation is by CW light.  [c.3023]

In order to compare two chemical (or any other) objects, e.g., two molecules, we need a measure. Plenty of similarity measures have been proposed; they are listed in Table 6-2. Generally speaking, these measures can be divided into two cases: one of qualitative characteristics, and the other of quantitative characteristics. Here we consider these two cases.  [c.304]

Consequently, we can construct a similarity measure intuitively in the following way: all matches c + d relative to all possibilities, i.e., matches plus mismatches (c + d) + (a + b), yields (c + d)/(a + b + c + d), which is called the simple matching coefficient [18]; equal weight is given to matches and mismatches. (Normalized similarity measures are called similarity indices or coefficients; see, e.g., Ref. [19].) When absence of a feature in both objects is deemed to convey no information, then d should not occur in a similarity measure. Omitting d from the above similarity measure, one obtains the Tanimoto (alias Jaccard) similarity measure (Eq. (8); see Ref. [16] and the citations therein)  [c.304]
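The two coefficients just described can be sketched directly on binary feature lists; the counts follow the a, b, c, d convention of the text (a and b mismatches, c joint presences, d joint absences), while the function names are my own.

```python
def match_counts(x, y):
    """Counts for two equal-length binary feature lists:
    a = present only in x, b = present only in y,
    c = present in both, d = absent from both."""
    a = sum(1 for p, q in zip(x, y) if p == 1 and q == 0)
    b = sum(1 for p, q in zip(x, y) if p == 0 and q == 1)
    c = sum(1 for p, q in zip(x, y) if p == 1 and q == 1)
    d = sum(1 for p, q in zip(x, y) if p == 0 and q == 0)
    return a, b, c, d

def simple_matching(x, y):
    """Simple matching coefficient: (c + d) / (a + b + c + d)."""
    a, b, c, d = match_counts(x, y)
    return (c + d) / (a + b + c + d)

def tanimoto(x, y):
    """Tanimoto (Jaccard) coefficient: drop d, giving c / (a + b + c)."""
    a, b, c, _ = match_counts(x, y)
    return c / (a + b + c)
```

Note how the only difference between the two measures is whether joint absences (d) are counted as evidence of similarity.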

In order to apply the similarity measures to the objects, the latter must be described by some characteristics.  [c.309]

If the binary descriptors for the objects s and t are substructure keys, the Hamming distance (Eq. (6)) gives the number of different substructures in s and t (components that are 1 in either s or t but not in both). On the other hand, the Tanimoto coefficient (Eq. (7)) is a measure of the number of substructures that s and t have in common (i.e., the frequency a) relative to the total number of substructures they could share (given by the number of components that are 1 in either s or t).  [c.407]
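On substructure key lists, the two quantities described here can be sketched as follows (a hedged illustration; the source's Eqs. (6) and (7) are not reproduced, and the function names are mine):

```python
def hamming(s, t):
    """Substructures present in exactly one of the two key lists."""
    return sum(1 for p, q in zip(s, t) if p != q)

def tanimoto_keys(s, t):
    """Shared substructures relative to all substructures set in either list."""
    common = sum(1 for p, q in zip(s, t) if p == 1 and q == 1)
    either = sum(1 for p, q in zip(s, t) if p == 1 or q == 1)
    return common / either
```

For example, keys [1, 1, 0, 1] and [1, 0, 1, 1] differ in two substructures and share two of the four set in either, so the Hamming distance is 2 and the Tanimoto coefficient 0.5.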

FAST System. Fabric Assurance by Simple Testing (FAST) is a system developed in the 1980s for objective measurement of those properties important to the appearance, handle, and performance of fabrics (136). The system is designed for use by garment makers and worsted wool finishers. The critical fabric properties for predicting garment performance have been defined as extensibility, shear rigidity, bending rigidity, and dimensional stability. Extensibility and shear rigidity (looseness) both affect the sewability of the fabric. Stiffness affects the handle of the fabric. Dimensional stability consists of the relaxation shrinkage, hygral expansion, and the stability of the fabric's surface layer, which in turn affects the subjective property of smoothness. Thus, the FAST system provides a means for establishing the tailorability of fabric based on subjective properties of looseness, stiffness, and stability by simple objective measurements. The system consists of a compression meter, a bending meter, an extension meter, and a dimensional stability test. Test results for a fabric are compared to a control chart showing defined tolerance limits of each measured property.  [c.463]

Objective Measurement of Wool in Australia, Australian Wool Corp., Melbourne, Australia, 1973.  [c.355]

Thermodynamic depth satisfies several welcome properties: (1) it is a purely objective measure, in the sense that a different set of measurements using the same experimental data always yields the same depth; (2) it vanishes for both completely ordered and disordered systems; (3) the complexity of merged copies of a given system increases only by the depth of the copying process (which is, by comparison, typically small); and (4) the slightly subtle, but intuitive, property of revealed probing. By this we mean that it provides the prober the ability to tailor the depth of the probe to a desired degree of resolution. As finer and finer details of the microscopic state of a gas are revealed by probing the state of the gas to great accuracy, what may initially appear to be shallow may, with successive probing, be discovered to be deep; on the other hand, the fact that successively finer probes of, say, a crystal yield effectively the same (shallow) depth at all levels reveals that the thermodynamic depth of a crystal is indeed shallow.  [c.628]

But, with the use of digitization, 2D quantitative measurements become possible in industrial radiography. These can be done with powerful tools, such as estimation of defect extension, automatic segmentation, recognition of individual defects, and image analysis (figure 7). For validation, results can be compared with destructive examination of metallic objects.  [c.503]

Taking into account that size and weight can change tremendously from one object to the next, it is obvious that the CT system had to be built in a very versatile but robust manner. For example, heavy objects have to be moved very carefully, whereas small objects have to be measured as fast and as accurately as possible. For that reason the turntable is equipped with an instrument which limits the velocity if the weight of the object is above a preselectable threshold.  [c.585]

While inspecting the parameters of vibration of rotating parts and objects using standard contact methods and devices, serious problems are usually encountered [1]. The contactless methods of vibration measurement possess many advantages as compared to the contact methods, since many hindering factors, such as the influence of the weight of the sensor on vibration parameters, problems arising from high temperature and pressure in the inspected zone, and the inconvenience of connecting the sensor to the inspected surface, are eliminated. However, some contactless sensors [1-2] also cannot be used to test vibrations of rotating objects. The distance to the object under test, the material's type, and the shape of the surface strongly influence the information parameters for a number of remote sensing methods.  [c.654]

The described microwave vibrometer can be used for automated vibration measurements and possesses a dynamic range from 1 micron to several millimeters, which is satisfactory for most applications. It demonstrates high linearity of output signal and good measurement accuracy, has virtually no limit on upper frequency, and is valid regardless of whether the reflecting surface is metallic or not. Besides, the microwave vibrometer can be used to measure the displacement of rapidly rotating objects (magnitude and frequency) of cylindrical or conical shape and to identify manufacturing defects by comparing the measured vibrations with stored reference spectra. Roll diameter, shape, and distance to the test object do not influence the measurement results. Typical distances from antenna to reflector may be 1 millimeter to 50 centimeters. Since the area in space illuminated by the antenna at short distances is about 3x3 cm, care must be taken that only vibrating components are irradiated, to avoid errors.  [c.655]

The quantitative description of visibility is based on the contrast threshold dC, which is the smallest contrast necessary to recognize an object. The contrast threshold depends on the viewing conditions and the optical abilities of the observer. Quantitative correlations are based on experimental investigations which are recommended by the Commission Internationale de l'Eclairage (CIE) [6]. The results are presented as dependent on the adaptation luminance, with the object dimension as parameter [3]. The object dimension is described by the viewing angle (in angle minutes) of a circular disc. As usual [5], the presentation time was set at 0.2 s and the detection probability at 50%. These quantitative correlations describe the fact that visibility increases with increasing adaptation luminance if the contrast remains constant. The contrast threshold was determined by test persons under optimal conditions, which included viewing and environmental conditions as well as visual acuity. In any real visual inspection (e.g., detection of small objects) the contrast between the object and the surroundings should be much higher than the contrast threshold. This factor is defined as the Visibility Level VL = C/dC. The factor is a quantitative measure for the visibility of objects. Recommended values depend on the inspection task and are in the range VL = 3 - 60.  [c.670]
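The visibility-level arithmetic above is simple enough to state directly; the recommended range VL = 3-60 comes from the text, while the function names are my illustration.

```python
def visibility_level(contrast, threshold_contrast):
    """Visibility level VL = C / dC: measured contrast relative to the
    smallest contrast at which the object can be recognized."""
    return contrast / threshold_contrast

def meets_recommendation(vl, low=3.0, high=60.0):
    """Check VL against the recommended range for an inspection task."""
    return low <= vl <= high
```

For instance, an object/background contrast of 0.6 against a threshold of 0.05 gives VL = 12, comfortably inside the recommended range, whereas a contrast barely above threshold would not.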

As a rule, to provide the technogenic safety of complicated objects it is necessary to control a wide range of different parameters: kinematic (time, velocity, flow, etc.), static and dynamic (mass, force, pressure, energy, etc.), mechanical (specific weight, substance quantity, density, etc.), geometrical, electrical, thermal, magnetic, acoustic, and other physical parameters. Nowadays, the role of methods and means for defectoscopy, introscopy, structuroscopy, dimensional measurements, and monitoring of physical-mechanical parameters of materials and units has been considerably extended. For the inspection of objects with a rotational principle of operation, such as turbines, generators, electric motors, compressor installations, and others, the best results may be obtained with the help of vibrodiagnostic methods and means.  [c.911]

The main peculiarity of NDT and TD means for safety development is their continuous intellectualization. Objective, high-level information about the inspected object's condition, the state of the environment, and other events may be obtained with the help of complex systems. Such systems may be integrated based on different methods corresponding to particular physical phenomena. Diagnostic systems may be combined in accordance with different physical investigation methods. Still, the main problem remains the establishment of a statistical database. In this database, information about defects, accidents, or emergency situations must be kept, as well as the development results of the scientific physical-mathematical base of quality metering, that is, the quantitative estimation of the quality and condition of objects and media. At the same time, the task of multidimensional and multiparametric processing of the information received from various physical fields and phenomena must be solved, using improved models of transducers, measuring channels, and algorithms.  [c.915]

Books are available on many of these subjects. The objective here, therefore, is to introduce several fundamental issues and point to additional information by citing key references and suggesting further reading. We begin by briefly delimiting what we mean by high pressure. Then, we discuss how high pressures are achieved and measured before describing the behaviours of a few familiar materials at high pressures.  [c.1955]

Chaotic attractors are complicated objects with intrinsically unpredictable dynamics. It is therefore useful to have some dynamical measure of the strength of the chaos associated with motion on the attractor and some geometrical measure of the structural complexity of the attractor. These two measures, the Lyapunov exponent or number [1] for the dynamics, and the fractal dimension [10] for the geometry, are related. To simplify the discussion we consider three-dimensional flows in phase space, but the ideas can be generalized to higher dimension.  [c.3059]
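To make the Lyapunov exponent concrete, here is a hedged one-dimensional sketch: the text concerns three-dimensional flows, but the simplest setting in which to estimate the exponent as an orbit-averaged log-derivative is a map such as the logistic map (my choice of example, not the source's).

```python
import math

def lyapunov_logistic(r, x0=0.3, n_transient=500, n_iter=5000):
    """Estimate the Lyapunov exponent of the logistic map x -> r*x*(1-x)
    as the orbit average of log|f'(x)| = log|r*(1-2x)|.
    Positive values indicate chaotic dynamics; negative, regular motion."""
    x = x0
    for _ in range(n_transient):       # discard the transient
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n_iter):
        total += math.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return total / n_iter
```

At r = 4 the estimate converges toward the known value ln 2 ≈ 0.693 (chaotic), while at r = 3.2 the orbit settles onto a period-2 cycle and the exponent is negative.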

Phase interference in optical or material systems can be utilized to achieve a type of quantum measurement, known as nondemolition measurements ([41], Chapter 19). The general objective is to make a measurement that does not change some property of the system at the expense of some other property(s) that is (are) changed. In optics, it is the phase that may act as a probe for determining the intensity (or photon number). The phase can change in the course of the measurement, while the photon number does not [126].  [c.103]

Abstract. The paper presents basic concepts of a new type of algorithm for the numerical computation of what the authors call the essential dynamics of molecular systems. Mathematically speaking, such systems are described by Hamiltonian differential equations. In the bulk of applications, individual trajectories are of no specific interest. Rather, time averages of physical observables or relaxation times of conformational changes need to be actually computed. In the language of dynamical systems, such information is contained in the natural invariant measure (infinite relaxation time) or in almost invariant sets ("large" finite relaxation times). The paper suggests the direct computation of these objects via eigenmodes of the associated Frobenius-Perron operator by means of a multilevel subdivision algorithm. The advocated approach is different from both Monte-Carlo techniques on the one hand and long-term trajectory simulation on the other hand: in our setup long-term trajectories are replaced by short-term sub-trajectories, and Monte-Carlo techniques are connected via the underlying Frobenius-Perron structure. Numerical experiments with the suggested algorithm are included to illustrate certain distinguishing properties.  [c.98]

Usually, the denominator, if present in a similarity measure, is just a normalizer; it is the numerator that is indicative of whether similarity or dissimilarity is being estimated, or both. The characteristics chosen for the description of the objects being compared are interchangeably called descriptors, properties, features, attributes, qualities, observations, measurements, calculations, etc. In the formulations above, the terms "matches" and "mismatches" refer to qualitative characteristics, e.g., binary ones (those which take one of two values: 1 (present) or 0 (absent)), while the terms "overlap" and "difference" refer to quantitative characteristics, e.g., those whose values can be arranged in order of magnitude along a one-dimensional axis.  [c.303]

Following Bradshaw [17], we can give the definition of a similarity measure as follows. Consider two objects A and B: a is the number of features (characteristics) present in A and absent in B, b is the number of features absent in A and present in B, c is the number of features common to both objects, and d is the number of features absent from both objects. Thus, c and d measure the present and the absent matches, respectively, i.e., similarity, while a and b measure the corresponding mismatches, i.e., dissimilarity. The total number of features is n = a + b + c + d.  [c.304]

Replacing summation by integration, one obtains the integration forms of the above-described similarity measures (Table 6-2). Using different characteristics to describe the objects being compared, one obtains different similarity measures. A typical example is the Carbó [20] similarity measure, which is given by Eq. (9), where ρA(r) and ρB(r′) are the electron density functions of quantum objects A and B, weighted by a positive definite operator O(r, r′), chosen either as the Dirac function δ(r − r′) or the Coulomb operator |r − r′|⁻¹, etc.  [c.308]
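With the Dirac-delta weighting, the Carbó measure reduces on a discrete grid to a normalized overlap of the two sampled densities. The following is a hedged sketch: the grid sampling and function name are my illustration, not the source's Eq. (9).

```python
import math

def carbo_index(rho_a, rho_b):
    """Discretized Carbó similarity between two density vectors sampled
    on the same grid (Dirac-delta weighting): the density overlap divided
    by the geometric mean of the self-overlaps."""
    overlap = sum(p * q for p, q in zip(rho_a, rho_b))
    norm = math.sqrt(sum(p * p for p in rho_a) * sum(q * q for q in rho_b))
    return overlap / norm
```

By construction the index is 1 for identical densities, 0 for non-overlapping ones, and strictly between 0 and 1 for partially overlapping densities.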

Once we have the measures, we have to apply them to chemical objects. Objects of interest to a chemist include molecules, reactions, mixtures, spectra, patents, journal articles, atoms, functional groups, and complex chemical systems. Most frequently, the objects studied for similarity/dissimilarity are molecular structures.  [c.309]

The most common objects of interest to a chemist are molecules. Some sources of drug-like compounds are the MDL Drug Data Report (MDDR), a licensed database compiled from the patent literature containing about 115 000 compounds, as well as the database of the National Cancer Institute (NCI), containing about 250 000 compounds. Molecules in the MDDR are assigned a "therapeutic category" by the vendor. There are 647 therapeutic categories. MDDR-3D is also available. The MDDR is a commercial database, whereas the NCI database is freely available at http://dtp.nci.nih.gov/docs/3D_database/structural_information/structural_data.html and contains both structural information and biological data. The biological database is formed of three files, which contain data from different types of measurements - TGI, LC50, and GI50.  [c.310]

The similarity between compounds is estimated in terms of a distance measure between two different objects s and t. The objects s and t are described by the vectors xs = (xs1, xs2, ..., xsm) and xt = (xt1, xt2, ..., xtm), where m denotes the number of real variables and xsj and xtj are each the jth element of the corresponding vector. For calculation of the distance and similarity of two compounds, the variables xj should have a comparable magnitude. Otherwise scaling or normalization of the variables has to be performed. Two of the most prominent distance measures are given by Eqs. (1) and (2)  [c.405]
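The chapter's Eqs. (1) and (2) are not reproduced in this excerpt; the Euclidean and Manhattan (city-block) distances are the two usual candidates for such real-valued descriptor vectors, so the following sketch assumes those (the function names are mine):

```python
import math

def euclidean(xs, xt):
    """Euclidean distance between two descriptor vectors of equal length."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(xs, xt)))

def manhattan(xs, xt):
    """Manhattan (city-block) distance: sum of absolute coordinate differences."""
    return sum(abs(p - q) for p, q in zip(xs, xt))
```

As the text notes, both distances are only meaningful when the variables have comparable magnitudes, so descriptors are typically scaled or normalized first.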

The calculation of a distance measure for two objects s and t represented by binary descriptors xs and xt with m binary values is based on the frequencies of common and different components. For this purpose we define the frequencies a, b, c, and d as follows  [c.406]

Though well known to Graham's immediate successors, this striking relation later appears to have been lost sight of, and there are repeated assertions in the literature that equimolar counterdiffusion occurs under isobaric conditions in the bulk diffusion regime. This assumption of equimolar counterdiffusion was challenged in 1953 by Hoogschagen [29], [30], who was apparently unaware of Graham's work. His apparatus, which is sketched in Figure 6.2, was designed for steady-state operation under completely isobaric conditions, thereby eliminating the objections which might be raised against Graham's work. Measurements were made on the interdiffusion of oxygen and the "inert" gases He and CO. Referring to  [c.52]

As Graham s relation shows, the molar fluxes in a binary mixture are not equal in magnitude under isobaric conditions, so it follows that equimolar counterdiffusion can occur only in the presence of a pressure gradient. The prediction and measurement of the necessary pressure gradient is the objective of the third class of fundamental investigations mentioned at the beginning of this chapter. The classic experiment was carried out by Kramers and Kistemaker [25], who measured the pressure difference across a capillary in circumstances where the molar fluxes of the two species are constrained to be equal in magnitude. The diameter of their capillary was large compared with the mean free paths, so their results can be compared with predictions based on the detailed theory of Chapter  [c.56]

A molecular fitting algorithm requires a numerical measure of the difference between two structures when they are positioned in space. The objective of the fitting procedure is to find the relative orientations of the molecules in which this function is minimised. The most common measure of the fit between two structures is the root mean square distance between pairs of atoms, or RMSD  [c.507]
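The RMSD between two already-superimposed structures can be computed directly. This is a minimal sketch; finding the orientation that minimizes it (e.g. by the Kabsch algorithm) is the separate fitting step and is not shown.

```python
import math

def rmsd(coords_a, coords_b):
    """Root mean square distance between paired atoms of two superimposed
    structures; each argument is a list of (x, y, z) coordinate tuples
    with atoms in corresponding order."""
    if len(coords_a) != len(coords_b):
        raise ValueError("structures must have the same number of atoms")
    sq = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
             for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b))
    return math.sqrt(sq / len(coords_a))
```

A fitting procedure would call this function repeatedly while varying the relative orientation of the two molecules, accepting the orientation at which the returned value is minimal.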


See pages that mention the term Objectives measurement : [c.460]    [c.339]    [c.963]    [c.475]    [c.960]    [c.1629]    [c.104]    [c.406]   
Automotive quality systems handbook (2000) -- [ c.105 ]