# Objective determining

Weed Management Strategies. The paradigm that all noncrop plant populations in a field should be controlled, regardless of their actual impact on crop yield and quality, is not justifiable. Objectively determining a priori which plant populations require control and which do not directly reduces the economic, environmental, and social costs associated with weed control and can be considered an innovative approach to weed management. For example, some noncrop plant populations do not significantly hinder production. In some developing areas of the world, producers have found uses for noncrop plants that would otherwise be considered weeds (451), and many weeds are both edible and nutritious (452). In aquaculture systems, certain highly problematic algal and bacterial weeds are also essential to the overall stability and productivity of the production system (453). [c.55]

Objective: Determine the filter size and vacuum system capacity required to dewater 15 mtph (metric tons per hour) of dry solids and produce a cake containing an average moisture content of 25 wt %. [c.1703]
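The cake rates implied by such a specification follow from a simple mass balance. The sketch below is an illustrative calculation of this kind, not the worked example from the text: a cake at 25 wt % moisture carries 0.25 kg of water per kg of wet cake, so the wet-cake rate is the dry-solids rate divided by (1 - moisture fraction).

```python
def cake_rates(dry_solids_mtph: float, moisture_wt_frac: float):
    """Return (wet_cake_mtph, water_mtph) for a given dry-solids
    throughput and cake moisture expressed as a weight fraction."""
    wet_cake = dry_solids_mtph / (1.0 - moisture_wt_frac)
    water = wet_cake - dry_solids_mtph
    return wet_cake, water

# 15 mtph dry solids at 25 wt% moisture -> 20 mtph wet cake, 5 mtph water
wet, water = cake_rates(15.0, 0.25)
```

The filter area and vacuum capacity themselves would then be sized from leaf-test data, which this excerpt does not include.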

Objective: Determine the filter size and vacuum capacity required to dewater and wash 15 mtph of dry solids while producing a final washed cake with a moisture content of 25 wt % and containing 0.10 wt % TDS based on dry cake solids. [c.1704]

Studies of the increase of shock pressure of wave profiles to some peak value provide insight into the material deformation mechanisms relating to plastic deformation and strength. Studies of the reduction of shock pressure from some peak value provide insight into solid properties at high stress states. Such studies are of technological interest to determine the wave attenuation characteristics of solid materials. The release wave work has typically been carried out to accomplish three objectives: determination of sonic velocity at pressure (a higher-order elastic behavior), determination of strength at pressure, and measurement of the release isentrope. [c.35]

Literature provides the basis for a user to objectively determine the maximum flame speed that will be achieved with a particular combination of confinement, obstacles, fuel reactivity, and ignition source. [c.125]

The objective of an EIA is to document the potential physical, biological, social and health effects of a planned activity. This will enable decision makers to determine whether an activity is acceptable and, if not, to identify possible alternatives. Typically, EIAs will be carried out for [c.70]

The objective of appraisal activity is not necessarily to prove more hydrocarbons. For example, appraisal activity which determines that a discovery is non-commercial should be considered as worthwhile, since it saves a financial loss which would have been incurred if development had taken place without appraisal. [c.173]

One of the primary objectives of production operations is to deliver product at the required rate and quality. Therefore the product quality specification and any agreed contract terms will drive the activities of the production operations department, and will be a starting point for determining the preferred mode of operation. The specifications, such as delivery of stabilised crude with a BS&W of less than 0.5% and a salinity of 70 g/m³, [c.279]

An example of an application of CAO is its use in optimising the distribution of gas in a gas lift system (Fig. 11.3). Each well will have a particular optimum gas-liquid ratio (GLR), which would maximise the oil production from that well. A CAO system may be used to determine the optimum distribution of a fixed amount of compressed gas between the gas lifted wells, with the objective of maximising the overall oil production from the field. Measurement of the production rate of each well and its producing GOR (using the test separator) provides a CAO system with the information to calculate the optimum gas lift gas required by each well, and then distribute the available gas lift gas (a limited resource) between the producing wells. [c.282]
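One simple way to sketch this allocation problem is incremental marginal-gain allocation: give each small increment of lift gas to the well whose oil rate would rise most, which maximises total production when the gas-lift performance curves are concave. The `well_rate` curve and its `(qmax, g50)` parameters below are illustrative assumptions, not a field model or the CAO system's actual algorithm.

```python
def well_rate(gas, qmax, g50):
    """Hypothetical concave gas-lift performance curve for one well:
    oil rate rises with lift gas but with diminishing returns."""
    return qmax * gas / (gas + g50)

def allocate(total_gas, wells, step=1.0):
    """Greedy allocation of a fixed lift-gas supply between wells."""
    alloc = [0.0] * len(wells)
    remaining = total_gas
    while remaining >= step:
        # marginal oil gain of one more gas increment for each well
        gains = [well_rate(a + step, *w) - well_rate(a, *w)
                 for a, w in zip(alloc, wells)]
        best = gains.index(max(gains))
        alloc[best] += step
        remaining -= step
    return alloc

wells = [(1000.0, 500.0), (800.0, 200.0)]   # (qmax, g50) per well, assumed
alloc = allocate(2000.0, wells)
total_oil = sum(well_rate(a, *w) for a, w in zip(alloc, wells))
```

With these assumed curves the greedy split beats dividing the gas evenly, because the second well responds more strongly at low gas rates.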

Therefore an automatic method, which means an objective and reproducible process, is necessary to determine the threshold value. The results of these investigations show that the threshold value can be determined reproducibly at the point of intersection of two normally distributed frequency approximations. [c.14]
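The intersection criterion can be computed in closed form: setting the two normal densities equal and taking logarithms gives a quadratic in the grey value. The sketch below implements that calculation under illustrative parameters (the actual fitted distributions from the investigation are not given in this excerpt).

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def intersection_threshold(mu1, s1, mu2, s2):
    """Root of pdf1(x) = pdf2(x) lying between mu1 and mu2."""
    # Equating log-densities yields a*x^2 + b*x + c = 0
    a = 1 / (2 * s1 ** 2) - 1 / (2 * s2 ** 2)
    b = mu2 / s2 ** 2 - mu1 / s1 ** 2
    c = mu1 ** 2 / (2 * s1 ** 2) - mu2 ** 2 / (2 * s2 ** 2) + math.log(s1 / s2)
    if abs(a) < 1e-15:                 # equal variances: single crossing
        return -c / b
    d = math.sqrt(b * b - 4 * a * c)
    roots = [(-b - d) / (2 * a), (-b + d) / (2 * a)]
    # keep the root between the two class means
    return min(roots, key=lambda r: abs(r - 0.5 * (mu1 + mu2)))

# e.g. background grey values ~N(50, 10), indication ~N(120, 20)
t = intersection_threshold(50.0, 10.0, 120.0, 20.0)
```

At `t` the two densities are equal, so pixels are assigned to whichever class is more probable on each side of the threshold.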

Within this work mathematical models of the data collection process for radiography and CT have been developed. The objective has been to develop a functioning simulation environment for the physics of the image/data collection that considers the poly-energetic dependence in the imaging process, including full X-ray energy spectra and detector energy response. The simulation environment is used for fast and cost-effective studies of how parameter variations affect final image quality and, indirectly, the defect detectability. In this particular case, the simulations have been applied to a high-resolution CT application to determine optimal operator parameter settings that maximise the detectability of different defect types in circular objects and to predict the minimum detectable size as a function of object diameter. The simulation environment has also been used to correct for beam hardening artefacts in CT images. [c.208]

The relevance of investigating multi-layer objects containing high-density materials for aerospace technology has been substantiated by several firms. The distribution of heavy elements in the longerons and rotor elements of a helicopter was determined with the help of RCT. In filler made of graphite or rubber, layers of heavy lead alloys and air stratifications 1 to 5 mm wide are revealed with a signal-to-noise ratio of more than 3, even in the presence of an external steel cover. [c.600]

Quantitative RCT techniques allow the analysis of objects whose carbonaceous matrix has varying degrees of impregnation by heavy metals. This problem is relevant to improving the manufacturing technology of spacecraft engine nozzles, in particular for determining the distribution of heavy metals across the layers of an object and their overall content in a product. [c.600]

In NDT the key question is the reliability of the methods, verified by validation. Regarding visual methods, the reliability, that is, the probability of visual recognition, depends on the optical properties of the scene containing the objects. These properties were determined quantitatively with an image-processing system for fluorescent magnetic particle inspection. This system was used to determine the quality of detection media, following the type testing of the European standard prEN 9934-2. Another application was the determination of the visibility as a function of the inspection parameters, such as magnetization, application of the detection medium, inclination of the test surface, surface conditions, and viewing conditions. For standardized testing procedures and realistic parameter variations, the probability of detection may be determined on the basis of physiological lighting data as a function of the defect dimensions. [c.669]

The arrangement consists of a standard video camera and a video-capturing system ("frame grabber") which adapts the video signal to the PC. The image-processing program (under WINDOWS 95) allows a flexible evaluation of the scene, that is, the determination of the visibility level or a proportional value. The key of the program is a contour-following algorithm which surrounds all objects (indications) above a predetermined luminance level. The luminances (grey values) of the surrounded object are summed up, which represents the light stream of the indication. For the evaluation of the indication, the corresponding light stream of the surroundings may be subtracted (i.e. the difference of the luminances Lo - Ls). [c.671]
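The luminance-summation step can be sketched in a few lines. The grey-value image below is a made-up example, and the background handling (mean background grey value over the indication's pixel count) is an assumption standing in for the program's Lo - Ls subtraction, not the original implementation.

```python
def indication_flux(image, level):
    """Sum grey values of all pixels above `level` (the indication's
    light stream Lo), minus the mean background grey value taken over
    the same number of pixels (the surroundings' share Ls)."""
    obj = [g for row in image for g in row if g > level]
    bg = [g for row in image for g in row if g <= level]
    lo = float(sum(obj))                                  # object light stream
    ls = (sum(bg) / len(bg)) * len(obj) if bg else 0.0    # background share
    return lo - ls

image = [[10, 12, 11],
         [10, 200, 210],
         [11, 205, 12]]      # bright indication on a dark background
flux = indication_flux(image, 100)
```

A real contour-following algorithm would trace connected boundaries rather than thresholding globally; the flux bookkeeping afterwards is the same.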

A gas/gas heat exchanger at a refinery had a known leakage which, for technical reasons, had been impossible to repair completely. The objective of the survey was to determine the leakage size in terms of the percentage of the process stream crossing it. [c.1057]

Our work is targeted to biomolecular simulation applications, where the objective is to illuminate the structure and function of biological molecules (proteins, enzymes, etc) ranging in size from dozens of atoms to tens of thousands of atoms today, with the desire to increase this limit to millions of atoms in the near future. Such molecular dynamics (MD) simulations simply apply Newton's law to each atom in the system, with the force on each atom being determined by evaluating the gradient of the potential field at each atom's position. The potential includes contributions from bonding forces. [c.459]

Another problem is to determine the optimal number of descriptors for the objects (patterns), such as for the structure of a molecule. A widespread observation is that one has to keep the number of descriptors as low as 20% of the number of objects in the dataset. However, this is correct only in the case of ordinary Multilinear Regression Analysis. Some more advanced methods, such as Projection to Latent Structures (or Partial Least Squares, PLS), use so-called latent variables to achieve both modeling and predictions. [c.205]

Ten years ago we became interested in the possibility of using nitration as a process with which to study the reactivity of hetero-aromatic compounds towards electrophilic substitution. The choice of nitration was determined by the consideration that its mechanism was probably better understood than that of any other electrophilic substitution. Others also were pursuing the same objective, and a considerable amount of information has now been compiled. [c.251]

In the previous section we introduced the terms population and sample in the context of reporting the result of an experiment. Before continuing, we need to understand the difference between a population and a sample. A population is the set of all objects in the system being investigated. These objects, which also are members of the population, possess qualitative or quantitative characteristics, or values, that can be measured. If we analyze every member of a population, we can determine the population's true central value, μ, and spread, σ. [c.71]
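When every member of a population really is measured, μ and σ follow directly, with no sampling correction. The toy population below is made up for illustration.

```python
import math

# a small, fully measured (toy) population
population = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]

# true central value: the population mean mu
mu = sum(population) / len(population)

# true spread: the population standard deviation sigma
# (divide by N, not N-1, because this is the whole population)
sigma = math.sqrt(sum((x - mu) ** 2 for x in population) / len(population))
```

Dividing by N - 1 instead would give the sample estimate s, which is appropriate only when the data are a sample drawn from a larger population.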

The "feedback loop" in the analytical approach is maintained by a quality assurance program (Figure 15.1), whose objective is to control systematic and random sources of error. The underlying assumption of a quality assurance program is that results obtained when an analytical system is in statistical control are free of bias and are characterized by well-defined confidence intervals. When used properly, a quality assurance program identifies the practices necessary to bring a system into statistical control, allows us to determine if the system remains in statistical control, and suggests a course of corrective action when the system has fallen out of statistical control. [c.705]
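One common way to decide whether a system remains in statistical control is a control-chart test against limits set while the system was known to be in control. The 3-sigma criterion below is a standard convention, sketched here as an assumption rather than the book's specific prescription.

```python
def in_statistical_control(result, mean, sigma, z=3.0):
    """True if `result` lies within the mean +/- z*sigma control
    limits established from historical in-control data."""
    return abs(result - mean) <= z * sigma

# illustrative numbers: historical mean 100.0, sigma 0.5
assert in_statistical_control(100.4, 100.0, 0.5)        # within limits
assert not in_statistical_control(102.0, 100.0, 0.5)    # out of control
```

Real control charts add further rules (runs above the mean, trends), but each reduces to a test of this form.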

In a force field calculation, a molecule in three dimensions is constructed using either Cartesian coordinates x, y, and z, or via an internal coordinate matrix consisting of bond distances, bond angles, and dihedral angles to specify the atoms' unique positions. Then the initial structure is evaluated to determine the extent to which each degree of freedom (bonds, angles, etc) deviates from the ideal (the zero-energy value) for the particular element and its hybridization. An energy minimization process follows wherein the energy associated with the distortions from ideal is minimized as the individual atomic positions or degrees of freedom are adjusted. Iteratively, this converges on a "minimum energy" or an "optimized" structure. This structure represents the best attempt of the minimization algorithm to render the smallest deviations in position of each of the atoms such that either the derivatives of the change in energy associated with the deviations are the smallest, or they satisfy either energetic convergence or coordinate change criteria from iteration to iteration. It should be noted that this process is analogous to the geometry optimization process within a quantum mechanical program, except that there the objective is to converge on a structure which yields the smallest energy derivatives and lowest total energy from solution of the SCF equations. Most simple molecular mechanics force fields include terms (Fig. 8) for [c.164]
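The adjust-and-converge loop described above can be sketched in miniature: steepest descent on a single hypothetical harmonic "bond-stretch" degree of freedom. Real force fields sum many such terms and use better minimizers, so this is an illustrative sketch of the iteration, not any particular program's algorithm.

```python
def minimise(r, k=300.0, r0=1.54, step=1e-3, tol=1e-8, max_iter=10000):
    """Steepest-descent minimisation of E = 0.5 * k * (r - r0)^2."""
    for _ in range(max_iter):
        grad = k * (r - r0)          # dE/dr: the energy derivative
        if abs(grad) < tol:          # energetic convergence criterion
            break
        r -= step * grad             # adjust the degree of freedom
    return r

r_opt = minimise(1.9)                # start from a distorted bond length
# converges to the ideal (zero-energy) value r0 = 1.54
```

The convergence test on the gradient mirrors the "smallest energy derivatives" criterion in the text; a coordinate-change test between iterations would serve equally well here.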

Several lenses are used in a transmission electron microscope. The condenser lenses provide uniform illumination of the sample over the area of interest. The objective lens provides the primary image and therefore determines the lateral resolution of the image. The objective lens aperture is important in controlling the contrast of the image. The final magnification of the image is performed by one or more projector lenses. The final image is typically recorded on a fluorescent or phosphorescent screen where it can be captured by a video camera for viewing. As noted above, all of these lenses are subject to serious aberrations which ultimately limit the resolution of the microscope to greater than the diffraction limit (the theoretical resolution limit for this approach). Moreover, these lens aberrations restrict the angular range of the electron beam, resulting in the need for very tall instruments. Despite these shortcomings, TEM is a very powerful surface imaging tool with atomic resolution in some cases, providing sample magnifications between 100 and 500,000×. [c.272]

Weighing is the operation of determining the mass of any material as represented by one or more objects or by a quantity of bulk material. Proportioning is the control, by weighing, of relative quantities of two or more ingredients according to a specific recipe in order to make a mixed product, or to prepare the ingredients for use in a chemical process. [c.324]

The integrity of welded structures depends on the integrity of the welds, and much attention is given to testing methods, such as destructive tests, nondestructive tests, and general weld inspection. An objective of many tests is to determine whether welds contain specific defects, such as porosity, slag inclusions, cracks, or lack of fusion (14,15). [c.349]

With the objective determination of the visibility of magnetic particle indications, quantitative research on the influence of the inspection parameters will be possible. The first part deals with the type testing of detection media, which is also in the course of adoption for the type testing of liquid penetrant systems (prEN 751-2). [c.677]

The SSAHP developed by the Site G contractor did not indicate that the contractors routinely conducted job- or task-specific hazard analyses. In addition, the SSAHP did not specify that PPE selection for jobs and tasks must be based on the analysis of the health hazards associated with each job. Furthermore, the SSAHP contained no procedures for objectively determining the effectiveness of decontamination of personnel or equipment. The decontamination program required incineration of all materials that could not be readily decontaminated; such materials were placed in labeled disposal containers. The program, however, did [c.203]

If the objective function is considered two-dimensional, consisting of Equations (7-13) and (7-14), and the vector X includes only T and a, then the only change in the iteration is that the derivatives of the objective function with respect to composition are ignored in establishing the Newton-Raphson corrections to T and a. The new compositions can then be determined from Equations (7-8) and (7-9). Such a simplified procedure sacrifices little in convergence rate for vapor-liquid systems, where the contributions of composition-derivatives to changes in T and a are almost always small. This approach requires only two evaluations of the objective function per iteration and still avoids creeping, since it is essentially second-order in the limit as convergence is approached. [c.117]
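The two-variable Newton-Raphson step itself can be illustrated generically. The toy equations below stand in for Equations (7-13) and (7-14), which this excerpt does not reproduce; the corrections to the two unknowns (playing the role of T and a) come from solving J·δ = -F at each iteration.

```python
def newton_2d(f, jac, x, y, tol=1e-10, max_iter=50):
    """Newton-Raphson for a 2x2 system; `jac` returns the Jacobian
    entries (a, b, c, d) meaning [[a, b], [c, d]]."""
    for _ in range(max_iter):
        f1, f2 = f(x, y)
        if abs(f1) < tol and abs(f2) < tol:
            break
        a, b, c, d = jac(x, y)
        det = a * d - b * c
        dx = (-f1 * d + f2 * b) / det    # Cramer's rule for J*delta = -F
        dy = (-f2 * a + f1 * c) / det
        x, y = x + dx, y + dy
    return x, y

# toy system: x^2 + y - 3 = 0,  x + y^2 - 5 = 0
f = lambda x, y: (x * x + y - 3.0, x + y * y - 5.0)
jac = lambda x, y: (2 * x, 1.0, 1.0, 2 * y)
x, y = newton_2d(f, jac, 1.0, 1.0)
```

Dropping selected partial derivatives from `jac`, as the text describes for the composition terms, leaves the update formula unchanged and merely slows convergence slightly.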

The tool is positioned across the objective formation and set against the side of the borehole by either two packers or by up to three probes (the configuration used will depend on the test requirements). The probes are pushed through the mudcake and against the formation. A pressure drawdown can now be created at one probe and the drawdown observed in the two observation probes. This will enable an estimate of vertical and horizontal permeability and hence indicate reservoir heterogeneities. Alternatively, fluids can be sampled. In this case a built-in resistivity tool will determine when uninvaded formation fluid (hydrocarbons or formation water) is entering the sample module. The pressure drawdown can be controlled from surface, enhancing the chance of creating monophasic flow by keeping the drawdown above the bubble point. [c.132]

Safe functioning of objects (products and designs), from their creation until their destruction through defect accumulation, must be accompanied by a testing-diagnostic process (TDP). Completed investigations show that the structure of a TDP contains four main elements: working actions, stimulating actions, non-destructive testing, and diagnostics of the NDT object. In researching the NDT process within the framework of the TDP model, the notion of a defect is understood as some difference between the object and a certain conditionally ideal one. NDT is then directed at revealing these differences, while further information technologies are directed at their identification and the determination of their degree of danger. [c.247]

The quantitative description of visibility is based on the contrast threshold dC, which is the smallest contrast necessary to recognize an object. The contrast threshold depends on the viewing conditions and the optical properties of the object. Quantitative correlations are based on experimental investigations recommended by the Commission Internationale de l'Eclairage (CIE) [6]. The results are presented as dependent on the adaptation luminance, with the object dimension as parameter [3]. The object dimension is described by the viewing angle (in angle minutes) of a circular disc. As usual [5], the presentation time was set to 0.2 s and the detection probability to 50%. These quantitative correlations describe the fact that visibility increases with increasing adaptation luminance if the contrast remains constant. The contrast threshold was determined by test persons under optimal conditions, which included viewing and environmental conditions as well as visual acuity. In any real visual inspection (e.g. detection of small objects), the contrast between the object and the surroundings should be much higher than the contrast threshold. This factor is defined as the Visibility Level VL = C/dC. The factor is a quantitative measure for the visibility of objects. Recommended values depend on the inspection task and are in the range VL = 3 - 60. [c.670]
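The Visibility Level definition above translates directly into code. The luminance numbers and the use of Weber contrast below are illustrative assumptions; the excerpt defines only the ratio VL = C/dC.

```python
def contrast(l_object, l_background):
    """Weber contrast of an object against its background luminance."""
    return abs(l_object - l_background) / l_background

def visibility_level(l_object, l_background, dc):
    """Visibility Level VL = C / dC: actual contrast over the
    contrast threshold for the given viewing conditions."""
    return contrast(l_object, l_background) / dc

# illustrative: object at 150 cd/m^2 on a 100 cd/m^2 background,
# contrast threshold dC = 0.05 -> C = 0.5 and VL = 10,
# inside the recommended range VL = 3 - 60
vl = visibility_level(l_object=150.0, l_background=100.0, dc=0.05)
```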

Phase interference in optical or material systems can be utilized to achieve a type of quantum measurement, known as nondemolition measurements ([41], Chapter 19). The general objective is to make a measurement that does not change some property of the system at the expense of some other property (or properties) that is changed. In optics, it is the phase that may act as a probe for determining the intensity (or photon number). The phase can change in the course of the measurement, while the photon number does not [126]. [c.103]

Two-dimensional structure diagrams and 3D molecular structures can form the basis for describing many chemical and physical properties of compounds. But all the models used so far represent only the 3D skeleton of a molecule and not the actual space requirements. In analogy to the human body, which has a skeleton and a surrounding body with a limiting surface (the skin), molecules can be seen as objects with a molecular surface. This surface separates the 3D space into an inner part of the volume filled by the molecule, and an outside part (the rest of the universe). But this picture of an exact separation through a discrete surface is only an approximation. Since molecules cannot be treated with the laws of classical mechanics, the concept of surfaces is only an analogy to macroscopic objects. In a quantum mechanical sense molecules have neither a body nor a fixed surface. Their ingredients are atoms with nuclei made up of protons and neutrons, which are surrounded by electrons. The space that these electrons occupy is not bounded by a surface, but can be characterized by an electron cloud distribution. The electron density is continuous and approaches zero value at large distances from the nuclei (see Section 7.2). In particular, the distribution of electrons is significant for molecular interactions and determines the properties of the molecule. The molecular surface can express these different properties, such as electrostatic potential, atomic charges, or hydrophobicity, using colored mapping (see Section 2.11). Therefore, the spatial figure, or envelope, of the molecule has to be considered; this can be determined by various methods. Molecular surfaces can be obtained de novo by mathematical calculation methods, but also from experiments. Three-dimensional structure analyses such as 2D NMR or X-ray crystallography give an impression of the spatial requirements of the molecule. If only [c.124]

A challenging task in material science as well as in pharmaceutical research is to custom tailor a compound s properties. George S. Hammond stated that the most fundamental and lasting objective of synthesis is not production of new compounds, but production of properties (Norris Award Lecture, 1968). The molecular structure of an organic or inorganic compound determines its properties. Nevertheless, methods for the direct prediction of a compound s properties based on its molecular structure are usually not available (Figure 8-1). Therefore, the establishment of Quantitative Structure-Property Relationships (QSPRs) and Quantitative Structure-Activity Relationships (QSARs) uses an indirect approach in order to tackle this problem. In the first step, numerical descriptors encoding information about the molecular structure are calculated for a set of compounds. Secondly, statistical and artificial neural network models are used to predict the property or activity of interest based on these descriptors or a suitable subset. [c.401]

The objective of this study is to show how data sets of compounds for which different biological activities have been determined can be studied. It will be shown how the use of a counter-propagation neural network can lead to new insights [46]. The emphasis in this example is placed on the comparison of different network architectures and not on quantitative results. [c.508]

A clear conclusion from such comparative studies is that density functional methods using gradient-corrected functionals can give results for a wide variety of properties that are competitive with, and in some cases superior to, ab initio calculations using correlation (e.g. MP2). Gradient-corrected functionals are required for the calculation of relative conformational energies and the study of intermolecular systems, particularly those involving hydrogen bonding [Sim et al. 1992]. As is the case with the ab initio methods, the choice of basis set is also important in determining the results. By keeping the basis set constant (6-31G being a popular choice) it is possible to make objective comparisons. Four examples of such comparative studies are those of Johnson and colleagues, who considered small neutral molecules [Johnson et al. 1993]; St-Amant et al, who examined small organic molecules [St-Amant et al 1995]; Stephens et al, who performed a detailed study of the absorption and circular dichroism spectra of 4-methyl-2-oxetanone [Stephens et al 1994]; and Frisch et al, who compared a variety of density functional methods with one another and with traditional ab initio approaches [Frisch et al 1996]. The evolution of defined sets of data such as those associated with the Gaussian-n series of models has also acted as a spur to those involved in developing density functional methods. For example, much of Becke's work on gradient corrections and on mixed Hartree-Fock/density functional methods was evaluated using data sets originally collated for the Gaussian-1 and Gaussian-2 methods. A more recent example is a variant of the Gaussian-3 method which uses B3LYP to determine geometries and zero-point energies [Baboul et al 1999]. [c.157]

The summation runs over the N_i objects in cluster i, each located at r_k, where the mean of the cluster is r̄_i. The total information loss is calculated by adding together the values for each cluster. At each iteration, the pair of clusters which gives rise to the smallest increase in the total error function is merged. Two more hierarchical clustering algorithms are the centroid method, which determines the distance between two clusters as the distance between their centroids, and the median method, which represents each cluster by the coordinates of the median value. Fortunately, all six hierarchical agglomerative methods can be represented by a single equation, first proposed by Lance and Williams [Lance and Williams 1967], with the different algorithms having different coefficients. [c.511]
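The Lance-Williams recurrence expresses the distance from a newly merged cluster i∪j to any other cluster k as d(ij,k) = a_i·d(i,k) + a_j·d(j,k) + b·d(i,j) + g·|d(i,k) - d(j,k)|, with per-method coefficients. The sketch below uses the single-linkage coefficients (a_i = a_j = 1/2, b = 0, g = -1/2) on a small made-up distance matrix; the merge loop is an illustrative minimal implementation, not a production algorithm.

```python
def lw_single(d_ik, d_jk, d_ij):
    """Lance-Williams update with single-linkage coefficients;
    algebraically this equals min(d_ik, d_jk)."""
    return 0.5 * d_ik + 0.5 * d_jk - 0.5 * abs(d_ik - d_jk)

def agglomerate(dist):
    """Repeatedly merge the closest pair; return the merge history
    as (cluster_i, cluster_j, merge_distance) tuples."""
    clusters = list(range(len(dist)))
    d = {frozenset((i, j)): dist[i][j]
         for i in clusters for j in clusters if i < j}
    merges = []
    while len(clusters) > 1:
        i, j = min(d, key=d.get)
        i, j = sorted((i, j))
        merges.append((i, j, d[frozenset((i, j))]))
        for k in clusters:
            if k not in (i, j):
                d[frozenset((i, k))] = lw_single(
                    d[frozenset((i, k))], d[frozenset((j, k))],
                    d[frozenset((i, j))])
        clusters.remove(j)                       # j absorbed into i
        d = {p: v for p, v in d.items() if j not in p}
    return merges

dist = [[0, 1, 6, 7],
        [1, 0, 5, 8],
        [6, 5, 0, 2],
        [7, 8, 2, 0]]
merges = agglomerate(dist)
```

Swapping in different coefficients (e.g. those for the centroid or median methods) changes only `lw_single`, which is exactly the unification the Lance-Williams equation provides.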

The three main goals of a molecular mechanics program for small molecules are calculation of geometry, energy, and spectral absorbances due to vibrational excitation. Hagler (Hwang, Stockfish, and Hagler, 1994) has categorized force fields into class 1, intended to achieve the first of these increasingly demanding objectives; class 2, to achieve the first two; and class 3, to achieve all three objectives. Research and development on class 3 force fields is an active enterprise, as is extension of class 1 and class 2 force fields to less common molecules and larger, biologically important species. We have already introduced geometry determination in Chapter 4. [c.131]

Our primary objective in this section is the discussion of practical osmometry, particularly with the goal of determining the molecular weight of a polymeric solute. We shall be concerned, therefore, with the design and operation of osmometers, with the question of units, and with circumventing the problem of nonideality. The key to these points is contained in the last section, but the details deserve additional comment. [c.548]
