Hypothesis


The current calculation methods are based on the hypothesis that each mixture whose properties are sought can be characterized by a set of pure components and petroleum fractions of a narrow boiling point range and by a composition expressed in mass fractions.  [c.86]

The quantity coming from air is practically invariant and corresponds to a level approaching 130 mg/Nm³. Nitrogen present in the fuel is distributed as about 40% in the form of NO and 60% as N2. With 0.3% total nitrogen in the fuel one would have, according to stoichiometry, 850 mg/Nm³ of NO in the exhaust gases. Using the above hypothesis, the quantity of NO produced would be 0.40 × 850 = 340 mg/Nm³.  [c.269]
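As a quick check of this arithmetic, a minimal sketch; the 130 and 850 mg/Nm³ figures and the 40% conversion fraction come from the passage above, everything else is illustrative:

```python
# NO in the exhaust: thermal (air) contribution plus the fraction of
# fuel-bound nitrogen converted to NO, per the hypothesis above.
no_from_air = 130.0        # mg/Nm3, roughly invariant contribution from air
no_stoichiometric = 850.0  # mg/Nm3 if all of the 0.3% fuel nitrogen became NO
fraction_to_no = 0.40      # hypothesis: 40% of fuel nitrogen ends up as NO

no_from_fuel = fraction_to_no * no_stoichiometric
print(f"NO from fuel nitrogen: {no_from_fuel:.0f} mg/Nm3")            # 340
print(f"total NO estimate:     {no_from_air + no_from_fuel:.0f} mg/Nm3")
```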

It would be unrealistic to represent the porosity of the sand as the arithmetic average of the measured values (0.20), since this would ignore the range of the measured values, the volumes which each of the measurements may be assumed to represent, and the possibility that the porosity lies outside the measured range away from the control points. There appears to be a trend of decreasing porosity to the south-east, and the end points of the range may be 0.25 and 0.15, i.e. wider than the range of the measurements made. An understanding of the geological environment of deposition and knowledge of any diagenetic effects would be required to support this hypothesis, but it could only be proven by further data gathering in the extremities of the field.  [c.159]

It is very important, on the one hand, to accept a hypothesis about the material's fracture properties before building the physical model, because the general view of TF changes depending on the mechanical model of the material (brittle, elasto-plastic, visco-elasto-plastic, etc.). On the other hand, it is necessary to keep in mind that the material's response to loads or actions differs depending on the accepted mechanical model, because the rheological properties of the material determine the type of response in time. The most remarkable difference is observed between brittle materials and materials with pronounced plastic properties.  [c.191]

This result is valid when a < 1; this hypothesis is verified in practice, since ultrasonic waves are attenuated in materials. To separate two echoes, we detect the peaks and measure the delay between them.  [c.225]
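A minimal sketch of this echo-separation step; the synthetic A-scan, the sampling rate fs, and the use of scipy's find_peaks are assumptions for illustration, not the source's implementation:

```python
import numpy as np
from scipy.signal import find_peaks

fs = 100e6  # assumed sampling rate (Hz)
t = np.arange(0.0, 10e-6, 1.0 / fs)

def echo(t0, amplitude):
    """Gaussian-windowed 5 MHz burst centred at time t0."""
    return amplitude * np.exp(-((t - t0) / 0.2e-6) ** 2) \
                     * np.cos(2 * np.pi * 5e6 * (t - t0))

# Two echoes, the second attenuated (a < 1), 2 microseconds apart.
signal = echo(3e-6, 1.0) + echo(5e-6, 0.6)

# Crude envelope (rectified signal); detect the two dominant peaks
# and measure the delay between them.
envelope = np.abs(signal)
peaks, _ = find_peaks(envelope, height=0.4 * envelope.max(),
                      distance=int(1e-6 * fs))
delay_s = (peaks[1] - peaks[0]) / fs
print(f"delay between echoes: {delay_s * 1e6:.2f} microseconds")  # ~2.00
```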

If defects are absent in the section, the projections are distributed randomly over the pixels and the values of the function p(i,j) are approximately alike in all pixels of the section. In defective areas the projections are focused and, since the appearance of a defect is unlikely under the accepted hypothesis  [c.249]

METHOD OF DERICHE. Taking up the model proposed by Canny and relaxing the hypothesis of limited spatial support, R. Deriche finds a more effective optimal edge operator and proposes a recursive implementation.  [c.527]
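As an illustration of the operator's shape, a minimal one-dimensional sketch applied by direct convolution; the kernel h(x) = -x·exp(-alpha·|x|) is the standard Deriche form, but the recursive (IIR) implementation with constant cost per pixel, which is Deriche's actual contribution, is not reproduced here:

```python
import numpy as np

def deriche_edge_1d(signal, alpha=1.0, half_width=20):
    """Smoothed-derivative edge response with the Deriche kernel
    h(x) = -x * exp(-alpha * |x|), truncated for direct convolution."""
    x = np.arange(-half_width, half_width + 1, dtype=float)
    kernel = -x * np.exp(-alpha * np.abs(x))
    kernel /= np.abs(kernel).sum()        # keep the response bounded
    return np.convolve(signal, kernel, mode="same")

# A step edge produces an extremum of the response at the edge position.
step = np.concatenate([np.zeros(50), np.ones(50)])
response = deriche_edge_1d(step, alpha=0.5)
print("edge near index", int(np.argmax(np.abs(response))))  # ~50
```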

The thinking behind this was that, over a long time period, a system trajectory in Γ space passes through every configuration in the region of motion (here the energy shell), i.e. the system is ergodic, and hence the infinite time average is equal to the average in the region of motion, or the average over the microcanonical ensemble density. The ergodic hypothesis is meant to provide justification for the equal a priori probability postulate.  [c.387]
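Written out as a formula (a standard rendering of this statement, with A an arbitrary phase-space function, Γ(t) the trajectory, and H the Hamiltonian):

```latex
\lim_{T\to\infty}\frac{1}{T}\int_0^T A\bigl(\Gamma(t)\bigr)\,\mathrm{d}t
  \;=\;\langle A\rangle_{\mathrm{mc}}
  \;=\;\frac{\int \mathrm{d}\Gamma\;\delta\bigl(E-H(\Gamma)\bigr)\,A(\Gamma)}
            {\int \mathrm{d}\Gamma\;\delta\bigl(E-H(\Gamma)\bigr)}
```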

In the spirit of Onsager, if one imagines a relatively small perturbation of the populations of reactants and products away from their equilibrium values, then the regression hypothesis states that the decay of these populations back to their equilibrium values will follow the same time-dependent behaviour as the decay of correlations of spontaneous fluctuations of the reactant and product populations in the equilibrium system. In the condensed phase, it is this powerful principle that connects a macroscopic dynamical quantity such as a kinetic rate constant with equilibrium quantities such as a free energy function along a reaction pathway and, in turn, the underlying microscopic interactions which determine this free energy function. The effect of the condensed phase environment can therefore be largely understood in the equilibrium, or quasi-equilibrium, context in terms of the modifications of the free energy curve as shown in figure A3.8.1. As will be shown later, the remaining condensed phase effects which are not included in the equilibrium picture may be defined as being dynamical.  [c.884]

The Onsager regression hypothesis, stated mathematically for the chemically reacting system just described, is given in the classical limit by  [c.884]
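In its conventional form (the notation below is a standard choice, not quoted from the source), writing δn_A(t) for the spontaneous fluctuation of the reactant population about its equilibrium value and Δn̄_A(t) for the decay of the prepared nonequilibrium deviation, the hypothesis reads:

```latex
\frac{\Delta \bar{n}_A(t)}{\Delta \bar{n}_A(0)}
  \;=\;\frac{\langle \delta n_A(0)\,\delta n_A(t)\rangle}
            {\langle (\delta n_A)^2\rangle}
```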

The development of neutron diffraction by C G Shull and coworkers [30] led to the determination of the existence, previously only a hypothesis, of antiferromagnetism and ferrimagnetism. More recently neutron diffraction, because of its sensitivity to light elements in the presence of heavy ones, played a crucial role in demonstrating the importance of oxygen content in high-temperature superconductors.  [c.1382]

Clearly, the assertion that the deviations of the observed binding energies from the calculated average binding energies reflect whether a ligand is particularly well or poorly fitted for binding is just a working hypothesis. Nevertheless, it has to be observed that even 20 years after publication of this method, average binding energies calculated by the Andrews scheme are still used for screening potential drug candidates.  [c.327]
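For orientation, a minimal sketch of this kind of group-additivity estimate; the group names and energy values below are hypothetical placeholders for illustration, not Andrews's published coefficients:

```python
# Average binding energy as a sum of functional-group contributions,
# in the spirit of the Andrews scheme.  All coefficients below are
# HYPOTHETICAL placeholders, not the published Andrews values.
GROUP_ENERGY = {                 # kcal/mol per occurrence (illustrative)
    "sp3_carbon":    0.8,
    "hydroxyl":      2.5,
    "charged_amine": 11.0,
}

def average_binding_energy(group_counts):
    """Sum (contribution x count) over a ligand's functional groups."""
    return sum(GROUP_ENERGY[g] * n for g, n in group_counts.items())

ligand = {"sp3_carbon": 4, "hydroxyl": 2, "charged_amine": 1}
estimate = average_binding_energy(ligand)
print(f"estimated average binding energy: {estimate:.1f} kcal/mol")
# A measured affinity well above this estimate would flag the ligand as
# unusually well fitted for binding; well below it, as poorly fitted.
```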

The inference mechanism can be compared with the reasoning of humans: there are different kinds of reasoning, the most popular ones being forward chaining and backward chaining. In forward chaining, data are presented to the expert system, which then chains forward to come to a result. Backward chaining starts with a hypothesis that is given to the expert system, which then tracks back to check whether the hypothesis is valid.  [c.479]
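A minimal sketch contrasting the two strategies over a toy rule base; the rules and fact names are invented for illustration:

```python
# Toy rule base: set of premises -> conclusion.
rules = [
    ({"has_feathers"}, "is_bird"),
    ({"is_bird", "can_fly"}, "nests_in_trees"),
]

def forward_chain(facts):
    """Chain forward from data: fire rules until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def backward_chain(goal, facts):
    """Track back from a hypothesis: is the goal a fact, or derivable?"""
    if goal in facts:
        return True
    return any(all(backward_chain(p, facts) for p in premises)
               for premises, conclusion in rules if conclusion == goal)

data = {"has_feathers", "can_fly"}
print(forward_chain(data))                     # derives "nests_in_trees"
print(backward_chain("nests_in_trees", data))  # True: hypothesis is valid
```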

An important advantage of NMR over other spectral analysis methods comes from the signal's area being directly proportional to the number of protons. Therefore, in a spectrum, the area percentages for the different signals are related to the percentages of atoms. From that, it is easy to obtain the percentage of hydrogen for each of the above species. Furthermore, using the hypothesis that the average number of hydrogen atoms attached to the carbon atoms is two and using the results from elemental analysis of carbon and hydrogen, it is possible to deduce the following parameters  [c.66]
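A minimal sketch of this proportionality; the three signal labels and their integrated areas are invented for illustration:

```python
# In 1H NMR the signal area is proportional to the number of protons,
# so area percentages translate directly into hydrogen percentages.
areas = {"aromatic_H": 12.0, "CH2_H": 55.0, "CH3_H": 33.0}  # illustrative

total = sum(areas.values())
for species, area in areas.items():
    print(f"{species}: {100.0 * area / total:.1f}% of total hydrogen")
```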

TRIFOU can model any isotropic material encountered in eddy current inspections, such as ferromagnetic steel and aluminium. TRIFOU can model any type of probe with one or several coils, including those containing ferrite cores and shields. For thin-skin regime modelling, basic rules are needed for accurate results. The ratio between the mapped mesh step and the penetration depth in the test block has to be small enough to ensure good precision in the results, but too many elements increase the calculation time and the memory space needed, so a compromise has to be made. For the following calculations the ratio between penetration depth and mesh step is greater than 3. This hypothesis is to be validated.  [c.141]
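A minimal sketch of this meshing rule; the skin-depth formula is the standard one, the frequency and material values are illustrative, and the factor of 3 is the ratio quoted above:

```python
import math

def skin_depth(freq_hz, mu_r, sigma):
    """Standard penetration depth: delta = 1 / sqrt(pi * f * mu * sigma)."""
    mu0 = 4e-7 * math.pi
    return 1.0 / math.sqrt(math.pi * freq_hz * mu_r * mu0 * sigma)

# Illustrative test block: aluminium at 100 kHz.
delta = skin_depth(100e3, mu_r=1.0, sigma=3.5e7)   # ~0.27 mm
max_mesh_step = delta / 3.0                        # >= 3 elements per skin depth
print(f"skin depth {delta*1e3:.3f} mm, mesh step <= {max_mesh_step*1e3:.3f} mm")
```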

The estimated VSS and EPD allow for the observation of the tip diffraction effects (phase inversion, Δφ = 180°, for the direct and mirror diffraction echoes) for all selected A-scan signals. This proves the plane nature of the OSD and confirms our initial hypothesis.  [c.178]

Secondly, the defective sections are separated. A histogram is built which keeps, for all 128 sections, information on the distribution of the number Aoj of measured projections that exceed the threshold. From the histogram, the presence of a defect in the product is assessed and the sections with a high probability of defects are identified. If defects are absent, the distribution of the histogram's information parameter follows a normal law; when defects are present, surges are observed in some sections of the histogram. This is connected with the hypothesis, placed at the base of the IT, that in a defect-free product the US signal fluctuates according to the normal law, whereas defects make the distribution of the US signal amplitude deviate from it. The sections in which defects were detected are then excluded from further consideration and the thresholds are recalculated. This is a direct quantitative evaluation of the statistical standard of defect-free quality.  [c.249]
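A minimal sketch of this screening step, assuming 128 sections, a normal-law baseline, and a 3-sigma surge criterion; the synthetic counts and the sigma cut-off are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Per-section counts of projections exceeding the threshold.
# Defect-free sections fluctuate normally; two sections carry surges.
counts = rng.normal(loc=20.0, scale=3.0, size=128)
counts[[40, 97]] += 25.0   # simulated defective sections

# Flag sections whose count deviates strongly from the normal baseline,
# then re-estimate the baseline without them (threshold recalculation).
mu, sigma = counts.mean(), counts.std()
defective = np.flatnonzero(counts > mu + 3.0 * sigma)
clean = np.delete(counts, defective)
print("defective sections:", defective)   # expected: [40 97]
print(f"recalculated baseline: {clean.mean():.1f} +/- {clean.std():.1f}")
```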

Firstly, one has to develop a numerical model (the forward problem) able to regenerate the responses supplied by the sensor. Unfortunately, the relationship between the object function and the observed data which is used to invert eddy current data is inherently nonlinear, because it consists of a pair of coupled integral equations involving the product of two unknowns: the flaw conductivity and the true electric field within the flawed region. Several methods have been developed to solve this problem. Some sophisticated methods [12], [11], [5] seek to reconstruct simultaneously the object function and the diffracted electric field. They involve a non-linearized iterative process that minimizes a cost functional depending on two terms: the error between the computed scattered field at the present iteration and the measured data, and the error in satisfying the equation of state. This approach requires solving the direct-scattering problem at each iteration, and such methods need considerably more computation in the three-dimensional problem. In this work, we assume that the hypothesis underlying the Born approximation is fulfilled for solving the linearized inverse problem [4]. We assume therefore that the perturbation of the electric field within the flawed region is small and that the flawed region is uniformly illuminated by the incident field. We consider that the linearized model makes a relatively good compromise between fidelity to the measured data and simplicity of the model.  [c.326]
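In formula form, the Born linearization amounts to the following (a standard statement of the approximation; the symbols below are chosen here and are not taken from the source). The exact pair couples the scattered field E^s to the object function τ through the field E inside the flaw; the hypothesis replaces E by the incident field E^inc, making the data equation linear in τ:

```latex
% Exact coupled pair: data equation and state equation
E^{s}(r) = \int_{flaw} G(r,r')\,\tau(r')\,E(r')\,\mathrm{d}r', \qquad
E(r) = E^{inc}(r) + \int_{flaw} G(r,r')\,\tau(r')\,E(r')\,\mathrm{d}r'

% Born approximation: E \approx E^{inc} inside the flaw, hence
E^{s}(r) \approx \int_{flaw} G(r,r')\,\tau(r')\,E^{inc}(r')\,\mathrm{d}r'
```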

To convert (A1.1.37) into a quantum-mechanical form that describes the matter wave associated with a free particle travelling through space, one might be tempted simply to make the substitutions ν = E/h (Planck's hypothesis) and λ = h/p (de Broglie's hypothesis). It is relatively easy to verify that the resulting expression satisfies the time-dependent Schrödinger equation. However, it should be emphasized that this is not a derivation, as there is no compelling reason to believe that this ad hoc procedure should yield one of the fundamental equations of physics. Indeed, the time-dependent Schrödinger equation cannot be derived in a rigorous way and therefore must be regarded as a postulate.  [c.12]
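The verification is standard textbook algebra (the plane-wave form below is the usual rendering of the substituted expression, not quoted from the source): with Ψ(x,t) = A e^{i(px−Et)/ħ},

```latex
i\hbar\,\frac{\partial \Psi}{\partial t} = E\,\Psi,
\qquad
-\frac{\hbar^{2}}{2m}\,\frac{\partial^{2} \Psi}{\partial x^{2}}
  = \frac{p^{2}}{2m}\,\Psi,
```

so the free-particle time-dependent Schrödinger equation is satisfied precisely when E = p²/2m.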

From the very beginning of the 20th century, the concept of energy conservation has made it abundantly clear that electromagnetic energy emitted from and absorbed by material substances must be accompanied by compensating energy changes within the material. Hence, the discrete nature of atomic line spectra suggested that only certain energies are allowed by nature for each kind of atom. The wavelengths of radiation emitted or absorbed must therefore be related to the difference between energy levels via Planck's hypothesis, ΔE = hν = hc/λ.  [c.12]

This is known as the Stefan-Boltzmann law of radiation. If in this calculation of the total energy U one uses the classical equipartition result ⟨ε⟩ = kT, one encounters the integral ∫ dω ω², which is infinite. This divergence, which is the Rayleigh-Jeans result, was one of the historical results which collectively led to the inevitability of a quantum hypothesis. This divergence is also the cause of the infinite emissivity predicted for a black body according to classical mechanics.  [c.410]

In 1971 Wilson [21] recognized the analogy between quantum-field theory and the statistical mechanics of critical phenomena and developed a renormalization-group (RG) procedure that was quickly recognized as a better approach for dealing with the singularities at the critical point. New calculation methods were developed, one of which was expansion in powers of ε = 4 − d, where d is the spatial dimensionality.  [c.650]

It is worth discussing the fact that a free energy can be directly relevant to the rate of a dynamical process such as a chemical reaction. After all, a free energy function generally arises from an ensemble average over configurations. On the other hand, most condensed phase chemical rate constants are indeed thermally averaged quantities, so this fact may not be so surprising after all, although it should be quantified in a rigorous fashion. Interestingly, the free energy curve for a condensed phase chemical reaction (cf. figure A3.8.1) can be viewed, in effect, as a natural consequence of Onsager's linear regression hypothesis as it is applied to condensed phase chemical reactions, along with some additional analysis and simplifications [7].  [c.884]

The diffraction of x-rays was first observed in 1912 by Laue and coworkers [6]. A plausible, though undocumented, story says that the classic experiment was inspired by a seminar given by P P Ewald, whose doctoral thesis was a purely theoretical study of the interaction of electromagnetic waves with an array of dipoles located at the nodes of a three-dimensional lattice. At the time it was hypothesized that crystals were composed of parallelepipedal building blocks, unit cells, fitted together in three dimensions, and that x-rays were short-wavelength electromagnetic radiation, but neither hypothesis had been confirmed experimentally. The Laue experiment confirmed both, but the application of x-ray diffraction to the determination of crystal structure was introduced by the Braggs.  [c.1364]

Pedersen S, Herek J L and Zewail A H 1994 The validity of the diradical hypothesis: direct femtosecond studies of the transition-state structures Science 266 1359-64  [c.1996]


See pages that mention the term Hypothesis: [c.90]    [c.158]    [c.187]    [c.195]    [c.216]    [c.332]    [c.114]    [c.172]    [c.177]    [c.178]    [c.4]    [c.5]    [c.387]    [c.519]    [c.648]    [c.798]    [c.887]    [c.1273]    [c.2420]    [c.341]    [c.565]    [c.566]    [c.45]    [c.47]    [c.50]    [c.606]
Modern Analytical Chemistry (2000) -- [c.0]