Statistical Approaches


In general, tolerance stack models are based on either the worst case or statistical approaches, including those given in the references above. The worst case model (see equation 3.1) assumes that each component dimension is at its maximum or minimum limit and that the sum of these equals the assembly tolerance (this model was first presented in Chapter 2). The tolerance stack equations are given in terms of bilateral tolerances on each component dimension, which is a common format when analysing tolerances in practice. The worst case model is  [c.113]
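For orientation, the two stack models contrasted above are commonly written as follows; this is a hedged sketch of the standard forms, with t_i the bilateral tolerance of the i-th component and t_a the assembly tolerance (the exact notation of equation 3.1 may differ, and the statistical stack is shown here in its simplest root-sum-square form):

```latex
% Worst case stack: every component simultaneously at its limit
t_a = \sum_{i=1}^{n} t_i
% Statistical (root-sum-square) stack: tolerances add in quadrature
t_a = \sqrt{\sum_{i=1}^{n} t_i^{2}}
```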

The division of micromechanics stiffness evaluation efforts into the mechanics of materials approach and the elasticity approach with its many subapproaches is rather arbitrary. Chamis and Sendeckyj [3-5] divide micromechanics stiffness approaches into many more classes: netting analyses, mechanics of materials approaches, self-consistent models, variational techniques using energy-bounding principles, exact solutions, statistical approaches, finite element methods, semiempirical approaches, and microstructure theories. All approaches have the common objective of predicting composite material stiffnesses. All except the first two use some or all of the principles of elasticity theory to varying degrees, and so are here classed as elasticity approaches. This simplifying and arbitrary division is useful in this book because the objective here is merely to become acquainted with advanced micromechanics theories after the basic concepts have been introduced by use of typical mechanics of materials reasoning. The reader who is interested in micromechanics should supplement this chapter with the excellent critique and extensive bibliography of Chamis and Sendeckyj [3-5].  [c.137]

Statistical Approaches in Machinery Problem Solving  [c.1041]

The failure fighting process includes all machinery maintenance activities, failure analysis, and trouble-shooting. Fighting failures involves using every strategy that is appropriate to the situation if the process is to be successful. However, before any strategies can be applied, as much information as possible must be gathered regarding failure modes and their frequencies. Appropriate statistical approaches will accomplish this task and therefore have to be part of the failure fighting system.  [c.1043]
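As an illustration of the kind of statistics meant here, the short sketch below tallies failure-mode frequencies and a crude mean time between failures from a hypothetical failure log; the record layout, tag names and hour values are assumptions made purely for illustration.

```python
from collections import Counter

# Hypothetical failure log: (machine tag, failure mode, operating hours since last failure)
failure_log = [
    ("P-101", "bearing wear", 4200),
    ("P-101", "seal leak", 1300),
    ("P-102", "bearing wear", 3900),
    ("P-102", "coupling misalignment", 2600),
    ("P-101", "bearing wear", 4500),
]

# Frequency of each failure mode (a simple Pareto-style tally)
mode_counts = Counter(mode for _, mode, _ in failure_log)
for mode, count in mode_counts.most_common():
    print(f"{mode}: {count} occurrences")

# Crude mean time between failures (MTBF) across the whole log
mtbf = sum(hours for _, _, hours in failure_log) / len(failure_log)
print(f"Mean time between failures: {mtbf:.0f} h")
```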

The motion of particles in a fluid is best approached through the Boltzmann transport equation, provided that the combination of internal and external perturbations does not substantially disturb the equilibrium. In other words, our starting point will be the statistical thermodynamic treatment above, and we will consider the effect of both the internal and external fields. Let the chemical species in our fluid be distinguished by the Greek subscripts α, β, ... and let f_α(r, c, t) ΔV Δc be the number of molecules of type α located in  [c.569]

The evolution of the system following the quench contains different stages. The early stage involves the emergence of macroscopic domains from the initial post-quench state, and is characterized by the formation of interfaces (domain walls) separating regions of space where the system approaches one of its final coexisting states (domains). Late stages are dominated by the motion of these interfaces as the system acts to minimize its surface free energy. During this stage the mean size of the domains grows with time while the total amount of interface decreases. Substantial progress in the understanding of late stage domain growth kinetics has been inspired by the discovery of dynamical scaling, which arises when a single length dominates the time evolution. Then various measures of the morphology depend on time only through this length (an instantaneous snapshot of the order parameter's space dependence is referred to as the system's morphology at that time). The evolution of the system then acquires self-similarity in the sense that the spatial patterns formed by the domains at two different times are statistically identical apart from a global change of the length scale.  [c.733]
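The dynamical scaling statement can be summarized in the form commonly used in the coarsening literature (generic notation, not necessarily that of the source): the equal-time correlation function of the order parameter φ depends on time only through the single growing length L(t),

```latex
C(r,t) \equiv \langle \phi(\mathbf{x}+\mathbf{r},t)\,\phi(\mathbf{x},t)\rangle
      = f\!\left(\frac{r}{L(t)}\right).
```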

Therefore, apart from an unknown constant βΔA and a known linear term βΔE, these are the same function. Bennett [94] suggested two graphical methods for determining βΔA from P₀(ΔE) and P₁(ΔE), which rely on the two distributions, at worst, nearly overlapping (i.e., being measurable, with good statistics, for the same or similar values of ΔE). To broaden the sampling into the wings of the distribution, thereby improving statistics and extending the overlap region, we may use weighted sampling as described in section B3.3.4.2. There are many related approaches, variously called umbrella [95], multicanonical [96] and entropic [97] sampling, simulated tempering [98] and expanded ensembles [99].  [c.2263]
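As a rough illustration of why overlap of the two distributions matters, the sketch below estimates the free energy difference from energy-difference samples by simple one-sided exponential averaging (the Zwanzig perturbation formula), which is the crudest relative of Bennett's method described above; the sample arrays are synthetic placeholders, not data from the source.

```python
import numpy as np

beta = 1.0  # 1/(kT) in reduced units

# Placeholder samples of Delta E collected in the reference (0) and target (1) ensembles
rng = np.random.default_rng(0)
dE_0 = rng.normal(loc=2.0, scale=1.0, size=10000)   # samples of P0(Delta E)
dE_1 = rng.normal(loc=0.5, scale=1.0, size=10000)   # samples of P1(Delta E)

# One-sided exponential averaging in each direction:
#   Delta A = -(1/beta) ln < exp(-beta Delta E) >_0
#   Delta A = +(1/beta) ln < exp(+beta Delta E) >_1
dA_forward = -np.log(np.mean(np.exp(-beta * dE_0))) / beta
dA_reverse = +np.log(np.mean(np.exp(+beta * dE_1))) / beta

# Poor overlap of P0 and P1 shows up as a large gap between the two estimates,
# which is exactly what weighted (umbrella-type) sampling is meant to cure.
print(f"forward estimate: {dA_forward:.3f}, reverse estimate: {dA_reverse:.3f}")
```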

We have already discussed weighted sampling methods for exploring regions of high free energy, so the first part of this problem is tractable. The calculation of time-dependent functions, which start from the barrier top, is facilitated by the so-called blue-moon ensemble [186], in which a constraint is applied to keep the system exactly on the desired hypersurface. This allows sampling of the starting conditions with good statistics; then the constraint may be released and the subsequent dynamics accumulated. (Metric tensor factors associated with the constraint are discussed elsewhere [187, 188].) To compute the time-dependent part of the barrier-crossing rate, special approaches have been developed to suppress transient behaviour and statistical noise [187].  [c.2271]

Because this problem is complex, several avenues of attack have been devised in the last fifteen years. A combination of experimental developments (protein engineering, advances in X-ray and nuclear magnetic resonance (NMR), various time-resolved spectroscopies, single molecule manipulation methods) and theoretical approaches (use of statistical mechanics, different computational strategies, use of simple models) [5, 6 and 7] has led to a greater understanding of how polypeptide chains reach the native conformation.  [c.2642]

Master equation methods are not the only option for calculating the kinetics of energy transfer, and analytic approaches in general have certain drawbacks in not reflecting, for example, certain statistical aspects of coupled systems. Alternative approaches to the calculation of energy migration dynamics in molecular ensembles are Monte Carlo calculations [18, 19 and 20] and probability matrix iteration [21, 22], amongst others.  [c.3021]

As a consequence of this observation, the essential dynamics of the molecular process could as well be modelled by probabilities describing mean durations of stay within different conformations of the system. This idea is not new, cf. [10]. Even the phrase essential dynamics has already been coined in [2]; there it was chosen for the reformulation of molecular motion in terms of its almost invariant degrees of freedom. But unlike the former approaches, which aim in the same direction, we herein advocate a different line of method: we suggest directly attacking the computation of the conformations and their stability time spans, a global approach clearly differing from any kind of statistical analysis based on long-term trajectories.  [c.102]

A wide field of applications for chemical data mining is drug design. In short, drug design starts with a compound which has an interesting biological profile and optimizes the compound as well as its activity (see Section 10.4). Thus, the information about the biological activity of a compound is a crucial aspect in drug design. The relationship between a structure and its biological activity is represented by so-called quantitative structure-activity relationships (QSAR) (see Section 10.4). The field of QSAR can be approached via chemical data mining: starting from the structure input, e.g., in the form of a connection table (see Section 2.5), a 2D or 3D model of the structure is calculated. Secondary information, e.g., in the form of physicochemical properties such as charges, is then generated for these structures. The enhanced structure model is then the basis for calculating a descriptor, i.e., a structure code in the form of a vector to which computational methods, for example statistical methods or neural networks, can be applied. These methods can then fulfill various data mining tasks such as classification or the establishment of QSAR models, which can finally be employed for the prediction of properties such as biological activities.  [c.474]
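A minimal sketch of the descriptor-plus-statistics step just described is given below, assuming RDKit and scikit-learn are available; the SMILES strings and activity values are invented placeholders, and real QSAR work would use far richer descriptors and proper validation.

```python
from rdkit import Chem
from rdkit.Chem import Descriptors
from sklearn.linear_model import LinearRegression

# Placeholder training data: structures (as SMILES) with hypothetical activities
smiles = ["CCO", "CCCO", "CCCCO", "CCCCCO"]
activity = [0.5, 1.1, 1.8, 2.4]   # e.g., log(1/C); values invented for illustration

def descriptor(smi):
    """Build a small descriptor vector of physicochemical properties for one structure."""
    mol = Chem.MolFromSmiles(smi)
    return [Descriptors.MolWt(mol), Descriptors.MolLogP(mol), Descriptors.TPSA(mol)]

X = [descriptor(s) for s in smiles]

# Statistical method applied to the descriptors: here a plain linear QSAR model
model = LinearRegression().fit(X, activity)

# Predict the property of a new structure from its descriptor
print(model.predict([descriptor("CCCCCCO")]))
```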

The integral of the Gaussian distribution function does not exist in closed form over an arbitrary interval, but it is a simple matter to calculate the value of p(z) for any value of z, hence numerical integration is appropriate. Like the test function, f(x) = 100 - x, the accepted value (Young, 1962) of the definite integral (1-23) is approached rapidly by Simpson's rule. We have obtained four-place accuracy or better at millisecond run time. For many applications in applied probability and statistics, four significant figures are more than can be supported by the data.  [c.16]
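A hedged sketch of the kind of calculation described, numerical integration of the Gaussian density p(z) by Simpson's rule, is shown below; the interval, step count and four-figure target are illustrative choices, not taken from the text.

```python
import math

def p(z):
    """Standard normal density."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def simpson(f, a, b, n=100):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    if n % 2:
        n += 1
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4.0 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2.0 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3.0

# Probability of z falling between -1 and +1; the accepted value is about 0.6827
print(f"{simpson(p, -1.0, 1.0):.4f}")
```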

It has long been realized that in very fine pores, having widths of the order of a few molecular diameters, the Kelvin equation could no longer remain strictly valid. Not only would the values of the surface tension γ and the molar volume of the liquid deviate from those of the liquid adsorptive in bulk, but the very concept of a meniscus would eventually become meaningless. The question as to the value of the curvature at which the deviations become large enough to produce appreciable effects on the calculated pore size is a long-standing one and not easy to answer with precision. Since direct experimental measurements of these quantities are ruled out by the smallness of the dimensions involved, indirect approaches are inevitable. On statistical-mechanical grounds, Guggenheim concluded that the surface tension must begin to depend on the radius of curvature of a liquid surface when this falls below r ≈ 500 Å. Melrose, extending the treatment of Willard Gibbs, was able to derive an expression for the ratio of γ to its bulk value as a function of the radius of curvature that indirectly involved the thickness of the interfacial region. A curve from Melrose's paper, reproduced in Fig. 3.21, is based on the assumption, regarded as reasonable, that the interfacial region is 4 to 6 molecular diameters thick. As is seen, the surface tension begins to deviate appreciably from its bulk value when r falls below 500 Å: at r = 100 Å, γ already exceeds the bulk value by 10 per cent, and at r = 20 Å the excess has become 30 per cent. Inserted in the Kelvin equation, these values of γ will elevate r in the same proportions, so that the corrected values of r would be 110 Å and 27 Å respectively.  [c.153]
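For orientation, the Kelvin equation referred to here is usually written, for a hemispherical meniscus of radius r and with V_L the molar volume of the liquid (quoted as general background; the chapter's own notation may differ):

```latex
\ln\frac{p}{p^{0}} \;=\; -\,\frac{2\gamma V_{L}}{r\,R\,T}
```

Because r enters in the combination γ/r, a given relative pressure corresponds to a radius proportional to γ, which is why a 10 or 30 per cent excess in γ raises the calculated radius in the same proportion.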

In the previous section we described several internal methods of quality assessment that provide quantitative estimates of the systematic and random errors present in an analytical system. Now we turn our attention to how this numerical information is incorporated into the written directives of a complete quality assurance program. Two approaches to developing quality assurance programs have been described: a prescriptive approach, in which an exact method of quality assessment is prescribed, and a performance-based approach, in which any form of quality assessment is acceptable, provided that an acceptable level of statistical control can be demonstrated.  [c.712]

Once a control chart is in use, new quality assessment data should be added at a rate sufficient to ensure that the system remains in statistical control. As with prescriptive approaches to quality assurance, when a quality assessment sample is found to be out of statistical control, all samples analyzed since the last successful verification of statistical control must be reanalyzed. The advantage of a performance-based approach to quality assurance is that a laboratory may use its experience, guided by control charts, to determine the frequency for collecting quality assessment samples. When the system is stable, quality assessment samples can be acquired less frequently.  [c.721]
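As a sketch of how a control chart guides this decision (the limits, baseline data and new results below are illustrative assumptions, not the book's worked example), a Shewhart-type chart flags quality assessment results falling outside the mean plus or minus three standard deviations of the established baseline:

```python
import statistics

# Baseline quality assessment results gathered while the system was known to be in control
baseline = [10.2, 10.0, 9.9, 10.1, 10.3, 9.8, 10.0, 10.2, 10.1, 9.9]

center = statistics.mean(baseline)
sigma = statistics.stdev(baseline)
ucl, lcl = center + 3 * sigma, center - 3 * sigma   # upper/lower control limits

# New quality assessment samples are checked against the control limits
for result in [10.1, 9.7, 10.8]:
    in_control = lcl <= result <= ucl
    status = "in control" if in_control else "OUT OF CONTROL - reanalyze since last verification"
    print(f"{result:5.2f}  {status}")
```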

Statistical Thermodynamic Isotherm Models. These approaches were pioneered by Fowler and Guggenheim (21) and Hill (22). Examples of the application of this approach to modeling of adsorption in microporous adsorbents are given in references 3, 23-27. Excellent reviews have been written (4,28).  [c.273]

A number of studies have examined the high pressure behavior of vitreous silica (122-127). The degree of compaction depends on temperature, pressure, and the time of pressurization treatment. A density increase of almost 19% was obtained when the glass was subjected to a pressure of 8 GPa (<80,000 atm) at 575°C for 2 min. The samples remained completely amorphous, though the densities and refractive indexes were approaching those of the quartz phase (123,128). Structural studies indicate that the compaction is accompanied by a shift in the ring statistics toward a higher percentage of three- and four-membered rings of SiO₄ tetrahedra (129). Above 20 GPa (200,000 atm), the SiO₄ tetrahedra begin to distort and the Si coordination gradually shifts from four to six oxygens (130).  [c.504]

Fiber Length Distribution. For industrial applications, the fiber length and length distribution are of primary importance because they are closely related to the performance of the fibers in matrix reinforcement. Various fiber classification methods have thus been devised. Representative distributions of fiber lengths and diameters can be obtained through measurement and statistical analysis of microphotographs (14); fiber length distributions have also been obtained recently from automated optical analyzers (15). Typical fiber length distributions obtained from these approaches are illustrated in Figure 6 for chrysotile fibers. As in the cases shown there, industrial asbestos fiber samples usually contain a rather broad distribution of fiber lengths.  [c.349]

Correlation methods discussed include basic mathematical and numerical techniques, and approaches based on reference substances, empirical equations, nomographs, group contributions, linear solvation energy relationships, molecular connectivity indexes, and graph theory. Chemical data correlation foundations in classical, molecular, and statistical thermodynamics are introduced.  [c.232]

Intelligent system is a term that refers to computer-based systems that include knowledge-based systems, neural networks, fuzzy logic and fuzzy control, qualitative simulation, genetic algorithms, natural language understanding, and others. The term is often associated with a variety of computer programming languages and/or features that are used as implementation media, although this is an imprecise use. Examples include object-oriented languages, rule-based languages, Prolog, and Lisp. The term intelligent system is preferred over the term artificial intelligence. The three intelligent-system technologies currently seeing the greatest amount of industrial application are knowledge-based systems, fuzzy logic, and artificial neural networks. These technologies are components of distributed systems. Mathematical models, conventional numeric and statistical approaches, neural networks, knowledge-based systems, and the like, have their place in practical implementation and allow automation of tasks not well-treated by numerical algorithms.  [c.509]

R. M. Haralick, Statistical and structural approaches to texture. Proceedings of the IEEE, Vol. 67, No. 5, May 1979.  [c.237]

There are two different aspects to these approximations. One consists in the approximate treatment of the underlying many-body quantum dynamics; the other, in the statistical approach to observable average quantities. An exhaustive discussion of different approaches would go beyond the scope of this introduction. Some of the most important aspects are discussed in separate chapters (see chapter A3.7, chapter A3.11, chapter A3.12, chapter A3.13).  [c.774]

Basic features of solvent effects can be illustrated by considering the variation of the rate constant of a unimolecular reaction as one gradually passes from the low-pressure gas phase into the regime of liquid-like densities [1] (see figure A3.6.1). At low pressures, where the rate is controlled by thermal activation in isolated binary collisions with bath gas molecules, k is proportional to pressure, i.e. it is in the low-pressure limit k₀. Raising the pressure further, one reaches the fall-off region where the pressure dependence of k becomes increasingly weaker until, eventually, it attains the constant so-called high-pressure limit k∞. At this stage, collisions with bath gas molecules, which can still be considered as isolated binary events, are sufficiently frequent to sustain an equilibrium distribution over rotational and vibrational degrees of freedom of the reactant molecule, and k∞ is determined entirely by the intramolecular motion along the reaction path. k∞ may be calculated by statistical theories (see chapter A3.4) if the potential-energy (hyper)surface (PES) for the reaction is known. What kind of additional effects can be expected if the density of the compressed bath gas approaches that of a dense fluid? Ideally, there will be little further change, as equilibration becomes even more effective because of permanent energy exchange with the dense heat bath. So, even with more confidence than in the gas phase, one could predict the rate constant using statistical reaction rate theories such as, for example, transition state theory (TST). However, this ideal picture may break down if (i) there is an appreciable change in charge distribution or molar volume as the system moves along the reaction path from reactant to product state, (ii) the reaction entails large-amplitude structural changes that are subject to solvent frictional forces retarding the motion along the reaction path, or (iii) motion along the reaction path is sufficiently fast that thermal equilibrium over all degrees of freedom of the solute and the bath cannot be maintained.  [c.830]
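The low-pressure, fall-off and high-pressure regimes described here are captured, in the simplest approximation, by the Lindemann-Hinshelwood form, quoted as general background rather than as this chapter's own equation, with [M] the bath gas concentration:

```latex
k_{\mathrm{uni}} \;=\; \frac{k_{0}[\mathrm{M}]}{1 + k_{0}[\mathrm{M}]/k_{\infty}},
\qquad
k_{\mathrm{uni}} \to k_{0}[\mathrm{M}] \;\;(\text{low pressure}),
\qquad
k_{\mathrm{uni}} \to k_{\infty} \;\;(\text{high pressure}).
```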

This chapter concentrates on describing molecular simulation methods which have a connection with the statistical mechanical description of condensed matter, and hence relate to theoretical approaches to understanding phenomena such as phase equilibria, rare events, and quantum mechanical effects.  [c.2239]

A technique used quite often to explore the conformational space of molecules is random or stochastic generation (sometimes also called random searches) [134, 135]. In contrast to systematic approaches, random methods generate conformational diversity not in a predictable fashion, but randomly. To obtain a new starting geometry, either the Cartesian coordinates are changed (e.g., by adding random numbers within a certain range to the x-, y-, and z-coordinates of the atoms) or the internal coordinates are varied (e.g., by assigning random values to the torsion angles of the rotors) in a random manner. Again, as discussed for the systematic techniques, ring portions can be treated as "pseudo-acyclic", checking whether the ring closure condition is fulfilled. After the new conformation has been optimized and compared with all the previously generated conformations, it can be used as the starting point for the next iteration. In addition, a frequently used criterion for selecting a new starting geometry in random techniques is the so-called Metropolis Monte Carlo scheme [136]. Thereby, a newly generated (and optimized) conformation is used as a starting geometry for the next iteration only if it is lower in energy than the previous one or if it has a sufficiently high statistical probability of acceptance (calculated from the Boltzmann factor of the energy difference). Otherwise, the previous structure is taken as the starting point. Thus, the selection of starting conformations is biased towards lower-energy structures, but also allows jumps into high-energy regions of the molecular hypersurface. Because of the random changes, stochastic methods are able to access a completely different region of the conformational space from one iteration step to the next. On the one hand, this ensures a broad sampling of the conformational space, but on the other, an artificial stop criterion has to be defined as random methods do not have a natural end point. Usually,  [c.108]
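The Metropolis selection rule mentioned above can be sketched in a few lines; the energy function and the placeholder energy values stand in for the force-field optimized conformational energies of the text, and temperature is expressed through kT in arbitrary units.

```python
import math
import random

def metropolis_accept(e_new, e_old, kT=1.0):
    """Accept a new conformation if it is lower in energy, or otherwise with
    probability given by the Boltzmann factor of the energy difference."""
    if e_new <= e_old:
        return True
    return random.random() < math.exp(-(e_new - e_old) / kT)

# Usage: decide whether a newly optimized conformation becomes the next starting point
e_current = -12.4      # energy of the previous conformation (placeholder value)
e_candidate = -11.9    # energy of the newly generated, optimized conformation
if metropolis_accept(e_candidate, e_current):
    e_current = e_candidate   # accepted, possibly a jump into a higher-energy region
```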

Although this is a commonly used procedure for comparing two methods, it does violate one of the assumptions of an ordinary linear regression. Since both methods are expected to have indeterminate errors, an unweighted regression with errors in y may produce a biased result, with the slope being underestimated and the y-intercept being overestimated. This limitation can be minimized by placing the more precise method on the x-axis, using ten or more samples to increase the degrees of freedom in the analysis, and by using samples that uniformly cover the range of concentrations. For more information see Miller, J. C.; Miller, J. N. Statistics for Analytical Chemistry, 3rd ed.; Ellis Horwood PTR Prentice-Hall: New York, 1993. Alternative approaches are discussed in Hartman, C.; Smeyers-Verbeke, J.; Penninckx, W.; Massart, D. L. Anal. Chim. Acta 1997, 338, 19-40, and Zwanziger, H. W.; Sarbu, C. Anal. Chem. 1998, 70, 1277-1280.  [c.133]
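A brief sketch of the ordinary (unweighted, errors-in-y) regression used in such method comparisons is given below; the paired results are placeholders, and the alternative approaches cited above handle indeterminate errors in both variables.

```python
import numpy as np

# Paired results for the same samples: the more precise method goes on the x-axis
x = np.array([1.0, 2.5, 4.0, 5.5, 7.0, 8.5, 10.0, 12.0, 14.0, 16.0])  # reference method
y = np.array([1.1, 2.4, 4.2, 5.4, 7.3, 8.4, 10.2, 11.8, 14.3, 15.8])  # method being evaluated

# Unweighted least-squares line y = slope*x + intercept
slope, intercept = np.polyfit(x, y, 1)

# Ideal agreement between the two methods corresponds to slope = 1 and intercept = 0
print(f"slope = {slope:.3f}, intercept = {intercept:.3f}")
```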

Pufferfish toxin, isolated from a dozen or more species, has been identified as having the empirical formula C₁₁H₁₇N₃O₈, but the structure is not well-established, nor is it certain that the same structure is universally responsible for poisoning, although this is assumed to be the case. The so-called paralytic shellfish poisoning reported in many areas of the world has a microbiological etiology, and is thus more accurately a contamination rather than a natural toxicosis. The paralytic effects of the poisoning begin as a tingling sensation in the lips, tongue, and extremities, and gradually progress into nausea and convulsions. Japanese statistics indicate mortality rates approaching 65% (100).  [c.481]

As can be seen from Figure 4, LBVs for these components are not constant across the ranges of composition. An interaction model has been proposed (60) which assumes that the lack of linearity results from the interaction of pairs of components. An approach which focuses on the difference between the weighted linear average of the components and the actual octane number of the blend (bonus or debit) has also been developed (61). The independent variables in this type of model are statistical functions (averages, variances, etc) of blend properties such as octane, olefins, aromatics, and sulfur. The general statistical problem has been analyzed (62) and the two approaches have been shown to be theoretically similar though computationally different.  [c.188]
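In generic form (the notation here is an assumption for illustration, not that of refs. 60-61), the pairwise-interaction idea and the bonus/debit quantity can be written with v_i the blend fractions and ON_i the component octane numbers:

```latex
\mathrm{ON}_{\text{blend}} \;\approx\; \sum_{i} v_{i}\,\mathrm{ON}_{i}
\;+\; \sum_{i<j} v_{i}\,v_{j}\,B_{ij},
\qquad
\text{bonus/debit} \;=\; \mathrm{ON}_{\text{actual}} - \sum_{i} v_{i}\,\mathrm{ON}_{i},
```

where the B_ij are empirical pair-interaction coefficients.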

There are distinct performance differences for garments that are resistant to flame alone compared to those that are resistant to both heat buildup and flammability. Appropriate tests have been devised that measure the thermal protective performance of fabrics when exposed to a radiant heat source. Sophisticated constructions against biohazards need further development to produce materials that are both thermally comfortable and impermeable to bloodborne pathogens and other deleterious microorganisms. The most promising approaches are materials that are laminated or coated fabrics that are permeable to vapor but impermeable to liquids. Statistics included in recent OSHA standards for protection against bloodborne pathogens (e.g., hepatitis and AIDS viruses) estimate that currently close to six million persons in the United States (health care workers and many other occupations) require protective clothing and other types of safeguards against these biohazards (43).  [c.73]

Other formulations based on more rigorous statistical mechanics approaches are also available (12). The first term on the right in equation 7 is the standard Flory-Huggins (FH) expression, except that the interaction term, B or χ, is evaluated from solubility parameters, computed from a group contribution method as described (75) using equation 6 or its equivalent. The second term is a quasichemical (QC)-type formulation (98) that treats hydrogen bond formation in analogy with chemical reaction equilibria. The temperature-dependent equilibrium constants have been approximated from infrared spectroscopy observations (75). Of course, the possibility of hydrogen bond formation between two polymers does not guarantee their miscibility. The self-association that must be disrupted within the pure components may outweigh the favorable intercomponent hydrogen bonds formed in the mixture, and the overall contribution to the free energy is thus unfavorable. Likewise, unfavorable nonhydrogen bond interactions may outweigh any favorable contribution from hydrogen bonding. Other potentially strong specific interaction mechanisms, including ionic interactions, have been proposed for producing miscible blends (1,101-105). In all cases, whether miscibility occurs or not depends on a delicate balance of issues, with the net free energy effect rarely being as great as might be imagined by a simplistic accounting of only a strong specific interaction. For one thing, any favorable gain in the energy of mixing is accompanied by an unfavorable noncombinatorial entropy effect (106,107).  [c.411]
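For reference, the standard Flory-Huggins term referred to as the first term of equation 7, with the interaction parameter estimated from solubility parameters, is commonly written as follows (a sketch in generic notation, not necessarily that of the source), with φ_i the volume fractions, V_i the component molar volumes and δ_i the solubility parameters:

```latex
\frac{\Delta G_{\text{mix}}}{V} \;=\;
RT\!\left(\frac{\phi_{1}}{V_{1}}\ln\phi_{1} + \frac{\phi_{2}}{V_{2}}\ln\phi_{2}\right)
\;+\; B\,\phi_{1}\phi_{2},
\qquad
B \;\approx\; \left(\delta_{1}-\delta_{2}\right)^{2}.
```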


See pages that mention the term Statistical Approaches : [c.444]    [c.848]    [c.2815]    [c.110]    [c.397]    [c.536]    [c.815]   
See chapters in:

Chemical kinetics: the study of reaction rates in solution  -> Statistical Approaches