Reliability theory measurements

Electrochemical impedance spectroscopy leads to information on surface states and representative circuits of electrode/electrolyte interfaces. Here, the measurement technique involves potential modulation and the detection of phase shifts with respect to the generated current. The driving force in a microwave measurement is the microwave power, which is proportional to E² (E = electrical microwave field). Therefore, for a microwave impedance measurement, the microwave power P has to be modulated to observe a phase shift with respect to the flux, the transmitted or reflected microwave power ΔP/P. Phase-sensitive microwave conductivity (impedance) measurements, again provided that a reliable theory is available for combining them with an electrochemical impedance measurement, should lead to information on the kinetics of surface states and defects and the polarizability of surface states, and may lead to more reliable information on real representative circuits of electrodes. We suspect that representative electrical circuits for electrode/electrolyte interfaces may become directly determinable by combining phase-sensitive electrical and microwave conductivity measurements. However, up to now, in this early stage of development of microwave electrochemistry, only comparatively simple measurements can be evaluated. [Pg.461]

As mentioned at the beginning of this chapter, real phase-sensitive measurements of electrochemical systems have not yet been performed. Not only is the experimental technique difficult, but a reliable theory of... [Pg.514]

The above solvent theory (A) and proton theory (B) have shown that, in theory, the neutrality point (of the pure solvent) lies for the amphiprotic solvents at pH = ½pKs and for the aprotic protophilic solvents at a pH somewhere between the highest acidity (of the protonated solvent) and an infinitely high pH. However, the true pH of the neutrality point of the solvent can only be obtained from a reliable pH measurement, and the problem is whether and how this can be achieved. For water as a solvent, the true pH = -log aH+ = colog aH+ is fixed by the internationally adopted convention E°(H+/H2 (1 atm)) = 0... [Pg.255]

Laboratory research depends on reliable, high-quality measurements, and the importance of uniformity, reliability, and accuracy in achieving this goal cannot be emphasized enough. Poor or inaccurate measurements can only lead to poor or inaccurate conclusions. A good theory can be lost if the experimental data are misread. [Pg.66]

Many methods have been used to determine the value of the PZC on solid electrodes. The one that seems to be most reliable, and relatively easy to perform, is based on diffuse-double-layer theory. Measurement of the capacitance in dilute solutions (C < 0.01 M) should show a minimum at the PZC, as seen in Eq. 15G and Fig. 4G. Lowering the concentration yields better defined minima. Modern instrumentation... [Pg.172]

One major objective of kinetic studies is to contribute towards the development of the theory of solid state chemistry. Identification of the factors that determine reactivity requires a foundation of reliable kinetic measurements. [Pg.166]

The measurements are reliable only if the experimental data fit the theory at small tip/ITIES distances (i.e., d/a < 0.1). The reliability of measurements can also be verified by fitting experimental iT versus d curves to the theory for a conductive substrate. The maximum normalized feedback current for such a process should be at least 6. [Pg.304]

Unpleasant odors may sometimes develop in wine during alcoholic fermentation, due to the formation of sulfur compounds by yeast. In view of the complexity of yeast's sulfur metabolism, there are many biochemical mechanisms capable of producing these malodorous molecules. For this reason, theories that attempt to explain the appearance of reduction defects in fermenting wines are often contradictory, and of practically no use to winemakers wishing to implement reliable preventive measures (Rankine, 1963; Eschenbruch, 1974). [Pg.262]

Reliability theory is based on probability theory, but is only used today to indicate possible values of the various safety factors in use. A historical development of the measurement of safety is given in Chapter 3 and, as we shall see, the traditional safety factors only measure part of the uncertainty surrounding the construction and eventual use of a structure. These factors, which are usually the ratio of some estimated critical load or stress for the structure to the estimated working load or stress, are crude and ignore the possibility of human error. [Pg.16]
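
As a concrete illustration of the ratio described above, the minimal Python sketch below computes a traditional (deterministic) safety factor; the numerical values are invented for illustration and are not taken from the text.

```python
# Minimal sketch of a traditional safety factor: the ratio of an estimated
# critical load or stress to the estimated working load or stress.
# The numbers below are purely illustrative.

def safety_factor(critical_stress_mpa: float, working_stress_mpa: float) -> float:
    return critical_stress_mpa / working_stress_mpa

sf = safety_factor(critical_stress_mpa=350.0, working_stress_mpa=140.0)
print(f"Safety factor = {sf:.2f}")   # 2.50
```

As the passage points out, a single number of this kind measures only part of the uncertainty; it says nothing about the scatter in either estimate or about human error.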

Thus we have argued that the engineer has to make use of propositions, of theories and data, which are highly variable in their testability and dependability. The next question is, obviously, how can we use the ideas presented in this chapter to help the engineer measure this variable dependability, even if the measurement has to be subjective? There is no accepted answer to this question today, but one purpose of the work described in Chapters 6 and 10 is to begin to provide a theoretical basis for such measurements. Firstly, we have to be convinced that the present methods of reliability theory based on probability theory are inadequate. In fact it will be argued in Chapter 5 that the present use of reliability theory confuses the four aspects of testability discussed earlier. We will demonstrate the limitations of probability theory as a measure of the testability or dependability of a theory. In Chapter 6 we will discuss the theoretical developments which may eventually lead us to measures of the various aspects of testability and dependability, and we will return to a discussion of this in Chapter 10. [Pg.45]

These ideas, of course, still require development. Just as we have theorems of probability theory, decision theory and reliability theory, it will be possible to develop fuzzy probability theory, fuzzy decision theory and fuzzy reliability theory, perhaps based on the measures presented here. [Pg.168]

Performing estimation and risk analysis in the presence of uncertainty requires a method that reproduces the random nature of certain events (such as failures in the context of reliability theory). A Monte-Carlo simulation addresses this issue by running a model many times and picking values from a predefined probability distribution at each run (Mun 2006). This process allows the generation of output distributions for the variables of interest, from which several statistical measures (such as mean, variance, skewness) can be computed and analyzed. [Pg.660]
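
A minimal sketch of such a Monte Carlo run, assuming (purely for illustration) a two-component series system with exponentially distributed times to failure; the distributions, rates and sample size are assumptions, not taken from Mun (2006).

```python
import random
import statistics

# Monte Carlo sketch: sample component lifetimes from assumed probability
# distributions at each run and build an output distribution for the
# system lifetime, from which statistical measures can be computed.

def simulate_system_lifetime(rate_a: float, rate_b: float) -> float:
    """One run: draw each component's time to failure; a series system
    fails at the first component failure."""
    t_a = random.expovariate(rate_a)
    t_b = random.expovariate(rate_b)
    return min(t_a, t_b)

def monte_carlo(n_runs: int = 100_000):
    # Illustrative failure rates (per hour).
    return [simulate_system_lifetime(rate_a=1 / 800.0, rate_b=1 / 1200.0)
            for _ in range(n_runs)]

if __name__ == "__main__":
    lifetimes = monte_carlo()
    lifetimes.sort()
    print(f"mean lifetime  : {statistics.mean(lifetimes):8.1f} h")
    print(f"std. deviation : {statistics.stdev(lifetimes):8.1f} h")
    print(f"5th percentile : {lifetimes[int(0.05 * len(lifetimes))]:8.1f} h")
```

For this particular series system the lifetime is again exponential with mean 1/(1/800 + 1/1200) = 480 h, which provides a quick sanity check on the simulated output distribution.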

Current practice assumes only two possible states of an individual measuring channel: the up state, in which the system is able to operate, and the down state, in which it is not. Reliability theory splits the down state into a preventive maintenance state and a failure state. Preventive maintenance can be scheduled easily, which is why we will only consider failure states. The failure state is commonly understood as something unwanted and potentially dangerous. [Pg.1505]
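
A small sketch of the three states mentioned above (up, preventive maintenance, failure) treated as a continuous-time Markov chain; the state space follows the passage, but the transition rates and the steady-state calculation are illustrative assumptions.

```python
import numpy as np

# States of a measuring channel, as described in the text:
# 0 = up, 1 = preventive maintenance, 2 = failure (1 and 2 are down states).
# Transition rates (per hour) are purely illustrative assumptions.
lam_f = 1 / 2000.0   # up -> failure
lam_m = 1 / 500.0    # up -> preventive maintenance
mu_m  = 1 / 8.0      # maintenance completed, back to up
mu_f  = 1 / 24.0     # repair completed, back to up

# Generator matrix Q of the continuous-time Markov chain (rows sum to zero).
Q = np.array([
    [-(lam_f + lam_m), lam_m,  lam_f],
    [ mu_m,           -mu_m,   0.0 ],
    [ mu_f,            0.0,   -mu_f],
])

# Steady-state probabilities: solve pi @ Q = 0 together with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print(f"P(up)          = {pi[0]:.4f}")
print(f"P(maintenance) = {pi[1]:.4f}")
print(f"P(failure)     = {pi[2]:.4f}  <- the unwanted down state")
```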

The optimum experiment would therefore proceed by preparing the molecules in a precise state using beam methods and laser excitation, followed by measurement of the radiative emission as well as other properties of the bound molecule. Such experiments are, in fact, underway on a variety of molecules in several laboratories around the world. In addition, information on the bound-state dynamics of molecules has emerged from pump-probe techniques in which two lasers are utilized, one to prepare the molecule in the desired state and the second to interrogate the dynamics. Along with these experimental developments, we note a need for reliable theories to understand the interrelationship between the observed features and the nature of the dynamics. Such developments are in progress. [Pg.141]

From the viewpoint of the conventional band theory, the band gap is absent in metals and has positive width in dielectrics. The latter can be divided into dielectrics proper, with Eg > 4 eV, and semiconductors, with 0 < Eg < 4 eV. Since Eg defines the energy required to transform a dielectric into a conducting (metallic) state, this parameter is widely used for various physical and chemical purposes and correlations. Tables 2.15, S2.12 and S2.13 comprise the most reliable experimental measurements of Eg. [Pg.92]
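
The classification quoted above reduces to a simple rule on Eg; the sketch below encodes it, with example band-gap values that are common textbook figures rather than values from Tables 2.15, S2.12 or S2.13.

```python
def classify_by_band_gap(eg_ev: float) -> str:
    """Classify a solid by its band gap Eg (in eV), following the
    conventional-band-theory scheme described in the text."""
    if eg_ev <= 0.0:
        return "metal (no band gap)"
    if eg_ev < 4.0:
        return "semiconductor (0 < Eg < 4 eV)"
    return "dielectric proper (Eg > 4 eV)"

# Illustrative textbook values only:
for name, eg in [("Cu", 0.0), ("Si", 1.1), ("GaAs", 1.4), ("NaCl", 8.5)]:
    print(f"{name:5s} Eg = {eg:4.1f} eV -> {classify_by_band_gap(eg)}")
```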

Uncertainty theory is also referred to as probability theory, credibility theory, or reliability theory and includes fuzzy random theory, random fuzzy theory, double stochastic theory, double fuzzy theory, dual rough theory, fuzzy rough theory, random rough theory, and rough stochastic theory. This section focuses on probability theory and fuzzy set theory, including probability spaces, random variables, credibility measurement, fuzzy variables and their expected value operators, and so on. [Pg.15]

Previously, reliability theory had only been applied to conventional program code. However, it proved to be fairly easy to adapt the theory to PLC logic networks. Minor extensions to the coverage growth theory were needed, and we also needed to identify suitable coverage measures for logic rather than conventional code. Once this had been done, it was possible to use... [Pg.192]

One could ask whether it is worthwhile discussing in this book the theory of charge transport in polymers if only very few serious calculations are available and there are no very reliable experimental measurements on the same systems, which hinders comparison of the results. The answer is yes, because the theory itself is sufficiently developed for... [Pg.324]

The development of synchrotron radiation sources and advances in infrared, x-ray and electron-energy-loss spectroscopy, as well as improvements in sample preparation, have vastly extended the number and range of reliable optical measurements. Moreover, advances in the theory have provided various optical sum rules [12,13] by which the self-consistency of optical data can be checked. [Pg.13]
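
As one concrete example of such a self-consistency check (chosen here for illustration, not necessarily one of the sum rules of refs. [12,13]), the sketch below numerically integrates a Drude-model conductivity and compares the spectral weight with the f-sum-rule value (π/2)ε0ωp²; the Drude parameters are arbitrary assumptions.

```python
import numpy as np

# Self-consistency check of "optical data" against the f-sum rule:
#   integral_0^inf sigma_1(omega) d(omega) = (pi/2) * eps0 * omega_p^2.
# The data are generated from a Drude model with arbitrary parameters, so the
# integrated spectral weight should match the rule up to grid truncation.

eps0 = 8.8541878128e-12      # vacuum permittivity, F/m
omega_p = 1.0e16             # assumed plasma frequency, rad/s
tau = 1.0e-14                # assumed relaxation time, s

omega = np.linspace(1e11, 2e18, 2_000_000)                    # rad/s
sigma1 = eps0 * omega_p**2 * tau / (1.0 + (omega * tau)**2)   # Drude sigma_1

# Trapezoidal integration of the spectral weight.
lhs = np.sum(0.5 * (sigma1[1:] + sigma1[:-1]) * np.diff(omega))
rhs = 0.5 * np.pi * eps0 * omega_p**2

print(f"integrated spectral weight : {lhs:.4e}")
print(f"f-sum-rule value           : {rhs:.4e}")
print(f"relative deviation         : {abs(lhs - rhs) / rhs:.2%}")
```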

El-Damoese, M.A. & Temraz, N.S., 2011. Availability and Reliability Measures for Multi-State System by Using Markov Reward Model. Reliability: Theory & Applications, 6(3): 68-85. [Pg.240]

Commuters often encounter intersections which operate near their capacity limits and do not know when they will experience unacceptable traffic conditions. There are days on which they drive through such intersections without major disruption and days on which they have to accept adverse traffic conditions, which can be expressed by means of a variety of measures including delays, queues, stops, etc. [Chodur 2004]. For each driver, the ability to estimate the likelihood of favourable traffic conditions at the time of departure and to select the best route is extremely important in terms of minimizing trip time. The aim of this paper is to show the application of elements of reliability theory to the description of the functioning of a lane with a left turn at a signalised intersection in Krakow over several successive working days in favourable weather conditions. The analysis will cover 24-hour periods and record moments in time at which there were adverse traffic conditions from the viewpoint of drivers. The lane's functioning is associated with the level and quality of service and is renewable in time. The concept of renewal in this case is directly related to theoretical renewal, in which a renewed object exhibits the same reliability as immediately before the overload. [Pg.335]

Next, drawing on the set of fitted theoretical density functions f(t) describing the variability of the traffic-conditions measure, one can determine the reliability function, the failure function and the renewal intensity function for the measure's value. In reliability theory, the proper construction and interpretation of these reliability characteristics requires that the term failure be defined in relation to the unit of the analysed measure, which is variable in time. In the case of time periods, a failure occurs when the period of proper functioning of the lane, until occurrence of the critical length of the residual queue, is greater than... If it... [Pg.339]
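
A minimal numerical sketch of these characteristics, assuming for illustration that the fitted density f(t) of the time until the critical residual-queue length is reached is a Weibull density; the shape and scale values are invented, not fitted to the Krakow data, and the hazard (failure intensity) is shown in place of the full renewal intensity, which would require solving the renewal equation.

```python
import numpy as np

# Illustrative Weibull density f(t) for the time (minutes) until the lane
# "fails", i.e. the residual queue first exceeds its critical length.
shape, scale = 1.8, 45.0                      # assumed, not fitted values

t = np.linspace(0.01, 180.0, 2000)
f = (shape / scale) * (t / scale) ** (shape - 1) * np.exp(-(t / scale) ** shape)

F = 1.0 - np.exp(-(t / scale) ** shape)       # failure (distribution) function F(t)
R = 1.0 - F                                   # reliability function R(t)
haz = f / R                                   # failure (hazard) intensity

# Mean time to failure: integral of R(t) dt (trapezoidal rule).
mttf = np.sum(0.5 * (R[1:] + R[:-1]) * np.diff(t))

for ti in (15, 45, 90):
    i = np.searchsorted(t, ti)
    print(f"t = {t[i]:5.1f} min  R = {R[i]:.3f}  F = {F[i]:.3f}  hazard = {haz[i]:.4f}/min")
print(f"approximate mean time to failure: {mttf:.1f} min")
```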

This immediately leads to a simpler measure which is similar to the Mean Time To (First) Failure (MTTF) in reliability theory, namely the Mean Time To (First) Hazard (MTTH), which can be defined by the simple Markov model in Figure 2. [Pg.52]
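
A small sketch of how such a mean time to first hazard can be computed from a simple Markov model; the two transient states and all rates below are illustrative assumptions, not the model of Figure 2.

```python
import numpy as np

# Mean Time To (first) Hazard from a small continuous-time Markov model.
# Transient states: 0 = normal operation, 1 = degraded operation.
# Absorbing state: hazard. All rates (per hour) are illustrative assumptions.
lam_01 = 1e-3    # normal   -> degraded
mu_10  = 1e-1    # degraded -> normal (recovery)
lam_0h = 1e-6    # normal   -> hazard
lam_1h = 1e-4    # degraded -> hazard

# Generator restricted to the transient states {0, 1}.
Q_T = np.array([
    [-(lam_01 + lam_0h),  lam_01           ],
    [  mu_10,            -(mu_10 + lam_1h) ],
])

# Expected times to absorption m satisfy  -Q_T @ m = 1  (vector of ones).
m = np.linalg.solve(-Q_T, np.ones(2))

print(f"MTTH starting from normal operation   : {m[0]:.3e} h")
print(f"MTTH starting from degraded operation : {m[1]:.3e} h")
```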

As mentioned above, it is experimentally challenging to directly probe the properties of isolated polyelectrolyte chains, such as their sizes, counterion distributions, and electric potential variations inside and outside the coils. These quantities are sometimes deduced from measurements of other quantities, such as the electrophoretic mobility. The interpretation of data in these indirect measurements also depends heavily on reliable theories. The theoretical... [Pg.92]

Data Acquisition. In theory, the acquisition is simple: focus the smallest possible spot on one detector and measure the signal from that detector and the signals Sj from a few surrounding detectors. In practice, reliable crosstalk measurements are difficult. As mentioned elsewhere, it is difficult to know when the spot is well focused and when it is centered on a particular detector. It is helpful to have automated systems that allow us to acquire data in a methodical way for many scans as a function of position x, y on the array and for many lens-to-detector distances. We can then select from those data the scans that yield the sharpest off-to-on transitions and minimum crosstalk. [Pg.365]
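
A sketch, on synthetic noiseless data, of the selection step described above: for each assumed lens-to-detector distance, build the on-detector signal versus spot position, score the sharpness of the off-to-on transition and the crosstalk, and keep the best scan. The signal model and the combined score are assumptions for illustration; a real system would read the detector signals from hardware.

```python
import numpy as np

def scan(distance_mm: float, x: np.ndarray):
    """Return (target-detector signal, neighbour-detector signal) versus spot
    position x for one lens-to-detector distance; best focus assumed at 10 mm."""
    blur = 0.05 + 0.4 * abs(distance_mm - 10.0)        # defocus softens the edge
    s_i = 1.0 / (1.0 + np.exp(-(x - 0.5) / blur))      # off-to-on transition
    s_j = np.full_like(x, 0.02 + 0.05 * abs(distance_mm - 10.0))  # spill-over
    return s_i, s_j

x = np.linspace(0.0, 1.0, 400)                         # spot position across detector
results = []
for d in np.arange(8.0, 12.5, 0.5):                    # lens-to-detector distances, mm
    s_i, s_j = scan(d, x)
    sharpness = np.max(np.gradient(s_i, x))            # steepest slope of the edge
    crosstalk = np.mean(s_j) / np.max(s_i)             # neighbour signal / peak signal
    results.append((d, sharpness, crosstalk))

# Keep the scan with the sharpest transition and lowest crosstalk
# (combined here into one ad-hoc score; a real analysis might inspect both).
best_d, best_sharp, best_xtalk = max(results, key=lambda r: r[1] - 10.0 * r[2])
print(f"selected distance: {best_d:.1f} mm, "
      f"sharpness {best_sharp:.2f}, crosstalk {best_xtalk:.3f}")
```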

Many methods have been used to determine the value of Ez on solid electrodes. The one that seems to be most reliable, and relatively easy to perform, is based on diffuse-double-layer theory. Measurement of the capacitance in dilute solutions (Cb < 0.01 M) should show a minimum at Ez, as seen in Eq. (8.10) and Figure 8.4. Lowering the concentration yields better defined minima. Modern instrumentation allows us to extend the measurement of capacitance to low concentrations of the electrolyte (cf. Sections 8.1.6 and 16.1), increasing the accuracy of the determination of Ez on solid electrodes. One should bear in mind, however, that the minimum in capacitance coincides with Ez only if a symmetrical electrolyte such as NaF is used. Asymmetric electrolytes of the type AmBn (where n ≠ m) also show minima in the plot of capacitance versus potential in dilute solutions, but the minima are shifted slightly from the PZC. [Pg.179]
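
A short numerical sketch of the diffuse-double-layer (Gouy-Chapman) capacitance that underlies this method, showing the minimum at Ez and how it becomes better defined on dilution; the permittivity and concentrations are illustrative, the compact-layer contribution is ignored, and the expression used is the standard Gouy-Chapman formula rather than a reproduction of Eq. (8.10).

```python
import numpy as np

# Gouy-Chapman diffuse-layer capacitance for a symmetrical z:z electrolyte:
#   C_d(E) = eps*eps0*kappa * cosh( zF(E - Ez) / (2RT) ),
#   kappa  = sqrt( 2 z^2 F^2 c / (eps*eps0*R*T) ),  c in mol/m^3.
# Illustrative parameters; the compact (Helmholtz) layer is ignored.

F = 96485.0          # C/mol
R = 8.314            # J/(mol K)
T = 298.15           # K
eps0 = 8.854e-12     # F/m
eps_r = 78.5         # relative permittivity of water (assumed)
z = 1                # symmetrical 1:1 electrolyte, e.g. NaF

def diffuse_capacitance(E, Ez, c_molar):
    c = c_molar * 1000.0                                   # mol/L -> mol/m^3
    kappa = np.sqrt(2 * z**2 * F**2 * c / (eps_r * eps0 * R * T))
    return eps_r * eps0 * kappa * np.cosh(z * F * (E - Ez) / (2 * R * T))

E = np.linspace(-0.2, 0.2, 401)                            # potential relative to Ez, V
for c in (0.1, 0.01, 0.001):                               # mol/L
    C = diffuse_capacitance(E, Ez=0.0, c_molar=c) * 100.0  # F/m^2 -> uF/cm^2
    i_min = np.argmin(C)
    print(f"c = {c:6.3f} M: minimum {C[i_min]:6.1f} uF/cm^2 at E - Ez = {E[i_min]:+.3f} V")
```

In this model the minimum capacitance scales as the square root of the concentration (roughly 7 µF/cm² at 1 mM and 25 °C for a 1:1 electrolyte), which is why dilution sharpens the dip at Ez.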

How many samples or measurements are required to ensure statistical accuracy is one of the most commonly asked questions in reverse engineering. This chapter will discuss this question by introducing the fundamental principles of statistics and their applications in data processing and analysis. Reliability theory is closely related to statistics but was developed independently. This chapter will also discuss the applications of reliability theory, which is critical to reverse engineering data processing and analysis in many cases. [Pg.209]
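
As a small illustration of the "how many samples" question, the sketch below applies the standard normal-approximation formula n = (z·σ/E)² for estimating a mean to within a margin of error E at a given confidence level; the numbers are invented and this is not a worked example from the book.

```python
import math
from statistics import NormalDist

def required_sample_size(sigma: float, margin: float, confidence: float = 0.95) -> int:
    """Number of measurements n = (z * sigma / margin)^2 needed so that the
    sample mean estimates the true mean within +/- margin, assuming a known
    (or previously estimated) standard deviation sigma and a normal approximation."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2.0)   # two-sided critical value
    return math.ceil((z * sigma / margin) ** 2)

# Example: measurements scatter with sigma ~ 0.8 units; we want the mean
# known to within +/- 0.2 units at 95 % confidence.
print(required_sample_size(sigma=0.8, margin=0.2))      # about 62 measurements
```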

