It is possible to reverse this reaction between hydrogen and oxygen. An electrical current causes water molecules to decompose into hydrogen gas and oxygen gas. These gases can be captured and their amounts measured. Repeated observations show that the two substances are always produced in the same mass ratio: every 9.0 g of decomposed water produces 1.0 g of hydrogen and 8.0 g of oxygen. [Pg.64]
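As a minimal sketch, the fixed 1:8 hydrogen-to-oxygen mass ratio above can be expressed as a small calculation; the function name is illustrative, not from any source:

```python
# Sketch: mass conservation in water electrolysis, using the 1:8
# hydrogen-to-oxygen mass ratio quoted in the text.
def decomposition_masses(water_g):
    """Return (hydrogen_g, oxygen_g) for a given mass of decomposed water."""
    hydrogen = water_g * 1.0 / 9.0
    oxygen = water_g * 8.0 / 9.0
    return hydrogen, oxygen

h, o = decomposition_masses(9.0)
print(h, o)          # 1.0 8.0
print(h + o)         # 9.0 -> mass is conserved
```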

Static measurements (stationary solution). After a coulometric pulse of specific magnitude, the resulting pH step is measured. Repeating the experiment with different pulses allows the construction of the titration curve. [Pg.350]

Repeatability is the closeness of the agreement between the results of successive measurements of the same measurand carried out under the same conditions of measurement. Repeatability conditions include the same measurement procedure, the same observer, the same measuring instrument used under the same conditions, the same location, and repetition over a short period of time (ISO 3534-1 [1993]). [Pg.204]

At this point, it is important to make a distinction between random errors and systematic errors. When the same variable is measured repeatedly, we usually get a series of measurements. Because a number of minor uncontrollable factors vary at random, the measurement errors are normally distributed, in this case about zero. [Pg.131]
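A minimal simulation illustrates this point; the true value and standard deviation below are hypothetical, chosen only to show that the simulated errors scatter normally about zero:

```python
# Sketch: random measurement errors distributed about zero.
import random
import statistics

random.seed(0)
true_value = 24.30   # hypothetical true value of the measurand
sigma = 0.05         # hypothetical repeatability standard deviation

measurements = [true_value + random.gauss(0.0, sigma) for _ in range(1000)]
errors = [m - true_value for m in measurements]

print(round(statistics.mean(errors), 3))   # close to 0
print(round(statistics.stdev(errors), 3))  # close to sigma
```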

The subsequent development of low-cost laser diode sources, improved electronic detection, and new probe fabrication techniques has now opened up this field to higher-temperature measurement. The result is an alexandrite fluorescence-lifetime-based fiber optic thermometer system,(38) with a visible laser diode as the excitation source, which has achieved a measurement repeatability of 1 °C from room temperature to 700 °C using the lifetime measurement technique. [Pg.361]

On the other hand, it should be emphasized that such basic analytical properties as precision, sensitivity and selectivity are influenced by the kinetic connotations of the sensor. Measurement repeatability and reproducibility depend largely on the constancy of the hydrodynamic properties of the continuous system used and on whether or not the chemical and separation processes involved reach complete equilibrium (otherwise, measurements made under unstable conditions may result in substantial errors). Reaction rate measurements boost selectivity as they provide differential (incremental) rather than absolute values, so any interferences from the sample matrix are considerably reduced. Because flow-through sensors enable simultaneous concentration and detection, they can be used to develop kinetic methodologies based on the slope of the initial portion of the transient signal, thereby indirectly increasing the sensitivity without the need for the large sample volumes typically used by classical preconcentration methods. [Pg.76]

The apparent volume of distribution will be reasonably consistent if measured repeatedly in the... [Pg.133]

As I have shown, the response given by the model equation (3.5) has an error term that includes the lack of fit of the model and dispersion due to the measurement (repeatability). For the three-factor example discussed above, there are four estimates of each effect; in general, the number of estimates is equal to half the number of runs. The variance of these estimated effects gives some indication of how well the model and the measurement bear up when experiments are actually done, if this value can be compared with an expected variance due to measurement alone. There are two ways to estimate measurement repeatability. First, if there are repeated measurements, then the standard deviation of these replicates (s) is an estimate of the repeatability. For N/2 estimates of the factor effect, the standard deviation of the effect is... [Pg.88]
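A minimal sketch of the first approach: estimate s from hypothetical replicate runs, then convert it to a standard deviation for an effect. The truncated text does not give the formula; the form s_effect = 2s/√N used below is the common textbook expression for a two-level design with N runs, and the data are invented:

```python
# Sketch: repeatability from replicates, then the standard deviation of an
# effect via s_effect = 2*s/sqrt(N) (common two-level factorial formula;
# the truncated source may state it differently).
import math
import statistics

replicates = [12.1, 12.4, 11.9, 12.2]   # hypothetical repeated measurements
s = statistics.stdev(replicates)        # repeatability estimate

N = 8                                   # total runs in the hypothetical design
s_effect = 2 * s / math.sqrt(N)
print(round(s, 3), round(s_effect, 3))  # 0.208 0.147
```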

The aggregate outcomes of one position measurement repeated on many particles in the state corresponding to a wave function can be predicted from . [Pg.7]

The results of an experimental measurement repeated four times were the following: 24.24, 24.36, 24.87, 24.20, 24.10. Verify whether the third value, which seems high compared to the others, should be considered a value outside the acceptable range. [Pg.398]
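One standard way to answer this is Dixon's Q-test; a minimal sketch for this data set follows, taking the critical value for n = 5 at 95% confidence as 0.710 (from standard Q-test tables, not from the text):

```python
# Sketch of Dixon's Q-test for the suspect value 24.87.
results = [24.24, 24.36, 24.87, 24.20, 24.10]

data = sorted(results)            # [24.10, 24.20, 24.24, 24.36, 24.87]
gap = data[-1] - data[-2]         # suspect value minus its nearest neighbour
spread = data[-1] - data[0]       # total range
Q = gap / spread

Q_crit = 0.710                    # n = 5, 95% confidence (standard table)
print(round(Q, 3), Q > Q_crit)    # 0.662 False
```

Since Q = 0.662 is below the critical value, the test does not justify rejecting 24.87 at the 95% confidence level.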

Measure the concentration of analyte in several identical aliquots (portions). The purpose of replicate measurements (repeated measurements) is to assess the variability (uncertainty) in the analysis and to guard against a gross error in the analysis of a single aliquot. The uncertainty of a measurement is as important as the measurement itself, because it tells us how reliable the measurement is. If necessary, use different analytical methods on similar samples to make sure that all methods give the same result and that the choice of analytical method is not biasing the result. You may also wish to construct and analyze several different bulk samples to see what variations arise from your sampling procedure. [Pg.8]
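As a minimal sketch (with hypothetical concentration values), replicate results are commonly summarized as a mean with a standard deviation, the latter quantifying the uncertainty the passage describes:

```python
# Sketch: summarizing replicate analyses of identical aliquots.
import math
import statistics

aliquots = [0.512, 0.508, 0.515, 0.510]   # hypothetical analyte conc., mg/L

mean = statistics.mean(aliquots)
s = statistics.stdev(aliquots)            # variability between replicates
sem = s / math.sqrt(len(aliquots))        # standard error of the mean

print(f"{mean:.4f} +/- {s:.4f} mg/L (n={len(aliquots)})")
```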

Initiate the enzyme-catalyzed reaction by adding 0.15 mL of the 1.0 mg/mL intact mitochondria or SMP suspension. Mix well and record the ΔA250 for 3-5 minutes. If the reaction rate is too slow or too fast to measure, repeat using more or less protein. Be sure the total volume of... [Pg.367]

C. butyricum was immobilized in polyacrylamide gel membrane and the immobilized whole cells were fixed on the anode. A linear relationship was obtained between the steady-state current and the BOD from 0 to 250 ppm. The steady-state current was reproducible within 7% relative error when the standard solution (50 mg/l glucose, 50 mg/l glutamate) was measured repeatedly. The standard deviation was 2 ppm. [Pg.340]

Robust errors result from a disruption of the basic measurement conditions, researcher error, etc. The researcher must check for the appearance of a robust error, which shows up as a measured value drastically different from the others. This error may be avoided if another researcher, unaware of the earlier measurements, repeats them. The same effect may be achieved when the same researcher repeats the measurements after enough time has passed to have forgotten the results of the first ones. A result has to be rejected if a robust error has been discovered. [Pg.191]

There are several new in situ soil carbon measurement techniques undergoing field testing. Some devices, such as those using a neutron generator, measure total C atoms to a known depth, and as a result do not require a bulk density measurement (L. Wielopolski, personal communication, March 2007). In addition, such in situ and noninvasive techniques would allow the same location to be measured repeatedly. However, for the time being, and likely for many applications in the future, the importance of carefully measuring bulk density cannot be overstated. [Pg.241]

The simplest way to determine a rate law is the method of initial rates. If a reaction is slow enough, it can be allowed to proceed for a short time, Δt, and the change in a reactant or product concentration measured. By repeating the experiment at different initial concentrations, the concentration dependence of the rate can be deduced. [Pg.183]
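A minimal sketch of this deduction, using two hypothetical runs in which doubling [A] quadruples the initial rate:

```python
# Sketch of the method of initial rates: deduce the order in A from two
# runs that differ only in the initial concentration of A.
import math

conc = [0.10, 0.20]        # hypothetical initial [A], mol/L
rate = [4.0e-4, 1.6e-3]    # hypothetical measured initial rates, mol/(L*s)

# rate = k*[A]^n  =>  n = log(rate2/rate1) / log(conc2/conc1)
n = math.log(rate[1] / rate[0]) / math.log(conc[1] / conc[0])
print(round(n))            # 2 -> second order in A
```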

Case A If we have a data set of n pairs of measurements which are essentially the same, although subject to random error (the same measurement repeated), then x̄ and ȳ are the mean, or average, values of the sets. [Pg.47]

Statistical tests on the significance of the above coefficients are possible if we have estimates of the experimental variance from the past, or if we can repeat some experiments or even the whole design. Alternatively, the variance can be computed if the so-called central point of a design is (sampled and) measured repeatedly. This is a point midway between all factor levels. In Fig. 3-1 it is the point between location 1 and location 2 and between depth 1 and depth 2. The meaning of another term in use, the zero level, will become clear after we have learned how to construct general designs. [Pg.74]
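A minimal sketch of the central-point approach, with hypothetical replicate responses:

```python
# Sketch: experimental variance estimated from repeated measurements at
# the central point of a design (hypothetical responses).
import statistics

centre_runs = [7.2, 7.5, 7.1, 7.4, 7.3]   # replicates at the central point
variance = statistics.variance(centre_runs)
print(round(variance, 4))                  # 0.025
```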

The responsibility for avoidance or elimination of blunders lies squarely on the shoulders of the analysts applying the traceability protocol. The best chance of finding and correcting blunders is by introducing some redundancy into measurements. Repeat measurements by the same analysts, by other analysts in other laboratories, or by other methods are potentially effective in revealing previously undetected blunders. Most analysts forestall potential blunders by wisely asking colleagues to independently verify their data and calculations up to the end result. [Pg.21]

If, in a real case, the same analyst reported results of a measurement repeated over a short time interval as 50 and 56 pg/mL, there would be a question over the validity of these results as they are very unlikely to have differed by 6 pg/mL as a result of random variability. [Pg.296]
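A common way to formalize "very unlikely" is the repeatability limit r = 2.8·s (the factor 2.8 ≈ 1.96·√2 for 95% confidence on the difference of duplicates). A minimal sketch follows; the repeatability standard deviation s is assumed, as the text does not give one:

```python
# Sketch: checking duplicate results against a repeatability limit
# r = 2.8*s (s below is an assumed value, not from the text).
s = 1.5                 # assumed repeatability std dev, same units as results
r = 2.8 * s             # repeatability limit at ~95% confidence

x1, x2 = 50.0, 56.0     # the two reported results
diff = abs(x1 - x2)
print(diff > r)         # True -> the difference of 6 units is suspect
```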

test measures what it is supposed to measure. A test can have high reliability and low validity if it measures repeatedly the wrong thing. [Pg.77]

To conclude this section: as the discussion above makes clear, different causes exist for the spread of the specific surface area measured in an adsorption experiment. From repeated measurements of test samples, the relative error of BET surface area measurements is usually estimated at around 20% [5]. For samples with very large surface areas, the relative error can be as high as 30% [2]. [Pg.303]

Equation 5.1 and Equation 5.3 assume that the instrument response provides a value of zero when the analyte concentration is zero. In this respect, the above calibration model forces the calibration line through the origin, i.e., when the instrument response is zero, the estimated concentration must likewise equal zero. In such circumstances, the instrument response is frequently set to zero by subtracting the blank sample response from the calibration sample readings. The instrument response for the blank is subject to errors, as are all the calibration measurements. Repeated measures of the blank would give small, normally distributed, random fluctuations about zero. However, for many samples it is difficult if not impossible to obtain a blank sample that matrix-matches the samples and does not contain the analyte. [Pg.110]
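A minimal sketch of such a zero-intercept calibration, using hypothetical blank-corrected data and the standard least-squares estimate for a line forced through the origin, k = Σxy/Σx²:

```python
# Sketch: calibration line forced through the origin (y = k*x), so a
# zero instrument response maps to zero estimated concentration.
xs = [1.0, 2.0, 4.0, 8.0]    # hypothetical standard concentrations
ys = [2.1, 3.9, 8.2, 15.8]   # hypothetical blank-corrected responses

k = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def estimate_concentration(response):
    """Zero response gives zero concentration by construction."""
    return response / k

print(round(k, 3))                   # slope of the calibration line
print(estimate_concentration(0.0))   # 0.0
```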

© 2024 chempedia.info