Big Chemical Encyclopedia

Chemical substances, components, reactions, process design ...


Repeated measures approach

Pilcher JJ, Ott ES. The relationships between sleep and measures of health and wellbeing in college students: a repeated measures approach. Behav Med 1998;23(4):170-178. [Pg.207]

The planning of measurements is the first step in obtaining information by the measurement approach. Why is it essential to make plans before any action is taken? Could one not just take the instruments and carry out the monitoring? In very simple situations this approach might provide a satisfactory result, but it could result in failure as well. In complicated situations, failure in terms of missing information would be likely. Hence, in order to obtain a sufficient quantity of high-quality information, to avoid the need to repeat any measurement or monitoring, and thus to save time and effort, the planning of measurements is essential. [Pg.1120]

The precondition for the use of the normal distribution in estimating the random error is that adequate, reliable estimates are available for the parameters μ and σ. In the case of a repeated measurement, the estimates are calculated using Eqs. (12.1) and (12.3). When the sample size increases, the estimates m and s approach the parameters μ and σ. A rule of thumb is that when n ≥ 30, the normal distribution can be used. [Pg.1127]
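
The convergence of the sample estimates toward the population parameters can be sketched numerically. The fragment below is an illustration only, with made-up parameters (μ = 10, σ = 2); Eqs. (12.1) and (12.3) referenced above correspond to the usual sample mean and sample standard deviation.

```python
import random
import statistics

# Illustrative sketch with hypothetical parameters: as the sample
# size n grows, the sample mean m and sample standard deviation s
# converge to the population parameters mu and sigma.
random.seed(42)
mu, sigma = 10.0, 2.0

for n in (5, 30, 1000):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    m = statistics.mean(sample)    # sample mean, cf. Eq. (12.1)
    s = statistics.stdev(sample)   # sample standard deviation, cf. Eq. (12.3)
    print(f"n={n:5d}  m={m:6.3f}  s={s:6.3f}")
```

Around n = 30, the conventional threshold quoted above, the estimates are already usefully close to μ and σ for many practical purposes.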

Usually there is no opportunity to repeat the measurements to determine the experimental variance or standard deviation. This is the most common situation encountered in field measurements. Each measurement is carried out only once due to restricted resources, and because field-measured quantities are often unstable, repetition to determine the spread is not justified. In such cases prior knowledge gained in a laboratory with the same or a similar meter and measurement approach could be used. The second alternative is to rely on the specifications given by the instrument manufacturer, although instrument manufacturers do not normally specify the risk level related to the confidence limits they are giving. [Pg.1130]

The title implies that in this first chapter techniques are dealt with that are useful when the observer concentrates on a single aspect of a chemical system, and repeatedly measures the chosen characteristic. This is a natural approach, first because the treatment of one-dimensional data is definitely easier than that of multidimensional data, and second, because a useful solution to a problem can very often be arrived at in this manner. [Pg.13]

Decay of the nuclide itself. The conceptually simplest approach is to take a known quantity of the nuclide of interest, P, and repeatedly measure it over a sufficiently long period. The observed decrease in activity with time provides the half-life to an acceptable precision, and it was this technique that was originally used to establish the concept of half-lives (Rutherford 1900). Most early attempts to assess half-lives, such as that for Th depicted on the front cover of this volume, followed this method (Rutherford and Soddy 1903). This approach may use measurement of either the activity of P or the number of atoms of P, although the former is more commonly used. Care must be taken that the nuclide is sufficiently pure so that, for instance, no parent of P is admixed, allowing continued production of P during the experiment. The technique is obviously limited to those nuclides with sufficiently short half-lives that decay can readily be measured in a realistic timeframe. In practice, the longest-lived isotopes which can be assessed in this way have half-lives of a few decades (e.g., ²¹⁰Pb; Merritt et al. 1957). [Pg.15]
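
The decay-counting idea above can be sketched as a log-linear fit: since A(t) = A₀·exp(−λt), a least-squares fit of ln A versus t recovers −λ, and hence the half-life ln 2 / λ. The numbers below are hypothetical (noise-free activities, a nominal 22.3-year half-life); a real determination would fit many noisy counts.

```python
import math

# Hypothetical sketch of half-life determination by repeated activity
# measurement: A(t) = A0 * exp(-lambda * t), so ln A is linear in t.
half_life_true = 22.3                       # years (illustrative value)
lam = math.log(2) / half_life_true
times = [0, 5, 10, 15, 20, 25]              # measurement epochs, years
activity = [100.0 * math.exp(-lam * t) for t in times]  # noise-free demo data

# Least-squares slope of ln(A) versus t gives -lambda.
n = len(times)
xm = sum(times) / n
ym = sum(math.log(a) for a in activity) / n
slope = (sum((t - xm) * (math.log(a) - ym) for t, a in zip(times, activity))
         / sum((t - xm) ** 2 for t in times))
half_life_est = math.log(2) / -slope
print(f"estimated half-life: {half_life_est:.1f} years")
```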

There are two ways to approach this issue, and both should be investigated in any questionable measurement. First, it is often assumed that any absorption or other measurement that is three or four times larger than the noise is real. This is a good start. However, there are other more scientific approaches to this problem. If repeated measurements on different subsamples or aliquots produce the same absorption or measurement, with minor variations, then it is probably a real result and not noise. On the other hand, if, during repeated measurements, absorption features occur in exactly the same location and have exactly the same characteristics such as shape and area under the peak, then they are probably not due to the sample because some variation in measurement always occurs. This type of problem is usually a result of instrument malfunction and this must be investigated. [Pg.294]
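
The "three or four times larger than the noise" rule of thumb can be sketched as a simple signal-to-noise check. The baseline values and candidate peak below are made up for illustration.

```python
import statistics

# Hypothetical sketch of the rule of thumb above: a feature is
# provisionally treated as real when it exceeds three times the
# standard deviation of the baseline noise.
baseline = [0.002, -0.001, 0.003, 0.000, -0.002, 0.001, -0.003, 0.002]
noise = statistics.stdev(baseline)       # estimate of the noise level
peak = 0.021                             # candidate absorption value
is_real = peak > 3 * noise
print(f"noise = {noise:.4f}, threshold = {3 * noise:.4f}, real? {is_real}")
```

As the passage above stresses, such a threshold is only a starting point; repeated measurements on different subsamples remain the more scientific check.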

However, the GUM [Guide to the Expression of Uncertainty in Measurement] approach (ISO 1993a), which leads to the verbose statement concerning expanded uncertainty quoted above, might not have been followed, and all the analyst wants to do is say something about the standard deviation of replicates. The best that can be done is to say what fraction of the confidence intervals of repeated experiments will contain the population mean. The confidence interval in terms of the population parameters is calculated as... [Pg.34]
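
A hedged sketch of the replicate-based statement above (not the GUM procedure itself): a stated fraction of such intervals from repeated experiments will contain the population mean. The replicate values are hypothetical, and the two-sided 95% Student-t critical value for 4 degrees of freedom (2.776) is taken from standard tables.

```python
import statistics

# Hypothetical replicate results; the 95% confidence interval for the
# mean is  m +/- t(0.975, n-1) * s / sqrt(n).
replicates = [10.12, 10.08, 10.15, 10.11, 10.09]
n = len(replicates)
m = statistics.mean(replicates)
s = statistics.stdev(replicates)
t_crit = 2.776                        # t(0.975, df = n - 1 = 4), from tables
half_width = t_crit * s / n ** 0.5
print(f"mean = {m:.3f}, 95% CI half-width = {half_width:.3f}")
```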

The last of these approaches is called imputation of missing values. As Piantadosi (2005) commented, while this approach sounds a lot like making up data, when done properly it may be the most sensible strategy. While techniques for addressing missing data can be technically difficult, one commonly used, simple imputation method is called last observation carried forward (LOCF). In a study with repeated measurements over time, the most recent observation replaces any subsequent missing observations (Piantadosi, 2005, see also Molenberghs and Kenward, 2007). [Pg.168]
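
The LOCF method described above can be sketched in a few lines; the subject's visit values below are hypothetical, with None marking a missing observation.

```python
# Minimal sketch of last observation carried forward (LOCF).
def locf(series):
    """Replace each missing value with the most recent observed one."""
    filled, last = [], None
    for value in series:
        if value is not None:
            last = value
        filled.append(last)
    return filled

# Hypothetical repeated measurements for one subject over five visits:
visits = [142.0, 138.5, None, 135.0, None]
print(locf(visits))   # -> [142.0, 138.5, 138.5, 135.0, 135.0]
```

Note that a value missing before any observation exists has nothing to carry forward and stays missing, which is one reason LOCF is only a simple imputation strategy.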

The law of large numbers is fundamental to probabilistic thinking and stochastic modeling. Simply put, if a random variable with several possible outcomes is repeatedly measured, the frequency of a possible outcome approaches its probability as the number of measurements increases. The weak law of large numbers states that the average of N identically distributed independent random variables approaches the mean of their distribution. [Pg.265]
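
The statement above can be illustrated with a small simulation (a fair six-sided die, so the outcome probability is 1/6): the observed frequency drifts toward the probability as the number of measurements grows.

```python
import random

# Sketch of the (weak) law of large numbers: the observed frequency of
# an outcome approaches its probability as N increases.
random.seed(0)
p_six = 1 / 6                       # probability of rolling a six
for n in (100, 10_000, 1_000_000):
    freq = sum(1 for _ in range(n) if random.randint(1, 6) == 6) / n
    print(f"N={n:>9}: observed {freq:.4f} vs p={p_six:.4f}")
```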

Carry out repeat measurements for a particular combination of variables, to determine the repeatability of the approach. [Pg.289]

Accident dosimetry using biological systems in which the quantification of chromosome aberrations or the ratios between different blood proteins can give an indication of exposure, is hampered by the individual characteristics of the victim (i.e. general health, diet etc.), and by the complexity of the techniques. These problems can be avoided by adopting a more physical approach, and both chemiluminescence and thermoluminescence of possible dosimeters, for example, have been found to be useful. The drawbacks here concern the solubility with chemiluminescence, the amount of sample required for thermoluminescence, and the impossibility of taking repeated measurements with either system. In contrast, electron spin resonance (ESR) spectroscopy is not subject to these constraints. Measurement is made directly on the sample, very small amounts of material can be used, and repeated measurements are possible... [Pg.299]

A second approach is to consider the F-test for comparison of variance at each frequency. As there were 26 repeated measurements, the degrees of freedom for the two calculations of variance is ν = 26 − 1 = 25. At the α = 0.05 level, a value of 1.955 is obtained from Table 3.4, and a value of 2.604 is obtained at the α = 0.01 level. These critical values can be... [Pg.54]
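
The comparison above can be sketched as follows. The two measurement series are hypothetical (chosen so the variance ratio works out to exactly 9), and the critical values are those quoted from Table 3.4 of the source for (25, 25) degrees of freedom.

```python
import statistics

# Sketch of the F-test for comparing two variances, each estimated
# from 26 repeated measurements (nu = 25 degrees of freedom apiece).
f_crit = {0.05: 1.955, 0.01: 2.604}   # critical values quoted in the source

set_a = [10.0 + (0.1 if i % 2 else -0.1) for i in range(26)]   # low spread
set_b = [10.0 + (0.3 if i % 2 else -0.3) for i in range(26)]   # high spread
var_a, var_b = statistics.variance(set_a), statistics.variance(set_b)
f_stat = max(var_a, var_b) / min(var_a, var_b)   # larger variance on top

for alpha, crit in sorted(f_crit.items()):
    verdict = "significantly different" if f_stat > crit else "comparable"
    print(f"alpha = {alpha}: F = {f_stat:.1f} vs {crit} -> variances {verdict}")
```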

The third approach is to use experimental methods to assess the error structure. Independent identification of the error structure is the preferred approach, but even minor nonstationarity between repeated measurements introduces a significant bias error in the estimation of the stochastic variance. Dygas and Breiter report on the use of intermediate results from a frequency-response analyzer to estimate the variance of the real and imaginary components of the impedance. Their approach allows assessment of the variance of the stochastic component without the need for replicate experiments. The drawback is that their approach cannot be used to assess bias errors and is specific to a particular commercial impedance instrument. Van Gheem et al. have proposed a structured multi-sine... [Pg.419]

The biological activity satisfies the second and third requirements, since each measurement is done independently (see, however, below). Upon repeated measurements the values will be normally distributed and will have an equal variance. The independent variables in the Hansch approach are obtained from experimental measurements and therefore contain an experimental error. They cannot be considered as fixed variates. The experimental error in these parameters, however, is usually much smaller than the error in the dependent variable; they can therefore be treated as fixed variates. The complications arising from correlations between observations that include experimental errors have been thoroughly analyzed (224). [Pg.71]

Another important characteristic is that of precision. This becomes evident only when repeat measurements are made, because precision refers to the amount of agreement between repeated measurements (the standard deviation around the mean estimate). Precision is subject to both random and systematic errors. In industrial quality control and chemical analysis, Shewhart Control Charts provide a means of assessing the precision of repeat measurements but these approaches are rarely used in ecotoxicity testing. The effect is that we generally understand little about either the accuracy or the precision of most bioassays. [Pg.46]
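
A minimal Shewhart-style sketch of the precision assessment mentioned above, with made-up quality-control data: repeat measurements are flagged when they fall outside the conventional mean ± 3 standard deviation control limits.

```python
import statistics

# Hypothetical QC series; control limits are set from an in-control
# baseline (the first six measurements here).
measurements = [5.1, 4.9, 5.0, 5.2, 4.8, 5.1, 6.9, 5.0]
m = statistics.mean(measurements[:6])
s = statistics.stdev(measurements[:6])
ucl, lcl = m + 3 * s, m - 3 * s               # upper/lower control limits
out_of_control = [x for x in measurements if not lcl <= x <= ucl]
print(f"limits: [{lcl:.2f}, {ucl:.2f}], flagged: {out_of_control}")
```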

In most cases, atorvastatin had no effect on interleukin levels following one year of treatment in AD subjects. Use of a repeated measures statistical analysis approach suggested that IL-3 and IL-13 were significantly increased in the atorvastatin-treated AD subjects compared to placebo controls. IL-13 was first described as a T-cell antigen with anti-inflammatory activities that inhibit type-1 dominated cell-mediated immune responses [162]. IL-13 is functionally related to IL-4, with some distinct activities that have been reviewed in detail. Studies have shown that IL-13 has antitumor and... [Pg.71]

Using a general linear model (GLM) approach to repeated measures data with time treated as a categorical variable is limited in two respects. First, such a model... [Pg.198]


See other pages where Repeated measures approach is mentioned: [Pg.13]    [Pg.25]    [Pg.27]    [Pg.56]    [Pg.67]    [Pg.87]    [Pg.90]    [Pg.124]    [Pg.155]    [Pg.163]    [Pg.168]    [Pg.183]    [Pg.184]    [Pg.199]    [Pg.231]    [Pg.254]    [Pg.268]    [Pg.317]    [Pg.321]    [Pg.342]    [Pg.359]    [Pg.468]    [Pg.689]    [Pg.717]    [Pg.907]





© 2024 chempedia.info