
Data analysis error sources

When designing and evaluating an analytical method, we usually consider experimental error at three separate stages. First, before beginning an analysis, errors associated with each measurement are evaluated to ensure that their cumulative effect will not limit the utility of the analysis. Errors known or believed to affect the result can then be minimized. Second, during the analysis the measurement process is monitored to ensure that it remains under control. Finally, at the end of the analysis the quality of the measurements and the result are evaluated and compared with the original design criteria. This chapter is an introduction to the sources and evaluation of errors in analytical measurements, the effect of measurement error on the result of an analysis, and the statistical analysis of data. [Pg.53]
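
As a rough illustration of the first stage, the sketch below (Python, with invented uncertainty values for a hypothetical procedure) combines the relative standard uncertainties of the individual measurements in quadrature to check whether their cumulative effect stays within a design criterion.

```python
# Minimal sketch (hypothetical numbers): propagating individual measurement
# uncertainties to estimate their cumulative effect on a calculated result.
import math

# Relative standard uncertainties assumed for each step of a hypothetical
# procedure (not taken from the cited text).
u_rel = {
    "sample mass": 0.2e-2,   # 0.2 %
    "dilution":    0.1e-2,   # 0.1 %
    "instrument":  0.5e-2,   # 0.5 %
}

# For a result formed from multiplications/divisions, independent relative
# uncertainties combine approximately in quadrature.
u_result = math.sqrt(sum(u**2 for u in u_rel.values()))
print(f"combined relative uncertainty: {u_result:.2%}")

# Compare against a design criterion, e.g. the result must be known to 1 %.
assert u_result < 1e-2, "cumulative error would limit the utility of the analysis"
```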

A final source of variation in microarray experiments derives from measurement errors. Measurement errors may occur during image acquisition and normalization, or during the multifactorial data analysis required to extract biological relevance from the collected data. The effect of measurement error can be minimized by ensuring consistency in all aspects of microarray experimentation. If possible, experiments should be performed by the same technician, and subsequent data analyses should be applied to all datasets consistently. [Pg.395]
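
A minimal sketch of the "consistent data analysis" point: the same normalization routine, with the same settings, applied to every array. The median-centering scheme and the intensity values are illustrative assumptions, not a recommendation of a particular microarray pipeline.

```python
# Minimal sketch: applying the same normalization step to every dataset so
# that differences introduced by data handling are not mistaken for biology.
import numpy as np

def median_center(log_intensities: np.ndarray) -> np.ndarray:
    """Subtract the array-wide median from log-scale intensities."""
    return log_intensities - np.median(log_intensities)

datasets = {
    "array_1": np.log2(np.array([120.0, 450.0, 980.0, 60.0])),
    "array_2": np.log2(np.array([135.0, 400.0, 1010.0, 75.0])),
}

# The same function, with the same settings, is applied to all datasets.
normalized = {name: median_center(vals) for name, vals in datasets.items()}
for name, vals in normalized.items():
    print(name, np.round(vals, 3))
```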

The primary source of data interpretation error in elemental analysis is laboratory contamination affecting the method blank and the samples. A method blank is a volume of analyte-free water prepared and analyzed in the same manner as the samples; it is also called an analytical blank or a preparation blank. [Pg.236]
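
A minimal sketch of how a method blank result might be used to screen a batch for laboratory contamination; the concentrations, the reporting limit, and the "10x the blank" convention are illustrative assumptions rather than a prescribed rule.

```python
# Minimal sketch (hypothetical values): using a method blank to flag possible
# laboratory contamination before sample results are reported.
blank_result = 0.8        # analyte found in the method blank, e.g. ug/L
reporting_limit = 2.0     # laboratory reporting limit, ug/L
samples = {"S1": 15.2, "S2": 1.9, "S3": 40.7}

# One common convention: contamination is a concern when the blank exceeds
# some fraction of the reporting limit, and sample results close to the blank
# level are qualified.
if blank_result > 0.5 * reporting_limit:
    print("method blank contamination: review the batch")

for name, value in samples.items():
    flag = " (< 10x blank, qualify result)" if value < 10 * blank_result else ""
    print(f"{name}: {value} ug/L{flag}")
```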

The statistical submodel characterizes the pharmacokinetic variability of the mAb and includes the influence of random (that is, not quantifiable or uncontrollable) factors. If multiple doses of the antibody are administered, three hierarchical components of random variability can be defined: inter-individual variability, inter-occasional variability, and residual variability. Inter-individual variability quantifies the unexplained difference of the pharmacokinetic parameters between individuals. If data are available from different administrations to one patient, inter-occasional variability can be estimated as the random variation of a pharmacokinetic parameter (for example, CL) between the different administration periods. For mAbs, this was first introduced in the sibrotuzumab data analysis. In order to individualize therapy based on concentration measurements, it is a prerequisite that inter-occasional variability (variability within one patient at multiple administrations) is lower than inter-individual variability (variability between patients). Residual variability accounts for model misspecification, errors in documentation of the dosage regimen or blood sampling time points, assay variability, and other sources of error. [Pg.85]
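
The hierarchy described above can be made concrete with a small simulation: one random effect drawn per patient (inter-individual), one per administration occasion (inter-occasional), and noise added per observation (residual). All parameter values below are invented for illustration and are not taken from the sibrotuzumab analysis.

```python
# Minimal sketch: simulating three hierarchical levels of random variability
# for a clearance parameter CL (illustrative values only).
import numpy as np

rng = np.random.default_rng(0)
CL_pop = 0.3                        # population clearance (L/h), assumed
omega_iiv, omega_iov = 0.3, 0.1     # SDs of log-normal inter-individual / inter-occasional effects
sigma_res = 0.15                    # proportional residual error SD

n_patients, n_occasions = 4, 3
for pat in range(n_patients):
    eta_iiv = rng.normal(0.0, omega_iiv)            # one draw per patient
    for occ in range(n_occasions):
        kappa_iov = rng.normal(0.0, omega_iov)      # one draw per occasion
        CL = CL_pop * np.exp(eta_iiv + kappa_iov)
        # residual error enters at the observation level, e.g. on a concentration
        obs_noise = rng.normal(0.0, sigma_res)
        print(f"patient {pat}, occasion {occ}: CL = {CL:.3f} L/h, residual ~ {obs_noise:+.3f}")
```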

To circumvent the above problems with mass action schemes, it is necessary to use a more general thermodynamic formalism based on parameters known as interaction coefficients, also called Donnan coefficients in some contexts (Record et al., 1998). This approach is completely general: it requires no assumptions about the types of interactions the ions may make with the RNA or the kinds of environments the ions may occupy. Although interaction parameters are a fundamental concept in thermodynamics and have been widely applied to biophysical problems, the literature on this topic can be difficult to access for anyone not already familiar with the formalism, and the application of interaction coefficients to the mixed monovalent-divalent cation solutions commonly used for RNA studies has received only limited attention (Grilley et al., 2006; Misra and Draper, 1999). For these reasons, the following theory section sets out the main concepts of the preferential interaction formalism in some detail, and outlines derivations of formulas relevant to monovalent ion-RNA interactions. Section 3 presents example analyses of experimental data, and extends the preferential interaction formalism to solutions of mixed salts (i.e., KCl and MgCl2). The section includes discussions of potential sources of error and practical considerations in data analysis for experiments with both mono- and divalent ions. [Pg.435]

Experimental procedures for quantitative mass spectrometric analysis usually involve several steps. The final error results from the accumulation of the errors in each step, with some steps contributing more to the overall error than others. A distinction can be made between the errors attributable to the spectrometer and its data treatment on the one hand, and the errors resulting from sample handling on the other. [Pg.265]
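
To make that separation concrete, the sketch below groups hypothetical relative errors into sample-handling and spectrometer/data-treatment contributions and combines each group (and the total) in quadrature, assuming the individual contributions are independent.

```python
# Minimal sketch: separating and combining the relative errors contributed by
# sample handling and by the spectrometer/data treatment (illustrative values).
import math

sample_handling = {"weighing": 0.3e-2, "extraction": 1.5e-2, "derivatization": 1.0e-2}
instrumental    = {"ion source drift": 0.8e-2, "peak integration": 0.5e-2}

def combined(rel_errors):
    # independent relative errors combine approximately in quadrature
    return math.sqrt(sum(e**2 for e in rel_errors.values()))

u_handling = combined(sample_handling)
u_instr = combined(instrumental)
u_total = math.sqrt(u_handling**2 + u_instr**2)
print(f"sample handling: {u_handling:.2%}, spectrometer: {u_instr:.2%}, total: {u_total:.2%}")
```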

A third source of uncertainty is the occurrence of rare or unique events during the measurement, such as an incorrect reading by the observer or a chance disturbance in the equipment. Such errors can often produce large deviations from the other readings and are hence termed "outliers". There are statistical tests for recognising such data points, but the occurrence of outliers can be a real problem in statistical data analysis. [Pg.297]
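
One such statistical test is Grubbs' test for a single suspected outlier; the sketch below applies it to a hypothetical set of replicate readings. Whether a flagged point may actually be rejected remains a separate judgement.

```python
# Minimal sketch: two-sided Grubbs test for one suspected outlier
# (the replicate readings are hypothetical).
import numpy as np
from scipy import stats

x = np.array([10.12, 10.15, 10.09, 10.11, 10.64, 10.13])  # one suspicious reading
n = len(x)
g = np.max(np.abs(x - x.mean())) / x.std(ddof=1)

alpha = 0.05
t = stats.t.ppf(1 - alpha / (2 * n), n - 2)
g_crit = (n - 1) / np.sqrt(n) * np.sqrt(t**2 / (n - 2 + t**2))

print(f"G = {g:.3f}, critical value = {g_crit:.3f}")
if g > g_crit:
    print("suspected outlier; investigate before discarding")
```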

Permeability, P, and diffusion coefficient, D, were obtained by performing a two-parameter least-squares fit of the experimental flux data to equation (2). The solubility, S, was obtained from the relationship P = DS. The methodology of these oxygen barrier measurements, the data analysis, and the sources of experimental error were previously described in detail elsewhere (7/). [Pg.49]
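
Equation (2) is not reproduced in this excerpt, so the sketch below assumes the familiar series solution for transient permeation through a film (a Crank-type expression) simply to show what a two-parameter (P, D) least-squares fit and the subsequent S = P/D step look like; the film thickness, driving force, and "data" are synthetic.

```python
# Minimal sketch: two-parameter (P, D) least-squares fit of transient flux data,
# assuming the standard series expression for permeation through a film.
import numpy as np
from scipy.optimize import curve_fit

L_film = 25e-4      # film thickness, cm (assumed)
dp = 1.0            # partial-pressure driving force (arbitrary units)

def flux(t, P_scaled, D_scaled):
    """Downstream flux; P in 1e-10 and D in 1e-9 units so fitted parameters are O(1)."""
    P, D = P_scaled * 1e-10, D_scaled * 1e-9
    n = np.arange(1, 51)[:, None]
    series = np.sum((-1.0)**n * np.exp(-D * n**2 * np.pi**2 * t / L_film**2), axis=0)
    return (P * dp / L_film) * (1.0 + 2.0 * series)

# synthetic "experimental" data generated from assumed true values plus noise
t_data = np.linspace(50.0, 5000.0, 60)                 # time, s
rng = np.random.default_rng(1)
y_data = flux(t_data, 2.0, 5.0) * (1 + 0.02 * rng.normal(size=t_data.size))

(P_fit, D_fit), _ = curve_fit(flux, t_data, y_data, p0=[1.0, 1.0])
S_fit = (P_fit * 1e-10) / (D_fit * 1e-9)               # solubility from P = D * S
print(f"P = {P_fit * 1e-10:.2e}, D = {D_fit * 1e-9:.2e}, S = {S_fit:.2e}")
```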

Another form of random matrix-related interference is the more rarely occurring gross error, which is typically seen in the context of immunoassays and relates to unexpected antibody interactions (see interference section). Such an error will usually show up as an outlier in method comparison studies. A well-known source is the occurrence of heterophilic antibodies. This is the background for the fact that outliers should be carefully considered and not just discarded from the data analysis procedure. Supplementary studies may help clarify such random matrix-related interferences and may provide specifications for the assay that limit its application in certain contexts (e.g., with regard to samples from certain patient categories). [Pg.370]
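
A minimal sketch of outlier screening in a method comparison study, using Bland-Altman-style limits of agreement on the paired differences; the values are hypothetical, and a flagged pair is a prompt for investigation (for example, for heterophilic antibody interference), not for silent deletion.

```python
# Minimal sketch: flagging a discordant pair in a method comparison study
# via limits of agreement on the differences (hypothetical paired results).
import numpy as np

method_a = np.array([3.1, 5.4, 7.8, 2.2, 6.0, 4.9, 8.1])
method_b = np.array([3.0, 5.6, 7.5, 2.4, 9.8, 5.0, 8.3])   # one discordant pair

diff = method_b - method_a
mean_d, sd_d = diff.mean(), diff.std(ddof=1)
lower, upper = mean_d - 1.96 * sd_d, mean_d + 1.96 * sd_d   # limits of agreement

for i, d in enumerate(diff):
    if not lower <= d <= upper:
        print(f"pair {i}: difference {d:+.2f} outside ({lower:.2f}, {upper:.2f}); "
              "investigate, do not simply discard")
```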

In the interpretation of the numerical results that can be extracted from Mössbauer spectroscopic data, it is necessary to recognize three sources of errors that can affect the accuracy of the data. These three contributions to the experimental error, which may not always be distinguishable from each other, can be identified as (a) statistical, (b) systematic, and (c) model-dependent errors. The statistical error, which arises from the fact that a finite number of observations are made in order to evaluate a given parameter, is the most readily estimated from the conditions of the experiment, provided that a Gaussian error distribution is assumed. Systematic errors are those that arise from factors influencing the absolute value of an experimental parameter but not necessarily the internal consistency of the data. Hence, such errors are the most difficult to diagnose and their evaluation commonly involves measurements by entirely independent experimental procedures. Finally, the model errors arise from the application of a theoretical model that may have only limited applicability in the interpretation of the experimental data. The errors introduced in this manner can often be estimated by a careful analysis of the fundamental assumptions incorporated in the theoretical treatment. [Pg.519]
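
For the statistical contribution, the usual starting point is counting statistics: the standard deviation of N accumulated counts is the square root of N under the Poisson (approximately Gaussian at large N) assumption. The count values below are illustrative.

```python
# Minimal sketch: statistical error from counting a finite number of events,
# assuming Poisson statistics (illustrative count values).
import math

counts_per_channel = 250_000
sigma = math.sqrt(counts_per_channel)            # Poisson standard deviation
rel_uncertainty = sigma / counts_per_channel     # = 1 / sqrt(N)
print(f"relative statistical uncertainty: {rel_uncertainty:.3%}")

# Quadrupling the accumulated counts halves the relative statistical error,
# while systematic and model-dependent errors are unaffected.
print(f"after 4x counts: {1 / math.sqrt(4 * counts_per_channel):.3%}")
```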

Review of the process prompted by analyst or operator concerns should involve the analyst, the supervisor, and possibly a specialist. This review may be able to distinguish among possible causes in chemical analysis such as matrix differences, method instability, bad reagents, or analyst error. In radiation detection, source problems, detector malfunction, and data analysis errors must be distinguished. The discussion should focus on what the analyst or operator remembers about the measurement series in question, in contrast to records for similar analyses; this should help determine when the problem was first observed and the differences in the process since then. [Pg.250]

The KB inversion process involves the extraction of KBIs from the available experimental data. The experimental data required for this process—derivatives of the chemical potentials, partial molar volumes, and the isothermal compressibility—are all generally obtained as derivatives of various properties of the solution. Obtaining reliable derivatives can be challenging and will depend on the quality of the source data and the fitting function. Unfortunately, the experimental data often appear without a reliable statistical analysis of the errors involved, and hence the quality of the data is difficult to determine. Matteoli and Lepori have performed a fairly rigorous analysis of a series of binary mixtures and concluded that, for systems under ambient conditions, the quality of the resulting KBIs is primarily determined by the chemical potential data, followed by the partial molar volume data, whereas errors in the compressibility data have essentially no effect on the KBI values (Matteoli and Lepori 1984). Excess chemical potentials are typically obtained from partial pressure data, either isothermal or bubble point determinations, and from osmotic pressure or even electrochemical measurements. The particle number... [Pg.32]
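
The sensitivity of the required derivatives to the fitting function can be illustrated with a toy example: the same noisy excess-property data fitted with polynomials of two different orders give noticeably different derivatives at a given composition. The data and fitting choices below are invented for illustration and are not taken from Matteoli and Lepori.

```python
# Minimal sketch: derivatives extracted from fitted data depend on the fitting
# function, which is one reason the quality of the source data matters so much.
import numpy as np

x = np.linspace(0.05, 0.95, 19)                        # mole fraction (assumed grid)
rng = np.random.default_rng(2)
y = x * (1 - x) * (1.2 - 0.4 * x) + 0.002 * rng.normal(size=x.size)  # synthetic excess data

for order in (3, 6):
    coeffs = np.polyfit(x, y, order)
    deriv = np.polyder(np.poly1d(coeffs))
    print(f"order {order}: dY/dx at x = 0.5 -> {deriv(0.5):+.4f}")

# Without stated experimental uncertainties it is hard to judge which fit,
# and hence which derivative, is the more reliable.
```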

The steps in an analysis usually include the following: sampling, sample preparation and workup, separation (chromatography), detection of the analyte, data analysis (including peak area integration), and calculations. With major advances in GC instrumentation and integration in the past 20 years, the major sources of GC error are usually sampling and sample preparation, especially if dirty matrices are involved. [Pg.74]
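
Of the listed steps, only the data-analysis end is easy to show in a few lines; the sketch below performs a baseline-corrected trapezoidal integration of a synthetic chromatographic peak, with all values invented. The sampling and sample-preparation steps, which cannot be captured in code, are the ones that usually dominate the error.

```python
# Minimal sketch: baseline-corrected trapezoidal peak area integration
# of a synthetic Gaussian peak (illustrative values).
import numpy as np
from scipy.integrate import trapezoid

t = np.linspace(0.0, 2.0, 401)                                  # retention time, min
signal = 50.0 * np.exp(-0.5 * ((t - 1.0) / 0.05) ** 2) + 2.0    # peak on a flat baseline

baseline = 2.0                                                  # assumed constant baseline level
area = trapezoid(signal - baseline, t)
print(f"peak area: {area:.2f} (signal units x min)")            # ~ 50 * 0.05 * sqrt(2*pi)
```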

This chapter describes practical aspects of the application of UV absorbance temperature profiles to determine the thermodynamics of nucleic acid structural transitions. Protocols and practical advice are presented for issues not normally addressed in the primary literature but that are crucial for the determination of reliable thermodynamics, such as sequence design, sample preparation, choice of buffer, protocols for determining strand concentrations and mixing strands, design of microvolume cuvettes and cell holder, instrumental requirements, data analysis methods, and sources of error. References to the primary literature and reviews are also provided where appropriate. Sections of this chapter have been adapted from previous reviews and are reprinted with permission from the Annual Review of Biochemistry, Volume 62, 1993, by Annual Reviews, www.AnnualReviews.org (6), and with permission from Biopolymers, 1997, by John Wiley & Sons, Inc. (4). [Pg.329]
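
As one example of the kind of data analysis referred to, the sketch below fits a synthetic UV melting curve to a two-state van't Hoff model with sloping folded and unfolded baselines, assuming a unimolecular transition; the parameter values and noise level are invented, and the model choice is an assumption for illustration, not the protocol of the cited chapter.

```python
# Minimal sketch: two-state van't Hoff fit of a synthetic UV melting curve
# with linear low- and high-temperature baselines (unimolecular transition).
import numpy as np
from scipy.optimize import curve_fit

R = 1.987e-3  # kcal mol^-1 K^-1

def melt_curve(T, dH, Tm, a_f, b_f, a_u, b_u):
    """Absorbance vs temperature for a two-state unimolecular transition."""
    K = np.exp((dH / R) * (1.0 / Tm - 1.0 / T))      # unfolding equilibrium constant
    theta_folded = 1.0 / (1.0 + K)
    baseline_f = a_f + b_f * T                        # folded (low-T) baseline
    baseline_u = a_u + b_u * T                        # unfolded (high-T) baseline
    return theta_folded * baseline_f + (1 - theta_folded) * baseline_u

T = np.linspace(283.0, 363.0, 161)                    # 10-90 C, in kelvin
rng = np.random.default_rng(3)
true = (45.0, 328.0, 0.60, 1e-4, 0.80, 2e-4)          # dH (kcal/mol), Tm (K), baselines
A = melt_curve(T, *true) + 0.001 * rng.normal(size=T.size)

popt, _ = curve_fit(melt_curve, T, A, p0=(40.0, 330.0, 0.6, 0.0, 0.8, 0.0))
print(f"dH = {popt[0]:.1f} kcal/mol, Tm = {popt[1] - 273.15:.1f} C")
```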

