Big Chemical Encyclopedia

Chemical substances, components, reactions, process design ...


Censored data

Missing and censored data should be handled exactly as in linear regression. The analyst can use complete-case analysis, naive substitution, conditional mean substitution, maximum likelihood, or multiple imputation. The advantages and disadvantages these techniques have in linear regression apply equally to nonlinear regression. [Pg.121]


When all units have failed, the result is complete data. Singly censored data result from life testing when testing is terminated before all units fail. And multiply censored data result from ... [Pg.1045]

The following are examples of situations with these reasons for multiply-censored data ... [Pg.1046]

Graphical analysis of failure data is most commonly carried out with probability plotting. However, to understand the hazard plotting method presented here, it is not necessary to understand probability plotting. While probability plotting is difficult to apply to multiply-censored data, it is... [Pg.1046]

To determine which hazard paper is appropriate for plotting a set of multiply censored data, first rely on engineering experience. If that is not an option, try different papers until a suitable one is found. To save time, it may be a good idea to begin by plotting the sample cumulative hazard function on exponential hazard paper, since it is just... [Pg.1051]

The cumulative hazard plotting method and papers presented here provide simple means for statistical analyses of multiply censored failure data to obtain engineering information. The hazard-plotting method is simpler to use for multiply censored data than other plotting methods given in the literature and directly gives failure-rate information not provided by others. [Pg.1054]
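The calculation behind this kind of hazard plot can be sketched briefly. The snippet below implements the reverse-rank cumulative hazard computation commonly attributed to Nelson, which matches the method described above in spirit; the failure and censoring times are invented purely for illustration and do not come from the cited work.

```python
# Sketch of the cumulative-hazard calculation for multiply censored data.
# Each unit is a (time, failed?) pair; failed=False means censored.
failure_times = [(37, True), (51, False), (62, True), (73, True),
                 (88, False), (95, True)]  # hypothetical example data

# Sort by time; a unit's reverse rank k is the number of units still
# at risk just before its time.
events = sorted(failure_times)
n = len(events)
cum_hazard = 0.0
plot_points = []  # (time, cumulative hazard in %) to plot on hazard paper
for i, (t, failed) in enumerate(events):
    k = n - i  # reverse rank: units remaining at risk
    if failed:
        cum_hazard += 100.0 / k  # each failure contributes 100/k percent
        plot_points.append((t, cum_hazard))
```

Censored units contribute no hazard increment themselves, but they reduce the at-risk count for later failures, which is how the method adjusts for multiple censoring.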

These methods are essential when there is any significant degree of mortality in a bioassay. They seek to adjust for the differences in periods of risk that individual animals undergo. Life table techniques can be used for data where there are observable or palpable tumors. Specifically, one should apply Kaplan-Meier product-limit estimates graphically to censored data, and use Cox-Tarone binary regression (log-rank test) and the Gehan-Breslow modification of the Kruskal-Wallis test (Thomas et al., 1977; Portier and Bailer, 1989) on censored data. [Pg.322]
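As a minimal sketch of the Kaplan-Meier product-limit estimate named above, the following hand-rolled implementation computes the survival curve from right-censored follow-up times. The times and event flags are hypothetical illustration values, not data from the cited studies.

```python
import numpy as np

def kaplan_meier(times, observed):
    """Product-limit survival estimate for right-censored data.

    times    : follow-up times
    observed : 1/True where the event (e.g. tumor death) was observed,
               0/False where the animal was censored
    Returns (event_times, survival_probabilities).
    """
    times = np.asarray(times, dtype=float)
    observed = np.asarray(observed, dtype=bool)
    order = np.argsort(times)
    times, observed = times[order], observed[order]

    s = 1.0
    event_times, survival = [], []
    for t in np.unique(times[observed]):
        at_risk = np.sum(times >= t)              # n_i: still under observation
        deaths = np.sum((times == t) & observed)  # d_i: events at time t
        s *= 1.0 - deaths / at_risk               # S(t) = prod (1 - d_i / n_i)
        event_times.append(t)
        survival.append(s)
    return event_times, survival

# Hypothetical survival times in weeks; 1 = event observed, 0 = censored.
times = [6, 6, 6, 7, 10, 13, 16, 22, 23]
events = [1, 1, 0, 1, 0, 1, 1, 1, 1]
t, s = kaplan_meier(times, events)
```

Censored animals drop out of the at-risk set without forcing the curve downward, which is exactly the adjustment for unequal periods of risk that the text describes.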

The HR-ICPMS cation analytical results are a robust dataset that is remarkably free of data qualifiers for 32 of the 63 cations analyzed; an additional 14 cations have <20% censored data. Further, the low-level data have consistent map distribution patterns that make sense geologically. Described below are patterns for several possible porphyry-related elements. [Pg.367]

The robustness of a sample preparation technique is characterized by the reliability of the instrumentation used and the variability (precision) of the information obtained in the subsequent sample analysis. Thus, variations in controlled parameters and sequences are to be avoided. In sample preparation methods employing supercritical fluids as the extracting solvents, it has been our experience that minimal variations in efficient analyte recoveries are possible using a fully automated extraction system. The extraction solvent operating parameters under automated control are temperature, pressure (thus density), composition and flow rate through the sample. The precision of the technique will be discussed by presenting replicability, repeatability, and reproducibility data for the extraction of various analytes from such matrices as sands and soils, river sediment, and plant and animal tissue. Censored data will be presented as an indicator of instrumental reliability. [Pg.269]

Reliability, that part of robustness associated with ongoing performance, can be summarized as the "ability of a method or technique to successfully perform a required function under stated conditions for a stated period of time." The work presented in the "Results and Discussion" section of this paper focuses predominantly on measures of precision. Some anecdotal information - essentially censored data - is presented for reliability. [Pg.275]

There are often data sets used to estimate distributions of model inputs in which a portion of the data are missing because attempted measurements fell below the detection limit of the measurement instrument. These data sets are said to be censored. Commonly used methods for dealing with such data sets, such as replacing non-detected values with one half of the detection limit, are statistically biased: they produce biased estimates of the mean and provide no insight into the population distribution from which the measured data are a sample. Statistical methods can be used to make inferences regarding both the observed and unobserved (censored) portions of an empirical data set. For example, maximum likelihood estimation can be used to fit parametric distributions to censored data sets, including the portion of the distribution that is below one or more detection limits. Asymptotically unbiased estimates of statistics, such as the mean, can then be obtained from the fitted distribution. Bootstrap simulation can be used to estimate uncertainty in the statistics of the fitted distribution (e.g. Zhao & Frey, 2004). Imputation methods, such as... [Pg.50]
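The maximum likelihood approach described above can be sketched as follows: fit a parametric distribution so that detected values contribute their density and each non-detect contributes the probability mass below the detection limit. Everything here is a simulated illustration under an assumed lognormal population; the detection limit DL and all numbers are made up.

```python
import numpy as np
from scipy import stats, optimize

# Simulated left-censored sample: values below a single detection limit DL
# are non-detects, for which only the count is known.
rng = np.random.default_rng(42)
DL = 0.5
sample = rng.lognormal(mean=0.0, sigma=1.0, size=200)
detects = sample[sample >= DL]          # observed values
n_censored = int(np.sum(sample < DL))   # number of non-detects

def neg_log_lik(params):
    mu, sigma = params
    if sigma <= 0.0:
        return np.inf
    # Detected values contribute the lognormal log-density...
    ll = np.sum(stats.norm.logpdf(np.log(detects), mu, sigma) - np.log(detects))
    # ...and each non-detect contributes log P(X < DL).
    ll += n_censored * stats.norm.logcdf((np.log(DL) - mu) / sigma)
    return -ll

res = optimize.minimize(neg_log_lik, x0=[0.0, 1.0], method="Nelder-Mead")
mu_hat, sigma_hat = res.x
# Mean estimated from the fitted distribution, covering both the observed
# and the censored portion of the population:
mean_hat = np.exp(mu_hat + sigma_hat**2 / 2.0)
```

Bootstrap uncertainty in `mean_hat`, as in the cited Zhao & Frey approach, could then be obtained by resampling `sample` and repeating the fit.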

Zhao Y, Frey HC (2004). Quantification of variability and uncertainty for censored data sets and application to air toxic emission factors. Risk Analysis, 24(3): 1019-1034. [Pg.95]

Chowdhury and Fard (2001) presented a method for estimating dispersion effects from robust design experiments with right censored data. Kim and Lin (2002) proposed a method to determine optimal design factor settings that take account of both location and dispersion effects when there are multiple responses. They based their approach on response surface models for location and dispersion of each response variable. [Pg.40]

Borth, D.M. (1996). Optimal Experimental Designs for (Possibly) Censored Data. Chemom. Intell. Lab. Syst., 32, 25-35. [Pg.542]

A prediction model must define all of the possible results that may be obtained from the alternative method. This is important since there are many different types of data available from typical alternative methods. Examples of data types include quantitative data, censored data, qualitative data, descriptive data, default values, and nonqualified... [Pg.2708]

Left-censored data are characteristic of many bioassays due to the inherent presence of a lower limit of detection and quantification. An ad hoc approach to dealing with the left-censored values is to replace them with the limit of quantification (LOQ) or LOQ/2. Alternatively, one can borrow information from other variables related to the missing values and use MI to estimate the left-censored data. In addition, the left-censoring mechanism can be incorporated directly into a parametric model, and a maximum likelihood (ML) approach can be used to estimate the parameters (21). [Pg.254]
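The bias introduced by the ad hoc LOQ/2 replacement is easy to demonstrate by simulation. The lognormal population, the LOQ value, and the sample size below are all invented for illustration; the point is only that the substituted mean systematically differs from the true mean.

```python
import numpy as np

# Simulated illustration of the bias of LOQ/2 substitution.
rng = np.random.default_rng(0)
loq = 0.5
true_values = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)

# Replace every value below the limit of quantification with LOQ/2.
substituted = np.where(true_values < loq, loq / 2.0, true_values)

# Nonzero difference: the substitution estimator of the mean is biased.
bias = substituted.mean() - true_values.mean()
```

For this population, LOQ/2 sits below the conditional mean of the censored values, so the substituted mean is biased low; other populations or substitution values (e.g. the LOQ itself) can bias it high instead.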

J. P. Hing, S. G. Woolfrey, D. Greenslade, and P. M. C. Wright. Analysis of toxicokinetic data using NONMEM: impact of quantification limit and replacement strategies for censored data. J Pharmacokinet Pharmacodyn 28: 465-479 (2001). [Pg.261]

Censoring, Type III. Type III censoring is differentiated from Type I and II censoring in that the censoring times are not identical, even for subjects who do not drop out of a study. This type of censoring occurs when the study is of fixed duration and the event of interest is the duration of a response that is first observed at a random time after the start of the study. Because the starting time of the response is random, the censoring time for all subjects who remain enrolled at the end of the study will also be random. [Pg.658]

In some cases, the exact time at which an event occurs is not known, but the event is known to have occurred between two recorded times. Such cases are termed interval censored. This type of censoring is present in the analgesic trial example presented later in this chapter. Survival time analysis is better suited than logistic analysis to the analysis of interval or right censored data. [Pg.658]

To truly account for left-censored data requires a likelihood approach that defines the total likelihood as the sum of the likelihoods for the observed data and the missing data and then maximizes the total censored and uncensored likelihood with respect to the model parameters. In the simplest case with n independent observations that are not longitudinal in nature, m of which are below the LLOQ, the likelihood equals... [Pg.297]
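The equation itself appears to have been lost at the page break. Assuming, as the text states, m of the n independent observations fall below a common LLOQ, and writing f and F for the density and cumulative distribution under parameter vector θ, the standard form of the censored likelihood is

```latex
L(\theta) \;=\; \prod_{i=1}^{n-m} f(y_i;\theta)\;\times\;\bigl[F(\mathrm{LLOQ};\theta)\bigr]^{m}
```

whose logarithm is the sum of the observed-data and censored-data contributions, maximized with respect to θ.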

It should be noted that in the case of right-censored data the likelihood is simply... [Pg.297]
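Again the formula seems to have been truncated in extraction. For observations right-censored at times c_j, each censored observation contributes the survival probability, so the standard form is

```latex
L(\theta) \;=\; \prod_{i\in\mathrm{obs}} f(y_i;\theta)\;\times\;\prod_{j\in\mathrm{cens}}\bigl[1 - F(c_j;\theta)\bigr]
```

with the censored factor 1 - F replacing the F term that appears in the left-censored case.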


See other pages where Censored data is mentioned: [Pg.1046]    [Pg.1047]    [Pg.1051]    [Pg.918]    [Pg.273]    [Pg.28]    [Pg.104]    [Pg.2709]    [Pg.255]    [Pg.260]    [Pg.656]    [Pg.657]    [Pg.657]    [Pg.667]    [Pg.868]    [Pg.1222]    [Pg.86]    [Pg.86]    [Pg.91]    [Pg.121]    [Pg.196]    [Pg.196]    [Pg.268]    [Pg.296]    [Pg.296]    [Pg.297]   
See also in sourсe #XX -- [ Pg.110 , Pg.113 , Pg.171 ]

See also in sourсe #XX -- [ Pg.196 ]

See also in sourсe #XX -- [ Pg.37 ]







Censoration

Censored survival time data

Censoring survival data

Data censoring

Missing and censored data

Time-to-event data and censoring

© 2024 chempedia.info