Big Chemical Encyclopedia


Time-to-Event Data Set

A time-to-event analysis data set captures the elapsed time between a therapeutic intervention and some subsequent event of interest. Two time-to-event analysis variables deserve special attention and definition. They are as follows. [Pg.121]

Event/Censor A binary outcome such as success/failure, death/survival, or heart attack/no heart attack. If the event happened to the subject, the event variable is set to 1. If it is certain that the subject did not experience the event, the event variable is set to 0. Otherwise, this variable should be missing. [Pg.121]

Time to Event This variable captures the time (usually in study days) from therapeutic intervention to the event date or censor date. If the event occurred for a subject, the time to event is the study day of that event. If the event did not occur, the time to event is set to the censor date, which is often the last known follow-up date for the subject. [Pg.121]

So for every clinical event of concern there are two variables: a binary event flag and a time-to-event value. Time-to-event data sets are typically represented as a flat, denormalized, one-observation-per-subject data set. [Pg.121]
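The two variables can be derived mechanically from an intervention date, an optional event date, and a last-follow-up date. The book works in SAS; the following is a minimal Python sketch of the same derivation, with hypothetical field names (`trt_start`, `event_date`, `last_followup`) and a day-1-at-intervention study-day convention assumed:

```python
from datetime import date

# Hypothetical subject-level records; all names are illustrative only.
subjects = [
    {"id": 1, "trt_start": date(2023, 1, 1),
     "event_date": date(2023, 3, 1), "last_followup": date(2023, 6, 1)},
    {"id": 2, "trt_start": date(2023, 1, 15),
     "event_date": None, "last_followup": date(2023, 5, 15)},
]

def to_time_to_event(rec):
    """Derive the event flag and time-to-event (study day) for one subject."""
    if rec["event_date"] is not None:
        event, end = 1, rec["event_date"]        # event observed
    else:
        event, end = 0, rec["last_followup"]     # censored at last follow-up
    # Study-day convention assumed here: day of intervention is day 1.
    return {"id": rec["id"], "event": event,
            "time_to_event": (end - rec["trt_start"]).days + 1}

analysis = [to_time_to_event(r) for r in subjects]
```

One row per subject comes out, matching the flat, one-observation-per-subject shape described above.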

Note that the term censor is introduced in the preceding definitions. The log-rank test (invoked in SAS with PROC LIFETEST) and the Cox proportional hazards model (invoked in SAS with PROC PHREG) allow for censored observations in a time-to-event analysis. These methods adjust for the fact that at some point a patient may no longer be able to experience an event. The censor date is the last known time at which the patient had not experienced the event, and the point from which the patient is no longer considered able to experience it. Often the censor date is the last known date of patient follow-up, but a patient could be censored for other reasons, such as having taken a protocol-prohibited medication. [Pg.121]
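To see what censoring buys, it helps to look at the Kaplan-Meier product-limit estimator, the survival curve the log-rank test compares across groups: censored subjects contribute to the risk set up to their censor time and then drop out without counting as events. This is a pure-Python sketch of the estimator for illustration, not the PROC LIFETEST implementation:

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate.

    times:  time to event or censoring for each subject
    events: 1 if the event occurred, 0 if censored
    Returns a list of (event time, survival probability) pairs.
    """
    pairs = sorted(zip(times, events))
    n_at_risk = len(pairs)
    surv, curve, i = 1.0, [], 0
    while i < len(pairs):
        t, j, deaths = pairs[i][0], i, 0
        while j < len(pairs) and pairs[j][0] == t:
            deaths += pairs[j][1]          # count events (not censorings) at t
            j += 1
        if deaths:
            surv *= 1 - deaths / n_at_risk  # step down only at event times
            curve.append((t, surv))
        n_at_risk -= (j - i)               # everyone at t leaves the risk set
        i = j
    return curve

# Four subjects: events at days 5, 6, 8; one subject censored at day 6.
km = kaplan_meier([5, 6, 6, 8], [1, 0, 1, 1])
```

The censored subject at day 6 still counts in the day-6 risk set, which is exactly the adjustment the text describes.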



In this section we take the aforementioned principles and guidelines for analysis data sets and apply them to creating the most common analysis data sets. The critical variables, change-from-baseline, and time-to-event data sets are presented. Although these are the most common analysis data sets that a statistical programmer will encounter, they are by no means all of the possible analysis data sets. When it comes to analysis data sets, there is no limit to the diversity of data that you may have to create. [Pg.118]

The following is a simple example of what a time-to-event data set might look like when the event of interest is seizure. Here you assume that there is a seizure event form that collects whether a subject had a seizure and the date on which the seizure occurred. You also assume that you do not need to search other ancillary data forms, such as adverse events, for seizure events. [Pg.122]

Program 4.15 Creating a Time-to-Event Data Set for Seizures... [Pg.122]
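The SAS listing for Program 4.15 is not reproduced in this excerpt. As an illustration only, a Python sketch of the same logic might look as follows: take the first seizure date per subject from the seizure form, flag the event, and censor non-seizing subjects at their termination date (all data set and field names here are hypothetical):

```python
from datetime import date

# Hypothetical raw data; the book's Program 4.15 is SAS, this is an
# illustrative Python equivalent only.
dosing = {101: date(2023, 1, 1), 102: date(2023, 1, 3)}        # first dose
termination = {101: date(2023, 4, 1), 102: date(2023, 4, 10)}  # last follow-up
seizures = [(101, date(2023, 2, 1)), (101, date(2023, 3, 1))]  # seizure form rows

# Keep only the earliest seizure per subject.
first_seizure = {}
for subj, d in seizures:
    if subj not in first_seizure or d < first_seizure[subj]:
        first_seizure[subj] = d

time_to_seizure = []
for subj, start in dosing.items():
    if subj in first_seizure:
        event, end = 1, first_seizure[subj]   # seizure observed
    else:
        event, end = 0, termination[subj]     # censored at termination
    time_to_seizure.append(
        {"subject": subj, "seizure": event, "days": (end - start).days + 1})
```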

In order to illustrate the kinds of arguments and considerations which are needed in relation to intention-to-treat, the discussion in this section will consider a set of applications where problems frequently arise. In Chapter 13 we will cover methods for the analysis of time-to-event or so-called survival data, but for the moment I would like to focus on endpoints within these areas that do not use the time-point at which randomisation occurs as the start point for the time-to-event measure. Examples include the time from rash healing to complete cessation of pain in Herpes Zoster, the time from six weeks after start of treatment to first seizure in epilepsy and time from eight weeks to relapse amongst responders at week 8 in severe depression. [Pg.122]
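Endpoints like these are often handled as landmark analyses: the clock starts at the landmark (e.g. week 8), and only subjects who reach the landmark in the qualifying state enter the risk set. A minimal Python sketch under those assumptions, with illustrative field names:

```python
# Hypothetical landmark analysis: time from a week-8 landmark (study day 56)
# to relapse, restricted to subjects who were responders at week 8.
subjects = [
    {"id": 1, "responder_wk8": True,  "relapse_day": 120, "last_day": 120},
    {"id": 2, "responder_wk8": True,  "relapse_day": None, "last_day": 200},
    {"id": 3, "responder_wk8": False, "relapse_day": 70,  "last_day": 70},
]

LANDMARK = 56  # study day of the week-8 visit

analysis = []
for s in subjects:
    if not s["responder_wk8"]:
        continue                              # risk set starts at the landmark
    if s["relapse_day"] is not None:
        event, end = 1, s["relapse_day"]      # relapse observed
    else:
        event, end = 0, s["last_day"]         # censored at last follow-up
    analysis.append({"id": s["id"], "event": event, "time": end - LANDMARK})
```

Subject 3 never enters the analysis, which is the intention-to-treat subtlety the text is pointing at: the start point is no longer randomisation.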

The baseline data set must be updated each time the machine is repaired, rebuilt, or subjected to major maintenance. Even when best practices are used, machinery cannot be restored to as-new condition by major maintenance. Therefore, a new baseline or reference data set must be established following these events. [Pg.693]

It is therefore easy to see why this current drug safety paradigm, with its lack of standards in data collection and analysis, hinders the analysis of adverse events. Without data standards in place, it is difficult to build practical, reusable tools for systematic safety analysis. With no standard tools, truly standardized analyses cannot occur. Reviewers may forget their initial analytical processes if they are not using standardized data and tools. Comprehensive reproducibility and auditability, therefore, become nearly impossible. In practice, the same data sets and analytical processes cannot be easily reused, even by the same reviewers who produced the original data sets and analyses. Not using standardized tools slows the real-time systematic analysis... [Pg.652]

Imagine you have a data set of adverse event data and a data set of concomitant medications, and you want to know if a concomitant medication was given to a patient during the time of an adverse event. The following program defines the two data sets and joins them with PROC SQL so that you get all medications taken during any specific adverse event. [Pg.106]
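The book's example uses PROC SQL; the join condition it relies on is the standard interval-overlap test (a medication overlaps an adverse event when each starts no later than the other ends). A Python sketch of the same many-to-many join, with hypothetical variable names and study-day dates:

```python
# Hypothetical adverse-event and concomitant-medication records,
# with start/end expressed as study days.
adverse_events = [
    {"subject": 1, "ae": "headache", "ae_start": 10, "ae_end": 15},
    {"subject": 2, "ae": "nausea",   "ae_start": 5,  "ae_end": 8},
]
conmeds = [
    {"subject": 1, "med": "aspirin",     "med_start": 12, "med_end": 20},
    {"subject": 1, "med": "ibuprofen",   "med_start": 30, "med_end": 35},
    {"subject": 2, "med": "ondansetron", "med_start": 1,  "med_end": 6},
]

# Many-to-many join: two intervals overlap when each starts
# no later than the other ends.
meds_during_ae = [
    (ae["subject"], ae["ae"], cm["med"])
    for ae in adverse_events
    for cm in conmeds
    if ae["subject"] == cm["subject"]
    and cm["med_start"] <= ae["ae_end"]
    and ae["ae_start"] <= cm["med_end"]
]
```

Each adverse event can match any number of medications and vice versa, which is why this is a many-to-many comparison rather than an ordinary merge.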

The analysis of safety data may require larger data sets, so care in ensuring that a consistent phenotype is collected will be important. Furthermore, the benefit to the drug development program could be that additional data to predict adverse events could be available at the time a drug is marketed. [Pg.97]

In a sense, it is like trend analysis: it looks at the relationship of sets of data from a different perspective. In the case of Fourier analysis, the approach is to resolve the time-dimension variable in the data set. At its simplest level, it assumes that many events are periodic in nature; if we can remove the variation in other variables caused by this periodicity (by using Fourier transforms), we can better analyze the remaining variation from other variables. The complications are that (1) there may be several overlapping cyclic time-based periodicities, and (2) we may be interested in the time-cycle events for their own sake. [Pg.949]
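As a toy illustration of identifying a dominant periodicity, the discrete Fourier transform can be computed directly; the nonzero frequency with the largest coefficient magnitude gives the period. A pure-Python sketch, illustrative only and not from the source:

```python
import cmath
import math

def dominant_period(series):
    """Period (in samples) of the strongest nonzero frequency in a
    real-valued series, found via a direct DFT. Illustrative only."""
    n = len(series)
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2 + 1):
        # DFT coefficient at frequency index k.
        coeff = sum(series[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))
        if abs(coeff) > best_mag:
            best_k, best_mag = k, abs(coeff)
    return n / best_k

# A series with a strong 12-sample cycle (e.g. monthly seasonality):
series = [math.sin(2 * math.pi * t / 12) for t in range(48)]
```

Subtracting the reconstructed cycle from the series is then the "remove the periodic variation" step the passage describes.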

Figure 1.27. Histograms of a data set containing n = 19 values; the x-range depicted is in all cases -0.5 to 5.5 inclusive, leaving one event to the left of the lower boundary. The number of classes is 6, 12, resp. 24; in the latter case the observed frequency never exceeds two per class, which is clearly insufficient for a χ²-test. (See text.) The superimposed normal distribution has the same area as the sum of events, n, times the bin width, namely 19, 8.5, respectively 4.25.
Fig. 1. Relative probability histograms of Slave craton detrital zircons (continuous curve with black infill below, based on data from Sircombe et al. 2001), 40Ar/39Ar ages of impact spherules in lunar soil samples (dash-dot curve, after Culler et al. 2000), and 40Ar/39Ar ages of impact glasses in lunar meteorites (dashed curve, after Cohen et al. 2000). The time interval spans from 4500 Ma, the approximate age of formation of the Moon, to 2500 Ma, the defined Archaean-Proterozoic boundary. Vertical scales of the three curves are independent. Shaded age bars with roman numerals represent main events in the basement of the Slave craton that were initially defined on the basis of individual rock ages and their inheritance (see Bleeker & Davis 1999). The detrital zircon data represent c. 300 zircon grains from five widely distributed samples of a c. 2800 Ma quartzite unit overlying the Mesoarchaean- to Hadean-age basement complex of the Slave craton, and represent a least-biased record of pre-2.8 Ga components of the craton. The broad complementarity in the data sets should be noted, with the first major peak in Slave crustal ages (event V, 3100-3200 Ma) immediately following the last major peak in the lunar spherule data. Both the lunar soil and meteorite data sets support a lunar cataclysm or late heavy bombardment that appears to have erased or swamped out the pre-4.0 Ga lunar record.




© 2024 chempedia.info