
Random data

Consequently, the surface generation becomes a question of producing a matrix of random data that obey the defined height distribution and a prescribed ACF. This can be carried out efficiently through a 2-D digital filter procedure, proposed by Hu and Tonder [47], summarized as follows. [Pg.130]
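A minimal numerical sketch of the linear-filter idea follows, assuming a Gaussian white-noise input and an illustrative exponential target ACF; the grid size, RMS value sigma, and correlation lengths beta_x/beta_y are invented, and the FFT route stands in for the explicit 2-D filter coefficients of the original procedure.

```python
# A minimal sketch: impose a target ACF on Gaussian white noise by filtering.
# The exponential ACF and all parameters are illustrative assumptions.
import numpy as np

def generate_surface(n=256, beta_x=10.0, beta_y=10.0, sigma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    eta = rng.standard_normal((n, n))        # uncorrelated Gaussian input

    # Illustrative target ACF: exponential decay with correlation lengths.
    lag = np.arange(n) - n // 2
    LX, LY = np.meshgrid(lag, lag, indexing="ij")
    acf = sigma**2 * np.exp(-2.3 * np.sqrt((LX / beta_x)**2 + (LY / beta_y)**2))

    # Wiener-Khinchin: the power spectrum is the FFT of the ACF; the filter
    # transfer function is its square root.
    psd = np.abs(np.fft.fft2(np.fft.ifftshift(acf)))
    H = np.sqrt(psd)

    # Filter the white noise in the frequency domain, then rescale the RMS.
    z = np.real(np.fft.ifft2(H * np.fft.fft2(eta)))
    return z * sigma / z.std()

surface = generate_surface()                 # matrix of correlated heights
```

Because the filter is linear, the Gaussian height distribution of the input survives while the filter imposes the ACF; non-Gaussian target height distributions require an additional transformation step.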

Demographics and Trial-Specific Baseline Data 27
Concomitant or Prior Medication Data 27
Medical History Data 29
Investigational Therapy Drug Log 30
Laboratory Data 31
Adverse Event Data 32
Endpoint/Event Assessment Data 35
Clinical Endpoint Committee (CEC) Data 36
Study Termination Data 37
Treatment Randomization Data 38
Quality-of-Life Data 40... [Pg.19]

The randomization of a patient to a given therapy is the cornerstone of a randomized clinical trial. You may find these data in more than one place. They are often found within some form of Interactive Voice Response System (IVRS), but they may also be found in an electronic file containing the treatment assignments, or on the CRF itself. If randomization data are found on the CRF, they usually consist only of the date of randomization for treatment-blinded trials. IVRS data are often found outside the confines of the clinical data management system and usually consist of the following three types of data tables. [Pg.38]

The randomization data are used in both efficacy and safety analyses, as treatment assignment is typically the key stratification variable for the trial. The randomization data allow you to answer the question of whether patients receiving the study therapy fare better than those receiving the alternative. CDISC places treatment assignment information in the special-purpose demographics domain. [Pg.40]
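As a small illustration of pulling IVRS treatment assignments into an analysis dataset, a hedged pandas sketch follows; the file names and the columns SUBJID, ARMCD, and RANDDT are hypothetical placeholders, not a fixed IVRS layout.

```python
# A hedged sketch of merging IVRS treatment assignments into a demographics
# dataset. File names and columns (SUBJID, ARMCD, RANDDT) are hypothetical.
import pandas as pd

dm = pd.read_csv("dm.csv")                    # demographics, one row/subject
ivrs = pd.read_csv("ivrs_randomization.csv")  # IVRS treatment assignments

dm = dm.merge(ivrs[["SUBJID", "ARMCD", "RANDDT"]], on="SUBJID", how="left")

# Flag subjects with no randomization record (e.g., screen failures)
# instead of silently dropping them from safety summaries.
unrandomized = dm.loc[dm["ARMCD"].isna(), "SUBJID"].tolist()
```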

Randomization data from an interactive voice response system (IVRS)... [Pg.44]

Bendat, J.S., Piersol, A.G., Random Data: Analysis and Measurement Procedures, Wiley, New York, 1971. [Pg.246]

The calculation used is the sum of squares of the differences [5]. This calculation is normally applied to situations where random variations affect the data and is, indeed, the basis for many of the statistical tests applied to random data. However, the formalism of partitioning the sums of squares, which we have previously discussed [6] (also in [7], p. 81 in the first edition or p. 83 in the second edition), can be applied to data where the variations are due to systematic effects rather than random effects. The difference is that the usual statistical tests (t, χ², F, etc.) do not apply to variations from systematic causes, because such variations do not follow the required statistical distributions. It is therefore legitimate to perform the calculation, as long as we are careful in how we interpret the results. [Pg.453]
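A minimal numeric sketch of the partitioning identity, with invented data values:

```python
# A minimal sketch of partitioning a sum of squares: the total sum of squared
# deviations from the grand mean splits exactly into between-group and
# within-group parts. The data values are invented for illustration.
import numpy as np

groups = [np.array([9.8, 10.1, 10.0]), np.array([10.6, 10.9, 10.7])]
grand_mean = np.mean(np.concatenate(groups))

ss_total   = sum(((g - grand_mean) ** 2).sum() for g in groups)
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_within  = sum(((g - g.mean()) ** 2).sum() for g in groups)

assert np.isclose(ss_total, ss_between + ss_within)
```

The identity itself holds whether the group-to-group differences are random or systematic; only the t, χ², or F interpretation of the pieces relies on random-error distributions.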

A computer program, containing the data for all compounds, is available for a nominal fee (Carl L. Yaws, Box 10053, Lamar University, Beaumont, TX 77710, phone/FAX 409-880-8787). The computer program (random data file) is in ASCII format, which can be accessed by other software. [Pg.1]

A widely used approach to establishing model robustness is the randomization of the response [25] (in our case, of the activities). It consists of repeating the calculation procedure with randomized activities and subsequently assessing the probability of the resultant statistics. Frequently, it is used along with cross-validation. Sometimes, models based on the randomized data have high q² values, which can be explained by a chance correlation or structural redundancy [26]. If all QSAR models obtained in the Y-randomization test have relatively high values for both R² and LOO q², it implies that an acceptable QSAR model cannot be obtained for the given dataset by the current modeling method. [Pg.439]
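A hedged sketch of such a Y-randomization run follows; the linear model, the LOO q² criterion, and the simulated X and y are placeholders standing in for real descriptors and activities.

```python
# A hedged sketch of a Y-randomization (response-scrambling) test.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

def q2_loo(X, y):
    y_hat = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
    return 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)

rng = np.random.default_rng(0)
X = rng.standard_normal((40, 5))              # stand-in descriptor matrix
y = X[:, 0] + 0.1 * rng.standard_normal(40)   # stand-in activities

q2_real = q2_loo(X, y)
q2_scrambled = [q2_loo(X, rng.permutation(y)) for _ in range(20)]
# A trustworthy model has q2_real well above the scrambled distribution;
# scrambled runs with high q2 point to chance correlation.
```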

In chromatography the quantitative or qualitative information has to be extracted from the peak-shaped signal, generally superimposed on a background contaminated with noise. Many, mostly semi-empirical, methods have been developed for extracting the relevant information and for reducing the influence of noise. Both for this purpose and for quantifying the random error it is necessary to characterize the noise, applying the theory of random time functions and stochastic processes. Four main types of statistical functions are used to describe the basic properties of random data... [Pg.71]
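As a small illustration, here is a sketch computing three of the usual descriptors (mean square value, amplitude probability density, and the autocorrelation function) on a simulated noise trace.

```python
# A minimal sketch of characterizing a noise record with basic statistical
# functions. The simulated trace is illustrative only.
import numpy as np

rng = np.random.default_rng(1)
noise = rng.standard_normal(2048)            # stand-in for a baseline trace

mean_square = np.mean(noise ** 2)            # noise power

# Amplitude probability density as a normalized histogram.
density, edges = np.histogram(noise, bins=50, density=True)

# Autocorrelation function, normalized to 1 at lag 0.
centered = noise - noise.mean()
acf = np.correlate(centered, centered, mode="full")[len(noise) - 1:]
acf /= acf[0]
```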

Classic univariate regression uses a single predictor, which is usually insufficient to model a property in complex samples. Multivariate regression takes several predictive variables into account simultaneously for increased accuracy. The purpose of a multivariate regression model is to extract the relevant information from the available data. Observed data usually contain some noise and may also include irrelevant information. Noise can be considered as random variation in the data due to experimental error. It may also represent observed variation due to factors not initially included in the model. Further, the measured data may carry irrelevant information that has little or nothing to do with the attribute modeled. For instance, NIR absorbance... [Pg.399]
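A hedged sketch of the contrast follows, on simulated placeholder data that depend on two predictors.

```python
# Univariate vs. multivariate regression on noisy two-predictor data.
# All values are simulated placeholders.
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((100, 2))
y = 3.0 * X[:, 0] + 1.5 * X[:, 1] + 0.5 * rng.standard_normal(100)

# A univariate fit on the first predictor alone treats the systematic
# contribution of the second predictor as if it were noise.
slope, intercept = np.polyfit(X[:, 0], y, 1)

# Multivariate least squares uses both predictors simultaneously.
A = np.column_stack([X, np.ones(len(y))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
```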

Figure 4-5. 50% and 90% confidence intervals for the same set of random data. Filled squares are the data points whose confidence intervals do not include the true population mean of 10 000. [Pg.59]

Another simple variant of this method is to test the model on completely random data. Generate a random series of numbers for your figure of merit y, and then run your model. You should get only noise; if you get meaningful results (i.e., high R² and q² values) out of random data, then there is something seriously wrong with your model. [Pg.266]
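A minimal sketch of this sanity check, with placeholder model and data shapes:

```python
# Random-response sanity check: replace the figure of merit y with pure noise
# and confirm the pipeline reports only noise-level statistics.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.standard_normal((50, 8))             # stand-in descriptors
y_random = rng.standard_normal(50)           # random figure of merit

q2 = cross_val_score(LinearRegression(), X, y_random, cv=5, scoring="r2").mean()
# q2 hovers around zero (or below) for random data; a high value means
# something is seriously wrong with the modeling pipeline.
```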

A Monte Carlo study demonstrated the problem of estimating the number of clusters [DUBES, 1987]. One principal reason for this problem is that clustering algorithms tend to generate clusters even when applied to random data [DUBES and JAIN, 1979]. JAIN and MOREAU [1987] therefore used the bootstrap technique [EFRON and GONG, 1983] for cluster validation. [Pg.157]
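A hedged sketch of why this validation is needed: k-means will happily partition structureless uniform random data, and its inertia still falls as the cluster count grows. All parameters are illustrative.

```python
# k-means applied to random data still "finds" clusters.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
X = rng.uniform(size=(200, 2))               # no true cluster structure

inertias = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
            for k in range(1, 8)]
# The monotone drop in inertia mimics real structure, which is why a
# bootstrap or random-reference baseline is needed before trusting k.
```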

Spectrum analysis is a random-data analysis method that probably has its origins in the periodogram, which the English physicist Arthur Schuster applied some 150 years ago to the periodic variation in sunspot numbers. [Pg.101]
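A minimal periodogram sketch in Schuster's spirit: recover a hidden period from a noisy record. The 11-sample cycle and noise level are illustrative stand-ins for a sunspot-like series.

```python
import numpy as np
from scipy.signal import periodogram

rng = np.random.default_rng(5)
t = np.arange(300)
series = np.sin(2 * np.pi * t / 11.0) + rng.standard_normal(300)

freqs, power = periodogram(series, fs=1.0)
period = 1.0 / freqs[np.argmax(power[1:]) + 1]   # skip the zero frequency
# period comes out near 11 despite the noise.
```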

Correlation dimension. The correlation dimension is calculated by estimating the Hausdorff dimension according to the method of Grassberger [36,39]. The dimension of the system corresponds to the smallest number of independent variables necessary to specify a point in the state space [40]. With random data, the dimension increases as the embedding space grows. In deterministic data sets, the dimension levels off, although the presence of noise may produce a slow rise. [Pg.53]
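A hedged sketch of a correlation-sum estimate in the spirit of Grassberger-Procaccia: count point pairs closer than r and read a dimension from the slope of log C(r) versus log r. The embedding dimensions and radii are illustrative; real analyses must check for a proper scaling region.

```python
import numpy as np
from scipy.spatial.distance import pdist

def correlation_dimension(points, radii):
    d = pdist(points)                                    # pairwise distances
    corr_sum = np.array([np.mean(d < r) for r in radii])  # correlation sum C(r)
    slope, _ = np.polyfit(np.log(radii), np.log(corr_sum), 1)
    return slope

rng = np.random.default_rng(6)
radii = np.logspace(-1.0, -0.4, 8)
# For random data the estimate keeps growing with the embedding dimension m,
# instead of leveling off as it would for a deterministic attractor.
for m in (2, 3, 4):
    pts = rng.uniform(size=(1000, m))
    print(m, round(correlation_dimension(pts, radii), 2))
```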

Fig. 4. Relationship between length of the correct tree and skewness of the tree-length distribution in simulated phylogenies. The optimal (most parsimonious) tree is likely to be the correct tree only in analyses of data sets that produce tree-length distributions significantly more skewed than expected from random data. The shaded regions correspond to the 95% (dark) and 99% (light) confidence limits for g1 (the skewness statistic) for random sequence data. (Adapted from Ref. 12.)
The decision problem is represented by the decision tree in Figure 5, in which open circles represent chance nodes, squares represent decision nodes, and the black circle is a value node. The first decision node is the selection of the sample size n used in the experiment, and c represents the cost per observation. The experiment will generate random data values y that have to be analyzed by an inference method a. The difference between the true state of nature, represented by the fold changes θ = (θ₁,..., θg), and the inference will determine a loss L(·) that is a function of the two decisions n and a, the data, and the experimental costs. There are two choices in this decision problem: the optimal sample size and the optimal inference. [Pg.126]
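A hedged toy version of the sample-size decision follows: total expected loss is the sampling cost c·n plus the expected inference loss (here, the squared error of a sample mean estimating θ). The prior, the loss, and the cost c are illustrative assumptions, not the chapter's actual model.

```python
import numpy as np

rng = np.random.default_rng(7)
c = 0.002                                     # cost per observation

def expected_loss(n, reps=4000):
    theta = rng.standard_normal(reps)         # draws from a prior on theta
    y_bar = theta + rng.standard_normal(reps) / np.sqrt(n)   # data summary
    return c * n + np.mean((y_bar - theta) ** 2)  # cost + squared-error loss

best_n = min(range(1, 200), key=expected_loss)
# In this toy case the optimum is near n = 1/sqrt(c), about 22 observations.
```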

The value of g(0) at time 0, corresponding to j = 0, could be calculated, but it must be excluded in fitting any data since it produces a discontinuous spike even for totally random data. The maximum value of j is N − 1, corresponding to the total time of the measurement... [Pg.386]

