Big Chemical Encyclopedia


Underlying distribution

DEFLECTION OF SIMPLY SUPPORTED LAMINATED PLATES UNDER DISTRIBUTED TRANSVERSE LOAD... [Pg.289]

Figure 5-8 Simply Supported Laminated Rectangular Plate under Distributed Transverse Load, p(x,y)...
When the underlying distribution is not known, tools such as histograms, probability curves, piecewise polynomial approximations, and general techniques are available to fit distributions to data. It may be necessary to assume an appropriate distribution in order to obtain the relevant parameters. Any assumptions made should be supported by manufacturer's data or data from the literature on similar items working in similar environments. Experience indicates that some probability distributions are more appropriate in certain situations than others. What follows is a brief overview of their applications in different environments. A more rigorous discussion of the statistics involved is provided in the CPQRA Guidelines. ... [Pg.230]
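Where no distribution can be assumed in advance, a candidate form can be fitted to the data and checked for consistency. Below is a minimal sketch assuming NumPy and SciPy are available; the Weibull candidate, the simulated lifetime data, and all numerical values are purely illustrative and not taken from the cited source.

```python
# Minimal sketch: fit a candidate distribution to observed data and check it
# with a goodness-of-fit test (all data and parameter choices are illustrative).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.weibull(1.5, size=500) * 1000.0   # e.g., simulated times-to-failure (hours)

# Fit a candidate distribution; a Weibull is assumed here for lifetime data.
shape, loc, scale = stats.weibull_min.fit(data, floc=0.0)

# A Kolmogorov-Smirnov test indicates whether the assumed distribution
# is consistent with the sample.
ks_stat, p_value = stats.kstest(data, "weibull_min", args=(shape, loc, scale))
print(f"shape={shape:.2f}, scale={scale:.1f}, KS p-value={p_value:.3f}")
```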

Fig. 3-5 compares the bound that the Chebyshev inequality places on the probability P[|X(i) - m| > ε] with the actual value of this quantity when the underlying distribution is Gaussian. The agreement between... [Pg.125]
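For orientation, this comparison can be reproduced numerically: the Chebyshev bound P(|X - m| ≥ ε) ≤ σ²/ε² is loose relative to the exact Gaussian tail probability. A minimal sketch, assuming SciPy is available and using illustrative values of ε = kσ:

```python
# Sketch: compare the Chebyshev bound with the exact two-sided Gaussian tail.
from scipy import stats

sigma = 1.0
for k in [1.0, 2.0, 3.0]:                  # eps expressed as k standard deviations
    eps = k * sigma
    chebyshev = min(1.0, sigma**2 / eps**2)   # Chebyshev upper bound
    gaussian = 2.0 * stats.norm.sf(k)         # exact two-sided Gaussian tail
    print(f"k={k:.0f}: Chebyshev bound={chebyshev:.3f}, Gaussian value={gaussian:.4f}")
```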

From a purely practical point of view, the range or a quantile can serve as an indicator. Quantiles are usually selected to encompass the central 60-90% of an ordered set; the influence of extreme values diminishes the smaller this percentage is. No assumptions as to the underlying distribution are made. [Pg.69]
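As a concrete illustration, a central quantile range can be computed directly from the ordered data without any distributional assumption. A minimal sketch with NumPy; the lognormal test data and the 10th/90th percentile choice (central 80%) are illustrative:

```python
# Sketch: range vs. a central quantile range as distribution-free spread indicators.
import numpy as np

rng = np.random.default_rng(1)
values = np.sort(rng.lognormal(mean=0.0, sigma=0.8, size=200))  # skewed data

full_range = values.max() - values.min()
q10, q90 = np.percentile(values, [10, 90])   # central 80% of the ordered set
print(f"full range = {full_range:.2f}, central 80% range = {q90 - q10:.2f}")
```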

A general principle governs the relation between physical parameters and underlying distribution functions. Its paramount importance in the field of soft condensed matter originates from the importance of polydispersity in this field. Let us recall the principle by resorting to a very basic example: molecular mass distributions of polymers and the related characteristic parameters. [Pg.21]
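For that basic example, the characteristic parameters are moment ratios of the molar mass distribution: the number average Mn = Σ n_i M_i / Σ n_i and the weight average Mw = Σ n_i M_i² / Σ n_i M_i, whose ratio Mw/Mn measures the polydispersity. A minimal numerical sketch with purely illustrative values:

```python
# Sketch: number- and weight-average molar masses as moments of a discrete
# molar mass distribution (species masses and number fractions are illustrative).
import numpy as np

M = np.array([1.0e4, 2.0e4, 5.0e4, 1.0e5])   # molar masses of the species
n = np.array([0.4, 0.3, 0.2, 0.1])           # number fractions

Mn = np.sum(n * M) / np.sum(n)               # number average
Mw = np.sum(n * M**2) / np.sum(n * M)        # weight average
print(f"Mn={Mn:.3e}, Mw={Mw:.3e}, dispersity Mw/Mn={Mw/Mn:.2f}")
```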

Enhanced sampling in conformational space is not only relevant to sampling classical degrees of freedom. An additional reason to illustrate this particular method is that the delocalization feature of the underlying distribution in Tsallis statistics is useful for accelerating convergence of calculations in quantum thermodynamics [34]. We focus on a related method that enhances sampling for quantum free energies in Sect. 8.4.2. [Pg.285]

The second component is the assumption of two underlying distributions. This implies that if MAXCOV results are inconsistent with the taxonic conjecture, we can only conclude that there are not two underlying distributions (i.e., there can be one or three or four latent groups). In the presence of serious evidence against the taxonic conjecture we normally infer absence of a taxon, but there is also an alternative explanation that is frequently overlooked. It is possible that more than two latent groups reside in the distribu-... [Pg.64]

One critical limitation of MAXSLOPE is that the method locates the hitmax correctly only for certain types of latent distributions (e.g., normal distribution). For other kinds of distributions (e.g., chi-square distribution), the estimated location of the hitmax and the base rate may be substantially off the mark. In other words, when the underlying distributions are of the difficult kind, MAXSLOPE will detect taxonicity, but the estimated taxon base rate may be incorrect. Moreover, MAXSLOPE may fail to detect taxonicity under certain circumstances. Specifically, this will happen if ... [Pg.83]

Nose count and base rate variability consistency tests are possible with MAXSLOPE, although it is not yet clear how these tests behave when the underlying distributions are of the difficult kind. Luckily, MAXSLOPE puts less emphasis on internal consistency testing and stresses external consistency testing instead. MAXSLOPE is different from other taxometric algorithms and thus can provide a strong test of external consistency for other procedures. [Pg.83]

If the errors are normally distributed, the OLS estimates are the maximum likelihood estimates of θ, and the estimates are unbiased and efficient (minimum-variance estimates) in the statistical sense. However, if there are outliers in the data, the underlying distribution is not normal and the OLS will be biased. To solve this problem, a more robust estimation method is needed. [Pg.225]
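One common robust alternative (not necessarily the method used in the cited source) is M-estimation with a Huber loss, which downweights large residuals instead of squaring them. A minimal sketch assuming SciPy; the straight-line model, the injected outlier, and the f_scale value are illustrative choices:

```python
# Sketch: a single gross outlier biases the OLS fit, while a Huber loss
# limits its influence (data and model are illustrative).
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)
x = np.linspace(0, 10, 30)
y = 2.0 * x + 1.0 + rng.normal(0, 0.3, x.size)
y[5] += 25.0                                   # inject one gross outlier

def residuals(theta, x, y):
    return theta[0] * x + theta[1] - y

ols = least_squares(residuals, x0=[1.0, 0.0], args=(x, y))          # squared loss
robust = least_squares(residuals, x0=[1.0, 0.0], args=(x, y),
                       loss="huber", f_scale=1.0)                    # Huber loss
print("OLS slope/intercept:   ", np.round(ols.x, 2))
print("Robust slope/intercept:", np.round(robust.x, 2))
```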

As discussed before, the outliers generated by the heavy tails of the underlying distribution have a considerable influence on the OLS problem arising in a conventional data reconciliation procedure. To solve this problem, a limiting transformation, which operates on the data set, is defined to eliminate or reduce the influence of outliers on the performance of a conventional rectification scheme. [Pg.231]

The ability of the frequency curve to accurately represent the underlying distribution increases with the number of observations. With a small number of results only an approximation is possible, and the divergence may be relatively large. [Pg.274]

The liquid phase absorbed by the synthetic polymer granules (e.g., Sephadex) is largely available, over a wide range, as a solvent for solute molecules in contact with the gel. It has been observed that the actual distribution of the solute between the inside and outside of the respective gel granules is essentially a criterion of the available space. However, the underlying distribution coefficient occurring between the granular and interstitial aqueous phases is found to be independent of three major factors, namely ... [Pg.478]

In Sections 1.6.3 and 1.6.4, different possibilities were mentioned for estimating the central value and the spread, respectively, of the underlying data distribution. Also in the context of covariance and correlation, we assume an underlying distribution, but now this distribution is no longer univariate but multivariate, for instance a multivariate normal distribution. The covariance matrix Σ mentioned above expresses the covariance structure of the underlying (unknown) distribution. Now, we can measure n observations (objects) on all m variables, and we assume that these are random samples from the underlying population. The observations are represented as rows in the data matrix X(n x m) with n objects and m variables. The task is then to estimate the covariance matrix Σ from the observed data X. Naturally, there exist several possibilities for estimating Σ (Table 2.2). The choice should depend on the distribution and quality of the data at hand. If the data follow a multivariate normal distribution, the classical covariance measure (which is the basis for the Pearson correlation) is the best choice. If the data distribution is skewed, one could either transform them to more symmetry and apply the classical methods, or alternatively... [Pg.54]
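As a small illustration of that choice, the classical estimate can be computed directly from the data matrix X, while a rank-based (Spearman) correlation is one simple alternative that is less affected by skewness and outliers; robust covariance estimators such as the MCD are another option not shown here. A minimal sketch with NumPy/SciPy and illustrative data:

```python
# Sketch: classical covariance/correlation from observations X (n x m)
# vs. a rank-based (Spearman) correlation for skewed data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 200
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(scale=0.5, size=n)
X = np.column_stack([x1, np.exp(x2)])          # second variable is strongly skewed

cov_classical = np.cov(X, rowvar=False)        # classical covariance matrix
corr_pearson = np.corrcoef(X, rowvar=False)    # Pearson correlation
corr_spearman, _ = stats.spearmanr(X)          # rank-based correlation

print("Classical covariance matrix:\n", np.round(cov_classical, 2))
print("Pearson r:", round(corr_pearson[0, 1], 2),
      " Spearman r:", round(corr_spearman, 2))
```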

D. R. James, Y. Liu, N. O. Petersen, A. Siemiarczuk, B. D. Wagner, and W. R. Ware, Recovery of underlying distributions of lifetimes from fluorescence decay data, Fluorescence Detection, SPIE 743, 117-122 (1987). [Pg.107]

In this notation, N_ind is the number of independent samples contained in the trajectory, and t_sim is the length of the trajectory. The standard error can be used to approximate confidence intervals, with a rule of thumb being that ±2 SE represents roughly a 95% confidence interval [26]. The actual interval depends on the underlying distribution and the sampling quality as embodied in N_ind and t_sim; see ref. 25 for a more careful discussion. [Pg.33]
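A minimal numerical sketch of this rule of thumb, assuming a value for N_ind (in practice it must be estimated from the trajectory's correlation time) and using illustrative data:

```python
# Sketch: standard error from N_ind effectively independent samples and the
# "mean +/- 2*SE ~ 95%" rule of thumb (all numbers are illustrative).
import numpy as np

rng = np.random.default_rng(4)
samples = rng.normal(loc=5.0, scale=2.0, size=5000)   # stand-in for a trajectory
N_ind = 100                                           # assumed independent samples

mean = samples.mean()
SE = samples.std(ddof=1) / np.sqrt(N_ind)             # standard error of the mean
print(f"mean = {mean:.2f}, ~95% interval = [{mean - 2*SE:.2f}, {mean + 2*SE:.2f}]")
```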

Fig. 14. One-dimensional cross section of an elliptic weighting filter. The characteristic length is defined as the section length when the relative weight has dropped to 2/a. The filter shape corresponds to the deformation profile of an elastic material under distributed load in a circle of radius L/2.
A basic assumption underlying t-tests and ANOVA (which are parametric tests) is that cost data are normally distributed. Given that the distribution of these data often violates this assumption, a number of analysts have begun using nonparametric tests, such as the Wilcoxon rank-sum test (a test of median costs) and the Kolmogorov-Smirnov test (a test for differences in cost distributions), which make no assumptions about the underlying distribution of costs. The principal problem with these nonparametric approaches is that statistical conclusions about the mean need not translate into statistical conclusions about the median (e.g., the means could differ yet the medians could be identical), nor do conclusions about the median necessarily translate into conclusions about the mean. Similar difficulties arise when, to avoid the problems of nonnormal distribution, one analyzes cost data that have been transformed to be more normal in their distribution (e.g., the log transformation or the square root of costs). The sample mean remains the estimator of choice for the analysis of cost data in economic evaluation. If one is concerned about nonnormal distribution, one should use statistical procedures that do not depend on the assumption of normal distribution of costs (e.g., nonparametric tests of means). [Pg.49]
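The tests named above are available in SciPy, and a permutation test of the difference in means is one distribution-free way to address the mean directly (one of several possible "nonparametric tests of means"; the bootstrap is another). A minimal sketch with simulated, skewed cost data; all values are illustrative:

```python
# Sketch: rank-sum and KS tests, plus a permutation test of the difference
# in MEANS that makes no normality assumption (simulated cost data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
costs_a = rng.lognormal(mean=7.0, sigma=1.0, size=120)   # skewed costs, arm A
costs_b = rng.lognormal(mean=7.2, sigma=1.0, size=120)   # skewed costs, arm B

print(stats.mannwhitneyu(costs_a, costs_b))   # Wilcoxon rank-sum test
print(stats.ks_2samp(costs_a, costs_b))       # test of the whole distribution

# Permutation test of the difference in means.
observed = costs_b.mean() - costs_a.mean()
pooled = np.concatenate([costs_a, costs_b])
diffs = []
for _ in range(5000):
    perm = rng.permutation(pooled)
    diffs.append(perm[:costs_a.size].mean() - perm[costs_a.size:].mean())
p_value = np.mean(np.abs(diffs) >= abs(observed))
print(f"difference in means = {observed:.1f}, permutation p = {p_value:.3f}")
```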

It may seem strange to see the normal distribution play a part in the p-value calculations in Sections 11.5.1 and 11.5.2. The appearance of this distribution is in no sense related to the underlying distribution of the data. For the Mann-Whitney U-test, for example, it relates to the behaviour of the average of the ranks within each of the individual groups under the assumption of equal treatments, where the ranks in those groups of sizes n1 and n2 are simply a random split of the numbers 1 through to n1 + n2. [Pg.169]

Clearly the main advantage of a non-parametric method is that it makes essentially no assumptions about the underlying distribution of the data. In contrast, the corresponding parametric method makes specific assumptions, for example, that the data are normally distributed. Does this matter? Well, as mentioned earlier, the t-tests, even though in a strict sense they assume normality, are quite robust against departures from normality. In other words, you have to be some way off normality for the p-values and associated confidence intervals to become invalid, especially with the kinds of moderate to large sample sizes that we see in our trials. Most of the time in clinical studies, we are within those boundaries, particularly when we are also able to transform data to conform more closely to normality. [Pg.170]

An extreme, large positive value may sometimes be a manifestation of an underlying distribution of data that is heavily skewed. Transforming the data to be more symmetric may then be something to consider. [Pg.171]

Distribution-free method: a method for testing a hypothesis or setting up a confidence interval that does not depend on the form of the underlying distribution. [Pg.109]

While polydisperse model systems can nicely be resolved, the reconstruction of a broad and skewed molar mass distribution is only possible within certain limits. At this point, experimental techniques in which only a nonexponential time signal or some other integral quantity is measured and the underlying distribution is obtained from e.g. an inverse Laplace transform are inferior to fractionating techniques, like size exclusion chromatography or the field-flow fractionation techniques. The latter suffer, however, from other problems, like calibration or column-solute interaction. [Pg.56]

According to the important theorem known as the central limit theorem, if N samples of size n are obtained from a population with mean μ and standard deviation σ, the probability distribution for the means will approach the normal probability distribution as N becomes large, even if the underlying distribution is nonnormal. For example, as more samples are selected from a bin of pharmaceutical granules, the distribution of N means, x̄, will tend toward a normal distribution with mean μ and standard deviation σ_x̄ = σ/√n, regardless of the underlying distribution. [Pg.45]
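A quick numerical check of this behaviour, assuming an exponential population (for which μ = σ), illustrates how the N sample means concentrate around μ with spread σ/√n; all values are illustrative:

```python
# Sketch: sample means from a markedly nonnormal (exponential) population
# behave as the central limit theorem predicts.
import numpy as np

rng = np.random.default_rng(6)
mu, sigma, n, N = 1.0, 1.0, 50, 10000          # exponential(1): mu = sigma = 1

sample_means = rng.exponential(scale=mu, size=(N, n)).mean(axis=1)
print(f"mean of means = {sample_means.mean():.3f}  (expected {mu})")
print(f"sd of means   = {sample_means.std(ddof=1):.3f}  (expected {sigma/np.sqrt(n):.3f})")
```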

The low-order cumulants may be utilized to give saddle-point approximations of the underlying distribution [385,386]. [Pg.266]


See other pages where Underlying distribution is mentioned: [Pg.88]    [Pg.342]    [Pg.290]    [Pg.216]    [Pg.302]    [Pg.36]    [Pg.37]    [Pg.43]    [Pg.47]    [Pg.55]    [Pg.70]    [Pg.172]    [Pg.227]    [Pg.451]    [Pg.136]    [Pg.196]    [Pg.37]    [Pg.40]    [Pg.72]    [Pg.233]    [Pg.99]   
See also in source #XX -- [ Pg.46 ]



