Returns normal distribution

Table 2.26a lists the height of an ordinate (Y) at a distance z from the mean, and Table 2.26b the area under the normal curve at a distance z from the mean, expressed as fractions of the total area, 1.000. Returning to Fig. 2.10, we note that 68.27% of the area of the normal distribution curve lies within 1 standard deviation of the center or mean value. Therefore, 31.73% lies outside those limits, 15.87% on each side. Ninety-five percent (actually 95.45%) of the area lies within 2 standard deviations, and 99.73% lies within 3 standard deviations of the mean. The last two areas are often stated slightly differently, viz. 95% of the area lies within 1.96σ (approximately 2σ) and 99% within approximately 2.58σ. Because the normal distribution is symmetric, the mean falls at exactly the 50% point. [Pg.194]
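
These tail areas follow directly from the error function; as a quick check, a minimal Python sketch using only the standard library reproduces the quoted percentages:

```python
from math import erf, sqrt

# Fraction of a normal distribution lying within z standard deviations
# of the mean: P(|X - mu| < z*sigma) = erf(z / sqrt(2)).
for z in (1.0, 1.96, 2.0, 2.58, 3.0):
    print(f"within {z:4} sigma: {100 * erf(z / sqrt(2)):.2f}% of total area")
# within 1.0 sigma: 68.27%; 1.96: 95.00%; 2.0: 95.45%;
# 2.58: 99.01%; 3.0: 99.73%
```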

We mentioned earlier, in Section 13.1, that if we did not have censoring then an analysis would probably proceed by taking the log of survival time and undertaking the unpaired t-test. The above model simply develops that idea by incorporating covariates etc. through a standard analysis of covariance. If we assume that ln T is also normally distributed, then the coefficient c represents the (adjusted) difference in the mean (or median) survival times on the log scale. Note that for the normal distribution the mean and the median are the same; it is more convenient to think in terms of medians. To return to the original scale for survival time we then anti-log c, giving e^c, and this quantity is the ratio (active divided by control) of the median survival times. Confidence intervals can be obtained in a straightforward way for this ratio. [Pg.207]
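
To illustrate the back-transformation, here is a minimal Python sketch; the coefficient and standard error are invented for illustration and do not come from the book:

```python
import math

# Hypothetical log-scale results from an analysis of covariance on ln(T);
# the treatment coefficient and its standard error below are assumed.
c, se_c = 0.25, 0.10

ratio = math.exp(c)              # ratio of median survival times (active/control)
lo = math.exp(c - 1.96 * se_c)   # 95% CI: anti-log the log-scale limits
hi = math.exp(c + 1.96 * se_c)
print(f"median ratio = {ratio:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```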

At this point let us return to the aluminum content data of Table 5.1. The skewed shape that is evident in all of Figs. 5.2-5.5 makes a Gaussian distribution inappropriate as a theoretical model for (raw) aluminum content of such PET samples. But as is often the case with right-skewed data, taking the logarithms of the original measurements creates a scale on which a normal distribution is more plausible as a representation of the phenomenon under... [Pg.184]
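
The effect is easy to reproduce; the sketch below simulates right-skewed data (the actual Table 5.1 values are not reproduced in this excerpt) and shows how the log scale restores the mean-median agreement expected of a normal distribution:

```python
import math
import random
import statistics

random.seed(1)
# Stand-in for right-skewed measurements such as the aluminum contents:
raw = [random.lognormvariate(2.0, 0.6) for _ in range(500)]
logs = [math.log(x) for x in raw]

for name, data in (("raw", raw), ("log", logs)):
    print(f"{name}: mean = {statistics.fmean(data):.2f}, "
          f"median = {statistics.median(data):.2f}")
# On the raw scale the mean sits well above the median (right skew);
# on the log scale the two nearly coincide, as for a normal distribution.
```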

A second facility that is sometimes useful is the random number generator function. There are several possible distributions, but the most usual is the normal distribution. It is necessary to specify a mean and standard deviation. If one wants to be able to return to the same distribution later, also specify a seed, which must be an integer. Figure A.15 illustrates the generation of 10 random numbers from a distribution of mean 0 and standard deviation 2.5, placed in cells A1-A10 (note that the standard deviation specified is that of the parent population and will not be exactly matched by a sample). This facility is very helpful in simulations and can be employed to study the effect of noise on a dataset. [Pg.437]
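
The same facility exists outside spreadsheets; a minimal Python equivalent of the generation step the excerpt describes (random.gauss standing in for the spreadsheet tool):

```python
import random
import statistics

random.seed(42)   # fixing the seed lets you "return to the distribution later"
sample = [random.gauss(0.0, 2.5) for _ in range(10)]   # analogue of cells A1-A10

print(sample)
print(statistics.stdev(sample))   # not exactly 2.5: that figure describes the
                                  # parent population, not any one sample
```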

NORMSINV(p) returns the x-value at which the cumulative area under the normal distribution pdf (with μ = 0 and σ = 1) equals p; the area is calculated from the left-hand tail of the distribution. [Pg.54]
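
Python's standard library offers the same inverse-CDF lookup; a quick sketch of the analogue of NORMSINV:

```python
from statistics import NormalDist

# Analogue of NORMSINV(p): the x at which the left-tail area under the
# standard normal (mu = 0, sigma = 1) equals p.
for p in (0.025, 0.50, 0.975):
    print(f"p = {p}: x = {NormalDist().inv_cdf(p):+.4f}")
# p = 0.025 -> -1.9600, p = 0.50 -> +0.0000, p = 0.975 -> +1.9600
```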

Up to this point normality has not been assumed. Equations (3.55) and (3.56) do not depend on normality for their validity. But if θ is normally distributed, then g(θ) will also be normally distributed. So, returning to the example, since Vc and Vp were normally distributed, Vss will be normally distributed. [Pg.106]
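
For a linear function such as a sum, this closure under normality is easy to verify numerically; the parameter values below are assumptions for illustration, not the book's:

```python
import random
import statistics

random.seed(0)
# If Vc and Vp are independent normals, Vss = Vc + Vp is normal with
# mean mu_c + mu_p and variance sd_c**2 + sd_p**2 (values assumed here).
mu_c, sd_c = 10.0, 2.0
mu_p, sd_p = 25.0, 4.0

vss = [random.gauss(mu_c, sd_c) + random.gauss(mu_p, sd_p)
       for _ in range(100_000)]
print(statistics.fmean(vss))   # ~ 35.0
print(statistics.stdev(vss))   # ~ (2**2 + 4**2) ** 0.5 = 4.47
```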

We now return to the apparently anomalous value. Many tests to detect outlying data have been proposed. One of the most frequently used in chemistry is Dixon's Q test, which assumes that the values being tested are normally distributed. Actually there are several tests identified as Dixon's, all based on comparing differences between the suspect value and the other values of the sample. You can obtain more information about these tests in Skoog et al. (1996) and in Rorabacher (1991). Here we will limit our discussion to the following question: can we consider the 56.3 min time of experiment 9 as an element of the same distribution that produced the other times recorded for path C? [Pg.74]
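
In its simplest form the Q statistic is the gap between the suspect value and its nearest neighbour, divided by the range; here is a sketch (the times below are hypothetical stand-ins, since the excerpt does not reproduce the path C data):

```python
def dixon_q(values):
    """Simplest Dixon ratio: gap from the most extreme value to its
    nearest neighbour, divided by the overall range. The result is
    compared against tabulated critical values for the sample size."""
    xs = sorted(values)
    gap = max(xs[1] - xs[0], xs[-1] - xs[-2])
    return gap / (xs[-1] - xs[0])

# Hypothetical times (min); 56.3 plays the role of the suspect value.
times = [41.2, 42.8, 40.9, 43.1, 42.0, 41.7, 43.4, 42.5, 56.3]
print(f"Q = {dixon_q(times):.3f}")
```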

Now that we have added the assumption that the errors follow a normal distribution to our hypotheses, we can return to the ANOVA and use the mean square values to test whether the regression equation is statistically significant. When β1 = 0, that is, when there is no relation between x and y, it can be demonstrated that the ratio of the MSR (regression) and MSr (residual) mean squares... [Pg.218]
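
The excerpt breaks off here; the standard continuation is that, under normal errors and β1 = 0, the ratio F = MSR/MSr follows an F distribution, so significance reduces to comparing the ratio with a tabulated critical value. A sketch with assumed mean squares (not the book's table values):

```python
# Assumed ANOVA results for a straight-line fit to n = 10 points:
MSR, MSr = 152.3, 4.7        # regression and residual mean squares (invented)
dfn, dfd = 1, 8              # degrees of freedom: (p - 1, n - p) with p = 2

F = MSR / MSr
F_crit = 5.32                # F(0.95; 1, 8) from a standard F table
print(f"F = {F:.1f}; significant at the 5% level: {F > F_crit}")
```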

Two datasets are first simulated. The first contains only normal samples, whereas there are 3 outliers in the second dataset; they are shown in Plots A and B of Figure 2, respectively. For each dataset, a percentage (70%) of the samples is randomly selected to build a linear regression model, of which the slope and intercept are recorded. Repeating this procedure 1000 times, we obtain 1000 values for both the slope and the intercept. For both datasets, the intercept is plotted against the slope, as displayed in Plots C and D, respectively. It can be observed that the joint distribution of the intercept and slope for the normal dataset appears to be multivariate normal. In contrast, this distribution for the dataset with outliers looks quite different, far from a normal distribution. Specifically, the distributions of the slopes for the two datasets are shown in Plots E and F. These results show that the existence of outliers can greatly influence a regression model, which is reflected in the odd distributions of both slopes and intercepts. In turn, a distribution of a model parameter that is far from normal would most likely indicate some abnormality in the data. [Pg.5]
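
The procedure is straightforward to reproduce; a minimal sketch for the outlier-free case (simulated data standing in for Plot A, since the paper's datasets are not given):

```python
import random
import statistics

random.seed(7)

def ols_line(pts):
    """Ordinary least-squares slope and intercept."""
    n = len(pts)
    sx = sum(x for x, _ in pts)
    sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts)
    sxy = sum(x * y for x, y in pts)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return slope, (sy - slope * sx) / n

# Simulated outlier-free data: y = 2x + 1 + noise (stand-in for Plot A).
data = [(x, 2 * x + 1 + random.gauss(0, 1)) for x in range(50)]

fits = []
for _ in range(1000):                       # repeat the 70% subsampling
    sub = random.sample(data, int(0.7 * len(data)))
    fits.append(ols_line(sub))

slopes = [s for s, _ in fits]
print(statistics.fmean(slopes), statistics.stdev(slopes))
# A scatter of intercept against slope for these 1000 fits gives the
# roughly bivariate-normal cloud of Plot C.
```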

Other variants of MV optimization have been proposed to address some of its shortcomings. For example, MV analysis presumes normal distributions for asset returns. In actuality, financial asset return distributions sometimes possess fat tails. Alternative distributions can be used. In addition, the MV definition of risk as the standard deviation of returns is arbitrary. Why not define risk as first- or third-order deviation instead of second? Why not use absolute or downside deviation? ... [Pg.767]

The successive values assumed by W are serially independent, so from Equation (2.8) we conclude that changes in the variable W from time 0 to time T follow a normal distribution with mean 0 and a standard deviation of √T. This describes the Wiener process, with a mean of zero (a zero drift rate) and a variance of T. This is an important result, because a zero drift rate implies that the expected change in the variable (for which now read asset price) over any future period is zero. This means that there is an equal chance of an asset return ending up 10% or down 10% over a long period of time. [Pg.18]
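
A direct simulation confirms both moments; a minimal sketch:

```python
import random
import statistics

random.seed(3)
T, n = 1.0, 1_000
dt = T / n

def wiener_endpoint():
    """W(T) built from serially independent increments ~ N(0, dt)."""
    w = 0.0
    for _ in range(n):
        w += random.gauss(0.0, dt ** 0.5)
    return w

ends = [wiener_endpoint() for _ in range(5_000)]
print(statistics.fmean(ends))   # ~ 0 (zero drift)
print(statistics.stdev(ends))   # ~ sqrt(T) = 1.0
```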

Tracking error is the standard deviation of the difference in returns between a portfolio and a selected benchmark, usually a suitable bond index. Assuming a normal distribution of returns, a portfolio manager can expect the portfolio to deviate from the benchmark by no more than the tracking error 68% of the time during a selected period. [Pg.776]
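
The definition reduces to one line of arithmetic; the returns below are invented for illustration:

```python
import statistics

# Hypothetical monthly returns (%) for a portfolio and its benchmark index.
portfolio = [1.2, -0.4, 0.8, 2.1, -1.0, 0.6]
benchmark = [1.0, -0.2, 0.9, 1.8, -0.7, 0.5]

diffs = [p - b for p, b in zip(portfolio, benchmark)]
print(f"tracking error = {statistics.stdev(diffs):.2f}% per period")
# Under normal returns, the portfolio stays within one tracking error
# of the benchmark about 68% of the time.
```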

A normal distribution of returns. This point is more amenable to mathematical testing. Most common tracking error models assume a... [Pg.776]

The variance-covariance model approach extracts volatility information from historical returns and builds a model intended to predict divergence in performance. For this type of model, the underlying data are typically a time series of yields, spreads or returns. The model relies heavily on historical data and assumes both stable correlations and a normal distribution of returns. [Pg.781]

Returns do not need to have any particular distribution (e.g., normal); this is a nonparametric approach, so it is possible to use skewed distributions. However, to get a tracking error number that is symmetric, we need to assume a symmetric distribution. [Pg.792]

Monte Carlo simulations are an alternative to parametric and historical approaches to risk measurement. They approximate the behavior of financial prices by using computer-generated simulations of price paths. The underlying idea is that bond prices are determined by factors that each have a specific distribution. As soon as these distributions (e.g., normal distributions) have been selected, a sequence of values for these factors can be generated. By using these values to calculate bond prices (and thus portfolio returns), the method creates a set of simulation outcomes that can be used for estimating value at risk. [Pg.794]
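
A minimal sketch of the idea, with a single normally distributed risk factor (a parallel yield shift) driving bond prices through duration; every number here is an assumption for illustration:

```python
import random

random.seed(11)
value, duration = 1_000_000.0, 5.0   # portfolio value and modified duration
dy_mean, dy_sd = 0.0, 0.002          # assumed daily yield-change distribution

pnl = []
for _ in range(10_000):
    dy = random.gauss(dy_mean, dy_sd)     # one simulated factor outcome
    pnl.append(-duration * dy * value)    # implied price change

pnl.sort()
var_95 = -pnl[int(0.05 * len(pnl))]       # 5th-percentile loss = 95% VaR
print(f"95% one-day VaR ~ {var_95:,.0f}")
```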

QS is the shortfall line and depicts the expected return of portfolios along the line QZ that is equaled or exceeded in (1 - p)% of all cases. Portfolio returns fall short of QS in p% of all cases. Portfolio P (corresponding to point P′ on the shortfall line QS) reaches a return of r_min in (1 - p)% of all cases and thereby complies with the investor's risk aversion. The optimal portfolio P maximizes return while taking the investor's risk aversion into account. The following equation formalizes the investor's risk aversion; it is assumed that returns follow a normal distribution. [Pg.839]
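
The excerpt ends before the equation itself; under the stated normality assumption, a standard form of such a shortfall constraint (a plausible reconstruction, not a quotation from the book) is:

```latex
\[
  \Pr\bigl(r_P < r_{\min}\bigr) \le p
  \quad\Longleftrightarrow\quad
  \mathrm{E}[r_P] \;-\; z_{1-p}\,\sigma_P \;\ge\; r_{\min},
\]
% z_{1-p}: the (1-p)-quantile of the standard normal distribution
```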

The price behavior of financial instruments. One of the key assumptions of option pricing models such as Black-Scholes (B-S), which is discussed below, is that asset prices follow a lognormal distribution; that is, the logarithms of the prices show a normal distribution. This characterization is not strictly accurate: prices are not lognormally distributed. Asset returns, however, are. Returns are defined by formula (8.8). [Pg.143]
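
Formula (8.8) itself is not reproduced in the excerpt; the usual definition of a continuously compounded return, in a short sketch:

```python
import math

prices = [100.0, 101.2, 100.7, 102.3, 101.9]   # hypothetical closing prices

# Log return: r_t = ln(P_t / P_{t-1}). If prices are lognormal,
# these returns are normally distributed.
returns = [math.log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]
print(returns)
```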

The following section presents an intuitive explanation of the B-S model, in terms of the normal distribution of asset price returns. [Pg.145]

The B-S model is widely used in finance, where the log-return of the asset price is considered to be normally distributed (Platen & Heath 2006). However, the distribution of equity returns exhibits several realistic properties not found in the ideal B-S model: (1) the presence of jumps, represented by large random fluctuations such as crashes or sudden upsurges; and (2) skewness of the log-return distribution. [Pg.946]

Return to the Sam's Club store in Exercise 6. Assume that the supply lead time from HP is normally distributed, with a mean of 2 weeks and a standard deviation of 1.5 weeks. How much safety inventory should Sam's Club carry if it wants to provide a CSL of 95 percent? How does the required safety inventory change as the standard deviation of lead time is reduced from 1.5 weeks to zero in intervals of 0.5 weeks? [Pg.350]
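
The demand figures come from Exercise 6, which is not reproduced here, so the sketch below assumes illustrative weekly demand numbers; the structure of the calculation is the standard one for normally distributed demand and lead time:

```python
import math
from statistics import NormalDist

# Weekly demand parameters are assumed for illustration (the Exercise 6
# data is not shown in this excerpt).
D, sigma_D = 2_500.0, 500.0        # mean and SD of weekly demand (assumed)
L = 2.0                            # mean lead time (weeks)
z = NormalDist().inv_cdf(0.95)     # ~1.645 for a 95 percent CSL

for s_L in (1.5, 1.0, 0.5, 0.0):   # SD of lead time, reduced in 0.5-week steps
    sigma_LT = math.sqrt(L * sigma_D**2 + (D * s_L)**2)
    print(f"s_L = {s_L}: safety inventory = {z * sigma_LT:,.0f} units")
```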

One way to implement tailored postponement is to produce high-demand, predictable products without postponement and produce only the unpredictable products using postponement. Let us return to the Benetton data, with red sweaters constituting about 80 percent of demand. Recall that demand for red sweaters at Benetton is forecast to be normally distributed, with a mean of μ_red = 3,100 and a standard deviation of σ_red = 800. Demand for each of the other three colors is forecast to be normally distributed, with a mean of μ = 300 and a standard deviation of σ = 200. We evaluated that postponing all colors decreases profits for Benetton by more than 2,000 (from 102,205 to 99,876). However, if we tailor postponement so that red sweaters are made using the traditional method and only the other colors are postponed, profits actually increase by 1,009, to 103,213. [Pg.384]

