Error distribution solutions

Now, if (m2 > g), the solution of Eq. (10.24), under the assumption of an independent and normal error distribution with constant variance, can be obtained as the maximum likelihood estimator of d and is given by... [Pg.206]
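As a hedged sketch of the statement above (Eq. (10.24) itself is not reproduced in this excerpt, so generic symbols y_i and f_i(d) are assumed for the observations and the model): for independent normal errors with constant variance the likelihood factorizes, and maximizing it is equivalent to minimizing the residual sum of squares.

```latex
% Sketch: ML estimator of d for i.i.d. errors e_i ~ N(0, sigma^2)
L(d) = \prod_{i=1}^{m} \frac{1}{\sqrt{2\pi}\,\sigma}
       \exp\!\left[-\frac{\left(y_i - f_i(d)\right)^2}{2\sigma^2}\right]
\;\;\Longrightarrow\;\;
\hat{d} = \arg\min_{d} \sum_{i=1}^{m} \left(y_i - f_i(d)\right)^2
```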

Section 4.2.2 shows how to use the scaling method to obtain the error function solution for the one-dimensional diffusion of a step function in an infinite medium given by Eq. 4.31. The same solution can be obtained by superposing the one-dimensional diffusion from a distribution of instantaneous local sources arrayed to simulate the initial step function. The boundary and initial conditions are... [Pg.105]
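A minimal numerical sketch of this equivalence (dimensionless units; c0, D, and the grids below are illustrative and not taken from Eq. 4.31): the closed-form error-function profile is compared with the profile obtained by superposing instantaneous plane sources distributed over the initially occupied half-space.

```python
import numpy as np
from scipy.special import erfc

c0, D, t = 1.0, 1.0, 0.01            # dimensionless illustration
x = np.linspace(-1.0, 1.0, 201)

# Closed-form error-function solution for the initial step c(x<0) = c0, c(x>0) = 0
c_erf = 0.5 * c0 * erfc(x / (2.0 * np.sqrt(D * t)))

# Superposition of instantaneous plane (Gaussian) sources distributed over x' < 0
xp = np.linspace(-5.0, 0.0, 20001)   # extends far enough to mimic an infinite medium
dxp = xp[1] - xp[0]
kernel = np.exp(-(x[:, None] - xp[None, :]) ** 2 / (4.0 * D * t))
c_sum = c0 * dxp / np.sqrt(4.0 * np.pi * D * t) * kernel.sum(axis=1)

print(np.max(np.abs(c_erf - c_sum)))  # small; limited only by the source discretization
```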

The distribution of errors is a property of the exact trajectories. If we generate an approximate trajectory based on the finite difference formula, we should generate an error distribution that is consistent with what we know about the true solution. [Pg.101]

Apart from these data analytical issues, the problem definition is important. Defining the problem is the core issue in all data analysis. It is not uncommon that data are analyzed by people not directly related to the problem at hand. If a clear understanding and consensus of the problem to be solved is not present, then the analysis may not even provide a solution to the real problem. Another issue is what kind of data are available or should be available. Typical questions to be asked are: is there a choice in instrumental measurements to be made, and are some preferred over others? Are some variables irrelevant in the context, for example because they will not be available in the future? Can measurement precision be improved? A third issue concerns the characteristics of the data to be used: are they qualitative or quantitative, is the error distribution known within reasonable certainty, etc. The interpretation stage after data analysis usually refers back to the problem definition and should be done with the initial problem in mind. [Pg.2]

The known properties of the PDF can be used to improve the solution a. Indeed, the modeled measurement errors Δf = F − f(a) for the solution a should reproduce the known statistical properties of the measurement errors as closely as possible. The agreement of the modeled Δf with the known error distribution can be evaluated using the known PDF as a function of the modeled errors, P(Δf): the higher P(Δf), the closer the modeled Δf is to the known statistical properties. Thus, the best solution a should result in modeled errors corresponding to the most probable error realization, i.e. to the PDF maximum... [Pg.70]

In essence, this principle is the well-known Method of Maximum Likelihood (MML). The PDF, written as a function of the modeled values f(a) and the measurements F, is called the likelihood function. The MML is one of the strategic principles of statistical estimation and provides the statistically best solution in many senses [27]. For example, the asymptotic error distribution (for an infinite number of realizations of f) of the MML estimates a has the smallest possible variances of the individual ai. [Pg.70]
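A hedged sketch of the MML principle described in the two excerpts above (generic notation: C denotes an assumed error covariance matrix and f* the measurement vector written as F in the text): for normally distributed errors, maximizing the PDF of the modeled errors is equivalent to minimizing a quadratic form in them.

```latex
% Sketch: MML solution for Gaussian errors with covariance C
\hat{a} = \arg\max_{a}\, P\!\left(f^{*} - f(a)\right)
        = \arg\min_{a}\,
          \left(f^{*} - f(a)\right)^{\mathrm{T}} C^{-1}\left(f^{*} - f(a)\right)
```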

The least-squares method chooses values for the bj's of Eq. (3), which are unbiased estimates of the β's of Eq. (2). The least-squares estimates are uniformly minimum-variance unbiased estimates for normally distributed residual errors and are minimum variance among all linear estimates (linear combinations of the observed Y's), regardless of the residual error distribution shape (see Eisenhart 1964). The bj's (as well as the Ŷ's) are linear combinations of the observed Y's. The least-squares method determines the weight given to each Y value. The derivations of the least-squares solution and/or associated equations used later in this chapter are shown in other sources (see Additional Reading). In essence, the bj's are chosen to minimize the numerator of Eq. (5), the sum of squares of the e's of Eq. (4); hence "least squares." [Pg.2269]
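A short sketch of the point just made, in generic notation rather than the chapter's Eqs. (2)-(5): the least-squares coefficients are explicit linear combinations of the observed Y values, b = (XᵀX)⁻¹XᵀY, and they minimize the sum of squared residuals.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = np.linspace(0.0, 10.0, n)
X = np.column_stack([np.ones(n), x])          # design matrix for Y = b0 + b1*x + e
beta_true = np.array([2.0, 0.5])
Y = X @ beta_true + rng.normal(scale=0.3, size=n)

W = np.linalg.inv(X.T @ X) @ X.T              # weights applied to each observed Y value
b_hat = W @ Y                                 # estimates as linear combinations of the Y's
residuals = Y - X @ b_hat

print(b_hat)                                  # close to beta_true (unbiased)
print(np.sum(residuals ** 2))                 # the minimized sum of squares
print(np.allclose(b_hat, np.linalg.lstsq(X, Y, rcond=None)[0]))  # agrees with lstsq
```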

However, in many applications the essential space cannot be reduced to only one degree of freedom, and the statistics of the force fluctuation or of the spatial distribution may appear to be too poor to allow an accurate determination of a multidimensional potential of mean force. An example is the potential of mean force between two ions in aqueous solution: the instantaneous forces are two orders of magnitude larger than their average, which means that an error of 1% in the average requires a simulation length of about 10^8 times the correlation time of the fluctuating force. This is in practice prohibitive. The errors do not result from incorrect force fields; they are of a statistical nature, and even an exact force field would not suffice. [Pg.22]
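The scaling behind this estimate is the usual σ/√N law for the error of a mean; a small arithmetic sketch (the two input numbers are taken from the excerpt, the rest is illustrative):

```python
# Standard error of a mean of N independent samples is sigma / sqrt(N), so
# reaching a target relative error requires N = (relative fluctuation / target)^2
# statistically independent samples, i.e. roughly that many correlation times.
fluctuation_over_mean = 100.0   # instantaneous force ~ two orders of magnitude above its average
target_relative_error = 0.01    # 1 % error in the average force

n_correlation_times = (fluctuation_over_mean / target_relative_error) ** 2
print(f"required simulation length ~ {n_correlation_times:.0e} correlation times")
# -> ~1e+08 correlation times, which is prohibitive in practice
```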

The two sources of stochasticity are conceptually and computationally quite distinct. In (A) we do not know the exact equations of motion and we solve instead phenomenological equations. There is no systematic way in which we can approach the exact equations of motion. For example, in the Langevin approach the friction and the random force are rarely extracted from a microscopic model. This makes it necessary to use a rather arbitrary selection of parameters, such as the amplitude of the random force or the friction coefficient. On the other hand, the equations in (B) are based on atomic information and it is the solution that is approximate. For example, to compute a trajectory we make the ad hoc assumption of a Gaussian distribution of numerical errors. In the present article we also argue that, for practical reasons, it is not possible to ignore the numerical errors, even in approach (A). [Pg.264]
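A minimal sketch of point (A), assuming an overdamped Langevin model: the friction coefficient and the random-force amplitude below are input parameters chosen by hand, not quantities derived from a microscopic model (all names and values are illustrative).

```python
import numpy as np

def overdamped_langevin_step(x, force, gamma, kT, dt, rng):
    """One Euler-Maruyama step of gamma * dx/dt = F(x) + R(t)."""
    drift = force(x) * dt / gamma
    noise = np.sqrt(2.0 * kT * dt / gamma) * rng.normal()  # amplitude set by the chosen kT and gamma
    return x + drift + noise

rng = np.random.default_rng(1)
force = lambda x: -x                 # harmonic restoring force, an arbitrary model choice
x, traj = 1.0, []
for _ in range(10000):
    x = overdamped_langevin_step(x, force, gamma=1.0, kT=1.0, dt=1e-3, rng=rng)
    traj.append(x)
print(np.var(traj))                  # approaches kT for this harmonic example
```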

Hence, we use the trajectory that was obtained by numerical means to estimate the accuracy of the solution. Of course, the smaller the time step is, the smaller is the variance, and the probability distribution of errors becomes narrower and concentrates around zero. Note also that the Jacobian of the transformation from ε to X must be such that log[J] is independent of X in the limit ε → 0. Similarly to the discussion on the Brownian particle, we consider the Itô calculus [10-12] by a specific choice of the discrete time... [Pg.269]
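A small numerical sketch of the first statement (not the stochastic formulation of the cited work itself): the distribution of discretization errors of a finite-difference trajectory narrows around zero as the time step shrinks, illustrated with position Verlet on a harmonic oscillator whose exact trajectory x(t) = cos t is known.

```python
import numpy as np

def verlet_errors(dt, n_steps):
    """Errors of a position-Verlet trajectory of x'' = -x started on the exact solution."""
    x_prev, x = np.cos(0.0), np.cos(dt)
    errors = []
    for i in range(2, n_steps):
        x_next = 2.0 * x - x_prev - dt ** 2 * x   # acceleration a = -x
        errors.append(x_next - np.cos(i * dt))    # deviation from the exact trajectory
        x_prev, x = x, x_next
    return np.array(errors)

for dt in (0.1, 0.05, 0.01):
    e = verlet_errors(dt, int(20.0 / dt))
    print(f"dt = {dt:5.2f}   std of error = {e.std():.2e}")   # distribution narrows with dt
```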

For many modeling purposes, N has been assumed to be 1 (42), resulting in a simplified equation, S = KdC, where Kd is the linear distribution coefficient. This assumption usually works for hydrophobic polycyclic aromatic compounds sorbed on sediments, if the equilibrium solution concentration is <10 M (43). For many pesticides, the error introduced by the assumption of linearity depends on the deviation from linearity. [Pg.221]
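A hedged sketch of the linearity point: for a Freundlich-type isotherm S = Kf·C^N, assuming N = 1 (S = Kd·C) introduces an error that grows as N deviates from unity. The values of Kf, N, and the concentration range below are illustrative only, not taken from the cited data.

```python
import numpy as np

Kf, N = 10.0, 0.85                        # hypothetical Freundlich parameters
C = np.logspace(-7, -4, 4)                # equilibrium solution concentrations (mol/L)

S_freundlich = Kf * C ** N
Kd = Kf * (1e-6) ** (N - 1.0)             # linear coefficient matched at C = 1e-6 mol/L
S_linear = Kd * C

rel_error = (S_linear - S_freundlich) / S_freundlich
for c, e in zip(C, rel_error):
    print(f"C = {c:.0e} mol/L   relative error of linear assumption = {e:+.1%}")
```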

An eluted solute was originally identified from its corrected retention volume, which was calculated from its corrected retention time. It follows that the accuracy of the measurement depended on the measurement and constancy of the mobile phase flow rate. To eliminate the errors involved in flow rate measurement, particularly for mobile phases that were compressible, the capacity ratio of a solute (k') was introduced. The capacity ratio of a solute is defined as the ratio of its distribution coefficient to the phase ratio (a) of the column, where... [Pg.26]
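A short sketch of the relationships implied here, using the common conventions that the phase ratio is β = V_M/V_S and that t_0 is the column hold-up time (these symbols are assumptions, not taken from the excerpt): because k' reduces to a ratio of times, the flow rate cancels out of its measurement.

```latex
% Sketch (standard chromatographic conventions assumed)
k' = \frac{K}{\beta} = \frac{K\,V_S}{V_M} = \frac{t_R - t_0}{t_0},
\qquad
V_R = V_M\left(1 + k'\right)
```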

This equation, although originating from the plate theory, must again be considered as largely empirical when employed for TLC. This is because, in its derivation, the distribution coefficient of the solute between the two phases is considered constant throughout the development process. In practice, due to the nature of the development as already discussed for TLC, the distribution coefficient does not remain constant and, thus, the expression for column efficiency must be considered, at best, only approximate. The same errors would be involved if the equation was used to calculate the efficiency of a GC column when the solute was eluted by temperature programming or in LC where the solute was eluted by gradient elution. If the solute could be eluted by a pure solvent such as n-heptane on a plate that had been presaturated with the solvent vapor, then the distribution coefficient would remain sensibly constant over the development process. Under such circumstances the efficiency value would be more accurate and more likely to represent a true plate efficiency. [Pg.451]

The diad fractions for the low conversion experiments only are reproduced in Table II. The high conversion data cannot be used since the Mayo-Lewis model does not apply. Again, diad fractions have been standardized such that only two independent measurements are available. When the error structure is unknown, as in this case, Duever and Reilly (in preparation) show how the parameter distribution can be evaluated. Several attempts were made to use this solution. However, with only five data points there is insufficient information present to allow this approach to be used. [Pg.287]

