Low-order polynomials

Our results indicate that dispersion coefficients obtained from fits of pointwise given frequency-dependent hyperpolarizabilities to low-order polynomials can be strongly affected by the inclusion of high-order terms. A and B coefficients derived from a least-squares fit of experimental frequency-dependent hyperpolarizability data to a quadratic function of the squared frequency are therefore not strictly comparable to dispersion coefficients calculated by analytical differentiation or from fits to higher-order polynomials. Ab initio calculated dispersion curves should therefore be compared with the original frequency-dependent experimental data. [Pg.142]

In engineering practice we are often faced with the task of fitting a low-order polynomial curve to a set of data. Namely, given a set of N data pairs, (y_i, x_i), i = 1,...,N, we are interested in the following cases,... [Pg.29]
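
As an illustration of the simplest such case, the sketch below fits a quadratic to a small set of hypothetical (x_i, y_i) pairs with NumPy's ordinary least-squares polynomial fit; the data values are invented for demonstration only.

```python
import numpy as np

# Hypothetical (x_i, y_i) data pairs, i = 1..N (illustrative values only)
x = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
y = np.array([1.1, 1.8, 3.1, 4.9, 7.2, 9.8, 13.1])

# Least-squares fit to a quadratic (degree-2) polynomial
coeffs = np.polyfit(x, y, deg=2)   # coefficients, highest power first
p = np.poly1d(coeffs)

print("fitted coefficients:", coeffs)
print("value at x = 1.25:", p(1.25))
```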

Similar to FD schemes, in FEM schemes the cross-section under consideration is divided into a set of small areas, preferentially rectangles or triangles. Now, within the finite elements, the field is defined not only by nodal amplitudes, but is expressed throughout the whole element in terms of low-order polynomial functions. If, e.g., the full magnetic field is considered, Maxwell's curl equations give... [Pg.261]

In practice, one is often faced with choosing a model that is easily interpretable but may not approximate a response very well, such as a low-order polynomial regression, or with choosing a black-box model, such as the random-function model in equations (1)-(3). Our approach makes this black-box model interpretable in two ways: (a) the ANOVA decomposition provides a quantitative screening of the low-order effects, and (b) the important effects can be visualized. By comparison, in a low-order polynomial regression model, the relationship between input variables and an output variable is more direct. Unfortunately, as we have seen, the complexities of a computer code may be too subtle for such simple approximating models. [Pg.323]

A particularly easy type of least-squares analysis called multiple linear regression is possible for fitting data with a low-order polynomial, and this technique can be used for many of the experiments in this book. The use of spreadsheet programs, as discussed in Chapter III, is strongly recommended in such cases. In the case of more complicated nonlinear fitting procedures, other techniques are described in Chapter XXI. [Pg.33]
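
A minimal sketch of the multiple-linear-regression view: because a polynomial is linear in its coefficients, the fit reduces to an ordinary linear least-squares problem on a design matrix whose columns are powers of x. The data below are hypothetical; a spreadsheet's regression tool would give the same coefficients.

```python
import numpy as np

# Hypothetical measurements
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.3, 4.1, 7.9, 13.8, 22.1, 32.4])

# Polynomial fitting is linear in the coefficients: build the design matrix
# with columns 1, x, x^2 and solve by ordinary linear least squares.
X = np.column_stack([np.ones_like(x), x, x**2])
coeffs, residuals, rank, _ = np.linalg.lstsq(X, y, rcond=None)

a0, a1, a2 = coeffs
print(f"y ≈ {a0:.3f} + {a1:.3f} x + {a2:.3f} x^2")
```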

The autocorrelation function can also be analyzed by the method of cumulants. In this approach G(τ) is fitted to a low-order polynomial. For a third-order cumulants fit... [Pg.592]
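
A hedged sketch of a third-order cumulants fit: the logarithm of a hypothetical, noise-free normalized autocorrelation trace is fitted to a cubic, and the decay rate and second cumulant are read off the coefficients. The normalization and sign conventions used here are a common choice, not necessarily those of the cited source.

```python
import numpy as np

# Hypothetical lag times (ms) and a noise-free stand-in for the normalized
# field autocorrelation function g1(tau); real data would come from the correlator.
tau = np.linspace(0.01, 1.0, 50)
g1 = np.exp(-2.0 * tau + 0.1 * tau**2)

# Third-order cumulants fit: ln g1 = c0 - Gamma*tau + (mu2/2)*tau^2 - (mu3/6)*tau^3
c3, c2, c1, c0 = np.polyfit(tau, np.log(g1), deg=3)

gamma = -c1            # mean decay rate (1/ms)
mu2 = 2.0 * c2         # second cumulant
pdi = mu2 / gamma**2   # a common polydispersity estimate
print(gamma, mu2, pdi)
```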

Depending on the features of the differential heat curve, one or more polynomials might be used to describe the entire range of heat. The use of various low-order polynomials to fit the integral heat of adsorption curve in short intervals, and then to obtain the differential heat by differentiation, is equivalent to graphical or numerical differentiation of the raw data and avoids the fluctuations that can occur at the edges of the q-θ curve and that depend on the order of the polynomial used. [Pg.167]
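
The sketch below illustrates the interval-fitting idea with invented data: a quadratic is fitted to each short window of the integral heat curve and differentiated analytically to give the differential heat at the window midpoint. The window length and the data are assumptions for demonstration only.

```python
import numpy as np

# Hypothetical coverage grid and integral heat of adsorption Q(theta)
theta = np.linspace(0.05, 0.95, 19)
Q = 100.0 * theta - 30.0 * theta**2       # illustrative integral-heat data

window = 5                                 # points per short interval
q_diff, theta_mid = [], []
for i in range(len(theta) - window + 1):
    t, q = theta[i:i + window], Q[i:i + window]
    a2, a1, a0 = np.polyfit(t, q, deg=2)   # local quadratic fit
    tm = t[window // 2]
    q_diff.append(2 * a2 * tm + a1)        # dQ/dtheta at the interval midpoint
    theta_mid.append(tm)

print(list(zip(theta_mid, q_diff)))
```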

Because cell count data are often noisy, it is best to obtain the derivative by fitting ln(n) versus time with a low-order polynomial, or by drawing a smooth curve through a plot of ln(n) versus time and manually determining the slope at selected points of interest. For the special case of steady state, the derivative is zero and... [Pg.136]
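
A minimal sketch of the polynomial route, with invented cell-count data: ln(n) is fitted to a quadratic in time and the fitted polynomial is differentiated to give the specific growth rate at a chosen time.

```python
import numpy as np

# Hypothetical cell counts n(t) at sampling times t (h)
t = np.array([0, 2, 4, 6, 8, 10, 12], dtype=float)
n = np.array([1.0e5, 1.9e5, 4.2e5, 8.1e5, 1.5e6, 3.3e6, 6.0e6])

# Fit ln(n) versus time with a quadratic and differentiate analytically
coeffs = np.polyfit(t, np.log(n), deg=2)
dlnndt = np.polyder(np.poly1d(coeffs))     # d ln(n)/dt as a polynomial

# Specific growth rate at a time of interest, e.g. t = 6 h
print("mu(6 h) =", dlnndt(6.0), "1/h")
```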

The first term is usually a low-order polynomial. The random functions z_i have the following form... [Pg.362]

The interpolation functions Ni(x,y) are obtained from standardized subroutines and can be selected to provide linear, quadratic, or higher-order interpolation among the nodal values. This interpolation is a central concept in finite element analyses, in that the actual variation of the unknowns within an element is replaced by a low-order polynomial interpolation. [Pg.266]
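
As a concrete, deliberately simple example of such interpolation, the sketch below evaluates the standard linear shape functions of a three-node triangular element and uses them to interpolate nodal values. The element coordinates and nodal values are hypothetical.

```python
import numpy as np

def linear_triangle_shape_functions(nodes, x, y):
    """Evaluate the three linear shape functions N_i(x, y) of a triangular
    element with vertex coordinates `nodes` (3x2) at the point (x, y).
    The interpolated field is then sum_i N_i(x, y) * u_i for nodal values u_i."""
    (x1, y1), (x2, y2), (x3, y3) = nodes
    area2 = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)   # twice the signed area
    n1 = ((x2 * y3 - x3 * y2) + (y2 - y3) * x + (x3 - x2) * y) / area2
    n2 = ((x3 * y1 - x1 * y3) + (y3 - y1) * x + (x1 - x3) * y) / area2
    n3 = ((x1 * y2 - x2 * y1) + (y1 - y2) * x + (x2 - x1) * y) / area2
    return np.array([n1, n2, n3])

# Hypothetical element and nodal field values
nodes = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
u_nodal = np.array([1.0, 2.0, 3.0])
N = linear_triangle_shape_functions(nodes, 0.25, 0.25)
print("interpolated value:", N @ u_nodal)   # low-order polynomial interpolation
```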

The photometric calibration also contributes to the uncertainty of the measured spectrum. Flux standard stars are typically measured at widely spaced wavelengths (50 Å is common), and the sensitivity function of the instrument is determined by fitting a low-order polynomial or spline to the flux points. Such fits inevitably introduce low-order wiggles in the sensitivity function, which will vary from star to star. Based on experience, the best spectrophotometric calibrations yield uncertainties in the relative fluxes of order 2-3% for widely spaced emission lines; the errors may be smaller for ratios of lines closer than 20 Å apart. Absolute fluxes have much higher uncertainties, of course, especially for narrow-aperture observations of extended objects. [Pg.174]
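
A schematic sketch of this calibration step with entirely invented numbers: the ratio of observed counts to the tabulated standard-star flux is fitted with a low-order polynomial, and the fit is then divided out of an object spectrum. The polynomial degree, the wavelength grid, and the data are assumptions for illustration.

```python
import numpy as np

# Hypothetical flux-standard measurements: wavelengths (Angstrom), observed
# counts, and tabulated true fluxes of the standard star at those wavelengths
wave   = np.arange(4000.0, 7001.0, 50.0)                 # ~50 A spacing
counts = 1.0e4 * np.exp(-((wave - 5500.0) / 1500.0)**2)  # invented instrument response
f_true = 3.0e-14 * (wave / 5500.0)**-1.5                 # invented standard-star fluxes

# Sensitivity function = counts / true flux, fitted with a low-order polynomial
sens = counts / f_true
fit = np.polynomial.Polynomial.fit(wave, sens, deg=4)

# Apply the calibration to a hypothetical object spectrum
obj_wave   = np.linspace(4200.0, 6800.0, 500)
obj_counts = np.interp(obj_wave, wave, counts) * 0.1
obj_flux   = obj_counts / fit(obj_wave)
```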

Dorao and Jakobsen [40, 41] showed that the QMOM is ill-conditioned (see, e.g., Press et al. [149]) and not reliable when the complexity of the problem increases. In particular, it was shown that the high-order moments are not well represented by QMOM, and that the higher the order of the moment, the larger the error in the predictions. Besides, the nature of the kernel functions determines the number of moments that must be used by QMOM to reach a certain accuracy: the higher the polynomial order of the kernel functions, the higher the number of moments required to obtain reliable predictions. This can reduce the applicability of QMOM in the simulation of fluid particle flows, where the kernel functions can have quite complex functional dependences. On the other hand, QMOM can still be used in some applications where the kernel functions are given as low-order polynomials, as in some solid particle or crystallization problems. [Pg.1090]

The principal feature which distinguishes the numerical integration of complex-valued trajectories from real-valued ones lies in the flexibility one has in choosing the complex time path along which time is incremented. Although the quantities from which the classical S-matrix is constructed are analytic functions and thus independent of the particular time path, there are practical considerations that restrict the choice. Thus, although translational coordinates behave as low-order polynomials in time, so that nothing drastic happens to them when t becomes complex, the vibrational coordinate is oscillatory—... [Pg.130]

In this method, a low-order polynomial, such as a parabola, is fitted to a small number of contiguous data points. The (usually odd) number of data points included must be significantly larger than the number of parameters defining the polynomial, i.e., when one fits the data to a parabola, y = a₀ + a₁x + a₂x², at least five but preferably many more data points should be included. One then uses the fitted polynomial to calculate the smoothed y-value, or its derivative, at the midpoint of the polynomial. And it is here that you will appreciate using an odd number of data points, because in that case the midpoint coincides with an already existing x-value. [Pg.94]
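
A minimal sketch of this moving-window smoothing, using a parabola and an odd window; the window length is an assumption, and replacing the midpoint evaluation with a₁ + 2a₂x would return the smoothed derivative instead.

```python
import numpy as np

def smooth_midpoint(x, y, window=7):
    """Smooth y(x) by fitting a parabola to `window` contiguous points
    (window odd and > 3) and evaluating the fit at the midpoint of each window.
    Points that cannot be centred in a full window are left unchanged."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    half = window // 2
    y_smooth = y.copy()
    for i in range(half, len(x) - half):
        xs, ys = x[i - half:i + half + 1], y[i - half:i + half + 1]
        a2, a1, a0 = np.polyfit(xs, ys, deg=2)         # y = a0 + a1*x + a2*x^2
        y_smooth[i] = a0 + a1 * x[i] + a2 * x[i] ** 2  # smoothed value at midpoint
        # smoothed first derivative at the midpoint would be a1 + 2*a2*x[i]
    return y_smooth
```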

While the original curve had only six points that rose significantly above the baseline, the interpolated curve clearly has quite a few more. Note that we have not added any real information to the original data set, but have merely interpolated it, without making any assumptions about the shape of the underlying data set other than that it is noise-free. Consequently we can, e.g., determine the peak position by fitting a number of points near the peak maximum to a low-order polynomial such as y = a₀ + a₁x + a₂x², using the... [Pg.280]
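
For instance, a hedged sketch of locating a peak this way: a parabola is fitted to a few points around the maximum and its vertex, -a₁/(2a₂), is taken as the peak position. The data points are invented.

```python
import numpy as np

# Hypothetical points near the interpolated peak maximum
x = np.array([4.6, 4.8, 5.0, 5.2, 5.4])
y = np.array([8.9, 9.7, 10.0, 9.6, 8.8])

# Fit y = a0 + a1*x + a2*x^2 and locate the vertex of the parabola
a2, a1, a0 = np.polyfit(x, y, deg=2)
x_peak = -a1 / (2.0 * a2)                 # peak position (a2 < 0 for a maximum)
y_peak = a0 + a1 * x_peak + a2 * x_peak**2
print(x_peak, y_peak)
```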

An alternative to treating time as a categorical variable is to treat time as a continuous variable and to model it using a low-order polynomial, thereby reducing the number of estimable parameters in the model. In this case, rather than modeling the within-subject covariance, the between-subject covariance is manipulated. Subjects are treated as random effects, as are the model parameters associated with time. The within-subject covariance matrix was treated as a simple covariance structure. In this example, time was modeled as a quadratic polynomial. Also included in the model were the interactions associated with the quadratic term for time. [Pg.199]
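
A rough sketch of such a model, assuming the statsmodels mixed-effects interface and an invented long-format data set (the column names subject, time, treat and y are hypothetical): time enters as a quadratic fixed effect with a treatment interaction on the quadratic term, and subject-level random effects are placed on the intercept and the time terms.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per subject per visit
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "subject": np.repeat(np.arange(20), 5),
    "time":    np.tile([0.0, 1.0, 2.0, 3.0, 4.0], 20),
    "treat":   np.repeat(rng.integers(0, 2, 20), 5),
})
df["y"] = 10 + 2 * df["time"] - 0.3 * df["time"]**2 + df["treat"] + rng.normal(0, 1, len(df))

# Time as a continuous quadratic; subjects (and their time terms) as random effects;
# simple (independent) within-subject residual covariance
model = smf.mixedlm(
    "y ~ time + I(time**2) + treat + treat:I(time**2)",
    data=df,
    groups=df["subject"],
    re_formula="~time + I(time**2)",
)
result = model.fit()
print(result.summary())
```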

The vapour pressure curves, Kj(Ti), may be represented by either Antoine or Riedel equations, as explained for the single-component case in Section 12.3. Similarly, table lookups or low-order polynomial expressions may be used to describe the dependence on temperature (only) of the specific volume of each liquid component and the specific internal energy of each component in both the liquid and the vapour phase. [Pg.130]
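
A small sketch of the two ingredients mentioned here: an Antoine-form vapour-pressure correlation and low-order polynomial correlations for temperature-dependent liquid properties. The polynomial coefficients below are purely illustrative, not fitted to any real fluid; the Antoine constants are water-like values for mmHg and °C.

```python
def antoine_pressure(T, A, B, C):
    """Antoine vapour-pressure correlation: log10(P) = A - B / (C + T).
    Units of P and T follow the units of the fitted constants."""
    return 10.0 ** (A - B / (C + T))

def poly_prop(T, coeffs):
    """Evaluate a low-order polynomial property correlation c0 + c1*T + c2*T^2 + ..."""
    return sum(c * T ** k for k, c in enumerate(coeffs))

# Hypothetical, purely illustrative coefficients
v_liq_coeffs = [1.0e-3, 2.0e-7, 1.5e-9]   # liquid specific volume vs T
u_liq_coeffs = [0.0, 4.18e3, 0.5]         # liquid specific internal energy vs T

T = 80.0  # deg C, illustrative
print("P_vap =", antoine_pressure(T, A=8.07, B=1730.6, C=233.4), "mmHg")
print("v_liq =", poly_prop(T, v_liq_coeffs))
print("u_liq =", poly_prop(T, u_liq_coeffs))
```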

The equation set (16.41) requires the conversion from specific entropy at a given pressure to specific enthalpy at that pressure and vice versa. It turns out that we may derive a simple analytical expression for the rate of change of enthalpy with entropy. Then we may integrate from the base line of saturated steam conditions, along a line of constant pressure to find the appropriate value of specific enthalpy as a function of specific entropy. We are aided in these conversions by the fact that the enthalpy/entropy line for saturated steam is a fairly uncomplicated function, which may be approximated well by a low-order polynomial. [Pg.196]

Solving equation (17.55) for a general, nonlinear function, fp1, of head versus flow will require iteration. However, the function may be represented by a low-order polynomial... [Pg.211]
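
A minimal sketch of such an iteration; equation (17.55) is not reproduced here, so this is a generic stand-in with hypothetical quadratic pump and system curves: the mismatch between pump head and system head is driven to zero by Newton's method, using the analytical derivative of the polynomials.

```python
def pump_head(Q):                    # hypothetical quadratic head-flow characteristic
    return 50.0 - 2.0 * Q - 0.5 * Q**2

def system_head(Q):                  # hypothetical static lift plus frictional loss
    return 20.0 + 0.8 * Q**2

def residual(Q):
    return pump_head(Q) - system_head(Q)

def dresidual(Q):                    # analytical derivative of the polynomials
    return (-2.0 - 1.0 * Q) - (1.6 * Q)

Q = 3.0                              # initial guess for the flow
for _ in range(20):
    step = residual(Q) / dresidual(Q)
    Q -= step
    if abs(step) < 1e-10:
        break
print("operating flow:", Q)
```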

The pump efficiency function, fpj(U), is frequently available as a low-order polynomial in U, in which case the differentiation dfpj/dU is a simple matter. Alternatively, the differentiation may be carried out graphically by finding the tangent to the fpj(U) vs. U curve. However, before we can solve equation (23.35), we still need to find the total derivative of pump speed with respect to pump volume flow, dN/dQ, when the power supplied is kept constant. To do this we proceed as follows. [Pg.300]

State calculations. With the extensions provided, the method can be applied to the full Watson Hamiltonian [51] for the vibrational problem. The efficiency of the method depends greatly on the nature of the anharmonic potential that represents coupling between different vibrational modes. In favorable cases, the latter can be represented as a low-order polynomial in the normal-mode displacements. When this is not the case, the computational effort increases rapidly. The CI-VSCF is expected to scale as or worse with the number N of vibrational modes. The most favorable situation is obtained when only pairs of normal modes are coupled in the terms of the polynomial representation of the potential. The VSCF-CI method was implemented in MULTIMODE [47,52], a code for anharmonic vibrational spectra that has been used extensively. MULTIMODE has been successfully applied to relatively large molecules such as benzene [53]. Applications to much larger systems could be difficult in view of the unfavorable scalability trend mentioned above. [Pg.171]

