Big Chemical Encyclopedia


Polynomials, best

The two parameters γ and δ, which characterize a Jacobi polynomial, were varied, and RMSE values, as defined by Equations 58, 59, and 60, were numerically evaluated on an IBM-360 computer. Preliminary tests showed that satisfactory integrations were achieved by summing over 50 equally spaced points. The RMSE surface was mapped for both the Bigeleisen-Ishida formula, Equation 44, and the modified one, Equation 52. Naturally, the best polynomial may depend on the order and... [Pg.211]
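The parameter scan described above can be sketched schematically: evaluate an RMSE-style objective on a grid of the two Jacobi parameters (called gamma and delta here) by summing over 50 equally spaced points. The target function and approximant below are placeholders for illustration only, not Equations 44 or 52.

```python
import numpy as np

def rmse(gamma, delta, u):
    """RMSE of a two-parameter approximant against a target function,
    evaluated by summing over the supplied grid of equally spaced points."""
    target = np.log1p(u)                      # stand-in for ln b(u)
    approx = gamma * u / (1.0 + delta * u)    # stand-in low-order approximant
    return np.sqrt(np.mean((target - approx) ** 2))

# 50 equally spaced points over one expansion range, as in the text
u = np.linspace(0.0, 2.0 * np.pi, 50)

# Map the RMSE surface over a grid of (gamma, delta) and pick the minimum
grid = [(g, d) for g in np.linspace(0.1, 2.0, 20) for d in np.linspace(0.1, 2.0, 20)]
best = min(grid, key=lambda p: rmse(p[0], p[1], u))
```

The same two-loop scan, with the real formulas substituted for the placeholders, reproduces the mapping procedure.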

L = 5 are shown in Tables VI and VII, respectively. For each value of L, the best polynomial naturally gives a better RMSE than Tn(x) does, for every range and order. Comparison of Tables VI and VIII shows, however, that for smaller ranges and higher orders, Tn(x) with L = 0 yields lower RMSE values than Tn(x) with L = 5. On the other hand, the use of subdivided ranges of the expansion variable improves the Chebyshev expansion when u covers a wide range and the order of the expansion is less than 2. [Pg.213]

Figures 2, 3, 4, 5, 6, 7, and 8 show computer-generated plots of the absolute errors of the approximation of ln b(u) by Equation 52 as functions of u. Plots are shown for the ranges 2π, 3π, ..., 8π, respectively. Each figure consists of four separate frames, one for every order. Each frame contains three error curves: one obtained by the best polynomial given in Table IV, which has L = 5; another by Tn(x) with L = 5; and the third by Tn(x) with L = 0. The error curves for ln b(u) evaluated by the least-squares analysis are indistinguishable from the curves for the best polynomial with L = 5. The curves in Figures 2 through 8 clearly illustrate the oscillatory behavior of the error obtained with the expansions; they show error amplitudes and regions of maxima and minima.
The bottom sections of Tables XII, XIII, XIV, and XV show results of similar calculations for the same systems, but using the best sets of Jacobi polynomials, tabulated in Tables II and IV, in place of the Chebyshev polynomials. For each best set, the calculations were carried out in the following manner, using the appropriate equations. For each temperature, u_max was computed from the highest frequency of the system, and the best polynomial for the range closest to u_max was chosen; i.e., if u_max = 4.3π, the best polynomial for the range [0, 4π] was selected. [Pg.225]
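The range-selection rule above is simple to state in code. A minimal sketch, with the ranges 2π through 8π from the text:

```python
import math

def choose_range(u_max, ranges):
    """Pick the expansion range whose upper limit is closest to u_max,
    mirroring the selection rule in the text (u_max = 4.3*pi -> [0, 4*pi])."""
    return min(ranges, key=lambda r: abs(r - u_max))

# Upper limits of the tabulated ranges: 2*pi, 3*pi, ..., 8*pi
ranges = [k * math.pi for k in range(2, 9)]

best = choose_range(4.3 * math.pi, ranges)   # selects 4*pi
```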

The method of least squares was employed in the previous section to fit the best straight line to analytical data, and a similar procedure can be adopted to estimate the best polynomial line. To illustrate the technique, the least-squares fit for a quadratic curve will be developed. This can be readily extended to higher-power functions. [Pg.163]
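A minimal sketch of the quadratic least-squares fit, using a design matrix with columns 1, x, x² and synthetic data (the worked example in the source is not reproduced here):

```python
import numpy as np

def quadratic_fit(x, y):
    """Least-squares fit of y = a0 + a1*x + a2*x**2."""
    X = np.vander(x, 3, increasing=True)   # columns: 1, x, x**2
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs                          # a0, a1, a2

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 1.0 + 2.0 * x + 0.5 * x ** 2           # exact quadratic, so the fit recovers it
a = quadratic_fit(x, y)
```

Extending to higher powers only means adding columns x³, x⁴, ... to the design matrix.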

Determine the best polynomial calibration curve through these points. [Pg.149]

The standard RS-HDMR approach was extended by an optimisation method (Ziehn and Tomlin 2008a), which automatically chooses the best polynomial order for the approximation of each of the component functions. Component functions can also be excluded from the HDMR expansion if they do not make a significant contribution to the modelled output value, via the use of a threshold (Ziehn and Tomlin 2008b, 2009). The aim is to reduce the number of component functions to be approximated by polynomials and therefore to achieve automatic complexity reduction without the use of prior screening methods such as the Morris method (Morris 2006). For a second-order HDMR expansion, a separate threshold can be defined for the exclusion of the first- and second-order component functions. [Pg.97]
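The thresholding idea can be sketched as follows. This is an illustrative simplification, not the actual RS-HDMR code of Ziehn and Tomlin: each first-order component function's share of the output variance is estimated, and components below a user-chosen threshold are dropped.

```python
import numpy as np

def significant_components(components, threshold=0.01):
    """components: dict mapping input name -> sampled values of its
    first-order component function f_i(x_i). Returns the names whose
    relative variance contribution meets the threshold."""
    contrib = {name: np.var(vals) for name, vals in components.items()}
    total = sum(contrib.values())
    return {name for name, c in contrib.items() if c / total >= threshold}

x = np.linspace(-1.0, 1.0, 101)
kept = significant_components({
    "x1": x,             # strong linear component
    "x2": 0.5 * x ** 2,  # weaker but significant component
    "x3": 1e-3 * x,      # negligible component -> excluded
})
```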

Development of the Nomograph. Two main sources of data were used to develop the nomograph: McAuliffe and Price. The hydrocarbons were divided into 14 homologous series, as listed in Table 1. Solubilities at 25°C were then regressed against the carbon numbers of the hydrocarbons in order to obtain the best fit for each homologous series. A second-order polynomial equation fits the data very well. [Pg.360]
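The regression step for one homologous series can be sketched as below. The numbers are hypothetical stand-ins, not the McAuliffe or Price data:

```python
import numpy as np

# Hypothetical series: log solubility vs. carbon number for one homologous series
carbon_number = np.array([5, 6, 7, 8, 9, 10], dtype=float)
log_solubility = np.array([-2.1, -2.7, -3.3, -3.9, -4.5, -5.1])  # illustrative only

# Second-order polynomial regression, as in the text
coeffs = np.polyfit(carbon_number, log_solubility, 2)  # a2, a1, a0
predicted = np.polyval(coeffs, carbon_number)
```

Repeating the fit for each of the 14 series gives one set of coefficients per series.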

Since A(x, y) can really be any function bounded by the aperture of the system, it is best to use as general a description as possible. One such description is to expand A(x, y) = A(ρ, θ) about (ρ, θ) in an infinite polynomial series. One set of polynomials frequently used for this purpose are the Zernike polynomials. Thus one can write A(ρ, θ) = Σ_{m,n} C_mn F_mn(ρ, θ). [Pg.42]
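A minimal sketch of evaluating such a truncated expansion, using three low-order Zernike polynomials (piston, x-tilt, defocus) with illustrative coefficients; a real aperture description would carry many more terms:

```python
import numpy as np

def zernike_terms(rho, theta):
    """A few low-order Zernike polynomials F_mn(rho, theta)."""
    return {
        "piston":  np.ones_like(rho),     # Z(0, 0) = 1
        "tilt_x":  rho * np.cos(theta),   # Z(1, 1)
        "defocus": 2.0 * rho ** 2 - 1.0,  # Z(2, 0)
    }

def aperture(rho, theta, coeffs):
    """Truncated expansion A(rho, theta) = sum_mn C_mn * F_mn(rho, theta)."""
    terms = zernike_terms(rho, theta)
    return sum(coeffs[name] * terms[name] for name in coeffs)

rho = np.array([0.0, 0.5, 1.0])
theta = np.zeros(3)
A = aperture(rho, theta, {"piston": 1.0, "tilt_x": 0.2, "defocus": 0.1})
```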

Assumption: χ²-distribution. The curvature of the χ² functions versus df is not ideal for polynomial approximations; various transformations on both axes, in different combinations, were tried, the best one by far being a log10(χ²) vs. log10(df) plot. The 34 χ² values used for the optimization of the coefficients (two decimal places) covered degrees of freedom 1-20, 22, 24, 26, 28, 30, 35, 40, 50, 60, 80, 100, 120, 150, and 200. [Pg.338]
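The effect of the log-log transformation can be demonstrated with a small subset of standard tabulated χ² critical values (p = 0.05); a low-order polynomial in log10(df) then reproduces the table closely, which is not the case on the untransformed axes:

```python
import numpy as np

# Standard chi-squared critical values at p = 0.05, used only as illustration
df = np.array([1, 2, 3, 4, 5, 10, 20], dtype=float)
chi2_crit = np.array([3.841, 5.991, 7.815, 9.488, 11.070, 18.307, 31.410])

# Quadratic fit of log10(chi^2) against log10(df)
coeffs = np.polyfit(np.log10(df), np.log10(chi2_crit), 2)

# Transform back and check the relative error across the table
approx = 10 ** np.polyval(coeffs, np.log10(df))
rel_err = np.abs(approx - chi2_crit) / chi2_crit
```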

The best and easiest way to smooth the data and avoid misuse of polynomial curve fitting is to employ smoothing cubic splines. IMSL provides two routines for this purpose: CSSCV and CSSMH. The latter is more versatile, as it gives the user the option to apply different levels of smoothing by controlling a single parameter. Furthermore, the IMSL routines CSVAL and CSDER can be used, once the coefficients of the cubic splines have been computed by CSSMH, to calculate the smoothed values of the state variables and their derivatives, respectively. [Pg.117]
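The IMSL routines themselves are not shown here; an equivalent sketch with SciPy's smoothing spline illustrates the same workflow, where a single parameter s controls the level of smoothing (the role of CSSMH's parameter) and evaluation and differentiation of the fitted spline stand in for CSVAL and CSDER:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 50)
y = np.sin(t) + 0.05 * rng.normal(size=t.size)   # noisy "state variable"

# Single smoothing parameter, analogous to CSSMH's control parameter
spl = UnivariateSpline(t, y, s=0.1)

y_smooth = spl(t)             # smoothed state variable (cf. CSVAL)
dy_dt = spl.derivative()(t)   # its derivative (cf. CSDER)
```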

Data were subjected to analysis of variance and regression analysis using the general linear model procedure of the Statistical Analysis System (40). Means were compared using the Waller-Duncan procedure with a K ratio of 100. Polynomial equations were best fitted to the data based on the significance level of the terms of the equations and values. [Pg.247]

Linear PCR can be modified for nonlinear modeling by using nonlinear basis functions θ_m, which can be polynomials or the supersmoother (Frank, 1990). The projection directions for both linear and nonlinear PCR are identical, since the choice of basis functions does not affect the projection directions indicated by the bracketed term in Eq. (22). Consequently, the nonlinear PCR algorithm is identical to the linear PCR algorithm, except for an additional step used to compute the nonlinear basis functions. Using adaptive-shape basis functions provides the flexibility to find the smoothed function that best captures the structure of the unknown function being approximated. [Pg.37]
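A minimal sketch of the idea on synthetic data: the projection step is ordinary PCA, exactly as in linear PCR, and the nonlinearity enters only through polynomial basis functions of the score. The supersmoother variant mentioned in the text is not implemented here.

```python
import numpy as np

rng = np.random.default_rng(1)
t = rng.normal(size=200)                   # latent variable
w = np.array([1.0, -0.5, 0.2])
w /= np.linalg.norm(w)
X = np.outer(t, w) + 0.01 * rng.normal(size=(200, 3))
y = t ** 2                                 # output nonlinear in the latent variable

# Projection step: identical to linear PCR (first principal-component score)
Xc = X - X.mean(axis=0)
score = Xc @ np.linalg.svd(Xc, full_matrices=False)[2][0]

# Nonlinear step: polynomial basis functions of the score replace the
# straight linear regression on the score
B = np.column_stack([np.ones_like(score), score, score ** 2])
beta = np.linalg.lstsq(B, y, rcond=None)[0]
y_hat = B @ beta
```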

The fraction-undissolved data up to the critical time can be least-squares fitted to a third-degree polynomial in time, as dictated by Eq. (29). The moments of the distribution μ1, μ2, and μ3 can be evaluated from Eqs. (30) through (32), with the three equations used to solve for the three unknowns. These values may be used as first estimates in a nonlinear least-squares fit program, and the curve will hence reveal the best values of the shape factor, the size distribution, and the A-value. [Pg.183]
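The first step, the cubic fit in time, can be sketched as below; the data are synthetic, and the subsequent moment evaluation from Eqs. (30)-(32) is not reproduced:

```python
import numpy as np

# Illustrative fraction-undissolved "data" up to a critical time of t = 2,
# generated from a known cubic so the fit can be checked
t = np.linspace(0.0, 2.0, 15)
frac = 1.0 - 0.9 * t + 0.3 * t ** 2 - 0.04 * t ** 3

# Least-squares fit to a third-degree polynomial in time (cf. Eq. 29);
# np.polyfit returns coefficients from highest degree down
c3, c2, c1, c0 = np.polyfit(t, frac, 3)
```

The four coefficients would then feed the moment relations to give starting values for the nonlinear fit.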

The point is that, as our conclusions indicate, this is one case where the use of latent variables is not the best approach. The fact remains that with data such as this, one wavelength can model the constituent concentration exactly, with zero error, precisely because it can avoid the regions of nonlinearity, which the PCA/PLS methods cannot do. It is not possible to model the constituent better than that, and even if PLS could model it just as well with one or even two factors (a point we are not yet convinced of, since it has not yet been tried; it should work for a polynomial nonlinearity, but this nonlinearity is logarithmic), you still wind up with a more complicated model, with no benefit. [Pg.153]

The values of X and Y are known, since they constitute the data. Therefore equations 66-9 (a-c) comprise a set of n + 1 equations in n + 1 unknowns, the unknowns being the various values of the a_i, since the summations, once evaluated, are constants. Therefore, solving equations 66-9 (a-c) as simultaneous equations for the a_i results in the calculation of the coefficients that describe the polynomial (of degree n) that best fits the data. [Pg.443]
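The simultaneous equations can be set up and solved directly in matrix form: the (j, k) entry of the normal matrix is Σ x_i^(j+k), the right-hand side is Σ y_i x_i^j, and the solution vector is a_0 ... a_n. (The equation numbering 66-9 belongs to the source; this sketch only shows the generic normal-equation system.)

```python
import numpy as np

def polyfit_normal_equations(x, y, n):
    """Solve the normal equations for the degree-n least-squares polynomial."""
    A = np.array([[np.sum(x ** (j + k)) for k in range(n + 1)]
                  for j in range(n + 1)])
    b = np.array([np.sum(y * x ** j) for j in range(n + 1)])
    return np.linalg.solve(A, b)   # a_0, a_1, ..., a_n

x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 - x + 0.5 * x ** 2
a = polyfit_normal_equations(x, y, 2)
```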

Procedures for curve fitting by polynomials are widely available. Bell-shaped curves are usually fitted better, and with fewer constants, by ratios of polynomials. Problem P5.02.02 compares a Gamma fit with those of other equations, of which a log-normal plot is the best. In figuring chemical conversion, the fit of the data at low values of E(t) need not be highly accurate, since those regions do not affect the overall result very much. [Pg.509]
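Fitting a ratio of polynomials can be reduced to ordinary least squares by linearizing: for y = a0 / (1 + b1·x + b2·x²), multiplying through gives y = a0 − b1·(x·y) − b2·(x²·y), which is linear in the unknown constants. A sketch on a synthetic bell-shaped curve (not the E(t) data of the source):

```python
import numpy as np

# Bell-shaped "data" that happens to be exactly rational, so the
# linearized fit recovers the constants a0 = 1, b1 = 0, b2 = 1
x = np.linspace(-3.0, 3.0, 25)
y = 1.0 / (1.0 + x ** 2)

# Linearized design matrix for y = a0 - b1*(x*y) - b2*(x**2*y)
A = np.column_stack([np.ones_like(x), -x * y, -x ** 2 * y])
a0, b1, b2 = np.linalg.lstsq(A, y, rcond=None)[0]
```

Note the three-constant rational form matches this bell curve exactly, whereas an ordinary polynomial would need many more terms.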


See other pages where Polynomials, best is mentioned: [Pg.406]    [Pg.245]    [Pg.210]    [Pg.212]    [Pg.223]    [Pg.230]    [Pg.242]    [Pg.265]    [Pg.367]    [Pg.28]    [Pg.1529]    [Pg.270]    [Pg.532]    [Pg.131]    [Pg.132]    [Pg.169]    [Pg.549]    [Pg.117]    [Pg.395]    [Pg.616]    [Pg.314]    [Pg.364]    [Pg.442]    [Pg.443]    [Pg.38]    [Pg.176]    [Pg.450]    [Pg.160]
