Big Chemical Encyclopedia


Parameter estimation polynomial

The above equations suggest that the unknown parameters in polynomials A(·) and B(·) can be estimated with RLS using the transformed variables yn and un-k. Having polynomials A(·) and B(·), we can go back to Equation 13.1 and obtain an estimate of the error term, en, as... [Pg.224]
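The recursive least-squares (RLS) step referred to above can be sketched generically. This is the standard RLS update for any linear-in-parameters model, not the book's specific transformed-variable formulation; the ARX system, regressor, and data below are hypothetical.

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=1.0):
    """Standard recursive least-squares update for y = phi' theta + e."""
    phi = phi.reshape(-1, 1)
    denom = lam + float(phi.T @ P @ phi)
    k = (P @ phi) / denom                  # gain vector
    err = y - float(phi.T @ theta)         # a priori prediction error
    theta = theta + k * err                # parameter update
    P = (P - k @ phi.T @ P) / lam          # covariance update
    return theta, P

# hypothetical ARX example: y[n] = a*y[n-1] + b*u[n-1]
rng = np.random.default_rng(0)
a_true, b_true = 0.5, 1.0
u = rng.standard_normal(200)
y = np.zeros(200)
for n in range(1, 200):
    y[n] = a_true * y[n - 1] + b_true * u[n - 1]

theta = np.zeros((2, 1))
P = 1e3 * np.eye(2)
for n in range(1, 200):
    phi = np.array([y[n - 1], u[n - 1]])
    theta, P = rls_update(theta, P, phi, y[n])
```

With noiseless data the estimates converge essentially exactly to the true values, limited only by the initial covariance regularization.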

The MO concentration versus time profiles were fitted to second-order polynomial equations and the parameters estimated by nonlinear regression analysis. The initial rates of reaction were obtained by taking the derivative at t = 0. The reaction is first order with respect to hydrogen pressure, changing to zero-order dependence above about 3.45 MPa hydrogen pressure; this was attributed to saturation of the catalyst sites. In experiments in which HPLC-grade MIBK was added to the initial reactant mixture, there was no evidence of product inhibition. [Pg.265]
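The initial-rate procedure described above (fit the concentration-time profile to a second-order polynomial and differentiate at t = 0) can be illustrated with hypothetical data: for c(t) = c0 + c1·t + c2·t², the initial rate is simply the fitted linear coefficient c1.

```python
import numpy as np

# hypothetical MO concentration-time data (mol/L vs. min)
t = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
c = np.array([1.00, 0.82, 0.67, 0.55, 0.45, 0.37])

c2, c1, c0 = np.polyfit(t, c, 2)   # coefficients, highest power first
initial_rate = -c1                  # -dc/dt at t = 0 (MO is consumed, so the rate is positive)
```

The curvature coefficient c2 comes out positive here because the decay profile is convex.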

When linear regression does not yield a good correlation, application of a non-linear function may be feasible (see Chapter 10). The parameter estimates for higher-order or polynomial equations may prove to be more difficult to interpret than for a linear relationship. Nevertheless, this approach may be preferable to using lower-order levels of correlation (B or C) for evaluating the relationship between dissolution and absorption data. [Pg.344]

Full second-order polynomial models used with central composite experimental designs are very powerful tools for approximating the true behavior of many systems. However, the interpretation of the large number of estimated parameters in multifactor systems is not always straightforward. As an example, the parameter estimates of the coded and uncoded models in the previous section are quite different, even though the two models describe essentially the same response surface (see Equations 12.63 and 12.64). It is difficult to see this similarity by simple inspection of the two equations. Fortunately, canonical analysis is a mathematical technique that can be applied to full second-order polynomial models to reveal the essential features of the response surface and allow a simpler understanding of the factor effects and their interactions. [Pg.254]
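Canonical analysis of a full second-order polynomial can be sketched as follows: write the model in matrix form y = b0 + b'x + x'Bx, locate the stationary point, and read the eigenvalues of B as the canonical coefficients. The two-factor coefficient values below are hypothetical, not from the models cited above.

```python
import numpy as np

# hypothetical coded two-factor model: y = b0 + b'x + x'Bx
b0 = 50.0
b = np.array([2.0, 3.0])            # first-order coefficients b1, b2
B = np.array([[-1.5, 0.5],          # [[b11,   b12/2],
              [ 0.5, -2.0]])        #  [b12/2, b22  ]]

x_s = -0.5 * np.linalg.solve(B, b)  # stationary point
y_s = b0 + b @ x_s + x_s @ B @ x_s  # response at the stationary point
lam, M = np.linalg.eigh(B)          # canonical coefficients (eigenvalues) and rotated axes
# both eigenvalues negative -> the stationary point is a maximum
```

Here the stationary point is at coded coordinates (1, 1) with response 52.5, and both canonical coefficients are negative, so the surface has a true maximum there.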

In this chapter we investigate the interaction between experimental design and information quality in two-factor systems. However, instead of looking again at the uncertainty of parameter estimates, we will focus attention on uncertainty in the response surface itself. Although the examples are somewhat specific (i.e., limited to two factors and to full second-order polynomial models), the concepts are general and can be extended to other dimensional factor spaces and to other models. [Pg.279]

In a set of experiments, x is temperature expressed in degrees Celsius and is varied between 0°C and 100°C. Fitting a full second-order polynomial in one factor to the experimental data gives the fitted model yi = 10.3 + 1.4x1i + 0.0927x1i^2 + ri. The second-order parameter estimate is much smaller than the first-order parameter estimate b1. How important is the second-order term compared to the first-order term when the temperature changes from 0°C to 1°C? How important is the second-order term compared to the first-order term when the temperature changes from 99°C to 100°C? Should the second-order term be dropped from the model if it is necessary to predict response near the high end of the temperature domain? [Pg.358]
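A quick calculation answers these questions: compare the change each term contributes over a one-degree interval at both ends of the domain, using the fitted coefficients quoted in the exercise.

```python
b1, b2 = 1.4, 0.0927   # fitted first- and second-order coefficients

def term_change(coef, power, x_lo, x_hi):
    """Change contributed by coef * x**power as x moves from x_lo to x_hi."""
    return coef * (x_hi ** power - x_lo ** power)

d1_low  = term_change(b1, 1,  0,   1)   # first-order change, 0 -> 1 C:   1.4
d2_low  = term_change(b2, 2,  0,   1)   # second-order change, 0 -> 1 C:  0.0927
d1_high = term_change(b1, 1, 99, 100)   # first-order change, 99 -> 100 C: 1.4
d2_high = term_change(b2, 2, 99, 100)   # second-order change: 0.0927*(100**2 - 99**2) = 18.45
```

Near 0°C the quadratic term contributes only about 7% of the linear term's change, but near 100°C it contributes roughly 13 times as much, so it must not be dropped if predictions are needed at the high end of the domain.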

The ASI algorithm is broken down into three steps. First, the order of the drift (k) is estimated. Then all the possible polynomial GC models are estimated: if k = 0, 3 models are estimated; if k = 1, 7 models; and if k = 2, 15 models. The inadmissible models (those whose parameter estimates do not meet the constraints of the polynomial GC model) are discarded and the three best models are chosen. The third step compares the remaining models and makes the final choice. [Pg.217]

The form of the response function to be fitted depends on the goal of modeling and the amount of available theoretical and experimental information. If we simply want to avoid interpolation in extensive tables, or to store and use less numerical data, the model may be a convenient class of functions such as polynomials. In many applications, however, the model is based on theoretical relationships that govern the system, and its parameters have some well defined physical meaning. A model coming from the underlying theory is, however, not necessarily the best response function in parameter estimation, since the limited amount of data may be insufficient to find the parameters with any reasonable accuracy. In such cases simplified models may be preferable, and with the problem of simplifying a nonlinear model we leave the relatively safe waters of mathematical statistics at once. [Pg.140]

Following the extrapolation step regress the average parameter estimates vs. X using a quadratic polynomial. [Pg.82]

Problems, however, arise if the intervals between the knots are not narrow enough and the spline begins to oscillate (cf. Figure 3.13). Also, in comparison to polynomial filters, many more coefficients must be estimated and stored, since different coefficients apply in each interval. An additional disadvantage applies to smoothing splines, whose parameter estimates are not unbiased. The statistical properties of spline functions are therefore more difficult to describe than in the case of linear regression (cf. Section 6.1). [Pg.78]

Parameter estimation with a second-order polynomial yields the following final model ... [Pg.121]

Richardson, M. and Formenti, D. L. Parameter estimation from frequency response measurements using rational fraction polynomials. In Proceedings of the 1st International Modal Analysis Conference (Orlando, Florida, 1982), pp. 167-182. [Pg.287]

Van der Auweraer, H. and Leuridan, J. Multiple input orthogonal polynomial parameter estimation. Mechanical Systems and Signal Processing 1(3) (1987), 259-272. [Pg.289]

Since the polynomial is used to predict the point t = tj+1 where y = 0, the Neville method, which does not require any parameter estimation, can be adopted (see Buzzi-Ferraris and Manenti, 2010b). [Pg.14]
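Neville's method lends itself to the inverse interpolation this passage alludes to: swap the roles of t and y and evaluate the interpolating polynomial at y = 0 to predict the crossing point. The function and the test curve below are a hypothetical illustration.

```python
def neville(xs, ys, x):
    """Evaluate at x the polynomial interpolating the points (xs, ys), by Neville's algorithm."""
    p = list(ys)
    n = len(xs)
    for k in range(1, n):
        for i in range(n - k):
            p[i] = ((x - xs[i + k]) * p[i] + (xs[i] - x) * p[i + 1]) / (xs[i] - xs[i + k])
    return p[0]

# hypothetical example: predict where y(t) = t**2 - 2 crosses zero
t_pts = [1.0, 1.5, 2.0]
y_pts = [t * t - 2.0 for t in t_pts]   # [-1.0, 0.25, 2.0]
t_root = neville(y_pts, t_pts, 0.0)    # inverse interpolation at y = 0
```

No coefficients are ever solved for: Neville's recurrence evaluates the interpolating polynomial directly, which is why the method "does not require any parameter estimation". Here the predicted root is within about 0.005 of the true value, sqrt(2).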

Theoretically, the main concerns lie with identifiability of a process, that is, given a data set and model structure (order of polynomials), what are the conditions for there to be a unique solution to the parameter estimates. For open-loop experiments, the identifiability constraint for a prediction error model can be simply written as... [Pg.298]

For many catalytic reactions with nonlinear steps, derivation of kinetic equations can be challenging. In order to avoid such difficulties, Lazman and Yablonsky applied constructive algebraic geometry to nonlinear kinetics, expressing the reaction rate of a complex reaction as an implicit function of concentrations and temperature. This concept of kinetic polynomial [6] has found important applications including parameter estimation, analysis of kinetic model identifiability and finding all steady-states of kinetic models. The Lazman-Yablonsky four-term rate equation for the polynomial kinetics is ... [Pg.208]

X is an acidity function based on the first-order approximation, Eq. (8-92). Values of X have been assigned by an iterative procedure. The data consist of values of cB/cBH+ as functions of cH+ for a large number of indicators. For each indicator an initial estimate of pKBH+ and m is made and X is calculated with Eq. (8-94). This yields a large body of X values, which are fitted to a polynomial in acid concentration. From this fitted curve smoothed X values are obtained, and Eq. (8-94), a linear function in X, allows refined values of pKBH+ and m to be obtained. This procedure continues until the parameters undergo no further change. Table 8-20 gives X values for sulfuric and perchloric acid solutions. [Pg.451]

The last example serves to show that in some cases the exponentiated polynomial function used to estimate the true parameter distribution can show serious lack of fit. Therefore other estimating functions are required. [Pg.293]

The permeability coefficients and molecular radii are known. The effective pore radius, R, is the only unknown and is readily calculated by successive approximation. Consequently, unknown parameters (i.e., porosity, tortuosity, path length, electrical factors) cancel, and the effective pore radius is calculated to be 12.0 ± 1.9 Å. Because the Renkin function [see Eq. (35)] is a rapidly decaying polynomial function of molecular radius, the estimation of R is more sensitive to small uncertainties in the calculated molecular radius values than it is to experimental variabilities in the permeability coefficients. The placement of the permeants within the molecular sieving function is shown in Figure 9 for the effective... [Pg.263]
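The successive-approximation step can be sketched with the classical Renkin hindrance function. The molecular radii and the recovered pore radius below are hypothetical, and bisection stands in for whatever iteration scheme the authors used; the key point, as the passage notes, is that the unknown porosity/tortuosity/path-length factors cancel when a ratio of hindrance factors is matched.

```python
def renkin(lmbda):
    """Renkin hindrance factor for relative solute radius lambda = a/R in a cylindrical pore."""
    if lmbda >= 1.0:
        return 0.0
    return (1 - lmbda) ** 2 * (1 - 2.104 * lmbda + 2.09 * lmbda ** 3 - 0.95 * lmbda ** 5)

def solve_pore_radius(a_small, a_large, ratio, hi=100.0):
    """Bisection for R such that renkin(a_small/R)/renkin(a_large/R) = ratio.
    The ratio decreases monotonically with R (it diverges as R -> a_large and tends to 1)."""
    lo = a_large * 1.001
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if renkin(a_small / mid) / renkin(a_large / mid) > ratio:
            lo = mid        # ratio too large -> true R is bigger than mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# hypothetical permeants: radii 2.0 and 4.0 Angstrom; target ratio generated at R = 12 Angstrom
target = renkin(2.0 / 12.0) / renkin(4.0 / 12.0)
R = solve_pore_radius(2.0, 4.0, target)
```

Because the Renkin polynomial decays rapidly, small changes in the assumed molecular radii shift the recovered R noticeably, which is exactly the sensitivity the excerpt describes.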

This model tends to approach a zero probability rapidly at low doses (although it never reaches zero) and thus is compatible with the threshold hypothesis. Mantel and Bryan, in applying the model, recommend setting the slope parameter b equal to 1, since this appears to yield conservative results for most substances. Nevertheless, the slope of the fitted curve is extremely steep compared to other extrapolation methods, and it will generally yield lower risk estimates than any of the polynomial models as the dose approaches zero. [Pg.302]
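The steepness can be illustrated with a log-probit sketch of the kind Mantel and Bryan used, P(d) = Φ(a + b·log10 d) with the conservative unit slope b = 1. The anchoring of the intercept a below (so that the risk at unit dose is about 1%) is a hypothetical choice for illustration, not their published values.

```python
from math import erf, log10, sqrt

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

b = 1.0        # Mantel-Bryan conservative unit slope
a = -2.3263    # hypothetical anchor: z-value of the 1st percentile, so risk(1.0) ~ 0.01

def risk(dose):
    return phi(a + b * log10(dose))

p_anchor = risk(1.0)    # ~0.01 at the anchor dose
p_low    = risk(0.01)   # two decades lower in dose
linear   = 0.01 * 0.01  # proportional (linear) extrapolation for comparison
```

Dropping the dose by two decades cuts the probit risk to roughly 1e-5, more than an order of magnitude below the linear extrapolate of 1e-4, which is the steep low-dose behavior described above.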

