Big Chemical Encyclopedia


Polynomial regression model

Peixoto, J. L. (1990). A property of well-formulated polynomial regression models. The American Statistician, 44, 26-30. (Correction 45, 82.)... [Pg.266]

In practice, one is often faced with choosing a model that is easily interpretable but may not approximate a response very well, such as a low-order polynomial regression, or with choosing a black-box model, such as the random-function model in equations (1)-(3). Our approach makes this black-box model interpretable in two ways: (a) the ANOVA decomposition provides a quantitative screening of the low-order effects, and (b) the important effects can be visualized. By comparison, in a low-order polynomial regression model, the relationship between input variables and an output variable is more direct. Unfortunately, as we have seen, the complexities of a computer code may be too subtle for such simple approximating models. [Pg.323]

Fortunately, the polynomial regression models commonly used are made up of functions fk(x) that are products of powers of single variables. With (24), element k of fe(xe) in (16) is... [Pg.325]

It is notable that such error sources are treated evenhandedly using the concept of measurement uncertainty, which makes no distinction between "random" and "systematic". When simulated samples with known analyte content can be prepared, the effect of the matrix is a matter of direct investigation with respect to its chemical composition as well as the physical properties that influence the result and may be at different levels for analytical samples and a calibration standard. It has long been suggested in examinations of matrix effects [26, 27] that the influence of matrix factors be varied at (at least) two levels corresponding to their upper and lower limits in accordance with an appropriate experimental design. The results from such an experiment enable the main effects of the factors, and also the interaction effects, to be estimated as coefficients in a polynomial regression model, with the variance of matrix-induced error found by statistical analysis. This variance is simply the (squared) standard uncertainty we seek for the matrix effects. [Pg.151]
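As an illustrative sketch of such a two-level design (the factor levels, responses, and coefficient names below are hypothetical, not from the text), the main effects and the interaction effect of two matrix factors can be estimated as least-squares coefficients of the polynomial model:

```python
import numpy as np

# Hypothetical 2^2 factorial design: two matrix factors at coded levels -1/+1
X_design = np.array([[-1.0, -1.0], [+1.0, -1.0], [-1.0, +1.0], [+1.0, +1.0]])
# Hypothetical measured responses (e.g. recoveries) for each factor combination
y = np.array([98.2, 101.5, 97.1, 103.0])

# Model matrix for y = b0 + b1*x1 + b2*x2 + b12*x1*x2
X = np.column_stack([np.ones(4), X_design[:, 0], X_design[:, 1],
                     X_design[:, 0] * X_design[:, 1]])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
b0, b1, b2, b12 = b  # intercept, two main effects, one interaction effect
```

Because the two-level design columns are orthogonal, each coefficient is simply the corresponding contrast of the responses divided by the number of runs.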

Researchers can draw on their primary field knowledge to determine this, whereas a card-carrying statistician usually cannot. The statistician may decide to use a polynomial regression model and is sure that, with some manipulation, it can model the data better, particularly in that the error at each... [Pg.37]

Polynomial regression models are useful in situations in which the curvilinear response function is too complex to linearize by means of a transformation, and an estimated response function fits the data adequately. Generally, if the modeled polynomial is not too complex to be generalized to a wide variety of similar studies, it is useful. On the other hand, if a modeled polynomial overfits the data of one experiment, then, for each experiment, a new polynomial must be built. This is generally ineffective, as the same type of experiment must use the same model if any iterative comparisons are required. Figure 7.1 presents a dataset that can be modeled by a polynomial function, or that can be set up as a piecewise regression. It is impossible to linearize this function by a simple scale transformation. [Pg.241]
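A minimal sketch of such a polynomial fit (the data below are synthetic and are not those of Figure 7.1): a cubic response that no simple scale transformation linearizes is fitted directly by least squares:

```python
import numpy as np

# Hypothetical curvilinear data (noise-free for illustration)
x = np.linspace(0.0, 4.0, 9)
y = 1.0 - 2.0 * x + 0.9 * x**2 - 0.1 * x**3

# Fit a third-order polynomial regression model by least squares
coeffs = np.polyfit(x, y, deg=3)   # coefficients, highest power first
y_hat = np.polyval(coeffs, x)      # fitted response
```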

Polynomial regression models often use data that are ill-conditioned, in that the matrix [X'X] is unstable and error-prone. This usually results in a huge variance (MSE). We discussed aspects of this situation in Chapter 6. When the model y = b0 + b1xi + b2xi^2 is used, xi^2 and xi will be highly correlated because xi^2 is the square of xi. If the correlation is not serious, for example because of a wide range spread in the selection of the xi values, it may not be a problem, but it should be evaluated. [Pg.244]
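A small numerical illustration of this point (the x range below is hypothetical): over a narrow range, xi and xi^2 are nearly collinear, and centering xi before squaring removes most of the correlation:

```python
import numpy as np

x = np.linspace(10.0, 12.0, 21)      # narrow range: x and x^2 nearly collinear
r_raw = np.corrcoef(x, x**2)[0, 1]   # close to 1 -> ill-conditioned [X'X]

xc = x - x.mean()                    # center before forming the squared term
r_centered = np.corrcoef(xc, xc**2)[0, 1]  # near zero for symmetric spacing
```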

Fig. 23. Linear and quartic models applied to a three-line, 31-point set of synthetic data. Plots are (a) linear regression model, (b) residuals pattern for linear fit, (c) fourth-order polynomial regression model, and (d) residuals pattern for quartic fit. The nonrandomness of the quartic fit residuals indicates a poor fit for this model even though it appears to be a good fit visually.
Polynomial regression of the relationship of Δ(δ18Op) versus elevation (z), measured in meters, derived from modeling all possible modern starting T and RH pairs for values of Δ(δ18Op) between 0‰ and -25‰, results in a curve that is only slightly different from the equation reported by Currie et al. (2005) and used by Rowley and Currie (2006), as well as that of Rowley and Garzione (2007). The revision is Equation (5) ... [Pg.33]

Lee et al. (2005) estimated CO2 fluxes in a forest near Takayama on the basis of root system respiration using the polynomial constituent of the regression model, which took into account the temperature and moisture of the soil and reflected the hourly regime of soil respiration. It was shown that the contribution of the forest root system (0.48 kg C km^-2 yr^-1) to the soil CO2 flux (1.06 kg C km^-2 yr^-1) constitutes 45%. This highlights the importance of reflecting the role of the root system in models of forest ecosystems as an independent element of the ecosystem. [Pg.190]

Instead of artificially transforming the data to a linear model, our group developed an approach in which the relation between isotope ratios and mole ratios is described by means of a polynomial regression (Jonckheere et al., 1982). In this, the basic IDMS equation [Eq. (1)] is seen as a rational function ... [Pg.136]

Fig. 4. Flow scheme of model testing in polynomial regression analysis. Reprinted with permission from Anal. Chem. 55, 153-155 (1982). Copyright ACS.
Finally, it is obvious that the presented polynomial regression analysis with model testing requires a reasonable computational facility. A computer program, RAMP, is available in FORTRAN IV or HPL-BASIC (Jonckheere et al., 1982). [Pg.139]

The presence of both mutually dependent (mixture) and independent (process) variables calls for a new type of regression model that can accommodate these peculiarities. The models that serve quite satisfactorily are combined canonical models. They are derived from the usual polynomials by a transformation on the mixture-related terms. To construct these types of models, one must keep in mind some simple rules: these models do not have an intercept term, and for second-order models, only the terms corresponding to the process variables can be squared. Also, despite the external similarity to the polynomials for process variables only, it is not possible to draw any conclusions about the importance of the terms by inspecting the values of the regression coefficients: because the mixture variables depend on one another, the coefficients are correlated. Basically, the regression model for mixture and process variables can be divided into three main parts: mixture terms, process terms, and mixture-process interaction terms that describe the interaction between both types of variables. To clearly understand these kinds of models, the order of the mixture and process parts of the model must be specified. Below are listed some widely used structures of combined canonical models. The number of mixture variables is designated by q, the number of process variables is designated by p, and the total number of variables is n = q + p. [Pg.284]
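As a sketch of the structure just described (the function name and design points below are illustrative assumptions, not from the text), a first-order-by-first-order combined model matrix can be assembled from mixture terms, process terms, and their products, with no intercept column:

```python
import numpy as np

def combined_canonical_matrix(Xm, Xp):
    """Model matrix for a first-order mixture by first-order process model:
    mixture terms x_i (no intercept, since sum(x_i) = 1 absorbs it),
    process terms z_k, and mixture-process interaction terms x_i * z_k."""
    q, p = Xm.shape[1], Xp.shape[1]
    cols = [Xm[:, i] for i in range(q)]                    # mixture part
    cols += [Xp[:, k] for k in range(p)]                   # process part
    cols += [Xm[:, i] * Xp[:, k]                           # interaction part
             for i in range(q) for k in range(p)]
    return np.column_stack(cols)

# Hypothetical design: q = 3 mixture components (rows sum to 1), p = 1 process variable
Xm = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0],
               [0.0, 0.0, 1.0], [1/3, 1/3, 1/3]])
Xp = np.array([[-1.0], [1.0], [-1.0], [1.0]])
M = combined_canonical_matrix(Xm, Xp)   # shape (4, q + p + q*p) = (4, 7)
```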

It is worth noting that a linear model is linear with respect to the parameters, but not necessarily with respect to the true independent variable(s), since the variables x2, ..., xm can be non-linear functions of the true independent variable(s). For example, a polynomial regression is actually a linear model. [Pg.312]
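A brief illustration of this point (the data are invented for the example): a quadratic in the true independent variable t is still a linear model, because the design-matrix columns 1, t, and t^2 enter linearly in the parameters:

```python
import numpy as np

t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 + 0.5 * t - 0.25 * t**2        # quadratic in t, linear in (b0, b1, b2)

# Linear-model design matrix: columns are known functions of t
X = np.column_stack([np.ones_like(t), t, t**2])
b, *_ = np.linalg.lstsq(X, y, rcond=None)   # ordinary least squares
```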

Remedial Actions When linearity has been rejected by one or more of the above-mentioned tests, the regression plot can be linearized or one can make use of polynomial or nonlinear regression models. [Pg.142]

For statistical samples of small volume, an increase in the order of the polynomial regression can produce a serious increase in the residual variance. We can reduce the number of coefficients in the model, but then we must introduce a transcendental regression relationship for the variables of the process. From the general theory of statistical process modelling (relations (5.1)-(5.9)) we can claim that the use of these types of relationships between dependent and independent process variables is possible. However, when using such relationships, it is important to obtain an excellent ensemble of statistical data (i.e. with small residual and relative variances). [Pg.362]
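A small simulated illustration (the data are generated here and are not from the text): on a small sample, raising the polynomial order always lowers the residual sum of squares, while the residual degrees of freedom shrink, so the residual variance SSE/dof can grow:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 8)                       # small sample
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.1, x.size)   # truly first-order + noise

def fit_stats(deg):
    coeffs = np.polyfit(x, y, deg)
    sse = float(np.sum((y - np.polyval(coeffs, x)) ** 2))
    dof = x.size - (deg + 1)                       # residual degrees of freedom
    return sse, sse / dof                          # SSE and residual variance

stats = {deg: fit_stats(deg) for deg in (1, 2, 3, 4, 5)}
```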

In the regression model, a polynomial function is considered, y = b0 + b1x + b2x^2 + b3x^3; for the linear model, the terms x^2 and larger are ignored ... [Pg.121]

Example C.3 applies GREGPLUS with differential reactor data to assess four rival models of hydrogenation of iso-octenes for production of aviation gasoline. The variance estimate needed for discrimination of these models was estimated from the residuals of high-order polynomial regressions of the same data set. [Pg.229]
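As a sketch of this variance-estimation idea (GREGPLUS itself is not used here; the data and settings below are invented), the residuals of a deliberately high-order polynomial fit can supply a variance estimate when replicates are unavailable:

```python
import numpy as np

# Hypothetical replicate-free data with unknown smooth trend plus noise
x = np.linspace(0.0, 2.0, 15)
rng = np.random.default_rng(7)
y = np.exp(-x) + rng.normal(0.0, 0.05, x.size)

deg = 6                                  # high enough to track the trend
coeffs = np.polyfit(x, y, deg)
residuals = y - np.polyval(coeffs, x)
s2 = np.sum(residuals**2) / (x.size - (deg + 1))   # residual variance estimate
```

The idea is that a sufficiently high-order polynomial absorbs the systematic trend, leaving residuals that reflect mostly measurement error.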

Serne and Muller (1987) describe attempts to find statistical empirical relations between experimental variables and the measured sorption ratios (Rds). Mucciardi and Orr (1977) and Mucciardi (1978) used linear (polynomial regression of first-order independent variables) and nonlinear (multinomial quadratic functions of paired independent variables, termed the Adaptive Learning Network) techniques to examine effects of several variables on sorption coefficients. The independent variables considered included cation-exchange capacity (CEC) and surface area (SA) of the solid substrate, solution variables (Na, Ca, Cl, HCO3), time, pH, and Eh. Techniques such as these allow modelers to construct a narrow probability density function for Kds. [Pg.4764]

