Regression Functions

It can be easily proved that when f(x) is the regression function (Vapnik, 1982)... [Pg.202]

Determination of sample results from regression function... [Pg.719]

Hartley, H.O., "The Modified Gauss-Newton Method for the Fitting of Non-Linear Regression Functions by Least Squares", Technometrics, 3(2), 269-280 (1961). [Pg.395]

The basic assumptions for the application of graphic isotherm and regression equations are that the data are obtained under equilibrium conditions, at constant temperature, and with minimal fixation effects, and that the data can be modeled by a regression function. The equations are valid only within the experimental concentration ranges used to determine the sorption. [Pg.174]

Table 2.6 Linear regression functions representing the dependence of log(A s) vs. log( /Jnax) for different thicknesses of the film. The conditions of the simulations are tiEg = 50 mV, ΔE = 10 mV, Oa = 0.5, D = 1 × 10 cm² s⁻¹. The last column lists the interval of kg values over which the regression line was calculated...
Note that the outlier test assumes that the chosen approach for the regression function is correct. First we should look at the plot of the residual analysis, because potential outliers can be recognised there. We then calculate the regression both with and without the potential outlier, after which we can apply either the F-test or the t-test... [Pg.191]
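A simplified sketch of this idea in Python (the calibration data below are invented, and the exact test statistic used in the text may differ): fit the regression line with and without the suspected point and compare the residual variances with an F-ratio.

import numpy as np
from scipy import stats

# Hypothetical calibration data; the last point is the suspected outlier.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.0, 4.1, 5.9, 8.2, 9.8, 14.5])

def residual_variance(x, y):
    # Residual variance s^2 of a straight-line least-squares fit (n - 2 degrees of freedom)
    b, a = np.polyfit(x, y, 1)
    resid = y - (a + b * x)
    return np.sum(resid**2) / (len(x) - 2)

s2_with = residual_variance(x, y)                # all points included
s2_without = residual_variance(x[:-1], y[:-1])   # suspected outlier removed

# Simple F-ratio of the two residual variances; a large value indicates that
# the suspected point inflates the scatter around the regression line.
F = s2_with / s2_without
p = 1.0 - stats.f.cdf(F, dfn=len(x) - 2, dfd=len(x) - 3)
print(F, p)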

Or we may calculate the prognostic interval of the regression function without the potential outlier, i.e. the interval within which we would expect an additional value to lie with a certain confidence. [Pg.192]

If there is a linear relationship, unknowns can be calculated from the regression function. But as shown in Fig. 1.1, for example, the linear correlation very often holds only approximately and only within a small range. [Pg.235]

Calculation of the molecular mass of an unknown protein follows the same procedure as, for example, quantitative protein determination: plotting the Rf of the calibration proteins against their molecular mass, computing a standard curve, and estimating the MW of the unknown protein using the regression functions of the standard curve (cf. Fig. 2.1). [Pg.243]

The white-line surface area was obtained as the integrated Lorentzian part of the regression function ... [Pg.301]

The most common calibration model or function in use in analytical laboratories assumes that the analytical response is a linear function of the analyte concentration. Most chromatographic and spectrophotometric methods use this approach. Indeed, many instruments and software packages have linear calibration (regression) functions built into them. The main type of calculation adopted is the method of least squares, whereby the sums of the squares of the deviations from the predicted line are minimised. It is assumed that all the errors are contained in the response variable, Y, and that the concentration variable, X, is error free. Commonly the models available are Y = bX and Y = bX + a, where b is the slope of the calibration line and a is the intercept. These values are the least squares estimates of the true values. The following discussions are only... [Pg.48]
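As a brief illustration of this approach (the concentrations and responses below are invented, not taken from the text), the model Y = bX + a can be fitted by least squares and then used to predict an unknown concentration as follows:

import numpy as np

# Hypothetical calibration data: concentration X (assumed error-free)
# and measured response Y, which is assumed to carry all of the error.
X = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])        # e.g. mg/L
Y = np.array([0.02, 0.21, 0.39, 0.62, 0.80, 1.01])   # e.g. absorbance

# Least-squares estimates of slope b and intercept a for Y = b*X + a;
# np.polyfit minimises the sum of squared deviations in Y.
b, a = np.polyfit(X, Y, deg=1)

# Estimate the concentration of an unknown sample from its measured response.
y_unknown = 0.55
x_unknown = (y_unknown - a) / b
print(f"b = {b:.4f}, a = {a:.4f}, estimated concentration = {x_unknown:.2f} mg/L")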

Coefficients of determination should not be used as the sole criterion for the evaluation of regression functions. Problems may arise, e.g., with autocorrelated measurements (see also Section 6.6). [Pg.61]

The next step is the very easy computation of the main effects of each factor, the two-factor interaction, and the model parameters of the possible regression function. [Pg.81]
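A minimal sketch of such a computation for a hypothetical 2^2 factorial design (the coded factor levels and responses below are invented for illustration, not taken from the text):

import numpy as np

# Coded factor levels (-1/+1) of a hypothetical 2^2 factorial design
x1 = np.array([-1,  1, -1,  1])
x2 = np.array([-1, -1,  1,  1])
y  = np.array([45.0, 55.0, 52.0, 70.0])   # measured responses (invented)

# Main effects and interaction: mean response at the +1 level minus mean at the -1 level
effect_1  = y[x1 == 1].mean() - y[x1 == -1].mean()
effect_2  = y[x2 == 1].mean() - y[x2 == -1].mean()
effect_12 = y[x1 * x2 == 1].mean() - y[x1 * x2 == -1].mean()

# Parameters of the regression function y = b0 + b1*x1 + b2*x2 + b12*x1*x2;
# for a two-level design each coefficient is half the corresponding effect.
b0 = y.mean()
b1, b2, b12 = effect_1 / 2, effect_2 / 2, effect_12 / 2
print(effect_1, effect_2, effect_12, b0, b1, b2, b12)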

Because introducing the reader to actual optimization techniques is beyond the scope of this book, let us only indicate here that, with the model obtained, the analyst or technician is able to find the optimum value of y by partially differentiating the regression function. Setting each partial derivative to zero, he or she may find the optimum values of the single variables. Substituting these values into the model equation will yield the optimum value of y. [Pg.85]
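As a hedged numerical sketch of this idea (the second-order model and all coefficients below are invented for illustration), setting the partial derivatives of a two-variable quadratic regression function to zero gives a linear system whose solution is the stationary point:

import numpy as np

# Hypothetical second-order regression model
# y = b0 + b1*x1 + b2*x2 + b11*x1**2 + b22*x2**2 + b12*x1*x2
b0, b1, b2, b11, b22, b12 = 60.0, 4.0, 2.5, -1.2, -0.8, 0.5

# dy/dx1 = b1 + 2*b11*x1 + b12*x2 = 0
# dy/dx2 = b2 + b12*x1 + 2*b22*x2 = 0
A = np.array([[2 * b11, b12],
              [b12, 2 * b22]])
rhs = np.array([-b1, -b2])
x1_opt, x2_opt = np.linalg.solve(A, rhs)

# Substituting the stationary point back into the model gives the optimum response.
y_opt = (b0 + b1 * x1_opt + b2 * x2_opt
         + b11 * x1_opt**2 + b22 * x2_opt**2 + b12 * x1_opt * x2_opt)
print(x1_opt, x2_opt, y_opt)

Whether the stationary point is a maximum, a minimum, or a saddle point depends on the second-order coefficients (the Hessian of the fitted model).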

The coefficient of the LMS regression function for the relationship between plant and soil lead content, a0,soil = 0.90 mg kg⁻¹ Pb, corresponds to the median amount of lead taken up by the plants from atmospheric emissions if the soil lead content were zero,... [Pg.344]

Since it is a characteristic feature of the precision of the prediction of the regression function (1) the estimated standard deviation of a predicted single value ypred at position x0... [Pg.254]
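The expression itself is cut off in this excerpt. For ordinary simple linear regression the standard form of this quantity (which may differ in detail from the source's equation) is

$$ s(\hat{y}_{\mathrm{pred}}) = s_{y.x}\,\sqrt{1 + \frac{1}{n} + \frac{(x_0 - \bar{x})^2}{\sum_{i=1}^{n}(x_i - \bar{x})^2}} $$

where s_{y.x} is the residual standard deviation of the regression, n is the number of calibration points, and x̄ is the mean of the calibration x values; this standard deviation is what defines the prognostic (prediction) interval mentioned above.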

The cumulant functions provide a basis for parameter estimation using weighted least squares. The expected value function κ1(ti) could serve as the regression function, the variance function κ2(ti) supplies the weights, and κ3(ti) provides a simple indicator of possible departure from an assumed symmetric distribution. [Pg.266]
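A minimal weighted-least-squares sketch in this spirit (the straight-line mean function, the variances, and the data below are invented and merely stand in for the cumulant functions of the source):

import numpy as np

# Hypothetical observations y_i at times t_i with known (or estimated) variances var_i,
# playing the role of the variance function that supplies the weights.
t = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.3])
var = np.array([0.2, 0.3, 0.5, 0.8, 1.2])

# Weighted least squares for a straight-line mean function E[y] = a + b*t,
# weighting each residual by the reciprocal of its variance.
w = 1.0 / var
X = np.column_stack([np.ones_like(t), t])                   # design matrix
W = np.diag(w)
a_hat, b_hat = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)    # (X'WX)^-1 X'Wy
print(a_hat, b_hat)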

Polynomial Regression Polynomial regression is a special case of multiple linear regression. The regression function is given by ... [Pg.142]
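The formula is truncated in this excerpt; in its usual form the polynomial regression function is y = b0 + b1 x + b2 x^2 + ... + bd x^d, which is linear in the coefficients b0, ..., bd. A minimal sketch of fitting it as a multiple linear regression, with the powers of x serving as the independent variables (data invented for illustration):

import numpy as np

# Hypothetical data roughly following a quadratic trend
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 2.0, 4.8, 10.2, 17.1, 26.5])

# Design matrix with columns 1, x, x^2: polynomial regression is just
# multiple linear regression on the powers of x.
degree = 2
X = np.vander(x, degree + 1, increasing=True)

# Ordinary least-squares estimates of b0, b1, b2
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coeffs)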

To compute the BLUP (18) of a marginal effect we need the vectors f_e(x_e) in (16) and r_e(x) following (22), both of which involve integration over all the variables not in e. For computational convenience in performing these integrations, we need two further product-structure conditions, in addition to (8) and (9). They relate to the properties of the random-function model, specifically the regression functions, f(x), in (1) and the correlation function, R(x, x′), in (2). [Pg.324]

First, we assume each regression function is a product of functions in just one input variable; that is, element k of f(x) can be written... [Pg.325]
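The expression is cut off in this excerpt; under this assumption the usual product form (the exact notation of the source may differ) is

$$ f_k(x) = \prod_{j=1}^{d} f_{kj}(x_j), $$

where d is the number of input variables and each factor f_{kj} depends only on the single variable x_j.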

If the number of independent variables is increased in the process, then the regression function will contain all the independent variables as well as their simple or multiple interactions. At the same time, the number of dependent variables also increases, and, for each of the new dependent variables, we have to consider the problem of identifying the parameters. [Pg.330]

With respect to this experimental effort, it is important to specify that it is sometimes difficult to measure the variables involved in a chemical process. They include concentrations, pressures, temperatures and masses or flow rates. In addition, during the measurement of each factor or dependent variable, we must determine the procedure, as well as the precision, corresponding to the requirements imposed by the experimental plan [5.4]. When the investigated process shows only a few independent variables, Fig. 5.3 can be simplified. The case of a process with one independent and one dependent variable has a didactic importance, especially when the regression function is not linear [5.15]. [Pg.331]

In experimental research, each studied case is generally characterized by the measurement of x (x values) and y (y values). Each chain of x and each chain of y represents a statistical selection, because these chains must be extracted from a very large number of possibilities (which can be defined as populations). However, for simplification purposes in the example above (Table 5.2), we have limited the input and output variables to only 5 selections. To begin the analysis, the researcher has to answer a first question: what values must be used for x (and the corresponding y) when we start the identification of the coefficients of a regression function? Because the normal equation system (5.9) requires the same number of x and y values, we can observe that the data from Table 5.2 cannot be used as presented for this purpose. To prepare these data for the mentioned scope, we observe that, for each proposed x value (x = 13.5 g/l, x = 20 g/l, x = 27 g/l, x = 34 g/l, x = 41 g/l), several measurements are available; these values can be condensed into one by means of the corresponding mean value. So, for each type of x data, we use a mean value, where, for example, i = 5 for the first case (proposed x = 13.5 g/l), i = 3 for the third case, etc. The same procedure is applied for y where, for example, i = 4 for the first case, i = 6 for the second case, etc. [Pg.334]
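A minimal sketch of this data-preparation step (the replicate values below are invented; only the five proposed concentration levels follow the text), condensing the replicates at each level to their mean and then fitting a straight line via the normal equations:

import numpy as np

# Hypothetical replicate measurements y at each proposed concentration x (g/l);
# the number of replicates differs from level to level, as in the text.
data = {
    13.5: [8.1, 8.4, 7.9, 8.2, 8.3],                 # 5 replicates
    20.0: [11.8, 12.1, 12.4, 11.9, 12.0, 12.2],
    27.0: [15.9, 16.3, 16.1],
    34.0: [20.2, 19.8, 20.5, 20.1],
    41.0: [24.4, 24.0, 24.7, 24.2, 24.5],
}

# Replace the replicates at each level by their mean, so that the x and y chains
# have the same length and the normal equation system can be set up.
levels = sorted(data)
x = np.array(levels)
y = np.array([np.mean(data[c]) for c in levels])

# Straight-line regression function y = a + b*x via the normal equations (X'X) beta = X'y
X = np.column_stack([np.ones_like(x), x])
a_hat, b_hat = np.linalg.solve(X.T @ X, X.T @ y)
print(a_hat, b_hat)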

