Big Chemical Encyclopedia


Regression, nonlinear

A nonlinear model that occurs quite frequently is Y = β0 exp(β1 x). [Pg.145]

This model is usually handled by taking the natural log of both sides of the equation, yielding ln Y = ln β0 + β1 x. [Pg.145]

Letting Z = ln Y, a0 = ln β0, and a1 = β1, the model thus reduces to the linear model Z = a0 + a1 x. [Pg.145]

Now the method of least squares can be applied to determine the regression coefficients a0 and a1. [Pg.145]

This nonlinear model becomes linear once logarithms are taken and the substitutions are made. [Pg.145]
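As a concrete illustration of this log-linearization, the sketch below (assuming the exponential form Y = β0 exp(β1 x) and invented data) fits the transformed model Z = a0 + a1 x by ordinary least squares with NumPy and back-transforms the intercept:

```python
import numpy as np

# Hypothetical data assumed to follow Y = b0 * exp(b1 * x)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([2.7, 7.4, 20.1, 54.6, 148.4])

# Linearize: Z = ln(Y) = a0 + a1*x, with a0 = ln(b0) and a1 = b1
Z = np.log(Y)
a1, a0 = np.polyfit(x, Z, 1)   # ordinary least squares on the transformed data

b0, b1 = np.exp(a0), a1        # back-transform the intercept
print(f"b0 = {b0:.3f}, b1 = {b1:.3f}")
```

Note that fitting in log space weights relative rather than absolute errors, so the coefficients can differ slightly from those of a direct nonlinear fit of the original model.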

Nonlinear regression is a curve fit in which the unknown parameters enter into the problem in a nonlinear way. Nonlinear regression is much more difficult (for the computer), so it is best to always try to manipulate your model into a form that is linear. Sometimes that is not possible, and then nonlinear regression must be used. You need to be aware, though, that the methods described here do not always work. Nonlinear regression uses techniques borrowed from the field of optimization, and it is difficult to construct a method that works every single time for every problem. [Pg.304]

To use nonlinear regression, you minimize Eq. (E.3) with respect to the unknown parameters. Polynomial and multiple regression do this too (behind the scenes), but for nonlinear curve fits it is necessary to use functions such as Solver in Excel and fminsearch in MATLAB. This is demonstrated using the same example given above for multiple regression. [Pg.304]
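A minimal sketch of the same idea outside Excel or MATLAB, assuming a hypothetical two-parameter model y = p0 x/(p1 + x) and invented data, is to hand the sum-of-squares objective to SciPy's Nelder-Mead minimizer (the same derivative-free simplex search that underlies fminsearch):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical data to be fit with y = p0 * x / (p1 + x), which is nonlinear in p1
x = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
y = np.array([0.29, 0.47, 0.66, 0.80, 0.88])

def sum_of_squares(p):
    p0, p1 = p
    y_calc = p0 * x / (p1 + x)
    return np.sum((y - y_calc) ** 2)

# Nelder-Mead is the simplex method also used by MATLAB's fminsearch
result = minimize(sum_of_squares, x0=[1.0, 1.0], method="Nelder-Mead")
print(result.x, result.fun)
```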

In nonlinear regression analysis, we search for those parameter values that minimize the sum of the squares of the differences between the measured values and the calculated values for all the data points. Not only can nonlinear regression find the best estimates of parameter values, it can also be used to discriminate between different rate law models, such as the Langmuir-Hinshelwood models discussed in Chapter 10. Many software programs are available to find these parameter values, so that all one has to do is enter the data. The Polymath software will be used to illustrate this technique. In order to carry out the search efficiently, in some cases one has to enter initial estimates of the parameter values close to the actual values. These estimates can be obtained using the linear least-squares technique discussed on the CD-ROM Professional Reference Shelf. [Pg.271]

We will now apply nonlinear least-squares analysis to reaction rate data to determine the rate law parameters. Here we make estimates of the parameter values (e.g., reaction order, specific rate constants) in order to calculate the rate of reaction, r_ic. We then search for those values that will minimize the sum of the squared differences between the measured reaction rates, r_im, and the calculated reaction rates, r_ic. That is, we want the sum of (r_im − r_ic)² for all data points to be a minimum. [Pg.271]

To illustrate this technique, let's consider the first-order reaction [Pg.272]

The reaction rate will be measured at a number of different concentrations. We now choose values of k and α and calculate the rate of reaction (r_ic) at each concentration at which an experimental point was taken. We then subtract the calculated value from the measured value (r_im), square the result, and sum the squares for all the runs for the values of k and α we have chosen. [Pg.272]

This procedure is continued by further varying α and k until we find their best values, that is, those values that minimize the sum of the squares. Many well-known searching techniques are available to obtain the minimum value. [Pg.272]
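A sketch of this search, assuming a power-law rate −rA = k CA^α and hypothetical rate-concentration data, can be written with scipy.optimize.least_squares, which varies k and α until the sum of squared differences between measured and calculated rates is minimized:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical measured rates, assumed to follow -rA = k * CA**alpha
CA = np.array([0.2, 0.4, 0.8, 1.6, 3.2])                      # mol/L
rate_meas = np.array([0.021, 0.041, 0.083, 0.160, 0.330])     # mol/(L·s)

def residuals(params):
    k, alpha = params
    rate_calc = k * CA ** alpha
    return rate_meas - rate_calc      # least_squares minimizes the sum of squared residuals

fit = least_squares(residuals, x0=[0.1, 1.0])
k, alpha = fit.x
print(f"k = {k:.3f}, alpha = {alpha:.2f}")
```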

We will now apply nonlinear regression to reaction rate data to determine the rate law parameters. Here we make initial estimates of the parameter values (e.g., reaction order, specific rate constant) in order to calculate the concentration for each data point, C_ic, obtained by solving an integrated form of the combined mole balance and rate law. We then compare the measured concentration at that point, C_im, with the calculated value, C_ic, for the parameter values chosen. We make this comparison by calculating the sum of the squares of the differences at each point, Σ(C_im − C_ic)². We then continue to choose new parameter values and search for those values of the rate law that will minimize the sum of the squared differences between the measured concentrations, C_im, and the calculated concentration values, C_ic. That is, we want to find the rate law parameters for which the sum of (C_im − C_ic)² over all data points is a minimum. If we carried out N experiments, we would want to find the parameter values (e.g., activation energy, reaction orders) that minimize the quantity [Pg.259]

σ² = Σ(C_im − C_ic)²/(N − K), where K = number of parameters to be determined, C_im = measured concentration for run i, and C_ic = calculated concentration for run i. [Pg.259]
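A sketch of this concentration-based (integral) approach for a hypothetical nth-order batch reaction, using the analytical integral of the combined mole balance and rate law and invented concentration-time data, might look like:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical batch data assumed to follow dCA/dt = -k * CA**alpha
t       = np.array([0.0, 2.0, 4.0, 8.0, 12.0, 20.0])          # min
CA_meas = np.array([1.00, 0.81, 0.68, 0.52, 0.42, 0.30])      # mol/L
CA0 = 1.0

def CA_calc(params, t):
    k, alpha = params
    # analytical integral of the combined mole balance and rate law (alpha != 1)
    return (CA0 ** (1 - alpha) + (alpha - 1) * k * t) ** (1.0 / (1 - alpha))

def residuals(params):
    return CA_meas - CA_calc(params, t)   # sum((C_im - C_ic)**2) is minimized

fit = least_squares(residuals, x0=[0.1, 1.5], bounds=([0.0, 1.01], [np.inf, 3.0]))
print("k, alpha =", fit.x)
```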

There are numerous methods for performing nonlinear regression. Here, a simple analysis is presented in order to provide the reader the general concepts used in performing a nonlinear regression analysis. [Pg.347]

To begin a nonlinear regression analysis, the model function must be known. Let  [Pg.347]

As with linear least-squares analysis, X is minimized as follows: the partial derivatives of X with respect to the parameters a are set equal to zero, for example, with respect to β, [Pg.348]

there will be n equations containing the n parameters of a. These equations involve the function f(x_i, a) and the partial derivatives of the function, that is, [Pg.348]

The set of n equations of the type shown in Equation (B.4.3) needs to be solved. This set of equations is nonlinear if f(x_i, a) is nonlinear. Thus, the solution of this set of equations requires a nonlinear algebraic equation solver; these are readily available. For information on the type of solution, consult any text on numerical analysis. Since the solution involves a set of nonlinear algebraic equations, it is performed by an iterative process. That is, initial guesses for the parameters a are required. Often, the solution will terminate at a local minimum rather than the global minimum. Thus, numerous initial guesses should be used to ensure that the final solution is independent of the initial guess. [Pg.348]
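The multi-start strategy recommended above can be sketched as follows; the double-exponential model and data are hypothetical, chosen because such models are prone to local minima:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Hypothetical data for a two-exponential model, which often has local minima
t = np.linspace(0, 10, 30)
y = 2.0 * np.exp(-0.3 * t) + 1.0 * np.exp(-2.0 * t) + rng.normal(0, 0.02, t.size)

def residuals(a):
    a1, b1, a2, b2 = a
    return y - (a1 * np.exp(-b1 * t) + a2 * np.exp(-b2 * t))

best = None
for _ in range(20):                          # repeat from many random initial guesses
    a0 = rng.uniform(0.1, 3.0, size=4)
    fit = least_squares(residuals, a0)
    if best is None or fit.cost < best.cost:
        best = fit

print("best parameters:", best.x, "SSE:", 2 * best.cost)   # cost = 0.5 * sum of squares
```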

In many cases, it may not be possible to obtain a valid linear regression model, and it may be necessary to perform nonlinear regression. In general, since nonlinear regression can handle an arbitrarily complex function, there is really no need to make any simplifications about the form of the regression model. Therefore, the model to be identified can be written as [Pg.120]

This ability to deal with general models means that much of the linear regression analysis cannot be performed exactly, since the underlying assumptions are no longer valid. Nevertheless, most of the linear regression results hold if the number of data points is much larger than the number of parameters to be estimated. The optimisation algorithm can be written as [Pg.120]

All nonlinear regression approaches use numerical methods, such as the Gauss-Newton or Levenberg-Marquardt optimisation algorithms, to search for the optimal point. [Pg.120]

The derivative matrix of this problem, called the grand Jacobian matrix, J, plays a role similar to that of the A matrix in linear regression. The Jacobian, J, for the system can be calculated as [Pg.120]

The value of J is determined at each of the data points present to obtain the grand Jacobian matrix, J, which has one row per data point and one column per parameter. [Pg.121]
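A bare-bones Gauss-Newton iteration, assuming a hypothetical model and data, shows how the grand Jacobian (one row per data point) is rebuilt and used at every step; practical codes add Levenberg-Marquardt damping for robustness:

```python
import numpy as np

# Hypothetical model y = b0 * (1 - exp(-b1 * x)) and data
x = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
y = np.array([0.78, 1.26, 1.86, 2.26, 2.42])

def model(b, x):
    return b[0] * (1.0 - np.exp(-b[1] * x))

def jacobian(b, x):
    # One row per data point, one column per parameter (the grand Jacobian J)
    J = np.empty((x.size, 2))
    J[:, 0] = 1.0 - np.exp(-b[1] * x)           # d(model)/d(b0)
    J[:, 1] = b[0] * x * np.exp(-b[1] * x)      # d(model)/d(b1)
    return J

b = np.array([1.0, 1.0])                        # initial guess
for _ in range(20):
    r = y - model(b, x)                         # residual vector
    J = jacobian(b, x)
    delta = np.linalg.solve(J.T @ J, J.T @ r)   # Gauss-Newton step
    b = b + delta
    if np.linalg.norm(delta) < 1e-8:
        break

print("fitted parameters:", b)
```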

In some of the previous examples, the dependent variable, y, obtained by linearizing the rate equation, was not the reaction rate, −rA. Therefore, when the best values of the slope and the intercept were determined by linear regression, we were minimizing the sum of the squares of the deviations between the calculated and experimental values of some variable other than −rA. For example, in the first stage of the analysis of the data in Table 6-4, the value of α was determined by minimizing the sum of the squares of the deviations in y = ln(−rA). In the second stage of the analysis, the values of β and k were determined by minimizing the sum of the squares of the deviations in y = ln(−rA/C_A^α). These values of α, β, and k are not necessarily the same as the ones that would have been obtained if we had minimized the sum of the squares of the deviations in −rA itself. A new set of tools is required to find the best values of the parameters when −rA is not linear in the various concentrations. [Pg.171]

Fortunately, powerful nonlinear regression programs are now available. These programs allow us to minimize the sum of the squares of the deviations in any variable we choose, linear or not. Moreover, some of the easier nonlinear regression problems can be solved with a simple spreadsheet. Let's illustrate the use of a spreadsheet to carry out nonlinear regression by reanalyzing the AIBN decomposition data in Table 6-5. [Pg.171]

To begin, the Arrhenius relationship will be written in the equivalent form [Pg.171]

This transformation usually improves the convergence and stability of the numerical techniques that are used in nonlinear regression programs. Let's choose T0 as the midpoint of the range of temperatures in Table 6-5, i.e., T0 = 90°C = 363 K. We will use nonlinear regression to find the values of k(363 K) and E. [Pg.171]

There are two common approaches to determining parameter values by nonlinear regression. The first is to minimize the sum of the squares of the absolute deviations in the objective function, i.e., the rate constant, k, for the present problem. This involves finding the values of k(363 K) and E that produce a minimum value of Σ(k_i,theo − k_i,exp)². [Pg.171]
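A sketch of this first approach with SciPy, assuming the reparameterized Arrhenius form k = k(T0) exp[−(E/R)(1/T − 1/T0)] about T0 = 363 K; the rate-constant values below are hypothetical stand-ins, not the Table 6-5 data:

```python
import numpy as np
from scipy.optimize import least_squares

R = 8.314          # J/(mol·K)
T0 = 363.0         # K, midpoint of the temperature range

# Hypothetical rate-constant data (the actual Table 6-5 values are not reproduced here)
T_K   = np.array([333.0, 343.0, 353.0, 363.0, 373.0, 383.0])
k_exp = np.array([2.0e-6, 7.1e-6, 2.3e-5, 7.0e-5, 2.0e-4, 5.5e-4])   # 1/s

def k_theo(params, T):
    k_T0, E = params
    # Arrhenius written about the reference temperature T0 to improve convergence
    return k_T0 * np.exp(-(E / R) * (1.0 / T - 1.0 / T0))

def residuals(params):
    return k_theo(params, T_K) - k_exp       # absolute deviations in k

fit = least_squares(residuals, x0=[1e-4, 1.2e5])
k_363, E = fit.x
print(f"k(363 K) = {k_363:.2e} 1/s, E = {E/1000:.0f} kJ/mol")
```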

The inverse matrix, C, provides an estimate of the confidence intervals for the estimated parameters. The diagonal elements of C are the variances of the fitted parameters, i.e., [Pg.368]

The off-diagonal elements of C, C_jk, are the covariances between parameters P_j and P_k, which show the extent of correlation among the parameters. This correlation is undesirable for a regression. It can appear when too many parameters are being sought in the regression, but correlation among parameters may sometimes be unavoidable due to the structure of the model. [Pg.368]

Consider a general function f(P) = 0 that is nonlinear with respect to the parameters P_k. Under the assumption that f(P) is twice continuously differentiable, a Taylor-series expansion about a parameter set P_0 yields [Pg.368]

The optimal value for P is found when f(P) has a minimum value. At the minimum, the derivatives with respect to the parameter increments ΔP_i should be equal to zero; thus, [Pg.368]

The general formulation described above can now be applied to the nonlinear least-squares problem. [Pg.369]
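Once a least-squares fit has converged, the linearized covariance matrix described above can be estimated from the Jacobian returned by the solver; the model and data in this sketch are hypothetical:

```python
import numpy as np
from scipy import stats
from scipy.optimize import least_squares

# Hypothetical data and model y = p0 * exp(-p1 * x)
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([3.05, 1.99, 1.42, 0.95, 0.67, 0.44])

def residuals(p):
    return y - p[0] * np.exp(-p[1] * x)

fit = least_squares(residuals, x0=[2.0, 0.5])

# Linearized statistics: C = s^2 * inv(J^T J); diagonal elements are parameter variances
n, k = x.size, fit.x.size
s2 = 2 * fit.cost / (n - k)                 # residual variance (cost = 0.5 * SSE)
C = s2 * np.linalg.inv(fit.jac.T @ fit.jac) # approximate covariance matrix of the parameters
std_err = np.sqrt(np.diag(C))

t_val = stats.t.ppf(0.975, n - k)           # 95% confidence, n - k degrees of freedom
for name, p, se in zip(["p0", "p1"], fit.x, std_err):
    print(f"{name} = {p:.3f} ± {t_val * se:.3f}")
```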


VLE data are correlated by any one of thirteen equations representing the excess Gibbs energy in the liquid phase. These equations contain from two to five adjustable binary parameters; these are estimated by a nonlinear regression method based on the maximum-likelihood principle (Anderson et al., 1978). [Pg.211]

Some formulas, such as equation 98 or the van der Waals equation, are not readily linearized. In these cases a nonlinear regression technique, usually computational in nature, must be applied. For such nonlinear equations it is necessary to use an iterative or trial-and-error computational procedure to obtain roots to the set of resultant equations (96). Most of these techniques are well developed and include methods such as successive substitution (97,98), variations of Newton's rule (99-101), and continuation methods (96,102). [Pg.246]

The usual practice in these applications is to concentrate on model development and computation rather than on statistical aspects. In general, nonlinear regression should be applied only to problems in which there is a well-defined, clear association between the independent and dependent variables. The generalization of statistics to the associated confidence intervals for nonlinear coefficients is not well developed. [Pg.246]

In Figure 2, a double-reciprocal plot is shown; Figure 1 is a nonlinear plot of the rate as a function of [S]. It can be seen how the least accurately measured data at low [S] make the determination of the slope in the double-reciprocal plot difficult. The kinetic parameters obtained in this example by linear regression on the double-reciprocal data are 1.15 and 0.25 (arbitrary units). The same kinetic parameters obtained by software using nonlinear regression are 1.00 and 0.20 (arbitrary units). [Pg.287]
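The contrast between the two fits can be sketched as follows, assuming the Michaelis-Menten form v = Vmax·[S]/(Km + [S]) and hypothetical data in which the low-[S] points carry the largest relative error (the values behind Figures 1 and 2 are not reproduced here):

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical rate data; the low-[S] points carry the largest relative error
S = np.array([0.05, 0.1, 0.2, 0.5, 1.0, 2.0])
v = np.array([0.23, 0.31, 0.52, 0.70, 0.85, 0.90])

# Double-reciprocal (Lineweaver-Burk) fit: 1/v = (Km/Vmax)*(1/[S]) + 1/Vmax
slope, intercept = np.polyfit(1.0 / S, 1.0 / v, 1)
Vmax_lin, Km_lin = 1.0 / intercept, slope / intercept

# Direct nonlinear fit of the Michaelis-Menten equation
def mm(S, Vmax, Km):
    return Vmax * S / (Km + S)

(Vmax_nl, Km_nl), _ = curve_fit(mm, S, v, p0=[1.0, 0.2])

print(f"double-reciprocal: Vmax = {Vmax_lin:.2f}, Km = {Km_lin:.2f}")
print(f"nonlinear:         Vmax = {Vmax_nl:.2f}, Km = {Km_nl:.2f}")
```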

The regression constants A, B, and D are determined from the nonlinear regression of available data, while C is usually taken as the critical temperature. The liquid density decreases approximately linearly from the triple point to the normal boiling point and then nonlinearly to the critical density (the reciprocal of the critical volume). A few compounds, such as water, cannot be fit with this equation over the entire range of temperature. Liquid density data to be regressed should be at atmospheric pressure up to the normal boiling point, above which saturated liquid data should be used. Constants for 1500 compounds are given in the DIPPR compilation. [Pg.399]
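The excerpt does not reproduce the equation itself; assuming the common DIPPR-105 saturated-liquid-density form ρ = A/B^(1+(1−T/C)^D), with C fixed at the critical temperature, a regression of hypothetical density data might look like:

```python
import numpy as np
from scipy.optimize import curve_fit

Tc = 562.0   # K, critical temperature of a hypothetical compound, used for the constant C

# Hypothetical saturated-liquid density data
T   = np.array([280.0, 300.0, 320.0, 340.0, 360.0, 380.0])   # K
rho = np.array([11.4, 11.2, 10.9, 10.6, 10.3, 10.1])         # kmol/m^3

def dippr105(T, A, B, D, C=Tc):
    # Assumed DIPPR-105 form: rho = A / B**(1 + (1 - T/C)**D); C is held at Tc
    return A / B ** (1.0 + (1.0 - T / C) ** D)

(A, B, D), _ = curve_fit(dippr105, T, rho, p0=[1.0, 0.26, 0.28])
print(f"A = {A:.4f}, B = {B:.4f}, D = {D:.4f}")
```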

When experimental data are to be fit with a mathematical model, it is necessary to allow for the fact that the data have errors. The engineer is interested in finding the parameters in the model as well as the uncertainty in their determination. In the simplest case, the model is a linear equation with only two parameters, and they are found by a least-squares minimization of the errors in fitting the data. Multiple regression is just linear least squares applied with more terms. Nonlinear regression allows the parameters of the model to enter in a nonlinear fashion. The following description of maximum likelihood applies to both linear and nonlinear least squares (Ref. 231). If each measurement point y_i has a measurement error Δy_i that is independently random and distributed with a normal distribution about the true model y(x) with standard deviation σ_i, then the probability of a data set is... [Pg.501]

Complex Rate Equations. Complex rate equations may require individual treatment, although the examples in Fig. 7-2 are all linearizable. A perfectly general procedure is nonlinear regression. For instance, when r = f(C, a, b, ...), where (a, b, ...) are the constants to be found, the condition is [Pg.688]

Ozturk et al. (1987) developed a new correlation on the basis of a modification of the Akita-Yoshida correlation suggested by Nakanoh and Yoshida (1980). In addition, the bubble diameter rather than the column diameter was used as the characteristic length, as the column diameter has little influence on k_L a. The bubble diameter was assumed to be approximately constant (d_b = 0.003 m). The correlation obtained by nonlinear regression is as follows: [Pg.1426]

Also, the excellent properties of the robust procedures are demonstrated by constructing nonlinear regression models for the potential energy curves of diatomic systems. [Pg.22]

The value of the integration constant is determined by the magnitude of the displacement from the equilibrium position at zero time. King also gives a solution for Scheme IV, and Pladziewicz et al. show how these equations can be used with a measured instrumental signal to estimate the rate constants by means of nonlinear regression. [Pg.62]

If an analytical solution is available, the method of nonlinear regression analysis can be applied; this approach is described in Chapter 2 and is not treated further here. The remainder of the present section deals with the analysis of kinetic schemes for which explicit solutions are either unavailable or unhelpful. First, the technique of numerical integration is introduced. [Pg.106]

At some stage between cases 2 and 3, coalescence into a single broadened band takes place. A full quantitative treatment requires nonlinear regression of the line shape to the theoretical relationship. [Pg.168]

Grunwald has shown applications of Eqs. (5-78) and (5-79) as tests of the theory and as mechanistic criteria. One way to do this, for a reaction series, is to estimate ΔG° and ΔG from thermodynamic data and from reasonable approximations and then to fit experimental rate data (ΔG values) to Eq. (5-78) by nonlinear regression. This yields estimates of ΔG0 and ΔG (which are constants within the reaction series), and these are then used in Eq. (5-79) to obtain the transition state coordinates. [Pg.240]

Kinetic studies at several temperatures followed by application of the Arrhenius equation as described constitutes the usual procedure for the measurement of activation parameters, but other methods have been described. Bunce et al. eliminate the rate constant between the Arrhenius equation and the integrated rate equation, obtaining an equation relating concentration to time and temperature. This is analyzed by nonlinear regression to extract the activation energy. Another approach is to program temperature as a function of time and to analyze the concentration-time data for the activation energy. This nonisothermal method is attractive because it is efficient, but its use is not widespread. ... [Pg.250]

When estimates of k°, k′, k″, K1, and K2 have been obtained, a calculated pH-rate curve is developed with Eq. (6-80). If the experimental points follow closely the calculated curve, it may be concluded that the data are consistent with the assumed rate equation. The constants may be considered adjustable parameters that are modified to achieve the best possible fit, and one approach is to use these initial parameter estimates in an iterative nonlinear regression program. The dissociation constants K1 and K2 derived from kinetic data should be in reasonable agreement with the dissociation constants obtained (under the same experimental conditions) by other means. [Pg.290]

One shortcoming of Schild analysis is an overemphasized use of the control dose-response curve (i.e., the accuracy of every DR value depends on the accuracy of the control EC50 value). An alternative method utilizes nonlinear regression of the Gaddum equation (with visualization of the data with a Clark plot [10], named for A. J. Clark). This method, unlike Schild analysis, does not emphasize the control pEC50, thereby giving a more balanced estimate of antagonist affinity. This method, first described by Lew and Angus [11], is robust and theoretically more sound than Schild analysis. On the other hand, it is not as visual. Schild analysis is rapid and intuitive, and can be used to detect nonequilibrium steady states in the system that can corrupt... [Pg.113]

Nonlinear regression, a technique that fits a specified function of x and y by the method of least squares (i.e., the sum of the squares of the differences between real data points and calculated data points is minimized). [Pg.280]

Solution. The classic way of fitting these data is to plot ln(k) versus 1/T and to extract k0 and Tact from the slope and intercept of the resulting (nearly) straight line. Special graph paper with a logarithmic y-axis and a 1/T x-axis was made for this purpose. The currently preferred method is to use nonlinear regression to fit the data. The object is to find values for k0 and Tact that minimize the sum-of-squares of the deviations between the experimental and calculated rate constants. [Pg.152]
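Both routes can be sketched side by side; the rate-constant data are hypothetical, and the linearized estimates are reused as initial guesses for the nonlinear fit, in line with the advice elsewhere in this entry:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical rate-constant data
T = np.array([320.0, 340.0, 360.0, 380.0, 400.0])     # K
k = np.array([0.0012, 0.0058, 0.023, 0.078, 0.23])    # 1/s

# Classic approach: linear regression of ln(k) versus 1/T
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
k0_lin, Tact_lin = np.exp(intercept), -slope

# Preferred approach: nonlinear regression of k = k0 * exp(-Tact / T)
(k0_nl, Tact_nl), _ = curve_fit(lambda T, k0, Tact: k0 * np.exp(-Tact / T),
                                T, k, p0=[k0_lin, Tact_lin])

print(f"linearized : k0 = {k0_lin:.3e}, Tact = {Tact_lin:.0f} K")
print(f"nonlinear  : k0 = {k0_nl:.3e}, Tact = {Tact_nl:.0f} K")
```

The two sets of estimates differ slightly because the log-linear fit minimizes relative deviations in k, whereas the nonlinear fit minimizes absolute deviations.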

Versions of Volume I exist for C, Basic, and Pascal. Matlab enthusiasts will find some coverage of optimization (and nonlinear regression) techniques in... [Pg.205]

Section 5.1 shows how nonlinear regression analysis is used to model the temperature dependence of reaction rate constants. The functional form of the reaction rate was assumed, e.g., R = kab for an irreversible, second-order reaction. The rate constant k was measured at several temperatures and was fit to an Arrhenius form, k = k0 exp(−Tact/T). This section expands the use of nonlinear regression to fit the compositional and temperature dependence of reaction rates. The general reaction is... [Pg.209]

The sum of squares as defined by Equation 7.8 is the general form for the objective function in nonlinear regression. Measurements are made. Models are postulated. Optimization techniques are used to adjust the model parameters so that the sum-of-squares is minimized. There is no requirement that the model represent a simple reactor such as a CSTR or isothermal PFR. If necessary, the model could represent a nonisothermal PFR with variable physical properties. It could be one of the distributed parameter models in Chapters 8 or 9. The model... [Pg.211]

Use nonlinear regression to fit these data to a plausible functional form for the reaction rate. See Example 7.20 for linear regression results that can provide good initial guesses. [Pg.250]

Repeat Problem 7.1 using the entire set. First do a preliminary analysis using linear regression and then make a final determination of the model parameters using nonlinear regression. [Pg.252]

Solution. The conversion is low, so the polymer composition is given by Equation 13.41 with the monomer concentrations at their initial values. There are five data points and only two unknowns, so nonlinear regression is appropriate. The sum-of-squares to be minimized is... [Pg.489]

Referring to Example 14.9, Vermeulen and Fortuin estimated all the parameters in their model from physical data. They then compared model predictions with experimental results, and from this they made improved estimates using nonlinear regression. Their results... [Pg.536]

Figure 4.30. Back-calculated results for file VALID2.dat. The data from the left half of Fig. 4.29 are superimposed to show that the day-to-day variability most heavily influences the results at the lower concentrations. The lin/lin format is perceived to be best suited to the upper half of the concentration range, and nearly useless below 5 ng/ml. The log/log format is fairly safe to use over a wide concentration range, but a very obvious trend suggests the possibility of improvements: (a) nonlinear regression, and (b) elimination of the lowest concentrations. Option (b) was tried, but to no avail: while the curvature disappeared, the reduction in n, log(x) range, and Sxx made for a larger Vres and, thus, larger interpolation errors.


A Nonlinear Regression for AIBN Decomposition

Arrhenius regression analysis nonlinear

Batch reactors nonlinear regression

Complex Nonlinear Regression

Computational Example of Nonlinear Regression

Cross-Contributions Between Analyte and Internal Standard - a Need for Nonlinear Regression

Curve fitting with nonlinear regression

Curve fitting with nonlinear regression analysis

Curve fitting, nonlinear regression analysis

Error-in-variables nonlinear regression

Gauss-Newton Solution for Nonlinear Regression

Linear and Nonlinear Regression Functions

Michaelis-Menten model nonlinear regression

Multiple Nonlinear Regression

Nonlinear Models and Regression

Nonlinear Regression Case Study Pharmacokinetic Modeling of a New Chemical Entity

Nonlinear Regression Example in Excel

Nonlinear Regression Example in MATLAB

Nonlinear Regression Problems

Nonlinear Regression Template

Nonlinear Regression Using Excel

Nonlinear Regression Using MATLAB

Nonlinear Regression and Modeling

Nonlinear Regression of Experimental Data

Nonlinear least squares regression analysis

Nonlinear least-squares regression

Nonlinear least-squares regression analysis kinetic data

Nonlinear regression Michaelis-Menten equation

Nonlinear regression case studies

Nonlinear regression case studies pharmacokinetic modeling

Nonlinear regression described

Nonlinear regression determined

Nonlinear regression for

Nonlinear regression model

Nonlinear regression technique

Other Nonlinear Regression Methods for Algebraic Models

Parameter estimation nonlinear regression

Parameters nonlinear regression

Polymath program nonlinear regression

Regression analysis nonlinear

Regression analysis nonlinear least squares method

Regression for Nonlinear Data the Quadratic Fitting Function

Statistical analysis nonlinear regression

Useful Formulae for Nonlinear Regression
