Big Chemical Encyclopedia


Models That Are Nonlinear in the Parameters

This value will not lead in one step to the minimum of (2.6.3.1-2). The value Δb^(s+1) is then added to b^s, and so on, until convergence is achieved. The method converges rapidly close to the minimum of S(β), but with poor initial values b^0 it may not converge at all. To avoid divergence in this region, Marquardt [1963] worked out a compromise between the method of steepest descent and that of Newton-Gauss. The steepest descent method is most efficient far from the minimum, whereas the Newton-Gauss method is most efficient close to it. The compromise changes the size and the direction of the optimization step by means of a scalar parameter λ, which is a Lagrangian multiplier, so that (2.6.3.2-4) becomes [Pg.117]
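The Marquardt compromise described above can be sketched as follows. This is a minimal illustration using the common λI damping variant (Marquardt's original scales the diagonal of JᵀJ); the exponential model and the noiseless synthetic data are my own assumptions, not the equations of (2.6.3.2-4).

```python
import numpy as np

def marquardt_step(J, r, lam):
    """One compromise step: solve (J^T J + lam*I) db = J^T r.
    lam -> 0 recovers the Newton-Gauss step; a large lam gives a
    short step in (nearly) the steepest-descent direction."""
    JTJ = J.T @ J
    g = J.T @ r
    return np.linalg.solve(JTJ + lam * np.eye(JTJ.shape[0]), g)

# Hypothetical model: y = b1 * (1 - exp(-b2 * x)), noiseless data
x = np.linspace(0.1, 5, 40)
b_true = np.array([2.0, 1.5])
y = b_true[0] * (1 - np.exp(-b_true[1] * x))

b = np.array([1.0, 0.5])          # deliberately poor initial guess b^0
lam = 1e-2
for _ in range(50):
    pred = b[0] * (1 - np.exp(-b[1] * x))
    r = y - pred
    J = np.column_stack([1 - np.exp(-b[1] * x),
                         b[0] * x * np.exp(-b[1] * x)])
    db = marquardt_step(J, r, lam)
    b_new = b + db
    r_new = y - b_new[0] * (1 - np.exp(-b_new[1] * x))
    if r_new @ r_new < r @ r:     # step reduced S(b): accept, relax damping
        b, lam = b_new, lam / 10
    else:                         # step increased S(b): reject, add damping
        lam *= 10

print(np.round(b, 4))             # should approach [2.0, 1.5]
```

The accept/reject logic on λ is what gives the method its ability to change both the size and the direction of the step as the iteration moves from the steepest-descent regime into the Newton-Gauss regime.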

Significance tests (t-values) for the individual parameter estimates can be performed, as well as an overall significance test (F-value) of the regression. Provided that replicate experiments have been performed, the model adequacy (F-value) can be tested. [Pg.118]
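For the linear case these tests can be written out directly; the sketch below uses a hypothetical straight-line model and synthetic data of my own choosing to show where the t-values and the overall F-value come from.

```python
import numpy as np
from scipy import stats

# Hypothetical linear model y = b0 + b1*x + noise
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 20)
X = np.column_stack([np.ones_like(x), x])        # design matrix
y = 1.0 + 0.8 * x + rng.normal(0, 0.3, x.size)

b, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ b
n, p = X.shape
s2 = resid @ resid / (n - p)                     # residual mean square
cov = s2 * np.linalg.inv(X.T @ X)                # parameter covariance

# t-value for each parameter: estimate / standard error
t_vals = b / np.sqrt(np.diag(cov))
p_vals = 2 * stats.t.sf(np.abs(t_vals), df=n - p)

# Overall F-value of the regression
ss_reg = ((X @ b - y.mean()) ** 2).sum()
F = (ss_reg / (p - 1)) / s2
p_F = stats.f.sf(F, p - 1, n - p)

print(np.round(t_vals, 2), np.round(F, 2))
```

The model-adequacy F-test mentioned in the passage additionally compares the residual mean square against a pure-error mean square from replicates, which the sketch above omits for brevity.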


Next we covered analysis of data. We used probability and random variables to model the irreproducible part of the experiment. For models that are linear in the parameters, we can perform parameter estimation and construct exact confidence intervals analytically. For models that are nonlinear in the parameters, we compute parameter estimates and construct approximate confidence intervals using nonlinear optimization methods. [Pg.614]
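The nonlinear half of that distinction can be sketched with scipy: the covariance returned by a nonlinear least-squares fit comes from a local linearization at the optimum, so the confidence intervals built from it are only approximate. The model, data, and seed below are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy import stats

def model(x, a, k):
    # Hypothetical model, nonlinear in the parameters: y = a * exp(-k*x)
    return a * np.exp(-k * x)

rng = np.random.default_rng(1)
x = np.linspace(0, 4, 30)
y = model(x, 3.0, 0.7) + rng.normal(0, 0.05, x.size)

popt, pcov = curve_fit(model, x, y, p0=[1.0, 1.0])
se = np.sqrt(np.diag(pcov))          # standard errors from linearization

# Approximate 95% confidence intervals (exact only for linear models)
t = stats.t.ppf(0.975, df=x.size - 2)
ci = np.column_stack([popt - t * se, popt + t * se])
print(np.round(ci, 3))
```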

Tests 1 and 2 determine the true model adequacy; test 3 can only yield the best model. Applying the above statistics to models that are nonlinear in the parameters requires the model to be locally linear. For the particular application considered here, this means that the residual mean square distribution is approximated to a reasonable extent by the χ² distribution. Furthermore, care has to be taken with outliers, since χ² appears to be rather sensitive to departures of the data from normality. In Example 2.7.1.1.A, given below, this was taken care of by restarting the elimination from scratch after each experiment. Finally, the theory requires the variance estimates that are tested for homogeneity to be statistically independent. It is hard to say to what extent this restriction is fulfilled. From the examples given, which have a widely different character, it would seem that the procedure is efficient and reliable. [Pg.129]
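A standard χ²-based test of variance homogeneity of the kind discussed here is Bartlett's test, which is indeed known to be sensitive to non-normality; a minimal sketch with hypothetical replicate groups (the group data are invented for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Replicate measurements at three experimental settings (hypothetical),
# all with the same true standard deviation
g1 = rng.normal(5.0, 0.4, 8)
g2 = rng.normal(7.5, 0.4, 8)
g3 = rng.normal(9.0, 0.4, 8)

# Bartlett's chi-square test: H0 = all group variances are equal.
# Its sensitivity to departures from normality is why outliers
# should be screened before applying it.
stat, p = stats.bartlett(g1, g2, g3)
print(round(stat, 3), round(p, 3))
```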

Geometry of the trial model. Because the structure-refinement procedures are based on functions that are nonlinear in the parameters, it... [Pg.87]

The general scheme required here is shown in Fig. 29, a generalization of Fig. 28. With some ideas as to good areas of experimentation, the experimenter takes an initial set of data. These data are then analyzed to determine the best estimates of the parameters of the model or models under consideration. Since models that usually arise in these circumstances are nonlinear in the parameters, some version of nonlinear estimation will usually be employed in this analysis. Nonlinear estimation techniques, of course, almost always require the use of a computer. [Pg.170]

Basically, the analyses and techniques presented in Sections 3.1.2 and 3.2 are applicable to nonlinear models as well. Actually, polynomial models are linear in the parameters, and thus the theory of linear regression applies. Normally, nonlinear regression refers to regression analysis of models that are nonlinear in the parameters. This topic is not treated in this chapter, and the interested reader may see, e.g., (Bard, 1973)... [Pg.122]
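The point that polynomial models are linear in the parameters can be shown directly: the design matrix absorbs all the nonlinearity in x, and ordinary linear least squares recovers the coefficients. A minimal sketch with made-up data:

```python
import numpy as np

# A quadratic y = b0 + b1*x + b2*x^2 is nonlinear in x but
# LINEAR in the parameters b, so linear regression applies.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 1.0 + 2.0 * x - 0.5 * x**2            # noiseless, for illustration

X = np.column_stack([np.ones_like(x), x, x**2])   # design matrix
b, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(b, 6))                     # recovers [1.0, 2.0, -0.5]
```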

When the models are nonlinear in the parameters, X is replaced by J. An example of the application of sequential design for optimal estimation is given by Van Parijs et al. [1986] in their study of benzothiophene hydrogenolysis. At 533 K the desired accuracy of the parameters of the equations given in Example 2.6.4.A was reached after four more experiments beyond the seven required for discriminating between the 16 rival models. In total, no more than 16 experiments were required for the four temperature levels that were investigated. [Pg.139]
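The replacement of X by the Jacobian J carries over directly to sequential design criteria. A common choice, sketched below under an assumed model y = a·exp(−kx) (the function names and candidate grid are mine, not those of the cited study), is to pick the next experimental setting that maximizes det(JᵀJ) of the augmented design:

```python
import numpy as np

def jacobian(x, a, k):
    # Rows of the Jacobian of y = a*exp(-k*x) w.r.t. (a, k);
    # J plays the role that X plays for linear models.
    return np.column_stack([np.exp(-k * x), -a * x * np.exp(-k * x)])

def next_experiment(x_done, candidates, a, k):
    """Sequential D-optimal choice: the candidate setting that
    maximizes det(J^T J) of the design augmented with it."""
    best, best_det = None, -np.inf
    for xc in candidates:
        J = jacobian(np.append(x_done, xc), a, k)
        d = np.linalg.det(J.T @ J)
        if d > best_det:
            best, best_det = xc, d
    return best

x_done = np.array([0.5, 1.0])              # settings already run
candidates = np.linspace(0.1, 5.0, 50)
xc = next_experiment(x_done, candidates, a=2.0, k=0.8)  # current estimates
print(round(float(xc), 3))
```

Because J depends on the current parameter estimates, the design is re-evaluated after each experiment, which is what makes the procedure sequential.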

Several methods are used to fit rate models, the two most common of which often give erroneous results. The first is the transformation of a proposed rate model to achieve a model form that is linear in the parameters. An example is the nonlinear model ... [Pg.175]
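The kind of transformation referred to can be illustrated with a power-law rate model (an assumed form, not the elided example above): r = k·Cⁿ becomes linear in (ln k, n) after taking logarithms, but the transform also distorts the error structure, which is one source of the erroneous results mentioned.

```python
import numpy as np

# Power-law rate model r = k * C^n: nonlinear in (k, n), but
# ln r = ln k + n * ln C is linear in (ln k, n).
rng = np.random.default_rng(3)
C = np.linspace(0.5, 5.0, 25)
r = 0.8 * C**1.5 + rng.normal(0, 0.05, C.size)   # ADDITIVE error in r

# Linearized fit: ordinary least squares on the log-transformed data
X = np.column_stack([np.ones_like(C), np.log(C)])
b, *_ = np.linalg.lstsq(X, np.log(r), rcond=None)
k_lin, n_lin = np.exp(b[0]), b[1]

# Caveat: the transform also transforms the errors. Additive error
# in r becomes non-constant error in ln r, so the linearized fit
# implicitly misweights the data relative to a direct nonlinear fit.
print(round(k_lin, 3), round(n_lin, 3))
```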

The formulation of the parameter estimation problem is as important as the actual solution of the problem (i.e., the determination of the unknown parameters). In formulating the parameter estimation problem we must answer two questions: (a) what type of mathematical model do we have, and (b) what type of objective function should we minimize? In this chapter we address both these questions. Although the primary focus of this book is the treatment of mathematical models that are nonlinear with respect to the parameters (nonlinear regression), consideration will also be given to linear models (linear regression). [Pg.7]

ARR residuals may be obtained from a DBG. If ARRs cannot be deduced from a DBG in closed symbolic form because of nonlinear implicit equations indicated by causal paths, ARR residuals are given implicitly. Their numerical values can be obtained by solving the entire DBG model in each step of the parameter estimation iteration. Derivatives of measured variables with respect to time that are needed in the evaluation of the DBG model must be computed in discrete time. Once residuals are available for the time points of an observation window, the cost function can be built. If a gradient-based parameter estimation method is used, the gradient of the least-squares cost function can be obtained by using discrete derivatives. [Pg.147]

A suitable transformation of the model equations can simplify the structure of the model considerably, and thus initial guess generation becomes a trivial task. The most interesting case, often encountered in engineering applications, is that of transformably linear models. These are nonlinear models that reduce to simple linear models after a suitable transformation is performed. Such models were used extensively in engineering, particularly before the wide availability of computers, so that the parameters could easily be obtained with linear least squares estimation. Even today they are used to reveal characteristics of the model's behavior in graphical form. [Pg.136]
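A classic transformably linear model is the Arrhenius expression; the sketch below (with invented rate-constant data) shows how the linearized fit yields initial guesses for a subsequent nonlinear regression:

```python
import numpy as np

# Arrhenius model k = A * exp(-E / (R*T)) is transformably linear:
# ln k = ln A - (E/R) * (1/T), i.e. linear in (ln A, E/R).
R = 8.314
T = np.array([300.0, 320.0, 340.0, 360.0, 380.0])
A_true, E_true = 1.0e7, 50_000.0
k = A_true * np.exp(-E_true / (R * T))    # noiseless, for illustration

# Linear fit of ln k versus 1/T gives the initial guesses
X = np.column_stack([np.ones_like(T), 1.0 / T])
b, *_ = np.linalg.lstsq(X, np.log(k), rcond=None)
A0, E0 = np.exp(b[0]), -b[1] * R
print(f"A0 = {A0:.4g}, E0 = {E0:.4g}")    # near 1e7 and 5e4
```

With noisy data these transformed-scale estimates are only starting values, since (as noted for rate models above) the log transform distorts the error structure.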

If two independent variables are involved in the model, plots such as those shown in Figure 2.5 can be of assistance in this case the second independent variable becomes a parameter that is held constant at various levels. Figure 2.6 shows a variety of nonlinear functions and their associated plots. These plots can assist in selecting relations for nonlinear functions of y versus x. Empirical functions of more than two variables must be built up (or pruned) step by step to avoid including an excessive number of irrelevant variables or missing an important one. Refer to Section 2.4 for suitable procedures. [Pg.51]

Physico-chemical measurements using chromatographic methods produce responses that are linear in the concentrations. As IA measures the resulting signal of a reaction, however, the response is a nonlinear function of the analyte concentration. The regression model used to describe this relationship is often a four- or five-parameter logistic function, as shown by the sigmoid-shaped standard curve in Fig. 6.4. [Pg.160]
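A sketch of fitting a four-parameter logistic (4PL) standard curve with scipy; the parameterization is one common 4PL form, and the calibration data are illustrative assumptions, not the curve of Fig. 6.4.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic4(x, a, d, c, b):
    """Four-parameter logistic (4PL): a = lower asymptote,
    d = upper asymptote, c = inflection concentration (EC50),
    b = slope factor."""
    return d + (a - d) / (1.0 + (x / c) ** b)

# Hypothetical immunoassay calibration data
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
rng = np.random.default_rng(4)
signal = logistic4(conc, 0.05, 2.0, 5.0, 1.2) + rng.normal(0, 0.02, conc.size)

# Positive bounds keep c and b physical during the iteration
popt, _ = curve_fit(logistic4, conc, signal,
                    p0=[0.01, 2.5, 3.0, 1.0],
                    bounds=(1e-6, 100.0))
rmse = np.sqrt(np.mean((signal - logistic4(conc, *popt)) ** 2))
print(np.round(popt, 3), round(rmse, 4))
```

Unknown samples are then read off the fitted curve by inverting it, which is why the calibration fit must be good across the whole sigmoid range.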

Beyond pharmacokinetics and pharmacodynamics, population modeling and parameter estimation are applications of a statistical model with general validity, the nonlinear mixed-effects model. The model has wide applicability in all areas, in the biomedical sciences and elsewhere, where a parametric functional relationship between some input and some response is studied and where random variability across individuals is of concern [458]. [Pg.314]

This method is usually satisfactory for routine work, particularly when the model function is linear in the adjustable parameters. However, circumstances may arise in nonlinear least squares for which more care is needed. Consider a situation in which one parameter, say a_k, is of special importance. It may be that the whole point of the experiment was to determine that one parameter, while the other parameters are of minor importance apart from being necessary parts of the model. Let the best value of this parameter, determined by a convergent least-squares procedure, be a_k*. Let us step away from a_k* in both directions by a constant interval to obtain several trial values for the procedure to be described. [The interval chosen might be, for example, σ as calculated with Eq. (40), and the number of steps might be three or more on each side.] We then fix a_k at each of these trial values in turn, refine the remaining parameters to convergence, and evaluate χ² with... [Pg.679]
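The stepping-and-refining procedure described above, in effect profiling χ² for the parameter of special interest, can be sketched as follows; the model, data, and step interval are hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical model y = a * exp(-k*x); suppose k is the parameter
# of special importance.
rng = np.random.default_rng(5)
x = np.linspace(0, 3, 40)
y = 2.0 * np.exp(-1.0 * x) + rng.normal(0, 0.03, x.size)

def resid_full(p):
    return y - p[0] * np.exp(-p[1] * x)

fit = least_squares(resid_full, [1.0, 0.5])   # converged full fit
a_hat, k_hat = fit.x
chisq_min = 2 * fit.cost                      # sum of squared residuals

# Step away from k_hat in both directions, fix k at each trial value,
# refine the REMAINING parameter to convergence, and record chi-square.
profile = []
for k_fix in k_hat + np.linspace(-0.15, 0.15, 7):
    sub = least_squares(lambda p: y - p[0] * np.exp(-k_fix * x), [a_hat])
    profile.append(2 * sub.cost)

# chi-square is smallest near the converged estimate and rises on
# either side; the shape of the rise maps out k's uncertainty.
print(round(chisq_min, 4), np.round(profile, 4))
```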

