Big Chemical Encyclopedia


Multiresponse Models

The models considered are algebraic and linear in the parameters. The joint confidence region is given by ... [Pg.139]


Mundt et al. have adapted the above scheme for the maltose-glycine system (pH 5.5, 70 °C). Multiresponse modelling gave excellent fits for the time course of the concentrations of glucose, melanoidins (A470, number of maltose molecules incorporated from U-14C-labelled maltose), 3-DH (as quinoxaline derivative), maltose, and SIV (in maltose-glycine-SIV systems). [Pg.37]

AS is a reactive intermediate, formed from the amino acid (A) and sugar (S) prior to the Amadori product (ARP). Its nature, say, Schiff base or glucosylamine/enaminol, was not studied. However, its inclusion was of major importance in obtaining fits by multiresponse modelling of the decomposition of fructosylglycine on its own (pH 5.5, 100 °C), when glycine and 1- and 3-deoxyosones (DG) were the only products detected. Because of the reactivity of DGs, further products were expected, hence the inclusion of k3. [Pg.38]

Multiparameter, multiresponse models call for digital optimization. Early workers minimized the objective function S(θ) by search techniques, which were tedious and gave only a point estimate of θ. Newton-like algorithms for minimization of S(θ) and for interval estimation of θ were given by Stewart and Sorensen (1976, 1981) and by Bates and Watts (1985, 1987). Corresponding algorithms for likelihood-based estimation were developed by Bard and Lapidus (1968) and Bard (1974), and extended by Klaus and Rippin (1979) and Steiner, Blau, and Agin (1986). [Pg.142]
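The determinant criterion of Box and Draper (1965), which these algorithms minimize, can be sketched numerically. The two-response model, rate constant, and noise level below are invented for illustration; the essential step is minimizing |VᵀV|, the determinant of the residual cross-product matrix, over θ.

```python
# Sketch of the Box-Draper determinant criterion for multiresponse
# estimation. Model, true parameter, and noise level are invented.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
t = np.linspace(0.1, 2.0, 20)

def model(theta, t):
    # Two responses sharing one rate parameter k (hypothetical model)
    k = theta[0]
    y1 = np.exp(-k * t)
    y2 = t * np.exp(-k * t)
    return np.column_stack([y1, y2])

# Simulated data: true k = 1.3 plus small independent noise
Y = model([1.3], t) + 0.01 * rng.standard_normal((t.size, 2))

def box_draper(theta):
    V = Y - model(theta, t)          # n x m residual matrix
    return np.linalg.det(V.T @ V)    # |V'V|, minimized over theta

res = minimize(box_draper, x0=[0.5], method="Nelder-Mead")
print(res.x)  # point estimate of k, near the true value 1.3
```

A Newton-like method, as cited above, would exploit derivatives of |VᵀV|; the derivative-free search here stands in for the "tedious" early approach the text mentions.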

Box and Draper (1965) derived a density function for estimating the parameter vector θ of a multiresponse model from a full data matrix Y, subject to errors normally distributed in the manner of Eq. (4.4-3) with a full unknown covariance matrix Σ. With this type of data, every event u has a full set of m responses, as illustrated in Table 7.1. The predictive density function for prospective data arrays Y from n independent events, consistent with Eqs. (7.1-1) and (7.1-3), is... [Pg.143]

Once a multiresponse model is fitted, the weighted residuals... [Pg.153]

Investigations of multiresponse modeling are summarized in Table 7.6 for various chemical process systems. All data types except that of Table 7.2a... [Pg.161]

The criterion, adapted from Box and Hill (1967), Hill and Hunter (1969), and Reilly (1970), is the expectation of entropy decrease (information increase) obtainable by adding event INEXT to the data set. The expectation calculations are described in Chapters 6 and 7, where the integral formula of Reilly (1970) is extended to multiresponse models and an unknown covariance matrix. [Pg.224]

At this point it is necessary to discuss differences between uniresponse and multiresponse modeling. Take, for example, the reaction A → B → C. Usually, equations in differential or algebraic form are fitted to the individual data sets for A, B, and C, and a set of parameter estimates is obtained. [Pg.29]
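The multiresponse alternative for this series reaction is to fit all three concentration curves jointly with one parameter set. The closed-form solutions, simulated data, and equal-weight least-squares objective below are assumptions made for this sketch, not the treatment of any particular source.

```python
# Joint multiresponse fit of the series reaction A -> B -> C using the
# closed-form concentration profiles. Data are simulated for illustration.
import numpy as np
from scipy.optimize import least_squares

def concentrations(k, t, a0=1.0):
    k1, k2 = k
    A = a0 * np.exp(-k1 * t)
    B = a0 * k1 / (k2 - k1) * (np.exp(-k1 * t) - np.exp(-k2 * t))
    C = a0 - A - B                    # mass balance closes the system
    return np.column_stack([A, B, C])

rng = np.random.default_rng(1)
t = np.linspace(0.0, 5.0, 25)
# Simulated observations with true (k1, k2) = (1.0, 0.4)
Y = concentrations([1.0, 0.4], t) + 0.01 * rng.standard_normal((t.size, 3))

# Fit all three responses simultaneously rather than one data set at a time
fit = least_squares(lambda k: (Y - concentrations(k, t)).ravel(),
                    x0=[0.5, 0.2],
                    bounds=([1e-3, 1e-3], [10.0, 10.0]))
print(fit.x)  # estimates near the true (1.0, 0.4)
```

Fitting the stacked residual vector ties the three data sets to a single (k1, k2), whereas separate uniresponse fits could return inconsistent rate constants for each species.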

For the multiresponse model, Box and Draper (1965) proposed to optimize the posterior distribution of θ using uniform or noninformative prior distributions of the parameters. They wrote ... [Pg.434]

Let us consider first the most general case of the multiresponse linear regression model represented by Equation 3.2. Namely, we assume that we have N measurements of the m-dimensional output vector (response variables), y_i, i = 1, ..., N. [Pg.27]

The corresponding (1 − α)100% confidence interval for the multiresponse linear model is... [Pg.35]
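The snippet's (1 − α)100% interval can be illustrated for the single-response linear case, the building block of the multiresponse formula. The model y = β₀ + β₁x and the data below are invented for this sketch; the interval uses the usual t-statistic with the residual variance estimate.

```python
# Hedged sketch of (1-alpha)100% marginal confidence intervals for the
# parameters of a linear model; data and model are invented for illustration.
import numpy as np
from scipy import stats

x = np.array([0., 1., 2., 3., 4., 5.])
y = np.array([1.1, 2.9, 5.2, 7.1, 8.8, 11.2])   # roughly y = 1 + 2x
X = np.column_stack([np.ones_like(x), x])        # design matrix

b, *_ = np.linalg.lstsq(X, y, rcond=None)        # least-squares estimates
resid = y - X @ b
dof = len(y) - X.shape[1]
s2 = resid @ resid / dof                         # residual variance
cov = s2 * np.linalg.inv(X.T @ X)                # parameter covariance
alpha = 0.05
tval = stats.t.ppf(1 - alpha / 2, dof)
half = tval * np.sqrt(np.diag(cov))              # half-widths of intervals
print(np.column_stack([b - half, b + half]))     # 95% intervals for b0, b1
```

In the full multiresponse case the scalar s² is replaced by the estimated error covariance matrix, as the surrounding excerpts from Chapter 7 describe.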

The direct optimization of a single response formulation modelled by either a normal or pseudocomponent equation is accomplished by the incorporation of the component constraints in the Complex algorithm. Multiresponse optimization to achieve a "balanced" set of property values is possible by the combination of response desirability factors and the Complex algorithm. Examples from the literature are analyzed to demonstrate the utility of these techniques. [Pg.58]

The adaptive parameters in the model were estimated by nonlinear multiresponse regression, performed using the Fortran subroutine BURENL, based... [Pg.309]

For the multiresponse situation, several measurable responses are implicit in each model under consideration. For example, for the reaction... [Pg.173]

A single matrix least squares calculation can be employed when the same linear model is used to fit each of several system responses. The D, X, and (XᵀX) matrices remain the same, but the Y, (XᵀY), B, and R matrices have additional columns, one column for each response. Fit the model y = β₀ + β₁x + r to the following multiresponse data, j = 1, 2, 3 ... [Pg.149]
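The "additional columns" idea can be shown directly: one matrix calculation fits every response at once. The design points and response values below are invented; the point is the shapes of B and R.

```python
# One matrix least-squares pass for several responses: the design matrix X
# is shared, while Y, B, and R gain one column per response. Data invented.
import numpy as np

x = np.array([1.0, 2.0, 3.0])
X = np.column_stack([np.ones_like(x), x])   # same X for every response

# Three responses observed at the same settings -> three columns in Y
Y = np.array([[1.0, 2.0, 0.5],
              [2.1, 4.0, 1.6],
              [2.9, 6.1, 2.4]])

B = np.linalg.inv(X.T @ X) @ X.T @ Y        # one column of estimates per response
R = Y - X @ B                               # residual matrix, same shape as Y
print(B.shape, R.shape)                     # (2, 3) and (3, 3)
```

Each column of B is exactly what a separate uniresponse fit of that column of Y would give; the savings come from forming (XᵀX)⁻¹Xᵀ only once.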

Observing a process, scientists and engineers frequently record several variables. For example, (ref. 20) presents concentrations of all species for the thermal isomerization of α-pinene at different time points. These species are α-pinene (y1), dipentene (y2), allo-ocimene (y3), pyronene (y4), and a dimer product (y5). The data are reproduced in Table 1.3. In (ref. 20) a reaction scheme has also been proposed to describe the kinetics of the process. Several years later Box et al. (ref. 21) tried to estimate the rate coefficients of this kinetic model by their multiresponse estimation procedure that will be discussed in Section 3.6. They ran into difficulty and realized that the data in Table 1.3 are not independent. There are two kinds of dependencies that may trouble parameter estimation ... [Pg.61]
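A standard way to uncover such dependencies, in the spirit of the check Box et al. applied, is to look for near-zero eigenvalues of the centered data cross-product matrix: each one flags a linear relation among the responses. The data below are simulated with a built-in mass balance, not the pinene data.

```python
# Detecting linear dependencies among responses via eigenanalysis of the
# centered cross-product matrix. Simulated data with y1 + y2 + y3 ~ 1.
import numpy as np

rng = np.random.default_rng(2)
n = 30
y1 = rng.uniform(0.0, 1.0, n)
y2 = rng.uniform(0.0, 1.0, n)
y3 = 1.0 - y1 - y2 + 1e-6 * rng.standard_normal(n)  # mass-balance constraint

Y = np.column_stack([y1, y2, y3])
Z = Y - Y.mean(axis=0)                 # center each response
eigvals = np.linalg.eigvalsh(Z.T @ Z)  # ascending order
print(eigvals)                         # near-zero smallest eigenvalue
                                       # reveals the dependency
```

The eigenvector belonging to the smallest eigenvalue (here approximately (1, 1, 1)/√3) identifies which combination of responses is constrained, which is precisely the information needed to drop or transform responses before multiresponse estimation.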

A corresponding normal distribution is available for multiresponse data, that is, for interdependent observations of two or more measurable quantities. Such data are common in experiments with chemical mixtures, mechanical structures, and electric circuits as well as in population surveys and econometric studies. Modeling with multiresponse data is treated in Chapter 7 and in the software of Appendix C. [Pg.72]

The following chapters and the package GREGPLUS apply these principles to practical models and various data structures. Least squares, multiresponse estimation, model discrimination, and process function estimation are presented there as special forms of Bayesian estimation. [Pg.91]

The statistical investigation of a model begins with the estimation of its parameters from observations. Chapters 4 and 5 give some background for this step. For single-response observations with independent normal error distributions and given relative precisions, Bayes theorem leads to the famous method of least squares. Multiresponse observations need more detailed treatment, to be discussed in Chapter 7. [Pg.95]

Least squares has played a prominent role in the chemical engineering literature, especially since the advent of automatic computation. Some further references to this literature and to least-squares algorithms are included at the end of this chapter. Multiresponse data require a more detailed error model and will be treated in Chapter 7. [Pg.125]

Chapter 7 Process Modeling with Multiresponse Data... [Pg.141]

Multiresponse experimentation is important in studies of complex systems and of systems observed by multiple methods. Chemical engineers and chemists use multiresponse experiments to study chemical reactions, mixtures, separation, and mixing processes; similar data structures occur widely in science and engineering. In this chapter we study methods for investigating process models with multiresponse data. Bayes theorem now yields more general methods than those of Chapter 6, and Jeffreys rule, discussed in Chapter 5, takes increased importance. [Pg.141]

The methods of Chapter 6 are not appropriate for multiresponse investigations unless the responses have known relative precisions and independent, unbiased normal distributions of error. These restrictions come from the error model in Eq. (6.1-2). Single-response models were treated under these assumptions by Gauss (1809, 1823) and less completely by Legendre (1805), co-discoverer of the method of least squares. Aitken (1935) generalized weighted least squares to multiple responses with a specified error covariance matrix; his method was extended to nonlinear parameter estimation by Bard and Lapidus (1968) and Bard (1974). However, least squares is not suitable for multiresponse problems unless information is given about the error covariance matrix; we may consider such applications at another time. [Pg.141]
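Aitken's generalization can be sketched for the simplest interesting case: two responses measuring the same linear mean, with a known, correlated error covariance Σ. The design, parameters, and Σ below are invented; the estimator weights each event's residual vector by Σ⁻¹.

```python
# Sketch of Aitken-style generalized least squares with a known error
# covariance matrix Sigma across two responses. All values invented.
import numpy as np

rng = np.random.default_rng(3)
n = 40
x = np.linspace(0.0, 1.0, n)
X = np.column_stack([np.ones_like(x), x])
beta_true = np.array([0.5, 2.0])

Sigma = np.array([[0.01, 0.006],       # correlated errors between responses
                  [0.006, 0.02]])
E = rng.multivariate_normal(np.zeros(2), Sigma, size=n)
mu = X @ beta_true
Y = np.column_stack([mu, mu]) + E      # two noisy measurements of one mean

Sinv = np.linalg.inv(Sigma)
M = np.zeros((2, 2))
v = np.zeros(2)
for u in range(n):                     # accumulate Sigma^{-1}-weighted
    F = np.vstack([X[u], X[u]])        # normal equations event by event
    M += F.T @ Sinv @ F
    v += F.T @ Sinv @ Y[u]
beta_hat = np.linalg.solve(M, v)
print(beta_hat)                        # near the true (0.5, 2.0)
```

When Σ is unknown, this weighting is unavailable, which is exactly the gap the Bayesian treatment of the next excerpt (estimating Σ along with θ) is meant to fill.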

Bayes theorem (Bayes 1763; Box and Tiao 1973, 1992) permits estimation of the error covariance matrix Σ from a multiresponse data set, along with the parameter vector θ of a predictive model. It is also possible, under further assumptions, to shorten the calculations by estimating θ and Σ separately, as we do in the computer package GREGPLUS provided in Athena. We can then analyze the goodness of fit, the precision of estimation of parameters and functions of them, the relative probabilities of alternative models, and the choice of additional experiments to improve a chosen information measure. This chapter summarizes these procedures and their implementation in GREGPLUS; details and examples are given in Appendix C. [Pg.141]







