Big Chemical Encyclopedia



Regression Problems

As we have mentioned, the particular characterization task considered in this work is to determine attenuation in composite materials. At hand we have a data acquisition system that can provide data from both PE and TT testing. The approach is to treat the attenuation problem as a multivariable regression problem, where the target values, y_n, are the measured attenuation values (at different locations n) and the input data are the (preprocessed) PE data vectors, u_n. The problem is to find a function ŷ_n = f(u_n), such that ŷ_n ≈ y_n, based on measured data, the so-called training data. [Pg.887]
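The fitting step described above can be sketched in a few lines. This is a minimal illustration, not the authors' actual system: the PE data vectors and attenuation values here are synthetic stand-ins, and a simple linear map is assumed for f.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: U holds preprocessed PE data vectors u_n (one row
# per location n), y holds the measured attenuation values y_n.
U = rng.normal(size=(50, 4))                  # 50 locations, 4 features each
true_w = np.array([1.0, -2.0, 0.5, 3.0])
y = U @ true_w + 0.01 * rng.normal(size=50)   # small measurement noise

# Fit y ≈ f(u) = u·w by least squares on the training data.
w, *_ = np.linalg.lstsq(U, y, rcond=None)
y_hat = U @ w                                 # predictions ŷ_n
```

With a nonlinear f the same train-then-predict structure holds; only the function class and the fitting algorithm change.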

The regression problem is here formulated as the optimization problem... [Pg.887]

More correctly, the regression problem involves means instead of averages in (1). Furthermore, when the criterion function is quadratic, the general (usually nonlinear) optimal solution is given by ŷ = E[y | u], i.e., the conditional mean of y given the observation u. [Pg.888]
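The optimality of the conditional mean under a quadratic criterion can be checked numerically. The sketch below uses a made-up joint distribution (not from the source): any predictor that deviates from the empirical conditional mean incurs a larger mean squared error.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate pairs (u, y) with y = u^2 + noise, so E[y | u] = u^2.
u = rng.choice([0.0, 1.0, 2.0], size=100_000)
y = u**2 + rng.normal(size=u.size)

# Empirical conditional mean for each observed u value.
cond_mean = {v: y[u == v].mean() for v in (0.0, 1.0, 2.0)}
pred = np.array([cond_mean[v] for v in u])

# MSE of the conditional-mean predictor vs. a deliberately shifted one.
mse_opt = np.mean((y - pred) ** 2)
mse_off = np.mean((y - (pred + 0.5)) ** 2)
```

Shifting the predictor by 0.5 raises the MSE by roughly 0.25, as the quadratic criterion predicts.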

Partial least-squares path modeling with latent variables (PLS), a newer, general method of handling regression problems, is finding wide application in chemometrics. This method allows the relations between many blocks of data, i.e., data matrices, to be characterized (32–36). Linear and multiple regression techniques can be considered special cases of the PLS method. [Pg.426]

In Section 33.2.2 we showed how LDA classification can be described as a regression problem with class variables. As a regression model, LDA is subject to the problems described in Chapter 10. For instance, the number of variables should not exceed the number of objects. One solution is to apply feature selection or... [Pg.232]

The structure of such models can be exploited in reducing the dimensionality of the nonlinear parameter estimation problem since the conditionally linear parameters, k1, can be obtained by linear least squares in one step and without the need for initial estimates. Further details are provided in Chapter 8, where we exploit the structure of the model either to reduce the dimensionality of the nonlinear regression problem or to arrive at consistent initial guesses for any iterative parameter search algorithm. [Pg.10]
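The idea of eliminating conditionally linear parameters can be sketched as follows. The model y = k1·exp(−k2·x) is a hypothetical example chosen for illustration: for any trial value of the nonlinear parameter k2, the linear parameter k1 follows in a single least-squares step, so only k2 needs to be searched.

```python
import numpy as np

# Conditionally linear model: y = k1 * exp(-k2 * x).
x = np.linspace(0.0, 2.0, 20)
y = 3.0 * np.exp(-1.5 * x)          # synthetic noise-free "data"

def k1_given_k2(k2):
    g = np.exp(-k2 * x)             # regressor for this fixed k2
    return float(g @ y / (g @ g))   # one-step 1-D linear least squares

# Search only over k2; each candidate gets its optimal k1 for free.
grid = np.linspace(0.5, 3.0, 251)
sse = [np.sum((y - k1_given_k2(k2) * np.exp(-k2 * x)) ** 2) for k2 in grid]
k2_best = grid[int(np.argmin(sse))]
k1_best = k1_given_k2(k2_best)
```

No initial estimate for k1 is ever required, which is exactly the dimensionality reduction the excerpt describes.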

The objectives in this chapter are two. The first one is to briefly review the essentials of linear regression and to present them in a form that is consistent with our notation and approach followed in subsequent chapters addressing nonlinear regression problems. The second objective is to show that a large number of linear regression problems can now be handled with readily available software such as Microsoft Excel and SigmaPlot . [Pg.23]

As seen in Chapter 2, a suitable measure of the discrepancy between a model and a set of data is the objective function, S(k), and hence the parameter values are obtained by minimizing this function. Therefore, the estimation of the parameters can be viewed as an optimization problem whereby any of the available general purpose optimization methods can be utilized. In particular, it was found that the Gauss-Newton method is the most efficient method for estimating parameters in nonlinear models (Bard, 1970). As we strongly believe that this is indeed the best method to use for nonlinear regression problems, the Gauss-Newton method is presented in detail in this chapter. It is assumed that the parameters are free to take any values. [Pg.49]
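A bare-bones Gauss-Newton iteration looks like the sketch below. The two-parameter exponential model and the data are hypothetical, and the loop omits the step-size control and convergence checks a production implementation would need.

```python
import numpy as np

# Hypothetical model y = k1 * exp(-k2 * x) with synthetic noise-free data.
x = np.linspace(0.0, 2.0, 15)
y = 2.0 * np.exp(-1.0 * x)

def model(k):
    return k[0] * np.exp(-k[1] * x)

def jac(k):
    # Sensitivity matrix: columns are d(model)/dk1 and d(model)/dk2.
    e = np.exp(-k[1] * x)
    return np.column_stack([e, -k[0] * x * e])

k = np.array([1.5, 0.8])                     # initial guess
for _ in range(20):
    J = jac(k)
    r = y - model(k)                         # residual vector
    # Gauss-Newton step: solve the normal equations (J'J) dk = J'r.
    k = k + np.linalg.solve(J.T @ J, J.T @ r)
```

Each iteration solves a small linear system built from the Jacobian, which is why the method is so efficient near the solution.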

Having the smoothed values of the state variables at each sampling point, together with their derivatives, we have essentially transformed the problem into a "usual" linear regression problem. The parameter vector is obtained by minimizing the following LS objective function... [Pg.117]

The above linear regression problem can be readily solved using any standard linear regression package. [Pg.119]

In this section we shall only present the derivative approach for the solution of the pyrolytic dehydrogenation of benzene to diphenyl and triphenyl regression problem. This problem, which was already presented in Chapter 6, is also used here to illustrate the use of shortcut methods. As discussed earlier, both state variables are measured and the two unknown parameters appear linearly in the governing ODEs which are also given below for ease of the reader. [Pg.129]

In general, a much larger number of parameters (wavelengths, frequencies, or factors) needs to be calculated in overlapping peak systems (some spectra or chromatograms) than in linear regression problems. (p. 176)... [Pg.165]

The authors describe the use of a Taylor expansion to negate the second and the higher order terms under specific mathematical conditions in order to make any function (i.e., our regression model) first-order (or linear). They introduce the use of the Jacobian matrix for solving nonlinear regression problems and describe the matrix mathematics in some detail (pp. 178-181). [Pg.165]
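The linearization step the excerpt refers to can be written out explicitly. The following is a standard first-order Taylor expansion of a nonlinear model, consistent with the Gauss-Newton construction described above but not copied from the source's pp. 178-181:

```latex
% First-order Taylor expansion of the model about a current estimate k^0:
f(x, \mathbf{k}) \;\approx\; f(x, \mathbf{k}^0)
  \;+\; \sum_j \left.\frac{\partial f}{\partial k_j}\right|_{\mathbf{k}^0} \Delta k_j ,
\qquad \Delta k_j = k_j - k_j^0 .

% Collecting the sensitivities into the Jacobian matrix J, with
% J_{ij} = \partial f(x_i, \mathbf{k}) / \partial k_j evaluated at k^0,
% the residuals r_i = y_i - f(x_i, \mathbf{k}^0) become linear in \Delta k,
% and the least-squares step solves the normal equations
\left( \mathbf{J}^{\mathsf T} \mathbf{J} \right) \Delta \mathbf{k}
  \;=\; \mathbf{J}^{\mathsf T} \mathbf{r} .
```

Dropping the second- and higher-order terms is what makes each iteration of the nonlinear problem a linear regression problem in Δk.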

In any regression problem, there are more equations than unknown parameters. In this instance, there are four equations, representing the four data pairs. The software then returns the optimum values of parameters in the governing equation(s). The Solutions/Statistics menu can be consulted to determine the squared residual between each experimental data point and the corresponding predicted value obtained from the parameter estimates. [Pg.640]
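The four-equations, two-unknowns situation can be reproduced directly. The data pairs below are invented for illustration; fitting the straight line y = a + b·x to four points gives four equations in the two parameters, and the squared residuals are available afterwards, much as the excerpt's Solutions/Statistics menu reports them.

```python
import numpy as np

# Four hypothetical data pairs, two parameters (a, b) in y = a + b*x.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 7.8])

# Design matrix: one row (equation) per data pair.
A = np.column_stack([np.ones_like(x), x])
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)

# Squared residual between each data point and its predicted value.
squared_residuals = (y - (a + b * x)) ** 2
```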

The letter b is reserved for the regression equation coefficients in a regression problem... [Pg.357]

A y-sample outlier, based on the sample's property value(s) (for regression problems only). [Pg.413]

Artificial neural networks (ANNs) have been widely applied in the electronic tongue literature both for classification and multivariate regression problems almost one-third of the papers on electronic tongues examined for this review show ANN applications (see Fig. 2.10). [Pg.91]

Solve the regression problem with relative weighting. Compare the two sequences of residuals. [Pg.150]

The parameter RP among the input data is the ridge parameter that will be exploited in Section 3.5. In normal regression problems, RP = 0 should be used. [Pg.155]
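The role of the ridge parameter can be shown with the modified normal equations. This is a generic sketch (the data and variable names are invented, not the source's program): with RP = 0 the solution reduces to ordinary least squares, while RP > 0 shrinks the estimates.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(30, 3))
y = X @ np.array([1.0, 2.0, -1.0]) + 0.1 * rng.normal(size=30)

def ridge(X, y, rp):
    # Ridge-modified normal equations: (X'X + RP*I) k = X'y.
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + rp * np.eye(p), X.T @ y)

k_ols = ridge(X, y, 0.0)     # RP = 0: ordinary least squares
k_ridge = ridge(X, y, 10.0)  # RP > 0: shrunken (biased, lower-variance) estimates
```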

To obtain the estimate (3.23) in a multivariate linear regression problem we... [Pg.178]


See other pages where Regression Problems is mentioned: [Pg.81]    [Pg.426]    [Pg.159]    [Pg.174]    [Pg.220]    [Pg.313]    [Pg.327]    [Pg.35]    [Pg.46]    [Pg.55]    [Pg.447]    [Pg.448]    [Pg.92]    [Pg.115]    [Pg.176]    [Pg.177]    [Pg.235]    [Pg.376]    [Pg.80]    [Pg.226]    [Pg.158]    [Pg.163]    [Pg.163]    [Pg.178]    [Pg.205]    [Pg.346]   
See also in sourсe #XX -- [ Pg.392 ]







Differential model, regression problems

Functional estimation problem regression function

Linear Regression Problems

Nonlinear Regression Problems

Nonmatrix Solutions to the Linear, Least-Squares Regression Problem

Principal components regression problem

Regression Sample Problem

Regression some potential problems

Special Problems in Simple Linear Regression

© 2024 chempedia.info