Big Chemical Encyclopedia


Linear models dependent variables

Thus, Y is a linear function of the new independent variables, X1, X2, .... Linear regression analysis is used to fit linear models to experimental data. The case of three independent variables will be used for illustrative purposes, although there can be any number of independent variables provided the model remains linear. The dependent variable Y can be directly measured or it can be a mathematical transformation of a directly measured variable. If transformed variables are used, the fitting procedure minimizes the sum-of-squares for the differences... [Pg.255]
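A minimal illustration of such a fit in Python (hypothetical data; three independent variables, coefficients estimated by linear least squares):

```python
import numpy as np

# Hypothetical data: 8 observations of 3 independent variables.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(8, 3))
y = 2.0 + 1.5 * X[:, 0] - 0.7 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(0, 0.1, 8)

# Augment with a column of ones for the intercept, then minimize the
# sum of squared differences ||y - Xb||^2 by linear least squares.
Xa = np.column_stack([np.ones(len(y)), X])
coef, resid_ss, rank, _ = np.linalg.lstsq(Xa, y, rcond=None)
print(coef)  # intercept followed by the three slope coefficients
```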

Multiple linear regression (MLR) models a linear relationship between a dependent variable and one or more independent variables. [Pg.481]

Multiple linear regression is strictly a parametric supervised learning technique. A parametric technique is one which assumes that the variables conform to some distribution (often the Gaussian distribution); the properties of the distribution are assumed in the underlying statistical method. A non-parametric technique does not rely upon the assumption of any particular distribution. A supervised learning method is one which uses information about the dependent variable to derive the model; an unsupervised learning method does not. Thus cluster analysis, principal components analysis and factor analysis are all examples of unsupervised learning techniques. [Pg.719]
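The supervised/unsupervised distinction can be made concrete with a small scikit-learn sketch on synthetic data: MLR consumes the dependent variable y, while PCA never sees it.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))          # independent variables
y = X @ np.array([1.0, 0.5, -2.0, 0.0]) + rng.normal(0, 0.1, 50)

# Supervised: the dependent variable y is used to derive the model.
mlr = LinearRegression().fit(X, y)

# Unsupervised: PCA is computed from X alone; y plays no role.
pca = PCA(n_components=2).fit(X)
print(mlr.coef_, pca.explained_variance_ratio_)
```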

We now consider the case in which, again, the independent variable x_i is considered to be accurately known, but now we suppose that the variances in the dependent variable y_i are not constant, but may vary (either randomly or continuously) with x_i. To show the basis of the method we use the simple linear univariate model, written as Eq. (2-76). [Pg.44]
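One common way to handle non-constant variance in y_i is weighted least squares, sketched here on hypothetical data under the assumption that the error structure is known (np.polyfit accepts weights proportional to 1/sigma):

```python
import numpy as np

# Hypothetical univariate data y = a + b*x where the standard deviation
# of y grows with x (non-constant variance).
rng = np.random.default_rng(2)
x = np.linspace(1, 10, 20)
sigma = 0.05 * x                       # assumed known error structure
y = 1.0 + 0.5 * x + rng.normal(0, sigma)

# Weighted least squares: np.polyfit takes weights w_i = 1/sigma_i,
# so the less precise points contribute less to the fit.
b, a = np.polyfit(x, y, deg=1, w=1.0 / sigma)
print(a, b)   # intercept, slope
```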

Whenever one property is measured as a function of another, the question arises of which model should be chosen to relate the two. By far the most common model function is the linear one; that is, the dependent variable y is defined as a linear combination containing two adjustable coefficients and x, the independent variable, namely... [Pg.94]

As an extension of perceptron-like networks, MLF networks can be used for non-linear classification tasks. They can, however, also be used to model complex non-linear relationships between two related series of data: descriptor or independent variables (X matrix) and their associated predictor or dependent variables (Y matrix). Used as such, they are an alternative to other numerical non-linear methods. Each row of the X-data table corresponds to an input or descriptor pattern. The corresponding row in the Y matrix is the associated desired output or solution pattern. A detailed description can be found in Refs. [9,10,12-18]. [Pg.662]
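A minimal MLF-style sketch using scikit-learn's MLPRegressor (a multilayer feed-forward network; the one-descriptor data and all settings here are illustrative):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical non-linear relationship between one descriptor and a response.
rng = np.random.default_rng(3)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

# A small multilayer feed-forward network: one hidden layer of 10 units.
net = MLPRegressor(hidden_layer_sizes=(10,), activation='tanh',
                   max_iter=5000, random_state=0)
net.fit(X, y)
print(net.predict([[1.0]]))   # should be close to sin(1.0) ≈ 0.84
```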

The differential equations are often highly non-linear and the equation variables are often highly interrelated. In the above formulation, y_i represents any one of the dependent system variables and f_i is the general functional relationship relating the derivative, dy_i/dt, to the other dependent variables. The system independent variable, t, will usually correspond to time, but may also represent distance, for example, in the simulation of steady-state models of tubular and column devices. [Pg.123]
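A short sketch of such a system in Python, with a hypothetical pair of coupled non-linear rate equations integrated over the independent variable t:

```python
from scipy.integrate import solve_ivp

# Hypothetical coupled, non-linear system for two reacting species:
# dy1/dt = -k1*y1*y2,  dy2/dt = k1*y1*y2 - k2*y2.
def f(t, y, k1, k2):
    y1, y2 = y
    return [-k1 * y1 * y2, k1 * y1 * y2 - k2 * y2]

sol = solve_ivp(f, t_span=(0.0, 20.0), y0=[1.0, 0.5], args=(0.3, 0.1))
print(sol.y[:, -1])   # the dependent variables at t = 20
```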

The classical multivariate calibration represents the transition of common single component analysis from one dependent variable y (measured value) to m dependent variables (e.g., wavelengths or sensors) which can be simultaneously included in the calibration model. The classical linear calibration (Danzer and Currie [1998] Danzer et al. [2004]) is therefore represented by the generalized matrix relation... [Pg.183]
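A sketch of the classical (direct) calibration step in Python, under the usual assumption that the response matrix A of the standards factors as A = C K, with C the concentrations and K the sensitivities at each wavelength (all numbers hypothetical):

```python
import numpy as np

rng = np.random.default_rng(4)
K_true = rng.uniform(0.1, 1.0, size=(2, 5))     # 2 analytes, 5 wavelengths
C_cal = rng.uniform(0, 1, size=(10, 2))         # 10 calibration standards
A_cal = C_cal @ K_true + rng.normal(0, 0.001, (10, 5))

# Step 1: estimate the sensitivity matrix K from the standards.
K_hat, *_ = np.linalg.lstsq(C_cal, A_cal, rcond=None)

# Step 2: predict the concentrations of an unknown from its spectrum a,
# again by least squares on K^T c = a.
a_unknown = np.array([0.3, 0.7]) @ K_true
c_hat, *_ = np.linalg.lstsq(K_hat.T, a_unknown, rcond=None)
print(c_hat)   # ≈ [0.3, 0.7]
```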

Linear models exhibit the important property of superposition; nonlinear ones do not. Equations (and hence models) are linear if the dependent variables or their derivatives appear only to the first power; otherwise they are nonlinear. In practice the ability to use linear models is of great significance because they are an order of magnitude easier to manipulate and solve than nonlinear ones. [Pg.43]
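The superposition property is easy to verify numerically (a toy check; the squared term in the second model violates it):

```python
import numpy as np

x1, x2 = 2.0, 3.0

linear = lambda x: 4.0 * x           # dependent variable to the first power
nonlinear = lambda x: 4.0 * x**2     # squared term makes it nonlinear

# Superposition: f(x1 + x2) == f(x1) + f(x2) holds only for the linear model.
print(np.isclose(linear(x1 + x2), linear(x1) + linear(x2)))          # True
print(np.isclose(nonlinear(x1 + x2), nonlinear(x1) + nonlinear(x2)))  # False
```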

To compensate for the errors involved in experimental data, the number of data sets should be greater than the number of coefficients p in the model. Least squares is just the application of optimization to obtain the best solution of the equations, meaning that the sum of the squares of the errors between the predicted and the experimental values of the dependent variable y for each data point x is minimized. Consider a general algebraic model that is linear in the coefficients. [Pg.55]
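A minimal sketch of such a least-squares fit via the normal equations (hypothetical data; note that the model is linear in the coefficients even though x appears squared, and that n = 10 data sets exceed p = 3 coefficients, as the text requires):

```python
import numpy as np

# Hypothetical model linear in the coefficients: y = b0 + b1*x + b2*x^2.
rng = np.random.default_rng(5)
x = np.linspace(0, 5, 10)
y = 1.0 + 2.0 * x - 0.3 * x**2 + rng.normal(0, 0.05, 10)

X = np.column_stack([np.ones_like(x), x, x**2])

# Least squares via the normal equations: (X^T X) b = X^T y.
b = np.linalg.solve(X.T @ X, X.T @ y)
print(b)   # estimates of b0, b1, b2
```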

Theory for the transformation of the dependent variable has been presented (B11) and applied to reaction rate models (K4, K10, M8). In transforming the dependent variable of a model, we wish to obtain more perfectly (a) linearity of the model, (b) constancy of error variance, (c) normality of error distribution, and (d) independence of the observations, to the extent that all are simultaneously possible. This transformation will also allow a simpler and more precise data analysis than would otherwise be possible. [Pg.159]
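As one hypothetical instance, log-transforming first-order decay data linearizes the model and roughly stabilizes a multiplicative error variance, serving goals (a) and (b) at once:

```python
import numpy as np

# Hypothetical first-order rate data: c(t) = c0 * exp(-k*t) with
# multiplicative error. Taking ln(c) linearizes the model and makes
# the error variance approximately constant.
rng = np.random.default_rng(6)
t = np.linspace(0, 10, 15)
c = 2.0 * np.exp(-0.4 * t) * np.exp(rng.normal(0, 0.05, 15))

slope, intercept = np.polyfit(t, np.log(c), deg=1)
k_hat, c0_hat = -slope, np.exp(intercept)
print(k_hat, c0_hat)   # ≈ 0.4 and 2.0
```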

To model the relationship between PLA and PLR, we used each of these in ordinary least squares (OLS) multiple regression to explore the relationship between the dependent variables Mean PLR or Mean PLA and the independent variables (Berry and Feldman, 1985). OLS regression was used because the data satisfied the OLS assumptions for the model as the best linear unbiased estimator (BLUE): the distribution of errors (residuals) is normal, the errors are uncorrelated with each other, and they are homoscedastic (constant variance among residuals), with a mean of 0. We also analyzed predicted values plotted against residuals, as they are a better indicator of non-normality in aggregated data, and found them also to be homoscedastic and independent of one another. [Pg.152]
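A sketch of the same diagnostic workflow on synthetic data: fit by OLS, then check that the residuals have mean zero and comparable spread across the range of predicted values.

```python
import numpy as np

# Hypothetical OLS fit followed by the residual checks described above.
rng = np.random.default_rng(7)
X = np.column_stack([np.ones(60), rng.normal(size=(60, 2))])
y = X @ np.array([0.5, 1.2, -0.8]) + rng.normal(0, 0.2, 60)

b, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ b
resid = y - y_hat

print(resid.mean())                    # should be ~0
# Crude homoscedasticity check: residual spread in the lower vs upper
# half of the predicted values should be comparable.
order = np.argsort(y_hat)
print(resid[order[:30]].std(), resid[order[30:]].std())
```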

For comparison purposes, regression parameters were computed for the model defined by Equations 6, 7, 8, and 10 and the model obtained by replacing ln(1/R) in those equations by R. The dependent variable (y) is particulate concentration because it is desired to predict particulate content from reflectance values. Data from Tables I and II were also fitted to exponential and power functions where the independent variable (x) was reflectance, but the fits were found to be inferior to that of the linear relationship. [Pg.76]

An extension of linear regression, multiple linear regression (MLR) involves the use of more than one independent variable. Such a technique can be very effective if it is suspected that the information contained in a single independent variable (x) is insufficient to explain the variation in the dependent variable (y). In PAT, such a situation often occurs because of the inability to find a single analyzer response variable that is affected solely by the property of interest, without interference from other properties or effects. In such cases, it is necessary to use more than one response variable from the analyzer to build an effective calibration model, so that the effects of such interferences can be compensated. [Pg.361]

From the receptor model viewpoint, the total aerosol mass, M, collected on a filter at a receptor is the dependent variable and equal to a linear sum of the mass contributed by p individual sources,... [Pg.77]
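A toy version of this mass balance in Python; since source contributions cannot be negative, non-negative least squares is a natural solver (all profiles hypothetical):

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical receptor model: the vector of species masses measured at
# the receptor, m, is a linear sum of p = 3 source profiles (columns of F).
F = np.array([[0.5, 0.1, 0.0],
              [0.3, 0.6, 0.1],
              [0.2, 0.3, 0.9]])          # species fractions per source
s_true = np.array([4.0, 2.0, 1.0])       # mass contributed by each source
m = F @ s_true

# Source contributions must be non-negative, so use non-negative
# least squares rather than plain OLS.
s_hat, rnorm = nnls(F, m)
print(s_hat)   # ≈ [4.0, 2.0, 1.0]
```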

Even after linearization, the state-space model often contains too many dependent variables for controller design or for implementation as part of the actual control system. Low-order models are thus required for on-line implementation of multivariable control strategies. In this section, we study the reduction in size, or order, of the linearized model. [Pg.178]
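One simple illustration of order reduction is modal truncation: keep the slow, dominant modes of the linearized model and discard the fast ones. This is a sketch on a hypothetical system, not the only approach; balanced truncation and related methods are common alternatives.

```python
import numpy as np

# Hypothetical linearized state-space model dx/dt = A x + B u, y = C x.
# A is chosen symmetric here so the modes are real; in general,
# complex-conjugate mode pairs must be kept or discarded together.
A = np.array([[-20.0, 0.0, 1.0],
              [0.0, -0.5, 0.2],
              [1.0, 0.2, -0.3]])
B = np.array([[1.0], [0.5], [0.0]])
C = np.array([[0.0, 1.0, 1.0]])

lam, V = np.linalg.eig(A)
keep = np.argsort(-lam.real)[:2]        # the 2 slowest (dominant) modes
Vk = V[:, keep]
P = np.linalg.pinv(Vk)

Ar, Br, Cr = P @ A @ Vk, P @ B, C @ Vk  # reduced 2nd-order model
print(np.round(Ar, 3))
```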

This section introduces the regression theory that is needed for the establishment of the calibration models in the forthcoming sections and chapters. The multivariate linear models considered in this chapter relate several independent variables (x) to one dependent variable (y) in the form of a first-order polynomial ... [Pg.164]

Clearly, the model cannot be estimated by ordinary least squares, since there is an autocorrelated disturbance and a lagged dependent variable. The parameters can be estimated consistently, but inefficiently, by linear instrumental variables. The inefficiency arises from the fact that the parameters are overidentified: the linear estimator estimates seven functions of the five underlying parameters. One possibility is a GMM estimator. Let v_t = y_t − (γ + φ)y_{t−1} + γφ y_{t−2}. Then, a GMM estimator can be defined in terms of, say, a set of moment equations of the form E[v_t w_t] = 0, where w_t is current and lagged values of x and z. A minimum distance estimator could then be used for estimation. [Pg.98]

Suppose that a linear probability model is to be fit to a set of observations on a dependent variable, y, which takes values zero and one, and a single regressor, x, which varies continuously across observations. Obtain the exact expressions for the least squares slope in the regression in terms of the mean(s) and variance of x and interpret the result. [Pg.107]
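A sketch of the derivation, in standard notation (not from the source): write the slope as the ratio of sample covariance to variance and use y_i ∈ {0, 1}. With n_1 ones, P = n_1/n, group means x̄_1 and x̄_0 of x over the y = 1 and y = 0 observations, and s_x^2 = (1/n)Σ_i (x_i − x̄)^2,

```latex
b = \frac{\sum_i (x_i-\bar{x})(y_i-\bar{y})}{\sum_i (x_i-\bar{x})^2}
  = \frac{\sum_i (x_i-\bar{x})\,y_i}{n\,s_x^2}
  = \frac{n_1(\bar{x}_1-\bar{x})}{n\,s_x^2}
  = \frac{P(1-P)(\bar{x}_1-\bar{x}_0)}{s_x^2},
```

using Σ_i (x_i − x̄) = 0 and x̄ = P x̄_1 + (1 − P) x̄_0. The slope is therefore proportional to the difference between the mean of x in the two groups, scaled by the variance of x.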

Another critical building block for chemometrics is the technique of linear regression [1,20,21]. In chemometrics, this technique is typically used to build a linear model that relates an independent variable (X) to a dependent variable (Y). For example, in PAC, one... [Pg.233]

Another assumption, which becomes apparent when one carefully examines the model (Equation 8.7), is that all of the model error (f) is in the dependent variable (y). There is no provision in the model for errors in the independent variable (x). In PAC, this is equivalent to saying that there is error only in the reference method, and no error in the on-line analyzer responses. Although this is obviously not true, practical experience over the years has shown that linear regression can be very effective in analytical chemistry applications. [Pg.235]

Support Vector Machine (SVM) is a classification and regression method developed by Vapnik [30]. In support vector regression (SVR), the input variables are first mapped into a higher-dimensional feature space by the use of a kernel function, and then a linear model is constructed in this feature space. The kernel functions often used in SVM include the linear, polynomial, radial basis function (RBF), and sigmoid functions. The generalization performance of SVM depends on the selection of several internal parameters of the algorithm (C and ε), the type of kernel, and the parameters of the kernel [31]...
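A minimal SVR sketch with scikit-learn (synthetic data; the RBF kernel choice and the values of C and epsilon are illustrative, not tuned):

```python
import numpy as np
from sklearn.svm import SVR

# Hypothetical non-linear regression with support vector regression.
rng = np.random.default_rng(8)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sinc(X[:, 0]) + 0.05 * rng.normal(size=200)

# RBF kernel; C and epsilon are the internal parameters the text
# mentions, typically tuned by cross-validation.
svr = SVR(kernel='rbf', C=10.0, epsilon=0.01, gamma='scale')
svr.fit(X, y)
print(svr.predict([[0.0]]))   # np.sinc(0) = 1, so this should be near 1
```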


See other pages where Linear models dependent variables is mentioned: [Pg.274]    [Pg.426]    [Pg.307]    [Pg.328]    [Pg.338]    [Pg.24]    [Pg.65]    [Pg.28]    [Pg.250]    [Pg.133]    [Pg.133]    [Pg.165]    [Pg.71]    [Pg.443]    [Pg.359]    [Pg.217]    [Pg.457]    [Pg.146]    [Pg.104]    [Pg.990]    [Pg.35]    [Pg.81]    [Pg.65]    [Pg.234]    [Pg.61]   
See also in source #XX -- [ Pg.535 ]







Dependence model

Linear variables

Linearized model

Linearly dependent

Model Linearity

Model dependencies

Model variability

Models linear model

Models linearization

Variable dependent

Variable, modeling

Variables dependant

© 2024 chempedia.info