Big Chemical Encyclopedia

Chemical substances, components, reactions, process design ...


Ordinary least-squares estimated using

The transformed response values were regressed on the transformed amount values using the simple linear regression model and ordinary least squares estimation. The standard deviation of the response values (about the regression line) was calculated, and plots were formed of the transformed response values and of the residuals versus transformed amounts. [Pg.136]
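A fit of this kind can be sketched with NumPy; the data and the log transformation below are hypothetical stand-ins for the transformed amounts and responses described above:

```python
import numpy as np

def ols_line(x, y):
    """Fit y = b0 + b1*x by ordinary least squares; return intercept,
    slope, residuals, and the standard deviation about the line."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    b1, b0 = np.polyfit(x, y, 1)                  # slope, intercept
    resid = y - (b0 + b1 * x)
    s = np.sqrt(np.sum(resid**2) / (len(x) - 2))  # sd about the regression line
    return b0, b1, resid, s

# Hypothetical calibration data: log-transformed amounts and responses
amount = np.log([1.0, 2.0, 4.0, 8.0, 16.0])
response = np.log([2.1, 4.0, 8.2, 15.8, 32.5])
b0, b1, resid, s = ols_line(amount, response)
# Plotting resid against amount (as in the text) would reveal curvature
# or heteroscedasticity that the fitted line alone hides.
```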

A two-way fixed effects model. Suppose the fixed effects model is modified to include a time-specific dummy variable as well as an individual-specific variable. Then, y_it = α_i + γ_t + β′x_it + ε_it. At every observation, the individual- and time-specific dummy variables sum to one, so there are some redundant coefficients. The discussion in Section 13.3.3 shows one way to remove the redundancy. Another useful way to do this is to include an overall constant and to drop one of the time-specific and one of the individual-specific dummy variables. The model is, thus, y_it = δ + (α_i − α_1) + (γ_t − γ_1) + β′x_it + ε_it. (Note that the respective time- or individual-specific variable is zero when t or i equals one.) Ordinary least squares estimates of β can... [Pg.57]
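A minimal sketch of this dummy-variable formulation, using a hypothetical noise-free panel so that OLS recovers β exactly (the overall constant is kept, and the first individual and first time dummy are dropped to remove the redundancy):

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 4, 5                                  # hypothetical panel: 4 units, 5 periods
i_idx = np.repeat(np.arange(N), T)           # individual index per observation
t_idx = np.tile(np.arange(T), N)             # time index per observation
x = rng.normal(size=N * T)
alpha = np.array([0.0, 1.0, -0.5, 2.0])      # individual effects
gamma = np.array([0.0, 0.3, 0.6, 0.9, 1.2])  # time effects
y = 1.0 + alpha[i_idx] + gamma[t_idx] + 2.0 * x   # true beta = 2, no noise

# Design: overall constant + (N-1) individual dummies + (T-1) time dummies.
# Dropping the first dummy of each kind avoids the redundancy noted above.
D_i = (i_idx[:, None] == np.arange(1, N)).astype(float)
D_t = (t_idx[:, None] == np.arange(1, T)).astype(float)
X = np.column_stack([np.ones(N * T), D_i, D_t, x])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
beta_hat = coef[-1]                          # OLS estimate of beta
```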

To estimate the coefficients, we will use a two-step FGLS procedure. Ordinary least squares estimates based on Section 19.4.3 are consistent, but inefficient. The OLS regression produces... [Pg.109]

This bias exists if we estimate β using an ordinary least-squares estimation, but it can increase significantly if we estimate a non-cointegrated process based on Eqn (3) using other approaches that iteratively determine both β and ρ. The alternative approach we would usually apply in such a situation is an estimation based on the first differences. But once again we show in the Appendix that the error on the left-hand side of the equation can create a very strong bias in this estimation. [Pg.57]

Non-linear sensor responses can be modeled using linear equations with the same assumptions as the ordinary least-squares approach. Using a polynomial to fit curves is better in most cases than using transformations to linearize the data. For example, the following equation can be used to estimate a curvilinear calibration curve ... [Pg.294]
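A quadratic calibration of this kind is still linear in its coefficients, so ordinary least squares applies unchanged; the sensor data below are hypothetical:

```python
import numpy as np

# Hypothetical curvilinear sensor calibration: response vs. concentration
conc = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
resp = np.array([0.1, 2.2, 4.1, 7.5, 12.8])

# Quadratic model y = b0 + b1*x + b2*x^2 -- linear in (b0, b1, b2),
# so polyfit solves it by ordinary least squares.
b2, b1, b0 = np.polyfit(conc, resp, 2)
predict = lambda x: b0 + b1 * x + b2 * x**2
```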

Associated with each of the three schemes of contrasts one can calculate the variance of the resulting estimates. Figure 25.6 gives the factor by which σ²/N would have to be multiplied to obtain the variance for each of the three ordinary least-squares estimators of phenotypic response. A fourth, universal contrast has been added. This does not correspond to any least-squares solution, except the linear one when p(a) = 0.5. It is, however, universally unbiased for any of the three models of inheritance being considered here: dominant, recessive, additive (linear). This is because it uses the contrast (−1, 1) and hence does not use the heterozygous group at all. In fact, it is... [Pg.446]

The goal of this study is to test hypotheses about the relationships between multiple independent variables and one dependent variable. As most of my latent constructs are measured on interval scales and I expect linear relationships between the variables, multiple linear regression analysis with ordinary least squares estimation was used (Cohen 2003; Tabachnick and Fidell 1989). The study had two thematically separate parts: the first part focused on the antecedents of lead userness of employees in firms (n=83, hypotheses 1-3, dependent variable: lead userness). In the second part, hypotheses about lead userness of employees and behavioral outcomes are tested (n=149, hypotheses 4-8, dependent variables: innovative work behavior, internal boundary spanning behavior, external boundary spanning behavior, organizational... [Pg.136]

In analytical practice, linear calibration by ordinary least squares is mostly used. Therefore, the estimates are summarized first, before the uncertainties of the estimates are given ... [Pg.159]
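The usual closed-form OLS estimates for a linear calibration (slope, intercept, and residual standard deviation) can be sketched as follows, with hypothetical calibration data:

```python
import numpy as np

def calibration_ols(x, y):
    """Closed-form OLS estimates for a linear calibration y = a + b*x."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xbar, ybar = x.mean(), y.mean()
    Sxx = np.sum((x - xbar)**2)
    Sxy = np.sum((x - xbar) * (y - ybar))
    b = Sxy / Sxx                 # slope
    a = ybar - b * xbar           # intercept
    resid = y - (a + b * x)
    s_yx = np.sqrt(np.sum(resid**2) / (len(x) - 2))  # residual sd
    return a, b, s_yx

# Hypothetical five-level calibration
a, b, s = calibration_ols([1, 2, 3, 4, 5], [2.0, 4.1, 5.9, 8.2, 9.8])
```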

Ordinary least squares regression requires constant variance across the range of data. This has typically not been satisfied with chromatographic data (4, 9, 10). Some have adjusted data to constant variance by a weighted least squares method ( ). The other general adjustment method has been by transformation of data. The log-log transformation is commonly used (9, 10). One author compares the robustness of nonweighted, weighted linear, and maximum likelihood estimation methods ( ). Another has... [Pg.134]
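Both remedies mentioned above can be sketched on hypothetical heteroscedastic chromatographic data (the proportional-sd variance model is an illustrative assumption):

```python
import numpy as np

# Hypothetical calibration where the response sd grows with concentration
conc = np.array([1.0, 5.0, 10.0, 50.0, 100.0])
resp = np.array([1.1, 5.3, 9.7, 52.0, 98.0])
sd = 0.02 * resp                       # assumed sd proportional to response

# Remedy 1 -- weighted least squares: weight each point by 1/variance
w = 1.0 / sd**2
W = np.diag(w)
X = np.column_stack([np.ones_like(conc), conc])
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ resp)   # [intercept, slope]

# Remedy 2 -- log-log transformation, which restores roughly constant
# variance when the sd is proportional to the response
b1, b0 = np.polyfit(np.log(conc), np.log(resp), 1)
```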

To model the relationship between PLA and PLR, we used each of these in ordinary least squares (OLS) multiple regression to explore the relationship between the dependent variables Mean PLR or Mean PLA and the independent variables (Berry and Feldman, 1985). OLS regression was used because the data satisfied the OLS assumptions for the model as the best linear unbiased estimator (BLUE): the distribution of errors (residuals) is normal, they are uncorrelated with each other, and homoscedastic (constant variance among residuals), with a mean of 0. We also analyzed predicted values plotted against residuals, as they are a better indicator of non-normality in aggregated data, and found them also to be homoscedastic and independent of one another. [Pg.152]
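The residual checks described above can be sketched on simulated data; note that with an intercept, OLS residuals have mean zero and are uncorrelated with the fitted values by construction, so it is the *plots* (fitted vs. residuals) that reveal heteroscedasticity or non-normality:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 200)
y = 3.0 + 0.5 * x + rng.normal(scale=1.0, size=200)  # homoscedastic errors

b1, b0 = np.polyfit(x, y, 1)          # OLS fit
fitted = b0 + b1 * x
resid = y - fitted

# Numerical diagnostics: residual mean and correlation with fitted values.
# Both are ~0 by construction for OLS with an intercept; a fitted-vs-residual
# plot (as used in the study) is what exposes funnel shapes or curvature.
mean_resid = resid.mean()
corr = np.corrcoef(fitted, resid)[0, 1]
```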

Using data shown in Figure 13.4, we used ordinary least squares to estimate the effect of the probability of survival to age 65 on life expectancy at birth and found a significantly positive association between these two measures: a 10% increase in probability of survival to age 65 was associated with a 1.3% increase in life expectancy. This result, combined with the estimates in Tables 13.2 and 13.3, implies that a 10% increase in the stock of pharmaceutical innovation would lead to an increase in life expectancy at birth by 0.10% (i.e., 0.8% × 1.3%) to 0.18% (1.4% × 1.3%). [Pg.255]

Because the model has both lagged dependent variables and autocorrelated disturbances, ordinary least squares will be inconsistent. Consistent estimates could be obtained by the method of instrumental variables. We can use x_{t-1} and x_{t-2} as the instruments for y_{t-1} and y_{t-2}. Efficient estimates can be obtained by a two-step procedure. We write the model as y_t − ρy_{t-1} = α(1 − ρ) + β(x_t − ρx_{t-1}) + γ(y_{t-1} − ρy_{t-2}) + δ(y_{t-2} − ρy_{t-3}) + u_t. With a consistent estimator of ρ, we could use FGLS. The residuals from the IV estimator can be used to estimate ρ. Then OLS using the transformed data is asymptotically equivalent to GLS. The method of Hatanaka discussed in the text is another possibility. [Pg.97]
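A sketch of the two-step idea for a simpler one-lag version of such a model (simulated data; x_{t-1} serves as the instrument for y_{t-1}, ρ is estimated from the IV residuals, and the data are quasi-differenced before the final OLS step):

```python
import numpy as np

rng = np.random.default_rng(2)
T = 5000
rho, a, b, g = 0.5, 1.0, 2.0, 0.3

# Simulate y_t = a + b*x_t + g*y_{t-1} + e_t, with e_t = rho*e_{t-1} + u_t
x = rng.normal(size=T)
e = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    e[t] = rho * e[t - 1] + rng.normal()
    y[t] = a + b * x[t] + g * y[t - 1] + e[t]

# Step 1: instrumental variables. x_{t-1} instruments y_{t-1}, since x
# is uncorrelated with the autocorrelated disturbance (just-identified).
Y = y[2:]
X = np.column_stack([np.ones(T - 2), x[2:], y[1:-1]])   # regressors
Z = np.column_stack([np.ones(T - 2), x[2:], x[1:-1]])   # instruments
beta_iv = np.linalg.solve(Z.T @ X, Z.T @ Y)

# Step 2: estimate rho from the IV residuals, quasi-difference the data
# (the constant column becomes 1 - rho automatically), then run OLS.
resid = Y - X @ beta_iv
rho_hat = np.sum(resid[1:] * resid[:-1]) / np.sum(resid[:-1]**2)
Ys = Y[1:] - rho_hat * Y[:-1]
Xs = X[1:] - rho_hat * X[:-1]
beta_fgls = np.linalg.solve(Xs.T @ Xs, Xs.T @ Ys)   # [const term, b, g]
```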

Clearly, the model cannot be estimated by ordinary least squares, since there is an autocorrelated disturbance and a lagged dependent variable. The parameters can be estimated consistently, but inefficiently, by linear instrumental variables. The inefficiency arises from the fact that the parameters are overidentified; the linear estimator estimates seven functions of the five underlying parameters. One possibility is a GMM estimator. Let v_t = g_t − (γ + φ)g_{t-1} + (γφ)g_{t-2}. Then, a GMM estimator can be defined in terms of, say, a set of moment equations of the form E[v_t w_t] = 0, where w_t is current and lagged values of x and z. A minimum distance estimator could then be used for estimation. [Pg.98]

When the uncertainty is negligible in comparison with S, the estimates given by Eq. (7) do not differ from the estimates used in the ordinary least squares technique. However, an increase in the uncertainty will influence the result of such estimation. Theoretically it may even happen that σ > S, and Eq. (7) leads to an absurd result. In such cases the value is taken as infinite and no absurd results are obtained [6]. This influence is very important not only for determination of the analyte concentration X0 corresponding to the response Y0 by the calibration curve, but also for the correct uncertainty evaluation in the determination result. [Pg.106]

Normally, the uncertainties in the concentrations of the calibration solutions (variable x) are small in relation to the uncertainties of the response of the measurement system (variable y), so that the regression parameters can be estimated using ordinary least squares (OLS). In exceptional cases, the test quantity S_ex/S_x (S_ex² means the variance in the concentration of the calibration solutions for a particular calibration level, and S_x² indicates the total variance in concentration) can be calculated, and if S_ex/S_x > 0.2 the regression parameters should be estimated by orthogonal distance regression (ODR) [10]. [Pg.255]
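The decision rule above can be sketched directly; the calibration levels and per-level concentration sds here are hypothetical:

```python
import numpy as np

def needs_odr(x_level_sds, x_all):
    """Rule of thumb from the text: if the sd of the concentrations at any
    calibration level exceeds 20% of the overall sd of the concentrations,
    prefer orthogonal distance regression (ODR) over OLS."""
    s_x = np.std(x_all, ddof=1)
    return any(s_ex / s_x > 0.2 for s_ex in x_level_sds)

# Hypothetical calibration: five levels and their concentration sds
levels = np.array([1.0, 2.0, 5.0, 10.0, 20.0])
per_level_sd = [0.01, 0.02, 0.05, 0.1, 0.2]
use_odr = needs_odr(per_level_sd, levels)   # False: OLS is adequate here
```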

Ridge regression analysis is used when the independent variables are highly interrelated, and stable estimates for the regression coefficients cannot be obtained via ordinary least squares methods (Rozeboom, 1979; Pfaffenberger and Dielman, 1990). It is a biased estimator that gives estimates with small variance, better precision and accuracy. [Pg.169]

Inhibition constants (K_i) are estimated by Dixon plot analysis and linear regression using ordinary least squares. Apparent K_m values are estimated by nonlinear regression (Vavricka et al. 2002). [Pg.534]

Assuming that we have measured a series of concentrations over time, we can define a model structure and obtain initial estimates of the model parameters. The objective is to determine an estimate of the parameters (CLe, Vd) such that the differences between the observed and predicted concentrations are comparatively small. Three of the most commonly used criteria for obtaining a best fit of the model to the data are ordinary least squares (OLS), weighted least squares (WLS), and extended least squares (ELS); ELS is a maximum likelihood procedure. These criteria are achieved by minimizing the following quantities, ... [Pg.130]
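The three criteria can be sketched as objective functions for a hypothetical one-compartment model (the model form, dose, and power variance model are illustrative assumptions, not the chapter's):

```python
import numpy as np

def conc_model(t, cl, vd, dose=100.0):
    """One-compartment IV bolus: C(t) = (dose/Vd) * exp(-(CL/Vd) * t).
    Model form and dose are illustrative assumptions."""
    return (dose / vd) * np.exp(-(cl / vd) * t)

def ols_obj(params, t, c_obs):
    """OLS: unweighted sum of squared residuals."""
    pred = conc_model(t, *params)
    return np.sum((c_obs - pred)**2)

def wls_obj(params, t, c_obs, w):
    """WLS: each squared residual weighted (e.g., by 1/variance)."""
    pred = conc_model(t, *params)
    return np.sum(w * (c_obs - pred)**2)

def els_obj(params, t, c_obs, xi=1.0):
    """ELS: variance modeled as pred**xi; the log-variance term makes
    this a maximum likelihood criterion (assumed power variance model)."""
    pred = conc_model(t, *params)
    var = pred**xi
    return np.sum((c_obs - pred)**2 / var + np.log(var))

t = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
c_obs = conc_model(t, cl=5.0, vd=50.0)   # noise-free data for illustration
```

Any of these objectives would then be minimized over (CL, Vd), e.g. with `scipy.optimize.minimize`.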

Ordinary Least Squares regression (OLS), also called Multiple Linear Regression (MLR), is the most common regression technique used to estimate the quantitative relationship between molecular descriptors and the property. Partial Least Squares (PLS) regression is widely applied, especially when there is a large number of molecular descriptors relative to the number of training compounds, as happens with methods such as GRID and CoMFA. [Pg.1252]
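The MLR/OLS case can be sketched with NumPy on hypothetical QSAR-style data (noise-free, so the descriptor weights are recovered exactly); it requires more compounds than descriptors, which is exactly the regime where PLS takes over:

```python
import numpy as np

# Hypothetical QSAR-style data: 10 compounds, 3 molecular descriptors
rng = np.random.default_rng(3)
X = rng.normal(size=(10, 3))
true_w = np.array([0.8, -1.2, 0.4])
y = X @ true_w + 0.5                    # property values, no noise

# MLR/OLS via least squares with an intercept column
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
# coef[0] is the intercept; coef[1:] are the descriptor weights
```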









