Big Chemical Encyclopedia


Regression, parameter estimation

The relationship between renal impairment and the absorption and disposition of HMR1964 (insulin glulisine) will be assessed by regressing pharmacokinetic parameters onto CLcr. Regression parameter estimates (± standard error) with confidence intervals and coefficients of correlation (Pearson) with p-values for the test of difference from zero will be reported. Scatter plots of the concentration-time profiles and pharmacokinetic parameters against creatinine clearance will be produced. [Pg.691]

Also under OLS assumptions, the regression parameter estimates have a number of optimal properties. First, θ̂ is an unbiased estimator for θ. Second, the standard errors of the estimates are at a minimum, i.e., the standard errors of the estimates under any other assumptions will be larger than those of the OLS estimates. Third, assuming the errors to be normally distributed, the OLS estimates are also the maximum likelihood (ML) estimates for θ (see below). It is often stated that the OLS parameter estimates are BLUE (Best Linear Unbiased Estimators), in the sense that best means minimum variance. Fourth, OLS estimates are consistent, which in simple terms means that as the sample size increases, both the standard error of the estimates and the bias of the parameter estimates decrease. [Pg.59]
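
The unbiasedness and consistency properties can be illustrated with a small Monte Carlo sketch. The model y = 2 + 0.5x + ε, the sample sizes, and all other settings below are invented for illustration:

```python
import numpy as np

# Monte Carlo sketch of two OLS properties: across many simulated data sets
# the slope estimate averages to the true slope (unbiasedness), and its
# spread shrinks as the sample size grows (consistency).
rng = np.random.default_rng(0)
TRUE_INTERCEPT, TRUE_SLOPE = 2.0, 0.5

def simulate_slopes(n, n_sims=2000):
    """Fit OLS to n_sims simulated data sets of size n; return slope mean and SD."""
    slopes = np.empty(n_sims)
    for i in range(n_sims):
        x = rng.uniform(0, 10, n)
        y = TRUE_INTERCEPT + TRUE_SLOPE * x + rng.normal(0, 1, n)
        X = np.column_stack([np.ones(n), x])           # design matrix
        slopes[i] = np.linalg.lstsq(X, y, rcond=None)[0][1]
    return slopes.mean(), slopes.std()

mean_small, sd_small = simulate_slopes(n=10)
mean_large, sd_large = simulate_slopes(n=100)
print(mean_small, sd_small)   # mean near 0.5
print(mean_large, sd_large)   # mean near 0.5, visibly smaller spread
```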

The variance-covariance matrix of linear regression parameter estimates is given by σ²(XᵀX)⁻¹, and a statistic that summarizes the properties of this variance-covariance matrix is the generalized variance of the regression parameters... [Pg.73]
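
A minimal sketch of this quantity for a straight-line fit follows; the generalized variance is taken here as the determinant of the variance-covariance matrix, which is its usual definition, and the data and error variance are invented:

```python
import numpy as np

# Variance-covariance matrix of OLS estimates, s^2 (X'X)^-1, with s^2
# estimating sigma^2 from the residuals, and its determinant as the
# generalized variance.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

X = np.column_stack([np.ones_like(x), x])          # design matrix
beta = np.linalg.solve(X.T @ X, X.T @ y)           # OLS estimates
resid = y - X @ beta
s2 = resid @ resid / (len(y) - X.shape[1])         # estimate of sigma^2
cov_beta = s2 * np.linalg.inv(X.T @ X)             # variance-covariance matrix
gen_var = np.linalg.det(cov_beta)                  # generalized variance
print(cov_beta)
print(gen_var)
```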

One assumption until now has been that the dependent and independent variables are measured without error. The impact of measurement error on the regression parameter estimates depends on whether the error affects the dependent or independent variable. When Y has measurement error, the effect on the regression model is not problematic if the measurement errors are uncorrelated and unbiased. In this case the linear model becomes... [Pg.79]

However, the expected value of the composite error term is zero, with variance equal to the sum of σ² and the measurement-error variance. Thus the variance of the measurement errors is propagated into the error variance term, thereby inflating it. An increase in the residual variance is not the only effect on the OLS model. If X is a random variable because of measurement error, then when there is a linear relationship between xk and Y, X is negatively correlated with the model error term. If OLS estimation procedures are then used, the regression parameter estimates are both biased and inconsistent (Neter et al., 1996). [Pg.80]
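
The bias from measurement error in X can be sketched with a simulation. In the classical errors-in-variables setting the OLS slope is attenuated toward zero by the factor Var(x)/(Var(x) + Var(u)), where u is the measurement error; all numerical values below are hypothetical:

```python
import numpy as np

# Monte Carlo sketch of attenuation bias: regressing Y on an
# error-contaminated X gives a slope biased toward zero.
rng = np.random.default_rng(1)
n, n_sims = 200, 1000
true_slope = 1.0
var_x, var_u = 4.0, 1.0        # variance of true x and of its measurement error

slopes = []
for _ in range(n_sims):
    x_true = rng.normal(0, np.sqrt(var_x), n)
    x_obs = x_true + rng.normal(0, np.sqrt(var_u), n)   # observed, with error
    y = true_slope * x_true + rng.normal(0, 0.5, n)
    slope = np.cov(x_obs, y)[0, 1] / np.var(x_obs, ddof=1)
    slopes.append(slope)

attenuation = var_x / (var_x + var_u)   # theoretical factor, here 0.8
print(np.mean(slopes), attenuation)
```

Note that the bias does not shrink as n grows, which is the inconsistency the text describes.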

(Iwatsubo et al., 1996; Iwatsubo et al., 1997), or the estimation of drug clearance based on creatinine clearance (Bazunga et al., 1998; Lefevre et al., 1997). In these three examples, log P, in vitro clearance, and creatinine clearance all have some measurement error associated with them that may be large enough to produce significantly biased regression parameter estimates. [Pg.80]

With linear models, exact inferential procedures are available for any sample size. The reason is that, as a result of the linearity of the model parameters, the parameter estimates are unbiased with minimum variance when the assumption of independent, normally distributed residuals with constant variance holds. The same is not true with nonlinear models, because even if the residuals assumption is true, the parameter estimates are not necessarily unbiased, nor do they necessarily have minimum variance. Thus, inferences about the model parameter estimates are usually based on large sample sizes, because the properties of these estimators are asymptotic, i.e., they hold as n → ∞. Only when n is large and the residuals assumption is true will nonlinear regression parameter estimates be approximately normally distributed and almost unbiased with near-minimum variance. As n increases, both the bias and the estimation variability decrease. [Pg.104]
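
A short sketch of nonlinear regression and its asymptotic standard errors, using a hypothetical exponential-decay model C(t) = C0·exp(-kt) and invented data; the covariance matrix returned by scipy's curve_fit is the asymptotic one, and is only trustworthy for large n, as the text notes:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)

def model(t, c0, k):
    """Hypothetical one-compartment decay, C(t) = c0 * exp(-k * t)."""
    return c0 * np.exp(-k * t)

t = np.linspace(0.5, 12, 50)
y = model(t, 100.0, 0.3) + rng.normal(0, 2.0, t.size)   # simulated data

popt, pcov = curve_fit(model, t, y, p0=[80.0, 0.2])     # nonlinear fit
se = np.sqrt(np.diag(pcov))                             # asymptotic standard errors
print(popt, se)
```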

Table 4.3 Results of Monte Carlo simulation testing the effect of residual variance model misspecification on nonlinear regression parameter estimates.
Factor Effects versus Regression Parameters Although we have considered factor effect calculations and regression parameter estimation independently, it is important to understand that the two concepts are linked. More exactly, the following relationship holds ... [Pg.123]

Theorem 3.2 Under the assumptions stated above, the regression parameter estimates are unbiased. [Pg.95]

In the above formula, L represents a matrix of dimension r × p, β̂ represents the p × 1 vector of regression parameter estimates, and V̂β is the corresponding estimated variance-covariance matrix. An estimated positive scalar is added to prevent inverting a highly singular matrix (Guo et al., 2003). However, the limitation of this method is that it is suitable only for one-sample problems, and the asymptotic theory is not suitable for small numbers of replicates. [Pg.217]

Second card FORMAT(8F10.2), control variables for the regression. This program uses a Newton-Raphson-type iteration, which is susceptible to convergence problems with poor initial parameter estimates. Therefore, several features are implemented that help control oscillations, prevent divergence, and determine when convergence has been achieved. These features are controlled by the parameters on this card. The default values are the result of considerable experience and are adequate for the majority of situations. However, convergence may be enhanced in some cases with user-supplied values. [Pg.222]

When estimates of k°, k′, k″, K₁, and K₂ have been obtained, a calculated pH-rate curve is developed with Eq. (6-80). If the experimental points closely follow the calculated curve, it may be concluded that the data are consistent with the assumed rate equation. The constants may be considered adjustable parameters that are modified to achieve the best possible fit, and one approach is to use these initial parameter estimates in an iterative nonlinear regression program. The dissociation constants K₁ and K₂ derived from kinetic data should be in reasonable agreement with the dissociation constants obtained (under the same experimental conditions) by other means. [Pg.290]

Parameter Estimate from physical data Estimate from regression analysis Units... [Pg.537]

The quintessential statistical operation in analytical chemistry consists in estimating, from a calibration curve, the concentration of an analyte in an unknown sample. If the regression parameters a and b, and the unknown's analytical response y, are known, the most likely concentration is given by Eq. (2.19), y being the average of all repeat determinations on the unknown. [Pg.108]
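
A minimal sketch of this inverse prediction, assuming the calibration line y = a + bx so that the estimated concentration is x = (ȳ − a)/b, with ȳ the mean response of the replicates; the standards, responses, and replicate values below are invented:

```python
import numpy as np

# Calibration: fit y = a + b*x to the standards, then invert the line
# to estimate the concentration of the unknown from its mean response.
conc = np.array([0.0, 2.0, 4.0, 6.0, 8.0])         # standard concentrations
resp = np.array([0.02, 0.41, 0.83, 1.18, 1.62])    # measured responses

X = np.column_stack([np.ones_like(conc), conc])
a, b = np.linalg.lstsq(X, resp, rcond=None)[0]     # intercept a, slope b

unknown_reps = np.array([0.95, 0.99, 0.97])        # replicates on the unknown
ybar = unknown_reps.mean()
x_hat = (ybar - a) / b                             # inverse prediction
print(x_hat)
```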

Experimental polymer rheology data obtained in a capillary rheometer at different temperatures are used to determine the unknown coefficients in Equations 11-12. Multiple linear regression is used for parameter estimation. The values of these coefficients for three different polymers are shown in Table I. The polymer rheology is shown in Figures 2-4. [Pg.137]
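
A sketch of multiple linear regression of this kind, using a hypothetical linearized power-law model ln η = a₀ + a₁ ln γ̇ + a₂/T (the actual Equations 11-12 are not reproduced here, and all data below are simulated):

```python
import numpy as np

# Multiple linear regression: build the design matrix for the linearized
# model and solve by least squares; with simulated data, the fit should
# recover coefficients close to the "true" values used to generate it.
rng = np.random.default_rng(3)
n = 30
gamma = rng.uniform(1.0, 100.0, n)       # shear rates, 1/s
T = rng.uniform(450.0, 520.0, n)         # temperatures, K
# hypothetical true coefficients: a0 = 8.0, a1 = -0.6, a2 = 2000.0
ln_eta = 8.0 - 0.6 * np.log(gamma) + 2000.0 / T + rng.normal(0, 0.05, n)

X = np.column_stack([np.ones(n), np.log(gamma), 1.0 / T])
coef, *_ = np.linalg.lstsq(X, ln_eta, rcond=None)
print(coef)
```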

The process must be iterated until convergence, and the final estimates are denoted β̂LB, b̂i,LB, and ω̂LB. The individual regression parameters can therefore be estimated by replacing the final fixed-effects and random-effects estimates in the function g, so that ... [Pg.99]

Nonlinear regression of the data provides the parameter estimates (shown in Table 2) associated with the models listed in Table 1. [Pg.543]

Appendix B. Parameter estimation and statistical analysis of regression... [Pg.539]

Model parameters estimated by linear regression, weighted linear regression, and unweighted non-linear regression are shown in Table B-1. [Pg.544]
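
For the linear cases, the contrast between unweighted and weighted estimation can be sketched as follows; with weights wᵢ = 1/Var(yᵢ), the weighted estimates solve (XᵀWX)β = XᵀWy. The data and the assumed 10% relative error are hypothetical:

```python
import numpy as np

# Unweighted vs weighted linear least squares for y = a + b*x.
x = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
y = np.array([1.1, 2.3, 3.8, 8.4, 15.2])
sd = 0.1 * y                  # assumed ~10% relative error in each y
w = 1.0 / sd**2               # weights = reciprocal variances

X = np.column_stack([np.ones_like(x), x])
W = np.diag(w)
beta_w = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)   # weighted LS
beta_u = np.linalg.lstsq(X, y, rcond=None)[0]        # unweighted LS
print(beta_w, beta_u)
```

The weighted fit gives the precise low-response points more influence, which matters when the error grows with the response.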

Statistical testing of model adequacy and significance of parameter estimates is a very important part of kinetic modelling. Only those models with a positive evaluation in statistical analysis should be applied in reactor scale-up. The statistical analysis presented below is restricted to linear regression and normal or Gaussian distribution of experimental errors. If the experimental error has a zero mean, constant variance and is independently distributed, its variance can be evaluated by dividing SSres by the number of degrees of freedom, i.e. [Pg.545]
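
This variance estimate, s² = SSres/(n − p) for a linear model with p parameters, can be sketched as follows (the straight-line data are invented):

```python
import numpy as np

# Estimate the error variance from the residual sum of squares divided by
# the number of degrees of freedom, n - p.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([1.9, 4.1, 5.8, 8.2, 9.9, 12.1])

X = np.column_stack([np.ones_like(x), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
ss_res = np.sum((y - X @ beta) ** 2)       # residual sum of squares
dof = len(y) - X.shape[1]                  # n - p degrees of freedom
s2 = ss_res / dof                          # error-variance estimate
print(s2)
```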

The models with insignificant overall regression, as indicated by the F-value, and with meaningless parameter estimates (judged by their confidence limits), as indicated by the t-values, should be rejected. If rejection of a parameter does not lead to a physically nonsensical model structure, repeat the parameter estimation and statistical analysis. [Pg.550]
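
The two checks above can be sketched for a straight-line fit: an overall regression F-test (regression mean square over residual mean square) and t-values for the individual parameter estimates, compared against critical values at the 95% level. The data are hypothetical:

```python
import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
y = np.array([2.2, 3.9, 6.1, 8.0, 9.8, 12.2, 13.9, 16.1])

X = np.column_stack([np.ones_like(x), x])
n, p = X.shape
beta = np.linalg.lstsq(X, y, rcond=None)[0]
fitted = X @ beta
ss_res = np.sum((y - fitted) ** 2)
ss_reg = np.sum((fitted - y.mean()) ** 2)
f_value = (ss_reg / (p - 1)) / (ss_res / (n - p))    # overall F-statistic

s2 = ss_res / (n - p)
se = np.sqrt(s2 * np.diag(np.linalg.inv(X.T @ X)))   # parameter standard errors
t_values = beta / se                                 # parameter t-values

f_crit = stats.f.ppf(0.95, p - 1, n - p)             # 95% critical values
t_crit = stats.t.ppf(0.975, n - p)
print(f_value > f_crit, np.abs(t_values) > t_crit)
```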

By way of illustration, the regression parameters of a straight line with slope = 1 and intercept = 0 are recursively estimated. The results are presented in Table 41.1. For each step of the estimation cycle, we included the values of the innovation, variance-covariance matrix, gain vector and estimated parameters. The variance of the experimental error of all observations y is 25 10 absorbance units, which corresponds to r = 25 10 au for all j. The recursive estimation is started with a high value (10 ) on the diagonal elements of P and a low value (1) on its off-diagonal elements. [Pg.580]
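
A minimal sketch of recursive least-squares estimation of such a straight line, updating the innovation, gain vector, parameter estimates, and variance-covariance matrix P at each observation. The start values, measurement-error variance r, and x-grid below are invented and do not reproduce Table 41.1:

```python
import numpy as np

# Recursive least squares for y = b0 + b1*x: each new observation updates
# the gain vector k, the parameter vector b, and the covariance matrix P.
rng = np.random.default_rng(4)
true = np.array([0.0, 1.0])             # intercept 0, slope 1
r = 25e-6                               # assumed measurement-error variance

b = np.zeros(2)                         # initial parameter estimates
P = np.diag([1e6, 1e6]).astype(float)   # large initial uncertainty

for xi in np.linspace(0.1, 2.0, 20):
    h = np.array([1.0, xi])                       # design row for this point
    yi = true @ h + rng.normal(0, np.sqrt(r))     # simulated observation
    innovation = yi - h @ b                       # prediction error
    gain = P @ h / (h @ P @ h + r)                # gain vector
    b = b + gain * innovation                     # parameter update
    P = P - np.outer(gain, h @ P)                 # covariance update

print(b)   # converges toward intercept 0, slope 1
```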

Weighted regression of ²³⁸U-²³⁴U-²³⁰Th-²³²Th isotope data on three or more coeval samples provides robust estimates of the isotopic information required for age calculation. Ludwig (2003) details the use of maximum likelihood estimation of the regression parameters in either coupled XY-XZ isochrons or a single three-dimensional XYZ isochron, where X, Y, and Z correspond to either (1) ²³⁸U/²³²Th, ²³⁰Th/²³²Th and... [Pg.414]



© 2024 chempedia.info