Big Chemical Encyclopedia


Regression calculations

The linear regression calculations for a 2^k factorial design are straightforward and can be done without the aid of a sophisticated statistical software package. To simplify the computations, factor levels are coded as +1 for the high level and -1 for the low level. The relationship between a factor's coded level, x_f*, and its actual value, X_f, is given as... [Pg.677]
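The coding trick described above can be sketched in a few lines of NumPy (the factor names and response values here are illustrative, not from the cited text). Because the coded columns of a two-level factorial design are orthogonal, the regression coefficients reduce to simple signed averages, which is why no sophisticated software is needed:

```python
import numpy as np

# Hypothetical 2^2 factorial design: two factors A and B at coded -1/+1 levels.
# Columns of the design matrix: intercept, A, B, AB interaction.
X = np.array([
    [1, -1, -1, +1],
    [1, +1, -1, -1],
    [1, -1, +1, -1],
    [1, +1, +1, +1],
], dtype=float)
y = np.array([22.5, 30.1, 25.4, 33.0])  # made-up responses

beta = np.linalg.lstsq(X, y, rcond=None)[0]
print(beta)  # [intercept, effect of A, effect of B, AB interaction]

# Because the coded columns are orthogonal, X.T @ X = 4*I, and each
# coefficient is just a signed average of the responses -- no software needed:
assert np.allclose(beta, X.T @ y / 4)
```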

The dashed line in Fig. 3.2 corresponds to a linear regression calculation yielding M_e = 8 kg/mol for the average molecular mass between entanglements if no chemical crosslinks are present (M_c -> infinity). This result agrees reasonably with values for various thermoplastics as determined from elasticity measurements on melts [30, 31, 32]. Examples are given in Table 3.2. [Pg.325]

The 20% soy lecithin (Table 7.17) and the 2% DOPC (Table 7.15) intrinsic permeabilities may be compared in a Collander equation, as shown in Fig. 7.44. The slope of the regression line, soy versus DOPC, is greater than unity. This indicates that the soy membrane is more lipophilic than the DOPC membrane. Intrinsic permeabilities are generally higher in the soy system. Three molecules were significant outliers in the regression: metoprolol, quinine, and piroxicam. Metoprolol and quinine are less permeable in the DOPC system than expected, based on their apparent relative lipophilicities and in vivo absorptions [593]. In contrast, piroxicam is more permeable in DOPC than expected based on its relative lipophilicity. With these outliers removed from the regression calculation, the statistics were impressive, at r^2 = 0.97. [Pg.215]

We will not repeat Anscombe's presentation, but we will describe what he did, and strongly recommend that the original paper be obtained and perused (or, alternatively, the paper by Fearn [15]). In his classic paper, Anscombe provides four sets of (synthetic, to be sure) univariate data with obviously different characteristics. The data are arranged so as to permit univariate regression to be applied to each set. The defining characteristic of one of the sets is severe nonlinearity. But when you do the regression calculations, all four sets of data are found to have identical calibration statistics: the slope, y-intercept, SEE, R^2, F-test, and residual sum of squares are the same for all four sets of data. Since the numeric values that are calculated are the same for all data sets, it is clearly impossible to use these numeric values to identify any of the characteristics that make each set unique. In the case that is of interest to us, those statistics provide no clue as to the presence or absence of nonlinearity. [Pg.425]
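The effect is easy to reproduce. The following check uses the quartet data published in Anscombe's 1973 paper and shows that ordinary regression reports essentially the same slope, intercept, and R^2 for all four sets:

```python
import numpy as np

x123 = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
quartet = [
    (x123, [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]),
    (x123, [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]),
    (x123, [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73]),
    ([8, 8, 8, 8, 8, 8, 8, 19, 8, 8, 8],
     [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89]),
]

for x, y in quartet:
    x, y = np.asarray(x, float), np.asarray(y, float)
    slope, intercept = np.polyfit(x, y, 1)
    r2 = np.corrcoef(x, y)[0, 1] ** 2
    print(f"slope={slope:.3f}  intercept={intercept:.2f}  R^2={r2:.2f}")
# Every line reports essentially the same fit (slope ~0.500, intercept ~3.00,
# R^2 ~0.67), even though only one of the four sets is plausibly linear.
```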

Matlab recognises the importance of linear regression calculations and introduced a very elegant and useful notation: the / (forward slash) and \ (backslash) operators; see pp. 117-118. [Pg.109]
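For readers working outside MATLAB, here is a rough counterpart (a sketch, not from the cited text): MATLAB's a = F \ y returns the least-squares solution of F a = y, and np.linalg.lstsq plays the same role for an overdetermined system:

```python
import numpy as np

# MATLAB's backslash, a = F \ y, returns the least-squares solution of F a = y.
# The NumPy counterpart for an overdetermined system is np.linalg.lstsq:
F = np.array([[1.0, 0.1],
              [1.0, 0.5],
              [1.0, 0.9],
              [1.0, 1.3]])            # design matrix: intercept + one regressor
y = np.array([2.1, 2.9, 3.8, 4.55])   # made-up measurements

a, *_ = np.linalg.lstsq(F, y, rcond=None)
print(a)  # fitted [intercept, slope]
```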

In this chapter we expand the linear regression calculation into higher dimensions, i.e. instead of a vector y of measurements and a vector a of fitted linear parameters, we deal with matrices Y of data and A of parameters. [Pg.139]
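A minimal NumPy sketch of this expansion (synthetic matrices, not the chapter's worked example): the least-squares solver accepts a matrix right-hand side, so fitting a matrix A of parameters to a matrix Y of data looks just like the vector case:

```python
import numpy as np

rng = np.random.default_rng(0)
F = rng.normal(size=(20, 3))          # 20 measurements, 3 known model columns
A_true = np.array([[1.0, -2.0],
                   [0.5,  0.0],
                   [3.0,  1.5]])      # 3 parameters for each of 2 responses
Y = F @ A_true                        # noise-free synthetic data matrix

# lstsq accepts a matrix right-hand side and fits every column of Y at once:
A_fit, *_ = np.linalg.lstsq(F, Y, rcond=None)
print(A_fit)
```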

Non-linear regression calculations are extensively used in most sciences. The goals are very similar to the ones discussed in the previous chapter on Linear Regression. Now, however, the function describing the measured data is non-linear and as a consequence, instead of an explicit equation for the computation of the best parameters, we have to develop iterative procedures. Starting from initial guesses for the parameters, these are iteratively improved or fitted, i.e. those parameters are determined that result in the optimal fit, or, in other words, that result in the minimal sum of squares of the residuals. [Pg.148]
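As an illustration of such an iterative procedure (a plain Gauss-Newton sketch on synthetic data, not the book's algorithm), the parameters of an exponential model are refined step by step, starting from initial guesses, until the residual sum of squares is minimal:

```python
import numpy as np

def residuals(p, t, y):
    a, k = p
    return y - a * np.exp(-k * t)

def jacobian(p, t):
    # derivatives of the residuals with respect to a and k
    a, k = p
    e = np.exp(-k * t)
    return np.column_stack([-e, a * t * e])

t = np.linspace(0.0, 5.0, 50)
y = 2.0 * np.exp(-0.8 * t)            # synthetic, noise-free "measurements"
p = np.array([1.5, 0.5])              # initial guesses for a and k

for _ in range(50):                   # iterative improvement of the parameters
    J, r = jacobian(p, t), residuals(p, t, y)
    p = p - np.linalg.lstsq(J, r, rcond=None)[0]

print(p)  # converges toward the true values a = 2.0, k = 0.8
```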

Recall equation (4.19), y_calc = F a. To achieve minimal chi-square in a linear regression calculation, all we need to do is divide each element of y and of the column vectors f_j by its corresponding sigma_y,i, resulting in the weighted vectors y_w and f_w (... [Pg.190]
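That row-scaling recipe can be sketched directly (the data and per-point uncertainties below are illustrative; the variable names follow the excerpt loosely):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 7.8])
sigma_y = np.array([0.1, 0.1, 0.4, 0.4])    # hypothetical uncertainties sigma_y,i

F = np.column_stack([np.ones_like(x), x])   # column vectors f_j
F_w = F / sigma_y[:, None]                  # divide every row by sigma_y,i
y_w = y / sigma_y

b, *_ = np.linalg.lstsq(F_w, y_w, rcond=None)
print(b)  # chi-square-optimal [intercept, slope]

# Cross-check against the explicit weighted normal equations:
W = np.diag(1.0 / sigma_y**2)
assert np.allclose(b, np.linalg.solve(F.T @ W @ F, F.T @ W @ y))
```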

Due to the orthonormality of V, this is a particularly simple linear regression calculation. The vector b is computed as ... [Pg.229]

Table I. Algebraic Equations for First-Order Regression Calculations...
Uncertainty in Pure Spectra (Model Diagnostic). The pure-component spectra are estimated from a standard multiple linear regression calculation (Equation 5.16) and, therefore, error estimates are available. The error estimates for all pure spectra at variable are shown in Equation 5.17 ... [Pg.294]

Determine nmol glucose in the 200-µl sample by linear regression calculation from the standard values. Calculate the glucose concentration of the sample in mmol/l. [Pg.439]
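One way this calculation might look (the standard-curve readings below are invented for illustration): fit the standards, invert the line for the sample reading, then convert, noting that 1 nmol per µl is the same as 1 mmol per litre:

```python
import numpy as np

# Hypothetical standard curve: absorbance readings for known nmol glucose.
nmol_std = np.array([0.0, 10.0, 20.0, 40.0, 60.0])
abs_std = np.array([0.02, 0.11, 0.20, 0.38, 0.56])   # made-up readings

slope, intercept = np.polyfit(nmol_std, abs_std, 1)

# Invert the calibration line for a sample reading, then convert nmol in
# the 200-ul sample to a concentration: 1 nmol/ul equals 1 mmol/l.
abs_sample = 0.29
nmol_sample = (abs_sample - intercept) / slope
conc_mmol_per_l = nmol_sample / 200.0
print(nmol_sample, conc_mmol_per_l)  # -> 30.0 nmol, 0.15 mmol/l
```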

Determine values for 0 min and 20 min by linear regression calculation from the values of the six time points from 0 min to 25 min. [Pg.466]

Figure 13.4 Plot of k_Nu,CH3Br versus n_Nu,CH3I values for a series of nucleophiles (see Tables 13.3 and 13.4). A linear regression calculation yields the relationship ...
It is often assumed in regression calculations that the experimental error affects only the y value and that the concentration, which is typically placed on the x axis, is error-free. When this is not the case, the data points used to estimate the best parameters of a straight line do not all have the same quality. In such cases, a weight w_i is applied to each data point and a weighted regression is used. A variety of formulae have been proposed for this weighting. [Pg.395]

In this equation, fugacities were used instead of partial pressures to take into account the nonideal behaviour of gases at high pressure. The coefficients A to E were determined by non-linear regression calculation using the method of Marquardt [25]. From the measurements at various temperatures, the frequency factor, k0, and the activation energy, E, were evaluated. The data are collected in Table 3.3-1. [Pg.90]

Summary of individual regression calculations for each strain... [Pg.17]

The interaction energy of the solute with the solvent is not expressed as the geometric mean of φ1 and φ2, as shown in Equation (3.13), but as a polynomial power series in the solubility parameter of the solvent. The coefficients of the polynomial equation are determined experimentally by regression analysis. Figure 3.2 shows that the regressed (calculated) solubilities of caffeine in a mixture of water and dioxane are in good agreement with the experimental values. [Pg.132]
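Such a polynomial fit is a one-liner with np.polyfit. The data below are invented (constructed to lie on a parabola with a solubility maximum near delta = 17) purely to show the mechanics, not the caffeine/water/dioxane measurements of Figure 3.2:

```python
import numpy as np

# Illustrative (not experimental) data: log solubility of a solute versus
# the solubility parameter delta of a solvent mixture.
delta = np.array([12.0, 14.0, 16.0, 18.0, 20.0, 22.0])
log_sol = np.array([-1.90, -1.58, -1.42, -1.42, -1.58, -1.90])

# Fit the polynomial power series log_sol = c2*delta^2 + c1*delta + c0:
coeffs = np.polyfit(delta, log_sol, 2)
fitted = np.polyval(coeffs, delta)
print(coeffs)               # highest power first; c2 < 0 gives a maximum
print(np.round(fitted, 2))  # regressed solubilities to compare with "experiment"
```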

Principal component analysis enables reduction of a large data matrix to two or three main components that capture the relevant information along orthogonal directions. In this way, changes in metabolic profiles described by many variables can be quantified and compared. Subsequently, using other calculation procedures on the reduced data matrix, the importance of the variables (metabolites) can be determined and assessed. Discriminant or regression calculation methods are of great importance in this step of the analysis. [Pg.247]
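The reduction step can be sketched with a singular value decomposition on a synthetic metabolite table (the data below are randomly generated, deliberately built so that nearly all variance lies along two hidden factors; nothing here comes from the cited study):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical metabolite table: 8 samples x 5 metabolites, built so that
# almost all variance lies along two hidden factors.
scores_true = rng.normal(size=(8, 2))
loadings_true = rng.normal(size=(2, 5))
X = scores_true @ loadings_true + 0.01 * rng.normal(size=(8, 5))

Xc = X - X.mean(axis=0)               # mean-centre each metabolite
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

explained = s**2 / np.sum(s**2)       # variance captured by each component
print(np.round(explained, 3))         # the first two components dominate

scores = U[:, :2] * s[:2]             # reduced data matrix (8 samples x 2 PCs)
```

The scores matrix is what subsequent discriminant or regression calculations would operate on.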

The inverse correlation between the bioavailability of Zn and Mn/Fe was relatively weak (r = 0.41). The positive residuals of this relationship were largely data collected during the winter. The winter influx of fresh water into San Francisco Bay was accompanied by an increase in the sediments of hydroxylamine-extractable Fe (presumably freshly precipitated Fe) and humic substances (27). We have data from too few San Francisco Bay stations to include the humics in regression calculations. However, the increase in hydroxylamine-Fe generally coincided with increases in Zn concentrations in Macoma; thus, the combined Fe/Mn ratio in Equation 2 explained 60 percent of the temporal and spatial variance in the Zn concentrations of the bivalve when all the data were considered. [Pg.592]

The use of 80Kr and 82Kr to arrive at P81/P83 (Equation (12)) is unsatisfactory for samples that contain appreciable amounts of Br, from which neutron capture produces 80Kr and 82Kr, but not 83Kr. To avoid this problem, P81/P83 has often been calculated from an empirical relation obtained by selecting samples with negligible neutron capture effects and then regressing calculated values of P81/P83 (from Equation (12)) on directly measured values of 78Kr/83Kr (Marti and Lugmair, 1971). The regression gave... [Pg.352]

It is obvious that there is error in both the x and the y values. Calculation of the least-squares slope and intercept by "standard" methods is clearly not valid. In the custom function that follows, the method of Deming is used to calculate the regression parameters for the straight line y = mx + b. The Deming regression calculation assumes Gaussian distribution of errors in both x and y values and uses duplicate measurements of x values (and of y values) to estimate the standard errors. A portion of a data table is shown in Figure 17-1. [Pg.299]
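The closed-form Deming slope can be written compactly. The following is a minimal sketch (not the custom spreadsheet function described in the text, and it omits the duplicate-measurement error estimation); delta is the assumed ratio of the y-error variance to the x-error variance:

```python
import numpy as np

def deming_fit(x, y, delta=1.0):
    """Deming regression of y = m*x + b with errors in both x and y.

    delta is the assumed ratio of the y-error variance to the x-error
    variance (delta = 1 gives orthogonal regression). A minimal sketch,
    not the custom function described in the text.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxx = np.mean((x - x.mean()) ** 2)
    syy = np.mean((y - y.mean()) ** 2)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))
    m = (syy - delta * sxx
         + np.sqrt((syy - delta * sxx) ** 2 + 4.0 * delta * sxy ** 2)) / (2.0 * sxy)
    b = y.mean() - m * x.mean()
    return m, b

# Sanity check: on exactly collinear data the fit recovers the line itself.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
m, b = deming_fit(x, 2.0 * x + 1.0)
print(m, b)  # -> 2.0 1.0
```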

Verify that S-Plus 5.1 performs the NIST StRD Linear Regression calculations to within 3 significant digits. [Pg.87]

