Big Chemical Encyclopedia


Regression analysis, example

Chatterjee, S. and Price, B., 1977. Regression Analysis by Example. Wiley, New York. [Pg.334]

Although equations 5.13 and 5.14 appear formidable, it is only necessary to evaluate four summation terms. In addition, many calculators, spreadsheets, and other computer software packages are capable of performing a linear regression analysis based on this model. To save time and to avoid tedious calculations, learn how to use one of these tools. For illustrative purposes, the necessary calculations are shown in detail in the following example. [Pg.119]
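As a sketch of those four summation terms: a straight-line fit y = b0 + b1·x needs only Σx, Σy, Σxy, and Σx². The calibration-style data below are invented for illustration.

```python
# Hypothetical data (x = concentration, y = signal); illustrative only.
x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [0.1, 2.1, 3.9, 6.1, 8.0]

n = len(x)
# The four summation terms needed for the straight-line fit y = b0 + b1*x.
Sx  = sum(x)
Sy  = sum(y)
Sxy = sum(xi * yi for xi, yi in zip(x, y))
Sxx = sum(xi * xi for xi in x)

b1 = (n * Sxy - Sx * Sy) / (n * Sxx - Sx * Sx)   # slope
b0 = (Sy - b1 * Sx) / n                          # intercept
```

Spreadsheet functions such as Excel's SLOPE and INTERCEPT evaluate the same sums internally.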

Blanco and co-workers reported several examples of the application of multiwavelength linear regression analysis for the simultaneous determination of mixtures containing two components with overlapping spectra. For each of the following, determine the molar concentration of each analyte in the mixture. [Pg.453]
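The idea behind such a multiwavelength determination can be sketched as an overdetermined Beer's-law system solved by least squares. The molar absorptivities and concentrations below are invented for illustration (they are not Blanco's data); path length is taken as 1 cm.

```python
# Two analytes X and Y with overlapping spectra, measured at four wavelengths.
eps_X = [100.0, 80.0, 20.0, 5.0]   # molar absorptivities of X (illustrative)
eps_Y = [10.0, 50.0, 90.0, 120.0]  # molar absorptivities of Y (illustrative)
C_X_true, C_Y_true = 2e-3, 1e-3
# Absorbances generated from Beer's law, A_i = eps_X[i]*C_X + eps_Y[i]*C_Y.
A = [ex * C_X_true + ey * C_Y_true for ex, ey in zip(eps_X, eps_Y)]

# Least-squares solution of the overdetermined system via the normal equations.
Sxx = sum(e * e for e in eps_X)
Syy = sum(e * e for e in eps_Y)
Sxy = sum(ex * ey for ex, ey in zip(eps_X, eps_Y))
SxA = sum(ex * a for ex, a in zip(eps_X, A))
SyA = sum(ey * a for ey, a in zip(eps_Y, A))
det = Sxx * Syy - Sxy * Sxy
C_X = (Syy * SxA - Sxy * SyA) / det
C_Y = (Sxx * SyA - Sxy * SxA) / det
```

With noise-free synthetic absorbances the regression recovers the two concentrations exactly; with real spectra it returns the best-fit values.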

In a curve-fitting method the concentration of a reactant or product is monitored continuously as a function of time, and a regression analysis is used to fit an appropriate differential or integral rate equation to the data. For example, the initial concentration of analyte for a pseudo-first-order reaction, in which the concentration of a product is followed as a function of time, can be determined by fitting a rearranged form of equation 13.12... [Pg.631]
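One way this plays out for a pseudo-first-order reaction monitored through its product is sketched below. The rate constant, times, and concentrations are assumed values, and the linearized form ln([P]∞ − [P]t) = ln[A]0 − kt stands in for the rearranged equation (it is not equation 13.12 itself).

```python
import math

# Synthetic pseudo-first-order data: [P]_t = [A]_0 * (1 - exp(-k*t)).
# k_true and A0_true are assumed, purely for illustration.
k_true, A0_true = 0.25, 1.0e-3
t = [0.0, 2.0, 4.0, 6.0, 8.0]
P = [A0_true * (1.0 - math.exp(-k_true * ti)) for ti in t]
P_inf = A0_true  # plateau value of [P], "measured" at long time

# ln(P_inf - P_t) = ln[A]_0 - k*t: the intercept of the straight line gives
# the initial analyte concentration, the slope gives -k.
y = [math.log(P_inf - Pi) for Pi in P]
n = len(t)
St, Sy = sum(t), sum(y)
Sty = sum(ti * yi for ti, yi in zip(t, y))
Stt = sum(ti * ti for ti in t)
slope = (n * Sty - St * Sy) / (n * Stt - St * St)
k_fit = -slope
A0_fit = math.exp((Sy - slope * St) / n)
```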

The first two examples show that the interaction of the model parameters and database parameters can lead to inaccurate estimates of the model parameters. Any use of the model outside the operating conditions (temperature, pressures, compositions, etc.) upon which the estimates are based will lead to errors in the extrapolation. These model parameters are effectively no more than adjustable parameters such as those obtained in linear regression analysis. More complicated models may have more subtle interactions. Despite the parameters' ties to theory, they embody not only the uncertainties in the plant data but also the uncertainties in the database. [Pg.2556]

The example in Figure 3 is as complex as is usually possible to analyze. There are seven unknowns, if no indices of refraction are being solved for in the regression analysis. If correlation is a problem, then a less complex model must be assumed. For example, the assumption that and are each fixed at a value of 0.5 might reduce correlation. The five remaining unknowns in the regression analysis would then be and 3. In practice one first assumes the simplest possible model,... [Pg.406]

A non-linear regression analysis is employed using the Solver in a Microsoft Excel spreadsheet to determine the values of Vmax and Km in the following examples. Example 1-5 (Chapter 1) involves the enzymatic reaction in the conversion of urea to ammonia and carbon dioxide, and Example 11-1 deals with the interconversion of D-glyceraldehyde 3-phosphate and dihydroxyacetone phosphate. The Solver (EXAMPLE11-1.xls and EXAMPLE11-3.xls) uses the Michaelis-Menten (MM) formula to compute v_calc. The residual sum of squares between v_obs and v_calc is then calculated. Using guessed values of Vmax and Km, the Solver uses a search optimization technique to determine the MM parameters. The values of Vmax and Km in Example 11-1 are ... [Pg.849]
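Excel's Solver is not essential; the same minimization can be sketched in a few lines. The substrate concentrations, observed rates, and starting guesses below are invented for illustration (they are not the data of Example 11-1), and a crude pattern search stands in for Solver's optimizer.

```python
# Nonlinear regression of the Michaelis-Menten model v = Vmax*S/(Km + S) by
# minimizing the residual sum of squares. The "observed" rates are generated
# from Vmax = 2.0 and Km = 1.5, so the fit should recover those values.
S = [0.2, 0.5, 1.0, 2.0, 5.0, 10.0]
v_obs = [2.0 * s / (1.5 + s) for s in S]

def rss(Vmax, Km):
    """Residual sum of squares between observed and calculated rates."""
    return sum((v - Vmax * s / (Km + s)) ** 2 for s, v in zip(S, v_obs))

# Crude pattern search standing in for Solver: step away from the guessed
# values, halving the step whenever no neighboring point improves the fit.
best, step = (1.0, 1.0), 1.0   # guessed Vmax, Km
for _ in range(60):
    Vb, Kb = best
    neighbors = [(Vb + i * step, Kb + j * step)
                 for i in (-1, 0, 1) for j in (-1, 0, 1)
                 if Vb + i * step > 0 and Kb + j * step > 0]
    best = min(neighbors, key=lambda c: rss(*c))
    if best == (Vb, Kb):
        step /= 2
Vmax_fit, Km_fit = best
```

In practice one would use a dedicated optimizer (e.g. Levenberg-Marquardt) rather than this sketch, but the objective (the residual sum of squares) is the same.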

We now consider a type of analysis in which the data (which may consist of solvent properties or of solvent effects on rates, equilibria, and spectra) again are expressed as a linear combination of products as in Eq. (8-81), but now the statistical treatment yields estimates of both a_i and x_i. This method is called principal component analysis or factor analysis. A key difference between multiple linear regression analysis and principal component analysis (in the chemical setting) is that regression analysis adopts chemical models a priori, whereas in factor analysis the chemical significance of the factors emerges (if desired) as a result of the analysis. We will not explore the statistical procedure, but will cite some results. We have already encountered examples in Section 8.2 on the classification of solvents and in the present section in the form of the Swain et al. treatment leading to Eq. (8-74). [Pg.445]
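The contrast can be made concrete with a minimal principal component sketch: no model is imposed, and the dominant factor emerges from the data alone. The two-variable data set below is invented for illustration (roughly y = 2x), and power iteration on the 2×2 covariance matrix finds the leading component.

```python
# Illustrative two-variable data; the first principal component should point
# roughly along the direction of slope 2.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.0), (5.0, 9.9)]
n = len(data)
mx = sum(x for x, _ in data) / n
my = sum(y for _, y in data) / n
# Sample covariance matrix elements (n - 1 divisor).
cxx = sum((x - mx) ** 2 for x, _ in data) / (n - 1)
cyy = sum((y - my) ** 2 for _, y in data) / (n - 1)
cxy = sum((x - mx) * (y - my) for x, y in data) / (n - 1)

# Power iteration for the leading eigenvector of [[cxx, cxy], [cxy, cyy]].
v = (1.0, 1.0)
for _ in range(100):
    w = (cxx * v[0] + cxy * v[1], cxy * v[0] + cyy * v[1])
    norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
    v = (w[0] / norm, w[1] / norm)
# v is now the (normalized) first principal component of the data.
```

Nothing chemical was assumed here; the factor is whatever direction carries the most variance, which is exactly the sense in which factor significance "emerges" from the analysis.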

The various independent variables can be the actual experimental variables or transformations of them. Different transformations can be used for different variables. The independent variables need not be actually independent. For example, linear regression analysis can be used to fit a cubic equation by setting X, X², and X³ as the independent variables. [Pg.256]
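A sketch of this trick: the cubic y = b0 + b1·x + b2·x² + b3·x³ is linear in its coefficients, so ordinary least squares applies once x, x², and x³ are treated as separate independent variables. The data below are synthetic, generated from known coefficients.

```python
# Synthetic data from a known cubic, for illustration only.
xs = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
b_true = [1.0, -2.0, 0.5, 0.25]
ys = [b_true[0] + b_true[1]*x + b_true[2]*x**2 + b_true[3]*x**3 for x in xs]

# Design matrix with columns 1, x, x^2, x^3; solve the normal equations
# (X^T X) b = X^T y by Gaussian elimination.
X = [[1.0, x, x**2, x**3] for x in xs]
p = 4
XtX = [[sum(r[i] * r[j] for r in X) for j in range(p)] for i in range(p)]
Xty = [sum(r[i] * y for r, y in zip(X, ys)) for i in range(p)]

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

b_fit = solve(XtX, Xty)  # recovers b_true for noise-free data
```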

Example 7.20 Use linear regression analysis to determine k, m, and n for the data taken at 1 atm total pressure for the ethane iodination reaction in Problem 7.1. [Pg.257]

One-dimensional data are plotted versus an experimental variable; a prime example is the Lambert-Beer plot of absorbance vs. concentration, as in a calibration run. The graph is expected to be a straight line over an appreciable range of the experimental variable. This is the classical domain of linear regression analysis. [Pg.91]

The final values of the rate constants along with their temperature dependencies were obtained with nonlinear regression analysis, which was applied to the differential equations. The model fits the experimental results well, having an explanation factor of 98%. Examples of the model fit are provided by Figures 8.3 and 8.4. An analogous treatment can be applied to other hemicelluloses. [Pg.176]

As an example, five different synthetic colorants (Tartrazine, Sunset Yellow, Ponceau 4R, Amaranth, and Brilliant Blue FCF) from drinks and candies were separated on a polyamide adsorbent at pH 4, eluted with an alkaline-ammonia solution. By another method, 13 synthetic food colorants were isolated from various foods using specific adsorption on wool. After elution with 10% ammonia solution and gentle warming, an absorption spectrum of the resulting colorant solution was recorded, compared to the reference spectra of pure colorants, and identified by linear regression analysis. ... [Pg.534]

A central concept of statistical analysis is variance [105], which is simply the average squared deviation from the mean, or the square of the standard deviation. Since the analyst can only take a limited number n of samples, the variance is estimated as the sum of squared deviations from the mean divided by n - 1. Analysis of variance asks whether groups of samples are drawn from the same overall population or from different populations [105]. The simplest example of analysis of variance is the F-test (and the closely related t-test), in which one takes the ratio of two variances and compares the result with tabulated values to decide whether it is probable that the two samples came from the same population. Linear regression is also a form of analysis of variance, since one is asking whether the variance around the mean is equivalent to the variance around the least-squares fit. [Pg.34]
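The n − 1 estimate and the F ratio can be sketched in a few lines; the two small samples below are invented for illustration.

```python
def variance(xs):
    """Sample variance: sum of squared deviations from the mean over n - 1."""
    n = len(xs)
    m = sum(xs) / n
    return sum((x - m) ** 2 for x in xs) / (n - 1)

# Two illustrative samples with the same mean but different spread.
a = [4.9, 5.1, 5.0, 4.8, 5.2]
b = [5.3, 4.6, 5.5, 4.4, 5.2]

# F-test statistic: ratio of the larger variance to the smaller one.
F = max(variance(a), variance(b)) / min(variance(a), variance(b))
# F would then be compared with the tabulated critical value for the chosen
# confidence level and the degrees of freedom (4 and 4 here).
```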

More than just a few parameters have to be considered when modelling chemical reactivity in a broader perspective than for the well-defined but restricted reaction sets of the preceding section. Here, however, not enough statistically well-balanced, quantitative, experimental data are available to allow multilinear regression analysis (MLRA). An additional complicating factor derives from the comparison of various reactions, where data of quite different types are encountered. For example, how can product distributions for electrophilic aromatic substitutions be compared with acidity constants of aliphatic carboxylic acids? And on the side of the parameters: how can the influence on chemical reactivity of both bond dissociation energies and bond polarities be simultaneously handled when only limited data are available? ... [Pg.60]

To verify such a steric effect, a quantitative structure-property relationship (QSPR) study on a series of distinct solute-selector pairs, namely various DNB-amino acid/quinine carbamate CSP pairs with different carbamate residues (R_SO) and distinct amino acid residues (R_SA), has been set up [59]. To provide a quantitative measure of the effect of the steric bulkiness on the separation factors within this solute-selector series, α-values were correlated by multiple linear and nonlinear regression analysis with Taft's steric parameter Es, which represents a quantitative estimation of the steric bulkiness of a substituent (note: Es,SA indicates the independent variable describing the bulkiness of the amino acid residue and Es,SO that of the carbamate residue). For example, the steric bulkiness increases in the order methyl < ethyl < n-propyl < n-butyl < i-propyl < cyclohexyl < -butyl < sec.-butyl < t-butyl < 1-adamantyl < phenyl < trityl and, simultaneously, Es drops from -1.24 to -6.03. In other words, the smaller the Es, the more bulky the substituent. The obtained QSPR equation reads as follows ... [Pg.22]

Three critical points can be made in this analysis. The first one is located at the "thorough look" instruction. This examination in reality involves a critical analysis of the experimental protocol and the data produced from it. For example, it was quite evident in collecting the standards data from DATASET D that values were well out of line with previous determinations. See other DATASETS, especially DATASET E in the Appendix, for confirmation of this idea. The second critical point is at the "Preparation of the problem" instruction. In this case heteroscedasticity must be removed before submitting the data to regression analysis. Weighted least squares of several types (11) and power transformations (10) can be used. The third critical point... [Pg.46]

It is important to realize that an R or r value (instead of an R² or r² value) might give a false sense of how well the factors explain the data. For example, the R value of 0.956 arises because the factors explain 91.4% of the sum of squares corrected for the mean. An R value of 0.60 indicates that only 36% of SS has been explained by the factors. Although most regression analysis programs will supply both R (or r) and R² (or r²) values, researchers seem to prefer to report the coefficients of correlation (R and r) simply because they are numerically larger and make the fit of the model look better. [Pg.164]
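The arithmetic behind the warning, using the values quoted above:

```python
# R^2, not R, is the fraction of the corrected sum of squares (SS) explained
# by the model; reporting R alone flatters the fit.
R_big = 0.956
frac_big = R_big ** 2        # about 0.914: 91.4% of SS explained

R_small = 0.60
frac_small = R_small ** 2    # only 0.36: an R of 0.60 explains 36% of SS
```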

Although our purpose in introducing the subject of data treatment has been to provide insight into the design of experiments, the technique of least squares (regression analysis) is often used to fit models to data that have been acquired by observation rather than by experiment. In this chapter we discuss an example of regression analysis applied to observational data. [Pg.177]



© 2024 chempedia.info