Big Chemical Encyclopedia

Chemical substances, components, reactions, process design ...


Response variables

Understanding the distribution allows us to calculate the expected values of random variables that are normally and independently distributed. In least squares multiple regression, or in calibration work in general, there is a basic assumption that the error in the response variable is random and normally distributed, with a variance that follows a χ² distribution. [Pg.202]
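A quick Monte Carlo check makes the quoted distributional assumption visible: for normally and independently distributed errors, the scaled sample variance (n−1)s²/σ² follows a χ² distribution with n−1 degrees of freedom, whose mean is n−1. The noise level, sample size, and seed below are invented for illustration, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(5)
sigma = 0.3          # assumed true standard deviation of the errors
n, reps = 10, 20000  # n errors per experiment, repeated many times

errors = sigma * rng.normal(size=(reps, n))   # i.i.d. normal residuals
s2 = errors.var(axis=1, ddof=1)               # sample variance per experiment

# (n-1) s^2 / sigma^2 should follow chi-square with n-1 dof (mean n-1)
scaled = (n - 1) * s2 / sigma**2
print(scaled.mean())   # should be close to n-1 = 9
```

With 20,000 replicates the empirical mean of the scaled variances lands very close to the theoretical value n−1.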

What is a reasonable statistical model, or equation, to approximate the relationship between the independent variables and each response variable? [Pg.522]

The term nonlinear in nonlinear programming does not refer to a material or geometric nonlinearity; instead it refers to the nonlinearity of the mathematical optimization problem itself. The first step in the optimization process involves answering questions such as: what is the buckling response, the vibration response, the deflection response, and the stress response? Requirements usually exist for every one of those response variables. Putting those response characteristics and constraints together leads to an equation set that is inherently nonlinear, irrespective of whether the material properties themselves are linear or nonlinear, and that nonlinear equation set is where the term nonlinear programming comes from. [Pg.429]

Prediction of the useful life, or the remaining life, of coatings from physical or analytical measurements presents many problems in data analysis and interpretation. Two important considerations are that data must be taken over a long period of time, and the scatter from typical paint tests is large. These considerations require innovative application of statistical techniques to provide adequate prediction of the response variables of interest. [Pg.88]

Canonical Correlation Analysis (CCA) is perhaps the oldest truly multivariate method for studying the relation between two measurement tables X and Y [5]. It generalizes the concept of squared multiple correlation, or coefficient of determination, R². In Chapter 10 on multiple linear regression we found that R² is a measure of the linear association between a univariate y and a multivariate X. This R² tells how much of the variance of y is explained by X: R² = ŷᵀŷ / yᵀy = ||ŷ||² / ||y||². Now we extend this notion to a set of response variables collected in the multivariate data set Y. [Pg.317]
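A minimal numpy sketch of the quoted identity, assuming the data are centered so that R² = ||ŷ||²/||y||² holds in that form; the design, coefficients, and noise level are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))                     # multivariate predictors
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)

# Center both sides so the quoted form R^2 = ||y_hat||^2 / ||y||^2 applies
Xc = X - X.mean(axis=0)
yc = y - y.mean()

beta, *_ = np.linalg.lstsq(Xc, yc, rcond=None)   # least squares fit
y_hat = Xc @ beta                                # fitted values

r_squared = (y_hat @ y_hat) / (yc @ yc)          # squared multiple correlation
```

Because ŷ is the orthogonal projection of the centered y onto the column space of X, the ratio is guaranteed to lie between 0 and 1.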

As already noted by Campbell and Greaves (16), the rhizosphere lacks physically precise delimitations and its boundary is hard to demarcate. Dimensions may vary with plant species and cultivar, stage of development, and type of soil. Soil moisture may affect the measurable size of the rhizosphere as well: wetter soils may stick better to roots than drier soils (Fig. 1). This will change the volume of soil regarded as rhizosphere soil upon separation of rhizosphere from bulk soil, and thus alter the measured concentration in rhizosphere and non-rhizosphere soil of a response variable such as exudate concentration or microbial production. [Pg.162]

Figure 1 Schematic representation of the dynamics of a response variable, e.g., concentration of rhizodeposited C, in the rhizosphere (dashed line) and the measured concentrations in rhizosphere and nonrhizosphere samples (solid lines). The vertical arrow indicates the separation of rhizosphere and nonrhizosphere soil; the effect of soil moisture is indicated by horizontal arrows.
A single experiment consists of the measurement of each of the m response variables for a given set of values of the n independent variables. For each experiment, the measured output vector, which can be viewed as a random variable, comprises the deterministic part calculated by the model (Equation 2.1) and the stochastic part represented by the error term, i.e.,... [Pg.9]
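The decomposition of a measurement into a deterministic model part plus a stochastic error term can be sketched as follows; the exponential model f(x; k) stands in for Equation 2.1 and is purely hypothetical, as are the parameter values and noise level.

```python
import numpy as np

def model(x, k):
    # hypothetical deterministic model f(x; k) standing in for Equation 2.1
    return k[0] * np.exp(-k[1] * x)

rng = np.random.default_rng(1)
k_true = np.array([2.0, 0.5])          # assumed "true" parameter vector
x = np.linspace(0.0, 4.0, 9)           # values of the independent variable

eps = 0.05 * rng.normal(size=x.shape)  # stochastic error term
y_measured = model(x, k_true) + eps    # measured output = model + error
```

Each element of `y_measured` is one measured response; subtracting the error term recovers the deterministic model prediction exactly.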

In certain circumstances, the model equations may not have an explicit expression for the measured variables. Namely, the model can only be represented implicitly. In such cases, the distinction between dependent and independent variables becomes rather fuzzy, particularly when all the variables are subject to experimental error. As a result, it is preferable to consider an augmented vector of measured variables, y, that contains both regressor and response variables (Box. [Pg.10]

Finally, we should refer to situations where both independent and response variables are subject to experimental error, regardless of the structure of the model. In this case the experimental data are described by the set (ŷᵢ, x̂ᵢ), i=1,2,...,N, as opposed to (ŷᵢ, xᵢ), i=1,2,...,N. The deterministic part of the model is the same as before; however, we now have to consider, besides Equation 2.3, the error in xᵢ, i.e., x̂ᵢ = xᵢ + εₓᵢ. These situations in nonlinear regression can be handled very efficiently using an implicit formulation of the problem, as shown later in Section 2.2.2. [Pg.11]
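A small simulation of this errors-in-variables setting, with hypothetical noise levels. Ordinary least squares applied naively to the noisy regressor is only approximately unbiased when the error in x is small relative to the spread of x; the implicit formulation referred to in the text handles the general case.

```python
import numpy as np

rng = np.random.default_rng(6)
x_true = np.linspace(0.0, 2.0, 15)
k_true = 1.3
y_true = k_true * x_true                 # deterministic part of the model

# Both variables are measured with error: x_hat = x + eps_x, y_hat = y + eps_y
x_hat = x_true + 0.05 * rng.normal(size=x_true.shape)
y_hat = y_true + 0.05 * rng.normal(size=x_true.shape)

# Naive least squares on the noisy regressor ignores eps_x; with small
# eps_x the resulting bias (attenuation) is mild, but it grows with the
# size of the x-error relative to the spread of x
k_ols = (x_hat @ y_hat) / (x_hat @ x_hat)
```

With the small x-error assumed here the slope estimate stays close to the true value; inflating the x-error would pull it systematically toward zero.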

Case I: Let us consider the stringent assumption that the error terms in each response variable and for each experiment (εᵢⱼ, i=1,...,N; j=1,...,m) are all identically and independently distributed (i.i.d.) normally with zero mean and variance σε². Namely,... [Pg.17]

Case II: Next let us consider the more realistic assumption that the variance of a particular response variable is constant from experiment to experiment; however, different response variables have different variances, i.e.,... [Pg.17]

If the covariance matrices of the response variables are unknown, the maximum likelihood parameter estimates are obtained by maximizing the Loglikelihood function (Equation 2.20) over k and the unknown variances. Following the distributional assumptions of Box and Draper (1965), i.e., assuming that Σ₁ = Σ₂ = ... = Σ_N = Σ, it can be shown that the ML parameter estimates can be obtained by minimizing the determinant (Bard, 1974)... [Pg.19]
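A sketch of the Box–Draper determinant criterion under the stated assumptions: minimize det(EᵀE), where E is the N×m matrix of residuals across all responses. The two-response model, noise level, and the choice of Nelder–Mead via `scipy.optimize.minimize` are my own illustrative assumptions, not the text's implementation.

```python
import numpy as np
from scipy.optimize import minimize

def model(x, k):
    # hypothetical two-response model (not from the text)
    return np.column_stack([k[0] * x, k[0] * x**2 + k[1]])

rng = np.random.default_rng(2)
x = np.linspace(1.0, 5.0, 20)
k_true = np.array([1.5, 0.8])
Y = model(x, k_true) + 0.05 * rng.normal(size=(20, 2))   # N=20, m=2

def box_draper(k):
    # determinant criterion: minimize det(E^T E), E = residual matrix
    E = Y - model(x, k)
    return np.linalg.det(E.T @ E)

res = minimize(box_draper, x0=np.array([1.0, 0.0]), method="Nelder-Mead")
k_hat = res.x
```

The determinant criterion needs no knowledge of Σ, which is exactly why it is attractive when the response covariances are unknown.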

In implicit estimation, rather than minimizing a weighted sum of squares of the residuals in the response variables, we minimize a suitable implicit function of the measured variables dictated by the model equations. Namely, if we substitute the actual measured variables in Equation 2.8, an error term always arises, even if the mathematical model is exact. [Pg.20]

The simple linear regression model has a single response variable, a single independent variable, and two unknown parameters. [Pg.24]

Given N measurements of the response variables (output vector), the parameters are obtained by minimizing the Linear Least Squares (LS) objective function, which is given below as the weighted sum of squares of the residuals, namely,... [Pg.26]
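For the linear case the weighted objective S(k) = (y − Xk)ᵀQ(y − Xk) has the closed-form minimizer k* = (XᵀQX)⁻¹XᵀQy. A sketch with an invented design matrix and noise level, using the identity weighting matrix (ordinary least squares):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 30
X = np.column_stack([np.ones(N), np.linspace(0.0, 3.0, N)])  # design matrix
k_true = np.array([0.5, 2.0])
y = X @ k_true + 0.1 * rng.normal(size=N)    # N measured responses

Q = np.eye(N)   # user-chosen weighting matrix (identity = ordinary LS)

# Minimizing S(k) = (y - X k)^T Q (y - X k) yields the normal equations
k_hat = np.linalg.solve(X.T @ Q @ X, X.T @ Q @ y)
```

Solving the normal equations with `np.linalg.solve` is preferable to forming the matrix inverse explicitly.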

Let us consider first the most general case of the multiresponse linear regression model represented by Equation 3.2. Namely, we assume that we have N measurements of the m-dimensional output vector (response variables), yᵢ, i=1,...,N. [Pg.27]

Once we have estimated the unknown parameter values in a linear regression model and the underlying assumptions appear to be reasonable, we can proceed and make statistical inferences about the parameter estimates and the response variables. [Pg.32]

The covariance matrix COV(k*) is obtained by Equation 3.30. Let us now concentrate on the expected mean response of a particular response variable. The (1−α)100% confidence interval of ŷᵢ₀ (i=1,...,m), the i-th element of the response vector y₀ at x₀, is given below... [Pg.34]
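A hedged numpy/scipy sketch of such an interval for a single-response linear model. The design, parameters, and 95% confidence level are my own assumptions, and the parameter covariance is computed as s²(XᵀX)⁻¹, the unweighted stand-in for COV(k*) of Equation 3.30.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n, p = 25, 2
X = np.column_stack([np.ones(n), np.linspace(0.0, 5.0, n)])
k_true = np.array([1.0, 0.7])
y = X @ k_true + 0.2 * rng.normal(size=n)

k_hat = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ k_hat
s2 = resid @ resid / (n - p)              # estimated error variance
cov_k = s2 * np.linalg.inv(X.T @ X)       # stands in for COV(k*), Eq. 3.30

x0 = np.array([1.0, 2.5])                 # point of interest
y0 = x0 @ k_hat                           # expected mean response at x0
half = stats.t.ppf(0.975, n - p) * np.sqrt(x0 @ cov_k @ x0)
ci = (y0 - half, y0 + half)               # 95% CI for the mean response
```

The half-width depends on x₀ through x₀ᵀ COV(k*) x₀, so the interval widens as x₀ moves away from the bulk of the design points.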

In all the above cases we presented confidence intervals for the mean expected response rather than a future observation (future measurement) of the response variable, y₀. In the latter case, besides the uncertainty in the estimated parameters, we must include the uncertainty due to the measurement error (σε). [Pg.35]

Problems that can be described by a multiple linear regression model (i.e., they have a single response variable, m=1) can be readily solved by available software. We will demonstrate how such problems can be solved by using Microsoft Excel and SigmaPlot. [Pg.35]

Step 5. Click in the text box for known values for the single response variable y, then go to the Excel sheet and highlight the y values. [Pg.36]

These problems refer to models that have more than one (m>1) response variable, n independent variables, and p unknown parameters. These problems cannot be solved with the readily available software that was used in the previous three examples. They can be solved by using Equation 3.18. We often use our nonlinear parameter estimation computer program; obviously, since it is a linear estimation problem, convergence occurs in one iteration. [Pg.46]

The nature of the mathematical model that describes a physical system may dictate a range of acceptable values for the unknown parameters. Furthermore, repeated computation of the response variables for various values of the parameters, and subsequent plotting of the results, provides valuable experience to the analyst about the behavior of the model and its dependency on the parameters. As a result of this exercise, we often come up with fairly good initial guesses for the parameters. The only disadvantage of this approach is that it could be time consuming. This is counterbalanced by the fact that one learns a lot about the structure and behavior of the model at hand. [Pg.135]

If two or more of the unknown parameters are highly correlated, or one of the parameters does not have a measurable effect on the response variables, matrix A may become singular or near-singular. In such a case we have a so-called ill-posed problem, and matrix A is ill-conditioned. [Pg.141]

Thus, the error in the solution vector is expected to be large for an ill-conditioned problem and small for a well-conditioned one. In parameter estimation, vector b is comprised of a linear combination of the response variables (measurements), which contain the error terms. Matrix A does not depend explicitly on the response variables; it depends only on the parameter sensitivity coefficients, which depend only on the independent variables (assumed to be known precisely) and on the estimated parameter vector k, which incorporates the uncertainty in the data. As a result, we expect most of the uncertainty in Equation 8.29 to be present in Δb. [Pg.142]
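The amplification of data error by an ill-conditioned A can be made concrete with the condition number. The matrices below are toy examples of my own, not from the text; the near-singular one mimics the case of two almost collinear parameter sensitivities.

```python
import numpy as np

# Toy normal-equation matrices: one well-conditioned, one nearly singular
A_good = np.array([[2.0, 0.0], [0.0, 1.0]])
A_bad = np.array([[1.0, 1.0], [1.0, 1.0 + 1e-8]])

b = np.array([1.0, 1.0])
db = 1e-6 * np.array([1.0, -1.0])        # tiny error in the data vector b

x_nominal = np.linalg.solve(A_bad, b)
x_perturbed = np.linalg.solve(A_bad, b + db)

# The relative change in the solution can approach cond(A) times the
# relative change in b
rel_change = np.linalg.norm(x_perturbed - x_nominal) / np.linalg.norm(x_nominal)
```

Here a perturbation of b on the order of 10⁻⁶ changes the solution by orders of magnitude, exactly the behavior the text attributes to ill-conditioned A.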

This is equivalent to assuming a constant standard error in the measurement of the j-th response variable, while at the same time the standard errors of different response variables are proportional to the average value of the variables. This is a "safe" assumption when no other information is available, and least squares estimation pays equal attention to the errors from different response variables (e.g., concentration versus pressure or temperature measurements). [Pg.147]

If, however, the measurements of a response variable change over several orders of magnitude, it is better to use the non-constant diagonal weighting matrix Qj given below... [Pg.147]

This is equivalent to assuming that the standard error in the i-th measurement of the response variable is proportional to its value, again a rather "safe" assumption, as it forces least squares to pay equal attention to all data points. [Pg.148]
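A sketch of why 1/yᵢ² diagonal weights equalize the contributions of measurements spanning several orders of magnitude; the measured values and the 5% relative residuals are invented for illustration.

```python
import numpy as np

y_measured = np.array([1.2e-3, 3.4e-2, 0.8, 15.0, 420.0])  # spans ~6 orders

# Constant weights: the largest measurement dominates the LS objective
w_const = np.ones_like(y_measured)

# Relative weights 1/y_i^2: appropriate when the standard error is
# proportional to the measured value
w_rel = 1.0 / y_measured**2

resid = 0.05 * y_measured            # hypothetical 5% relative residuals
contrib_const = w_const * resid**2   # wildly unequal contributions
contrib_rel = w_rel * resid**2       # identical contribution per point
```

With constant weights the contribution of the largest measurement exceeds that of the smallest by many orders of magnitude; with 1/yᵢ² weights every point contributes exactly (0.05)² = 0.0025.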

Finally, it is noted that the above equations can be readily extended to the multi-response case, especially if we assume that there is no cross-correlation between different response variables. [Pg.157]


See other pages where Response variables is mentioned: [Pg.411]    [Pg.412]    [Pg.521]    [Pg.523]    [Pg.94]    [Pg.100]    [Pg.106]    [Pg.71]    [Pg.670]    [Pg.337]    [Pg.162]    [Pg.148]    [Pg.8]    [Pg.16]    [Pg.24]    [Pg.24]    [Pg.25]    [Pg.33]    [Pg.134]   






Bacterial Response Variables

Blank responses, variability

Clopidogrel response variability

Dose-response relationships variability

Drop , process flow, variables responses

Effects of input variables on responses in example

Inference on the Expected Response Variables

Optimization when there is more than one response variable

Outlying Response Variable Observations

Predictor variables predicted responses

Predictor variables single-response regression

Process Flow, Variables, and Responses Aseptic Fill Products

Process Flow, Variables, and Responses Lyophilized Products

Quality control response variables

Response Variability

State variable step response

State variable time response

Suspension , process flow, variables responses

Time Scale and Scope of Bacterial Response Variables

Variability of response

Variable-response pairs

© 2024 chempedia.info