Big Chemical Encyclopedia


Predictor variables predicted responses

Supervised Data Mining. Searching large volumes of data for hidden predictive relationships. Supervised analysis requires one or more "dependent" or response variables to be predicted from a set of "independent" or predictor variables. The techniques used include various classification methods (decision tree, support vector, Bayesian) and various estimation methods (regression, neural nets). [Pg.411]
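As a minimal illustration of this supervised setting (a sketch assuming scikit-learn and NumPy are available; the data and model choices below are synthetic and not from the source), a classification method and an estimation method are each fit to a predictor matrix X and a response y:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                                # predictor ("independent") variables
y_class = (X[:, 0] + X[:, 1] > 0).astype(int)                # categorical response
y_cont = 2.0 * X[:, 0] - X[:, 2] + rng.normal(scale=0.1, size=100)  # continuous response

# Classification method (decision tree) predicts the categorical response
clf = DecisionTreeClassifier(max_depth=3).fit(X, y_class)

# Estimation method (regression) predicts the continuous response
reg = LinearRegression().fit(X, y_cont)

print(clf.predict(X[:5]), reg.predict(X[:5]))
```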

A set of statistical methods using a mathematical equation to model the relationship between an observed or measured response and one or more predictor variables. The goal of this analysis is twofold: modelling and predicting. The relationship is described in algebraic form as ... [Pg.62]
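The algebraic form is truncated in the excerpt above; a standard general statement of such a regression model, given here for orientation rather than as the source's exact equation, is:

```latex
y = f(x_1, x_2, \ldots, x_p;\, \beta) + \varepsilon
\qquad \text{or, in the linear case,} \qquad
y = \beta_0 + \beta_1 x_1 + \cdots + \beta_p x_p + \varepsilon
```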

PHARMACOKINETICS The area under the plasma concentration-time curve (AUC) was identified, in a preliminary analysis, as the important exposure covariate that was predictive of the safety biomarker outcome. Consequently, it became necessary to compare the distributions of AUC values across studies and dosage regimens. Figure 47.8 illustrates distributions of the exposure parameter AUC across studies. It is evident that AUC values are higher in diseased subjects than in healthy volunteer subjects at the same dose level. To adjust for the difference between the two subpopulations, an indicator function was introduced in a first-order regression model to better characterize the dose-exposure data. Let y be the response variable (i.e., AUC), x a predictor variable, β the regression coefficient on x, and ε the error term, which is normally distributed with a mean of zero and variance σ². Thus,... [Pg.1183]
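A minimal sketch of such a first-order model with an indicator term, fit by ordinary least squares (the variable names, data, and coefficients below are illustrative assumptions, not values from the study):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 60
dose = rng.uniform(10, 100, size=n)                # predictor x (dose)
diseased = rng.integers(0, 2, size=n)              # indicator: 1 = diseased, 0 = healthy volunteer
auc = 0.5 * dose + 8.0 * diseased + rng.normal(scale=2.0, size=n)   # response y (AUC)

# Design matrix for y = b0 + b1*x + b2*I(diseased) + e
X = np.column_stack([np.ones(n), dose, diseased])
beta, *_ = np.linalg.lstsq(X, auc, rcond=None)
print("intercept, dose slope, disease shift:", beta.round(2))
```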

The goodness of prediction statistic measures how well a model can be used to estimate future (test) data, that is, how well a regression model (or a classification model) estimates the response variable given a set of values for the predictor variables. This statistic is obtained using validation techniques. [Pg.644]
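One common way to obtain such a statistic is a cross-validated Q2; the leave-one-out sketch below is an assumed illustration (the specific formula and library calls are not taken from the excerpt):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(2)
X = rng.normal(size=(30, 4))
y = X @ np.array([1.0, -0.5, 0.0, 2.0]) + rng.normal(scale=0.3, size=30)

# Leave-one-out predictions of the response from the predictor variables
y_hat = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())

# Q^2: goodness of prediction, 1 - PRESS / total sum of squares
q2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print("Q2 =", round(q2, 3))
```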

Correlation models differ from regression models in that each variable (the ys and the xs) plays a symmetrical role, with neither variable designated as a response or predictor variable. They are viewed as relational, rather than predictive, in this process. Correlation models can be very useful for making inferences about any one variable relative to another, or to a group of variables. We use the correlation models in terms of y and single or multiple xs. [Pg.205]
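As a small illustration (not from the source), the symmetrical treatment can be summarized by a correlation matrix in which no variable is singled out as the response:

```python
import numpy as np

rng = np.random.default_rng(3)
data = rng.multivariate_normal(
    mean=[0, 0, 0],
    cov=[[1.0, 0.8, 0.2],
         [0.8, 1.0, 0.1],
         [0.2, 0.1, 1.0]],
    size=200,
)

# Pearson correlation matrix: every variable plays a symmetrical role
print(np.corrcoef(data, rowvar=False).round(2))
```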

It is important at the outset to clearly define one's basic goals in the scientific study of living systems. Indeed, these objectives are often difficult to discern from research methodology textbooks. Simply put, this enterprise is primarily concerned with the detection of systematic relationships amidst the morass of variability in biobehavioral responses. This task calls for the partitioning of observed variability into systematic and random components, which in turn will yield patterns of associations such that events can be described, predicted, controlled, and ultimately understood. In the simplest case, a research question or hypothesis is tested by an investigation of the existence, direction, and magnitude of a relationship between an independent or predictor variable (IV) and a dependent variable (DV) or criterion variable. [Pg.61]

With the exception of those variables having zero variance (which pick themselves), the decision about which variables to eliminate or include, and the method by which this is done, depends on several factors. The two most important factors are whether the dataset consists of two blocks of variables, a response block (Y) and a descriptor/predictor block (X), and whether the purpose of the analysis is to predict or describe values for one or more of the response variables from a model relating the variables in the two blocks. If this is indeed the aim of the analysis, then it seems reasonable that the choice of variables to be included should depend, to some extent, on the response variable or variables being modeled. This approach is referred to as supervised variable selection. On the other hand, if the variable set consists of only one block of variables, the choice of variables in any analysis will be made with what are referred to as unsupervised variable selection methods. [Pg.307]
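A minimal sketch of the two situations (synthetic data; the correlation-ranking step merely stands in for whatever supervised criterion a given analysis would actually use):

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(50, 6))
X[:, 2] = 1.0                                    # a zero-variance column "picks itself" for removal
y = 3.0 * X[:, 0] + rng.normal(scale=0.5, size=50)

# Unsupervised step: drop zero-variance descriptors (no reference to the response block)
keep = X.var(axis=0) > 0.0
X_reduced = X[:, keep]

# Supervised step: rank remaining descriptors by |correlation| with the response y
corr = np.array([abs(np.corrcoef(X_reduced[:, j], y)[0, 1]) for j in range(X_reduced.shape[1])])
order = np.argsort(corr)[::-1]
print("columns kept:", np.flatnonzero(keep), "ranked by relevance to y:", order)
```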

In Section 8.3 we discuss the issues we face when modelling with multiple logistic regression. We investigate the issue of which predictor variables to include in the model. When we include an extraneous predictor variable that does not affect the response, we will improve the fit to the given data set but will degrade the predictive effectiveness of the model. On the other hand, when the predictors... [Pg.179]

Often, the logistic regression model is run including all possible predictor variables for which we have data. Some of these variables may affect the response very little, if at all. The true coefficient of such a variable, βj, would be very close to zero. Leaving these unnecessary predictor variables in the model can complicate the determination of the effects of the remaining predictor variables. Their removal will lead to an improved model for predictions. This is often referred to as the principle of parsimony. [Pg.194]
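A hedged sketch of this idea (synthetic data; the coefficient threshold is purely illustrative and not a substitute for formal tests of significance):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
X = rng.normal(size=(200, 5))
# Only the first two predictors truly affect the binary response
logit = 1.5 * X[:, 0] - 1.0 * X[:, 1]
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

full = LogisticRegression().fit(X, y)
print("full-model coefficients:", full.coef_.round(2))

# Parsimony: refit using only predictors whose coefficients are not near zero
keep = np.abs(full.coef_.ravel()) > 0.3          # illustrative cutoff
reduced = LogisticRegression().fit(X[:, keep], y)
print("kept predictors:", np.flatnonzero(keep), "reduced coefficients:", reduced.coef_.round(2))
```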

We propose a mathematical relation that maps the predictors x to the responses and that involves a set of adjustable model parameters θ ∈ ℝ^P, whose values we wish to estimate from the measured response data. Let us say that we have a set of N experiments in which, for experiment k = 1, 2, ..., N, x^[k] is the row vector of predictor variables and y^[k] is the row vector of measured response data. For each experiment, we have a model prediction of the response... [Pg.372]
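Read together with the excerpt from [Pg.414] below, the measurement model being set up is the standard one; restated here as a reconstruction rather than the source's typeset equation:

```latex
y^{[k]} = f\!\left(x^{[k]};\, \theta\right) + \varepsilon^{[k]}, \qquad k = 1, 2, \ldots, N
```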

Again, we perform a number N of experiments, where in the kth experiment we have a known set of M predictor variables x^[k] ∈ ℝ^M, and we observe the L responses y^[k] ∈ ℝ^L. We wish to estimate the values of P unknown parameters θ ∈ ℝ^P in a model whose predicted responses for each experiment form a vector f(x^[k]; θ). We assume that the measured responses are equal to the model predictions plus a random error vector. [Pg.414]
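A minimal sketch of such a parameter estimation (assuming SciPy is available; the single-response exponential model below is purely illustrative, not the model from the source):

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(6)

# N experiments, each with M = 1 predictor and L = 1 response
x = np.linspace(0.0, 5.0, 25)
theta_true = np.array([2.0, 0.7])                  # P = 2 unknown parameters
y = theta_true[0] * np.exp(-theta_true[1] * x) + rng.normal(scale=0.05, size=x.size)

def model(theta, x):
    return theta[0] * np.exp(-theta[1] * x)

def residuals(theta):
    # measured responses = model predictions + random error
    return y - model(theta, x)

fit = least_squares(residuals, x0=np.array([1.0, 1.0]))
print("estimated parameters:", fit.x.round(3))
```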

Throughout this chapter, the terms "instrumental response", "independent variables", or "predictors" (this last term is the preferred one) denote the atomic spectra, whereas "dependent", "predictand", or "predicted variable" (the second term is preferred) refer to the concentration(s) of the analyte(s). [Pg.182]

Describe the use of regression equations to predict the value of a dependent variable ("response") from that of an independent one ("predictor").
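For example (illustrative numbers only, fitting and then predicting from a single-predictor regression line):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])           # independent variable ("predictor")
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])          # dependent variable ("response")

slope, intercept = np.polyfit(x, y, 1)             # least-squares regression line
x_new = 6.0
print("predicted response at x = 6:", round(slope * x_new + intercept, 2))
```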


