
Inferences on the Parameters

The least squares estimator has several desirable properties. Namely, the parameter estimates are normally distributed, unbiased (i.e., E(k̂) = k), and their covariance matrix is given by COV(k̂) = σ̂ε² A⁻¹. [Pg.32]

Given all the above, it can be shown that the (1 − α)100% joint confidence region for the parameter vector k is an ellipsoid given by the equation [Pg.33]

The corresponding (1 − α)100% marginal confidence interval for each parameter ki, i = 1, 2, ..., p, is k̂i − t(α/2, ν) σ̂ki ≤ ki ≤ k̂i + t(α/2, ν) σ̂ki, where t(α/2, ν) is the critical value of the t-distribution with ν degrees of freedom. [Pg.33]

The standard error of parameter ki, σ̂ki, is obtained as the square root of the corresponding diagonal element of the inverse of matrix A multiplied by σ̂ε, i.e., σ̂ki = σ̂ε √[A⁻¹]ii. [Pg.33]
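As a concrete illustration (ours, not from the source), the following Python sketch computes these quantities for a linear model: A is the normal-equations matrix XᵀX, sigma2_hat estimates the error variance (the square of σ̂ε above), and the marginal intervals use the t-distribution. All names are our own.

import numpy as np
from scipy import stats

def ols_inference(X, y, alpha=0.05):
    """Least squares estimates, standard errors, and (1 - alpha)100% marginal CIs."""
    n, p = X.shape
    A = X.T @ X                            # normal-equations matrix
    k_hat = np.linalg.solve(A, X.T @ y)    # unbiased parameter estimates
    resid = y - X @ k_hat
    sigma2_hat = resid @ resid / (n - p)   # estimated error variance
    cov_k = sigma2_hat * np.linalg.inv(A)  # covariance matrix of the estimates
    se = np.sqrt(np.diag(cov_k))           # standard errors: sqrt of diagonal elements
    t_crit = stats.t.ppf(1 - alpha / 2, n - p)
    ci = np.column_stack([k_hat - t_crit * se, k_hat + t_crit * se])
    return k_hat, se, ci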


In frequentist statistics, by contrast, nuisance parameters are usually treated with point estimates, and inference on the parameter of interest is based on calculations with the nuisance parameter as a constant. This can result in large errors, because there may be considerable uncertainty in the value of the nuisance parameter. [Pg.322]

Procedures on how to make inferences on the parameters and the response variables are introduced in Chapter 11. The design of experiments has a direct impact on the quality of the estimated parameters and is presented in Chapter 12. The emphasis is on sequential experimental design for parameter estimation and for model discrimination. Recursive least squares estimation, used for on-line data analysis, is briefly covered in Chapter 13. [Pg.448]

Figure 5. A decision tree for the choice of sample size and inference method. The first decision node represents the choice of the sample size n. After this decision, the experiment is conducted and generates the data y, which are assumed to follow a distribution with parameter θ. The data are used to make an inference on the parameter θ, and the second decision node a represents the statistical procedure that is used to make this inference. The last node represents the loss induced by choosing such an experiment.
Suppose that we wish to make inferences on the parameters θi, i = 1, ..., g, where θi represents the logarithm of the ratio of the expression levels of gene i under normal and disease conditions. If the ith gene has no differential expression, then the ratio is 1 and hence θi = 0. In testing the g hypotheses H0i: θi = 0, i = 1, ..., g, suppose we set Ri = 1 if H0i is rejected and Ri = 0 otherwise. Then, for any multiple testing procedure, one could in theory provide a complete description of the joint distribution of the indicator variables R1, ..., Rg as a function of θ1, ..., θg in the entire parameter space. This is impractical if g > 2. Different controls of the error rate control different aspects of this joint distribution, with the most popular being weak control of the familywise error rate (FWER), strong control of the familywise error rate, and control of the false discovery rate (FDR). [Pg.144]
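As a hypothetical illustration of the last of these (ours, not from the source), the Benjamini-Hochberg step-up procedure controls the FDR at level q by rejecting the hypotheses with the k smallest p-values, where k is the largest index i such that the ith ordered p-value is at most q·i/g:

import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Boolean rejection vector R1, ..., Rg controlling the FDR at level q."""
    p = np.asarray(pvals)
    g = p.size
    order = np.argsort(p)
    thresholds = q * np.arange(1, g + 1) / g     # q*i/g for i = 1, ..., g
    below = p[order] <= thresholds
    reject = np.zeros(g, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])         # largest i with p_(i) <= q*i/g
        reject[order[:k + 1]] = True
    return reject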

As subjects enroll in a study, the experimenter usually cannot control exactly how old they are or what they weigh. They are random. Still, in this case one may wish either to make inferences on the parameter estimates or to make predictions of future Y values. Begin by assuming that Y can be modeled using a simple linear model and that X and Y have a joint probability density function that is bivariate normal... [Pg.77]

Inferences on the parameter estimates are made in the same way as for a linear model, but the inferences are approximate. Therefore, using a t-test in nonlinear regression to test whether some parameter equals zero or some other value is risky and should be discouraged (Myers, 1986). However, that is not to say that the standard errors of the parameter estimates cannot be used as a model discrimination criterion. Indeed, a model with small standard errors is a better model than a model with large standard errors, all other factors... [Pg.105]
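In practice such approximate standard errors come from the asymptotic covariance matrix of the fit. A minimal sketch under assumed conditions (the exponential model and simulated data below are ours, not the source's):

import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b):
    return a * np.exp(-b * x)        # hypothetical nonlinear model

rng = np.random.default_rng(0)
x = np.linspace(0.0, 5.0, 40)
y = model(x, 2.0, 0.7) + rng.normal(scale=0.05, size=x.size)   # simulated data

popt, pcov = curve_fit(model, x, y, p0=[1.0, 1.0])
se = np.sqrt(np.diag(pcov))          # approximate (asymptotic) standard errors

Comparing se across candidate models, rather than building formal t-tests from it, is the usage the excerpt recommends.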

Thoman, D.R., Bain, L.J., and Antle, C.E. (1969) Inferences on the parameters of the Weibull distribution, Technometrics 11, 445. Used numerical methods to determine m. [Pg.307]

Sometimes, only one of the parameters is of interest to us. We don't want to estimate the other parameters, and we call them "nuisance" parameters. All we want to do is make sure the nuisance parameters don't interfere with our inference on the parameter of interest. Because under the Bayesian approach the joint posterior density is a probability density, while under the likelihood approach the joint likelihood function is not a probability density, the two approaches have different ways of dealing with the nuisance parameters. This is true even if we use independent flat priors, so that the posterior density and likelihood function have the same shape. [Pg.13]

After we have let the chain run a long time, the state the chain is in does not depend on the initial state of the chain. This length of time is called the burn-in period. A draw from the chain after the burn-in time is approximately a random draw from the posterior. However, the sequence of draws from the chain after that time is not a random sample from the posterior; rather, it is a dependent sample. In Chapter 3, we saw how we could do inference on the parameters using a random sample from the posterior. In Section 7.3 we will continue with that approach, using the Markov chain Monte Carlo sample from the posterior. We will have to thin the sample so that we can consider it to be approximately a random sample. A chain with good mixing properties will require a shorter burn-in period and less thinning. [Pg.160]
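For concreteness, here is a minimal random-walk Metropolis sketch (our illustration; the target, step size, and counts are arbitrary) showing where burn-in and thinning enter:

import numpy as np

def metropolis(log_post, x0, n_draws, step=0.5, burn_in=1000, thin=10, seed=0):
    """Random-walk Metropolis: discard the burn-in, then keep every thin-th draw."""
    rng = np.random.default_rng(seed)
    x, lp = x0, log_post(x0)
    kept = []
    for i in range(burn_in + n_draws * thin):
        prop = x + step * rng.normal()            # propose a move
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # accept with Metropolis probability
            x, lp = prop, lp_prop
        if i >= burn_in and (i - burn_in) % thin == 0:
            kept.append(x)                        # approximately independent draws
    return np.array(kept)

# Example: draws whose target "posterior" is a standard normal.
draws = metropolis(lambda t: -0.5 * t * t, x0=0.0, n_draws=2000)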

A switch to double-angle vectors given by Eq. (2.3.12) not only significantly simplifies the treatment of orientation phase transitions in planar systems of nonpolar molecules but also leads to a number of substantial inferences on the nature of the transition. First of all, note that the long-range-order parameter η (vanishing in a disordered phase and equal to unity at T = 0) in a -dimensional space (specified by the orientations of long molecular axes) can be defined as ... [Pg.45]

James et al. (2003) inferred, on the basis of comparison between experimental results and natural data, that upwelling rate is another parameter critical to the interpretation of Li isotope signatures of pore fluids. At low temperatures (<100°C), Li may be lost from sediments to fluids (Chan et al. 1994a). However, enrichment of pore fluids in Li to concentrations greater than that of seawater near basement contacts may more commonly reflect a slow rate of upwelling, as fluid-sediment interaction is thereby favored. This interpretation is consistent with data from a variety of samples from ridge flanks (e.g., Elderfield et al. 1999; Wheat and Mottl 2000). [Pg.178]

Sampling error In surveys, investigators frequently take measurements (or samples) of the parameters of interest, from which inferences about the true but unknown population are drawn. The inability of the sample statistics to represent the true population statistics is called sampling error. There are many reasons why the sample may be inaccurate, from the design of the experiment to the inability of the measuring device. In some cases, the sources of error may be separated (see Variance components). [Pg.182]

Thus, in discussing the influence of the concentration of donors in a semiconductor matrix on the parameters of electronic surface states, we can only infer that there is a certain tendency for the interaction between the metal nanophase and the semiconductor matrix to increase as the matrix doping is lowered. [Pg.169]

From observations and any available prior information, Bayes' theorem infers a probability distribution for the parameters of a postulated model. This posterior distribution tells all that can be inferred about the parameters on the basis of the given information. From this function the most probable parameter values can be calculated, as well as various measures of the precision of the parameter estimation. The same can be done for any quantity predicted by the model. [Pg.77]
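A toy version of this program (ours; the binomial model and flat prior are assumptions for illustration): compute the posterior on a grid as prior times likelihood, then read off the most probable value and a precision measure.

import numpy as np

theta = np.linspace(0.0, 1.0, 1001)          # parameter grid
prior = np.ones_like(theta)                  # flat prior
k, n = 7, 20                                 # made-up data: 7 successes in 20 trials
likelihood = theta**k * (1.0 - theta)**(n - k)
post = prior * likelihood
post /= np.trapz(post, theta)                # normalize to a probability density

theta_map = theta[np.argmax(post)]           # most probable parameter value
post_mean = np.trapz(theta * post, theta)
post_sd = np.sqrt(np.trapz((theta - post_mean)**2 * post, theta))  # precision measure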

For simply producing the magnitude of observed excesses, both dynamic melting models and transport models are viable. The largest uncertainty in either type of model is the appropriate partition coefficients (and D as a function of pressure), as these control the inferences on the porosity through the parameter D/φ. The more stringent tests on these models, however, come from the observed correlations... [Pg.1756]

Up to now it has been assumed that x is fixed and under the control of the experimenter, e.g., the dose of drug given to subjects or the sex of subjects in a study, and that it is of interest to make prediction models for some dependent variable Y or to make inferences on the regression parameters. There are times when x is not fixed but is a random variable, denoted X. An example would be a regression analysis of weight vs. total clearance, or age vs. volume of distribution. In both cases, it is possible for the experimenter to control age or weight, but more than likely these are samples randomly drawn from subjects in the population. [Pg.77]

One might define the function of applied statistics as the art and science of collecting and processing data in order to make inferences about the parameters of one or more populations associated with random phenomena. These inferences are made in such a way that the conclusions reached are consistent and unbiased. When properly applied and executed, statistical procedures depend entirely on the specific methodologies, definitions, and parameters required by the statistical test chosen. [Pg.2241]

The change of knowledge about the future x brought by the observation of the past xs will, in turn, affect the distribution of the output variable Z via the deterministic transfer function given by Equation 1. Girard and Parent (2004) particularly insist on the idea that the Bayesian analyst should focus on inferring the posterior predictive distribution of observable variables rather than the model's parameters θ. As already pointed out by Box (1980), parameter estimation is just the first step of the statistician's work (the inductive phase), which must be followed by the deductive phase of statistical analysis, i.e., coming back from the conceptual world of models to the real problems. [Pg.1701]
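Schematically (our sketch; the Gamma posterior and Poisson sampling model are stand-ins, not from the paper), a posterior predictive draw is made by first drawing a parameter value from the posterior and then drawing a new observable from the sampling model given that value:

import numpy as np

rng = np.random.default_rng(1)

lam_draws = rng.gamma(shape=8.0, scale=0.5, size=5000)  # stand-in posterior draws of a rate
x_pred = rng.poisson(lam_draws)                         # posterior predictive draws of a future count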

Farris and Bauer (1988) introduced an experimental approach to measure the separation energy Γ based on the geometrical configuration under discussion here. Motivated by this approach, the separation energy is expressed as a function of ap in (4.45), rather than the other way around, to anticipate the interpretation of measurements. The material parameters of the film, the film thickness, and the mismatch stress are presumably known on the basis of measurements made separately from any delamination experiments. The radius h is controlled in the experiment, and therefore its value is known. Thus, if ap can be observed in an experiment, a value for Γ can be inferred on the basis of (4.45). [Pg.291]

From the above, it is obvious that the efficient use of the physicochemical parameters in searching reaction databases requires some chemical insight on the part of the user. However, these parameters can become powerful tools when used intelligently, allowing us to draw inferences on the reaction conditions for a desired transformation. [Pg.437]

Bayesian statistics has a single way of dealing with nuisance parameters. Because the joint posterior is a probability density in all dimensions, we can find the marginal densities by integration. Inference about the parameter of interest θ1 is based on the marginal posterior g(θ1 | data), which is found by integrating the nuisance parameter θ2 out of the joint posterior, a process referred to as marginalization:

g(θ1 | data) = ∫ g(θ1, θ2 | data) dθ2 [Pg.15]
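Numerically, marginalization is just this integral carried out over the nuisance dimension, as in the grid sketch below (ours; the joint density is an arbitrary stand-in):

import numpy as np

theta1 = np.linspace(-3.0, 3.0, 201)             # parameter of interest
theta2 = np.linspace(0.1, 5.0, 201)              # nuisance parameter
T1, T2 = np.meshgrid(theta1, theta2, indexing="ij")

joint = np.exp(-0.5 * (T1 / T2) ** 2) / T2       # unnormalized joint posterior (illustrative)
joint /= np.trapz(np.trapz(joint, theta2, axis=1), theta1)  # normalize

marginal = np.trapz(joint, theta2, axis=1)       # g(theta1 | data): theta2 integrated out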

The computational approach to Bayesian statistics allows the posterior to be approached from a completely different direction. Instead of using the computer to calculate the posterior numerically, we use the computer to draw a Monte Carlo sample from the posterior. Fortunately, all we need to know is the shape of the posterior density, which is given by the prior times the likelihood. We do not need to know the scale factor necessary to make it the exact posterior density. These methods replace the very difficult numerical integration with the much easier process of drawing random samples. A Monte Carlo random sample from the posterior will approximate the true posterior when the sample size is large enough. We will base our inferences on the Monte Carlo random sample from the posterior, not from the numerically calculated posterior. Sometimes this approach to Bayesian inference is the only feasible method, particularly when the parameter space is high dimensional. [Pg.26]
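Once such a sample is in hand, inference reduces to sample summaries, as in this small helper (ours, with hypothetical names); applied to the Metropolis draws sketched earlier, it returns the posterior mean and an equal-tailed credible interval.

import numpy as np

def summarize_draws(draws, alpha=0.05):
    """Posterior mean and (1 - alpha)100% equal-tailed credible interval."""
    draws = np.asarray(draws)
    post_mean = draws.mean()
    lo, hi = np.quantile(draws, [alpha / 2.0, 1.0 - alpha / 2.0])
    return post_mean, (lo, hi)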

More realistically, both parameters are unknown, and then the observations would be from a two-dimensional exponential family. Usually we want to do our inference on the mean μ and regard the variance as a nuisance parameter. Using a joint prior,... [Pg.77]

It is obvious that all of these types of inconsistencies can be identified as a logical consequence of contradictory statements. Hence, the use of OWL, its description-logics-based semantics, and the inference types that can be drawn from an OWL model (e.g., whether a concept is satisfiable or whether inconsistencies in the model exist) is an appropriate solution approach. As a consequence, based on the parameter ranges in a feature, we formulate a set of possible feature states by means of an OWL concept. Each implication defined in the requirements truth tables can then be specified as a sub-concept of this feature state; an OWL reasoner can consequently determine whether or not this concept is satisfiable, i.e., whether the requirement is consistent with the feature. If implications are over-specified (see, for instance, the specification of the implication Error = None in Fig. 14.10), the respective implication is defined as the intersection of the existing set of implications. If the set of implications is inconsistent, an OWL reasoner identifies the model as inconsistent, i.e., that there is an inconsistency between the requirements. Finally, test cases are regarded as states of the feature concept and, hence, if contradictions between... [Pg.371]

A Bayesian approach for modal identification provides a fundamental means for processing the information contained in the data to make inference on the modal parameters consistent... [Pg.224]

