Variables mathematical definition

No single method or algorithm of optimization exists that can be applied efficiently to all problems. The method chosen for any particular case will depend primarily on (1) the character of the objective function, (2) the nature of the constraints, and (3) the number of independent and dependent variables. Table 8-6 summarizes the six general steps for the analysis and solution of optimization problems (Edgar and Himmelblau, Optimization of Chemical Processes, McGraw-Hill, New York, 1988). You do not have to follow the cited order exactly, but you should cover all of the steps eventually. Shortcuts in the procedure are allowable, and the easy steps can be performed first. Steps 1, 2, and 3 deal with the mathematical definition of the problem: identification of variables, specification of the objective function, and statement of the constraints. If the process to be optimized is very complex, it may be necessary to reformulate the problem so that it can be solved with reasonable effort. Later in this section, we discuss the development of mathematical models for the process and the objective function (the economic model). [Pg.742]

Equation 41-A3 can be checked by expanding the last term, collecting terms, and verifying that all the terms of equation 41-A2 are regenerated. The third term in equation 41-A3 is a quantity called the covariance between A and B. The covariance is a quantity related to the correlation coefficient. Since the differences from the mean are randomly positive and negative, the product of the two differences from their respective means is also randomly positive and negative, and the products tend to cancel when summed. Therefore, for independent random variables the covariance is zero, since the correlation coefficient is zero for uncorrelated variables. In fact, the mathematical definition of uncorrelated is that this sum-of-cross-products term is zero. Therefore, since A and B are random, uncorrelated variables ... [Pg.232]
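As a minimal numerical sketch of that statement (illustrative data only, not the A and B of equation 41-A3), the mean cross-product of deviations from the means for two independently generated random variables tends toward zero as the sample grows:

```python
# Sketch: for independent random variables the sum-of-cross-products term
# (the covariance) -- and hence the correlation coefficient -- is close to zero.
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(loc=5.0, scale=1.0, size=100_000)
b = rng.normal(loc=2.0, scale=3.0, size=100_000)

cov_ab = np.mean((a - a.mean()) * (b - b.mean()))   # sample covariance
corr_ab = cov_ab / (a.std() * b.std())              # correlation coefficient

print(f"cov(A,B)  = {cov_ab:+.4f}")    # close to 0
print(f"corr(A,B) = {corr_ab:+.4f}")   # close to 0
```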

Steps 1, 2, and 3 deal with the mathematical definition of the problem, that is, identification of variables, specification of the objective function, and statement of the constraints. We devote considerable attention to problem formulation in the remainder of this chapter, as well as in Chapters 2 and 3. If the process to be optimized is very complex, it may be necessary to reformulate the problem so that it can be solved with reasonable effort. [Pg.18]
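A minimal sketch of what steps 1-3 look like in practice, using a hypothetical two-variable objective and a single inequality constraint (none of which are taken from the cited table):

```python
# Sketch of problem formulation: identify variables x = (x1, x2), specify an
# objective (a hypothetical operating-cost model), and state the constraints.
import numpy as np
from scipy.optimize import minimize

def objective(x):
    x1, x2 = x
    return (x1 - 3.0)**2 + (x2 - 2.0)**2 + 0.5 * x1 * x2   # hypothetical cost

constraints = [{"type": "ineq", "fun": lambda x: 10.0 - (x[0] + x[1])}]  # x1 + x2 <= 10
bounds = [(0.0, None), (0.0, None)]                                      # nonnegative variables

result = minimize(objective, x0=np.array([1.0, 1.0]), bounds=bounds,
                  constraints=constraints, method="SLSQP")
print(result.x, result.fun)
```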

Many theories developed in this book are expressed by equations or results involving continuous functions for example, the spatially variable concentration c(r). Materials systems are fundamentally discrete and do not have an inherent continuous structure from which continuous functions can be constructed. Whereas the composition at a particular point can be understood both intuitively and as an abstract quantity, a rigorous mathematical definition of a suitable composition function is not straightforward. Moreover, using a continuous position vector f in conjunction with a crystalline system having discrete atomic positions may lead to confusion. [Pg.7]

This expression serves as a precise mathematical definition of temperature. It is interesting to note that temperature, a variable with which we have intuitive and sensory familiarity, is defined based on entropy, one with which we may be less familiar. In fact, we shall see that entropy and temperature are intimately related in the concept of free energy, in which temperature determines the relative importances of energy and entropy in driving thermodynamic processes. [Pg.287]

Mathematically, a correlation is a measure of the relation between two or more quantitative variables. A linear correlation is characterized by a slope and an intercept, and the closeness of the relationship is measured in terms of a coefficient of correlation. From a biopharmaceutical standpoint, correlation simply means relationships observed between parameters derived from in vitro and in vivo studies, irrespective of the mathematical definition of the term. [Pg.2062]
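A small illustrative sketch of the purely mathematical sense of the term, with hypothetical in vitro and in vivo fractions: the linear correlation is summarized by a slope, an intercept, and the coefficient r.

```python
# Sketch: slope, intercept, and correlation coefficient of a linear relationship
# between hypothetical in vitro and in vivo data.
import numpy as np

frac_dissolved = np.array([0.10, 0.25, 0.40, 0.60, 0.80, 0.95])   # in vitro (hypothetical)
frac_absorbed  = np.array([0.08, 0.22, 0.43, 0.58, 0.83, 0.97])   # in vivo  (hypothetical)

slope, intercept = np.polyfit(frac_dissolved, frac_absorbed, deg=1)
r = np.corrcoef(frac_dissolved, frac_absorbed)[0, 1]

print(f"slope = {slope:.3f}, intercept = {intercept:.3f}, r = {r:.4f}")
```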

Continuous functions are ones in which the dependent variable changes smoothly and continuously for smooth and continuous changes of the independent variables. Figures 2.1 and 2.2 represent continuous functions, but Figure 2.3 represents a function which is continuous for x ≠ a but shows a discontinuity from −∞ to +∞ at x = a. The mathematical definition of continuity is that f(x) is continuous at x = a if f(a) is defined and if lim x→a f(x) = f(a). [Pg.9]
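A short numerical sketch of that definition (hypothetical functions): f(x) = x² + 1 satisfies the limit condition at x = a, while g(x) = 1/(x − a) does not.

```python
# Sketch of the continuity test at x = a: f is continuous there because the values
# approaching from both sides tend to f(a); g diverges and so is discontinuous.
import numpy as np

a = 1.0
f = lambda x: x**2 + 1.0          # continuous everywhere
g = lambda x: 1.0 / (x - a)       # discontinuous at x = a

for h in (1e-2, 1e-4, 1e-6):
    print(f"h={h:.0e}  f(a-h)={f(a-h):.6f}  f(a+h)={f(a+h):.6f}  -> f(a)={f(a):.6f}")
    print(f"          g(a-h)={g(a-h):+.3e}  g(a+h)={g(a+h):+.3e}  -> diverges")
```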

Optimal control problems in optimization involve vector decision variables, such as the technological and socio-economic profiles to be determined for sustainability. They involve an integral objective function, and the underlying model is a differential algebraic system. Shastri and Diwekar [24] presented a mathematical definition of the sustainability hypothesis proposed by Cabezas and Fath [16] based on FI. They assumed a system with n species and calculated the time-averaged FI using Eq. [8.16]. [Pg.195]

The definition of entropy ultimately brings us to an idea that we call the second law of thermodynamics: any spontaneous change occurs with a concurrent increase in the entropy of the universe. The mathematical definition of entropy, in terms of the change in heat for a reversible process, allows us to derive many mathematical expressions we can use to calculate the entropy change for a physical or chemical process. The concept of order brings us to what we call the third law of thermodynamics: that the absolute entropy of a perfect crystal at absolute zero is exactly zero. We can therefore speak of absolute entropies of materials at temperatures other than 0 K. Entropy becomes—and will remain—the only thermodynamic state function for a system that we can know absolutely. (Contrast this with state variables like p, V, T, and n, whose values we can also know absolutely.)... [Pg.96]
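A minimal worked instance of that mathematical definition (a standard textbook case, not taken from the cited page) is the reversible isothermal expansion of an ideal gas:

```latex
dS = \frac{\delta q_{\mathrm{rev}}}{T},
\qquad
\Delta S = \int_{1}^{2} \frac{\delta q_{\mathrm{rev}}}{T}
         = nR \ln\frac{V_2}{V_1}
\quad \text{(ideal gas, constant } T\text{)}
```

For one mole doubling its volume at constant temperature, ΔS = R ln 2 ≈ 5.76 J K⁻¹.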

Autocorrelation Coefficient n The autocovariance normalized by the product of the standard deviations of the two sections from the single random variable sequence used to calculate the autocovariance. In other words, the autocorrelation coefficient is the cross-correlation coefficient of two subsequences of the same random variable. It is probably the most commonly used measure of the correlation between two sections of a single random variable sequence. It is often simply, but incorrectly, referred to as the autocorrelation, which is the un-normalized expectation value of the product of the two sequence sections. The autocorrelation coefficient of the two subsequences of a random variable, X, is often denoted by ρXX(i, T), where i is the starting index of the second section and T is the length of the sections. The precise mathematical definition of the autocorrelation coefficient of two random variable sequence sections is given by ... [Pg.969]
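A minimal sketch of that computation, under the assumption that the two sections are x[0:T] and x[i:i+T] of the same sequence (the notation is inferred from the entry, not quoted from it):

```python
# Sketch: correlation coefficient of two length-T sections of one sequence,
# the second section starting at index i.
import numpy as np

def autocorrelation_coefficient(x, i, T):
    """Correlation coefficient of x[0:T] with x[i:i+T]."""
    s1 = np.array(x[:T], dtype=float)        # copies, so the input is untouched
    s2 = np.array(x[i:i + T], dtype=float)
    s1 -= s1.mean()
    s2 -= s2.mean()
    return np.sum(s1 * s2) / (np.sqrt(np.sum(s1**2)) * np.sqrt(np.sum(s2**2)))

rng = np.random.default_rng(1)
white = rng.normal(size=5_000)                               # uncorrelated sequence
smooth = np.convolve(white, np.ones(50) / 50.0, mode="valid")  # strongly correlated sequence

print(autocorrelation_coefficient(white, i=10, T=1_000))    # near 0 for white noise
print(autocorrelation_coefficient(smooth, i=10, T=1_000))   # close to 1 for the smooth series
```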

Covariance n A basic measure of the association between two random variables. It is a quantification of how much the two variables change together. The covariance of two random variables, Xj and X2, is often denoted by Cov(Xi, X2). The precise mathematical definition of covariance of two random... [Pg.977]

Expectation Value Also referred to as expectation, expected value, or mathematical expectation. In simple terms the expectation value for a function of a random variable is the expected average value of the function over a large number of samples. The precise mathematical definition is for a continuous random variable, X,... [Pg.981]
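A small sketch of the sampling interpretation, using a case where the exact expectation is known (E[X²] = μ² + σ² for a normal random variable):

```python
# Sketch: the expectation of f(X) approximated by the average of f over many samples.
import numpy as np

mu, sigma = 2.0, 0.5
rng = np.random.default_rng(2)
x = rng.normal(mu, sigma, size=1_000_000)

print(np.mean(x**2))        # sample estimate of E[X^2]
print(mu**2 + sigma**2)     # exact value, 4.25
```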

Kurtosis n A primary measure of shape, or descriptive measure, which describes the peakedness of a probability distribution, population, or sample. It is ideally the 4th normalized central moment (or, by many authors, the 4th normalized central moment minus 3, which is also called the excess kurtosis or simply the excess) and is often denoted as κ or γ2. The precise mathematical definition of kurtosis for a random variable, X, defined on a probability space, S, is given by ... [Pg.985]
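A brief sketch contrasting the plain 4th normalized central moment with the excess kurtosis; note that SciPy's kurtosis reports the excess form by default (fisher=True):

```python
# Sketch: kurtosis of a normal sample (~3, excess ~0) versus a heavier-tailed
# Laplace sample (excess ~3).
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(3)
normal_sample  = rng.normal(size=1_000_000)
laplace_sample = rng.laplace(size=1_000_000)

print(kurtosis(normal_sample))                 # ~ 0   (excess kurtosis)
print(kurtosis(normal_sample, fisher=False))   # ~ 3   (plain 4th normalized central moment)
print(kurtosis(laplace_sample))                # ~ 3   (excess kurtosis)
```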

Standard Deviation n A measure of the spread or dispersion of a random variable, probability distribution, population, or sample from the mean. It is defined as the positive square root of the variance and is one of the most used descriptive measures of a random variable, probability distribution, population, or sample. The variance of a random variable is defined as the second central moment of the random variable and is often denoted by σ², where σ is the standard deviation. The precise mathematical definition of the standard... [Pg.997]
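A minimal sketch of the definition as the positive square root of the second central moment, checked against the library routines:

```python
# Sketch: variance as the second central moment; standard deviation as its square root.
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(loc=10.0, scale=2.0, size=1_000_000)

variance = np.mean((x - x.mean())**2)   # second central moment
sigma = np.sqrt(variance)               # standard deviation

print(variance, sigma)        # ~ 4.0, ~ 2.0
print(np.var(x), np.std(x))   # library equivalents
```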

Note that in spite of the mathematical definitions cited, detection limits are rather nebulous quantities. Because they depend on many variables, an uncertainty of a factor of 2 to 3 in the values can be anticipated. They can vary significantly between various manufacturers' instrumentation and are especially sensitive to different modes of sample introduction. They can also be modified by the optimization for the determination of specific elements. When performing multielement analyses, a compromise of optimization must be tolerated. This compromise usually results in the achievement of optimal detection limits for only a few elements, with the remainder often being a factor of 2 to 3 times their optimized values. Also, because detection limits are so dependent on operating parameters, it is prudent to frequently (i.e., with each batch of samples analyzed) compute detection limits in order to reliably report ultra-trace concentration levels. Care must be taken not to report too many significant figures when stating detection limits, so as to be consistent with the probability level selected in the computation. Typical published detection limits for various types of instrumentation are tabulated in Table 10.1. [Pg.152]
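As a hedged sketch of one common convention for such a batch-wise computation (the 3·s_blank/slope criterion; actual instrument software may define the detection limit differently), with hypothetical blank readings and calibration slope:

```python
# Sketch, assuming the common 3*s_blank/slope convention; blank signals and
# calibration slope are hypothetical.  Rounding keeps only the significant
# figures the estimate supports.
import numpy as np

blank_signals = np.array([0.012, 0.015, 0.011, 0.014, 0.013, 0.016, 0.012])  # hypothetical
calibration_slope = 0.85   # signal per (ug/L), hypothetical

s_blank = np.std(blank_signals, ddof=1)
detection_limit = 3.0 * s_blank / calibration_slope

print(f"detection limit ~ {detection_limit:.2g} ug/L")   # ~2 significant figures
```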

In this book, we always choose to use the local concentration difference at a particular position in the column. Such a choice implies a local mass transfer coefficient to distinguish it from an average mass transfer coefficient. Use of a local coefficient means that we often must make a few extra mathematical calculations. However, the local coefficient is more nearly constant, a smooth function of changes in other process variables. This definition was implicitly used in Examples 8.1-1, 8.1-3, and 8.1-4 in the previous section. It was used in parallel with a type of average coefficient in Example 8.1-2. [Pg.245]

The natural laws in any scientific or technological field are not regarded as precise and definitive until they have been expressed in mathematical form. Such a form, often an equation, is a relation between the quantity of interest, say, product yield, and independent variables such as time and temperature upon which yield depends. When it happens that this equation involves, besides the function itself, one or more of its derivatives it is called a differential equation. [Pg.453]
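A minimal sketch of such an equation and its solution, for a hypothetical first-order approach of product yield y toward completion, dy/dt = k(1 − y):

```python
# Sketch: a differential equation relating yield to time, solved numerically and
# compared with its closed-form solution y(t) = 1 - exp(-k t).
import numpy as np
from scipy.integrate import solve_ivp

k = 0.3   # hypothetical rate constant, 1/min

sol = solve_ivp(lambda t, y: k * (1.0 - y), t_span=(0.0, 20.0), y0=[0.0],
                t_eval=np.linspace(0.0, 20.0, 5))
analytic = 1.0 - np.exp(-k * sol.t)

print(sol.y[0])    # numerical solution at the requested times
print(analytic)    # closed-form solution for comparison
```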

The changeover to thermodynamic activities is equivalent to a change of variables in mathematical equations. The relation between the parameters μ and a is unambiguous only when a definite value has been selected for the constant μ°. For solutes this constant is selected so that in highly dilute solutions, where the system approaches an ideal state, the activity will coincide with the concentration (lim... [Pg.39]

Let us first introduce some important definitions with the help of some simple mathematical concepts. Critical aspects of the evolution of a geological system, e.g., the mantle, the ocean, the Phanerozoic clastic sediments, ..., can often be adequately described with a limited set of geochemical variables. These variables, which are typically concentrations, concentration ratios, and isotope compositions, evolve in response to change in some parameters, such as the volume of continental crust or the release of carbon dioxide into the atmosphere. We assume that one such variable, which we label f, is a function of time and other geochemical parameters. The rate of change in f per unit time can be written... [Pg.344]
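One standard way of writing that rate is the chain-rule expansion below, where the p_i stand generically for the parameters on which f depends (the symbols are illustrative, not those of the cited equation):

```latex
\frac{\mathrm{d}f}{\mathrm{d}t}
  = \frac{\partial f}{\partial t}
  + \sum_{i} \frac{\partial f}{\partial p_i}\,\frac{\mathrm{d}p_i}{\mathrm{d}t}
```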

The interpretation of the elements of the matrix θ is slightly more subtle, as they represent the derivatives of unknown functions fi(x) with respect to the variables x at the point x° = 1. Nevertheless, an interpretation of these parameters is possible and does not rely on explicit knowledge of the detailed functional form of the rate equations. Note that the definition corresponds to the scaled elasticity coefficients of Metabolic Control Analysis, and the interpretation is reminiscent of the interpretation of the power-law coefficients of Section VII.C: each element of the matrix θ measures the normalized degree of saturation, or likewise the effective kinetic order, of a reaction v with respect to a substrate Si at the metabolic state S°. Importantly, the interpretation of the elements of θ again does not hinge upon any specific mathematical representation of specific... [Pg.192]
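A small sketch of that interpretation for the textbook Michaelis-Menten rate law (not the model of the cited chapter): the scaled elasticity (S/v)(∂v/∂S) equals Km/(Km + S), i.e., an effective kinetic order near 1 far from saturation and near 0 at full saturation.

```python
# Sketch: scaled elasticity (degree of saturation / effective kinetic order) of a
# Michaelis-Menten rate with respect to its substrate, numeric vs. analytic.
import numpy as np

Vmax, Km = 1.0, 0.5   # hypothetical kinetic constants

def v(S):
    return Vmax * S / (Km + S)

def scaled_elasticity(S, h=1e-6):
    return (S / v(S)) * (v(S + h) - v(S - h)) / (2.0 * h)

for S in (0.01, 0.5, 5.0, 50.0):
    print(S, scaled_elasticity(S), Km / (Km + S))   # both columns agree
```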

Depending on the aim of data analysis, different mathematical criteria are applied for the definition of latent variables ... [Pg.65]
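One common such criterion is maximum explained variance, as in principal component analysis; a minimal sketch with synthetic data (illustrative only):

```python
# Sketch: latent variables defined by the maximum-variance criterion (PCA).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
scores = rng.normal(size=(200, 2)) @ np.array([[2.0, 0.0], [0.0, 0.5]])
X = scores @ rng.normal(size=(2, 6))   # 200 samples, 6 correlated variables
X = X - X.mean(axis=0)                 # mean-center, as usual in chemometrics

pca = PCA(n_components=2).fit(X)
print(pca.explained_variance_ratio_)   # variance captured by each latent variable
```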

Parameter Two distinct definitions for parameter are used. In the first usage (preferred), parameter refers to the constants characterizing the probability density function or cumulative distribution function of a random variable. For example, if the random variable W is known to be normally distributed with mean μ and standard deviation σ, the constants μ and σ are called parameters. In the second usage, a parameter can be a constant or an independent variable in a mathematical equation or model. For example, in the equation Z = X + 2Y, the independent variables (X, Y) and the constant (2) are all parameters. [Pg.181]

