Big Chemical Encyclopedia


Function single-variable

For functions of a single variable (e.g., energy, momentum or time) the projector is simply the Heaviside step function θ(x), or a combination thereof. When also replacing x, k by the variables , t, the Fourier transform in Eq. (5) is given by... [Pg.112]

From basic calculus, it is known that a function of a single variable is analytic at a given interval if and only if it has well-defined derivatives, to any order, at any point in that interval. In the same way, a function of several variables is analytic in a region if at any point in this region, in addition to having well-defined derivatives for all variables to any order, the result of the differentiation with respect to any two different variables does not depend on the order of the differentiation. [Pg.718]

Figure B-3 Maxima, Minima, and Inflection Points of a Function of a Single Variable...
The case N = 2 is of greatest interest. Since the force is central, it is not necessary to use r1 and r2 as variables. The single variable r12 is sufficient since the position of the center of mass is irrelevant. Thus, we have the radial distribution function (RDF), g(r12). [Pg.138]

A good place to start our study is with a function of a single variable, f(x). Consider the function... [Pg.234]

The golden section search is the optimization analog of a binary search. It is used for functions of a single variable, F(a). It is faster than a random search, but the difference in computing time will be trivial unless the objective function is extremely hard to evaluate. [Pg.207]
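As an illustrative sketch (the objective function and bracketing interval below are assumed examples, not taken from the text), a golden section search narrows a bracket around the minimum of a unimodal function, reusing one interior evaluation per iteration:

```python
import math

def golden_section_minimize(f, a, b, tol=1e-8):
    """Minimize a unimodal function f on [a, b] by golden section search."""
    invphi = (math.sqrt(5) - 1) / 2  # 1/phi ~ 0.618
    x1 = b - invphi * (b - a)
    x2 = a + invphi * (b - a)
    f1, f2 = f(x1), f(x2)
    while b - a > tol:
        if f1 < f2:          # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = b - invphi * (b - a)
            f1 = f(x1)
        else:                # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + invphi * (b - a)
            f2 = f(x2)
    return (a + b) / 2

# Assumed example: minimum of (x - 2)^2 + 1 on [0, 5]
xmin = golden_section_minimize(lambda x: (x - 2)**2 + 1, 0.0, 5.0)
```

Each iteration shrinks the bracket by the constant factor 0.618 at the cost of only one new function evaluation, which is what makes the method economical when evaluations are expensive.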

A set of complete orthonormal functions ψi(x) of a single variable x may be regarded as the basis vectors of a linear vector space of either finite or infinite dimensions, depending on whether the complete set contains a finite or infinite number of members. The situation is analogous to three-dimensional cartesian space formed by three orthogonal unit vectors. In quantum mechanics we usually (see Section 7.2 for an exception) encounter complete sets with an infinite number of members and, therefore, are usually concerned with linear vector spaces of infinite dimensionality. Such a linear vector space is called a Hilbert space. The functions ψi(x) used as the basis vectors may constitute a discrete set or a continuous set. While a vector space composed of a discrete set of basis vectors is easier to visualize (even if the space is of infinite dimensionality) than one composed of a continuous set, there is no mathematical reason to exclude continuous basis vectors from the concept of Hilbert space. In Dirac notation, the basis vectors in Hilbert space are called ket vectors or just kets and are represented by the symbol |ψi⟩ or sometimes simply by |i⟩. These ket vectors determine a ket space. [Pg.80]
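Orthonormality of a discrete basis set can be checked numerically. As a sketch (the basis √2 sin(nπx) on [0, 1] is an assumed example, not the text's), the Gram matrix ⟨ψm|ψn⟩ should approximate the identity:

```python
import math

def inner(f, g, n_pts=2000):
    """Approximate <f|g> = integral of f(x) g(x) over [0, 1] by the trapezoidal rule."""
    h = 1.0 / n_pts
    s = 0.5 * (f(0.0) * g(0.0) + f(1.0) * g(1.0))
    for k in range(1, n_pts):
        x = k * h
        s += f(x) * g(x)
    return s * h

def phi(n):
    """n-th member of the orthonormal set sqrt(2) sin(n pi x) on [0, 1]."""
    return lambda x: math.sqrt(2) * math.sin(n * math.pi * x)

# Gram matrix <phi_m|phi_n>; should be close to the 3x3 identity
gram = [[inner(phi(m), phi(n)) for n in (1, 2, 3)] for m in (1, 2, 3)]
```

The same check extends to any finite subset of a discrete basis; continuous bases require Dirac-delta normalization instead.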

Methods based on linear projection exploit the linear relationship among inputs by projecting them on a linear hyperplane before applying the basis function (see Fig. 6a). Thus, the inputs are transformed in combination as a linear weighted sum to form the latent variables. Univariate input analysis is a special case of this category where the single variable is projected on itself. [Pg.11]

The nature of the relationships and constraints in most design problems is such that the use of analytical methods is not feasible. In these circumstances search methods, which require only that the objective function can be computed from arbitrary values of the independent variables, are used. For single variable problems, where the objective function is unimodal, the simplest approach is to calculate the value of the objective function at uniformly spaced values of the variable until a maximum (or minimum) value is obtained. Though this method is not the most efficient, it will not require excessive computing time for simple problems. Several more efficient search techniques have been developed, such as the method of the golden section; see Boas (1963b) and Edgar and Himmelblau (2001). [Pg.28]

In addition to the programs to select the optimum discussed previously, graphic approaches are also available and graphic output is provided by a plotter from computer tapes. The output includes plots of a given response as a function of a single variable (Fig. 11) or as a function of all five variables (Fig. 12). The abscissa for both types is produced in experimental units, rather than physical units, so that it extends from −1.547 to +1.547 (see Table 5). Use of the experimental units allows the superposition of the single plots (see Fig. 11) to obtain the composite plots (see Fig. 12). [Pg.618]

A more sophisticated method for optimization of a single variable is Newton's method, which exploits first and second derivatives of the objective function. Newton's method starts by supposing that the following equation needs to be solved... [Pg.38]
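A minimal sketch of the idea (the objective and its hand-coded derivatives below are assumed examples): Newton's method locates a stationary point by solving f'(x) = 0 with the iteration x ← x − f'(x)/f''(x).

```python
def newton_minimize(fprime, fsecond, x0, tol=1e-10, max_iter=50):
    """Find a stationary point of f by Newton's method applied to f'(x) = 0."""
    x = x0
    for _ in range(max_iter):
        step = fprime(x) / fsecond(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Assumed example: f(x) = x^4 - 3x^2, so f'(x) = 4x^3 - 6x and f''(x) = 12x^2 - 6.
# Starting near x = 1 converges to the minimum at x = sqrt(3/2).
xstar = newton_minimize(lambda x: 4*x**3 - 6*x,
                        lambda x: 12*x**2 - 6,
                        x0=1.0)
```

Convergence is quadratic near the solution, but a starting point on the wrong side of an inflection (where f'' changes sign) can send the iteration to a maximum or cause divergence, which is the practical caveat behind the method.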

The method of steepest descent uses only first-order derivatives to determine the search direction. Alternatively, Newton's method for single-variable optimization can be adapted to carry out multivariable optimization, taking advantage of both first- and second-order derivatives to obtain better search directions. However, second-order derivatives must be evaluated, either analytically or numerically, and multimodal functions can make the method unstable. Therefore, while this method is potentially very powerful, it also has some practical difficulties. [Pg.40]
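A hedged sketch of steepest descent (the quadratic objective, its gradient, and the fixed step length are assumed for illustration; practical codes choose the step by a line search):

```python
def steepest_descent(grad, x0, alpha=0.1, tol=1e-8, max_iter=10000):
    """Minimize a multivariable function using only its gradient.
    The search direction is -grad; alpha is a fixed step length."""
    x = list(x0)
    for _ in range(max_iter):
        g = grad(x)
        if max(abs(gi) for gi in g) < tol:   # stop when the gradient vanishes
            break
        x = [xi - alpha * gi for xi, gi in zip(x, g)]
    return x

# Assumed example: f(x, y) = (x - 1)^2 + 4(y + 2)^2,
# with gradient (2(x - 1), 8(y + 2)); minimum at (1, -2).
xmin = steepest_descent(lambda v: [2*(v[0] - 1), 8*(v[1] + 2)], [0.0, 0.0])
```

Only first derivatives are needed, which is the method's appeal; the price is slow, zig-zagging progress on elongated contours, which is what motivates the Newton-type corrections discussed above.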

Thus far in this chapter, functions of only a single variable have been considered. However, a function may depend on several independent variables. For example, z = f(x, y), where x and y are independent variables. If one of these variables, say y, is held constant the function depends only on x. Then, the derivative can be found by application of the methods developed in this chapter. In this case the derivative is called the partial derivative of z with respect to x, which is represented by ∂z/∂x or ∂f/∂x. The partial derivative with respect to y is analogous. The same principle can be applied to implicit functions of several independent variables by the method developed in Section 2.5. Clearly, the notion of partial derivatives can be extended to functions of any number of independent variables. However, it must be remembered that when differentiating with respect to a given independent variable, all others are held constant. [Pg.234]
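The "hold all other variables constant" rule translates directly into a numerical estimate. As a sketch (the test function z = x²y is an assumed example), a central difference perturbs only the i-th coordinate:

```python
def partial_derivative(f, point, i, h=1e-6):
    """Central-difference estimate of the partial derivative of f with
    respect to coordinate i, all other coordinates held constant."""
    p_plus = list(point)
    p_minus = list(point)
    p_plus[i] += h
    p_minus[i] -= h
    return (f(p_plus) - f(p_minus)) / (2 * h)

# Assumed example: z = f(x, y) = x^2 * y, so dz/dx = 2xy and dz/dy = x^2.
f = lambda p: p[0]**2 * p[1]
dz_dx = partial_derivative(f, [3.0, 2.0], 0)   # analytic value: 2*3*2 = 12
dz_dy = partial_derivative(f, [3.0, 2.0], 1)   # analytic value: 3^2 = 9
```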

The interpretation of these results has not been completely clarified. A plausible starting point might be to assume that in the steady state, all concentrations and fluxes in the crystal are functions of the single variable ζ = x − v_et t, where x is position relative to the original (t = 0) crystal surface, t is the time since the start of etching, and v_et is the etch velocity. At any ζ, the diffusion-drift flux J of hydrogen must obey... [Pg.310]

Graphical presentation of data assists in determining the form of the function of a single variable (or two variables). The response y versus the independent variable x can be plotted and the resulting form of the model evaluated visually. Figure 2.4 shows experimental heat transfer data plotted on log-log coordinates. The plot... [Pg.49]
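A straight line on log-log coordinates signals a power-law model y = a·x^b, since log y = log a + b·log x. As a sketch (the coefficients below are chosen for illustration only, not read from Figure 2.4), the slope and intercept can be recovered by linear least squares on the transformed data:

```python
import math

def power_law_fit(xs, ys):
    """Fit y = a * x**b by linear least squares on log-log coordinates."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
         / sum((u - mx)**2 for u in lx))       # slope on log-log plot
    a = math.exp(my - b * mx)                  # intercept, transformed back
    return a, b

# Assumed noise-free data generated from y = 0.023 * x**0.8
xs = [1e4, 2e4, 5e4, 1e5]
ys = [0.023 * x**0.8 for x in xs]
a, b = power_law_fit(xs, ys)
```

With real (noisy) data the fitted slope b is read off as the exponent of the model, which is exactly the visual judgment the log-log plot supports.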

Functions of a single variable and their corresponding trajectories. (Continues)... [Pg.52]

We define the property of continuity as follows. A function of a single variable x is continuous at a point x0 if f(x0) is defined and the limit of f(x) as x approaches x0 exists and equals f(x0). [Pg.114]

Figure 4.16 illustrates the character of f(x) if the objective function is a function of a single variable. Usually we are concerned with finding the minimum or maximum of a multivariable function f(x). The problem can be interpreted geometrically as finding the point in an n-dimensional space at which the function has an extremum. Examine Figure 4.17 in which the contours of a function of two variables are displayed. [Pg.135]

If both first and second derivatives vanish at the stationary point, then further analysis is required to evaluate the nature of the function. For functions of a single variable, take successively higher derivatives and evaluate them at the stationary point. Continue this procedure until one of the higher derivatives is not zero (the nth one); hence f′(x*), f″(x*), ..., f^(n−1)(x*) all vanish. Two cases must be analyzed: [Pg.138]
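The two cases are: n even, a minimum or maximum according to the sign of f^(n)(x*); n odd, an inflection point. As a sketch (the functions x⁴ and x³ with hand-coded derivatives are assumed examples), the test can be mechanized:

```python
def classify_stationary_point(derivs, xstar, tol=1e-12):
    """derivs[k] evaluates the (k+1)-th derivative of f. Scan for the first
    order n at which the derivative does not vanish at xstar, then apply
    the higher-derivative test: n even -> extremum, n odd -> inflection."""
    for k, d in enumerate(derivs):
        value = d(xstar)
        if abs(value) > tol:
            n = k + 1
            if n % 2 == 1:
                return "inflection"
            return "minimum" if value > 0 else "maximum"
    return "undetermined"

# f(x) = x^4 at x* = 0: f', f'', f''' all vanish; f'''' = 24 > 0, n = 4 even
result = classify_stationary_point(
    [lambda x: 4*x**3, lambda x: 12*x**2, lambda x: 24*x, lambda x: 24.0], 0.0)
```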

As an example consider the following function of a single variable x (see Figure 5.1). [Pg.153]

One method of optimization for a function of a single variable is to set up as fine a grid as you wish for the values of x and calculate the function value for every point on the grid. An approximation to the optimum is the best value of f(x). Although this is not a very efficient method for finding the optimum, it can yield acceptable results. On the other hand, if we were to utilize this approach in optimizing a multivariable function of more than, say, five variables, the computer time is quite likely to become prohibitive, and the accuracy is usually not satisfactory. [Pg.155]
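The grid idea can be sketched in a few lines (the objective and interval are assumed examples); the accuracy is limited to the grid spacing, and the cost grows as n^d in d dimensions, which is the prohibitive scaling noted above:

```python
def grid_search_minimize(f, a, b, n=10001):
    """Evaluate f at n uniformly spaced points on [a, b] and keep the best."""
    h = (b - a) / (n - 1)
    best_x, best_f = a, f(a)
    for k in range(1, n):
        x = a + k * h
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

# Assumed example: minimum of (x - 0.3)^2 on [0, 1]; error bounded by h
x, fx = grid_search_minimize(lambda t: (t - 0.3)**2, 0.0, 1.0)
```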

In optimization of a function of a single variable, we recognize (as for general multivariable problems) that there is no substitute for a good first guess for the starting point in the search. Insight into the problem as well as previous experience... [Pg.156]

Quasi-Newton methods may seem crude, but they work well in practice. The order of convergence is (1 + √5)/2 ≈ 1.6 for a single variable. Their convergence is slightly slower than a properly chosen finite difference Newton method, but they are usually more efficient in terms of total function evaluations to achieve a specified accuracy (see Dennis and Schnabel, 1983, Chapter 2). [Pg.161]
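For a single variable the quasi-Newton iteration is the secant method: the derivative in Newton's step is replaced by a slope built from the two most recent iterates, giving the (1 + √5)/2 convergence order quoted above. A sketch (the objective f(x) = x² + e^(−x) is an assumed example):

```python
import math

def secant_root(g, x0, x1, tol=1e-12, max_iter=100):
    """Secant method for g(x) = 0: Newton's step with the derivative
    approximated by the finite-difference slope of the last two iterates."""
    g0, g1 = g(x0), g(x1)
    for _ in range(max_iter):
        if abs(g1 - g0) < 1e-300:   # guard against a flat secant
            break
        x2 = x1 - g1 * (x1 - x0) / (g1 - g0)
        x0, g0, x1, g1 = x1, g1, x2, g(x2)
        if abs(x1 - x0) < tol:
            break
    return x1

# Assumed example: minimize f(x) = x^2 + exp(-x) by solving
# f'(x) = 2x - exp(-x) = 0; no second derivative is ever evaluated.
root = secant_root(lambda x: 2*x - math.exp(-x), 0.0, 1.0)
```

Each iteration costs one new evaluation of g, versus two (g and a finite-difference perturbation) for a finite difference Newton step, which is the efficiency trade-off described in the excerpt.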

