
Vector algebra

I have assumed that the reader has no prior knowledge of concepts specific to computational chemistry, but has a working understanding of introductory quantum mechanics and elementary mathematics, especially linear algebra and vector, differential, and integral calculus. The following features specific to chemistry are used in the present book without further introduction. Adequate descriptions may be found in a number of quantum chemistry textbooks (J. P. Lowe, Quantum Chemistry, Academic Press, 1993; I. N. Levine, Quantum Chemistry, Prentice Hall, 1992; P. W. Atkins, Molecular Quantum Mechanics, Oxford University Press, 1983). [Pg.444]

Uncharged reaction components are transported by diffusion and convection, even though their migration fluxes are zero. The total flux density Jj of species j is the algebraic (vector) sum of the densities of all flux types, and the overall equation for mass balance must be written not as Eq. (4.1) but as... [Pg.20]
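As a point of reference for the passage above, the total flux density of a dissolved species is commonly written as the sum of diffusion, migration, and convection contributions in the standard Nernst–Planck form; this is offered only as an illustrative sketch, not necessarily the book's Eq. (4.1):

$$
\mathbf{J}_j = -D_j \nabla c_j - \frac{z_j F}{RT} D_j c_j \nabla \phi + c_j \mathbf{v},
$$

where $D_j$ is the diffusion coefficient, $z_j$ the charge number, $c_j$ the concentration, $\phi$ the electric potential, and $\mathbf{v}$ the convective velocity; for an uncharged species ($z_j = 0$) the migration term drops out, as stated above.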

The derivative of a response function in the direction perpendicular to its response surface is the algebraic vector called the gradient of the response function at the observed point. At each point in the domain of the response function this vector is perpendicular to the response surface of constant value passing through that point, and its direction corresponds to the fastest change of the response function value. This is why movement in the direction of the gradient of a response function leads to the optimum in the shortest possible way. [Pg.386]
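To make the geometric picture concrete, the short sketch below (illustrative code with an assumed quadratic response surface, not taken from the source) follows the gradient by steepest ascent; each step moves perpendicular to the contours of constant response and heads toward the optimum.

```python
import numpy as np

def response(x):
    # Hypothetical two-factor response surface with its optimum at (2, -1)
    return -((x[0] - 2.0) ** 2 + 2.0 * (x[1] + 1.0) ** 2)

def gradient(x):
    # Analytic gradient of the response surface above
    return np.array([-2.0 * (x[0] - 2.0), -4.0 * (x[1] + 1.0)])

x = np.array([0.0, 0.0])   # starting point in the factor domain
step = 0.1                 # step size along the gradient direction
for _ in range(200):
    g = gradient(x)
    if np.linalg.norm(g) < 1e-8:   # gradient vanishes at the optimum
        break
    x = x + step * g               # move in the direction of steepest ascent

print(x)   # close to the optimum (2, -1)
```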

Strictly speaking, these vectors should be called geometric vectors since they do not, in all cases, satisfy the properties of algebraic vectors (e.g., algebraic vectors satisfy the axioms of a linear vector space, namely that the addition of two vectors or the multiplication of a vector by a scalar should result in another vector that also lies in the space). Nevertheless, the term vector, which is common in chemical informatics, will be used here to include both classes of vectors. [Pg.17]

The effective nuclear kinetic energy operator due to the vector potential is formulated by multiplying the adiabatic eigenfunction of the system, ψ(R, r), by the HLH phase exp[(i/2) arctan(r/R)], and operating with T(R, r), as defined in Eq. (1), on the product function; after a little algebraic simplification, one can obtain the following effective kinetic energy operator. [Pg.45]

Now, we recall the remarkable result of [72] that if the adiabatic electronic set in Eq. (90) is complete (N = ∞), then the curl condition is satisfied and the YM field is zero, except at points of singularity of the vector potential. (An algebraic proof can be found in Appendix 1 in [72]. An alternative derivation, as well as an extension, is given below.) Suppose now that we have a (pure) gauge g(R) that satisfies the following two conditions ... [Pg.149]

At the limit of Knudsen streaming the flux relations (5.25) determine the fluxes explicitly in terms of partial pressure gradients, but the general flux relations (5.4) are implicit in the fluxes and their solution does not have an algebraically simple explicit form for an arbitrary number of components. It is therefore important to identify the few cases in which reasonably compact explicit solutions can be obtained. For a binary mixture, simultaneous solution of the two flux equations (5.4) is straightforward, and the result is important because most experimental work on flow and diffusion in porous media has been confined to pure substances or binary mixtures. The flux vectors are found to be given by... [Pg.42]

The existence of these simple algebraic relations enormously simplifies the problem of solving the implicit flux equations, since (11.3) permits all the flux vectors to be expressed in terms of any one of them. From equations (11.1), clearly... [Pg.113]

The rules of matrix-vector multiplication show that the matrix form is the same as the algebraic form, Eq. (5-25)... [Pg.138]
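A minimal numerical illustration of that equivalence (generic numbers, not the book's Eq. (5-25)): the matrix-vector product reproduces exactly the linear combinations one would write out algebraically, component by component.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.5, 3.0]])
x = np.array([4.0, -1.0])

# Matrix form: y = A x
y_matrix = A @ x

# Algebraic form: each component of y written out term by term
y_algebraic = np.array([A[0, 0] * x[0] + A[0, 1] * x[1],
                        A[1, 0] * x[0] + A[1, 1] * x[1]])

print(np.allclose(y_matrix, y_algebraic))   # True
```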

As illustrated above, any p² configuration gives rise to ³P, ¹D, and ¹S levels, which contain nine, five, and one state, respectively. The use of L and S angular momentum algebra tools allows one to identify the wavefunctions corresponding to these states. As shown in detail in Appendix G, in the event that spin-orbit coupling causes the Hamiltonian, H, not to commute with L or with S but only with their vector sum J = L + ... [Pg.258]
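As a quick check on the state count quoted above, the sketch below (illustrative, not from the source) enumerates the microstates of a p² configuration by distributing two electrons over the six p spin-orbitals; the total of 15 agrees with 9 (³P) + 5 (¹D) + 1 (¹S).

```python
from itertools import combinations

# The six p spin-orbitals: (ml, ms) with ml = -1, 0, +1 and ms = +1/2 or -1/2
spin_orbitals = [(ml, ms) for ml in (-1, 0, 1) for ms in (0.5, -0.5)]

# Two electrons, Pauli principle: choose two distinct spin-orbitals
microstates = list(combinations(spin_orbitals, 2))
print(len(microstates))   # 15 = 9 (3P) + 5 (1D) + 1 (1S)
```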

Rules of matrix algebra can be applied to the manipulation and interpretation of data in this type of matrix format. One of the most basic operations that can be performed is to plot the samples in variable-by-variable plots. When the number of variables is as small as two, it is a simple and familiar matter to construct and analyze the plot. But if the number of variables exceeds two or three, it is obviously impractical to try to interpret the data using simple bivariate plots. Pattern recognition provides computer tools far superior to bivariate plots for understanding the data structure in the n-dimensional vector space. [Pg.417]

This matrix approach is valuable in that the configuration of a plant at any time is represented by a vector whose elements give the status of the components: a 1 for operable and a 0 for non-operable. A more refined and developed method is Boolean algebra. [Pg.36]

Matrix and tensor notation is useful when dealing with systems of equations. Matrix theory is a straightforward set of operations for linear algebra and is covered in Section A.1. Tensor notation, treated in Section A.2, is a classification scheme in which the complexity ranges upward from scalars (zero-order tensors) and vectors (first-order tensors) through second-order tensors and beyond. [Pg.467]

I assume that you are familiar with the elementary ideas of vectors and vector algebra. Thus if a point P has position vector r (I will use bold letters to denote vectors) then we can write r in terms of the unit Cartesian vectors e_x, e_y and e_z as ... [Pg.4]
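A minimal numerical version of that decomposition (hypothetical coordinates, not the book's example): the position vector r is built as a linear combination of the Cartesian unit vectors.

```python
import numpy as np

# Unit Cartesian vectors e_x, e_y, e_z
e_x = np.array([1.0, 0.0, 0.0])
e_y = np.array([0.0, 1.0, 0.0])
e_z = np.array([0.0, 0.0, 1.0])

x, y, z = 1.5, -2.0, 0.5          # hypothetical coordinates of the point P
r = x * e_x + y * e_y + z * e_z   # r = x e_x + y e_y + z e_z

print(r)   # [ 1.5 -2.   0.5]
```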

In elementary algebra, a linear function of the coordinates x_i of a variable vector x = (x_1, x_2, ..., x_n) of the finite-dimensional vector space V = V(P) is a polynomial function of the special form... [Pg.220]
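For orientation, a linear function of this kind is usually written as a homogeneous first-degree polynomial in the coordinates; the expression below is a standard textbook form given only as an illustration, not a reconstruction of the elided equation:

$$
l(\mathbf{x}) = a_1 x_1 + a_2 x_2 + \cdots + a_n x_n ,
$$

with fixed coefficients $a_i$ taken from the field $P$.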

The Linear Algebraic Problem.—Familiarity with the basic theory of finite vectors and matrices—the notions of rank and linear dependence, the Cayley-Hamilton theorem, the Jordan normal form, orthogonality, and related principles—will be presupposed. In this section and the next, matrices will generally be represented by capital letters, column vectors by lower case English letters, and scalars, except for indices and dimensions, by lower case Greek letters. The vectors a, b, x, y, ... will have elements α_i, β_i, ξ_i, η_i, ...; the matrices A, B, ... [Pg.53]

Just as a known root of an algebraic equation can be divided out, and the equation reduced to one of lower order, so a known root and the vector belonging to it can be used to reduce the matrix to one of lower order whose roots are the yet unknown roots. In principle this can be continued until the matrix reduces to a scalar, which is the last remaining root. The process is known as deflation. Quite generally, in fact, let P be a matrix of, say, p linearly independent columns such that each column of AP is a linear combination of columns of P itself. In particular, this will be true if the columns of P are characteristic vectors. Then... [Pg.71]
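A minimal numerical sketch of the idea (illustrative data; this uses Hotelling's deflation for a symmetric matrix, which zeroes out the known root rather than explicitly shrinking the matrix, so it is a variant of the scheme described above): once a root and its normalized characteristic vector are known, subtracting λ v vᵀ removes that root, and the remaining roots can be computed from the deflated matrix.

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])

# Suppose the dominant root and its normalized characteristic vector are known
eigvals, eigvecs = np.linalg.eigh(A)
lam, v = eigvals[-1], eigvecs[:, -1]

# Hotelling deflation: subtract lam * v v^T to remove the known root
A_deflated = A - lam * np.outer(v, v)

# The deflated matrix carries the remaining root, with a zero replacing lam
print(np.round(np.linalg.eigvalsh(A_deflated), 6))
```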

Within explicit schemes the computational effort to obtain the solution at the new time step is very small: the main effort lies in a multiplication of the old solution vector with the coefficient matrix. In contrast, implicit schemes require the solution of an algebraic system of equations to obtain the new solution vector. However, the major disadvantage of explicit schemes is their instability [84]. The term stability is defined via the behavior of the numerical solution for t → ∞. A numerical method is regarded as stable if the approximate solution remains bounded for t → ∞, given that the exact solution is also bounded. Explicit time-step schemes tend to become unstable when the time step size exceeds a certain value (an example of a stability limit for PDE solvers is the von Neumann criterion [85]). In contrast, implicit methods are usually stable. [Pg.156]
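The stability contrast is easy to see on the scalar test problem du/dt = -k u (an illustrative sketch with assumed parameters, not code from the source): the explicit Euler update diverges once the step size exceeds 2/k, while the implicit Euler update, which solves a (here trivial) algebraic equation at every step, stays bounded for any step size.

```python
k = 10.0      # decay constant of the test problem du/dt = -k * u
dt = 0.3      # time step above the explicit stability limit 2/k = 0.2
n_steps = 50

u_explicit = 1.0
u_implicit = 1.0
for _ in range(n_steps):
    # Explicit Euler: new value computed from the old solution only
    u_explicit = u_explicit + dt * (-k * u_explicit)
    # Implicit Euler: solve u_new = u_old + dt * (-k * u_new) for u_new
    u_implicit = u_implicit / (1.0 + dt * k)

print(u_explicit)   # huge in magnitude: unstable for dt > 2/k
print(u_implicit)   # essentially zero: stable decay
```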

Eq. (122) represents a set of algebraic constraints for the vector of species concentrations expressing the fact that the fast reactions are in equilibrium. The introduction of constraints reduces the number of degrees of freedom of the problem, which now exclusively lie in the subspace of slow reactions. In such a way the fast degrees of freedom have been eliminated, and the problem is now much better suited for numerical solution methods. It has been shown that, depending on the specific problem to be solved, the use of simplified kinetic models allows one to reduce the computational time by two to three orders of magnitude [161],... [Pg.221]
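A toy version of that reduction (hypothetical species and rate constants, only a sketch of the general idea): for a fast equilibrium A ⇌ B followed by a slow reaction B → C, the fast reaction is replaced by the algebraic constraint c_B = K c_A, and only the single slow degree of freedom (the lumped pool c_A + c_B) is integrated in time.

```python
K = 5.0        # equilibrium constant of the fast reaction A <=> B
k_slow = 0.1   # rate constant of the slow reaction B -> C
dt = 0.01

pool = 1.0     # lumped concentration c_A + c_B, the single slow variable
c_C = 0.0
for _ in range(1000):
    # Algebraic constraint from the fast equilibrium: c_B = K * c_A
    c_A = pool / (1.0 + K)
    c_B = K * c_A
    # The slow reaction drains the pool and produces C
    rate = k_slow * c_B
    pool -= rate * dt
    c_C += rate * dt

print(c_A, c_B, c_C)   # the pool decays while C accumulates
```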

Note that the algebraic signs of the columns in U and V are arbitrary, as they have been computed independently. In the above illustration, we have chosen the signs so as to be in agreement with the theoretical result. This problem does not occur in practical situations, when appropriate algorithms are used for singular value decomposition. [Pg.42]
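The sign indeterminacy is easy to demonstrate numerically (illustrative data; numpy's sign convention need not match any particular textbook table): flipping the sign of a column of U together with the corresponding row of Vᵀ leaves the reconstructed matrix unchanged.

```python
import numpy as np

X = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])

U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Flip the signs of the first left and right singular vectors together
U2 = U.copy();   U2[:, 0] *= -1.0
Vt2 = Vt.copy(); Vt2[0, :] *= -1.0

# Both sign choices reproduce X exactly
print(np.allclose(U @ np.diag(s) @ Vt, X))     # True
print(np.allclose(U2 @ np.diag(s) @ Vt2, X))   # True
```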

Most of the algebra of vectors and matrices that is used in this chapter has been explained in Chapters 9 and 29. Small discrepancies between the tabulated values in the examples and their exact values may arise from rounding of intermediate results. [Pg.88]

In the previous section we have developed principal components analysis (PCA) from the fundamental theorem of singular value decomposition (SVD). In particular we have shown by means of eq. (31.1) how an n×p rectangular data matrix X can be decomposed into an n×r orthonormal matrix of row-latent vectors U, a p×r orthonormal matrix of column-latent vectors V and an r×r diagonal matrix of latent values Λ. Now we focus on the geometrical interpretation of this algebraic decomposition. [Pg.104]
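A compact numerical companion to that decomposition (illustrative data; note that numpy returns Vᵀ rather than V, and no column-centering is applied here): the scores of the row-points along the latent axes are X V = U Λ, which is the geometric reading of the factorization.

```python
import numpy as np

# Small n x p data matrix (n = 4 samples, p = 3 variables), illustrative only
X = np.array([[1.0, 0.5, 0.2],
              [0.8, 1.1, 0.4],
              [0.3, 0.9, 1.2],
              [1.5, 0.2, 0.7]])

U, lam, Vt = np.linalg.svd(X, full_matrices=False)
V = Vt.T   # column-latent vectors, one per column of V

# Geometric view: coordinates (scores) of the row-points along the latent axes
scores = X @ V
print(np.allclose(scores, U * lam))            # True: X V = U * Lambda

# The full product U diag(lam) V' reproduces the data matrix
print(np.allclose(U @ np.diag(lam) @ Vt, X))   # True
```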


See other pages where Vector algebra is mentioned: [Pg.8]    [Pg.341]    [Pg.633]    [Pg.638]    [Pg.57]    [Pg.213]    [Pg.119]    [Pg.152]    [Pg.43]    [Pg.253]    [Pg.201]    [Pg.106]    [Pg.141]    [Pg.67]    [Pg.66]    [Pg.220]    [Pg.9]    [Pg.10]    [Pg.40]    [Pg.92]





Algebra, diagonal vector spaces

Algebraic Vector and Tensor Operations

Analytic Geometry Part 2 - Geometric Representation of Vectors and Algebraic Operations

Elementary Vector Algebra

Linear algebra orthogonal vectors

Three-Dimensional Vector Algebra

Unit-vector algebra

Vector algebra angle

Vector algebra basis vectors

Vector algebra equality

Vector algebra length

Vector algebra linear combination

Vector algebra orthonormality

Vector algebra scalar product

Vector and Matrix Algebra

Vector operators, 50 algebra

Vector operators, 50 algebra matrix representation

Vector operators, 50 algebra properties
