Big Chemical Encyclopedia


Linear yield vectors

We use the yield information from the rigorous model in Table 5.21 to construct the LP yield vectors. The BASE vector is the average of the yields in each RON case. We choose the average N-I-2A value of 64 to compute Δx. We then use one of the N-I-2A data points to compute the DELTA-BASE vector. We show the steps and results of this calculation for the RON 102 case in Table 5.22. We compare the results of the linear yield vector predictions and the model predictions for an intermediate N-I-2A value of 66.6 in Table 5.23. Table 5.24 shows the DELTA-BASE vectors calculated for all the RON cases. [Pg.306]
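A delta-base prediction of this kind is a first-order update around the BASE point and can be sketched as follows. The component names and numbers here are illustrative placeholders, not values from Table 5.21:

```python
# Delta-base linear yield prediction: yield_i(x) ≈ BASE_i + DELTA_i * (x - x_base).
# Component names and all numbers below are hypothetical, for illustration only.
def predict_yields(base, delta, x, x_base):
    """Predict component yields at feed property x from BASE and DELTA-BASE vectors."""
    return {c: base[c] + delta[c] * (x - x_base) for c in base}

base = {"gasoline": 0.45, "LPG": 0.12}       # hypothetical BASE vector (wt fractions)
delta = {"gasoline": 0.004, "LPG": -0.001}   # hypothetical DELTA-BASE per unit N-I-2A
pred = predict_yields(base, delta, x=66.6, x_base=64.0)
```

The prediction is exact at the BASE point and linear in the distance from it, which is what makes comparison against the rigorous model at intermediate points (as in Table 5.23) a meaningful test of the vector.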

Since B depends on the choice of the linearly independent vectors used to form Φ, all possible combinations must be explored in order to determine if one of them satisfies (5.96) and (5.97). Any set of linearly independent columns of Φ that yields a matrix B satisfying (5.96) and (5.97) will be referred to hereinafter as a mixture-fraction basis. [Pg.184]

We start from Eq. (6) and restrict ourselves to the linear case of weakly textured systems, so that we can use a linear approach similar to that of [5]. The corresponding equation for the local easy axis is n(r) = (1 − a²(r)/2) ez + a(r), where a(r) is the transverse vector component of n. Linearization yields [5]... [Pg.76]

It is to be noticed that all linear transformations obtained by contracting all but one index of a given tensor yield vectors which all have the structure of a gradient. All that is needed is therefore a general gradient formulation which allows for all kinds of spin combinations and which does not assume Hermitian symmetry or permutation symmetry in the two-electron integrals. [Pg.82]

Substitution into the linearized output vector (Equation 6.5) yields... [Pg.113]

Let u be a vector-valued stochastic variable with dimension D × 1 and with covariance matrix Ru of size D × D. The key idea is to linearly transform all observation vectors, ui, to new variables, zi = W ui, and then solve the optimization problem (1) where we replace ui by zi. We choose the transformation so that the covariance matrix of z is diagonal and (more importantly) none of its eigenvalues is too close to zero. (Loosely speaking, the eigenvalues close to zero are those responsible for the large variance of the OLS solution.) In order to find the desired transformation, a singular value decomposition of Ru is performed, yielding... [Pg.888]

Minimizing the square of the gradient vector under a normalization condition on c yields the following linear system of equations... [Pg.2338]

An alternative procedure is the dynamic programming method of Bellman (1957), which is based on the principle of optimality and the imbedding approach. The principle of optimality yields the Hamilton-Jacobi partial differential equation, whose solution results in an optimal control policy. Euler-Lagrange and Pontryagin's equations are applicable to systems with non-linear, time-varying state equations and non-quadratic, time-varying performance criteria. The Hamilton-Jacobi equation is usually solved for the important and special case of the linear time-invariant plant with a quadratic performance criterion (called the performance index), which takes the form of the matrix Riccati (1724) equation. This produces an optimal control law as a linear function of the state vector components which is always stable, provided the system is controllable. [Pg.272]
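The linear-quadratic case can be sketched numerically in a discrete-time analogue: iterating the matrix Riccati difference equation to a fixed point and forming the optimal state-feedback gain. The plant matrices below are an illustrative double integrator, not a system from the source:

```python
import numpy as np

# Sketch: LQR gain for a linear time-invariant plant x_{k+1} = A x_k + B u_k with
# cost sum(x'Qx + u'Ru), via fixed-point iteration of the discrete Riccati equation.
def dlqr(A, B, Q, R, iters=500):
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # feedback gain
        P = Q + A.T @ P @ (A - B @ K)                       # Riccati update
    return K, P

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # discretized double integrator (illustrative)
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])
K, P = dlqr(A, B, Q, R)
# Optimal control law u = -K x is linear in the state, and the closed loop
# A - B K is stable because (A, B) is controllable.
```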

Solution of the above linear equation yields the least squares estimates of the parameter vector, k, ... [Pg.28]

In summary, at each iteration of the estimation method we compute the model output, y(xi; k(j)), and the sensitivity coefficients, Gi, for each data point i = 1, ..., N, which are used to set up matrix A and vector b. Subsequent solution of the linear equation yields Δk(j+1) and hence k(j+1). [Pg.53]
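One such Gauss-Newton iteration can be sketched as follows. The exponential model, data, and all names here are illustrative assumptions, not the model of the source text:

```python
import numpy as np

# Sketch of one Gauss-Newton iteration for nonlinear least squares:
# build the normal equations A Δk = b from the sensitivity matrix and residuals.
def gauss_newton_step(k, x, y_meas, model, sens):
    G = sens(x, k)                # N x p sensitivity matrix (dy/dk)
    r = y_meas - model(x, k)      # residuals at the current estimate
    A = G.T @ G                   # normal-equations matrix
    b = G.T @ r
    dk = np.linalg.solve(A, b)    # Δk(j+1)
    return k + dk                 # k(j+1)

# Illustrative model: y = k1 * exp(k2 * x), fitted to noiseless synthetic data.
model = lambda x, k: k[0] * np.exp(k[1] * x)
sens = lambda x, k: np.column_stack([np.exp(k[1] * x),
                                     k[0] * x * np.exp(k[1] * x)])
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.0 * x)        # generated with k = (2.0, -1.0)
k = np.array([1.5, -0.5])         # starting estimate
for _ in range(30):
    k = gauss_newton_step(k, x, y, model, sens)
```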

Again, let us assume that an estimate k(j) of the unknown parameters is available at the j-th iteration. Linearization of the output vector around k(j) and retaining first-order terms yields... [Pg.85]

The above equation represents a set of p nonlinear equations which can be solved to obtain k. Kalogerakis and Luus (1983b) showed that when linearization of the output vector around the trajectory x(j)(t) is used, the quasilinearization computational algorithm and the Gauss-Newton method yield the same results. [Pg.114]

Following the same approach as in Chapter 6 for ODE models, we linearize the output vector around the current estimate of the parameter vector, k(j), to yield... [Pg.169]

At this point we can summarize the steps required to implement the Gauss-Newton method for PDE models. At each iteration, given the current estimate of the parameters, k(j), we obtain w(t,z) and G(t,z) by solving numerically the state and sensitivity partial differential equations. Using these values we compute the model output, y(ti; k(j)), and the output sensitivity matrix, (∂y/∂k)ᵀ, for each data point i = 1, ..., N. Subsequently, these are used to set up matrix A and vector b. Solution of the linear equation yields Δk(j+1) and hence k(j+1) is obtained. The bisection rule to yield an acceptable step-size at each iteration of the Gauss-Newton method should also be used. [Pg.172]

In analogy to using a linear combination of atomic orbitals to form MOs, a variational procedure is used to construct many-electron wavefunctions from a set of N Slater determinants Φi, i.e. one sets up an N × N matrix of elements Hij = (Φi, H Φj) which, upon diagonalization, yields state energies and associated vectors of coefficients ai used to define Ψ as a linear combination of the Φi's... [Pg.241]
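The diagonalization step can be sketched numerically. The symmetric matrix below is an arbitrary stand-in for the Hamiltonian matrix Hij, not a real molecular example:

```python
import numpy as np

# Sketch: variational diagonalization of a Hamiltonian matrix H_ij.
# The 3x3 symmetric matrix here is illustrative, not a real Hamiltonian.
H = np.array([[-1.0,  0.2,  0.0],
              [ 0.2, -0.5,  0.1],
              [ 0.0,  0.1,  0.3]])

energies, coeffs = np.linalg.eigh(H)   # eigenvalues ascending; columns are eigenvectors
# State m has energy energies[m] and wavefunction Psi_m = sum_i coeffs[i, m] * Phi_i;
# the lowest eigenvalue is the variational ground-state energy in this basis.
```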

Once a mixture-fraction basis has been found, the linear transformation that yields the mixture-fraction vector is... [Pg.184]

Applying the same procedure to higher-dimensional mixture-fraction vectors yields expressions of the same form as (6.130). Note also that for any set of bounded scalars that can be linearly transformed to a mixture-fraction vector, (6.115) can be used to find the corresponding joint conditional scalar dissipation rate matrix starting from (e% C). [Pg.302]


© 2024 chempedia.info