
Gaussian random vectors

Once we fix the initial state |ψ_i⟩ and the final state |ψ_f⟩, the optimal field E(t) is obtained by some numerical procedure for appropriate values of the target time T and the penalty factor α. Though there are many possible choices of |ψ_i⟩ and |ψ_f⟩, we only consider the case where they are Gaussian random vectors, defined by... [Pg.439]
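The excerpt's definition is truncated, but a minimal sketch of the usual construction is easy to state: each component is drawn i.i.d. from a standard Gaussian and the vector is normalized so it can serve as a state vector. The normalization and the use of real entries are our assumptions; the dimension 64 matches the numerical example below.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_random_state(n):
    """Gaussian random vector normalized to unit length so it can serve
    as a state vector (normalization is our assumption, not the source's)."""
    v = rng.standard_normal(n)
    return v / np.linalg.norm(v)

psi_i = gaussian_random_state(64)   # initial state |psi_i>
psi_f = gaussian_random_state(64)   # target state  |psi_f>
# Two independent Gaussian random vectors have typical overlap O(1/sqrt(N)):
print(abs(psi_i @ psi_f))
```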

We show two numerical examples for a 64 x 64 random matrix Hamiltonian. One is a relatively short-time case with T = 20 and α = 1, shown in Fig. 1; the other is the case with T = 200 and α = 10, shown in Fig. 2. In both cases, we obtain the optimal field ε(t) after 100 iterations using the Zhu-Botina-Rabitz (ZBR) scheme [13] with ε(t) = 0 as the initial guess for the field. The initial and target states are chosen as Gaussian random vectors, as mentioned above. The final overlaps are J₀ = 0.971 and 0.982, respectively. [Pg.439]

Figure 1. Optimal control between Gaussian random vectors in a 64 x 64 random matrix system by the Zhu-Botina-Rabitz scheme with T = 20 and α = 1. (a) The optimal field after 100 iterations; (b) its power spectrum; (c) the optimal evolution of the squared overlap with the target, |⟨ψ(t)|ψ_f⟩|², with its values near the target time magnified in the inset; (d) the convergence behavior of the overlap J₀ (solid curve) and the functional J (dashed curve) versus the number of iteration steps.
We next examine when and how the analytic optimal field works for a random matrix system (a 256 x 256 GOE random matrix). Figure 9 demonstrates the coarse-grained Rabi oscillation induced by the analytic field, Eq. (45), with k = 3, where smooth oscillations of |⟨φ₀(t)|φ(t)⟩|² and |⟨χ₀(t)|φ(t)⟩|² are observed. The initial and the target states are both Gaussian random vectors with 256 elements. This result shows that the field actually produces the CG Rabi oscillation in the random matrix system. [Pg.454]
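For readers who want to reproduce this setting, here is a hedged sketch of a GOE random matrix Hamiltonian and the propagation of a Gaussian random initial state through it. The 1/sqrt(n) scaling is a common convention and an assumption on our part; only the symmetric-Gaussian structure is essential.

```python
import numpy as np

rng = np.random.default_rng(0)

def goe_matrix(n):
    """Gaussian Orthogonal Ensemble: real symmetric matrix with Gaussian
    entries (the 1/sqrt(n) scaling is an assumed convention)."""
    a = rng.standard_normal((n, n))
    return (a + a.T) / (2.0 * np.sqrt(n))

n = 256
H = goe_matrix(n)
w, U = np.linalg.eigh(H)          # eigenbasis for exact propagation

psi0 = rng.standard_normal(n)
psi0 /= np.linalg.norm(psi0)      # Gaussian random initial state
c = U.T @ psi0

def propagate(t):
    """|psi(t)> = exp(-i H t) |psi(0)>, with hbar = 1."""
    return U @ (np.exp(-1j * w * t) * c)
```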

Figure 10. The target-time dependence of the final overlap J₀ obtained with the analytic optimal field with k = 1. The residual probability 1 − J₀ away from perfect control (J₀ = 1) is depicted for various matrix sizes N of GOE random matrices. The initial and the final states are Gaussian random vectors.
Mean and covariance for a conditional Gaussian random vector. The minimum mean square estimate of a Gaussian random vector, when we only have observations of some of its elements, is the conditional mean of the remaining elements. The error covariance of this estimate is the conditional covariance. Consequently, if Z is a Gaussian random vector composed of sub-vectors x and y, then we may write ... [Pg.180]
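The formulas the truncated excerpt goes on to state are the standard Gaussian conditioning identities: E[x|y] = m_x + Σ_xy Σ_yy⁻¹ (y − m_y) and Cov[x|y] = Σ_xx − Σ_xy Σ_yy⁻¹ Σ_yx. A minimal sketch (the numbers in the usage lines are hypothetical):

```python
import numpy as np

def condition_gaussian(m_x, m_y, S_xx, S_xy, S_yy, y_obs):
    """Conditional mean and covariance of x given y = y_obs, for a jointly
    Gaussian Z = [x; y]; the mean is the minimum mean square estimate."""
    K = S_xy @ np.linalg.inv(S_yy)       # gain applied to the observation residual
    m_cond = m_x + K @ (y_obs - m_y)     # conditional mean of x
    S_cond = S_xx - K @ S_xy.T           # conditional (error) covariance
    return m_cond, S_cond

# Hypothetical 2-D x, 1-D y example:
m, S = condition_gaussian(np.zeros(2), np.zeros(1),
                          np.eye(2), np.array([[0.5], [0.2]]),
                          np.array([[2.0]]), np.array([1.0]))
```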

Linear systems over a Gaussian random vector. If x is a Gaussian vector with mean value m and covariance (so that the minimum mean square error estimate for x is x̂, with x̂ = m), and x enters a formal linear system completed with a zero-mean Gaussian vector v ~ N(0, R), then we have ... [Pg.180]
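A sketch of the resulting moments; the system y = A x + v is our generic reading of "a formal linear system completed with a zero-mean Gaussian vector", and A, P and R are placeholders:

```python
import numpy as np

def linear_system_moments(A, m, P, R):
    """For y = A x + v, with x ~ N(m, P) and v ~ N(0, R) independent,
    y is again Gaussian with the moments computed here."""
    m_y = A @ m                # the mean propagates linearly
    P_y = A @ P @ A.T + R      # the covariance picks up the noise covariance
    return m_y, P_y
```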

Recursive estimation of Gaussian random vectors. We consider here a Gaussian random vector composed of three sub-vectors x, y and z ... [Pg.181]
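The point of recursive estimation is that conditioning on y first and then on z gives the same answer as conditioning on both at once. A numeric sketch of that fact (all sizes and numbers hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def condition(m, S, keep, obs, y):
    """Condition N(m, S) on the components `obs` being equal to y; return
    the conditional mean/covariance of the components in `keep`."""
    K = S[np.ix_(keep, obs)] @ np.linalg.inv(S[np.ix_(obs, obs)])
    return m[keep] + K @ (y - m[obs]), S[np.ix_(keep, keep)] - K @ S[np.ix_(obs, keep)]

A = rng.standard_normal((6, 6))
S = A @ A.T                                # hypothetical joint covariance of [x; y; z]
m = np.zeros(6)
x_i, y_i, z_i = [0, 1], [2, 3], [4, 5]
y_obs, z_obs = rng.standard_normal(2), rng.standard_normal(2)

# One shot: condition x on (y, z) jointly.
m1, S1 = condition(m, S, x_i, y_i + z_i, np.r_[y_obs, z_obs])
# Recursive: condition (x, z) on y, then x on z inside the reduced model.
m_xz, S_xz = condition(m, S, x_i + z_i, y_i, y_obs)
m2, S2 = condition(m_xz, S_xz, [0, 1], [2, 3], z_obs)
assert np.allclose(m1, m2) and np.allclose(S1, S2)   # same answer either way
```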

Since y is a zero-mean Gaussian vector process, both y_j ... up to the Nyquist frequency ω_k, it can be shown that the covariance matrix of the... [Pg.112]

Consider a Gaussian random vector θ with mean θ̄ and covariance matrix Σ, so its joint probability density function (PDF) is given by ... [Pg.257]
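The PDF the excerpt introduces is the standard multivariate normal density, p(θ) = (2π)^(−n/2) |Σ|^(−1/2) exp(−(θ − θ̄)ᵀ Σ⁻¹ (θ − θ̄)/2). A direct sketch of its evaluation:

```python
import numpy as np

def mvn_pdf(theta, mean, cov):
    """Joint PDF of a Gaussian random vector theta ~ N(mean, cov)."""
    n = len(mean)
    d = theta - mean
    quad = d @ np.linalg.solve(cov, d)   # (theta - mean)^T cov^{-1} (theta - mean)
    norm = np.sqrt((2.0 * np.pi) ** n * np.linalg.det(cov))
    return np.exp(-0.5 * quad) / norm
```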

In order to take into account the spatial variability of earthquake-induced ground motions, let us consider an n-DoF structural system subjected to motion at N supports. It follows that the stochastic forcing vector process has to be modeled as a multi-correlated zero-mean Gaussian random vector process ... [Pg.3445]

Since the excitation G of the linear system is Gaussian, it is well known that the response x will be Gaussian as well. Therefore, the matrices C_e and K_e must be chosen such that the difference d is minimized for every stationary Gaussian random vector x. The equivalent linearization technique is thus composed of the following steps ... [Pg.3461]

The vector of errors at time t is e(t). The essential assumption we make is that e(t) is Gaussian and that the errors are not correlated in time. [Pg.268]

Here x_k is the target state vector at time index k, and w_k contains two random variables that describe the unknown process error, which is assumed to be a Gaussian random variable with zero expectation and covariance matrix Q. In addition to the target dynamic model, a measurement equation is needed to implement the Kalman filter. This measurement equation maps the state vector x_k to the measurement domain. In the next section different measurement equations are considered to handle various types of association strategies. [Pg.305]

The vector n_k describes the unknown additive measurement noise, which is assumed, in accordance with Kalman filter theory, to be a Gaussian random variable with zero mean and covariance matrix R. Regarding the additive noise term n_k in equation (20), the errors of the different measurement values are assumed to be statistically independent and identically Gaussian distributed, so... [Pg.307]
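Putting the two equations together, here is a minimal predict/update sketch of the Kalman filter under exactly the stated noise assumptions; the dynamic matrix F and measurement matrix H are hypothetical placeholders for the target dynamic model and the measurement equation:

```python
import numpy as np

def kalman_step(x, P, z, F, Q, H, R):
    """One cycle for x_k = F x_{k-1} + w_k, z_k = H x_k + n_k,
    with w_k ~ N(0, Q) and n_k ~ N(0, R) as assumed in the text."""
    # Predict with the target dynamic model.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update with the measurement.
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```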

An important extension to this real Gaussian random process is its complex generalization, which has particular relevance to applications in optics. Here the column vector V is complex... [Pg.142]
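A sketch of the complex case as it is usually set up in optics: a circularly symmetric complex Gaussian vector. The N(0, 1/2) split between real and imaginary parts is our assumed convention (it gives unit variance per component):

```python
import numpy as np

rng = np.random.default_rng(0)

def complex_gaussian_vector(n):
    """Circularly symmetric complex Gaussian vector: independent N(0, 1/2)
    real and imaginary parts in each component."""
    return (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2.0)
```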

For Gaussian random variables, the second derivatives of the objective function are constant for all θ because the objective function is a quadratic function of θ. Therefore, the Hessian matrix can be computed without obtaining the mean vector θ̄. [Pg.257]

Consider a vector of two Gaussian random variables θ = [θ₁, θ₂]ᵀ with mean θ̄ = [θ̄₁, θ̄₂]ᵀ and covariance matrix Σ. The goal here is to obtain the parametric form of the joint PDF contour that covers an area with a prescribed probability. First, define a vector of two new random variables y = [y₁, y₂]ᵀ by the following transformation ... [Pg.263]
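A sketch of the construction's end result: the contour {θ : (θ − θ̄)ᵀ Σ⁻¹ (θ − θ̄) = r²} encloses probability p when r² is the chi-square quantile with two degrees of freedom, giving an ellipse whose semi-axes follow the eigenstructure of Σ. (Using the chi-square quantile is our completion of the truncated argument.)

```python
import numpy as np
from scipy.stats import chi2

def pdf_contour(mean, cov, prob, num=200):
    """Parametric points of the 2-D Gaussian PDF contour enclosing
    probability `prob`."""
    r2 = chi2.ppf(prob, df=2)              # level of the quadratic form
    vals, vecs = np.linalg.eigh(cov)       # ellipse axes from the eigenstructure
    phi = np.linspace(0.0, 2.0 * np.pi, num)
    circle = np.vstack([np.cos(phi), np.sin(phi)])
    # Scale the unit circle by the semi-axes, rotate, and shift to the mean:
    return mean[:, None] + vecs @ (np.sqrt(vals * r2)[:, None] * circle)
```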

Gaussian distribution of random vector x with mean μ and covariance matrix Σ

These transformations are in general nonlinear and are obtained by applying Rosenblatt's or Nataf's transformations, respectively (Huang & Du 2006). They are linear only if the random vector x is jointly Gaussian distributed. By the transformation u = T(x), the Performance Function (PF) or Limit State Function (LSF) g_x(·) defined in the physical space (Section 1) can also be transformed into g_u(·) in the standard normal space ... [Pg.682]
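In the jointly Gaussian special case the transformation is linear, and one concrete choice (ours; any square root of the covariance works) is the Cholesky factor:

```python
import numpy as np

def to_standard_normal(x, mean, cov):
    """Linear map of jointly Gaussian x ~ N(mean, cov) to u ~ N(0, I):
    u = L^{-1}(x - mean), with cov = L L^T (Cholesky factor)."""
    L = np.linalg.cholesky(cov)
    return np.linalg.solve(L, x - mean)
```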

Schematic diagrams of two receiver front ends that can be used to compute the coordinates of the data vector are shown in Fig. 12.60. The first is called the correlator implementation, and the second is called the matched filter implementation. Because the noise components are linear transformations of a Gaussian random process, they are also Gaussian, and can be shown to have zero means and covariances...
Another example of a model-based system is the one proposed by Ephraim and Malah (1984), in which the spectral components of speech and noise in a given vector are modeled as statistically independent Gaussian random variables. The variance of a given spectral component of the speech signal varies from frame to frame and is estimated from the noisy signal. Under these assumptions, the minimum MSE estimate of the short-time spectral amplitude is derived. This approach outperforms spectral subtraction, as it provides enhanced signals with similar clarity but with drastically reduced musical noise. [Pg.1469]

Rather academic is the problem of reconciliation when rigorously applying the maximum likelihood principle to a general (non-Gaussian) probability distribution. One then assumes a given (twice continuously differentiable) probability density for the random (vector) variable e of measurement errors and solves the problem (9.1.12) [Pg.393]

The reader may recall the well-known normal (Gaussian) distribution of a random variable. As a simple example, let the random vector variable X take its values (x₁, x₂) in two-dimensional space, and let us consider the (so-called)... [Pg.589]

The mean squared end-to-end distance of a linear unperturbed chain of N monomers is proportional to N. As R is a sum of a large number of independent random vectors, the probability density for the end-to-end separation vector to have a certain value R is given by a Gaussian distribution ... [Pg.11]
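A quick numerical check of this central-limit statement, under a hypothetical freely jointed chain model (unit bond length b, random bond directions): the simulated mean squared end-to-end distance should approach N b².

```python
import numpy as np

rng = np.random.default_rng(0)

def end_to_end(n_monomers, n_chains, b=1.0):
    """End-to-end vectors R of freely jointed chains: each R is a sum of
    n_monomers independent random bond vectors of length b."""
    steps = rng.standard_normal((n_chains, n_monomers, 3))
    steps *= b / np.linalg.norm(steps, axis=2, keepdims=True)  # unit bonds
    return steps.sum(axis=1)

R = end_to_end(1000, 2000)
print(np.mean(np.sum(R**2, axis=1)))   # ~ N b^2 = 1000, since <R^2> is prop. to N
```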

Mutation operators use a single parent to produce a child. Lattice mutation applies a strain matrix with zero-mean Gaussian random strains to the lattice vectors; soft-mode mutation (which we call softmutation for brevity from now on) displaces atoms along the softest-mode eigenvectors, or a random linear combination of the softest eigenvectors; the permutation operator swaps the chemical identities of atoms in randomly selected pairs of unlike atoms. [Pg.223]
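A hedged sketch of the lattice mutation operator as described: a symmetric strain matrix with zero-mean Gaussian entries applied to the 3x3 matrix of lattice vectors. The strain amplitude sigma is a hypothetical parameter; the source does not specify one.

```python
import numpy as np

rng = np.random.default_rng(0)

def lattice_mutation(lattice, sigma=0.05):
    """Apply zero-mean Gaussian random strains to the lattice vectors
    (rows of `lattice`); sigma is a hypothetical strain amplitude."""
    eps = sigma * rng.standard_normal((3, 3))
    strain = np.eye(3) + 0.5 * (eps + eps.T)   # symmetrized random strain
    return lattice @ strain.T                  # each row a_i -> (I + eps) a_i
```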

