# Nyquist theorem

The objective of this chapter is to discuss both the crystalline and the glassy states of polymers. As noted above, only those polymers with the prerequisite order of molecular microstructure can crystallize, but all undergo the glass transition of whatever material remains amorphous at Tg. Thus we conclude this section on structure by observing that the glassy state lacks long-range order, although small domains of short-range order may be larger and last longer for glasses than for liquids because of the lower level of disruptive thermal energy below Tg. Within the glassy state there may be interspersed crystalline patches, but we shall take the position that the glassy state has essentially the same level of order, or lack of order, as the liquid state. With the morphology of the glassy state thus dismissed, we next turn to the question of how the glassy state does differ from the liquid state. [c.244]

The next theorem provides additional smoothness of the solution as compared to Theorem 3.4, provided that there is no contact between the two plates in a neighbourhood of a fixed point of the crack. [c.191]

The analogue-to-digital converter (ADC) samples the fluctuating voltage produced in the coils of the probe at regular time intervals, storing each value as a binary encoded integer. The rate at which the ADC must sample the voltage is defined by the Nyquist theorem, which states that the sampling rate must be greater than or equal to twice the signal frequency. The maximum speed of the digitizer determines the maximum observable spectral width; the number of bits used in storing each data point and the number of bits in each computer word determine the dynamic range of intensities that can be observed. [c.401]
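The two hardware limits described above can be put into a minimal sketch. The function names and the 100 kHz / 16-bit example values are illustrative assumptions, not from the text:

```python
def max_spectral_width(sampling_rate_hz: float) -> float:
    """Nyquist limit: the highest signal frequency observable at a given sampling rate."""
    return sampling_rate_hz / 2.0

def adc_dynamic_range(n_bits: int) -> int:
    """Ratio of the largest to the smallest nonzero magnitude a signed
    n-bit integer can store -- the dynamic range set by the word length."""
    return 2 ** (n_bits - 1)

# A digitizer sampling at 100 kHz can cover a 50 kHz spectral width:
print(max_spectral_width(100_000))   # 50000.0
# A 16-bit ADC spans intensities over a ratio of 32768:
print(adc_dynamic_range(16))         # 32768
```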

Nyquist theorem: statement that a periodic signal must be sampled at least twice each period to avoid a determinate error in measuring its frequency. (p. 184) [c.775]

A typical active clamp can be seen in Figure 4-7 along with its effect upon the ac waveform. During turn-off, a discharged clamp capacitor is placed in parallel with the power switch. The capacitor then forms a resonant tank circuit with the primary inductance plus its leakage inductance. The capacitor's voltage then increases until it reaches the value of the reflected transformer voltage. The series MOSFET clamp switch must be turned off prior to the next turn-on period of the power switch. The clamp switch must then be turned back on just prior to the next turn-off of the power switch, with sufficient time to allow the capacitor to completely discharge. It is important not to allow the clamp capacitor to be electrically attached to the drain during its turn-on transition. The cycle is then repeated. [c.148]
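The ringing frequency of the clamp tank described above follows the usual LC relation f = 1/(2π√(LC)). A small sketch, with purely illustrative component values (the text gives none):

```python
import math

def resonant_frequency(l_henries: float, c_farads: float) -> float:
    """Resonant frequency f = 1 / (2*pi*sqrt(L*C)) of the clamp tank."""
    return 1.0 / (2.0 * math.pi * math.sqrt(l_henries * c_farads))

# Hypothetical values: 500 uH primary-plus-leakage inductance, 10 nF clamp capacitor.
f = resonant_frequency(500e-6, 10e-9)
print(f)                          # roughly 71 kHz
half_period = 1.0 / (2.0 * f)     # time for one half-cycle of the ring
```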

Accordingly, the next term in the Taylor series [c.481]

In carbohydrate chemistry the term is restricted to sugars or their derivatives which differ only in the orientation of the groups attached to the carbon atom next to the potential aldehyde group of the sugar or to the corresponding group in the sugar derivative. Thus D-gluconic acid, [c.160]

Let us assume that the stress gradient in the axial direction is present but smooth. Then we can use a perturbation method and expand the solution of equation (30) in a series. The first term of this expansion will be a solution of the plane strain problem, and the potential N will be equal to zero. The next terms of the stress components will also contain the potential N. [c.138]

Before entering the detailed discussion of physical and chemical adsorption in the next two chapters, it is worthwhile to consider briefly, and in relatively general terms, what type of information can be obtained about the chemical and structural state of the solid-adsorbate complex. The term complex is used to avoid the common practice of discussing adsorption as though it occurred on an inert surface. Three types of effects are actually involved: (1) the effect of the adsorbent on the molecular structure of the adsorbate, (2) the effect of the adsorbate on the structure of the adsorbent, and (3) the character of the direct bond or local interaction between an adsorption site and the adsorbate. [c.582]

Van der Waals Equations of State. A logical step to take next is to consider equations of state that contain both a covolume term and an attractive force term, such as the van der Waals equation. De Boer [4] and Ross and Olivier [55] have given this type of equation much emphasis. [c.623]

An interesting alternative method for formulating U(x) was proposed in 1929 by de Boer and Zwikker [80], who suggested that the adsorption of nonpolar molecules be explained by assuming that the polar adsorbent surface induces dipoles in the first adsorbed layer and that these in turn induce dipoles in the next layer, and so on. As shown in Section VI-8, this approach leads to [c.629]

In order to analyse results from such experiments, it is appropriate to consider a general framework, linear response theory, which is useful whenever the probe radiation couples weakly to the system. The linear response framework is also convenient for utilizing various symmetry and analyticity properties of correlation functions and response functions, thereby reducing the general problem to determining quantities which are amenable to approximations in such a way that the symmetry and analyticity properties are left intact. Such approximations are necessary in order to avoid the full complexity of many-body dynamics. The central quantity in linear response theory is the response function. It is related to the corresponding correlation function (typically obtained from experimental measurements) through a fluctuation-dissipation theorem. In the next section, section A3.3.2.1, we discuss only the subset of necessary results from linear response theory, which is described in detail in the book by Forster (see Further Reading). [c.718]

Next, for the log term (which normalizes the wave function), we have to choose, as in Eq. (15), suitable functions P(t) that will correct the behavior of that term along the large semicircles. Among the multiplicity of choices, the following are the most rewarding (since they completely cancel the log term) [c.127]

It might be asked what happens when one adds further couplings beyond the quadratic one. In the next higher order one finds a scalar cubic term of the form [c.136]

Next, we shall consider four kinds of integrals. The first is the expectation value of the Coulomb potential of one nucleus for a primitive basis function centered at that nucleus. The second is the expectation value of the Coulomb potential of one nucleus for a primitive basis function centered at a different point (usually another nucleus). Then we consider the matrix element of a Coulomb term between two primitive basis functions at different centers: the third case is when one basis function is centered at the nucleus considered; the fourth case is when neither basis function is centered at that nucleus. By that we mean, for two Gaussian basis functions defined in Eqs. (73) and (74), we are calculating [c.413]

The next question asked is whether there are any indications, from ab initio calculations, that the non-adiabatic transformation angles have this feature. Indeed, such a study, related to the H3 system, was reported a few years ago [64]. However, it was done for circular contours with exceptionally small radii (at most a few tenths of an atomic unit). Similar studies, for circular and noncircular contours of much larger radii (sometimes up to five atomic units and more), were done for several systems, showing that this feature holds in much more general situations [11,12,74]. As a result of the numerous numerical studies on this subject [11,12,64-75], the quantization of a quasi-isolated two-state non-adiabatic coupling term can be considered as established for realistic systems. [c.638]

Next, by recalling the assumptions concerning the intensities of τ12(t) and τ21(s), we replace τ(t) in the second term of Eq. (116) with τ12(s) and in the third term with τ23(s). As a result, Eq. (116) becomes [c.683]

Equation (B.10) stands for the (j,k) matrix element of the left-hand side of Eq. (B.7). Next, we consider the (j,k) element of the first term on the right-hand side of Eq. (B.7), namely, [c.720]

To increase the reliability of such simulations of inherently-chaotic systems, special care is needed to formulate efficient numerical procedures for generating dynamic trajectories. This reliability, or accuracy in a loose sense, must be measured with respect to the precise simulation questions posed. Mathematically, there are classes of methods for conservative Hamiltonian systems, termed symplectic, that possess favorable numerical properties in theory and practice [6]. Essentially, these schemes preserve volumes in phase space (as measured rigorously by the Jacobian of the transformation from one set of coordinates and momenta to the next). This preservation in turn implies certain physical invariants for the system. Another view of symplectic integration is that the computed trajectory remains close in time to the solution of a nearby Hamiltonian H̃, i.e., one which is of order O((Δt)^p) away from the initial value of the true Hamiltonian H = ½(V⁰)ᵀ M V⁰ + E(X⁰), where p is the order of the integrator. This property translates to good long-time behavior in practice: small fluctuations about the initial (conserved in theory) value of H, and no systematic drift in energy, as might be realized by a nonsymplectic method. Fig. 2 illustrates this behavior for a simple nonlinear system, a cluster of four water molecules, integrated by Verlet and by the classical fourth-order Runge-Kutta method. A clear damping trend is seen for the latter, nonsymplectic integrator, especially at the larger timestep. [c.230]
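The Verlet-versus-Runge-Kutta contrast described above can be reproduced on a much smaller model than the water cluster. The sketch below (my own minimal example, not the text's system) integrates a harmonic oscillator, H = p²/2 + q²/2, with both methods: the symplectic Verlet energy fluctuates within a bounded band, while the nonsymplectic RK4 energy drifts systematically downward.

```python
def verlet(q, p, dt, n):
    """Velocity Verlet for H = p^2/2 + q^2/2 (force = -q); symplectic."""
    for _ in range(n):
        p_half = p - 0.5 * dt * q
        q = q + dt * p_half
        p = p_half - 0.5 * dt * q
    return q, p

def rk4(q, p, dt, n):
    """Classical 4th-order Runge-Kutta for the same oscillator; not symplectic."""
    def f(q, p):  # (dq/dt, dp/dt)
        return p, -q
    for _ in range(n):
        k1q, k1p = f(q, p)
        k2q, k2p = f(q + 0.5 * dt * k1q, p + 0.5 * dt * k1p)
        k3q, k3p = f(q + 0.5 * dt * k2q, p + 0.5 * dt * k2p)
        k4q, k4p = f(q + dt * k3q, p + dt * k3p)
        q += dt * (k1q + 2 * k2q + 2 * k3q + k4q) / 6
        p += dt * (k1p + 2 * k2p + 2 * k3p + k4p) / 6
    return q, p

energy = lambda q, p: 0.5 * (p * p + q * q)
qe, pe = verlet(1.0, 0.0, 0.5, 10_000)   # large timestep, many steps
qr, pr = rk4(1.0, 0.0, 0.5, 10_000)
print(energy(qe, pe))   # stays near the initial 0.5 (bounded fluctuation)
print(energy(qr, pr))   # decays well below 0.5 (systematic damping)
```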

The previous sections have dealt mainly with the representation of chemical structures as flat, two-dimensional, or topological objects resulting in a structure diagram. The next step is the introduction of stereochemistry (see Section 2.8), leading to the term "configuration" of a molecule. The configuration of a molecule defines the positions, among all those that are possible, in which the atoms in the molecule are arranged relative to each other, insofar as the various arrangements lead to distinguishable and isolable stereoisomeric compounds of one and the same molecule. A major characteristic of stereoisomeric compounds is that they have the same constitution, but are only interconvertible by breaking and forming new bonds. [c.91]

The seminal method for most modern semi-empirical MO techniques is MNDO, which was published by Dewar and Thiel in 1977 [15]. MNDO is an NDDO method in which Dewar and Thiel introduced a new multipole-based formalism for calculating the two-electron integrals. It was parameterized to reproduce experimental heats of formation, geometries, dipole moments, and ionization potentials. It proved to be markedly superior to the MINDO methods for most calculated quantities. However, MNDO has one weakness that severely limits its usefulness: it does not reproduce hydrogen bonding. This weakness was fixed pragmatically in MNDO/H by Burstein and Isaev [16], who simply modified the core-core repulsion potential with additional Gaussian functions in order to obtain hydrogen bonds. This "fix" was adopted by the Dewar group for their next method, AM1 [17], which is otherwise identical to MNDO. AM1, in turn, was found to have significant weaknesses for nitro and hypervalent compounds. These weaknesses were addressed by Stewart in a new parameterization, named PM3 [18], which is otherwise identical to AM1. However, MNDO, MNDO/H, AM1, and PM3 are quantum mechanically essentially identical. Their differences are restricted to classical "correcting potentials" between atoms and to which parameters are treated as variables in the parameterization procedure. [c.383]

Based on experience with the measurement of thin layers and related deconvolution techniques [5], [6], air-borne ultrasonics and a new deconvolution algorithm have been investigated [7]. Focussed and optimized composite probes were used and excited with a square-wave pulser in pulse-echo mode. The signals were acquired, digitized and, after preprocessing (filtering), the difference in Time of Flight (TOF) of two overlapping reflection pulses was deconvolved. The time resolution of the deconvolution is independent of the sampling and can be much better than the actual sampling rate. Of course, the Nyquist theorem has to be fulfilled. This is no restriction, since it just says that the sample rate has to be at least twice the maximum frequency (bandlimit, filter) present in the signal itself. [c.843]

According to the Nyquist theorem, to determine a periodic signal's true frequency, we must sample the signal at a rate that is at least twice its frequency (Figure 7.3b); that is, the signal must be sampled at least twice during a single cycle or period. When samples are collected at an interval of Δt, the highest frequency that can be accurately monitored is (2Δt)⁻¹. For example, if samples are collected every hour, the highest frequency that we can monitor is 0.5 h⁻¹, or a periodic cycle lasting 2 h. A signal with a cycling period of less than 2 h (a frequency of more than 0.5 h⁻¹) cannot be monitored. Ideally, the sampling frequency should be at least three to four times that of the highest frequency signal of interest. Thus, if an hourly periodic cycle is of interest, samples should be collected at least every 15-20 min. [c.184]

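The arithmetic in that rule is simple enough to sketch directly; the function names and the factor-of-four default are illustrative choices, not from the text:

```python
def highest_observable_frequency(dt_hours: float) -> float:
    """Nyquist limit: f_max = (2*dt)^-1 for a sampling interval dt."""
    return 1.0 / (2.0 * dt_hours)

def sampling_interval_for(f_signal_per_hour: float, factor: float = 4.0) -> float:
    """Interval (hours) that samples `factor` times per cycle of the signal,
    following the 3-4x design rule quoted above."""
    return 1.0 / (factor * f_signal_per_hour)

print(highest_observable_frequency(1.0))   # 0.5 cycles/hour -> a 2 h period
print(sampling_interval_for(1.0) * 60)     # 15.0 minutes for an hourly cycle
```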
Sethna [1981] considered two limiting cases. The calculation of the action in the fast-flip approximation (ω_j ≫ ω₀) proceeds by utilizing the expansion exp(−ω_j|τ|) ≈ 1 − ω_j|τ|. After substituting the first term, i.e. the unity, in (5.72) we get precisely the quantity which yields the Franck-Condon factor in the rate constant. The next term cancels the adiabatic renormalization and changes KM) [c.89]

Finally, select acetone from the molecules on screen. Here, both the LUMO and the LUMO map are available under the Surfaces menu. First, select LUMO and display it as a Solid. It describes a π-type antibonding (π*) orbital concentrated primarily on the carbonyl carbon and oxygen. Next, turn off this surface (select None under the LUMO sub-menu), and then select LUMO Map under the Surfaces menu. Display the map as a transparent solid. Note the blue spot (maximum value of the LUMO) directly over the carbonyl carbon. This reveals the most likely site for nucleophilic attack. [c.10]

The regular Conference of the US NDT TD was held on May 13, 1997. The members of the Society highly evaluated the activity of the Society's Governing Board and elected Prof. V. A. Troitskij the Society Chairman for the next three-year term. [c.967]

Geometrically, Liouville's theorem means that if one follows the motion of a small phase volume in Γ space, it may change its shape but its volume is invariant. In other words, the motion of this volume in Γ space is like that of an incompressible fluid. Liouville's theorem, being a restatement of mechanics, is an important ingredient in the formulation of the theory of statistical ensembles, which is considered next. [c.383]
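Phase-volume preservation can be checked numerically for any one-step map. The sketch below (my own illustration, not from the text) applies a symplectic-Euler step to a pendulum, H = p²/2 − cos(q), and estimates the Jacobian determinant of the step by central differences; it comes out as 1, i.e. the map moves phase volume like an incompressible fluid.

```python
import math

def step(q, p, dt=0.1):
    """One symplectic-Euler step for the pendulum: kick, then drift."""
    p_new = p - dt * math.sin(q)   # kick by the force -dH/dq = -sin(q)
    q_new = q + dt * p_new         # drift with the updated momentum
    return q_new, p_new

def jacobian_det(q, p, h=1e-6):
    """Central-difference estimate of det d(q',p')/d(q,p) for one step."""
    dq_dq = (step(q + h, p)[0] - step(q - h, p)[0]) / (2 * h)
    dq_dp = (step(q, p + h)[0] - step(q, p - h)[0]) / (2 * h)
    dp_dq = (step(q + h, p)[1] - step(q - h, p)[1]) / (2 * h)
    dp_dp = (step(q, p + h)[1] - step(q, p - h)[1]) / (2 * h)
    return dq_dq * dp_dp - dq_dp * dp_dq

print(jacobian_det(0.7, -0.3))   # 1.0 to within finite-difference error
```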

The previous calculations, while not altogether trivial, are among the simplest uses one can make of kinetic theory arguments. Next we turn to a somewhat more sophisticated calculation, that for the mean free path of a particle between collisions with other particles in the gas. We will use the general form of the distribution function at first, before restricting ourselves to the equilibrium case, so as to set the stage for discussions in later sections where we describe the formal kinetic theory. Our approach will be first to compute the average frequency with which a particle collides with other particles. The inverse of this frequency is the mean time between collisions. If we then multiply the mean time between collisions by the mean speed, given by equation (A3.1.8), we will obtain the desired result for the mean free path between collisions. It is important to point out that one might choose to define the mean free path somewhat differently, by using the root mean square velocity instead of v̄, for example. The only change will be in a numerical coefficient. The important issue will be to obtain the dependence of the mean free path upon the density and temperature of the gas and on the size of the particles. The numerical factors are not that important. [c.669]
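The calculation outlined above ends in the standard hard-sphere result λ = k_B·T / (√2 π d² P), i.e. mean speed times mean time between collisions. A hedged numerical sketch; the nitrogen-like molecular diameter is an illustrative value, not from the source:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def mean_free_path(temp_k: float, pressure_pa: float, diameter_m: float) -> float:
    """Hard-sphere mean free path: lambda = k_B*T / (sqrt(2)*pi*d^2*P)."""
    return K_B * temp_k / (math.sqrt(2) * math.pi * diameter_m ** 2 * pressure_pa)

# Nitrogen-like gas (d ~ 3.7e-10 m) at 300 K and 1 atm:
lam = mean_free_path(300.0, 101325.0, 3.7e-10)
print(lam)   # on the order of 1e-7 m (tens of nanometres)
```

Note how the dependence the passage emphasizes appears directly: λ grows with T, and shrinks with pressure (density) and with the particle cross-section d².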


Note that, in local coordinates, Step 2 is equivalent to integrating the equations (13). Thus, Step 2 can be performed either in local or in cartesian coordinates. We consider two different implicit methods for this purpose, namely, the midpoint method and the energy-conserving method (6) which, in this example, coincides with the method (7) (because the V term appearing in (6) and (7) for q = q1 − q2 is quadratic here). These methods are applied to the formulation in cartesian and in local coordinates, and the properties of the resulting propagation maps are discussed next.
[c.289]

The algorithm consists of several steps. The first one involves making an initial guess at the position of the transition state. It will calculate the gradient vector g and the Hessian matrix H at the initial point. The second step involves diagonalization of the Hessian and determination of local surface characteristics (number of negative eigenvalues). The next step depends on the structure of the Hessian. If the Hessian has the wrong number of negative eigenvalues, it will determine which Hessian mode has the greatest overlap with the eigenvector followed. If mode following has not been switched on, this algorithm will follow the lowest mode. The next step will determine SCF convergence. If the criteria are satisfied, it will stop at this point as the position of the transition state. If convergence criteria are not satisfied, it will calculate the energy and gradient vector at the new point, provided that the maximum number of steps has not been exceeded.
[c.66]
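The classification step in that algorithm — diagonalize the Hessian and count its negative eigenvalues — can be sketched on a 2×2 example (my own minimal illustration, not the program's actual code): a transition state has exactly one negative eigenvalue, a minimum has none.

```python
import math

def eigenvalues_2x2(h11, h12, h22):
    """Eigenvalues of the symmetric 2x2 Hessian [[h11, h12], [h12, h22]]."""
    tr, det = h11 + h22, h11 * h22 - h12 * h12
    root = math.sqrt(tr * tr / 4.0 - det)
    return tr / 2.0 - root, tr / 2.0 + root

def n_negative(h11, h12, h22):
    """Number of negative Hessian eigenvalues at a stationary point."""
    return sum(1 for e in eigenvalues_2x2(h11, h12, h22) if e < 0.0)

print(n_negative(2.0, 0.0, 3.0))    # 0 -> minimum
print(n_negative(-1.0, 0.5, 2.0))   # 1 -> first-order saddle (transition state)
```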

The Austin Model 1 (AM1) model was the next semi-empirical theory produced by Dewar's group [Dewar et al. 1985]. AM1 was designed to eliminate the problems with MNDO, which were considered to arise from a tendency to overestimate repulsions between atoms separated by distances approximately equal to the sum of their van der Waals radii. The strategy adopted was to modify the core-core term using Gaussian functions. Both attractive and repulsive Gaussian functions were used: the attractive Gaussians were designed to overcome the repulsion directly and were centred in the region where the repulsions were too large. Repulsive Gaussian functions were centred at smaller internuclear separations. With this modification the expression for the core-core term was related to the MNDO expression by
[c.117]
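The shape of that Gaussian correction can be sketched as follows: each atom contributes terms of the form (Z_A·Z_B/R)·a_k·exp(−b_k(R − c_k)²) added on top of the MNDO core-core energy, with a_k < 0 for the attractive Gaussians and a_k > 0 for the repulsive ones. The parameter values below are illustrative placeholders, not actual AM1 parameters.

```python
import math

def gaussian_core_correction(z_a, z_b, r, gaussians_a, gaussians_b):
    """AM1-style correction to the core-core term; each entry is (a_k, b_k, c_k).
    Attractive terms have a_k < 0, repulsive terms a_k > 0."""
    total = 0.0
    for a_k, b_k, c_k in list(gaussians_a) + list(gaussians_b):
        total += a_k * math.exp(-b_k * (r - c_k) ** 2)
    return (z_a * z_b / r) * total

# Illustrative: one attractive Gaussian near the van der Waals contact distance
# and one repulsive Gaussian at shorter range, identical for both atoms.
params = [(-0.02, 5.0, 3.0), (0.05, 8.0, 1.5)]
print(gaussian_core_correction(1.0, 1.0, 2.8, params, params))
```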

To use such a column, the crude liquid is placed in a round-bottomed flask C having a short wide neck, the usual fragments of unglazed porcelain are added, and the column then fixed in position, great care being taken to ensure that it is mounted absolutely vertically, again in order to avoid channelling. A water-condenser is then fitted in turn to the side-arm of the column, particularly when the components of the mixture have low boiling-points. The mixture is then heated with a very small flame, carefully protected from draughts to ensure a uniform supply of heat. It is essential that the initial heating of the liquid in C (while it is still mounting the cold column) should not be hurried, as considerable extra condensation occurs while the column is warming up, and the latter may easily choke; when once distillation has started, and a thermal equilibrium has been established between the column and its surroundings, the tendency to choke should disappear. The heating is then adjusted until the distillate is issuing from the side-arm of the column not faster than about 1 drop every 4-5 seconds. In these circumstances, so efficient a fractionation should be obtained that, when the lowest-boiling fraction has distilled over, distillation completely ceases, as the next higher fraction is refluxing definitely below the side-arm of the column. The heating is then cautiously increased, and a sharp rise in boiling-point (and therefore a sharp fractionation) should occur as the second fraction starts to distil. Although in Fig. 12(B) a condenser is shown fitted to the side-arm of the column, this is required only for low-boiling components; for most mixtures, however, the above rate of distillation, necessary for efficient fractionation, will be accompanied by complete condensation in the side-arm of the column, from which the successive fractions may be collected directly.
[c.27]
[c.27]


Modern Analytical Chemistry (2000) -- [c.184]

Modern Analytical Chemistry (2000) -- [c.18]