State-space approach


The state-space approach  [c.232]

The state-space approach is a generalized time-domain method for modelling, analysing and designing a wide range of control systems and is particularly well suited to digital computational techniques. The approach can deal with  [c.232]

Another approach involves starting with an initial wavefunction ψ0, represented on a grid, then generating Hψ0, and considering that this, after orthogonalization to ψ0, defines a new state vector. Successive applications of H can now be used to define an orthogonal set of vectors, which defines a Krylov space via the iteration (n = 0, ..., N).  [c.984]
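
The iteration described above can be sketched numerically. In this sketch (not from the source), a small random symmetric matrix stands in for the grid-represented Hamiltonian H; repeated application of H followed by Gram-Schmidt orthogonalization builds the orthogonal set of vectors spanning the Krylov space:

```python
import numpy as np

def krylov_basis(H, psi0, n):
    """Build an orthonormal basis of the Krylov space spanned by
    psi0, H psi0, H^2 psi0, ... by repeated application of H followed
    by Gram-Schmidt orthogonalization against all previous vectors."""
    v = psi0 / np.linalg.norm(psi0)
    basis = [v]
    for _ in range(n - 1):
        w = H @ basis[-1]              # apply the Hamiltonian
        for u in basis:                # orthogonalize to every earlier vector
            w = w - (u @ w) * u
        nrm = np.linalg.norm(w)
        if nrm < 1e-12:                # Krylov space exhausted
            break
        basis.append(w / nrm)
    return np.array(basis)

# Toy stand-in for H on a grid: a small random symmetric matrix
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
H = (A + A.T) / 2
K = krylov_basis(H, rng.standard_normal(6), 4)
```

The rows of K are mutually orthonormal by construction, which is the property the Lanczos-type recursion relies on.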

The AM1 Hamiltonian with a 2 x 2 CAS-CI (two electrons in the space of the HOMO and LUMO) was used to describe the surfaces and coupling elements. The electron-transfer process studied takes place on the ground state, with the upper state providing diabatic effects, that is, passage to this surface can delay, or even hinder, the transfer process. A surface hopping approach was used for the dynamics with a Landau-Zener hopping probability and using the Miller-George correction for the momentum after a hop. The charge distribution was used to describe the positions along the reaction coordinate, with charge localization on the left and right corresponding to reactant and product, and the symmetric delocalized charge denoting the non-adiabatic region. The studies used trajectories taken from thermalized ensembles to provide detailed dynamic information for the transfer processes, and the relationship between energy gap, electronic coupling between states, and rates of transfer.  [c.310]
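
The Landau-Zener hopping probability named above has a standard closed form, P = exp(-2π V12² / (ħ v |ΔF|)). A minimal sketch (the parameter values are illustrative and not taken from the study described):

```python
import math

def landau_zener_probability(V12, velocity, slope_diff, hbar=1.0):
    """Landau-Zener probability of a diabatic passage (i.e., a hop
    between the adiabatic surfaces) at an avoided crossing.
    V12: diabatic coupling; velocity: nuclear speed through the
    crossing region; slope_diff: |difference of the diabatic slopes|."""
    return math.exp(-2.0 * math.pi * V12 ** 2 / (hbar * velocity * abs(slope_diff)))

# Weak coupling -> nearly diabatic passage (probability close to 1)
p = landau_zener_probability(V12=0.01, velocity=1.0, slope_diff=0.5)
```

Increasing the coupling V12 suppresses the hop exponentially, which is the qualitative behavior a surface-hopping scheme exploits near the non-adiabatic region.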

The phase change of the total polyelectronic wave function in a chemical reaction [22-25], which is more extensively discussed in Section III, is central to the approach presented in this chapter. It is shown that some reactions may be classified as phase preserving (p) on the ground-state surface, while others are phase inverting (i). The distinction between the two can be made by checking the change in the spin pairing of the electrons that are exchanged in the reaction. A complete loop around a point in configuration space may be constructed using a number of consecutive elementary reactions, starting and ending with the reactant A. The smallest possible loop typically requires at least three reactions: two other molecules must be involved in order to complete a loop; they are the desired product B and another one, C, so that the complete loop is A → B → C → A. Two types of phase inverting loops may be constructed: those in which each reaction is phase inverting (an i loop) and those in which one reaction is phase inverting and the other two phase preserving (an ip loop). At least one reaction must be phase inverting for the complete loop to be phase inverting and thus to encircle a conical intersection and lead to a photochemical reaction. It follows that if a conical intersection is crossed during a photochemical reaction, in general at least two products are expected, B and C. A single product requires the existence of a two-component loop. This is possible if one of the molecules may be viewed as the out-of-phase combination of a two-state system. The allyl radical (Section IV, cf. Fig. 12) and the triplet state are examples of such systems. We restrict the discussion in this chapter to singlet states only.  [c.329]

In a performance-based approach to quality assurance, a laboratory is free to use its experience to determine the best way to gather and monitor quality assessment data. The quality assessment methods remain the same (duplicate samples, blanks, standards, and spike recoveries) since they provide the necessary information about precision and bias. What the laboratory can control, however, is the frequency with which quality assessment samples are analyzed, and the conditions indicating when an analytical system is no longer in a state of statistical control. Furthermore, a performance-based approach to quality assessment allows a laboratory to determine if an analytical system is in danger of drifting out of statistical control. Corrective measures are then taken before further problems develop.  [c.714]
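
The notion of an analytical system drifting out of statistical control can be illustrated with a simple Shewhart-style control-chart check. This is a conventional sketch, not a procedure prescribed by the source; the function names, the 3σ limits, and the recovery data are all illustrative:

```python
def control_limits(history):
    """Shewhart-style 3-sigma control limits estimated from historical
    quality-assessment results (e.g., spike recoveries in percent)."""
    n = len(history)
    mean = sum(history) / n
    s = (sum((x - mean) ** 2 for x in history) / (n - 1)) ** 0.5
    return mean - 3.0 * s, mean + 3.0 * s

def in_statistical_control(value, limits):
    """A new quality-assessment result is acceptable if it falls
    inside the control limits."""
    lo, hi = limits
    return lo <= value <= hi

# Invented historical spike recoveries (%)
history = [98.2, 101.1, 99.5, 100.4, 98.9, 100.8, 99.7, 100.2]
limits = control_limits(history)
```

A laboratory tuning the frequency of quality assessment samples would re-estimate such limits as data accumulate and act before results approach them.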

Caraway Seed. This spice is the dried ripe fruit of Carum carvi L. (Umbelliferae). It is a biennial plant cultivated extensively in the Netherlands, Hungary, Denmark, Egypt, and North Africa. The seed is brown and hard, about 0.48 cm long, and is curved and tapered at the ends. It is perhaps the oldest condiment cultivated in Europe. The odor is pleasant and the flavor is aromatic, warm, and somewhat sharp (carvone). Caraway is used in dark bread, potatoes, sauerkraut, kuemmel liqueurs, cheese, applesauce, and cookies.  [c.28]

Wet Foam: Spherical Bubbles. If there are sufficiently strong repulsive interactions, such as from the electric double-layer force, then the gas bubbles at the top of a froth collect together without bursting. Furthermore, their interfaces approach as closely as these repulsive forces allow, typically on the order of 100 nm. Thus bubbles on top of a froth can pack together very closely and still allow most of the liquid to escape downward under the influence of gravity while maintaining their spherical shape. Given sufficient liquid, such a foam can resemble the random close-packed structure formed by hard spheres. With less liquid, depending on the distribution of bubble sizes, the bubbles must distort from their spherical shapes. For example, spheres of identical size can pack to fill at most π/√18 ≈ 0.74 of space; this occurs if they are packed into a crystalline lattice. A foam with a monodisperse size distribution but less than 26% liquid is thus composed of bubbles which are not spherical but are noticeably squashed together. Typical foams, as in Figure  [c.428]

The lower section of a column with downward flow must have a distributor system that not only collects liquid evenly over the cross-sectional area, but also supports the resin bed and prevents resin from leaving the column. The traditional method has been to place a network of pipes with small holes drilled in them (a distributor) in a bed of graded gravel, sand, or anthracite coal, which supports the resin bed. While that practice continues, the trend has been toward other approaches. In one modification, the underbed is eliminated by securely wrapping the pipe elements with small mesh, noncorrosive screening. The size of the screen openings must be sufficiently smaller than the resin particles to avoid plugging. Blockage of the openings increases pressure drop and contributes to uneven or channeled flow. Special pipes formed by spirally winding triangular wire around supports, while carefully controlling the space between the flat sides of the wire as it is wound, are another approach that is gaining acceptance. Perforated plates separating the resin from the distributor are used in other installations. Careful design of the distributor is essential, especially for the larger diameter units (see Fluid mechanics). If the linear flow rate near the wall of the column is substantially less than that at the midsection of the column, premature breakthrough, more frequent regeneration, and incomplete utilization of the rated operating capacity for the resin result.  [c.381]

This approach to liquid waste treatment requires synergy between different process stages or between various industries (industrial ecology). Suitably matched solid adsorbents and waste liquids must be available so that selective wetting and agglomeration take place. Combinations of acceptable feed streams are specific to a given industry or geographic location, but some examples are obvious, such as the emulsions from oil production operations scavenged by thermal coal, eg, in Alberta, Canada; oily sludges from coking/steelmaking operations treated by coal or coke in chemical/petrochemical complexes; reject or off-spec solvents and other oily wastes treated by solid wastes such as rubber crumb or shredded plastics; and purification of oily wastewater by soot pelletization in an oil gasification plant, used commercially by Shell (95).  [c.123]

As primary alkaline fuel cells were developed for space applications, consideration was given to separate stack rechargeable designs. In these approaches, the product water formed during the discharge of a primary fuel cell is stored, then fed to a separate electrolyzer stack during charge. The hydrogen and oxygen gas generated during charge is stored in separate pressure vessels. This approach overcomes the stability problem of the bifunctional oxygen electrode, and the respective stacks can be optimized for their function. This system is still rather complex and bulky and has not yet been applied.  [c.566]

Process synthesis is the step in design when the chemical engineer selects component parts and the interconnection between them to create the flow sheet. This formal approach to design includes developing a representation of the synthesis problem, using a means to evaluate alternatives, and following a strategy to search the almost infinitely large space of possible alternatives. Effective solutions depend heavily on the nature of the synthesis problem being addressed (51,52).  [c.80]

An alternative strategy for calculating systemwide averages is to follow the motion of a single point through phase space instead of averaging over the whole phase space all at once. That is, in this approach the motion of a single point (a single molecular state) through phase space is followed as a function of time, and the averages are calculated only over those points that were visited during the excursion. Averages calculated in this way are called dynamic averages. The motion of a single point through phase space is obtained by integrating the system's equation of motion. Starting from a point r(0), p(0), the integration procedure yields a trajectory, that is, the set of points r(t), p(t).  [c.41]
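
A minimal sketch of this idea, assuming a one-dimensional harmonic oscillator and the velocity Verlet integrator (neither is specified in the source): the trajectory is generated by integrating the equation of motion, and a dynamic average is accumulated over the visited phase-space points.

```python
def velocity_verlet(x, p, force, m, dt, nsteps):
    """Integrate the equation of motion, returning the sequence of
    phase-space points (x, p) visited along the trajectory."""
    traj = [(x, p)]
    f = force(x)
    for _ in range(nsteps):
        p_half = p + 0.5 * dt * f
        x = x + dt * p_half / m
        f = force(x)
        p = p_half + 0.5 * dt * f
        traj.append((x, p))
    return traj

# Harmonic oscillator, F = -k x; dynamic average of the potential energy
k, m, dt = 1.0, 1.0, 0.01
traj = velocity_verlet(x=1.0, p=0.0, force=lambda x: -k * x, m=m, dt=dt, nsteps=10000)
avg_pot = sum(0.5 * k * x * x for x, _ in traj) / len(traj)
```

For this oscillator the long-time dynamic average of the potential energy approaches half the conserved total energy (0.25 in these units), a quick consistency check that averaging over visited points behaves as expected.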

The original particle mesh (P3M) approach of Hockney and Eastwood [42] treats the reciprocal space problem from the standpoint of numerically solving the Poisson equation under periodic boundary conditions with the Gaussian co-ion densities as the source density p on the right-hand side of Eq. (10). Although a straightforward approach is to  [c.110]
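
The reciprocal-space solve can be sketched in one dimension: under periodic boundary conditions the Poisson equation d²φ/dx² = -ρ becomes φ_k = ρ_k/k² in Fourier space. The 1-D setting, grid size, and Gaussian source below are illustrative; a real P3M code works in three dimensions and adds the short-range particle-particle part:

```python
import numpy as np

def poisson_periodic(rho, L):
    """Solve the Poisson equation  d2(phi)/dx2 = -rho  on a periodic
    1-D grid of length L by Fourier transformation: phi_k = rho_k / k^2.
    The k = 0 component is dropped, which requires a neutral source."""
    n = rho.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
    rho_k = np.fft.fft(rho)
    phi_k = np.zeros_like(rho_k)
    nonzero = k != 0
    phi_k[nonzero] = rho_k[nonzero] / k[nonzero] ** 2
    return np.fft.ifft(phi_k).real

# Gaussian source density plus a neutralizing background
n, L = 256, 10.0
x = np.linspace(0.0, L, n, endpoint=False)
rho = np.exp(-((x - L / 2) ** 2) / 0.5)
rho -= rho.mean()          # enforce neutrality under periodic BCs
phi = poisson_periodic(rho, L)
```

The returned potential satisfies the discrete Laplacian relation on the grid, which is the property the mesh part of P3M provides before the short-range correction is added back.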

One important class of integral equation theories is based on the reference interaction site model (RISM) proposed by Chandler [77]. These RISM theories have been used to study the conformation of small peptides in liquid water [78-80]. However, the approach is not appropriate for large molecular solutes such as proteins and nucleic acids. Because RISM is based on a reduction to site-site, solute-solvent radially symmetrical distribution functions, there is a loss of information about the three-dimensional spatial organization of the solvent density around a macromolecular solute of irregular shape. To circumvent this limitation, extensions of RISM-like theories for three-dimensional space (3d-RISM) have been proposed [81,82].  [c.144]

The methods in this class begin by generating many constraints or restraints on the structure of the target sequence, using its alignment to related protein structures as a guide. The restraints are generally obtained by assuming that the corresponding distances between aligned residues in the template and the target structures are similar. These homology-derived restraints are usually supplemented by stereochemical restraints on bond lengths, bond angles, non-bonded atom-atom contacts, etc., which are obtained from a molecular mechanics force field. The model is then derived by minimizing the violations of all the restraints. This can be achieved by either distance geometry or real-space optimization. For example, an elegant distance geometry approach constructs all-atom models from lower and upper bounds on distances and dihedral angles [76,77]. Lower and upper bounds on Cα-Cα and main chain-side chain distances, hydrogen bonds, and conserved  [c.281]
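
The idea of deriving a model by minimizing restraint violations can be sketched with a toy objective: squared violations of lower/upper distance bounds, reduced by crude numerical gradient descent. This is a stand-in for the real-space optimization mentioned above; the three-point system and all bounds are invented for illustration:

```python
def restraint_violation(coords, restraints):
    """Sum of squared violations of lower/upper distance bounds.
    coords: list of (x, y, z) points; restraints: (i, j, lower, upper)."""
    total = 0.0
    for i, j, lo, hi in restraints:
        d = sum((a - b) ** 2 for a, b in zip(coords[i], coords[j])) ** 0.5
        if d < lo:
            total += (lo - d) ** 2
        elif d > hi:
            total += (d - hi) ** 2
    return total

def minimize_restraints(coords, restraints, steps=3000, lr=0.05, h=1e-5):
    """Crude numerical-gradient descent on the total violation."""
    coords = [list(p) for p in coords]
    for _ in range(steps):
        for i in range(len(coords)):
            for d in range(3):
                coords[i][d] += h
                plus = restraint_violation(coords, restraints)
                coords[i][d] -= 2.0 * h
                minus = restraint_violation(coords, restraints)
                coords[i][d] += h                      # restore coordinate
                coords[i][d] -= lr * (plus - minus) / (2.0 * h)
    return coords

# Three "atoms" with pairwise bounds, all numbers invented
restraints = [(0, 1, 1.0, 1.2), (1, 2, 1.0, 1.2), (0, 2, 1.8, 2.4)]
start = [(0.0, 0.0, 0.0), (0.5, 0.0, 0.0), (1.0, 0.0, 0.0)]
model = minimize_restraints(start, restraints)
```

Production methods use far more efficient machinery (analytic gradients, distance geometry embedding), but the objective being minimized has this same violation-penalty structure.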

A multihearth furnace is typically used in the regeneration process. The multihearth furnace employs a simple design approach. It consists of a steel sheet lined with refractory inside. This refractory can be a castable as used in the 30-inch units, or brick as used in the larger sizes. The latter can also have 4.5 inches of insulating blocks which make the walls a total of 9 inches thick for the small furnaces or 13.5 inches on the larger furnaces where high temperatures are used. The interior space of the furnace is divided by horizontal brick arches into separate compartments called hearths. Alternate hearths have holes at the periphery or at the center for the carbon to drop through from one hearth to the next. Through the center of the furnace goes a rotating shaft driven at the bottom by a speed reducer with variable-speed drive. It is sealed at the top and bottom by special sand seals to prevent air or gas leakage. The shaft is hollow and has sockets where arms, called rabble arms, are fitted. An inner tube in each arm and in the shaft provides the means for air cooling of both to prevent damage by the intense heat. This cooling air is blown in through a special connection at the bottom of the shaft. The arms are, in turn, fitted with rabble teeth, placed at an angle, and impart a motion to the carbon when the  [c.315]

Most measurements of the strength of a material are based on uniaxial stress states. However, the general practical design problem involves at least a biaxial if not a triaxial state of stress. Thus, a logical method of using uniaxial strength information obtained in principal material coordinates is required for analysis of multiaxial loading problems. Obtaining the strength characteristics of a lamina at all possible orientations is physically impossible, so a method must be determined for obtaining the characteristics at any orientation in terms of characteristics in the principal material coordinates. In such an extension of the information obtained in principal material coordinates, the well-known concepts of principal stresses and principal strains are of no value. A multitude of possible microscopic failure mechanisms exists, so a tensor transformation of strengths is very difficult. Moreover, tensor transformations of strength properties are much more complicated than the tensor transformation of stiffness properties. (The strength tensor, if one even exists, must be of higher order than the stiffness tensor.) Nevertheless, tensor transformations of strength are performed and used as a phenomenological failure criterion (phenomenological because only the occurrence of failure is predicted, not the actual mode of failure). A somewhat empirical approach will be adopted: the actual failure envelopes in stress space will be compared with simplified failure envelopes.  [c.102]
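
Comparing failure envelopes in stress space starts from transforming applied stresses into the principal material coordinates. A sketch using the standard 2-D plane-stress transformation and a maximum-stress check; the strength values X, Y, S are illustrative, and the criterion is deliberately simplified relative to those discussed in the chapter:

```python
import math

def stress_in_material_axes(sx, sy, txy, theta):
    """Transform global plane stresses to the lamina's principal
    material axes, rotated by angle theta (radians)."""
    c, s = math.cos(theta), math.sin(theta)
    s1 = c * c * sx + s * s * sy + 2.0 * c * s * txy
    s2 = s * s * sx + c * c * sy - 2.0 * c * s * txy
    t12 = -c * s * sx + c * s * sy + (c * c - s * s) * txy
    return s1, s2, t12

def max_stress_ok(s1, s2, t12, X, Y, S):
    """Maximum-stress criterion on magnitudes only (a simplification
    that ignores the tension/compression distinction)."""
    return abs(s1) <= X and abs(s2) <= Y and abs(t12) <= S

# Illustrative strengths: strong along the fibers (X), weak transverse (Y)
X, Y, S = 100.0, 10.0, 20.0
ok_0 = max_stress_ok(*stress_in_material_axes(50.0, 0.0, 0.0, 0.0), X, Y, S)
ok_90 = max_stress_ok(*stress_in_material_axes(50.0, 0.0, 0.0, math.pi / 2), X, Y, S)
```

The same uniaxial load survives at 0° but fails at 90°, which is exactly why strength at an arbitrary orientation must be expressed through the principal material coordinates.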

The second category of process-synthesis strategies is structural. This technique involves the development of a framework that embeds all potential configurations of interest. Examples of these frameworks include process graphs, state-space representations, and superstructures (e.g., Friedler et al., 1995; Bagajewicz and Manousiouthakis, 1992; Floudas et al., 1986). The mathematical representation used in this approach is typically in the form of mixed-integer nonlinear programs (MINLPs). The objective of these programs is to identify two types of variables: integer and continuous. The integer variables correspond to the existence or absence of certain technologies and pieces of equipment in the solution. For instance, a binary integer variable can assume a value of one when a unit is  [c.4]
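
For a toy superstructure, the role of the binary existence variables can be shown by brute-force enumeration instead of a MINLP solver. All unit names, costs, and capacities below are invented for illustration:

```python
from itertools import product

# Hypothetical superstructure: three candidate units, each with an
# installed cost and a treatment capacity; pick the cheapest subset
# whose combined capacity meets the demand.
units = {"absorber": (50.0, 40.0), "stripper": (30.0, 25.0), "membrane": (45.0, 35.0)}
demand = 60.0

best = None
for y in product([0, 1], repeat=len(units)):      # binary existence variables
    cost = sum(yi * c for yi, (c, _) in zip(y, units.values()))
    cap = sum(yi * q for yi, (_, q) in zip(y, units.values()))
    if cap >= demand and (best is None or cost < best[0]):
        best = (cost, y)
```

Each tuple y assigns one to the units that exist in the flowsheet and zero to those that do not; a real MINLP additionally optimizes continuous variables (flows, temperatures) for each such configuration.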

Studies of wave packet motion in excited electronic states of molecules with three and four atoms were conducted by Schinke, Engel and collaborators, among others, mainly in the context of photodissociation dynamics from the excited state [142, 143 and 144] (for an introduction to photodissociation dynamics, see [7], and also more recent work [145, 146, 147, 148 and 149] with references cited therein). In these studies, the dissociation dynamics is often described by a time-dependent displacement of the Gaussian wave packet in the multidimensional configuration space. As time goes on, this wave packet will occupy different manifolds (from where the molecule possibly dissociates) and this is identified with IVR. The dynamics may be described within the Gaussian wave packet method [150], and the vibrational dynamics is then of the classical IVR type (CIVR [M]). The validity of this approach depends on the dissociation rate on the one hand, and the rate of delocalization of the wave packet on the other hand. The occurrence of DIVR often receives less attention in the discussions of photodissociation dynamics mentioned above. In [148], for instance, details of the wave packet motion by means of snapshots of the probability density are missing, but a delocalization of the wave packet probably takes place, as may be concluded from inspection of figure 5 therein.  [c.1063]

On this occasion, we want also to refer to an incorrect statement that we made more than once [72], namely, that the (1,2) conical intersection results indicate that for any value of r1 and r2 the two states under consideration form an isolated two-state sub-Hilbert space. We now know that in fact they do not form an isolated system because the second state is coupled to the third state via a conical intersection, as will be discussed next. Still, the fact that the series of topological angles, as calculated for the various values of r1 and r2, are either multiples of π or zero indicates that we can form, for this adiabatic two-state system, single-valued diabatic potentials. Thus if for some numerical treatment only the two lowest adiabatic states are required, the results obtained here suggest that it is possible to form from these two adiabatic surfaces single-valued diabatic potentials employing the line-integral approach. Indeed, recently Billing et al. [104] carried out such a photodissociation study based on the two lowest adiabatic states as obtained from ab initio calculations. The complete justification for such a study was presented in Section XI.  [c.706]

Most reactions are too slow on a time scale of direct simulation, and the evaluation of reaction rates then requires the identification of a transition state (saddle point) in a reduced space of a few degrees of freedom (reaction coordinates), together with the assumption of equilibration among all other degrees of freedom. What is needed is the evaluation of the potential of mean force in this reduced space, using any of the available techniques to compute free energies. This defines the probability that the system resides in the transition-state region. Then the reactive flux in the direction of the unstable mode at the saddle point must be calculated. If friction for the motion over the barrier is neglected, a rate according to Eyring's transition-state theory is obtained. In general, the rate is smaller due to unsuccessful barrier crossing, as was first described by Kramers [94]. The classical transition rate can be properly treated by the reactive flux method [95]; see also the extensive review by Hanggi [96]. The reactive flux can be obtained from MD simulations that start from the saddle point. An illustrative and careful application of the computational approach to classical barrier crossing, including a discussion of the effects due to the Jacobian of the transformation to reaction coordinates, has recently been described by den Otter and Briels [47].  [c.15]
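
A sketch of the rate expressions named above, assuming a one-dimensional barrier in reduced units (the parameter values are illustrative): the transition-state-theory rate, then a Kramers-type transmission factor accounting for friction-induced recrossing.

```python
import math

def tst_rate(omega0, barrier, kT):
    """Eyring/TST rate for a 1-D barrier: attempt frequency in the well
    times the Boltzmann probability of reaching the saddle point."""
    return (omega0 / (2.0 * math.pi)) * math.exp(-barrier / kT)

def kramers_correction(gamma, omega_b):
    """Kramers' transmission factor (<= 1) for friction gamma and
    barrier frequency omega_b; it reduces the TST rate to account
    for unsuccessful barrier crossings."""
    g = gamma / (2.0 * omega_b)
    return math.sqrt(1.0 + g * g) - g

# The observed rate is at most the TST estimate
rate = tst_rate(omega0=1.0, barrier=5.0, kT=1.0) * kramers_correction(gamma=1.0, omega_b=1.0)
```

In the zero-friction limit the correction is one and the Eyring result is recovered; with friction the transmission factor, like the reactive-flux ratio extracted from MD, is below one.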

This algorithm alternates between the electronic structure problem and the nuclear motion. It turns out that to generate an accurate nuclear trajectory using this decoupled algorithm the electrons must be fully relaxed to the ground state at each iteration, in contrast to the Car-Parrinello approach, where some error is tolerated. This need for very accurate basis set coefficients means that the minimum in the space of the coefficients must be located very accurately, which can be computationally very expensive. However, conjugate gradient minimisation is found to be an effective way to find this minimum, especially if information from previous steps is incorporated [Payne et al. 1992]. This reduces the number of minimisation steps required to locate accurately the best set of basis set coefficients.  [c.635]
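
Conjugate gradient minimisation itself is easy to sketch for a quadratic objective, which is the relevant model near a minimum in coefficient space. The matrix below is a random symmetric positive definite stand-in, not an electronic-structure Hamiltonian:

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-10, max_iter=200):
    """Minimise f(x) = 0.5 x^T A x - b^T x for symmetric positive
    definite A; the minimum solves A x = b. Each new search direction
    reuses information from the previous step (the beta term)."""
    x = x0.copy()
    r = b - A @ x            # residual = negative gradient
    p = r.copy()
    for _ in range(max_iter):
        rr = r @ r
        if np.sqrt(rr) < tol:
            break
        Ap = A @ p
        alpha = rr / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        beta = (r @ r) / rr
        p = r + beta * p
    return x

rng = np.random.default_rng(1)
M = rng.standard_normal((8, 8))
A = M @ M.T + 8.0 * np.eye(8)    # symmetric positive definite stand-in
b = rng.standard_normal(8)
x = conjugate_gradient(A, b, np.zeros(8))
```

The "information from previous steps" in the text corresponds to the beta term: each search direction is conjugate to the earlier ones, so an n-dimensional quadratic is minimised in at most n steps.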

The second type of biomedical application utilizes the versatile chemistry of polyphosphazenes to generate bioactive polymers. Two approaches have been developed: one is to tie or physically entrap biologically active molecules using the phosphazene backbone as the carrier or encapsulant. The other is to attach bioactive molecules to a hydrolyzable (degradable) phosphazene backbone that releases the active species on breakdown of the backbone to harmless species that can be metabolized or directly excreted. Thus the first method has been used to attach a polymer-bound equivalent of the well-known anticancer agent cisplatin, heparin, dopamine, various enzymes, local anesthetics such as benzocaine, and a number of other bioactive molecules (19—26). The second approach utilizes cleverly designed polyphosphazenes that completely hydrolyze in water to small molecules. Thus, phosphazene polymers containing amino acid ester (eg, ethylglycinato) or imidazolyl substituents hydrolyze at body temperature and blood pH conditions to phosphate, ammonia, ethanol, and the corresponding amino acid or imidazole (27—30). The rate of hydrolysis can often be controlled by the presence of another substituent on phosphorus that does not allow ready hydrolysis. In this manner, bioactive agents can be released in a controlled fashion. Successful release of steroids, the antitumor agent melphalan, and of naproxen has been obtained in in vitro and in vivo studies (31—36). The attachment of oligopeptides to a specially designed side-group on polyphosphazenes has also been reported (37), and could lead to the development of useful biomaterials.  [c.257]

Hydrogen Pipelines. The manufactured gas distributed by early gas distribution systems contained up to 50% hydrogen, and at least two large-scale operations in the 1990s have evaluated 10—20% hydrogen mixtures with natural gas, but actual experience with long-distance hydrogen pipelines is rather limited. The oldest hydrogen pipeline (started in 1938) is the Chemische Werke Hüls AG 220-km, 150—300-mm dia system in the German Ruhr Valley that transports 100 × 10⁶ m³ of hydrogen annually to multiple users at a nominal pressure of 1.55 MPa (225 psi). Following its expansion after 1954, some fires have occurred but no hydrogen embrittlement or explosions (12,13). Other shorter H2 pipelines include a 340-km network in France and Belgium, an 80-km pipeline in South Africa, and two short pipelines in Texas that supply hydrogen to industrial users (14). NASA has piped H2 through short pipelines at their space centers for several years.  [c.46]

The design of sorption systems is based on a few underlying principles. First, knowledge of sorption equilibrium is required. This equilibrium, between solutes in the fluid phase and the solute-enriched phase of the solid, supplants what in most chemical engineering separations is a fluid-fluid equilibrium. The selection of the sorbent material with an understanding of its equilibrium properties (i.e., capacity and selectivity as a function of temperature and component concentrations) is of primary importance. Second, because sorption operations take place in batch, in fixed beds, or in simulated moving beds, the processes have dynamical character. Such operations generally do not run at steady state, although such operation may be approached in a simulated moving bed. Fixed-bed processes often approach a periodic condition called a periodic state or cyclic steady state, with several different feed steps constituting a cycle. Thus, some knowledge of how transitions travel through a bed is required. This introduces both time and space into the analysis, in contrast to many chemical engineering operations that can be analyzed at steady state with only a spatial dependence. For good design, it is crucial to understand fixed-bed performance in relation to adsorption equilibrium and rate behavior. Finally, many practical aspects must be included in design so that a process starts up and continues to perform well, and that it is not so overdesigned that it is wasteful. While these aspects are process-specific, they include an understanding of dispersive phenomena at the bed scale and, for regenerative processes, knowledge of aging characteristics of the sorbent material, with consequent changes in sorption equilibrium.  [c.1497]
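
How fast a transition travels through a bed can be estimated from local-equilibrium theory: the concentration-wave velocity is the interstitial velocity divided by 1 + ((1 - ε)/ε)·dq*/dc. A sketch assuming a Langmuir isotherm, with all numbers illustrative:

```python
def langmuir_slope(qm, K, c):
    """d q*/d c for the Langmuir isotherm q* = qm K c / (1 + K c)."""
    return qm * K / (1.0 + K * c) ** 2

def wave_velocity(u, eps, dqdc):
    """Local-equilibrium velocity of a concentration transition in a
    fixed bed: the more strongly the solute sorbs (larger dq*/dc),
    the slower the transition travels relative to the fluid."""
    return u / (1.0 + (1.0 - eps) / eps * dqdc)

# Illustrative numbers: interstitial velocity u, bed voidage eps
v = wave_velocity(u=1.0e-3, eps=0.4, dqdc=langmuir_slope(qm=2.0, K=5.0, c=0.1))
```

This single formula already captures the coupling of equilibrium (through the isotherm slope) to the time-and-space character of fixed-bed operation that the paragraph emphasizes.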

The gas turbine was designed shortly after World War II and introduced to the market in the early 1950s. The early heavy-duty gas turbine design was largely an extension of steam turbine design. Restrictions of weight and space were not important factors for these ground-based units, so the design characteristics included heavy-wall casings split on horizontal centerlines, hydrodynamic (tilting pad) bearings, large-diameter combustors, thick airfoil sections for blades and stators, and large frontal areas. The overall pressure ratio of these units varied from 5:1 for the earlier units to 30:1 for the units in the 1990s. Turbine inlet temperatures have been increased and run as high as 2300°F (1260°C) on some of these units. Projected temperatures approach 3000°F (1649°C) and, if achieved, would make the gas turbine even more efficient. The industrial heavy-duty gas turbines most widely used employ axial-flow compressors and turbines. In most U.S. designs, combustors are can-annular. Single-stage side combustors are used in European designs. The combustors used in industrial gas turbines have heavy walls and are very durable.  [c.2507]

Chemical reactions, including protein folding, are best understood from the vantage point of their underlying energy landscapes, which are theoretical manifestations of the interactions that contribute to the chemical processes. An energy landscape is a surface defined over conformation space indicating the potential energy of each and every possible conformation of the molecule. Similar to regular topographic landscapes, valleys in an energy landscape indicate stable low energy conformations and mountains indicate unstable high energy conformations. However, although reactions of small molecules can be characterized directly by the potential energy landscape, the high dimensionality of protein conformation spaces often makes a temperature-dependent effective energy landscape (or free energy landscape) the theoretical framework of choice. Such a surface corresponds to a Boltzmann weighted average of the accessible energies along an appropriately chosen reaction coordinate (or progress variable). The latter, which describes the approach to the native state, is obtained by averaging over many nonessential degrees of freedom. Such a reaction coordinate describes the progress of the reaction from the initial to the final state but includes the possibility of many different paths on the original high dimensional energy landscape.  [c.373]
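
The Boltzmann-weighted average over conformations sharing a value of the reaction coordinate can be sketched directly as F(q) = -kT ln Σᵢ exp(-Eᵢ/kT) over the states binned at q. The toy landscape below is invented for illustration:

```python
import math
from collections import defaultdict

def free_energy_profile(states, kT):
    """Project conformational energies onto a reaction coordinate by
    Boltzmann-weighted averaging: F(q) = -kT ln sum_i exp(-E_i / kT)
    over all states sharing the coordinate value q.
    states: iterable of (q, E) pairs."""
    z = defaultdict(float)
    for q, E in states:
        z[q] += math.exp(-E / kT)
    return {q: -kT * math.log(zq) for q, zq in z.items()}

# Toy landscape: low-energy basins at q = 0 (unfolded) and q = 1 (native),
# higher-energy conformations in the transition region at q = 0.5
states = [(0.0, 0.0), (0.0, 0.2), (0.5, 2.0), (0.5, 2.1), (0.5, 1.9), (1.0, -1.0)]
F = free_energy_profile(states, kT=1.0)
```

The resulting profile has a free energy barrier between the two basins, the one-dimensional picture of folding that averaging over nonessential degrees of freedom is meant to produce.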


See pages that mention the term State-space approach : [c.360]    [c.361]    [c.2222]    [c.2256]    [c.2350]    [c.2573]    [c.44]    [c.498]    [c.469]    [c.537]    [c.679]    [c.707]    [c.727]    [c.333]    [c.95]    [c.409]    [c.2304]    [c.115]    [c.67]   
Advanced control engineering (2001) -- [ c.232 ]