Big Chemical Encyclopedia


Realizability constraints

It may be surprising, then, that Pliva et al. (1980) have been successful in partially removing inherent broadening by operating directly on absorption data U(x)/U0(x). Their success appears to be due in part to the stability enforced by physical-realizability constraints. [Pg.43]

We have shown that the radiant flux spectrum, as recorded by the spectrometer, is given by the convolution of the true radiant flux spectrum (as it would be recorded by a perfect instrument) with the spectrometer response function. In absorption spectroscopy, absorption lines typically appear superimposed upon a spectral background that is determined by the emission spectrum of the source, the spectral response of the detector, and other effects. Because we are interested in the properties of the absorbing molecules, it is necessary to correct for this background, or baseline as it is sometimes called. Furthermore, we shall see that the valuable physical-realizability constraints presented in Chapter 4 are easiest to apply when the data have this form. [Pg.54]
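As an illustrative sketch of this measurement model — assuming, for concreteness, a Gaussian response function and a smooth synthetic background rather than any particular instrument — the recorded flux can be simulated and baseline-corrected as follows:

```python
import numpy as np

# Wavenumber grid (arbitrary units) and a synthetic "true" absorption spectrum:
# two Lorentzian absorption lines on top of a slowly varying source background.
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]

def lorentzian(x, x0, gamma):
    return gamma**2 / ((x - x0)**2 + gamma**2)

true_absorbance = 0.8 * lorentzian(x, -1.5, 0.15) + 0.5 * lorentzian(x, 1.0, 0.10)
baseline = 1.0 + 0.05 * x            # smooth source/detector background U0(x)

# True transmitted flux and an assumed Gaussian spectrometer response function.
u_true = baseline * np.exp(-true_absorbance)
fwhm = 0.5
sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
response = np.exp(-0.5 * (x / sigma)**2)
response /= response.sum() * dx      # normalize the response to unit area

# Recorded spectrum = convolution of the true flux with the response function.
u_recorded = np.convolve(u_true, response, mode="same") * dx

# Baseline (background) correction: divide by U0(x) to obtain transmittance-like
# data U(x)/U0(x), the form on which the constrained methods operate.
transmittance = u_recorded / baseline
```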

A large number of linear methods have been developed with particular characteristics that tend to suit them to specific deconvolution problems. None of these adaptations shows beneficial results nearly so profound as those resulting from the imposition of the physical-realizability constraints discussed in the next chapter. Furthermore, the present work is not intended... [Pg.87]

Perhaps the benefits of physical-realizability constraints, particularly ordinate bounds such as positivity, have not been sufficiently recognized. Surely everyone agrees in principle that such constraints are desirable. Even the early literature on this subject frequently mentions their potential advantages. For one reason or another, however, the earliest nonlinear constrained methods did not fully reveal the inherent power of constraints. [Pg.96]

This relaxation function is shown in Fig. 1. It yielded what was perhaps the first deconvolution result that demonstrated the real power of the physical-realizability constraint, and possibly the first to make use of both upper and lower bounds. [Pg.104]

The Fourier frequency bandpass of the spectrometer is determined by the diffraction limit. In view of this fact and the Nyquist criterion, the data in the aforementioned application were oversampled. Although the Nyquist sampling rate is sufficient to represent all information in the data, it is not sufficient to represent the estimates o(k) because of the bandwidth extension that results from information implicit in the physical-realizability constraints. Although it was not shown in the original publication, it is clear from the quality of the restoration, and by analogy with other similarly bounded methods, that Fourier bandwidth extrapolation does indeed occur. This is sometimes called superresolution. The source of the extrapolation should be apparent from the Fourier transform of Eq. (13) with r(x) specified by Eq. (14). [Pg.106]
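The bounded iteration described here can be sketched generically as follows; this is a Jansson-style relaxation with an assumed triangular relaxation factor for data scaled to lie between 0 and 1, not a transcription of Eqs. (13) and (14). The final lines compare high-frequency Fourier content before and after restoration, the bandwidth extrapolation referred to above.

```python
import numpy as np

def jansson_deconvolve(data, psf, n_iter=200, r0=1.0):
    """Generic bounded (Jansson-style) iterative deconvolution sketch.

    data : measured spectrum, assumed scaled so true values lie in [0, 1]
    psf  : instrument response function, normalized to unit sum
    The triangular relaxation factor vanishes at both bounds, so each
    correction is suppressed near the physical limits 0 and 1.
    """
    estimate = data.copy()
    for _ in range(n_iter):
        reconvolved = np.convolve(estimate, psf, mode="same")
        # Relaxation function: maximal mid-range, zero at the 0 and 1 bounds.
        relax = r0 * (1.0 - 2.0 * np.abs(estimate - 0.5))
        estimate = estimate + relax * (data - reconvolved)
        estimate = np.clip(estimate, 0.0, 1.0)   # enforce the bounds explicitly
    return estimate

# Minimal usage: recover two overlapping lines from a blurred, bounded trace.
x = np.linspace(-5, 5, 1001)
truth = 0.7 * np.exp(-0.5 * ((x + 0.4) / 0.08)**2) \
      + 0.5 * np.exp(-0.5 * ((x - 0.4) / 0.08)**2)
psf = np.exp(-0.5 * (x / 0.3)**2)
psf /= psf.sum()
blurred = np.convolve(truth, psf, mode="same")
restored = jansson_deconvolve(blurred, psf)

# Bandwidth extrapolation: the restored spectrum carries Fourier components
# well beyond the effective band limit of the blurred data.
print(np.abs(np.fft.rfft(blurred))[-20:].max(),
      np.abs(np.fft.rfft(restored))[-20:].max())
```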

In the formulation of a mesoscale model, the number-density function (NDF) plays a key role. For this reason, we discuss the properties of the NDF in some detail in Chapter 2. In words, the NDF is the number of particles per unit volume with a given set of values for the mesoscale variables. Since at any time instant a microscale particle will have a unique set of microscale variables, the NDF is also referred to as the one-particle NDF. In general, the one-particle NDF is nonzero only for realizable values of the mesoscale variables. In other words, the realizable mesoscale values are the ones observed in the ensemble of all particles appearing in the microscale simulation. In contrast, sets of mesoscale values that are never observed in the microscale simulations are non-realizable. Realizability constraints may occur for a variety of reasons, e.g. due to conservation of mass, momentum, energy, etc., and are intrinsic properties of the microscale model. It is also important to note that the mesoscale values are usually strongly correlated. By this we mean that the NDF for any two mesoscale variables cannot be reconstructed from knowledge of the separate NDFs for each variable. Thus, by construction, the one-particle NDF contains all of the underlying correlations between the mesoscale variables for only one particle. [Pg.18]
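As a concrete, highly simplified illustration — a hypothetical ensemble with particle velocity as the only mesoscale variable — the one-particle NDF can be estimated as a histogram over the observed values and checked for basic realizability:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical microscale data: velocities of particles in a unit volume.
# Only values actually observed in the ensemble are realizable; the
# estimated NDF is zero elsewhere by construction.
n_particles = 10_000
velocities = rng.normal(loc=0.2, scale=1.0, size=n_particles)

# Empirical one-particle NDF: number of particles per unit volume and per
# unit velocity interval.
volume = 1.0
edges = np.linspace(-5.0, 5.0, 101)
counts, _ = np.histogram(velocities, bins=edges)
ndf = counts / (volume * np.diff(edges))

# Basic realizability checks on the estimated NDF and its moments:
# non-negativity and a consistent zeroth moment (number density).
assert np.all(ndf >= 0.0)
centers = 0.5 * (edges[:-1] + edges[1:])
number_density = np.sum(ndf * np.diff(edges))          # ~ n_particles / volume
mean_velocity = np.sum(ndf * centers * np.diff(edges)) / number_density
print(number_density, mean_velocity)
```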

Adding together Eqs. (4.71) and (4.72) yields a realizability constraint for the velocity fields, namely ∇x · Uvol = 0, where Uvol = αpUp + αfUf. As mentioned earlier, this constraint must be incorporated into the conditional source terms in the disperse-phase momentum transport equation. Note that, in general, Uvol ≠ Umix unless the fluid and the particles have the same material density. [Pg.120]
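A minimal numerical sketch of such a check — assuming the divergence-free form stated above, made-up periodic fields, and a spectral divergence as the diagnostic — might read:

```python
import numpy as np

# Made-up 2-D periodic fields: particle-phase volume fraction and velocities.
n = 64
L = 1.0
x = np.linspace(0.0, L, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")

alpha_p = 0.3 + 0.05 * np.sin(2 * np.pi * X) * np.cos(2 * np.pi * Y)
alpha_f = 1.0 - alpha_p

# Illustrative phase velocities (not solutions of any particular model).
Up = np.stack([np.cos(2 * np.pi * Y), np.sin(2 * np.pi * X)])
Uf = np.stack([-np.sin(2 * np.pi * Y), np.cos(2 * np.pi * X)])

# Volume-averaged velocity Uvol = alpha_p * Up + alpha_f * Uf.
Uvol = alpha_p * Up + alpha_f * Uf

# Spectral divergence on the periodic grid; the realizability constraint
# requires this to vanish (up to discretization error), so a large value
# flags fields that are not mutually consistent.
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
KX, KY = np.meshgrid(k, k, indexing="ij")
div = np.real(np.fft.ifft2(1j * KX * np.fft.fft2(Uvol[0]) +
                           1j * KY * np.fft.fft2(Uvol[1])))
print("max |div(Uvol)| =", np.abs(div).max())
```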

But the methods have not really changed. The Verlet algorithm to solve Newton's equations, introduced by Verlet in 1967 [7], and its variants are still the most popular algorithms today, possibly because they are time-reversible and symplectic, but surely because they are simple. The force field description was then, and still is, a combination of Lennard-Jones and Coulombic terms, with (mostly) harmonic bonds and periodic dihedrals. Modern extensions have added many more parameters but only modestly more reliability. The now almost universal use of constraints for bonds (and sometimes bond angles) was already introduced in 1977 [8]. That polarisability would be necessary was realized then [9], but it is still not routinely implemented today. Long-range interactions are still troublesome, but the methods that are now becoming popular date back to Ewald in 1921 [10] and Hockney and Eastwood in 1981 [11]. [Pg.4]
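A minimal velocity-Verlet sketch in reduced Lennard-Jones units (two particles only, no cutoff, thermostat, or periodic boundaries — purely illustrative, not any particular code):

```python
import numpy as np

def lj_force(r_vec):
    """Lennard-Jones force on particle 0 due to particle 1 (epsilon = sigma = 1)."""
    r2 = np.dot(r_vec, r_vec)
    inv_r6 = 1.0 / r2**3
    # F = 24 * (2 r^-12 - r^-6) * r_vec / r^2 in reduced units
    return 24.0 * (2.0 * inv_r6**2 - inv_r6) * r_vec / r2

# Two particles, reduced units, unit mass.
pos = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
vel = np.zeros_like(pos)
dt = 0.002

f = lj_force(pos[0] - pos[1])
forces = np.array([f, -f])
for step in range(1000):
    # Velocity Verlet: time-reversible and symplectic.
    vel += 0.5 * dt * forces
    pos += dt * vel
    f = lj_force(pos[0] - pos[1])
    forces = np.array([f, -f])
    vel += 0.5 * dt * forces

print("final separation:", np.linalg.norm(pos[0] - pos[1]))
```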

Such quantization (i.e., constraints on the values that physical properties can realize) will be seen to occur whenever the pertinent wavefunction is constrained to obey a so-called boundary condition (in this case, the boundary condition is ψ(Q + 2π) = ψ(Q)). [Pg.46]

In principle, ideal decoupling eliminates control loop interactions and allows the closed-loop system to behave as a set of independent control loops. But in practice, this ideal behavior is not attained for a variety of reasons, including imperfect process models and the presence of saturation constraints on controller outputs and manipulated variables. Furthermore, the ideal decoupler design equations in (8-52) and (8-53) may not be physically realizable and thus would have to be approximated. [Pg.737]
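As a simple numerical illustration of the idea — steady-state decoupling only, with an assumed 2×2 gain matrix rather than the design equations (8-52) and (8-53):

```python
import numpy as np

# Assumed steady-state gain matrix of a 2x2 process (illustrative numbers only).
K = np.array([[ 2.0, 1.5],
              [-1.0, 3.0]])

# Ideal static decoupler: choose D so that K @ D is diagonal, e.g.
# D = K^{-1} @ diag(K).  With process dynamics (dead times, right-half-plane
# zeros) the analogous transfer-function decoupler may require prediction or
# be improper, i.e. not physically realizable, and must then be approximated.
D = np.linalg.solve(K, np.diag(np.diag(K)))

print("apparent process seen by the controllers:\n", K @ D)
```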

The last two problems have been realized only recently, and additional progress in these research directions may be expected in the near future. At present it is clear that with the standard geometry approximation all time step limitations below 10 fs can be overcome rather easily. This time step increase gives a substantial net increase in performance compared to conventional MD. The possibility of larger step sizes now looks problematic, although it has been demonstrated for small molecules. Larger steps should be possible, however, with constraints beyond the standard geometry approximation. [Pg.123]

Consider a set M of n N-dimensional vectors and a function φ that assigns a value ±1 to each element of M (i.e. φ is a dichotomy; see above). Baum [baum88a] showed that if M consists only of vectors such that no subset of N or fewer of them is linearly dependent, the smallest multi-layered perceptron that can realize an arbitrary dichotomy on M contains one hidden layer consisting of [(n − 1)/N] + 1 neurons. The size of this perceptron can only be decreased by putting a more stringent constraint on the set M. [Pg.551]

Now it is realized that there are developing constraints on the utilizable sources of fuel and energy that feed the entire kinetic complex of human society. The prospect of the primary rate constants becoming limiting, diminishing, or even vanishing, places the associated problems high on the... [Pg.440]

The a priori penalty φ_prior(x) ∝ −log Pr(x) allows us to account for additional constraints not carried by the data alone (i.e. by the likelihood term). For instance, the prior can enforce agreement with some preferred (e.g. smoothness) and/or exact (e.g. non-negativity) properties of the solution. At the least, the prior penalty is responsible for regularizing the inverse problem. This implies that the prior must provide information where the data alone fail to do so (in particular in regions where the noise dominates the signal or where data are missing). Not all prior constraints have such properties, and the enforced a priori constraints must be chosen with care. Taking into account additional a priori constraints also has some drawbacks: it must be realized that the solution will be biased toward the prior. [Pg.410]
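A compact sketch of such a penalized (MAP) estimate — assuming, purely for illustration, Gaussian noise, a quadratic smoothness prior, and non-negativity enforced by projected gradient steps:

```python
import numpy as np

def map_restore(data, H, mu=1e-2, n_iter=500, step=None):
    """Minimize ||H x - data||^2 + mu ||D x||^2 subject to x >= 0.

    H : linear direct model (matrix), data = H @ x_true + noise
    D : first-difference operator -> quadratic smoothness (preferred) prior
    The non-negativity (exact) prior is enforced by projection after each
    gradient step; mu weights the smoothness penalty and regularizes the
    inversion, at the cost of biasing the solution toward the prior.
    """
    n = H.shape[1]
    D = np.diff(np.eye(n), axis=0)              # first-difference matrix
    A = H.T @ H + mu * D.T @ D
    b = H.T @ data
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2)       # safe gradient step size
    x = np.zeros(n)
    for _ in range(n_iter):
        x = x - step * (A @ x - b)              # gradient of the penalized cost
        x = np.maximum(x, 0.0)                  # project onto the feasible set
    return x

# Tiny usage example: blur a non-negative signal, add noise, then restore it.
rng = np.random.default_rng(1)
n = 200
x_true = np.zeros(n)
x_true[60:70] = 1.0
x_true[120:125] = 2.0
kernel = np.exp(-0.5 * (np.arange(-10, 11) / 3.0)**2)
kernel /= kernel.sum()
H = np.array([np.roll(np.pad(kernel, (0, n - kernel.size)), i - 10) for i in range(n)])
data = H @ x_true + 0.02 * rng.standard_normal(n)
x_hat = map_restore(data, H, mu=0.05)
```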

Inverse problems are very common in experimental and observational sciences. Typically, they are encountered when a large number of parameters (as many as or more than measurements) are to be retrieved from measured data assuming a model of the data - also called the direct model. Such problems are ill-conditioned in the sense that a simple inversion of the direct model applied directly to the data yields a solution which exhibits significant, or even dominant, features which are completely different for a small change of the input data (for instance due to a different realization of the noise). Since the objective constraints set by the data alone are not sufficient to provide a unique and... [Pg.419]

The δ-function makes sure that if two segments s1 and s2 meet on the huge network chain they can form a permanent constraint R(s1) = R(s2). Hence, this process will produce a network junction of functionality fn = 4, usually realized as sulfur bridges in technical elastomers like, for example, tire treads. [Pg.610]

The presented scheme offers several extensions. For example, the model gives a clear route for an additional inclusion of entanglement constraints and packing effects [15]. Again, this can be realized with the successful mean field models based on the conformational tube picture [7,9] where the chains do not have free access to the total space between the cross-links but are trapped in a cage due to the additional topological restrictions, as visualized in the cartoon. [Pg.612]

As is well known, we can consider the ensemble of many molecules of water either at equilibrium conditions or not. To start with, we shall describe our result within the equilibrium constraint, even if we realize that temperature gradients, velocity gradients, density, and concentration gradients are characterizations nearly essential to describe anything which is in the liquid state. The traditional approaches to equilibrium statistics are Monte Carlo and molecular dynamics. Some of the results are discussed in the following (the details can be found in the references cited). [Pg.243]

These are most important realizations that will guide the evolution of multiple dimension chromatographic systems and detectors for years to come. The exact quantitative nature of specific predictions is difficult because the implementation details of dimensions higher than 2DLC are largely unknown and may introduce chemical and physical constraints. Liu and Davis (2006) have recently extended the statistical overlap theory in two dimensions to highly saturated separations where more severe overlap is found. This paper also lists most of the papers that have been written on the statistical theory of multidimensional separations. [Pg.22]

Dynamic simulations are also possible, and these require solving differential equations, sometimes with algebraic constraints. If some parts of the process change extremely quickly when there is a disturbance, that part of the process may be modeled in the steady state for the disturbance at any instant. Such situations are called stiff, and the methods for them are discussed in Numerical Solution of Ordinary Differential Equations as Initial-Value Problems. It must be realized, though, that a dynamic calculation can also be time-consuming and sometimes the allowable units are lumped-parameter models that are simplifications of the equations used for the steady-state analysis. Thus, as always, the assumptions need to be examined critically before accepting the computer results. [Pg.90]
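For instance, a stiff system is usually handled with an implicit integrator; a minimal sketch using SciPy's BDF method on a hypothetical two-species kinetics problem (not any model from the text):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical stiff kinetics: a fast reversible step coupled to a slow decay.
# The widely separated rate constants (1e4 vs 1) make the system stiff.
def rhs(t, y):
    a, b = y
    k_fast, k_slow = 1.0e4, 1.0
    return [-k_fast * a + k_fast * b - k_slow * a,
             k_fast * a - k_fast * b]

y0 = [1.0, 0.0]
# An implicit, variable-order method (BDF) can take large steps where an
# explicit solver would be forced to tiny steps by stability limits.
sol = solve_ivp(rhs, (0.0, 10.0), y0, method="BDF", rtol=1e-6, atol=1e-9)
print("steps taken:", sol.t.size, " final state:", sol.y[:, -1])
```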

