Assumption: A smallest multiple coronoid can be found as a catacondensed naphthalenic system, where the naphthalene holes are parallel. [Pg.67]

The assumption about naphthalene holes is very plausible since, loosely speaking, a larger corona hole would imply a waste of hexagons. It is also clear that the holes should be packed [Pg.67]

The following condition is obviously a necessary one for the smallest coronoids. [Pg.68]

Condition: If any hexagon from a tuple coronoid is deleted, then the system is no longer a tuple coronoid. [Pg.68]

Our approach is unashamedly simplistic, and is based on the twin assumptions that [Pg.1074]

Not only in solid state physics, but also in the physics of atoms and molecules, it is well known that the chemical bond between atoms is essentially determined by the valence electrons of the atoms, i.e. the electrons in the outer shells. The electrons in the fully occupied shells, i.e. the core electrons, essentially do not contribute to the chemical bond. The periodicity in the table of the chemical elements is based on the fact that the core electrons are (to a first approximation) chemically inactive. [Pg.45]

This does not mean that the core electrons are unimportant in atomic, molecular and solid state physics. They determine the actual states and energies of the valence electrons. Thus, in a quantum mechanical description of the electronic structure of atoms, solids or molecules, these core electrons have to be included in the Hamiltonian. Consider e.g. an atom of nuclear charge Z_A e, with M electrons. The Hamiltonian then contains a sum of M kinetic terms, M attractive terms between the nucleus and the electrons, and M(M − 1)/2 terms for the Coulomb repulsion of the electrons. [Pg.45]
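In symbols (written here as a sketch in Gaussian units, where Z_A e is the nuclear charge, r_i the nucleus-electron distances, and r_ij the interelectronic distances), the three sums just described read:

```latex
H = \sum_{i=1}^{M} \left(-\frac{\hbar^2}{2m}\nabla_i^2\right)
    \;-\; \sum_{i=1}^{M} \frac{Z_A e^2}{r_i}
    \;+\; \sum_{i<j}^{M} \frac{e^2}{r_{ij}}
```

The first sum has M kinetic terms, the second M nucleus-electron attraction terms, and the third, running over distinct pairs, exactly M(M − 1)/2 Coulomb repulsion terms.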

If one has the programs to solve the Schrödinger equation in atoms, one can think about a quantum mechanical description of simple chemical systems. In NaCl e.g., one already has to deal with 28 electrons, and only 8 of them determine the chemical bond. In more complex systems, the total number of electrons becomes fairly large, while the number of valence electrons is substantially lower. This is a rather frustrating situation, if one realizes that the core electrons are essentially chemically inactive, and remain intimately bound to their nucleus. [Pg.46]

Given the relative stability of the core electrons, irrespective of the environment of the atom, one of course tries to avoid the calculation of the core states over and over again, especially in calculations for complex molecules and solids. One of the important techniques to take advantage of this core stability is the pseudopotential method. In the last decades, it has been applied to a variety of chemistry and solid state problems, e.g. the study [Pg.46]

The pseudopotential concept has strongly evolved together with its applications, and so many models have been developed, that it is not always easy to recover the basic ideas and assumptions. [Pg.46]

The thermodynamic properties of clathrates can be derived from a simple model which corresponds to the three-dimensional generalization of ideal localized adsorption. In ref. 52 the deriva- [Pg.10]

Let us consider a clathrate crystal consisting of a cage-forming substance Q and a number of encaged compounds ("solutes") A, B, ..., M. The substance Q has two forms: a stable modification, which under given conditions may be either crystalline (α) or liquid (L), and a metastable modification (β) enclosing cavities of different types 1, ..., n, which acts as host lattice ("solvent") in the clathrate. The number of cavities of type i per molecule of Q is denoted by ν_i. For hydroquinone ν = 1/3; for gas hydrates of Structure I ν1 = 1/23 and ν2 = 3/23; for those of Structure II ν1 = 2/17 and ν2 = 1/17. [Pg.11]
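The quoted gas-hydrate ratios can be checked against the unit-cell contents (Structure I: 2 small and 6 large cavities per 46 water molecules; Structure II: 16 small and 8 large per 136 — figures assumed here from standard hydrate crystallography, not stated in the excerpt):

```python
from fractions import Fraction

# Structure I: 2 small and 6 large cavities per 46 host (water) molecules
nu1_I = Fraction(2, 46)
nu2_I = Fraction(6, 46)

# Structure II: 16 small and 8 large cavities per 136 host molecules
nu1_II = Fraction(16, 136)
nu2_II = Fraction(8, 136)

# These reduce to the ratios quoted in the text: 1/23, 3/23, 2/17, 1/17.
```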

The assumptions (a) - (d) are believed to give an adequate description of the physical situation in the great majority of clathrates. Assumption (a) implies that the spectrum of the host lattice is not affected by the presence of the solute molecules. Little is known about this, but since the host lattice in general is a com- [Pg.11]

As our system we choose a clathrate crystal containing N_Q molecules of Q and occupying a volume V at the temperature T. We further suppose that it has been crystallized while in equilibrium with the solutes A, ..., J, ..., M, having absolute activities λ_A, ..., λ_M, i.e., chemical potentials [Pg.12]

This system is most conveniently described by the independent variables [Pg.12]

In the Miller-Abrahams formalism, hopping between sites i and j is determined by the product of a prefactor, an electronic wavefunction overlap [Pg.292]
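A minimal sketch of a Miller-Abrahams-type hopping rate, combining the overlap factor with a Boltzmann penalty for energetically uphill hops (the prefactor `nu0`, inverse localization length `gamma`, and thermal energy `kT` below are illustrative values, not taken from the excerpt):

```python
import math

def miller_abrahams_rate(r_ij, dE, nu0=1e12, gamma=2.0, kT=0.025):
    """Hopping rate between sites i and j (Miller-Abrahams form).

    r_ij  : site separation (same length units as 1/gamma)
    dE    : energy difference E_j - E_i (same units as kT)
    nu0   : attempt-frequency prefactor (assumed value)
    gamma : inverse localization length (sets wavefunction-overlap decay)
    kT    : thermal energy
    """
    overlap = math.exp(-2.0 * gamma * r_ij)   # electronic overlap factor
    if dE > 0:                                # uphill hop: Boltzmann penalty
        return nu0 * overlap * math.exp(-dE / kT)
    return nu0 * overlap                      # downhill hop: no penalty
```

Note the asymmetry: downhill hops carry no activation factor, so for a given distance they are never slower than uphill hops.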

Emin (1974), and more recently by Kenkre and Dunlap (1992) and Dunlap (1995). [Pg.294]

A further assumption is that the pattern and intensity of this leakage also provide information on the preferential pathways that the leakage follows, and as such can be combined with additional geologic information to predict broad subsurface hydrocarbon fairways. In fact, in some instances it has been claimed that such data can identify areas of reservoired hydrocarbons. This last claim is often the subject of heated debate, however, often depending on which camp (for or against geochemistry) the explorationist resides in. [Pg.143]

The physical state of the hydrocarbons during transport is not well known; see Matthews (1996a) and Matthews (1996b) for a full discussion. Nevertheless, most of the models proposed for the transport of these fluids from source to reservoir (aqueous transport, micellar transport, discrete oil-phase transport, gaseous transport, etc.) are applicable to the continued transport of hydrocarbons from these source beds and/or reservoirs to the near-surface environment. An additional constraint on land is that the last stage of transport is generally above the water table. The physics of transport can be subdivided into two categories, effusion and diffusion. [Pg.143]

According to the features of customer demand, it is pointed out that customer [Pg.41]

In a fixed period of time, when the supply chain node enterprise provides the raw materials, intermediate products and finished products, the customer demand has limited growth, so there is a potential limit value K. [Pg.41]

The rate of change of customer demand at time t is proportional to the difference between the enterprise product's market potential capacity K and the demand for this product at time t, and to the composite factor X(t). [Pg.41]

The customer demand function Y = Y(t) is continuous and twice differentiable. [Pg.41]
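The assumptions above (growth limited by a potential capacity K, rate proportional to the remaining gap) are the ingredients of a logistic-type growth model. A minimal sketch, assuming for illustration that the composite factor X(t) is a constant r:

```python
def logistic_demand(K, r, Y0, t_steps, dt=0.01):
    """Integrate dY/dt = r * Y * (K - Y) by forward Euler.

    K  : potential limit (market capacity)
    r  : assumed constant composite growth factor
    Y0 : initial demand
    Returns the demand trajectory as a list.
    """
    Y = Y0
    traj = [Y]
    for _ in range(t_steps):
        Y += dt * r * Y * (K - Y)
        traj.append(Y)
    return traj

traj = logistic_demand(K=100.0, r=0.01, Y0=1.0, t_steps=2000)
# Demand grows monotonically and saturates just below the limit K.
```

The S-shaped trajectory captures the stated behavior: near-exponential growth while demand is small, flattening out as Y(t) approaches K.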

The authors start from the experimentally well-established fact that relatively dilute micellar solutions are characterized by two well-separated relaxation processes. They attribute the fast process to the exchange of a surfactant A between aggregates (micelles) A_s and A_{s-1} as in reaction (3.4), with the rate constants of association (entry), k_s^+, and dissociation (exit), k_s^- [Pg.81]

They also assign the slow process to the micelle formation/breakdown (global reaction (3.2)) and assume that this reaction takes place via a series of stepwise reactions (3.4). [Pg.81]

Reactions (3.3) of fragmentation/coagulation are excluded. The contribution of the counterions is not included. [Pg.82]

Aniansson and Wall pointed out the analogy existing between the response of a micellar system to a sudden perturbation and heat conduction (through diffusion) in a system constituted by two metal blocks (oligomers and micelles proper) connected by a thin wire (aggregates around the distribution curve minimum). When heat is provided to the system, a fast thermal equilibration occurs within each block, owing to their large heat conductivity. Then the thermal equi- [Pg.83]

These equations are formally identical to Fick's laws for diffusion in a tube. The space coordinate, the diffusion coefficient, the concentration, and the section of the tube identify [Pg.84]

The kinetic theory of transport processes in gases rests upon three basic assumptions. [Pg.671]

The basic assumption here is the existence over the inelastic scattering region of a common classical trajectory R(t) for the relative motion under an appropriately averaged central potential V[R(t)]. The interaction V[r, R(t)] between A and B may then be considered as time-dependent. The system wavefunction therefore satisfies... [Pg.2051]

Consider a molecule consisting of more than three atoms, with an even number of valence electrons, 2n (n > 2). The basic assumption of the model is that the... [Pg.390]

Among the main theoretical methods of investigation of the dynamic properties of macromolecules are molecular dynamics (MD) simulations and harmonic analysis. MD simulation is a technique in which the classical equation of motion for all atoms of a molecule is integrated over a finite period of time. Harmonic analysis is a direct way of analyzing vibrational motions. Harmonicity of the potential function is a basic assumption in the normal mode approximation used in harmonic analysis. This is known to be inadequate in the case of biological macromolecules, such as proteins, because anharmonic effects, which MD has shown to be important in protein motion, are neglected [1, 2, 3]. [Pg.332]

Harmonic analysis is an alternative approach to MD. The basic assumption is that the potential energy can be approximated by a sum of quadratic terms in displacements. [Pg.334]
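A minimal sketch of harmonic (normal-mode) analysis on a toy system: diagonalize the mass-weighted Hessian of a quadratic potential to obtain vibrational frequencies. The 1D chain, equal masses and single force constant here are purely illustrative, not a model of a real macromolecule:

```python
import numpy as np

def normal_modes(k=1.0, m=1.0, n=5):
    """Normal-mode frequencies of a free 1D chain of n equal masses
    coupled by nearest-neighbor springs of force constant k.

    Builds the Hessian of V = (k/2) * sum_i (x_{i+1} - x_i)^2,
    mass-weights it, and returns omega = sqrt(eigenvalues)."""
    H = np.zeros((n, n))
    for i in range(n - 1):
        H[i, i] += k
        H[i + 1, i + 1] += k
        H[i, i + 1] -= k
        H[i + 1, i] -= k
    w2 = np.linalg.eigvalsh(H / m)          # eigenvalues are omega^2
    return np.sqrt(np.clip(w2, 0.0, None))  # clip tiny negative round-off

freqs = normal_modes()
# The lowest frequency is (numerically) zero: overall translation of the
# free chain; the remaining modes are genuine vibrations.
```

For a protein the same recipe applies in 3N dimensions, with the Hessian evaluated at an energy minimum; the quadratic (harmonic) form of V is exactly the basic assumption the text refers to.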

A basic assumption in such additivity schemes is that the interactions between the atoms of a molecule are of a rather short-range nature. This fact can be expressed in a more precise manner: the law of additivity can be expressed in a chemical equation [1]. Let us consider the atoms (or groups) X and Y attached to a common skeleton, S, and also the redistribution of these atoms on that skeleton as expressed by Eq. (1). [Pg.320]

The correction term in Eq. (9) shows that the basic assumption of additivity of the fragmental constants obviously does not hold true here. Correction has to be applied, e.g., for structural features such as resonance interactions, condensation in aromatics or even hydrogen atoms bound to electronegative groups. Astonishingly, the correction applied for each feature is always a multiple of the constant C_M, which is therefore often called the "magic constant". For example, the correction for a resonance interaction is +2 C_M, or per triple bond it is -1 C_M. A detailed treatment of the Σf system approach is given by Mannhold and Rekker [5]. [Pg.493]

Understanding the distribution allows us to calculate the expected values of random variables that are normally and independently distributed. In least squares multiple regression, or in calibration work in general, there is a basic assumption that the error in the response variable is random and normally distributed, with a variance that follows a χ² distribution. [Pg.202]

It would be difficult to over-estimate the extent to which the BET method has contributed to the development of those branches of physical chemistry such as heterogeneous catalysis, adsorption or particle size estimation, which involve finely divided or porous solids; in all of these fields the BET surface area is a household phrase. But it is perhaps the very breadth of its scope which has led to a somewhat uncritical application of the method as a kind of infallible yardstick, and to a lack of appreciation of the nature of its basic assumptions or of the circumstances under which it may, or may not, be expected to yield a reliable result. This is particularly true of those solids which contain very fine pores and give rise to Langmuir-type isotherms, for the BET procedure may then give quite erroneous values for the surface area. If the pores are rather larger—tens to hundreds of Angstroms in width—the pore size distribution may be calculated from the adsorption isotherm of a vapour with the aid of the Kelvin equation, and within recent years a number of detailed procedures for carrying out the calculation have been put forward; but all too often the limitations on the validity of the results, and the difficulty of interpretation in terms of the actual solid, tend to be insufficiently stressed or even entirely overlooked. And in the time-honoured method for the estimation of surface area from measurements of adsorption from solution, the complications introduced by... [Pg.292]

The basic assumption related to the function is that the subset... [Pg.297]

The time constant R²/D, and hence the diffusivity, may thus be found directly from the uptake curve. However, it is important to confirm by experiment that the basic assumptions of the model are fulfilled, since intrusions of thermal effects or extraparticle resistance to mass transfer may easily occur, leading to erroneously low apparent diffusivity values. [Pg.260]
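The extraction step itself is one line once the time constant has been fitted; a sketch assuming the spherical-particle relation τ = R²/D (the crystal radius and time constant below are made-up numbers for illustration):

```python
def diffusivity_from_time_constant(R, tau):
    """D = R^2 / tau for diffusion into a particle of radius R,
    where tau is the diffusional time constant fitted to the uptake curve."""
    return R**2 / tau

# Illustrative values: R = 1e-4 cm crystal, fitted tau = 100 s
D = diffusivity_from_time_constant(1e-4, 100.0)  # cm^2/s
```

As the excerpt warns, a thermally limited or film-limited uptake curve fitted this way yields an apparent D biased low, so the one-liner is only as good as the model assumptions behind τ.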

A kinetic study typically prepares some initial state Z not equal to its equilibrium value and describes the subsequent evolution of each of the concentrations. A basic assumption is that each component evolves according to some differential equation, where t represents time. [Pg.507]
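The scheme just described can be sketched with the simplest possible network, a first-order conversion A → B; the rate constant, initial state and integrator below are illustrative choices, not from the excerpt:

```python
def evolve(c0, k, t_steps, dt=0.001):
    """Forward-Euler integration of the component rate equations
    dA/dt = -k*A and dB/dt = +k*A for a first-order step A -> B."""
    A, B = c0
    for _ in range(t_steps):
        dA = -k * A * dt
        A += dA
        B -= dA   # whatever leaves A enters B
    return A, B

# Start away from equilibrium (all A, no B) and follow the relaxation.
A, B = evolve((1.0, 0.0), k=1.0, t_steps=5000)
```

Each concentration has its own differential equation in t, and the coupled set is integrated forward from the prepared initial state, exactly the structure the excerpt describes.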

For any given protein, the number of possible conformations that it could adopt is astronomical. Yet each protein folds into a unique structure totally determined by its sequence. The basic assumption is that the protein is at a free energy minimum; however, calorimetric studies have shown that a native protein is more stable than its unfolded state by only 20—80 kJ/mol (5—20 kcal/mol) (5). This small difference can be accounted for by the favorable... [Pg.209]

It can be assumed that P,Jp, and for the cascade have been specified, and that the cost of feed and the cost per unit of separative work, the product of separative capacity and time, are known. The basic assumption is that the unit cost of separative work remains essentially constant for small changes in the total plant size. The cost of the operation can then be expressed as the sum of the feed cost and cost of separative work ... [Pg.78]
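"Separative work" is conventionally quantified with the value function V(x) = (2x − 1) ln(x / (1 − x)); a sketch of the cost structure described above, where the product assay, tails assay and unit prices are illustrative numbers:

```python
import math

def value(x):
    """Value function of a stream with assay x (mole fraction of desired isotope)."""
    return (2.0 * x - 1.0) * math.log(x / (1.0 - x))

def separative_work(P, xP, W, xW, F, xF):
    """Separative work created: P*V(xP) + W*V(xW) - F*V(xF)."""
    return P * value(xP) + W * value(xW) - F * value(xF)

# Illustrative case: 1 kg of product at 3.0% from natural feed (0.711%),
# rejecting tails at 0.25%. Feed follows from the isotope mass balance.
P, xP, xW, xF = 1.0, 0.030, 0.0025, 0.00711
F = P * (xP - xW) / (xF - xW)
W = F - P
swu = separative_work(P, xP, W, xW, F, xF)

cost_feed_per_kg, cost_per_swu = 50.0, 100.0   # assumed unit prices
total_cost = F * cost_feed_per_kg + swu * cost_per_swu
```

The last line is the excerpt's cost expression: feed cost plus separative-work cost, with the unit cost of separative work taken as constant.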

The concept of corresponding states was based on kinetic molecular theory, which describes molecules as discrete, rapidly moving particles that together constitute a fluid or solid. Therefore, the theory of corresponding states was a macroscopic concept based on empirical observations. In 1939, the theory of corresponding states was derived from an inverse sixth power molecular potential model (74). Four basic assumptions were made: (1) classical statistical mechanics apply; (2) the molecules must be spherical, either by actual shape or by virtue of rapid and free rotation; (3) the intramolecular vibrations are considered identical for molecules in either the gas or liquid phases; and (4) the potential energy of a collection of molecules is a function of only the various intermolecular distances. [Pg.239]

The F distribution, similar to the chi square, is sensitive to the basic assumption that sample values were selected randomly from a normal distribution. [Pg.494]

The basic assumption of continuum mechanics is that the motion is smooth, i.e., differentiable as many times as needed, and that the Jacobian of the motion is nonzero and positive so that (A.l) is uniquely invertible in X... [Pg.171]

The basic assumptions of fracture mechanics are (1) that the material behaves as a linear elastic isotropic continuum and (2) that the crack-tip inelastic zone size is small with respect to all other dimensions. Here we will consider the limitations of using the term K = Yσ√(πa) to describe the mechanical driving force for crack extension of small cracks at values of stress that are high with respect to the elastic limit. [Pg.494]
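The driving-force term is straightforward to evaluate; a sketch of K = Yσ√(πa), where the geometry factor and the numerical inputs below are illustrative, not from the excerpt:

```python
import math

def stress_intensity(Y, sigma, a):
    """K = Y * sigma * sqrt(pi * a): stress intensity factor for a crack
    of length a under remote stress sigma, with geometry factor Y."""
    return Y * sigma * math.sqrt(math.pi * a)

# Illustrative values: Y = 1.12 (edge-crack factor), sigma = 100 MPa, a = 1 mm
K = stress_intensity(1.12, 100e6, 1e-3)   # units: Pa * sqrt(m)
```

Because K scales only as √a, a very small crack gives a small nominal K even at stresses near the elastic limit, which is exactly the regime where the excerpt says the small-inelastic-zone assumption breaks down.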

The classic model that describes chain scission in elastomers was proposed many years ago by Lake and Thomas [26]. The aim of the model is to calculate the energy dissipated in breaking all the polymer strands that have adjacent cross-links on either side of the crack plane. The basic assumption of this model is that all the main chain bonds in any strand that breaks must be strained to the dissociation... [Pg.237]

The effects due to the finite size of crystallites (in both lateral directions) and the resulting effects due to boundary fields have been studied by Patrykiejew [57], with the help of Monte Carlo simulation. A solid surface has been modeled as a collection of finite, two-dimensional, homogeneous regions, and each region has been assumed to be a square lattice of size L × L (measured in lattice constants). Patches of different size contribute to the total surface with different weights described by a certain size distribution function C(L). Following the basic assumption of the patchwise model of surface heterogeneity [6], the patches have been assumed to be independent of one another. [Pg.269]

The basic assumption is that the individual always has the choice of whether or not to behave in an unsafe manner. The implication of this assumption is that the responsibility for accident prevention ultimately rests with the individual worker. It also implies that as long as management has expended reasonable efforts to persuade an individual to behave responsibly, has provided training in safe methods of work, and has provided appropriate guarding of hazards or personal protection equipment, then it has discharged its responsibilities for accident prevention. If these remedies fail, the only recourse is disciplinary action and ultimately dismissal. [Pg.47]

Which addition is more favorable thermodynamically? Assuming that the difference is entirely due to different π-bond energies, which contains the stronger π bond, the alkyne or the alkene? What flaws might there be in the basic assumption? [Pg.115]

The parameterization process may be done sequentially or in a combined fashion. In the sequential method a certain class of compound, such as hydrocarbons, is parameterized first. These parameters are held fixed, and a new class of compound, for example alcohols and ethers, is then parameterized. This method is in line with the basic assumption of force fields: parameters are transferable. The advantage is that only a fairly small number of parameters are fitted at a time. The ErrF is therefore a relatively low-dimensional function, and one can be reasonably certain that a good minimum has been found (although it may not be the global minimum). The disadvantage is that the final set of parameters necessarily provides a poorer fit (as defined from the value of the ErrF) than if all the parameters are fitted simultaneously. [Pg.33]

A different approach is adopted here. Within the LMTO-ASA method, it is possible to vary the atomic radii in such a way that the net charges are non-random while preserving the total volume of the system. The basic assumption of a single-site theory of electronic structure of disordered alloys, namely that the potential at any site R depends only on the occupation of this site by atom A or B, and is completely independent of the occupation of other sites, is fulfilled if the net charges... [Pg.134]

This method is a modification of the earlier method [30] by Reference [26], as follows, and can be less conservative [26] than the original method [30]. A basic assumption is that particles must rise/fall through one-half of the drum vertical cross-sectional area [26]. [Pg.241]

Finiteness is the basic assumption: a finite total volume of space-time and a finite amount of information in a finite volume of space-time. We require universality, of course, since we know that without it nothing much of interest can happen. We can also take a strong cue from our own universe, which allows us to build universal computers. If the underlying micro-physics were not universal, we would not be able to do this. Reversibility is desirable because it ensures a strict conservation of information and can be used to create systems that conserve various quantities such as energy and angular momentum despite underlying anisotropies. [Pg.666]

For concreteness, let us suppose that the universe has a temporal depth of two to accommodate a Fredkin-type reversibility, i.e. the present and immediate past are used to determine the future, and from which the past can be recovered uniquely. The RUGA itself is deterministic, is applied synchronously at each site in the lattice, and is characterized by three basic dimensional units: (1) the digit transition, D, which represents the minimal informational change at a given site; (2) the length, L, which is the shortest distance between neighboring sites; and (3) an integer time, T, which, while locally similar to the time in physics, is not Lorentz invariant and is not to be confused with a macroscopic (or observed) time t. While there are no a priori constraints on any of these units - for example, they may be real or complex - because of the basic assumption of finite nature, they must all have finite representations. All other units of physics in DM are derived from D, L and T. [Pg.666]
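The "temporal depth of two" can be sketched as a second-order rule in the Fredkin style: the future is some function of the present, XOR'd with the past, so the past is always recoverable from the present and the future. The local rule below is an arbitrary illustration, not the actual RUGA:

```python
def step(past, present, rule):
    """Second-order reversible update on a ring of bits:
    future[i] = rule(neighborhood of present at i) XOR past[i]."""
    n = len(present)
    return [rule(present[(i - 1) % n], present[i], present[(i + 1) % n]) ^ past[i]
            for i in range(n)]

def unstep(future, present, rule):
    """Recover the past: since XOR is its own inverse,
    past[i] = rule(neighborhood of present at i) XOR future[i]."""
    return step(future, present, rule)   # identical formula by symmetry

rule = lambda l, c, r: l ^ c ^ r        # arbitrary illustrative local rule
past    = [0, 1, 1, 0, 1, 0, 0, 1]
present = [1, 0, 0, 1, 1, 0, 1, 0]
future  = step(past, present, rule)
# Running backwards from (future, present) recovers the original past exactly,
# whatever local rule is chosen: information is strictly conserved.
```

This is the sense in which reversibility costs nothing extra: any deterministic local rule becomes information-conserving once wrapped in the two-time-slice XOR construction.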
