
Macroscopic computational

We carry out computer simulations in the hope of understanding bulk, macroscopic properties in terms of the microscopic details of molecular structure and interactions. This serves as a complement to conventional experiments, enabling us to learn something new, something that cannot be found out in other ways. [Pg.2239]

Computer simulations act as a bridge between microscopic length and time scales and the macroscopic world of the laboratory (see figure B3.3.1). We provide a guess at the interactions between molecules, and obtain exact predictions of bulk properties. The predictions are exact in the sense that they can be made as accurate as we like, subject to the limitations imposed by our computer budget. At the same time, the hidden detail behind bulk measurements can be revealed. Examples are the link between the diffusion coefficient and... [Pg.2239]
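The excerpt breaks off at one such link; a standard example is the Einstein relation, which connects the macroscopic diffusion coefficient D to microscopic trajectories through MSD(t) = 2dDt. The sketch below is my own illustration of that link, not code from the source; the model (independent Gaussian steps) and all parameter values are arbitrary assumptions.

```python
# Minimal sketch: estimate a diffusion coefficient from the mean-squared
# displacement (MSD) of simulated trajectories via MSD(t) = 2*d*D*t.
import numpy as np

rng = np.random.default_rng(0)
n_particles, n_steps, dt, dim = 1000, 500, 1.0, 3

# Microscopic model: independent Gaussian displacements per step (unit variance).
steps = rng.normal(0.0, 1.0, size=(n_steps, n_particles, dim))
positions = np.cumsum(steps, axis=0)                  # trajectories r_i(t)

msd = np.mean(np.sum(positions**2, axis=2), axis=1)   # <|r(t) - r(0)|^2>
t = dt * np.arange(1, n_steps + 1)

slope = np.sum(msd * t) / np.sum(t * t)               # least-squares fit through 0
print(f"estimated D = {slope / (2 * dim):.3f} (exact for this model: 0.500)")
```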

G. Ramachandran and T. Schlick. Beyond optimization: Simulating the dynamics of supercoiled DNA by a macroscopic model. In P. M. Pardalos, D. Shalloway, and G. Xue, editors, Global Minimization of Nonconvex Energy Functions: Molecular Conformation and Protein Folding, volume 23 of DIMACS Series in Discrete Mathematics and Theoretical Computer Science, pages 215-231, Providence, Rhode Island, 1996. American Mathematical Society. [Pg.259]

The speed of the method comes from two sources. First, all of the macroscopic cells of the same size have exactly the same internal structure, as they are simply formed of tessellated copies of the original cell; thus each has exactly the same multipole expansion, and we need to compute a new multipole expansion only once for each level of macroscopic agglomeration. Second, the structure of the periodic copies is fixed, so we can precompute a single transfer... [Pg.461]
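As a rough illustration of why one expansion per level suffices, the following sketch (mine, not the authors' code) builds the multipole moments of successively larger 3x3 blocks of a 2D cell by translating and summing a single precomputed expansion, then checks the result against a direct sum over all tessellated copies. The raw-moment convention and helper names are my own assumptions, not the paper's.

```python
# Agglomeration sketch: the moments of a 3x3 block of identical cells are built
# from the single cell's moments by binomial (M2M-style) translation, so each
# macroscopic level needs only one new expansion.
import numpy as np
from math import comb

rng = np.random.default_rng(1)
p = 6                                                   # expansion order
q = rng.normal(size=8)                                  # cell charges
z = (rng.random(8) - 0.5) + 1j * (rng.random(8) - 0.5)  # positions in unit cell

def moments(q, z, p):
    """Raw complex moments m_k = sum_i q_i z_i**k about the cell center."""
    return np.array([np.sum(q * z**k) for k in range(p + 1)])

def translate(m, c):
    """Moments of the same charges shifted by c (binomial translation)."""
    return np.array([sum(comb(k, j) * c**(k - j) * m[j] for j in range(k + 1))
                     for k in range(len(m))])

m = moments(q, z, p)
side = 1.0
for level in range(3):                   # agglomerate 3x3 blocks, three times
    offs = [dx + 1j * dy for dx in (-side, 0, side) for dy in (-side, 0, side)]
    m = sum(translate(m, o) for o in offs)
    side *= 3.0

# Check against a direct sum over all 27 x 27 tessellated copies of the cell.
offsets = [i + 1j * j for i in range(-13, 14) for j in range(-13, 14)]
zs = np.concatenate([z + o for o in offsets])
qs = np.tile(q, len(offsets))
print(np.allclose(m, moments(qs, zs, p)))   # True
```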

The results in the prior two sections were for the macroscopic multipole and PME solvers in isolation. A complete MD simulation involves much more than these routines. In addition to computing the short-range interactions from bonding forces, etc., the particle positions and velocities need to be updated each timestep. Additionally, efficient MD programs recognize that the... [Pg.465]
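For concreteness, here is the kind of per-timestep position/velocity update such a loop performs: a textbook velocity-Verlet step, not the paper's code. The name `force_fn` is my placeholder for the combined short- and long-range force evaluation.

```python
# Minimal velocity-Verlet integrator sketch for one MD timestep.
import numpy as np

def velocity_verlet_step(x, v, f, mass, dt, force_fn):
    """Advance positions x and velocities v by one timestep dt."""
    v_half = v + 0.5 * dt * f / mass           # half-kick with old forces
    x_new = x + dt * v_half                    # drift
    f_new = force_fn(x_new)                    # recompute forces (short + long range)
    v_new = v_half + 0.5 * dt * f_new / mass   # second half-kick
    return x_new, v_new, f_new

# Example: a single particle in a harmonic well, force = -k*x.
k, mass, dt = 1.0, 1.0, 0.01
x, v = np.array([1.0]), np.array([0.0])
f = -k * x
for _ in range(1000):
    x, v, f = velocity_verlet_step(x, v, f, mass, dt, lambda x: -k * x)
print(x, v)   # ~cos(10), ~-sin(10), matching the analytic solution
```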

The salient comparisons are between the bars marked P3-Dk, our initial parallel PME implementation, and DP-4, the macroscopic multipole method with four levels of macroscopic boxes. Though it is difficult to create a completely fair comparison in terms of the relative accuracy of the potentials and forces as computed by the two methods, the parameters for these simulations were tuned to give comparable overall accuracy. PME is clearly... [Pg.468]

Isolated gas phase molecules are the simplest to treat computationally. Much, if not most, chemistry takes place in the liquid or solid state, however. To treat these condensed phases, you must simulate continuous, constant density, macroscopic conditions. The usual approach is to invoke periodic boundary conditions. These simulate a large system (order of 10²³ molecules) as a continuous replication in all directions of a small box. Only the molecules in the single small box are simulated; the other boxes are just copies of the single box. [Pg.200]
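A minimal sketch of how periodic boundary conditions are typically coded (my illustration, not from this source; the coordinate wrap and minimum-image convention shown are the standard textbook forms):

```python
# Periodic boundary conditions: wrap coordinates into the central box and use
# the minimum-image convention so each molecule sees the nearest periodic copy.
import numpy as np

L = 10.0                                   # box edge length (arbitrary units)

def wrap(x, L):
    """Map coordinates into the central box [0, L)."""
    return x % L

def minimum_image(dx, L):
    """Shortest displacement between two particles under periodicity."""
    return dx - L * np.round(dx / L)

print(wrap(np.array([10.3, -0.2, 5.0]), L))          # [0.3 9.8 5.0]

a, b = np.array([0.5, 9.8, 5.0]), np.array([9.9, 0.3, 5.2])
print(minimum_image(a - b, L))             # [ 0.6 -0.5 -0.2], not [-9.4  9.5 -0.2]
```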

Very recently, people who engage in computer simulation of crystals that contain dislocations have begun attempts to bridge the continuum/atomistic divide, now that extremely powerful computers have become available. It is now possible to model a variety of aspects of dislocation mechanics in terms of the atomic structure of the lattice around dislocations, instead of simply treating them as lines with macroscopic properties (Schiøtz et al. 1998, Gumbsch 1998). What this amounts to is linking computational methods across different length scales (Bulatov et al. 1996). We will return to this briefly in Chapter 12. [Pg.50]

It is important to note that we assume the random fracture approximation (RPA) is applicable. This assumption has certain implications, the most important of which is that it bypasses the real evolutionary details of the highly complex process by which the lattice bond stress distribution creates bond rupture events, which in turn influence other bond rupture events through stress redistribution, microvoid formation, propagation, coalescence, etc., and finally macroscopic failure. We have made real lattice fracture calculations by computer simulation, but typically the lattice size is not large enough to be within percolation criteria before the calculations become excessive. However, the fractal nature of the distributed damage clusters is always evident, and the RPA, while providing an easy solution to an extremely complex process, remains physically realistic. [Pg.380]
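To make the kind of lattice fracture calculation mentioned here concrete, the following sketch (my own construction, with arbitrary parameters, not the authors' code) breaks bonds at random on a square lattice and uses union-find to test whether the intact network still spans the sample, the usual percolation criterion:

```python
# Random bond dilution on an N x N square lattice with a union-find check for
# a spanning (top-to-bottom) cluster of intact bonds.
import numpy as np

rng = np.random.default_rng(2)
N = 64                                     # lattice is N x N sites

parent = list(range(N * N))
def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]      # path halving
        i = parent[i]
    return i
def union(i, j):
    parent[find(i)] = find(j)

p_intact = 0.55                            # fraction of unbroken bonds
for r in range(N):
    for c in range(N):
        if c + 1 < N and rng.random() < p_intact:
            union(r * N + c, r * N + c + 1)        # horizontal bond survives
        if r + 1 < N and rng.random() < p_intact:
            union(r * N + c, (r + 1) * N + c)      # vertical bond survives

top = {find(c) for c in range(N)}
bottom = {find((N - 1) * N + c) for c in range(N)}
print("still spans:", bool(top & bottom))  # usually True above p_c = 0.5
```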

A key problem in the equilibrium statistical-physical description of condensed matter concerns the computation of macroscopic properties O_macro (for example, internal energy, pressure, or magnetization) in terms of an ensemble average ⟨O⟩ of a suitably defined microscopic representation O({r_i}) (see Secs. IV A 1 and V A 1 for relevant examples). To perform the ensemble average one has to realize that configurations... [Pg.21]
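As a concrete, hedged illustration of computing such an ensemble average, the sketch below uses Metropolis Monte Carlo to estimate ⟨x²⟩ for a particle in a harmonic well, where the exact answer kT/k is known; the model and all parameter values are my own choices, not the chapter's.

```python
# Metropolis Monte Carlo estimate of an ensemble average <x^2> for a particle
# in a harmonic well at temperature kT; the exact Boltzmann answer is kT/k.
import numpy as np

rng = np.random.default_rng(3)
kT, k_spring = 1.0, 2.0
energy = lambda x: 0.5 * k_spring * x**2

x, samples = 0.0, []
for step in range(200_000):
    x_trial = x + rng.normal(0.0, 0.5)     # propose a random displacement
    if rng.random() < np.exp(-(energy(x_trial) - energy(x)) / kT):
        x = x_trial                        # accept with Metropolis probability
    if step > 10_000:                      # discard equilibration
        samples.append(x * x)

print(np.mean(samples), "vs exact", kT / k_spring)   # both ~0.5
```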

There are cases where non-regular lattices may be of advantage [36,37]. The computational effort, however, is substantially larger, which makes the models less flexible concerning changes of boundary conditions or topological constraints. Another direction, which may be promising in the future, is the use of hybrid models, where for example local attachment kinetics are treated on a microscopic atomistic scale, while the transport properties are treated by macroscopic partial differential equations [5,6]. [Pg.859]
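The following toy sketch (my construction, not the hybrid model of refs. [5,6]) illustrates the idea: a macroscopic finite-difference diffusion equation supplies transport of a growth species, while attachment at the surface is treated as a stochastic microscopic event.

```python
# Hybrid toy model: macroscopic 1D diffusion PDE + stochastic surface attachment.
import numpy as np

rng = np.random.default_rng(6)
nx, dx, dt, D = 100, 1.0, 0.2, 1.0        # explicit stability: D*dt/dx**2 <= 0.5
c = np.ones(nx)                           # concentration field
p_attach = 0.1                            # per-step attachment probability
grown = 0

for step in range(5000):
    # Macroscopic part: explicit finite-difference diffusion update.
    c[1:-1] += D * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])
    c[-1] = 1.0                           # bulk reservoir boundary
    # Microscopic part: stochastic attachment consumes material at the surface.
    if rng.random() < p_attach * c[0]:
        grown += 1
        c[0] = 0.0                        # local depletion after attachment
    else:
        c[0] = c[1]                       # otherwise a no-flux surface

print("attached particles:", grown)
```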

Computer simulation generates information at the microscopic level, and the conversion of this information into macroscopic terms is the province of statistical thermodynamics. An experimentally observable property A is just the time average of A(Γ) taken over a long time interval,... [Pg.59]
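As a small illustration of an observable as a time average (my example, not the text's), the running mean of the potential energy along an overdamped Langevin trajectory converges to the ensemble value kT/2 as the averaging interval grows:

```python
# Time average of A(Gamma(t)) along a trajectory: overdamped Langevin dynamics
# in a harmonic well, where the exact average potential energy is kT/2.
import numpy as np

rng = np.random.default_rng(4)
kT, k_spring, gamma, dt = 1.0, 1.0, 1.0, 1e-3
x, n = 0.0, 500_000
U = np.empty(n)
sigma = np.sqrt(2 * kT * dt / gamma)          # noise amplitude per step
for i in range(n):
    x += -k_spring * x * dt / gamma + sigma * rng.normal()
    U[i] = 0.5 * k_spring * x * x

print(np.mean(U[n // 10:]), "vs exact kT/2 =", kT / 2)   # both ~0.5
```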

Suppose we offset this motion by applying a Galilean transformation (x' = x + βt). In the new reference frame, the system will move just as it did in the old reference frame but, because σ² = 4pq t = (1 − β²)t, its diffusion is slowed down by a Lorentz-Fitzgerald-like time factor 1 − β². Intuitively, as some of the resources of the random walk computer are shifted toward producing coherent macroscopic motion (uniform motion of the center of mass), fewer resources will remain available for the task of producing incoherent motion (diffusion). [toff89] [Pg.670]
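A quick numerical check of this slowdown (my sketch, under the standard biased-random-walk reading of the passage, with p + q = 1 and p − q = β): a walker that devotes drift β to coherent motion shows displacement variance (1 − β²)t rather than t.

```python
# Biased +/-1 random walk: variance of the net displacement after t steps is
# 4*p*q*t = (1 - beta**2)*t, so drift "spends" the walk's diffusive resources.
import numpy as np

rng = np.random.default_rng(5)
walkers, t = 10_000, 1_000
for beta in (0.0, 0.5, 0.9):
    p = (1 + beta) / 2                        # P(step = +1); mean step = beta
    heads = (rng.random((walkers, t)) < p).sum(axis=1)
    x = 2 * heads - t                         # net displacement after t steps
    print(f"beta={beta}: var/t = {x.var() / t:.3f} vs 1-beta^2 = {1 - beta**2:.3f}")
```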

Other early designs of classical reversible computers included Landauer's Bag and Pipes Model [land82a] (in which pipes are used as classical mechanical conduits of information carried by balls), Brownian motion reversible computers ([benn88], [keyes70]), and Likharev's model based on the Josephson junction [lik82]. One crucial drawback to these models (aside from their impracticality), however, is that they are all decidedly macroscopic. If we are to probe the microscopic limits of computation, we must inevitably deal with quantum phenomena and look for a quantum mechanical reversible computer. [Pg.673]

Two properties, in particular, make Feynman's approach superior to Benioff's: (1) it is time independent, and (2) interactions between all logical variables are strictly local. It is also interesting to note that in Feynman's approach, quantum uncertainty (in the computation) resides not in the correctness of the final answer but, effectively, in the time it takes for the computation to be completed. Peres [peres85] points out that quantum computers may be susceptible to a new kind of error: since, in order to actually obtain the result of a computation, there must at some point be a macroscopic measurement of the quantum mechanical system to convert the data stored in the wave function into useful information, any imperfection in the measurement process would lead to an imperfect data readout. Peres overcomes this difficulty by constructing an error-correcting variant of Feynman's model. He also estimates the minimum amount of entropy that must be dissipated at a given noise level and tolerated error rate. [Pg.676]

Thermodynamics, statistical: This discipline tries to compute macroscopic properties of materials from more basic structures of matter. These properties are not necessarily static properties as in conventional mechanics. The problems in statistical thermodynamics fall into two categories. The first involves the study of the structure of phenomenological frameworks and the interrelations among observable macroscopic quantities. The second category involves the calculation of the actual values of phenomenological parameters, such as viscosity or phase transition temperatures, from more microscopic parameters. With this technique, understanding general relations requires only a model specified by fairly broad and abstract conditions. Realistically detailed models are not needed to un-... [Pg.644]

Chapter 10, the last chapter in this volume, presents the principles and applications of statistical thermodynamics. This chapter, which relates the macroscopic thermodynamic variables to molecular properties, serves as a capstone to the discussion of thermodynamics presented in this volume. It is a most satisfying exercise to calculate the thermodynamic properties of relatively simple gaseous systems where the calculation is often more accurate than the experimental measurement. Useful results can also be obtained for simple atomic solids from the Debye theory. While computer calculations are rapidly approaching the level of sophistication necessary to perform computations of... [Pg.686]
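As an example of the kind of calculation meant here, the sketch below (mine, not the chapter's) evaluates the Debye-model heat capacity of a simple atomic solid by numerical quadrature; θ_D = 428 K is the commonly tabulated Debye temperature of aluminium, used here as an illustrative input.

```python
# Debye-model molar heat capacity:
#   C_V = 9R (T/theta_D)^3 * integral_0^{theta_D/T} x^4 e^x / (e^x - 1)^2 dx
import numpy as np

R = 8.314462618                     # gas constant, J mol^-1 K^-1

def debye_cv(T, theta_D, n=4000):
    """Molar C_V from the Debye integral, by the trapezoid rule."""
    x = np.linspace(1e-8, theta_D / T, n)
    f = x**4 * np.exp(x) / np.expm1(x)**2
    integral = 0.5 * np.sum((f[1:] + f[:-1]) * np.diff(x))
    return 9 * R * (T / theta_D)**3 * integral

for T in (50, 150, 300, 1000):
    print(f"T = {T:4d} K   C_V = {debye_cv(T, 428.0):6.2f} J/(mol K)")
# Approaches the Dulong-Petit limit 3R = 24.94 J/(mol K) at high temperature.
```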

The three representations that are referred to in this study are (1) macroscopic representations that describe the bulk observable properties of matter, for example, heat energy, pH and colour changes, and the formation of gases and precipitates, (2) submicroscopic (or molecular) representations that provide explanations at the particulate level in which matter is described as being composed of atoms, molecules and ions, and (3) symbolic (or iconic) representations that involve the use of chemical symbols, formulas and equations, as well as molecular structure drawings, models and computer simulations that symbolise matter (Andersson, 1986; Boo, 1998; Johnstone, 1991, 1993; Nakhleh & Krajcik, 1994; Treagust & Chittleborough, 2001). [Pg.152]

