The Statistical Averaging Period


HyperChem includes a number of time periods associated with a trajectory. These include the basic time step in the integration of Newton's equations, plus various multiples of this associated with collecting data, forming statistical averages, etc. The fundamental time period is Δt1 ≡ Δt, the integration time step set in the Molecular Dynamics dialog box.  [c.318]
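The nesting of these time periods can be sketched as follows. This is a hypothetical Python sketch, not HyperChem's actual implementation; the step counts, the `data_period` and `avg_period` names, and the stand-in "energy" update are all illustrative assumptions.

```python
import random

# Illustrative sketch of nested trajectory time periods (values hypothetical):
# the basic integration step dt, a data-collection period, and a
# statistical-averaging period, both multiples of dt.
dt = 0.001            # basic integration time step, ps (assumed)
data_period = 10      # collect data every 10 integration steps
avg_period = 100      # form a statistical average every 100 steps

random.seed(1)
energy = 0.0
samples, averages = [], []
for step in range(1, 501):
    energy += random.uniform(-1.0, 1.0)   # stand-in for one MD step's output
    if step % data_period == 0:
        samples.append(energy)            # data-collection period
    if step % avg_period == 0:
        # statistical-averaging period: running mean over collected samples
        averages.append(sum(samples) / len(samples))

print(len(samples), len(averages))  # 50 5
```

The point of the sketch is only that data collection and statistical averaging happen on coarser time grids than the integration itself.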

Our purpose in this introduction is not to trace the history of polymer chemistry beyond the sketchy version above; instead, the objective is to introduce the concept of polymer chains, which is the cornerstone of all polymer chemistry. In the next few sections we shall introduce some of the categories of chains, some of the reactions that produce them, and some aspects of isomerism which multiply their possibilities. A common feature of all of the synthetic polymerization reactions is the random nature of the polymerization steps. Likewise, the twists and turns the molecule can undergo along the backbone of the chain produce shapes which are only describable as averages. As a consequence of these considerations, another important part of this chapter is an introduction to some of the statistical concepts which also play a central role in polymer chemistry.  [c.2]

As of this writing, the only practical approach to solving turbulent flow problems is to use statistically averaged equations governing mean flow quantities. These equations, which are usually referred to as the Reynolds equations of motion, are derived by Reynolds decomposition of the Navier-Stokes equations (18). The randomly changing variables are represented by a time mean and a fluctuating part  [c.101]
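A minimal numerical sketch of the decomposition, with a synthetic signal standing in for a measured flow variable (the signal itself is an assumption; only the mean/fluctuation split is the point):

```python
import math

# Reynolds decomposition sketch: a time-varying flow variable u(t) is
# written as a time mean plus a fluctuating part, u = u_bar + u'.
n = 10000
u = [5.0 + math.sin(2.0 * math.pi * k / 50.0) for k in range(n)]

u_bar = sum(u) / n                      # time mean
u_prime = [ui - u_bar for ui in u]      # fluctuating part, u' = u - u_bar

# By construction, the time mean of the fluctuating part vanishes.
print(abs(sum(u_prime) / n) < 1e-9)    # True
```

In the Reynolds equations it is precisely averages of products of such fluctuations (the Reynolds stresses) that survive the averaging and must be modeled.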

Now we take up the calculation of the rate constant for the decay of a metastable state. In principle it can be done by statistically averaging E from (3.32), but there is a more elegant and general way which relates the rate constant with the imaginary part of free energy. Recalling (3.19) we write the rate constant as  [c.43]
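For reference, the relation alluded to is conventionally written (with F the free energy of the metastable state) as

```latex
k \;=\; \frac{2}{\hbar}\,\operatorname{Im} F ,
```

a sketch of the standard "Im F" formula; the precise prefactor conventions depend on Eqs. (3.19) and (3.32), which are not reproduced in this excerpt.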

Although the absolute numbers vary depending on the source of statistics and the period of time examined, there is no doubt about the effects of chemical accidents on human life. Every year, large numbers of people are killed and injured. Added to these imprecise numbers must be those long-term consequences of exposure that are not immediately discernible and may not be reflected in studied databases. Low-level exposure to some chemicals may result in debilitating diseases that appear only years later. During the years 1988 through 1992, 6%, or 2,070, of the 34,500 accidents that occurred resulted in immediate death, injury, and/or evacuation; an average of two chemical-related injuries occurred every day during those five years (National Environmental Law Center et al. 1994). Between 1982 and 1986, the EPA's Acute Hazard Events (AHE) database, which contains information only for chemical accidents having acute hazard potential, recorded 11,048  [c.57]

Chemical reactions are processes in which reactants are transformed into products. In some processes the change occurs directly, and a complete description of the reaction mechanism can be obtained relatively easily. In complex processes, the reactants undergo a series of step-like changes, and each step constitutes a reaction in its own right. The overall mechanism is made up of contributions from all the reactions, which are sometimes too complex to determine from knowledge of the reactants and products alone. Chemical kinetics and reaction mechanism, as reviewed in Chapter 1, can often provide a reasonable approach to determining the reaction rate equations. Chemical kinetics is concerned with analyzing the dynamics of chemical reactions. The raw data of chemical kinetics are the measurements of the reaction rates. The end product explains these rates in terms of a complete reaction mechanism. Because a measured rate shows a statistical average state of the molecules taking part in the reaction, chemical kinetics does not provide information on the energetic state of the individual molecules.  [c.109]
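As a minimal illustration of a rate equation (the rate constant and concentration below are assumed values, not from the text): for a first-order step A → products, the rate law d[A]/dt = −k[A] integrates to [A](t) = [A]0 exp(−kt), and the measured decay reflects the statistical average over all A molecules.

```python
import math

# Hypothetical first-order reaction A -> products.
k = 0.5     # rate constant, 1/s (assumed)
A0 = 1.0    # initial concentration, mol/L (assumed)

def A_exact(t):
    # Integrated first-order rate law: [A](t) = [A]0 * exp(-k t)
    return A0 * math.exp(-k * t)

# Numerical check: simple Euler integration of d[A]/dt = -k[A].
A, dt = A0, 1e-4
for _ in range(int(2.0 / dt)):
    A -= k * A * dt

print(round(A_exact(2.0), 4), round(A, 4))  # 0.3679 0.3679
```

The agreement between the integrated law and the stepwise integration mirrors how kinetic data are fitted against candidate rate equations.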

The bulk associating fluids have been intensively studied using computer simulation [12-20]. In many cases simulation has been performed to verify theoretical predictions. The studies of Chapman and Zhang [17,18], in particular, have been concerned with a comparison of the results of a traditional Metropolis Monte Carlo simulation for dimerization of methanol with the prior predictions of the theory of Wertheim. It appeared, however, that in several applications the traditional Monte Carlo method does not provide reasonable statistical averages. At high degrees of association, bonded configurations represent a small part of the configuration space. In the traditional method, the configurational space is randomly sampled; thus the small spatial volume of bonded configurations may be difficult to sample adequately in the framework of the common algorithm. In order to overcome this difficulty, Busch et al. [19] have proposed the association-biased Monte Carlo method. The canonical ensemble algorithm biases sampling to the regions of configuration space where the association or dissociation of particles is likely to occur. This is an efficient simulation technique for associating fluids in a wide range of densities. Unfortunately, the application of the method to nonuniform associating fluids requires a great deal of numerical effort.  [c.169]

The statistical error can thus be reduced by averaging over a larger ensemble. How well the calculated average (from eq. (16.9)) resembles the true value, however, depends on whether the ensemble is representative. If a large number of points is collected from a small part of the phase space, the property may be calculated with a small statistical error, but a large systematic error (i.e. the value may be precise, but inaccurate). As it is difficult to establish that the phase space is adequately sampled, this can be a very misleading situation, i.e. the property appears to have been calculated accurately but may in fact be significantly in error.  [c.375]
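The precise-but-inaccurate situation can be demonstrated with a toy ensemble (the unit-Gaussian "property" and the x > 0 restriction below are assumptions chosen only to make the effect visible):

```python
import random
import statistics

random.seed(0)
# Toy "property" with exact ensemble average 0.0 (unit Gaussian).

def sample_mean_and_error(n):
    xs = [random.gauss(0.0, 1.0) for _ in range(n)]
    m = sum(xs) / n
    err = statistics.stdev(xs) / n ** 0.5   # statistical error of the mean
    return m, err

# Larger ensembles shrink the statistical error roughly as 1/sqrt(N).
for n in (100, 10000):
    m, err = sample_mean_and_error(n)
    print(n, round(m, 3), "+/-", round(err, 3))

# Sampling only a small region of "phase space" (here x > 0) gives a
# precise but inaccurate answer: tiny statistical error, large systematic error.
biased = []
while len(biased) < 10000:
    x = random.gauss(0.0, 1.0)
    if x > 0.0:
        biased.append(x)
m_b = sum(biased) / len(biased)
err_b = statistics.stdev(biased) / len(biased) ** 0.5
print(round(m_b, 2), "+/-", round(err_b, 3))   # far from 0.0, yet "precise"
```

The biased estimate carries a small error bar while missing the true value entirely, which is exactly the misleading situation described above.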

The first phase of a run will, in general, be used to allow the system to move away from high-energy and thus statistically insignificant configurations. During this period, the potential energy may either increase or decrease, depending on the starting point; its instantaneous value should be monitored. The number of steps required before equilibration has been achieved and equilibrium averages can be accumulated is strongly dependent on the system, the starting point, the step size, and the temperature of the simulation. Typical values for Lennard-Jones liquids are 500-1000 steps.  [c.98]

Analytically, the computation of ensemble averages along this route is a formidable task, even if microscopically small representations of the system of interest are considered, because the configurational distribution function f(r^N) is generally a very complicated function of the spatial arrangement of the N molecules. However, with the advent of large-scale computers some forty years ago, the key problem in statistical physics became tractable, at least numerically, by means of computer simulations. In a computer simulation the evolution of a microscopically small sample of the macroscopic system is determined by computing trajectories of each molecule for a microscopic period of observation. An advantage of this approach is the treatment of the microscopic sample in an essentially first-principles fashion: the only significant assumption concerns the choice of an interaction potential [25]. Because of the power of modern supercomputers, which can literally handle hundreds of millions of floating point operations per second, computer simulations are nowadays  [c.21]

It is important to realize that MC simulation does not provide a way of calculating the statistical mechanical partition function; instead, it is a method of sampling configurations from a given statistical ensemble and hence of calculating ensemble averages. A complete sum over states would be impossibly time consuming for systems consisting of more than a few atoms. Applying the trapezoidal rule, for instance, to the configurational part of Z entails discretizing each atomic coordinate on a fine grid; the dimensionality of the integral is extremely high, since there are 3N such coordinates, so the total number of grid points is astronomically high. The MC integration method is sometimes used to estimate multidimensional integrals by randomly sampling points. This is not feasible here, since a very small proportion of all points would be sampled in a reasonable time, and very few, if any, of these would have a large enough Boltzmann factor to contribute significantly to the partition function. MC simulation differs from such methods by sampling points in a nonuniform way, chosen to favour the important contributions.  [c.2256]
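The nonuniform sampling idea can be sketched with a Metropolis walk on a toy one-dimensional "system" (the quadratic energy, inverse temperature, and step size are all assumed values): configurations are visited with Boltzmann probability, so an ensemble average becomes a plain unweighted mean over visited states.

```python
import math
import random

# Metropolis sampling sketch: E(x) = x^2, beta = 1 (hypothetical toy model).
# For this Boltzmann weight exp(-x^2), the exact average <x^2> is 0.5.
random.seed(1)
beta = 1.0

def E(x):
    return x * x

x, total, n = 0.0, 0.0, 200000
for _ in range(n):
    x_new = x + random.uniform(-0.5, 0.5)       # trial move
    dE = E(x_new) - E(x)
    if dE <= 0.0 or random.random() < math.exp(-beta * dE):
        x = x_new                               # Metropolis acceptance
    total += x * x                              # accumulate the average

print(round(total / n, 2))  # close to the exact <x^2> = 0.5
```

Uniform random sampling of the same integral would spend almost all its points where the Boltzmann factor is negligible; the Metropolis walk concentrates effort where the weight is large.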

Ab-initio studies of alloy phase stability are usually based on an Ising-type Hamiltonian, whose parameters, often called the effective cluster interactions (ECI), serve as an input in determination of the equilibrium properties of the system via the methods of statistical mechanics. An alternative formulation, which completely avoids determination of ECIs by calculating the electronic structure and solving the statistical part of the problem in one step, is employed in the concentration-wave method. It is based on the grand canonical potential Ω configurationally averaged within the coherent potential approximation (CPA) and on a mean-field solution of the statistical part of the problem, usually including also the Onsager cavity field.  [c.39]

Due to the noncrystalline, nonequilibrium nature of polymers, a statistical mechanical description is rigorously most correct. Thus, simply finding a minimum-energy conformation and computing properties is not generally sufficient. It is usually necessary to compute ensemble averages, even of molecular properties. The additional work needed on the part of both the researcher to set up the simulation and the computer to run the simulation must be considered. When possible, it is advisable to use group additivity or analytic estimation methods.  [c.309]

U.S. Department of Transportation (DOT) statistics on liquids pipelines operated under the Code of Federal Regulations (49) indicate that corrosion was the second largest contributor to accidents and failures for the period from 1982 to 1991. These statistics covered an average of 344,575 km of liquids pipelines and were derived from required reports to DOT on all pipeline accidents involving loss of at least 7.95 m³ of liquid, death or bodily harm to any person, fire or explosion, loss of at least 0.8 m³ of highly volatile liquid, or property damage of $5000 or more (50). Similar results were also reported for 1991 in the 1992 DOT/OPS report on both oil and gas pipeline incidents: 62 out of 210 oil pipeline incidents were due to corrosion, of which 74% were due to external corrosion (43). For gas pipelines, 16 of all 71 reported incidents were due to corrosion, of which 63% were reported as due to internal corrosion; however, internal corrosion of gas pipelines is likely only if CO2 and H2O and/or H2S are present, as with unprocessed gas in gathering lines.  [c.50]

A few configurations (usually not more than three) picked from the productive part of the NVT run have been chosen as configurations for matrix species. These fixed configurations are used in the following independent GCMC runs for fluid hard sphere adsorption. The only requirement implied at the NVT step is that the configurations of matrix chains are statistically independent with respect to each other. We have chosen matrix configurations such that they are separated by not less than 10 Monte Carlo steps in the NVT runs. The input data for the GCMC simulations of adsorption of a fluid in a matrix medium are the chemical potential, the volume of the simulation box used in the NVT ensemble simulation of chains, and the configurations of matrix species. The results of independent GCMC runs for each matrix configuration have been averaged at the end of the procedure. We have observed that, for the model in question, the results for adsorbed density are weakly dependent on the number of matrix configurations used for averaging.  [c.320]

One way of eliminating the problem with conformationally dependent charges is to add additional constraints, for example forcing the three hydrogens in a methyl group to have identical charges, or averaging over different conformations. The more fundamental problem (which is probably also part of the conformational problem) is that the fitting procedure in general is statistically underdetermined. The difference between the true electrostatic potential and that generated by a set of atomic charges on, say, 80% of the atoms is not significantly reduced by having fitting parameters on all atoms. The electrostatic potential experienced outside the molecule is mainly determined by the atoms near the surface, and consequently the charges on atoms buried within a molecule cannot be assigned with any great confidence. Even for a medium-sized molecule it may only be statistically valid to assign charges to perhaps half the nuclei. A full set of atomic charges thus forms a redundant set: many different sets of charges may be chosen, all of which are capable of reproducing the true electrostatic potential to almost the same accuracy. It has furthermore been shown that increasing the number of sampling points, or the quality of the wave function, does not alleviate the problem. Although a very large number of sampling points (several thousands) may be chosen to be fitted by relatively few (perhaps 20-30) parameters, the fact that the sampling points are highly correlated makes the problem underdetermined.  [c.221]
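The buried-atom redundancy can be made concrete with a toy example (the geometry, charge values, and sampling radius below are all hypothetical, chosen only to exhibit the effect): two charge sets whose buried-atom charges differ by 0.4 e, but whose difference has vanishing monopole, dipole, and quadrupole by tetrahedral symmetry, produce nearly identical potentials outside the molecule.

```python
import math
import random

random.seed(2)

# Toy "molecule": a buried atom at the origin plus four surface atoms at the
# vertices of a regular tetrahedron of radius 1 (hypothetical geometry).
s = 1.0 / math.sqrt(3.0)
centers = [(0.0, 0.0, 0.0),
           (s, s, s), (s, -s, -s), (-s, s, -s), (-s, -s, s)]

def esp(charges, p):
    # Electrostatic potential at point p from point charges (atomic units).
    return sum(q / math.dist(p, c) for q, c in zip(charges, centers))

# set_b = set_a + (0.4, -0.1, -0.1, -0.1, -0.1): 0.4 e is moved from the
# surface shell onto the buried atom; the difference pattern has zero
# monopole, dipole, and quadrupole, so it is nearly invisible outside.
set_a = [0.0, 0.25, 0.25, -0.25, -0.25]
set_b = [0.4, 0.15, 0.15, -0.35, -0.35]

# Sample the potential on a sphere of radius 3 outside the molecule.
pts = []
while len(pts) < 500:
    v = [random.gauss(0.0, 1.0) for _ in range(3)]
    r = math.sqrt(sum(x * x for x in v))
    pts.append(tuple(3.0 * x / r for x in v))

rms_a = math.sqrt(sum(esp(set_a, p) ** 2 for p in pts) / len(pts))
rms_diff = math.sqrt(sum((esp(set_a, p) - esp(set_b, p)) ** 2 for p in pts) / len(pts))
print(round(rms_diff / rms_a, 3))  # small: very different charges, near-identical ESP
```

A least-squares fit over such points cannot distinguish the two sets, which is the underdetermination described above: the buried charge is essentially a free parameter.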

