Big Chemical Encyclopedia


ACCURACY ISSUES

We mentioned in the discussion on calculating the PES from MO theory that the infinite-basis electronic wavefunction Ψ was approximated by a finite-basis-set wavefunction Φ. The wavefunction Φ may be, as another layer of approximation, separated into a product of functions each dependent only on the coordinates of a single electron. These single-electron functions are called molecular orbitals φ, and may be approximated by a linear combination of atomic orbitals (LCAO), thus [Pg.517]
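The LCAO expansion referred to here can be written compactly as follows (a standard textbook form; the symbols match the notation above):

```latex
% Each molecular orbital \varphi_i is expanded in a finite set of N
% atomic-orbital basis functions \chi_\mu with coefficients c_{\mu i}:
\varphi_i(\mathbf{r}) \;=\; \sum_{\mu=1}^{N} c_{\mu i}\,\chi_\mu(\mathbf{r})
```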

Basis-set effect testing is fairly routine in most MO studies. Furthermore, new types of basis sets are actively being developed and introduced (e.g., de Castro and Jorge 1998; Mitin et al. 1996), and therefore accuracy comparisons and calibrations are regularly needed. [Pg.517]


The many approaches to the challenging timestep problem in biomolecular dynamics have achieved success with similar final schemes. However, the individual routes taken to produce these methods — via implicit integration, harmonic approximation, other separating frameworks, and/or force splitting into frequency classes — have been quite different. Each path has encountered different problems along the way which only increased our understanding of the numerical, computational, and accuracy issues involved. This contribution reported on our experiences in this quest. LN has its roots in LIN, which... [Pg.256]

Currently, the preferred method for the analysis of liquefied petroleum gas, and indeed for most petroleum-related gases, is gas chromatography (ASTM D2163; IP 264). This technique can be used for the identification and measurement of both primary and trace constituents. However, some accuracy issues may arise in the measurement of the higher-boiling constituents due to relative volatility under the conditions in which the sample is held. [Pg.249]

Accuracy issues that arise from hardware single precision (SP) limitations need to be controlled in a way that is acceptable to the scientific algorithm being simulated. Approaches to this include sorting floats by size prior to addition and making careful use of double precision (DP) where needed [15]. [Pg.9]
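As a minimal illustration of the first approach (our own sketch, not code from [15]): accumulating many small single-precision values into a large one loses them entirely, while summing in ascending order of magnitude, or accumulating in double precision, preserves them.

```python
import numpy as np

# One large value followed by many tiny ones: in float32, each tiny addend
# falls below the rounding threshold of the large accumulator and is lost.
values = np.array([1.0] + [1e-8] * 20000, dtype=np.float32)

reference = float(values.astype(np.float64).sum())  # DP reference, ~1.0002

naive = np.float32(0.0)
for v in values:                 # large value first: small terms vanish
    naive += v

ascending = np.float32(0.0)
for v in np.sort(values):        # smallest first: small terms accumulate
    ascending += v
```

With this ordering the naive sum returns exactly 1.0, while the sorted sum recovers the true total to single-precision accuracy; accumulating in double precision, as in `reference`, sidesteps the problem entirely.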

Sudduth KA, Drummond ST, Kitchen NR (2001) Accuracy issues in electromagnetic induction sensing of soil electrical conductivity for precision agriculture. Comput Electron Agric 31:239-264 [Pg.57]

Temperature Sensors The traceability/accuracy issue of temperature sensors is well understood. A common policy is to claim an uncertainty in the measured value (the blackbody temperature in our case) no better than 4 to 10 times the uncertainty in the instrumentation. For example, if we believe the calibration of our temperature sensor is within 0.5 K, the metrology group would not be willing to certify the blackbody temperature to better than 2 K (4:1) or 5 K (10:1). [Pg.274]

From about the year 2000 onward, a second class of drag correlations became available. These correlations were constructed using data obtained from fully resolved flow simulations (commonly referred to as DNS). Well-known examples are the correlations of Hill et al. (2001) and Beetstra et al. (2007). Compared to experiments, simulations can be much better controlled, but these correlations also have accuracy issues, mainly due to the relatively low grid resolutions attainable at that time. Currently, much more accurate simulations are available that provide more accurate drag correlations, such as those presented by Tenneti et al. (2011) and Tang et al. (2015a). [Pg.160]

Day, Chia P. "Robot Accuracy Issues and Methods of Improvement,"... [Pg.446]

Before developing some general functions for solving systems of differential equations, the next section will consider some stability and accuracy issues with simple differential equations having known solutions. This will provide important insight into the features needed for solving differential equations. [Pg.469]
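A minimal example of the kind of experiment described (our sketch; the function name is illustrative): explicit Euler applied to the test equation dy/dt = -λy, whose exact solution exp(-λt) decays, is stable only when hλ < 2.

```python
import math

def explicit_euler(lam, h, t_end, y0=1.0):
    """Integrate dy/dt = -lam*y with fixed step h; exact solution decays."""
    y = y0
    for _ in range(round(t_end / h)):
        y += h * (-lam * y)          # y_{n+1} = (1 - h*lam) * y_n
    return y

lam, t_end = 10.0, 2.0
exact = math.exp(-lam * t_end)       # ~2.1e-9: true solution decays

stable   = explicit_euler(lam, h=0.05, t_end=t_end)  # h*lam = 0.5: decays
unstable = explicit_euler(lam, h=0.25, t_end=t_end)  # h*lam = 2.5: grows
```

Both runs solve the same equation; only the step size changes. With hλ = 2.5 the amplification factor |1 - hλ| = 1.5 exceeds 1, so the numerical solution grows without bound even though the true solution decays, which is exactly the stability failure such simple examples expose.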

Exploring Stability and Accuracy Issues with Simple Examples... [Pg.469]

A More Detailed Look at Accuracy Issues with the TP Algorithm... [Pg.497]

Returning now to the issue of the accuracy of various electronic structure predictions, it is natural to ask why... [Pg.2159]

A number of issues need to be addressed before this method becomes a routine tool applicable to problems such as the conformational equilibrium of protein kinase. For example, the accuracy of the force field, especially the combination of Poisson-Boltzmann forces and the molecular mechanics force field, remains to be assessed. The energy surface for the opening of the two kinase domains in Fig. 2 indicates that intramolecular noncovalent energies are overestimated compared to the interaction with solvent. [Pg.75]

Two computational issues are pertinent in MD simulations: the time complexity of the force calculations and the accuracy of the particle trajectories, along with other necessary quantitative measures. These issues challenge computational scientists in several ways. MD simulations run for long time periods, and numerical integration techniques involve discretization errors and stability restrictions which, when not kept in check, may corrupt the numerical solutions so badly that they become meaningless and no useful inferences can be drawn from them. Strategies such as globally stable numerical integrators and multiple-time-step implementations have been used in this respect (see [27, 31]). [Pg.484]
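A minimal sketch of one such strategy, multiple-time-step integration (our illustration, not the implementations of [27, 31]): a stiff "fast" force is integrated with a small inner step while a soft "slow" force is applied only at the outer step, yet the total energy stays bounded because each sub-integrator is symplectic.

```python
def mts_step(x, v, dt_outer, n_inner, fast_force, slow_force, mass=1.0):
    """One r-RESPA-style outer step: slow half-kick, inner velocity-Verlet
    loop on the fast force, then the closing slow half-kick."""
    dt = dt_outer / n_inner
    v += 0.5 * dt_outer * slow_force(x) / mass
    for _ in range(n_inner):
        v += 0.5 * dt * fast_force(x) / mass
        x += dt * v
        v += 0.5 * dt * fast_force(x) / mass
    v += 0.5 * dt_outer * slow_force(x) / mass
    return x, v

k_fast, k_slow = 100.0, 1.0              # stiff + soft harmonic springs
fast = lambda x: -k_fast * x
slow = lambda x: -k_slow * x

x, v = 1.0, 0.0
e0 = 0.5 * (k_fast + k_slow) * x * x     # initial energy = 50.5
for _ in range(2000):
    x, v = mts_step(x, v, dt_outer=0.05, n_inner=10,
                    fast_force=fast, slow_force=slow)
energy = 0.5 * v * v + 0.5 * (k_fast + k_slow) * x * x
```

The slow force is evaluated once per outer step instead of once per inner step, which is where the savings come from in a real MD code, where the slow (long-range) forces dominate the cost.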

A second issue is the practice of using the same set of exponents for several sets of functions, such as the 2s and 2p. These are also referred to as general contraction or, more often, split valence basis sets, and are still in widespread use. The acronyms denoting these basis sets sometimes include the letters SP to indicate the use of the same exponents for s and p orbitals. The disadvantage is that the basis set may suffer in the accuracy of its description of the wave function needed for high-accuracy calculations. The advantage is that integral evaluation can be completed more quickly. This is partly responsible for the popularity of the Pople basis sets described below. [Pg.79]

Another related issue is the computation of the intensities of the peaks in the spectrum. Peak intensities depend on the probability that a particular wavelength photon will be absorbed or Raman-scattered. These probabilities can be computed from the wave function by computing the transition dipole moments. This gives relative peak intensities, since the calculation does not include the density of the substance. Some types of transitions turn out to have a zero probability due to the molecule's symmetry or the spin of the electrons. This is where spectroscopic selection rules come from. Ab initio methods are the preferred way of computing intensities. Although intensities can be computed using semiempirical methods, they tend to give rather poor accuracy for many chemical systems. [Pg.95]
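The proportionality described here, intensity ∝ |μ|², can be illustrated with made-up transition dipole vectors (the numbers below are hypothetical, not computed from any wave function):

```python
import numpy as np

# Relative intensity of each transition is the squared norm of its
# transition dipole moment, normalized to the strongest band.
dipoles = {
    "allowed_strong": np.array([0.0, 0.0, 1.2]),
    "allowed_weak":   np.array([0.3, 0.4, 0.0]),
    "forbidden":      np.array([0.0, 0.0, 0.0]),  # zero by symmetry
}
intensity = {name: float(mu @ mu) for name, mu in dipoles.items()}
strongest = max(intensity.values())
relative = {name: i / strongest for name, i in intensity.items()}
```

The symmetry-forbidden transition has zero intensity regardless of normalization, which is the origin of the selection rules mentioned above.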

Size-consistency and size-extensivity are issues that should be considered at the outset of any study involving multiple molecules or dissociated fragments. As always, the choice of a computational method is dependent on the accuracy desired and computational resource requirements. Correction formulas are so simple to use that several of them can readily be tried to see which does best for... [Pg.225]
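One widely used correction formula of this kind is the Davidson correction for CISD, ΔE_Q ≈ (1 - c₀²)(E_CISD - E_HF); the energies below are illustrative placeholders, not results for any real molecule.

```python
def davidson_correction(e_hf, e_cisd, c0):
    """Approximate quadruples correction (1 - c0**2) * (E_CISD - E_HF),
    where c0 is the coefficient of the reference determinant."""
    return (1.0 - c0 * c0) * (e_cisd - e_hf)

# Hypothetical energies in hartrees and a typical reference weight.
e_hf, e_cisd, c0 = -76.0267, -76.2428, 0.97
e_cisd_q = e_cisd + davidson_correction(e_hf, e_cisd, c0)
```

Because the correction costs essentially nothing once the CISD energy is known, several such formulas can indeed be tried side by side, as the text suggests.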

Accurate, precise isotope ratio measurements are used in a variety of applications including dating of artifacts or rocks, studies on drug metabolism, and investigations of environmental issues. Special mass spectrometers are needed for such accuracy and precision. [Pg.426]

National Institute of Standards and Technology (NIST). The NIST is the source of many of the standards used in chemical and physical analyses in the United States and throughout the world. The standards prepared and distributed by the NIST are used to calibrate measurement systems and to provide a central basis for uniformity and accuracy of measurement. At present, over 1200 Standard Reference Materials (SRMs) are available and are described by the NIST (15). Included are many steels, nonferrous alloys, high purity metals, primary standards for use in volumetric analysis, microchemical standards, clinical laboratory standards, biological material certified for trace elements, environmental standards, trace element standards, ion-activity standards (for pH and ion-selective electrodes), freezing and melting point standards, colorimetry standards, optical standards, radioactivity standards, particle-size standards, and density standards. Certificates are issued with the standard reference materials showing values for the parameters that have been determined. [Pg.447]

We have presented a simple protocol to run MD simulations for systems of interest. There are, however, some tricks to improve the efficiency and accuracy of molecular dynamics simulations. Some of these techniques, which are discussed later in the book, are today considered standard practice. These methods address diverse issues ranging from efficient force field evaluation to simplified solvent representations. [Pg.52]

Whereas Freeman and Lewis reported the first comprehensive analysis of hydroxymethylation of phenol, they were not the last to study this system. A number of reports issued since their work have confirmed the general trends that they discovered while differing in some of the relative rates observed [80,84-99]. Gardziella et al. have summarized a number of these reports ([18], pp. 29-35). In addition to providing new data under a variety of conditions, the other studies have improved on the accuracy of Freeman and Lewis, provided activation parameters, and added new methodologies for measuring product development [97-99]. [Pg.901]

The life cycle cost of a process is the net total of all expenses incurred over the entire lifetime of a process. The choice of process chemistry can dramatically affect this life cycle cost. A quantitative life cycle cost cannot be estimated with sufficient accuracy to be of practical value. There is benefit, however, in making a qualitative estimate of the life cycle costs of competing chemistries. Implicit in any estimate of life cycle cost is the estimate of risk. One alternative may seem more attractive than another until the risks associated with product liability issues, environmental concerns, and process hazards are given due consideration. Value of life concepts and cost-benefit analyses (CCPS, 1995a, pp. 23-27 and Chapter 8) are useful in predicting and comparing the life cycle costs of alternatives. [Pg.65]

There are two systems used for maintaining the accuracy and integrity of measuring devices: a calibration system and a verification system. The calibration system determines the accuracy of measurement, and the verification system determines the integrity of the device. If accuracy is important, the device should be included in the calibration system. If accuracy is not an issue but the device's form, properties, or function is important, it should be included in the verification system. You need to decide the system under which each of your devices is to be controlled and identify them accordingly. [Pg.403]

From these initial results we have seen that this approach raises exciting practical prospects. However, we have also found that it does not match the accuracy of a database structure search, and the latter will certainly continue to be the best approach for CSP prediction for the separation of a particular structure. [Pg.122]


© 2024 chempedia.info