Accuracy of results

Practically all CNDO calculations are actually performed using the CNDO/2 method, which is an improved parameterization of the original CNDO/1 method. There is also a CNDO/S method, parameterized to reproduce electronic spectra. CNDO/S does yield improved predictions of excitation energies, but at the expense of poorer predictions of molecular geometry. There have also been extensions of the CNDO/2 method to include elements with occupied d orbitals. These techniques have not seen widespread use due to the limited accuracy of results. [Pg.34]

The G1 method is seldom used since G2 yields an improved accuracy of results. G2 has proven to be a very accurate way to model small organic molecules, but gives poor accuracy when applied to chlorofluorocarbons. At... [Pg.38]

The consistent force field (CFF) was developed to yield consistent accuracy of results for conformations, vibrational spectra, strain energy, and vibrational enthalpy of proteins. There are several variations on this, such as the Urey-Bradley version (UBCFF), a valence version (CVFF), and Lyngby CFF. The quantum mechanically parameterized force field (QMFF) was parameterized from ab initio results. CFF93 is a rescaling of QMFF to reproduce experimental results. These force fields use five to six valence terms, one of which is an electrostatic term, and four to six cross terms. [Pg.54]
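
The exact functional forms differ between the CFF variants (class II force fields typically use quartic bond and angle terms and several distinct cross terms), so the following is only a generic sketch of how valence, cross, and electrostatic contributions combine, not the actual CFF energy expression:

```latex
E_{\text{total}} =
    \sum_{\text{bonds}} K_b (b - b_0)^2
  + \sum_{\text{angles}} K_\theta (\theta - \theta_0)^2
  + \sum_{\text{torsions}} K_\phi \left[ 1 + \cos(n\phi - \phi_0) \right]
  + \sum_{\text{cross}} K_{b\theta} (b - b_0)(\theta - \theta_0)
  + \sum_{i<j} \frac{q_i q_j}{\varepsilon\, r_{ij}}
```

Here the first three sums are valence terms, the fourth illustrates a bond-angle cross term, and the last is the electrostatic term mentioned above.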

UFF was originally designed to be used without an electrostatic term. The literature accompanying one piece of software recommends using charges obtained with the QEq (charge equilibration) method. Independent studies have found the accuracy of results to be significantly better without charges. [Pg.56]

There are a few variations on this procedure, called importance sampling or biased sampling, which are designed to reduce the number of iterations required to obtain a given accuracy of results. They involve changes in the details of how steps 3 and 5 are performed, as sketched below. For more information, see the book by Allen and Tildesley cited in the end-of-chapter references. [Pg.63]
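
As a minimal illustration of the general idea, the sketch below shows Metropolis-style importance sampling for a single particle in an arbitrary one-dimensional potential. The potential, step size, and temperature are illustrative assumptions, and the comments map only loosely onto the trial-move and acceptance steps of the procedure referred to above.

```python
import math
import random

def metropolis_sample(energy, x0, n_steps, max_step=0.1, kT=1.0):
    """Minimal Metropolis Monte Carlo sketch: trial moves are accepted with
    the Boltzmann probability, so low-energy configurations are sampled
    preferentially (importance sampling)."""
    x, e = x0, energy(x0)
    samples = []
    for _ in range(n_steps):
        x_trial = x + random.uniform(-max_step, max_step)   # trial move
        e_trial = energy(x_trial)
        # acceptance test: always accept downhill moves, accept uphill
        # moves with probability exp(-dE/kT)
        if e_trial <= e or random.random() < math.exp(-(e_trial - e) / kT):
            x, e = x_trial, e_trial
        samples.append(x)
    return samples

# usage: harmonic potential, illustrative only
positions = metropolis_sample(lambda x: 0.5 * x**2, x0=0.0, n_steps=10000)
```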

A basis set is a set of functions used to describe the shape of the orbitals in an atom. Molecular orbitals and entire wave functions are created by taking linear combinations of basis functions and angular functions. Most semiempirical methods use a predefined basis set. When ab initio or density functional theory calculations are done, a basis set must be specified. Although it is possible to create a basis set from scratch, most calculations are done using existing basis sets. The type of calculation performed and the basis set chosen are the two biggest factors in determining the accuracy of results. This chapter discusses these standard basis sets and how to choose an appropriate one. [Pg.78]
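
As a brief reminder of the underlying expansion (standard LCAO notation, not specific to any program), each molecular orbital is written as a linear combination of basis functions:

```latex
\psi_i(\mathbf{r}) = \sum_{\mu=1}^{N} c_{\mu i}\, \chi_\mu(\mathbf{r})
```

where the χ_μ are the N basis functions and the coefficients c_{μi} are determined by the calculation itself; the larger and more flexible the basis set, the more accurately the orbitals can be represented.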

For many projects, a basis set cannot be chosen based purely on the general rules of thumb listed above. There are a number of places to obtain a much more quantitative comparison of basis sets. The paper in which a basis set is published often contains the results of test calculations that give an indication of the accuracy of results. Several books, listed in the references below, contain extensive tabulations of results for various methods and basis sets. Every year, a bibliography of all computational chemistry papers published in the previous... [Pg.89]

Quasiclassical calculations are similar to classical trajectory calculations, with the addition of terms to account for quantum effects. The inclusion of tunneling and quantized energy levels improves the accuracy of results for reactions involving light atoms, such as hydrogen transfer, and for lower-temperature reactions. [Pg.168]
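
As a hedged sketch of what "quantized energy levels" means in this context, the initial vibrational energy of each mode is typically chosen from the harmonic quantum expression rather than from a continuous classical distribution:

```latex
E_v = \hbar\omega \left( v + \tfrac{1}{2} \right), \qquad v = 0, 1, 2, \ldots
```

The trajectory is then propagated classically from these quantized initial conditions, which is why the approach is called quasiclassical.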

This technique has not been used as widely as transition state theory or trajectory calculations. The accuracy of results is generally similar to that given by pTST. There are a few cases where SACM may be better, such as for the reactions of some polyatomic polar molecules. [Pg.168]

In order to reduce the amount of computation time, some studies are conducted with a smaller number of solvent geometries, each optimized from a different starting geometry. The results can then be weighted by a Boltzmann distribution. This reduces computation time, but can also affect the accuracy of results. [Pg.207]
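
A minimal sketch of the Boltzmann weighting step, assuming a list of relative energies in kcal/mol for the optimized geometries (the temperature, units, and example property values are illustrative assumptions):

```python
import math

def boltzmann_weights(energies_kcal, temperature=298.15):
    """Weight a set of optimized configuration energies by the Boltzmann
    distribution. Energies are relative values in kcal/mol."""
    R = 0.0019872041  # gas constant in kcal/(mol K)
    e_min = min(energies_kcal)
    factors = [math.exp(-(e - e_min) / (R * temperature)) for e in energies_kcal]
    total = sum(factors)
    return [f / total for f in factors]

# usage: weights for three hypothetical optimized geometries, then a
# Boltzmann-weighted average of some computed property
weights = boltzmann_weights([0.0, 0.8, 2.1])
weighted_property = sum(w * p for w, p in zip(weights, [1.25, 1.31, 1.40]))
```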

An improved version, called COSMO for realistic solvents (COSMO-RS), has also been created. This method has an improved scheme for modeling nonelectrostatic effects. It can be adapted for modeling the behavior of molecules in any solvent and gives increased accuracy of results compared to COSMO. [Pg.212]

There is no one best method for describing solvent effects. The choice of method depends on the size of the molecule, the type of solvent effects being examined, and the required accuracy of results. Many of the continuum solvation methods predict solvation energy more accurately for neutral molecules than for ions. The following is a list of preferred methods, with those resulting in the highest accuracy and the least amount of computational effort appearing first ... [Pg.213]

X-ray fluorescence (XRF) analysis is successfully used to determine the chemical composition of various geological and ecological materials. XRF analysis offers high throughput, acceptable accuracy of results, a well-developed theory, and industrially produced analytical equipment. XRF methods should therefore form a constituent part of the basic data used in ecological and geochemical investigations... [Pg.234]

A number of general methods and methods particularly suited to and needed for ventilation in large industrial rooms are presented in the previous sections. They differ in complexity of effort and accuracy of results and therefore are applied at different stages of the design process. [Pg.1056]

Semi-empirical and ab initio methods differ in the trade-off made between computational cost and accuracy of results. Semi-empirical calculations are relatively inexpensive and provide reasonable qualitative descriptions of molecular systems and fairly accurate quantitative predictions of energies and structures for systems where good parameter sets exist. [Pg.6]

Model performance tests focusing on solution time and accuracy of results... [Pg.214]

Ask the laboratory to maintain assayed samples that are of particular importance; if questions arise as to the accuracy of results, it might be possible to retest the original samples. [Pg.804]

When performing dissolution testing, there are many ways that the test may generate erroneous results. The testing equipment and its environment, handling of the sample, the formulation, in situ reactions, automation, and analytical techniques can all be causes of errors and variability. The physical dissolution of the dosage form should be unencumbered at all times. Certain aspects of the equipment calibration process, as well as close visual observation of the test, may reveal these errors. The essentials of the test are accuracy of results and robustness of the method. Aberrant and unexpected results do occur, however, and the analyst should be well trained to examine all aspects of the dissolution test and to observe the equipment in operation. [Pg.58]

A problem almost universally encountered in continuous-flow systems is that the instrument response for a given sample assay value tends to vary with time. This effect, known as drift, affects the accuracy of results. It may be due to several causes, in particular variable performance of analyser components and variations in the chemical sensitivity of the method used. It is manifest in two forms: baseline drift and peak-reading drift, the latter being due to sensitivity changes. The baseline drift may be detected visually if a... [Pg.53]
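
A minimal sketch of one common way baseline drift is handled numerically: the baseline is read at the start and end of a run and a linear correction is interpolated across the intervening samples. The reading positions and the assumption of linear drift are illustrative, not taken from the text above.

```python
def correct_baseline_drift(peak_readings, baseline_start, baseline_end):
    """Subtract a linearly interpolated baseline from a sequence of peak
    readings, assuming drift is linear across the run."""
    n = len(peak_readings)
    corrected = []
    for i, reading in enumerate(peak_readings):
        # interpolate the baseline at the position of sample i
        fraction = i / (n - 1) if n > 1 else 0.0
        baseline = baseline_start + fraction * (baseline_end - baseline_start)
        corrected.append(reading - baseline)
    return corrected

# usage: a run whose baseline drifted from 0.02 to 0.05 absorbance units
print(correct_baseline_drift([0.31, 0.42, 0.38, 0.45], 0.02, 0.05))
```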

This work describes the design, operation and application of the continuous GPC viscosity detector for the characterization of the molecular weight distribution of polymers. Details of the design and factors affecting the precision and accuracy of results are discussed along with selected examples of polymers with narrow and broad molecular weight distribution. [Pg.281]

The PC JVS PDF method [23, 24] requires considerable computational power to attain reasonable accuracy of results. Hence, an alternative mathematical method is needed to determine the interrelations between the various governing parameters. This method should have acceptable accuracy, provide the necessary estimates relatively fast, and allow the efficiency of various approaches to passive and active combustion control to be tested. The estimates obtained would then be used for more detailed studies by the PC JVS PDF method. [Pg.186]

Method validation seeks to quantify the likely accuracy of results by assessing both systematic and random effects on results. The property related to systematic errors is trueness, i.e. the closeness of agreement between the average value obtained from a large set of test results and an accepted reference value. The property related to random errors is precision, i.e. the closeness of agreement between independent test results obtained under stipulated conditions. Accuracy is therefore normally studied as trueness and precision. [Pg.230]
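
A minimal sketch of how trueness and precision might be estimated from replicate results against an accepted reference value; the data and reference value are hypothetical.

```python
import statistics

def trueness_and_precision(results, reference_value):
    """Estimate trueness as the bias of the mean result from the accepted
    reference value, and precision as the standard deviation of independent
    results obtained under stipulated conditions."""
    mean_result = statistics.mean(results)
    bias = mean_result - reference_value      # systematic component (trueness)
    precision = statistics.stdev(results)     # random component (precision)
    return bias, precision

# usage: ten replicate determinations against a reference value of 5.00
bias, sd = trueness_and_precision(
    [5.02, 4.98, 5.05, 5.01, 4.97, 5.03, 5.00, 5.04, 4.99, 5.02], 5.00)
```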

It is impairment we want to test for, not lifestyle preference. This is where performance impairment tests come in. Impairment tests use a computer to assess the employee's hand-eye coordination and a variety of other variables that are related to the task, not the lifestyle of the employee. The test takes only 30 seconds. It is superior to drug testing in terms of cost, timeliness and accuracy of results, and overall liability [Fine]. [Pg.71]

The accuracy of results (the interlaboratory reproducibility of measurements) was poor or even dramatically low. The scatter in the M values obtained for identical samples in the different participating laboratories reached several hundred percent (2000% for poly(acrylic acid)). [Pg.476]

Neither the effect of the instrumentation used (the pumps, columns, and detectors) nor that of the software applied for data processing was evaluated from the point of view of accuracy of results. It is evident that worldwide standardization of both measurement and data processing is badly needed in SEC. At the very least, the experimental conditions within the same laboratory should be kept constant. This may prove difficult, however, as the detectability of various samples differs substantially and the Cj/Vj is to be adjusted for each kind of sample. [Pg.476]

While medium pore zeolites such as ZSM-5 do not deactivate significantly during hexane cracking at 538°C, large pore zeolites usually do. For maximum accuracy of results in these cases we found it advisable to use a low hexane partial pressure of about 10 torr. This not only completely eliminates catalyst deactivation during the test (Fig. 7),... [Pg.264]

As mentioned, the size of the QM region and the level of theory employed for it are crucial for the reliability and accuracy of results. The minimal size selection of solute and first solvation... [Pg.146]

These factors strongly influence the computational effort and the accuracy of results. The application of QM methods to an entire simulation cube containing several hundred molecules by ab initio methods is far beyond the capability of present computational equipment. CPMD simulation introduces a compromise between accuracy and computational effort by utilizing density functionals of the GGA type and reducing the system size to (and sometimes below) the necessary minimum. [Pg.156]

