Big Chemical Encyclopedia


Computational supercomputers

Today, we see a very different world of computing. Supercomputers abound. The computing center is augmented, and in some cases even pushed aside, to make way for distributed workstation environments, where everyone has a powerful computer either on his or her desk or in the next room. Everything is networked or rapidly becoming networked; large files are readily transferred from New York to Munich to Trondheim to Tokyo. Students routinely log into supercomputers across the country to run large computations. Electronic mail makes worldwide communications direct and... [Pg.500]

It is clear that the new generation of computers - supercomputers - when used in conjunction with theoretical developments such as the linked diagram theory or the unitary group theory, is going to have a significant impact on the accuracy attainable in quantum chemistry and on the size of problem which may be treated. [Pg.41]

We have described some of the software that already exists and which is already applicable to investigations of protein structure. But if these calculations can already be accomplished, why do we need supercomputers? In some cases, such as molecular dynamics, the question is one of brute number-crunching power — a faster computer will permit simulations of the motions of larger molecules over longer time intervals. But in other cases the question is not one of feasibility vs. infeasibility of the calculation, but of the speed with which the calculation can be completed relative to the time scale of interactive computing. Supercomputers may, in some cases, be able to convert certain tasks that must currently be run in batch mode into tasks that can be run interactively. This opens new doors. [Pg.158]
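
A rough sketch of that feasibility arithmetic may help: the wall-clock time of a molecular dynamics run is roughly the total floating-point work divided by the sustained speed of the machine, and whether a job is "interactive" or "batch" is a matter of where that time falls relative to a user's patience. Every number below is an illustrative assumption, not a figure from the text.

```python
# Back-of-envelope estimate of MD run time; all figures are assumed for illustration only.

def md_wall_time_seconds(n_steps, flops_per_step, sustained_flops):
    """Estimated run time = total floating-point work / sustained machine speed."""
    return n_steps * flops_per_step / sustained_flops

n_steps = 1_000              # assumed number of MD time steps
flops_per_step = 2.0e7       # assumed cost of one step for a modest protein model
machines = {
    "workstation":   1.0e7,  # assumed sustained speed: 10 MFLOPS
    "supercomputer": 5.0e8,  # assumed sustained speed: 500 MFLOPS
}

for name, speed in machines.items():
    t = md_wall_time_seconds(n_steps, flops_per_step, speed)
    mode = "interactive" if t < 600 else "batch"   # ~10 minutes taken as an interactive threshold
    print(f"{name}: {t:.0f} s -> {mode}")
```

On these assumed numbers the same calculation moves from a half-hour batch job on the workstation to a sub-minute interactive one on the supercomputer, which is the sense in which a faster machine opens new doors.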

Bailey, D. H. Twelve ways to fool the masses when giving performance results on parallel computers. Supercomputing Review, pp. 54-55, June 11, 1991. [Pg.91]

Theoretical work on the structure and energy of solids could not produce major achievements until the advent of high-powered computers (supercomputers). [Pg.112]

Nelson, M., Humphrey, W., Gursoy, A., Dalke, A., Kale, L., Skeel, R.D., Schulten, K. NAMD - A parallel, object-oriented molecular dynamics program. Int. J. Supercomputing Applications and High Performance Computing 10 (1996) 251-268. [Pg.32]

TES gratefully acknowledges the Alfred P. Sloan Foundation for support and the National Science Foundation for support (CHE-9632236) and computational resources at the National Center for Supercomputing Applications (CHE-960010N). [Pg.211]

K. Schulten. NAMD - a parallel, object-oriented molecular dynamics program. Intl. J. Supercomput. Applics. High Performance Computing, 10:251-268, 1996. [Pg.330]

Supported by NSF ASC-9318159, NSF CDA-9422065, NIH Research Resource RR08102, and computer time from the North Carolina Supercomputing Center. An earlier version of this paper was presented at the Eighth SIAM Conference on Parallel Processing for Scientific Computing. [Pg.459]

Lin, M., Hsieh, J., Du, D. H. C., Thomas, J. P., MacDonald, J. A. Distributed network computing over local ATM networks. In Proceedings of Supercomputing '94. IEEE Computer Society Press, Los Alamitos, California, 1994.
Greengard, L., Rokhlin, V. A fast algorithm for particle simulation. J. Comp. Phys. 73 (1987) 325-348. [Pg.481]

Quantum mechanics gives a mathematical description of the behavior of electrons that has never been found to be wrong. However, the quantum mechanical equations have never been solved exactly for any chemical system other than the hydrogen atom. Thus, the entire field of computational chemistry is built around approximate solutions. Some of these solutions are very crude and others are expected to be more accurate than any experiment that has yet been conducted. There are several implications of this situation. First, computational chemists require a knowledge of each approximation being used and how accurate the results are expected to be. Second, obtaining very accurate results requires extremely powerful computers. Third, if the equations can be solved analytically, much of the work now done on supercomputers could be performed faster and more accurately on a PC. [Pg.3]

Mass-produced workstation-class CPUs are much cheaper than traditional supercomputer processors. Thus, more computing power per dollar can be obtained by buying a parallel supercomputer that might have hundreds of workstation CPUs. [Pg.132]
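
The price/performance argument is simple arithmetic; here is a hedged sketch with invented round figures (none of them quoted from the text):

```python
# Illustrative MFLOPS-per-dollar comparison; every figure below is assumed.

machines = {
    "traditional vector supercomputer":     {"price_dollars": 20_000_000, "peak_mflops": 8 * 1000},
    "parallel machine of workstation CPUs": {"price_dollars": 4_000_000,  "peak_mflops": 256 * 100},
}

for name, m in machines.items():
    ratio = m["peak_mflops"] / m["price_dollars"]
    print(f"{name}: {ratio * 1000:.2f} MFLOPS per $1000")
```

With these assumed prices and peaks, the parallel machine delivers over an order of magnitude more peak MFLOPS per dollar, which is the point of the argument; actual figures varied widely by vendor and year.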

This book grew out of a collection of technical-support web pages. Those pages were also posted to the computational chemistry list server maintained by the Ohio Supercomputer Center. Many useful comments came from the subscribers of that list. In addition, thanks go to Dr. James F. Harrison at Michigan State University for providing advice born of experience. [Pg.399]

The secondary structure elements are then identified, and finally, the three-dimensional protein structure is obtained from the measured interproton distances and torsion angle parameters. This procedure requires a minimum of two days of nmr instrument time per sample, because two pulse delays are required in the 3-D experiment. In addition, approximately 20 hours of computing time, using a supercomputer, is necessary for the calculations. Nevertheless, protein structure can be assigned using 3-D nmr and a resolution of 0.2 nanometers is achievable. The largest protein characterized by nmr at this writing contained 43 amino acid units (51). However, attempts are underway to characterize the structure of interleukin 2 [85898-30-2], which has over 150 amino acid units. [Pg.396]

The years since publication of the third edition of the Encyclopedia (1978-1984) have brought the rise and fall of the minicomputer, the worldwide ascendancy of microprocessor-based personal computers, the emergence of powerful scientific work stations, the acceptance of scientific visualization, further advances with supercomputers, the rise and fall of the minisupercomputer, and the realization that the future lies in parallel computing. [Pg.87]

A common acronym is MFLOPS, millions of floating-point operations per second. Because most scientific computations are limited by the speed at which floating-point operations can be performed, this is a common measure of peak computing speed. Supercomputers of 1991 offered peak speeds of 1000 MFLOPS (1 GFLOPS) and higher. [Pg.88]
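
The metric itself is just a count of floating-point operations divided by elapsed time. A minimal sketch follows; the kernel and its operation count are chosen purely for illustration, and interpreted Python will of course report a rate far below any hardware peak.

```python
# Minimal illustration of the MFLOPS metric: millions of floating-point operations per second.

import time

n = 1_000_000
a, x, y = 2.0, [0.5] * n, [0.25] * n

start = time.perf_counter()
z = [a * xi + yi for xi, yi in zip(x, y)]   # one multiply and one add per element
elapsed = time.perf_counter() - start

flop_count = 2 * n                           # 2 floating-point operations per element
print(f"{flop_count / elapsed / 1.0e6:.1f} MFLOPS sustained on this loop")
```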

Vector Computers. Most computers considered supercomputers are vector-architecture computers. The concept of vector architecture has been a source of much confusion. [Pg.88]
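
One way to make the concept concrete: a vector machine accelerates loops in which the same arithmetic is applied independently to every element of long arrays. The sketch below uses NumPy purely as an analogy for that style of execution; it is not how a vector supercomputer was programmed, but it shows the kind of loop (a DAXPY kernel) that vector hardware streams through its pipelines.

```python
# Sketch of a vectorizable loop (DAXPY: z = a*x + y). NumPy whole-array operations
# serve here only as an analogy for hardware vector pipelines.

import numpy as np

n = 1_000_000
a = 2.0
x = np.random.rand(n)
y = np.random.rand(n)

# Scalar form: the loop issued one element at a time, as on a non-vector CPU.
z_scalar = np.empty(n)
for i in range(n):
    z_scalar[i] = a * x[i] + y[i]

# Vector form: the entire loop expressed as operations on whole arrays,
# the pattern a vector processor executes in a single streamed instruction sequence.
z_vector = a * x + y

assert np.allclose(z_scalar, z_vector)
```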

Supercomputers from vendors such as Cray, NEC, and Fujitsu typically consist of between one and eight processors in a shared memory architecture. Peak vector speeds of over 1 GFLOPS (1000 MFLOPS) per processor are now available. Main memories of 1 gigabyte (1000 megabytes) and more are also available. If multiple processors can be tied together to simultaneously work on one problem, substantially greater peak speeds are available. This situation will be further examined in the section on parallel computers. [Pg.91]
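
The quoted peaks combine multiplicatively: with all processors kept busy on the one problem, the machine's aggregate peak is simply the per-processor peak times the processor count. A trivial sketch using the paragraph's round numbers:

```python
# Aggregate peak speed = per-processor peak x processor count (round numbers from the text above).
per_processor_gflops = 1.0
for n_processors in (1, 4, 8):
    print(f"{n_processors} processor(s): {n_processors * per_processor_gflops:.0f} GFLOPS peak")
```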

Many of the faster work stations can provide throughput similar to that observed on a crowded, shared supercomputer, especially for codes that do not benefit greatly from vectorization. The availability of such machines for less than $50,000 (much less for academic users) has once again changed concepts of what is computationally feasible. Many more people can perform computations that a few years ago were the sole domain of those with access to large-scale computing facilities, and this trend is expected to continue. [Pg.93]

Many institutions have hundreds, or even thousands, of powerful work stations that are idle for much of the day. There is often vastly more power available in these machines than in any supercomputer center, the only problem being how to harness the power already available. There are network load-distribution tools that allocate individual jobs to unused computers on a network, but this is different from having many computers simultaneously cooperating on the solution of a single problem. [Pg.95]
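
The distinction drawn here can be made concrete. A load-distribution tool farms out whole, independent jobs to whichever machine is free, which is easy to sketch with a process pool; the harder problem of many machines cooperating on one tightly coupled calculation is precisely what this pattern does not capture. The "job" below is a hypothetical stand-in, not any real chemistry code.

```python
# Sketch of job-level load distribution: independent jobs handed to whichever worker
# is free, the way a network load-distribution tool assigns whole jobs to idle machines.
# The jobs never communicate with one another, which is exactly why this is easier than
# having many computers cooperate on a single problem.

from concurrent.futures import ProcessPoolExecutor
import math

def independent_job(seed: int) -> float:
    """A placeholder for one self-contained calculation."""
    return sum(math.sin(seed + i) for i in range(100_000))

if __name__ == "__main__":
    jobs = range(16)                                     # sixteen unrelated jobs
    with ProcessPoolExecutor(max_workers=4) as pool:     # four "idle workstations"
        results = list(pool.map(independent_job, jobs))
    print(f"completed {len(results)} independent jobs")
```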

MIMD Multicomputers. Probably the most widely available parallel computers are the shared-memory multiprocessor MIMD machines. Examples include the multiprocessor vector supercomputers, IBM mainframes, VAX minicomputers, Convex and Alliant minisupercomputers, and Silicon... [Pg.95]

Because of their relatively low cost and simplicity, Transputer systems can be built readily. Many systems are marketed as application accelerator boards for personal computers and work stations. A single board that turns a standard work station into a "supercomputer" (for one application at least) can be very attractive, especially in application-specific situations (26). Transputer-based systems typically cost between $1000 and $200,000, depending on the number of nodes, among other things. [Pg.96]

These codes have stressed the current supercomputer, whether it was the CDC 6600 in the 1970s, the Crays in the 1980s, or the massively parallel computers of the 1990s. Multimillion cell calculations are routinely performed at Sandia National Laboratories with the CTH [1], [2] code, yet... [Pg.324]

