Processor speed

Some stuttering and performance problems may be caused by more complex factors involving the interaction of the various parts of your computer. One of the first places you should start to troubleshoot these sorts of problems is on the Audio tab of the Preferences dialog box. [Pg.280]

Rendering speed is also processor dependent, although a typical render to any uncompressed format should take only a few tens of seconds. More complex and highly compressed formats may require significantly longer rendering times, sometimes measured in minutes instead of seconds. [Pg.280]

The Preferences dialog box (click the Options menu and select Preferences) contains a host of options to customize and optimize ACID. Most of these options are covered in the relevant sections of this book. [Pg.280]

Each of the following options can be selected or checked in the dialog box. Items that are selected by default are marked with an x in the following list. [Pg.280]

X Create undos for FX parameter changes—FX plug-ins can be considered separate applications from ACID and operate somewhat independently. Undos (Ctrl + Z) for both ACID and FX plug-ins use up memory (RAM). If you are running short of RAM, you can deselect this option to recover a small amount. [Pg.281]


Over time, the market has demanded increasingly sophisticated software. Each successive enhancement in processor speed has been consumed by software that is more complex, even if only in creating a more user-friendly interface. In the past, computer time was expensive relative to labor costs. That situation is now reversed, and spending more for a more user-friendly computer can often be easily justified in order to enhance the productivity of the vastly more expensive human being. [Pg.87]

The GL2 structure suggests that one can generate arbitrary n-tuply continuous structures. It is only necessary to set the cell length sufficiently large. We have not attempted to generate such structures because, within the limits imposed by computer memory and processor speed, the lattice spacing for a lattice of the required size would be too coarse to achieve reasonable accuracy. [Pg.709]

In the Direct SCF method, we do not store the two-electron integrals over the basis functions; we recalculate them on demand in every cycle of the HF procedure. At first sight this may seem wasteful, but Conventional methods rely on disk input/output transfer rates whilst Direct methods rely on processor power. There is obviously a balance between processor speed and disk I/O. Just for the record, my calculation on aspirin (73 basis functions) took 363 s using the Direct method and 567 s using the Conventional method. [Pg.180]
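The balance described here can be seen with a rough back-of-the-envelope timing model: the Conventional method pays once to compute and store the integrals but reads them from disk in every SCF cycle, whereas the Direct method recomputes them each cycle. The sketch below is only an illustrative model with assumed timing parameters, not measurements from the source.

```python
# Back-of-the-envelope model of Direct vs. Conventional SCF timing.
# All parameters are illustrative assumptions, not measured values.

def conventional_scf_time(t_compute_integrals, t_disk_read_per_cycle, n_cycles):
    """Integrals computed and stored once, then read from disk in every SCF cycle."""
    return t_compute_integrals + n_cycles * t_disk_read_per_cycle

def direct_scf_time(t_compute_integrals, n_cycles):
    """Integrals recomputed from scratch in every SCF cycle."""
    return n_cycles * t_compute_integrals

if __name__ == "__main__":
    t_int = 20.0   # seconds to compute the full integral list (assumed)
    t_read = 35.0  # seconds to re-read the stored integrals each cycle (assumed, slow disk)
    cycles = 15    # typical number of SCF iterations (assumed)

    print("Conventional:", conventional_scf_time(t_int, t_read, cycles), "s")
    print("Direct:      ", direct_scf_time(t_int, cycles), "s")
    # With a disk that is slow relative to the CPU, the Direct method wins,
    # mirroring the aspirin timings quoted above (363 s vs. 567 s).
```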

All methods mentioned in Table 1 operate (typically) in the frequency domain: a monochromatic optical wave is usually considered. Two basically different groups of modeling methods are currently used: methods operating in the time domain, and those operating in the spectral domain. The transition between these two domains is generally mediated by the Fourier transform. The time-domain methods have become very popular in recent years because of their inherent simplicity and generality, and due to the vast increase in both the processor speed and the memory size of modern computers. The same computer code can often be used to solve many problems with rather... [Pg.73]
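Since the transition between the two domains is mediated by the Fourier transform, a minimal NumPy sketch of that step is given below. The sampling rate, pulse shape, and carrier frequency are arbitrary assumptions chosen for illustration only.

```python
# Minimal illustration of moving a time-domain optical signal into the
# spectral domain with the discrete Fourier transform (all parameters assumed).
import numpy as np

fs = 1.0e15                           # sampling rate: one sample per femtosecond (assumed)
t = np.arange(0, 2000e-15, 1.0 / fs)  # 2 ps time window
f0 = 193.5e12                         # optical carrier frequency, ~193.5 THz (assumed)

# Gaussian-envelope optical pulse in the time domain
pulse = np.exp(-((t - 100e-15) ** 2) / (2 * (20e-15) ** 2)) * np.cos(2 * np.pi * f0 * t)

# Fourier transform: time domain -> spectral domain
spectrum = np.fft.rfft(pulse)
freqs = np.fft.rfftfreq(len(pulse), d=1.0 / fs)

peak = freqs[np.argmax(np.abs(spectrum))]
print(f"Spectral peak near {peak / 1e12:.1f} THz")  # close to the assumed carrier frequency
```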

Increases in processor speeds and storage capacity allowed these systems to acquire and process data rapidly. Many fourth-generation systems became nodes in laboratory computer (LIMS) networks. They communicate with host computers to receive instructions for analyses and to transfer results. Programs and values of parameters for specific analytical methods can be stored in memory and recalled by the analyst as needed. While the analyst found interaction with these systems easier, he or she became further removed from the system components and often more dependent on the vendor's software. Tailoring such systems to individual user requirements was often not viable with this approach. [Pg.232]

In general, the processor speeds and memory capacities of modern personal computers are sufficient to support most systems (e.g., millisecond timing resolution and response monitoring requirements). However, running these systems on older computers having a slower processing speed and limited memory capacity could have a detrimental impact on the accuracy of the test, so caution is recommended. [Pg.105]

Depending on available computational resources, it may be useful to do a short full simulation run to determine the optimal number of processors to use, since beyond a certain number of processors the speed of the simulation is no longer improved and resources are wasted. [Pg.124]
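One way to automate that check is to time a short, fixed-length run at several processor counts and stop adding processors once the speedup becomes negligible. The harness below is a hypothetical sketch: the mpirun invocation and the my_md_binary executable are placeholders, not tools named in the source.

```python
# Hypothetical scaling test: time a short benchmark run at several processor
# counts and report when adding processors no longer helps.
# "my_md_binary" and its command-line flags are placeholder assumptions.
import subprocess
import time

def benchmark(n_procs, steps=1000):
    """Run a short simulation on n_procs processors and return wall-clock seconds."""
    start = time.perf_counter()
    subprocess.run(
        ["mpirun", "-np", str(n_procs), "my_md_binary", "--steps", str(steps)],
        check=True,
    )
    return time.perf_counter() - start

def pick_processor_count(candidates=(1, 2, 4, 8, 16, 32), min_speedup=1.1):
    """Return the smallest processor count beyond which the gain is under ~10%."""
    best_n, best_t = candidates[0], benchmark(candidates[0])
    for n in candidates[1:]:
        t = benchmark(n)
        if best_t / t < min_speedup:  # negligible gain: stop adding processors
            break
        best_n, best_t = n, t
    return best_n

if __name__ == "__main__":
    print("Suggested processor count:", pick_processor_count())
```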

Correct folder/directory structure created and files installed within folders/directories
Software configuration completed satisfactorily:
- Site and system identification
- User access groups
- Security configuration
- Menu/display access configuration
- Logical device connections
Inspection of critical hardware components:
- Servers in correct locations
- Processor speed
- Cache size
- ROM BIOS
- Memory capacity
- Peripherals
- Storage devices
- Input devices... [Pg.723]

System performance (processor speed, memory, disk space, BIOS, etc.)... [Pg.868]

As part of their efforts to employ parallel computers for molecular dynamics simulations, Schulten and co-workers generated a series of MD benchmarks based on their own program on a wide range of machines, including an Apple Macintosh II, a Silicon Graphics 320 VGX, a 32K-processor Connection Machine CM-200, a 60-node INMOS Transputer system, and a network of Sun workstations (using Linda). The benchmarks demonstrated that the program runs very efficiently on many platforms (e.g., at sevenfold Cray 2 processor speed on the CM-200 and at Cray 2 processor speed on the Transputer system). [Pg.272]

A technique that is increasingly popular is molecular dynamics. This enables the study of free energies and of the effects of changing temperature and pressure. This technique is notoriously computer resource-hungry, but increases in storage capacity, memory, and processor speed have made it more feasible, and it is now possible to combine ab initio and molecular dynamics calculations. The next section is devoted to this and related topics. [Pg.119]

Table columns: Processor, Speeds (MHz), Socket pins, Voltage, Cache [Pg.81]

When installing a processor into a motherboard, you must set both the processor speed and bus speed with a jumper. Typically, the bus speed is set to 66MHz, 100MHz, or 133MHz plus a multiplier. For example, when you have a 450MHz processor, you would set the processor speed jumper to 450MHz, the expansion bus speed to 100MHz, and the multiplier to 4.5 (4.5 x 100MHz = 450MHz). Processors below 100MHz generally set their speeds without a multiplier. [Pg.86]
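The jumper arithmetic is simply the target processor speed divided by the bus speed; a small worked check of the 450MHz example (not part of the original text) is shown below.

```python
# Worked check of the relationship described above:
# processor speed = bus speed x multiplier.
# Bus speeds are nominal (66MHz is really ~66.6MHz), so results are approximate.

def required_multiplier(cpu_mhz, bus_mhz):
    """Multiplier jumper setting needed to reach cpu_mhz on a bus_mhz expansion bus."""
    return cpu_mhz / bus_mhz

for cpu, bus in [(450, 100), (600, 133)]:
    print(f"{cpu}MHz CPU on a {bus}MHz bus -> multiplier {required_multiplier(cpu, bus):.1f}")
```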

Some motherboards have either processor speed jumpers or bus speed/multiplier jumpers, or both. To see which your motherboard uses, check the documentation that comes with your motherboard. [Pg.86]

To run BugView, a machine with a processor speed of at least 500 MHz—extremely modest by contemporary standards—is recommended, although the performance of the sequence-comparison functions within BugView is appreciably enhanced on machines with faster processors. The free RAM requirement is more difficult to quantify, but on older machines insufficient RAM can limit the size of genome file that can be loaded (see Note 2). [Pg.111]

In hardware sizing and capacity planning, two sources of data are used. The first is the server capacity, often given as a SPEC mark, or the time in which a standard set of procedures is executed. A less accurate measure of computing capacity is processor speed in cycles per second. Memory size may play a role in some calculations. BLAST queries, for example, are limited by fetches from the disk; the rule is: the more memory, the better. Optimally, the entire nonredundant GenBank database can be stored in memory rather than on disk. The second component of the capacity analysis is the computational load, modeled as a set of typical workload factors. [Pg.406]
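A minimal sketch of this kind of sizing calculation, assuming the workload can be expressed directly in SPEC-mark-like capacity units and scales linearly, is shown below; all of the figures are illustrative assumptions rather than a published sizing method.

```python
# Illustrative hardware-sizing sketch: compare a modeled workload against
# per-server capacity expressed in SPEC-mark-like units.  Every number is assumed.
import math

def servers_needed(workload_units, capacity_per_server, headroom=0.7):
    """Servers required so that average load stays below `headroom` of rated capacity."""
    usable = capacity_per_server * headroom
    return math.ceil(workload_units / usable)

if __name__ == "__main__":
    # Assumed workload: 50 concurrent BLAST-like queries at ~2 capacity units each,
    # plus 30% overhead for other jobs.
    workload = 50 * 2 * 1.3
    print("Servers needed:", servers_needed(workload, capacity_per_server=40))
```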

This original FO-approximation algorithm for providing population estimates has proven surprisingly adequate for many pharmacokinetic-pharmacodynamic problems and was also important in the past when computer processor speed was slower than it is today. However, more recent algorithms have been developed around the marginal likelihood. The joint probability distribution for Y and t can be written as...

Increased processor speed permitting more complicated analyses with large deformations and complex constitutive behavior. [Pg.359]

Realistic treatment of most crystallizers requires modeling turbulent, high-concentration, two-phase flow, which is at the cutting edge of CFD development. Recent advances both in constitutive relations and in computer processor speed have advanced this field, but there are still significant limitations. New developments in user interfaces and the general decrease in cost of computational... [Pg.194]

A third time-conversion technique uses sine-wave signals for time measurement. Two orthogonal sine-wave signals are sampled with the start and the stop pulses. The phase difference between start and stop is used as time information [313]. Currently the sine-wave technique is inferior to the TAC-ADC principle and the TDC principle in terms of count rate. It is not used in single-board TCSPC devices. However, with the fast progress in ADC and signal processor speed, the sine-wave technique may become competitive with the other techniques. The principle is shown in Fig. 4.16. [Pg.59]
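A minimal numerical sketch of the idea, assuming ideal sine and cosine reference channels of the same frequency, is given below: sampling both channels at the start and stop pulses yields a phase for each event via atan2, and the phase difference (modulo one period) converts directly to time.

```python
# Sketch of sine-wave time conversion: sample two orthogonal reference signals
# (sin and cos) at the start and stop pulses, recover the phase of each sample
# pair with atan2, and convert the phase difference into time.  Idealized model;
# the 200 MHz reference frequency is an assumed value.
import math

F_REF = 200e6  # reference sine frequency in Hz (assumed); period = 5 ns

def phase_at(t):
    """Phase (radians) recovered from the two orthogonal channels at time t."""
    i = math.cos(2 * math.pi * F_REF * t)  # in-phase channel sample
    q = math.sin(2 * math.pi * F_REF * t)  # quadrature channel sample
    return math.atan2(q, i)

def interval(t_start, t_stop):
    """Start-stop time reconstructed from the phase difference (valid within one period)."""
    dphi = (phase_at(t_stop) - phase_at(t_start)) % (2 * math.pi)
    return dphi / (2 * math.pi * F_REF)

print(interval(0.0, 3.2e-9))  # -> ~3.2e-9 s (3.2 ns, within the 5 ns reference period)
```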

The calculations are expensive. With current processor speeds, computations of trajectories for systems with more than 1000 atoms require a parallel resource... [Pg.20]

Processor speed - Cache size - ROM BIOS - Documentation

An important factor in the progress of bioinformatics has been the constant increase in computer speed and memory capacity of desktop computers and the increasing sophistication of data processing techniques. The computation power of common personal computers has increased within 12 years approximately 100-fold in processor speed, 250-fold in RAM memory space and 500-fold or more in hard disk space, while the price has nearly halved. This enables acquisition, transformation, visualisation and interpretation of large amounts of data at a fraction of the cost compared to 12 years ago. Presently, bioanalytical databases are also growing quickly in size, and many databases are directly accessible via the Internet. One of the first chemical databases to be placed on the Internet was the Brookhaven protein data bank, which contains very valuable three-dimensional structural data of proteins. The primary resource for proteomics is the ExPASy (Expert Protein Analysis System) database, which is dedicated to the analysis of protein sequences and structures and contains a rapidly growing index of 2D-gel electrophoresis maps. Some primary biomolecular database resources compiled from spectroscopic data are given in Tab. 14.1. [Pg.605]

