Big Chemical Encyclopedia


Intel Xeon

The MINLP model instances comprised 200 binary variables, 588 continuous variables, and 1038 constraints. The linearization not only eliminates the nonlinearity but also reduces the problem to 398 continuous variables and 830 constraints (the number of 200 binary variables is unchanged). The MINLP problems were solved with the solver combination DICOPT/CONOPT/CPLEX, and the MILP problems with CPLEX, both on a Windows machine with an Intel Xeon 3 GHz CPU and 4 GB RAM. [Pg.157]
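The excerpt does not give the model itself, but the reported drop in continuous variables and constraints is typical of replacing binary-times-continuous products with linear constraints. A minimal sketch of that standard (Glover) linearization of z = x·y for binary x and bounded y, with a hypothetical upper bound U, is:

```python
# Sketch only (the actual model is not given in the excerpt): Glover's
# linearization replaces z = x*y (x binary, 0 <= y <= U) by four linear
# constraints. For binary x, the only feasible z is exactly x*y.
def linearized_feasible(x, y, z, U, eps=1e-9):
    """True iff (x, y, z) satisfies the linearized constraints for z = x*y."""
    return (z <= U * x + eps and          # z <= U*x
            z <= y + eps and              # z <= y
            z >= y - U * (1 - x) - eps    # z >= y - U*(1 - x)
            and z >= -eps)                # z >= 0

# The constraints admit exactly z = x*y and reject anything else:
for x in (0, 1):
    for y in (0.0, 2.5, 5.0):
        assert linearized_feasible(x, y, x * y, U=5.0)
        assert not linearized_feasible(x, y, x * y + 1.0, U=5.0)
```

This is why the linearized model keeps all 200 binary variables while changing only the continuous-variable and constraint counts.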

Computational analysis. Calculations were performed on an Intel Xeon computer running Linux, as well as the VRANA-5 and VRANA-8 clusters at the Center for Molecular Modeling of the National Institute of Chemistry (Ljubljana,... [Pg.16]

We also performed a simulation of the HIV accessory protein using the adapted parallel tempering method [13]. We used 20 processors of an Intel Xeon PC cluster and ran the simulation for a total of 30 x 10 energy evaluations for each configuration, which corresponds to approximately 500 CPU hours on a 2.4 GHz Intel Xeon processor. All simulations were started... [Pg.565]
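The excerpt names the method but not its code; the core of any parallel tempering scheme is the Metropolis criterion for swapping configurations between two temperatures. A hedged sketch (not the authors' implementation; reduced units with k_B = 1 are assumed):

```python
import math
import random

# Sketch of the standard replica-exchange (parallel tempering) swap step:
# replicas i and j exchange configurations with probability
# min(1, exp[(1/kT_i - 1/kT_j) * (E_i - E_j)]).
def swap_probability(E_i, E_j, T_i, T_j, k_B=1.0):
    delta = (1.0 / (k_B * T_i) - 1.0 / (k_B * T_j)) * (E_i - E_j)
    return min(1.0, math.exp(delta))

def attempt_swap(state_i, state_j, E_i, E_j, T_i, T_j, rng=random):
    """Return the (possibly exchanged) pair of configurations."""
    if rng.random() < swap_probability(E_i, E_j, T_i, T_j):
        return state_j, state_i   # accepted: exchange configurations
    return state_i, state_j       # rejected: keep as is
```

Swaps that move lower-energy configurations to lower temperatures are always accepted; the reverse direction is accepted with Boltzmann-weighted probability, which is what lets replicas traverse the temperature ladder.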

The previously considered methods usually rely on linear methods (MLR, PLS) to establish structure-solubility correlations for predicting the solubility of molecules. Göller et al. [51] used a neural network ensemble to predict the apparent solubility of Bayer in-house organic compounds. The solubility was measured in buffer at pH 6.5, which mimics the medium in the human gastrointestinal tract. The authors used the calculated distribution coefficient log D (at several pH values), a number of 3D COSMO-derived parameters, and some 2D descriptors. The final model was developed using 4806 compounds (RMSE = 0.72) and provided similar accuracy (RMSE = 0.73) for the prediction of 7222 compounds that were not used to develop the model. The method, however, is quite slow: it takes about 15 seconds to screen one molecule on an Intel Xeon 2.8 GHz CPU. [Pg.249]
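The quoted accuracy is given as RMSE on predicted versus measured log-solubility. For reference, that metric (and the ensemble averaging an ensemble model implies) can be sketched as follows; the numeric values below are made up for illustration, not from the study:

```python
import math

# Root-mean-square error between predicted and observed values.
def rmse(pred, obs):
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(pred))

# An ensemble prediction is simply the mean over the member networks'
# outputs for each compound (hypothetical member predictions below).
def ensemble_predict(member_preds):
    n_members = len(member_preds)
    return [sum(vals) / n_members for vals in zip(*member_preds)]
```

An RMSE of about 0.7 log units on an external set of 7222 compounds, close to the training-set error, is what supports the authors' claim that the model generalizes rather than overfits.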

A Linux cluster consisting of nodes with two single-core 3.06 GHz Intel Xeon processors (each with 512 KiB of L2 cache) connected via a 4x Single Data Rate InfiniBand network with a full fat-tree topology. [Pg.91]

We calculated all EFMs up to a cardinality of in six metabolic models of various sizes ranging from small-scale to genome-scale. The key properties of these models are listed in Table 3.1. All networks were compressed [23] before the EFMA. We used a computer with two Intel Xeon CPUs (each with six cores, 2.67 GHz) running Ubuntu 14.04. Both programs were allowed to use up to eight parallel threads during the execution. [Pg.793]

In this study, the time step used in the transient calculation is 0.001 s for both the ANSYS POLYFLOW model and the hybrid model (FLUENT/POLYFLOW). Numerical computations were run with 8 cores on a workstation with Intel Xeon E5-2640 CPUs (2.5 GHz). [Pg.192]

Figure 25.1 Supercomputer JuRoPA. Hardware characteristics: 2208 compute nodes with two Intel Xeon X5570 (Nehalem-EP) quad-core processors, 2.93 GHz, and 24 GB memory, in total 17 764 cores resulting...
Table 14.5 Computation times (in seconds) for Ag13 for the evaluation of relativistic Hamiltonians and for one SCF iteration within a scalar-relativistic approach. HF-SCF denotes a single SCF iteration of a Hartree-Fock calculation. The data were taken from Ref. [647], which also described the Turbomole implementation of the schemes discussed above (computation times on the Intel Xeon E5430 central processing unit [serial version, one core]).
Five hundred particles with a mean size of 2 cm (in diameter) and a normal size distribution with a standard deviation of 0.2 cm were generated inside a cylindrical drum with a radius and a width of 25 and 10 cm, respectively. The geometry of the drum was given a rotational motion with a constant rotational speed of 20 rpm around the y-axis. The simulation was performed with a time step equal to 20% of the Rayleigh time step for a total time of 3 s. The Target Save Interval was set to 0.01 s. This simulation took about 2 min to complete using 2 cores of a workstation having two Intel Xeon quad-core (3.0 GHz) processors. [Pg.270]
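The excerpt does not state the material properties, so the numbers below are assumed for illustration only; but the "20% of the Rayleigh time step" choice can be made concrete with the standard DEM formula for the critical Rayleigh time step:

```python
import math

# Sketch with assumed (not given) material properties: the Rayleigh time
# step commonly used in DEM, dt_R = pi * R * sqrt(rho / G) / (0.1631*nu + 0.8766),
# where R is particle radius, rho density, G shear modulus, nu Poisson ratio.
def rayleigh_dt(radius, density, shear_modulus, poisson):
    return math.pi * radius * math.sqrt(density / shear_modulus) / (
        0.1631 * poisson + 0.8766)

# Hypothetical glass-like particle of 2 cm diameter (R = 0.01 m):
dt_r = rayleigh_dt(radius=0.01, density=2500.0,
                   shear_modulus=1.0e8, poisson=0.25)
dt_sim = 0.2 * dt_r                    # 20 % of the Rayleigh time step
n_steps = int(round(3.0 / dt_sim))     # integration steps for 3 s of simulated time
```

With these assumed properties the run would take on the order of 10^4 to 10^5 steps, which is why even a 500-particle, 3 s simulation occupies two cores for minutes.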

In the second approach, a single CPU was used but with the help of a GPU for the factorization given by Eqs. (3) and (17). The entries in Table 1 are the respective timings for the evaluation of all integrals and their derivatives that were needed for computation of differential cross sections for all vibrational modes. The CPU execution times, t, shown in Table 1 on the first line were obtained by using 12 cores of an Intel Xeon 3 GHz workstation. The entries on the second line are execution times of a single CPU core code where the evaluation steps shown in Eqs. (3) and (17)... [Pg.24]

Fig. 3a shows the training results and fitness values for eCos/baseline after 3 min 40 s of optimization (on a 32-core Intel Xeon E5-4650). Each point represents an individual (the machine-state projection vector, cf. Sect. 3.2), which partitions the training set into... [Pg.25]

The parallel computer available for the present calculation is a part of a Cray XC30 and consists of 32 CPU nodes with the same architecture (Intel Xeon E5, 2.30 GHz), where each node is equipped with 28 cores. By employing 64, 128, 256, and 512 cores of this machine we first examined the parallel efficiency of flat MPI. The speedup S(p) in the case of MPI using p cores is given as... [Pg.161]
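The definition of S(p) is cut off in the excerpt. The conventional strong-scaling quantities, expressed relative to a reference core count p0 (here the runs start at 64 cores, so p0 = 64 is the natural baseline), can be sketched as follows; the timing values in the test are hypothetical:

```python
# Sketch of conventional strong-scaling metrics (the excerpt's own formula
# is truncated): speedup relative to a reference run, and parallel
# efficiency normalized by the core counts.
def speedup(t_ref, t_p):
    """S(p) = T(p_ref) / T(p) relative to the reference timing t_ref."""
    return t_ref / t_p

def efficiency(t_ref, p_ref, t_p, p):
    """E(p) = (p_ref * T(p_ref)) / (p * T(p)); 1.0 means ideal scaling."""
    return (t_ref * p_ref) / (t_p * p)
```

With p0 = 64, ideal scaling to 512 cores would give S(512) = 8; the measured ratio against that ideal is exactly the efficiency E(p) above.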

We examined the efficiencies of the OMP parallel execution for a water molecule and a benzene dimer. The computer node used was an Intel Xeon (E5-2680, 2.7 GHz) with 16 cores. As a specific treatment to expedite the force calculation, we reduced the number of grids from 64 to 32 along each direction by taking the average of the polarization density Δn in Eq. (6.42). In our measurement on a single core for a QM water molecule in 499 MM solvent water molecules we found 73 % of the total computational time was devoted to the construction of the electrostatic potential Vpc, and the residual time was used mostly for the evaluation of the PT2... [Pg.187]
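One simple way to implement the described grid reduction (64 to 32 points per direction by averaging) is block-averaging each 2x2x2 cell of the cubic grid. A self-contained sketch, using nested lists rather than whatever array library the authors actually used:

```python
# Sketch of the described coarsening: average 2x2x2 blocks of a cubic grid
# (nested lists of floats), halving the point count along each direction,
# e.g. 64 -> 32. This is an assumption about the averaging scheme; the
# excerpt only says the average of the density was taken.
def downsample2(grid):
    n = len(grid)
    m = n // 2
    out = [[[0.0] * m for _ in range(m)] for _ in range(m)]
    for i in range(m):
        for j in range(m):
            for k in range(m):
                s = 0.0
                for di in range(2):
                    for dj in range(2):
                        for dk in range(2):
                            s += grid[2 * i + di][2 * j + dj][2 * k + dk]
                out[i][j][k] = s / 8.0
    return out
```

Halving each direction cuts the grid-point count by a factor of eight, which directly shrinks the dominant electrostatic-potential construction step the excerpt identifies.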

Figure 14 Illustrative timings comparing Schwarz (QQ) and MBIE screening for calculating the exchange matrix for a series of DNA molecules (n = 1, 2, 4, 8, 16) with up to 1052 atoms (10674 basis functions). All calculations were performed within the LinK method and with a 6-31G basis at a threshold of 10 on an Intel Xeon 3.6 GHz machine.
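The Schwarz (QQ) screening compared in the figure rests on the Cauchy-Schwarz bound |(ab|cd)| <= sqrt((ab|ab)) * sqrt((cd|cd)): an integral batch is skipped when the bound falls below the threshold. A minimal sketch of that test (variable names are illustrative, not from a particular code):

```python
import math

# Sketch of the Schwarz (QQ) screening test: with precomputed diagonal
# integrals Q_ab = (ab|ab), the batch (ab|cd) is computed only when the
# Cauchy-Schwarz upper bound sqrt(Q_ab)*sqrt(Q_cd) reaches the threshold.
def survives_schwarz(q_bra, q_ket, threshold):
    return math.sqrt(q_bra) * math.sqrt(q_ket) >= threshold

# Example: count how many of a set of (hypothetical) pair magnitudes survive.
def count_survivors(q_values, threshold):
    return sum(1 for qa in q_values for qb in q_values
               if survives_schwarz(qa, qb, threshold))
```

MBIE screening tightens this by additionally exploiting the 1/R decay between well-separated charge distributions, which is why it prunes more integrals than QQ for the larger DNA fragments.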







