
System Resources

The machine has 28 16-core nodes: a master/login node, a storage (I/O) node, 12 thin (32GB RAM) nodes, 5 medium (64GB RAM) nodes, 8 bigmem (128GB RAM) nodes, and 1 GPU-containing node (16 CPU cores + 2 GPUs).

  • Master/login node: 16 Intel E5-2660 cores, 64GB RAM, 1 TB mirrored disk
  • Storage (I/O) node: 16 Intel E5-2660 cores, 64GB RAM, 17 TB disk array
  • Bigmem compute nodes (node1 - node8): 16 Intel E5-2660 cores, 128GB RAM, 2 TB striped disk
  • Thin compute nodes (node9 - node20): 16 Intel E5-2660 cores, 32GB RAM, 1 TB disk
  • Medium compute nodes (node21, node23 - node26): 16 Intel E5-2660-v2 cores, 64GB RAM, 1 TB disk
  • GPU node (node22): 16 Intel E5-2660 cores, 64GB RAM, 2 Nvidia Tesla K20 GPUs

Marcy (master)

  • (2) Intel Sandy Bridge, E5-2660, 2.20GHz, Eight-Core, 95Watt Processor(s)
  • 64GB Memory (8GB x 8) 4GB of memory per core
  • (2) 1TB Hard Drive(s) (mirrored RAID 1)

This is the main queue server system. All jobs should be submitted from this host.

This is also the machine that should be used to compile software and push any needed changes out to the cluster nodes.
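
The queueing system is Torque with the Maui scheduler (the torque-maui module is loaded at login), so jobs are submitted with qsub. The script below is a minimal sketch; the job name, walltime, and executable are illustrative placeholders, not site defaults.

  #!/bin/bash
  #PBS -N example_job              # job name (placeholder)
  #PBS -l nodes=1:ppn=16           # one node, all 16 cores
  #PBS -l walltime=01:00:00        # one-hour limit

  cd $PBS_O_WORKDIR                # run from the directory qsub was invoked in
  ./my_program                     # hypothetical executable

Submit it from this host with 'qsub example.pbs' and monitor it with 'qstat'.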

io (storage)

  • (2) Intel Sandy Bridge, E5-2660, 2.20GHz, Eight-Core, 95Watt Processor(s)
  • 64GB Memory (8GB x 8) 4GB of memory per core
  • (27) 900GB 10K SAS HDD (RAID 6, 20TB Usable Storage)

This is the home directory and file storage server; your home directory actually lives here. Every other machine in the cluster mounts your home directory from this node, which is part of what enables single sign-on across multiple computers. You should never need to log into this machine and do any "work" on it: overloading this server with user processes will degrade performance for the rest of the cluster. The only exception is your first login to the system, which creates your "home" directory. If your home directory were somehow removed, logging back into this server would re-create it.
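
To see where your home directory is actually served from, you can check its filesystem from any node:

  # The Filesystem column should show an NFS export from io (e.g. io:/home)
  # rather than a local disk, confirming that your files live on the I/O node.
  df -h "$HOME"

The exact export path will vary; this is just a quick way to confirm the mount.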

Compute nodes

The system has 12 thin (32GB RAM) nodes, 5 medium (64GB RAM) nodes, 8 bigmem (128GB RAM) nodes, and 1 GPU-containing node (16 CPU cores + 2 GPUs).

Bigmem nodes

8 nodes (called node1, node2, node3, node4, node5, node6, node7, node8)

  • (2) Intel Sandy Bridge, E5-2660, 2.20GHz, Eight-Core, 95Watt Processor(s)
  • 128GB Memory (16GB x 8) 8GB of memory per core
  • (2) 1TB Hard Drive(s) (RAID 0 - striped)

Medium-size nodes

5 nodes (called node21, node23, node24, node25, node26)

  • (2) Intel Sandy Bridge, E5-2660-v2, 2.60GHz, Eight-Core, 95Watt Processor(s)
  • 64GB Memory (8GB x 8) 4GB of memory per core
  • (1) 1TB Hard Drive

Thin nodes

12 nodes (called node9, node10, node11, node12, node13, node14, node15, node16, node17, node18, node19, node20)

  • (2) Intel Sandy Bridge, E5-2660, 2.20GHz, Eight-Core, 95Watt Processor(s)
  • 32GB Memory (4GB x 8) 2GB of memory per core
  • (1) 1TB Hard Drive

GPU-containing node

1 node (called node22)

  • (2) Intel Sandy Bridge, E5-2660, 2.20GHz, Eight-Core, 95Watt Processor(s)
  • (2) NVIDIA K20 GPU CARDs
  • 64GB Memory (8GB x 8) 4GB of memory per core
  • (2) 1TB Hard Drive(s) (RAID 0 - striped)

Specs of each Nvidia Tesla K20 GPU

  • Number and type of GPU: 1 Kepler GK110
  • Peak double-precision floating-point performance: 1.17 Tflops
  • Peak single-precision floating-point performance: 3.52 Tflops
  • Memory bandwidth (ECC off): 208 GB/sec
  • Memory size (GDDR5): 5 GB
  • CUDA cores: 2496

(For comparison: the 16 CPU cores of each non-GPU node have a peak double-precision performance of roughly 280 - 380 Gflops, while the 2496 CUDA cores of a single Nvidia Tesla K20 peak at 1170 Gflops!)
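
As a sanity check on the CPU figure (assuming the standard Sandy Bridge rate of 8 double-precision flops per cycle per core with AVX, and the E5-2660's 2.2 GHz base / 3.0 GHz max turbo clocks):

  16 cores x 2.2 GHz x 8 flops/cycle ≈ 282 Gflops   (base clock)
  16 cores x 3.0 GHz x 8 flops/cycle ≈ 384 Gflops   (max turbo)

which is where the 280 - 380 Gflops range above comes from.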

Software Resources

The machine has a large collection of software stored in /usr/local/Dist, and more will be added as necessary. Most of this software is added to your path at login, but some packages need to be loaded using the 'modules' tool. Here are the most commonly used packages.

Chemistry

  1. Gaussian09 A.02 + D.01
  2. NWChem 6
  3. PSI4
  4. GAMESS
  5. CFOUR
  6. AMBER 12 (+CUDA)
  7. NAMD 2.9 (+CUDA)
  8. GROMACS
  9. LAMMPS
  10. OpenMM
  11. ORCA 2.9, 3.0, 3.0.1
  12. Open Babel 2.3.2

General Tools

  1. Intel Compilers (12, 13)
  2. Intel MKL libraries (10.3, 13)
  3. OpenMPI (1.3.3, 1.6.4)
  4. MVAPICH2 (1.9)
  5. FFTW (3.1.5, 3.3.3)
  6. Python (2.6, 2.7, 3.3)
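
Since the mvapich2 module is loaded at login, the MPI compiler wrappers should already be on your path on Marcy (where compilation is supposed to happen). A minimal sketch, with hypothetical source and program names:

  mpicc -O2 -o hello_mpi hello_mpi.c        # C source (hypothetical)
  mpif90 -O2 -o hello_mpi_f hello_mpi.f90   # Fortran source (hypothetical)

The Intel compilers and OpenMPI can be swapped in by loading the corresponding modules.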

Modules

  • Modules loaded at login (execute 'module list'):
  1. modules
  2. torque-maui
  3. mvapich2
  • Other available modules: execute 'module avail'
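
Typical module commands look like the following; the package name in the load/unload lines is illustrative, so pick a real one from 'module avail':

  module list              # show currently loaded modules
  module avail             # show everything installed
  module load openmpi      # add a package to your environment (name illustrative)
  module unload openmpi    # remove it again

Loads only affect the current shell session, so add any 'module load' lines you always want to your shell startup file.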

Sponsored by the Mercury Consortium.

Please direct any questions to: support@mercuryconsortium.org

 