Marcy @ MERCURY User Guide

System Resources

The machine has 28 16-core nodes: a master/login node, a storage (I/O) node, 12 thin (32 GB RAM) nodes, 5 medium (64 GB RAM) nodes, 8 bigmem (128 GB RAM) nodes, and 1 GPU-containing node (16 CPU cores + 2 GPUs).

  • Master/login node: 16 Intel E5-2660 cores, 64 GB RAM, 1 TB mirrored disk
  • Storage (I/O) node: 16 Intel E5-2660 cores, 64 GB RAM, 17 TB disk array
  • Bigmem compute nodes (node1-8): 16 Intel E5-2660 cores, 128 GB RAM, 2 TB striped disk
  • Mediummem compute nodes (node23-26): 16 Intel E5-2650-v3 cores, 64 GB RAM, 1 TB disk
  • Smallmem compute nodes (node9-20): 16 Intel E5-2660 cores, 32 GB RAM, 1 TB disk
  • GPU node (node22): 16 Intel E5-2660 cores, 64 GB RAM, 2 Nvidia Tesla K20 GPUs

Marcy (master)

  • (2) Intel Sandy Bridge E5-2660 processors (2.20 GHz, eight-core, 95 W)
  • 64 GB memory (8 GB x 8); 4 GB of memory per core
  • (2) 1 TB hard drives (RAID 1, mirrored)

This is the main queue server system. All jobs should be submitted from this host.

This is also the machine that should be used to compile software and push any needed changes out to the cluster nodes.
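For example, once your input files and batch script are ready on Marcy, a typical Torque session looks like this (a minimal sketch; run.pbs is an illustrative script name):

# On the master node (Marcy): submit and monitor a batch job
qsub run.pbs        # submit the batch script; prints a job ID
qstat -u $USER      # check the status of your jobs
# qdel <jobid>      # cancel a job if needed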

io

  • (2) Intel Sandy Bridge E5-2660 processors (2.20 GHz, eight-core, 95 W)
  • 64 GB memory (8 GB x 8); 4 GB of memory per core
  • (27) 900 GB 10K SAS HDDs (RAID 6, 20 TB usable storage)

This is the home directory and file storage server; your home directory actually lives here. All the other machines mount your home directory from this machine, which is part of what enables your single sign-on across multiple computers. You should never need to log into this machine and do any “work” on it; overloading this server with user processes will degrade performance for the rest of the computers in the cluster. The only exception is the first time you log into the system, so that your “home” directory gets created. If for some reason your home directory were removed, logging back into this server would re-create it.
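If you are curious, you can confirm that your home directory is network-mounted rather than stored on the local node (a quick check; the exact filesystem and server names in the output depend on how the mount is configured):

# Show which filesystem your home directory lives on
df -h $HOME
# The Filesystem column should point at an export from the io server,
# not at a local disk on the machine you are logged into.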

compute nodes

The system has 12 thin (32 GB RAM) nodes, 5 medium (64 GB RAM) nodes, 8 bigmem (128 GB RAM) nodes, and 1 GPU-containing node (16 CPU cores + 2 GPUs).

Bigmem nodes

8 nodes (called node1, node2, node3, node4, node5, node6, node7, node8)

  • (2) Intel Sandy Bridge E5-2660 processors (2.20 GHz, eight-core, 95 W)
  • 128 GB memory (16 GB x 8); 8 GB of memory per core
  • (2) 1 TB hard drives (RAID 0, striped)

medium-size nodes

5 nodes (called node21, node23, node24, node25, node26)

  • (2) Intel Sandy Bridge E5-2660-v2 processors (2.60 GHz, eight-core, 95 W)
  • 64 GB memory (8 GB x 8); 4 GB of memory per core
  • (1) 1 TB hard drive

thin nodes

12 nodes (called node9, node10, node11, node12, node13, node14, node15, node16, node17, node18, node19, node20)

  • (2) Intel Sandy Bridge E5-2660 processors (2.20 GHz, eight-core, 95 W)
  • 32 GB memory (4 GB x 8); 2 GB of memory per core
  • (1) 1 TB hard drive

GPU-containing node

1 node (called node22)

  • (2) Intel Sandy Bridge E5-2660 processors (2.20 GHz, eight-core, 95 W)
  • (2) NVIDIA Tesla K20 GPU cards
  • 64 GB memory (8 GB x 8); 4 GB of memory per core
  • (2) 1 TB hard drives (RAID 0, striped)

Specs of each Nvidia Tesla K20 GPU

| Feature | Tesla K20 |
| Number and type of GPU | 1 Kepler GK110 |
| Peak double precision floating point performance | 1.17 Tflops |
| Peak single precision floating point performance | 3.52 Tflops |
| Memory bandwidth (ECC off) | 208 GB/sec |
| Memory size (GDDR5) | 5 GB |
| CUDA cores | 2496 |

(For comparison, the 16 CPU cores of each non-GPU node have a combined peak performance of roughly 280-380 Gflops, whereas the 2496 CUDA cores of a single Nvidia Tesla K20 have a peak double-precision performance of 1170 Gflops, i.e., roughly 3-4 times the throughput of a full node's CPUs.)
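If your job lands on node22 (for example through the gpu queue), you can inspect the two K20s with NVIDIA's standard utility (a quick check; assumes nvidia-smi from the NVIDIA driver is on the node's path):

# List the GPUs installed in the node
nvidia-smi -L
# Show utilization, memory usage and running processes on each GPU
nvidia-smi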

Software Resources

The machine has a lot of software stored in /usr/local/Dist, and more will be added as necessary. Most of this software is added to your path at login, but some packages need to be loaded using the 'modules' tool (see the short example after the General Tools list below). Here are the most commonly used packages.

Chemistry

  1. Gaussian09 A.02 + D.01
  2. Gaussian03 A.01
  3. NWchem 6.5, 6.3
  4. PSI 4
  5. Gamess 2013(+cuda), 2011
  6. Cfour 1.0
  7. AMBER 9, 12 (+cuda)
  8. NAMD 2.9 (+cuda)
  9. GROMACS 5.0
  10. LAMMPS
  11. OpenMM
  12. ORCA 3.0.2/3.0.3
  13. Openbabel 2.3.2
  14. libEFP
  15. Espresso
  16. Siesta 3.2
  17. cp2k 2.5
  18. cpmd 3.17
  19. cluster 1.0/1.1
  20. dftb+

General Tools

  1. Intel Compilers (12, 13)
  2. Intel MKL libraries (10.3, 13)
  3. OpenMPI (1.3.3, 1.6.4)
  4. MVAPICH2 (1.9)
  5. FFTW (3.1.5, 3.3.3)
  6. Python (2.6, 2.7, 3.3)
  7. CMake 2.8
  8. CUDA 4.2, 5.0, 5.5, 6.0
  9. JRE 1.8.0
  10. Swift
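As a short, purely illustrative sketch of how the 'modules' tool ties these pieces together on the master node, the commands below load an Intel compiler, MKL, and an MPI stack (module names taken from the 'module avail' output shown later) and build a small MPI program; hello.c is a placeholder source file:

# Load a compiler, math library and MPI stack via modules
module load intel/13.1.2
module load mkl/13.1.2
module load mpi/openmpi-1.6.4_intel-13.1.2_ib
module list                     # confirm what is loaded
# Build an MPI program with the OpenMPI compiler wrapper (hello.c is illustrative)
mpicc -O2 hello.c -o hello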

Modules

  • Modules loaded at login (execute 'module list' at login)
  1. modules
  2. torque-maui
  3. mvapich2

  • Other available modules (execute 'module avail')

Benchmarks

Running Calculations

Queue Policies

Jobs must be submitted to the 'mercury' queue and they are routed to the appropriate execution queue based on their walltime and memory requests.

Note that higher PR (Priority) means jobs will be picked up faster relative to those with lower PR.

| Submit Queue | Execution Queue | Nodes Available | Cores | Max Wallclock | Max Memory | Run Limit per User | Restrictions |
| mercury | small | node3-node22 | 1-8 | 120:00:00 | 32GB | 8 | None |
| mercury | short-smallmem | node9-21 | 1-240 | 12:00:00 | 32GB | - | None |
| mercury | medium-smallmem | node9-21 | 1-240 | 48:00:00 | 32GB | - | None |
| mercury | long-smallmem | node9-21 | 1-240 | 400:00:00 | 32GB | 5 | None |
| mercury | short-bigmem | node1-8 | 1-128 | 12:00:00 | 128GB | - | None |
| mercury | medium-bigmem | node1-8 | 1-128 | 48:00:00 | 128GB | - | None |
| mercury | long-bigmem | node3-8 | 1-128 | 400:00:00 | 128GB | 3 | None |
| gpu | gpu | node22 | 1-2 | 400:00:00 | 64GB | 1 | None |
| bucknell | bucknell | node23-26 | 1-64 | 400:00:00 | 64GB | - | Bucknell Only |

The nodes are divided among these execution queues based on their specs, as shown in the table above.
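A minimal Torque/PBS batch script that works with this routing might look like the following (a sketch only; the job name, resource requests, and commands are illustrative and should be adapted to your job):

#!/bin/bash
#PBS -q mercury              # submit queue; routing picks the execution queue
#PBS -N myjob                # job name (illustrative)
#PBS -l nodes=1:ppn=16       # one node, all 16 cores
#PBS -l walltime=48:00:00    # requested walltime (used for routing)
#PBS -l mem=32gb             # requested memory (used for routing)

cd $PBS_O_WORKDIR            # start in the directory the job was submitted from
# ... load any needed modules and run your application here ...

Submit such a script from Marcy with 'qsub' as shown earlier; with these requests the job would be routed to one of the smallmem execution queues.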

Example Runs

The directories here show software test runs performed both interactively and through our queue management system (Torque+Maui, which is PBS-compatible).
You can find examples of batch submission files, typically named run.pbs or ending in .pbs.

Index of /~software_test

  • AMBER/
  • CPMD/
  • Desmond/
  • G03/
  • G09/
  • Gamess/
  • MOPAC/
  • Molpro/
  • NAMD/
  • ORCA/
  • Ogolem/
  • OpenMM/
  • Psi4/
  • Torque+Maui_Queue/
  • cfour/
  • cluster1.0/
  • cp2k/
  • dftb+/
  • espresso/
  • lammps/
  • libEFP/
  • nwchem/
  • siesta/
  • swift/
  • tinker/

Using Modules

You can execute

module avail

to see the available modules, and get documentation about running a particular module with

module show module_name

user@master[~] module avail

-------------------------- /usr/local/Modules/versions ---------------------------------------------------------------------------------------
3.2.10

--------------------- /usr/local/Modules/3.2.10/modulefiles ----------------------------------------------------------------------------------
amber12/default                   cuda/default                      intel/12.1.4                      module-info                       openbabel/2.3.2
amber12/intel-cuda                dftb+/1.2                         intel/13.1.2                      modules                           openbabel/default
amber9/default                    dot                               intel/default                     mpi/default                       openmm/5.4
cfour/1.0                         espresso/5.1                      jre/1.8.0                         mpi/mvapich2-1.9_gnu-4.4.7_ib     openmm/6.1
cfour/2.0                         fftw/3.3.3                        lammps/2013                       mpi/mvapich2-1.9_intel-13.1.2_ib  psi4/default
cmake/2.8                         g03/A01                           lammps/default                    mpi/openmpi-1.6.4_gnu-4.4.7_ib    python/2.7.6
cp2k/2.5                          g09/A02                           lib/atlas-3.10.1                  mpi/openmpi-1.6.4_intel-13.1.2_ib python/3.3.0
cpmd/3.17.1                       g09/D01                           libefp/1.2.1                      namd/2.9                          siesta/3.2
cpmd/default                      gamess/2011                       libint/1.1.4                      namd/2.9-cuda                     swift
cuda/4.2                          gamess/2013                       mkl/10.3                          null                              torque-maui
cuda/5.0                          gamess/2013-gpu                   mkl/13.1.2                        nwchem/6.3                        use.own
cuda/5.5                          gamess/default                    mkl/default                       nwchem/6.5
cuda/6.0                          gromacs/5.0-beta                  module-git                        nwchem/default

user@master[~] module show nwchem/6.5

-------------------------------------------------------------------
/usr/local/Modules/3.2.10/modulefiles/nwchem/6.5:

module-whatis     Adds `/usr/local/Dist/nwchem-6.3' to your 'PATH/LD_LIBRARY/MANPATH' environment 
module-whatis     To run NWChem calculations, use the runnwchem.csh script. 
module-whatis     Usage: runnwchem.csh inputfile ScratchDirName NumberOfProcesses/CoresToUse 
module-whatis        Eg: runnwchem.csh test.nw myScrDir 16
 
module-whatis     Or adapt the script (/usr/local/Dist/bin/runnwchem.csh for your purposes. 
module       load mkl/13.1.2 
module       load mpi/openmpi-1.6.4_intel-13.1.2_ib 
module       load cuda/5.5 
prepend-path    PATH /usr/local/Dist/nwchem-6.5/bin/LINUX64 
prepend-path    LD_LIBRARY_PATH /usr/local/Dist/nwchem-6.5/lib/LINUX64 
-------------------------------------------------------------------
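Following the whatis text above, a typical NWChem run on this system looks something like this (test.nw and myScrDir are the placeholder input-file and scratch-directory names from the usage line):

# Load the NWChem module (this also loads MKL, OpenMPI and CUDA, as shown above)
module load nwchem/6.5
# Run the input on 16 cores using the provided wrapper script
runnwchem.csh test.nw myScrDir 16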

System Administration Documentation

Performance Monitoring

Performance Monitoring: Ganglia Performance Monitoring

CPU Temps: CPU Temp Monitoring


Sponsored by the Mercury Consortium.

Please direct any questions to: support@mercuryconsortium.org

 