ITC computational resources are managed by Research Computing (RC). ITC users have access to the FAS (Faculty of Arts and Sciences) Odyssey cluster, which has over 18,000 cores available for use. Information on the general-use queues for Odyssey, as well as the software available on the cluster, can be found on the RC website.
In addition to Odyssey, the ITC has purchased two additional clusters for its own dedicated use. The ITC Cluster consists of 22 nodes, each with four 16-core AMD Opteron 6274 (Interlagos) processors and 4 GB of RAM per core, for a total of 256 GB of RAM per node. This gives a total of 1408 cores and 5.5 TB of RAM available for use. The nodes are interconnected with QDR InfiniBand.
There is one LSF queue for the ITC Cluster, named itc_cluster. This queue has a run time limit of 4 days but no limit on the number of cores that can be requested, and it is subject to normal fairshare rules. The queue is for parallel work only and requires jobs larger than 16 cores. Serial work should be sent to the other ITC queues or the general-purpose serial queues.
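As a sketch of how such a job might be submitted (the queue name and its 4-day limit come from this page; the executable name ./mycode.x and the core count are placeholders, and site-specific options should be checked against the RC documentation), an LSF batch script for the itc_cluster queue could look like:

```shell
#!/bin/bash
# Hypothetical LSF batch script for the itc_cluster queue.
#BSUB -q itc_cluster      # ITC parallel queue (jobs must use more than 16 cores)
#BSUB -n 64               # request 64 cores
#BSUB -W 96:00            # wall time of 4 days, the queue's run time limit
#BSUB -o job_%J.out       # stdout file; %J expands to the LSF job ID
#BSUB -e job_%J.err       # stderr file
#BSUB -J my_parallel_job  # job name

# ./mycode.x is a placeholder for your MPI executable.
mpirun -np 64 ./mycode.x
```

The script would be submitted with `bsub < job.lsf`.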
There is also a general-purpose serial queue named itc_serial. This queue runs on Lars Hernquist's Keck nodes and consists of the same hardware as the Cerberus cluster: 792 cores spread over 127 nodes, each with 32 GB of RAM. The time limit for this queue is 4 days.
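A single-core serial job could, for example, be sent to this queue directly from the command line (./analyze.x is a placeholder executable; the 4-day wall time matches the queue limit stated above):

```shell
# Submit a one-core serial job to the itc_serial queue;
# ./analyze.x is a placeholder for your serial executable.
bsub -q itc_serial -n 1 -W 96:00 -o serial_%J.out ./analyze.x
```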
The Cerberus cluster is the old 256-core cluster. Each of its 32 nodes has two quad-core Intel Xeon E5410 (Harpertown) processors with 4 GB of RAM per core, for a total of 32 GB of RAM per node. The nodes are interconnected with DDR InfiniBand.
There are three LSF queues which access the Cerberus cluster. All are subject to normal fairshare rules and, with the one exception noted below, place no limit on the number of cores that can be requested. The major differences between the queues are their maximum allowed run times and priorities. The queues are as follows:
- itc_short: 4 hours, high priority
- itc_normal: 2 days, medium priority
- itc_long: 14 days, low priority, maximum 128 cores
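Picking a Cerberus queue is a matter of matching your expected run time to the limits above. As an illustration (core counts and the executable ./mycode.x are placeholders; only the queue names, time limits, and the 128-core cap on itc_long come from this page):

```shell
# A short test run of a few hours: the high-priority queue.
bsub -q itc_short  -n 8  -W 4:00   -o test_%J.out  mpirun -np 8  ./mycode.x

# A production run of up to two days: the medium-priority queue.
bsub -q itc_normal -n 32 -W 48:00  -o prod_%J.out  mpirun -np 32 ./mycode.x

# A run of up to 14 days (at most 128 cores): the low-priority queue.
bsub -q itc_long   -n 64 -W 336:00 -o long_%J.out  mpirun -np 64 ./mycode.x
```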
Jobs in the higher-priority queues are dispatched sooner than those in the lower-priority queues. Thus a job in the short queue with the same fairshare standing as one in the long queue will start first.
In addition to the clusters, the ITC has purchased storage beyond the normal allotment for Odyssey users. The ITC has 34 TB of scratch disk space on /n/itc1, which is not backed up. Additionally, there is 40 TB of space on /n/itcbackup1 and /n/itcbackup2, which are backed up nightly. ITC users get 2 TB of the backed-up space and 1 TB of the scratch space.
Groups within the ITC have also purchased resources beyond those provided by FAS and ITC. These resources vary from group to group and may include machines not managed by RC. If you desire more information on the resources held by a specific group please contact them directly.
To help ITC members use the cluster and the computational resources, the ITC has a member of Research Computing, Paul Edmon (pedmon@cfa), on staff. He is a trained astronomer with a background in high-performance computing (HPC) and computational astrophysics, and he is available to help with computational astrophysics questions as well as general HPC concerns.
Research Computing staff are also available to help with software installation, debugging, and problems with the cluster. Please contact RC at email@example.com if help is needed. For more information, please see the FAS Research Computing website at http://rc.fas.harvard.edu. To gain access to Odyssey and the ITC cluster, fill out the web form at http://rc.fas.harvard.edu/request. Specify that you are a member of the ITC and include the name of your PI to receive access to ITC queues and storage. Please also include information about any additional resources that you have access to through your group.
For those who want to run calculations that do not fit on the ITC resources, there is the Extreme Science and Engineering Discovery Environment (XSEDE) program. XSEDE coordinates access to 16 supercomputers, as well as high-end visualization and data analysis resources, across the country. While full usage proposals can be quite substantial, startup requests, on the order of 50,000-100,000 hours, are easily obtained by following the step-by-step instructions at https://portal.xsede.org/web/guest/new-allocation.