
What's New

  • Nov 19th 2014:
  • Oct 21st 2014:
  • Sep 22nd 2014:
    • Twelve new compute nodes (64c/256GB) have been acquired to upgrade Hydra's hardware.
    • Hydra's software will be updated (to Rocks 6.1.X) and Lattice will be merged with Hydra.
    • Upgrade plans for FY2015 include more compute nodes and a new NetApp filer to double disk capacity and improve I/O throughput.
  • Mar 15th 2014:
    • The OCIO/ORIS has purchased two new nodes with 24 cores and 504GB of memory each. They are currently configured as a separate machine, called Lattice.
      • These nodes are intended for jobs that require a lot of memory.
      • Add'l information on Lattice is available here.

  • Nov 1st 2013:
    • The old compute nodes (4 cores/node, mostly in rack 0) will be decommissioned over the next month or two
    • They will be replaced by a few new compute nodes with extra memory to accommodate large-memory jobs (with a higher limit)
  • Oct 2nd 2013:
    • mvapich2 v1.9 installed for PGI and GNU compilers (solves the zombie problem, but you must recompile your MPI/IB codes)
  • Jul 11th 2013:
    • New information on memory use:
    • Updated disk space info
    • Other updates/fixes
  • Apr 2nd 2013:
    • vT*.q: very-long time limit queues added (CPU limit is 108 days)
  • Mar 11th 2013:
    • The last three R815s have been installed in rack 0, for a total of 10, bringing the cluster to 3294 cores.
  • Feb 11th 2013:
  • Jan 11th 2013:

  • Oct 1st 2012:
    • Set up proof-of-concept parallel file system (PFS, using PVFS2) of 1.3TB.
    • Plan to upgrade the 32 nodes x 4 cores in rack #0 to 10 nodes x 64 cores on IB.
  • Jul 30th 2012:
    • InfiniBand (IB) fabric deployed on 55 nodes for 844 cores (will eventually grow to 952 cores on 60 nodes)
    • The obsolete queues ([sml]T[sml]N.q) have been deleted.
  • Jul 18th 2012:
    • GNU compilers v4.7.1 are now available on Hydra (serial codes only)
  • Jul 12th 2012:
    • Queues updated: simplified, with new ones added for IB. Read the available queues section in the primer.
    • The documentation has been updated
    • The old queues will be disabled as of Mon 7/23, and then deleted.
  • Jun 27th 2012:
    • InfiniBand (IB) fabric deployed on 39 nodes for 572 cores (more to come soon, for a total of 952 cores on 60 nodes)
    • PGI compilers upgraded to 12.5, including the CDK (Cluster Dev. Kit) with support for IB (mvapich)
    • Intel compilers upgraded to 12.0.2.137, including the ICS (Intel Cluster Studio) with support for IB (mvapich)
    • IB support for openmpi (mvapich-1.2.0 for gcc)
  • Apr 5th 2012: PGI compilers upgraded to 12.3 (details in the relevant primer).
  • March 16th 2012:
    • Hydra has been upgraded to Rocks 5.4.3,
    • New queues and PEs are available, read the write-up to learn more,
    • These new queues will become the only ones available as of Wed Mar 21st at noon.
    • A 10th rack, additional memory and some InfiniBand cards have been added to the cluster.
    • The status of the current h/w and s/w changes was updated (Mar 16th).
  • March 9th 2012:
    • Hydra has been upgraded to Rocks 5.4.3, although the upgrade is not yet completed.
    • New queues and PEs are available, read the write-up to learn more,
    • These new queues will become the only ones available once the upgrade is deemed complete.
  • February 29th 2012:
    • Hydra will be upgraded Wed March 7th to Rocks 5.4.3,
    • New queues and PEs will be implemented, read the RFC (request for comments) to learn more.
  • February 14th 2012:
    • NetApp filer upgraded, additional disk space will come on-line soon,
    • openmp PE (parallel environment) added to all.q,
    • changed the all.q queue property to limit virtual memory to 16GB/20GB (s_vmem/h_vmem).
  • January 27th 2012: More primers have been written, and GE man pages are available as html, ps or pdf files.
  • January 20th 2012: A new Wiki page for primers has been created with some primers written.
  • January 19th 2012: news about upcoming h/w and s/w changes.
  • January 4th 2012: What's New page added (see note below).

  • December 22nd, 2011: visit to Herndon (VA) was successful; we can expect:
    • h/w upgrades (add'l nodes, a better disk filer, and some racks connected via InfiniBand)
    • s/w upgrades (Rocks 5.4, Lustre 1.8, and a structured queue configuration) to come soon (Q1 of 2012).
  • December 12th, 2011: the FAQs are being added, more to come, stay tuned.
  • November 2011: CfA-wide meeting announcement.

Note

  • The purpose of this page is to provide a single place to look for what has changed on this Wiki.
  • You can subscribe to automatically receive an email when this page (or any page on the Wiki) changes.
  • Email hpc@cfa to request such a subscription.

- SylvainKorzennikHPCAnalyst - created 05 Jan 2012 - last updated 25 Jan 2012
