System Upgrades (including Slurm) to occur July 13 - 19
We have scheduled our Slurm and RHEL7 upgrade to occur on Monday, July 13 through Sunday, July 19. During this time, our HPC and Spear systems will be unavailable.
We will begin draining jobs from the HPC system on the evening of Sunday, July 12, and will shut off all login and compute nodes at 9am on Monday, July 13. We will also shut off all Spear nodes at 9am on Monday, July 13. All HPC nodes will be back online on Monday morning, July 20; Spear nodes may be back earlier.
There will be two major upgrades as a result of this maintenance:
- We will replace the MOAB scheduler software with the Slurm scheduler
- HPC and Spear systems will be upgraded from Red Hat Enterprise Linux Version 6.5 to Version 7.1
MOAB to Slurm Upgrade
We have been preparing to migrate the primary scheduler software on the HPC from MOAB/Torque to Slurm. This change will provide a more robust scheduling platform, but it will require changes to your submission scripts and job management commands. To aid in this transition, we have provided several resources:
- MOAB to Slurm Transition Guide
- MOAB to Slurm Cheat Sheet
- Slurm Test Cluster
- MOAB to Slurm Workshop - June 4
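To give a flavor of the changes ahead, below is a rough side-by-side sketch of a MOAB/Torque submission script and its Slurm equivalent. The job name, resource values, and program name are placeholders for illustration only; consult the Transition Guide and Cheat Sheet above for the exact directives supported on our cluster.

```shell
#!/bin/bash
# --- Old MOAB/Torque style (submitted with: qsub job.sh) ---
#PBS -N myjob                     # job name (placeholder)
#PBS -l nodes=1:ppn=4            # 1 node, 4 processors per node
#PBS -l walltime=04:00:00        # wall-clock limit
#PBS -o myjob.out                # stdout file

# --- New Slurm style (submitted with: sbatch job.sh) ---
#SBATCH --job-name=myjob         # job name (placeholder)
#SBATCH --nodes=1                # 1 node
#SBATCH --ntasks-per-node=4      # 4 tasks per node
#SBATCH --time=04:00:00          # wall-clock limit
#SBATCH --output=myjob.out       # stdout file

# Common job-management commands change as well:
#   qsub  -> sbatch    (submit a job)
#   qstat -> squeue    (list jobs in the queue)
#   qdel  -> scancel   (cancel a job)

./my_program                     # your application (placeholder)
```

Note that a job script only needs one set of directives; the two styles are shown together here purely for comparison.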
Site Visits
If you would like an RCC staff member to visit your group and give a custom presentation or help your staff with transitioning, please let us know.
Red Hat Enterprise Linux v7.1 Upgrade
To keep pace with the latest software developments, we are upgrading our Red Hat Enterprise Linux cluster from Version 6.5 to Version 7.1. Most HPC and Spear users will not notice any major changes as a result of this upgrade. However, it will allow us to offer current versions of major software packages, including popular compilers and libraries.
If you are interested in seeing what has changed in RHEL 7, refer to the Red Hat 7 Release Notes.
If you have any questions or issues, or if we can help with your transition to Slurm, please let us know.