Slurm version 24.05.0 is now available
We are pleased to announce the availability of Slurm version 24.05.0.
To highlight some new features in 24.05:
- Isolated Job Step management. Enabled on a job-by-job basis with the --stepmgr option, or globally through SlurmctldParameters=enable_stepmgr. (A configuration sketch follows this list.)
- Federation: client commands can now continue to operate while SlurmDBD is unavailable.
- New MaxTRESRunMinsPerAccount and MaxTRESRunMinsPerUser QOS limits.
- New USER_DELETE reservation flag.
- New Flags=rebootless option on Features for node_features/helpers, which indicates that the given feature can be enabled without rebooting the node.
- Cloud power management options: a new "max_powered_nodes=<limit>" option in SlurmctldParameters, and new SuspendExcNodes=<nodes>:<count> syntax allowing <count> nodes out of a given node list to be excluded from power saving.
- StdIn/StdOut/StdErr now stored in SlurmDBD accounting records for batch jobs.
- New switch/nvidia_imex plugin for IMEX channel management on NVIDIA systems.
- New RestrictedCoresPerGPU option at the Node level, designed to ensure that GPU workloads always have access to a certain number of CPU cores even when non-GPU workloads are running on the node concurrently.
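For illustration, the sketch below shows how a few of these options might be combined in slurm.conf. The parameter names are taken from the list above; the node names, counts, and other values are hypothetical examples, so consult the 24.05 documentation for exact syntax and defaults.

```
# Hypothetical slurm.conf fragments illustrating several 24.05 options.
# Option names come from the announcement above; all values are examples only.

# Enable step management globally (the per-job alternative is the --stepmgr option),
# and cap the number of powered-up cloud nodes.
SlurmctldParameters=enable_stepmgr,max_powered_nodes=64

# Keep up to 2 nodes out of cloud[1-16] excluded from power-down at any time.
SuspendExcNodes=cloud[1-16]:2

# Example node definition reserving CPU cores for GPU workloads.
NodeName=gpu01 CPUs=64 Gres=gpu:4 RestrictedCoresPerGPU=2
```

The new MaxTRESRunMinsPerAccount and MaxTRESRunMinsPerUser QOS limits are set on QOS records in the accounting database (e.g. via sacctmgr) rather than in slurm.conf.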
The Slurm documentation has also been updated to the 24.05 release. (Older versions can be found in the archive, linked from the main documentation page.)
Downloads are available here.