Don’t take our word for it: read the reviews and praise we’ve received over the years for the Slurm support we offer and how we help streamline process management for our clients.
Having support from SchedMD has been very valuable for getting bugs and problems fixed, and for help with configuration questions. It allows us to spend more time on running the systems and less on debugging them.
University of Oslo
SchedMD’s various support offerings remove much of the burden of building and maintaining storage and infrastructure.
Recursion
“The Technical University of Denmark, Department of Physics, has been a Slurm support customer with SchedMD since 2017. The comfort of having a ‘lifeline’ in case of Slurm issues means that we can operate with a very slim local support organization.
During our 7 years as a customer, we’ve always received timely responses from SchedMD’s support staff, even in our European time zone. The quality of the responses from support has invariably been very high and satisfactory. This has ensured that we have had an extremely stable and highly reliable cluster productivity, which is much appreciated by our demanding users.”
Technical University of Denmark
Without Slurm, customers miss out on industry-best solutions for HPC.
CoreWeave
Slurm can be quite tricky to debug, but I can always count on the SchedMD team for timely support!
Lawrence Berkeley National Laboratory
“Since 2006, BSC has relied on Slurm for job scheduling in our HPC clusters. Slurm has helped us implement and execute our workflows, and we are also pleased to contribute some plugins to the Slurm codebase.”
Barcelona Supercomputing Center
What are the Benefits of Working with SchedMD?
At its core, Slurm allocates resources, manages pending work, and executes jobs, but it’s the details of Slurm’s architecture that make it the leading workload management system across a range of industries.
Massive Scalability & Throughput
Slurm easily meets the performance requirements of small clusters, large clusters, and exascale systems. Slurm outperforms competing schedulers, scaling to 100K+ nodes and GPUs and sustaining throughput of 17M+ jobs per day and 120M+ jobs per week.
First-Class GPUs
With first-class resource management for GPUs, Slurm allows users to request GPU resources alongside CPUs. This flexibility ensures that jobs are executed quickly and efficiently while maximizing resource utilization.
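For example, a batch script can request GPUs alongside CPUs using Slurm’s generic resource (GRES) syntax. This is a minimal sketch; the job name, partition name, and application path are hypothetical placeholders for illustration:

```shell
#!/bin/bash
#SBATCH --job-name=gpu-job        # hypothetical job name
#SBATCH --partition=gpu           # hypothetical partition name
#SBATCH --gres=gpu:2              # request 2 GPUs per node
#SBATCH --cpus-per-task=8         # request 8 CPU cores alongside the GPUs
#SBATCH --time=01:00:00           # one-hour time limit

srun ./my_gpu_application         # hypothetical application binary
```

Because GPUs are tracked as schedulable resources in their own right, Slurm can pack CPU-only and GPU jobs onto the same cluster without conflicts.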
Resource Allocation
Slurm’s advanced scheduling algorithms ensure efficient resource allocation, maximizing the utilization of your computing resources. Slurm intelligently balances the workload across your cluster, accelerating job execution and improving end user productivity.
Plugin Based Architecture
Slurm can map to complex business rules and existing organizational priorities. Its plugin-based architecture makes Slurm adaptable to the conditions and needs of your individual organization.
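In practice, each major subsystem is selected through a configuration parameter in `slurm.conf`, so sites can swap scheduling, resource-selection, priority, and accounting behavior without changing the core. A brief hypothetical excerpt:

```
# Hypothetical slurm.conf excerpt: each subsystem is a pluggable module
SchedulerType=sched/backfill             # backfill scheduling plugin
SelectType=select/cons_tres              # trackable-resource (CPU/GPU/memory) selection
PriorityType=priority/multifactor        # multifactor job priority plugin
JobAcctGatherType=jobacct_gather/cgroup  # cgroup-based job accounting
```

Sites with needs not covered by the stock plugins can also write their own against the same plugin interfaces.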
On-Prem & Cloud
Slurm supports on-prem, cloud, and hybrid HPC environments. This means Slurm works with clusters that are built and expanded over time, clusters deployed on demand, or a combination of the two.
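Slurm’s power-saving framework supports this hybrid model: nodes marked with the cloud state exist only in configuration until a job needs them, at which point a site-provided script provisions them. A minimal sketch, in which the script paths and node names are hypothetical:

```
# Hypothetical slurm.conf excerpt for cloud bursting
ResumeProgram=/usr/local/bin/node_startup.sh    # hypothetical script that provisions cloud nodes on demand
SuspendProgram=/usr/local/bin/node_shutdown.sh  # hypothetical script that tears down idle cloud nodes
SuspendTime=300                                 # power down nodes idle for 5 minutes
NodeName=cloud[001-100] State=CLOUD CPUs=16     # nodes instantiated only when jobs require them
```

The same mechanism lets an on-prem cluster burst into the cloud during demand spikes and release those nodes when the queue drains.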
Open Source
As an open source workload manager, Slurm is available without the hassle of licenses and lock-ins. Open source benefits include transparent code, active development, efficient cost, agile innovation, and a strong user community.
Organize Your Workload Efficiently & Smoothly with SchedMD
Take your efficiency to the next level with Slurm from SchedMD. We can’t wait to do amazing things with you.