After months in which customers have struggled to secure GPU compute capacity for machine learning workloads, AWS has introduced a solution. The company announced the general availability of Amazon Elastic Compute Cloud (EC2) Capacity Blocks for ML, a new consumption model powered by NVIDIA H100 GPUs.
With this new offering, customers can access highly sought-after GPU compute capacity on a flexible, short-term basis, removing the need for long-term commitments.
“This is an innovative new way to schedule GPU instances where you can reserve the number of instances you need for a future date for just the amount of time you require,” said Channy Yun, Principal Developer Advocate at AWS, in the launch announcement.
EC2 Capacity Blocks allow customers to reserve GPU capacity for durations ranging from one to 14 days, with reservations possible up to eight weeks in advance. These capacity blocks are deployed in EC2 UltraClusters with low-latency, high-throughput connectivity, offering the flexibility to scale up to hundreds of GPUs. The new offering is aimed at training and fine-tuning ML models, short experimentation runs, and handling temporary surges in inference demand around product launches.
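For teams that want to automate reservations, the workflow is exposed through the EC2 API. The sketch below, in Python with boto3, searches for an available Capacity Block offering and purchases it. It assumes the `describe_capacity_block_offerings` and `purchase_capacity_block` operations and the parameter names documented at launch, so the exact fields should be verified against the current SDK reference before use.

```python
# Minimal sketch: find and purchase an EC2 Capacity Block for ML.
# Assumes boto3's EC2 client exposes describe_capacity_block_offerings and
# purchase_capacity_block with the parameters shown; verify against the
# current API reference, as these are based on the launch documentation.
import boto3
from datetime import datetime, timedelta, timezone

ec2 = boto3.client("ec2", region_name="us-east-2")  # US East (Ohio)

start = datetime.now(timezone.utc) + timedelta(days=7)
end = start + timedelta(weeks=8)

# Search for offerings: 4 x p5.48xlarge (NVIDIA H100) for 48 hours,
# starting sometime in the next one to eight weeks.
offerings = ec2.describe_capacity_block_offerings(
    InstanceType="p5.48xlarge",
    InstanceCount=4,
    StartDateRange=start,
    EndDateRange=end,
    CapacityDurationHours=48,
)["CapacityBlockOfferings"]

# Pick the lowest-priced offering and purchase it; the result is a
# capacity reservation that becomes active at the scheduled start time.
cheapest = min(offerings, key=lambda o: float(o["UpfrontFee"]))
purchase = ec2.purchase_capacity_block(
    CapacityBlockOfferingId=cheapest["CapacityBlockOfferingId"],
    InstancePlatform="Linux/UNIX",
)
print(purchase["CapacityReservation"]["CapacityReservationId"])
```

Once the block begins, instances are launched by targeting the resulting capacity reservation, so only workloads that reference it consume the reserved capacity; the details of instance targeting are left out of this sketch.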
David Brown, the vice president of compute and networking at AWS, highlighted the importance of this innovation in democratising access to generative AI capabilities. AWS and NVIDIA have collaborated for over a decade to deliver scalable GPU solutions, and the introduction of Amazon EC2 Capacity Blocks is a significant step in broadening access to GPU capacity for generative AI applications.
EC2 Capacity Blocks are currently available for reservation in the AWS US East (Ohio) Region, with expansion planned to additional AWS Regions and Local Zones. The offering is aimed squarely at startups and organisations looking to harness the power of generative AI without making long-term capital commitments.
The launch of EC2 Capacity Blocks has received positive feedback from industry leaders and organisations, such as Amplify Partners, Canva, Leonardo.Ai, and OctoML. These stakeholders believe that this new solution will provide the predictability and timely access to GPU compute capacity necessary to drive innovation and meet customer demands in today’s supply-constrained environment.