Ask HN: How are teams sourcing long-term GPU capacity outside hyperscalers?
I’ve been talking to a growing number of teams training and serving large models that no longer rely solely on on-demand hyperscaler GPUs.

Instead, they’re locking in reserved capacity (often 6–36 months) across a mix of providers and regions to get predictable pricing and guaranteed availability. In practice, this raises a bunch of questions:

• How do you evaluate datacenter quality and network topology across providers?

• What tradeoffs have you seen between price, geography, and interconnect?

• How much does “same GPU, different system” actually matter in real workloads?

• Any lessons learned around contracts, delivery risk, or scaling clusters over time?
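On the price question specifically, a lot of it comes down to expected utilization. Here is a rough napkin-math sketch (all rates hypothetical, not quotes from any provider) of when a reserved rate actually beats on-demand once you account for idle committed hours:

    # Napkin math with made-up numbers: effective $/GPU-hour of a
    # reserved commitment vs. on-demand, as a function of utilization.
    def effective_rate(reserved_rate, utilization):
        # A reservation bills every committed hour, used or not, so the
        # cost per *useful* GPU-hour scales with 1/utilization.
        return reserved_rate / utilization

    ON_DEMAND = 4.00   # hypothetical on-demand $/GPU-hr
    RESERVED = 2.20    # hypothetical 12-month committed $/GPU-hr

    for util in (0.40, 0.55, 0.70, 0.90):
        print(f"utilization {util:.0%}: reserved ~ "
              f"${effective_rate(RESERVED, util):.2f}/useful GPU-hr "
              f"vs ${ON_DEMAND:.2f} on-demand")

At those made-up rates the reservation only wins above roughly 55% sustained utilization, which is part of why the delivery-risk and cluster-scaling questions above end up mattering so much.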

Context: I work on a marketplace that helps teams source long-term GPU capacity across providers, so I’m seeing this pattern frequently and wanted to sanity-check it with the community.
