Show HN: Serve 100 large AI models on a single GPU with low impact on TTFT
5 points | 6 hours ago | 1 comment | github.com
I wanted to build an inference provider for proprietary AI models, but I didn't have a huge GPU farm. I started experimenting with serverless AI inference and found that cold starts were huge. I went deep into the research and put together an engine that loads large models from SSD to VRAM up to ten times faster than alternatives. It works with vLLM and transformers, with more integrations coming soon.
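
To give a rough idea of the kind of trick such an engine might use (this is my own illustrative sketch, not the project's actual code; the path and function name are placeholders): read safetensors weights into pinned host memory and push them to the GPU on a separate CUDA stream, so disk reads and PCIe transfers overlap instead of running one after the other.

    # Illustrative sketch only: overlap SSD reads with host-to-GPU copies.
    # MODEL_PATH and load_to_gpu are placeholders, not the project's API.
    import torch
    from safetensors import safe_open

    MODEL_PATH = "model.safetensors"

    def load_to_gpu(path, device="cuda"):
        copy_stream = torch.cuda.Stream()
        gpu_tensors = {}
        with safe_open(path, framework="pt", device="cpu") as f:
            for name in f.keys():
                cpu_t = f.get_tensor(name).pin_memory()  # pinned host memory
                with torch.cuda.stream(copy_stream):
                    # non-blocking copy proceeds while the next tensor is read from disk
                    gpu_tensors[name] = cpu_t.to(device, non_blocking=True)
        torch.cuda.current_stream().wait_stream(copy_stream)
        return gpu_tensors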

With this project you can hot-swap entire large models (32B) on demand.
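
From the caller's side, hot-swapping is conceptually just "free the current model's VRAM, then load the next one." A minimal sketch with plain transformers (illustrative only; swap_to and the model ids are placeholders, and a fast loader like this project's would replace the slow from_pretrained step):

    # Illustrative hot-swap loop; not the project's actual API.
    import gc
    import torch
    from transformers import AutoModelForCausalLM

    current = None

    def swap_to(model_id):
        global current
        if current is not None:
            del current               # drop references to the old weights
            gc.collect()
            torch.cuda.empty_cache()  # release the freed VRAM from the caching allocator
        current = AutoModelForCausalLM.from_pretrained(
            model_id, torch_dtype=torch.float16
        ).to("cuda")
        return current

    # e.g. swap_to("org/model-a"), serve some requests, then swap_to("org/model-b")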

It's great for:

Serverless AI Inference

Robotics

On-prem deployments

Local Agents

And it's open source.

Let me know if anyone wants to contribute :)

billconan
3 hours ago
Can you hot-swap a portion of an AI model if my GPU is not large enough to hold the entire model? So that I can run half the model first and then load the other half.