AISBF is now in BETA. It provides a single API endpoint that can route requests to OpenAI, Anthropic, Google, Ollama, and many other providers, with features like request splitting, semantic response caching, adaptive rate limiting, provider‑native caching, and OAuth2 integrations (Claude, Kilo, Codex, Qwen).
• Hosted demo (no setup): https://aisbf.cloud
• Self‑host: pip install aisbf
• Source code: https://git.nexlab.net/nexlab/aisbf.git
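For anyone curious what talking to a single gateway endpoint looks like, here is a minimal sketch. It assumes AISBF exposes an OpenAI-compatible chat completions endpoint; the URL, path, and model naming below are illustrative assumptions, not taken from the project docs:

```python
import json

# Assumed self-host address and path; AISBF's actual endpoint may differ.
AISBF_URL = "http://localhost:8000/v1/chat/completions"

def build_request(model: str, prompt: str) -> dict:
    """Build a chat-completion payload that one gateway endpoint
    could route to OpenAI, Anthropic, Google, Ollama, etc."""
    return {
        "model": model,  # hypothetical: routing may key off the model name
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_request("gpt-4o-mini", "Hello!")
print(json.dumps(payload))

# To send for real (requires a running AISBF instance):
# import urllib.request
# req = urllib.request.Request(
#     AISBF_URL,
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read().decode())
```

The point of a gateway like this is that the client code above stays the same while the backend provider changes underneath it.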
We’d love feedback from the community, especially from anyone juggling multiple LLM APIs who wants to simplify routing and reduce costs.