Show HN: Polymcp Integrates Ollama for Local and Cloud Model Execution
We’ve integrated Ollama into Polymcp to simplify local and cloud-based execution of large language models like gpt-oss:120b, Kimi K2, and Nemotron.

With Ollama as the LLM provider, an agent can orchestrate MCP servers and manage models in a seamless, straightforward way.

Here’s a quick example:

from polymcp.polyagent import PolyAgent, OllamaProvider

def main():
    # Point the agent at a local Ollama model and an MCP server
    agent = PolyAgent(
        llm_provider=OllamaProvider(model="gpt-oss:120b"),
        mcp_servers=["http://localhost:8000/mcp"],
    )
    response = agent.run("What is the capital of France?")
    print(response)

if __name__ == "__main__":
    main()
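
Since mcp_servers takes a list, pointing the agent at several MCP servers should just be a matter of adding URLs; the second endpoint below is hypothetical:

agent = PolyAgent(
    llm_provider=OllamaProvider(model="gpt-oss:120b"),
    mcp_servers=[
        "http://localhost:8000/mcp",
        "http://localhost:8001/mcp",  # hypothetical second MCP server
    ],
)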

Why this is useful:

• Orchestration made easy: the Ollama integration takes care of wiring MCP servers and models together.

• Local & cloud execution: switch between local and cloud environments with no extra setup (see the sketch below).

• Multiple models supported: from gpt-oss:120b to Kimi K2, run the models you need.
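
Here's a minimal sketch of what the local/cloud switch looks like, assuming the only change is the model name passed to OllamaProvider; the "-cloud" tag below follows Ollama's cloud-model naming and is an assumption, not taken from the repo:

# Local execution: the model runs on your own hardware via Ollama
local_agent = PolyAgent(
    llm_provider=OllamaProvider(model="gpt-oss:120b"),
    mcp_servers=["http://localhost:8000/mcp"],
)

# Cloud execution: hypothetical cloud model tag, check the repo for exact naming
cloud_agent = PolyAgent(
    llm_provider=OllamaProvider(model="gpt-oss:120b-cloud"),
    mcp_servers=["http://localhost:8000/mcp"],
)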

It’s a quick way to streamline model execution in your projects, whether on your own hardware or in the cloud.

Looking forward to hearing how you use this!

Repo: https://github.com/poly-mcp/Polymcp
