With Ollama as the LLM provider behind a PolyAgent, you can orchestrate MCP servers and manage models in a single, straightforward setup.
Here’s a quick example:
from polymcp.polyagent import PolyAgent, OllamaProvider

def main():
    # An agent backed by a local Ollama model, wired to one MCP server
    agent = PolyAgent(
        llm_provider=OllamaProvider(model="gpt-oss:120b"),
        mcp_servers=["http://localhost:8000/mcp"],
    )
    response = agent.run("What is the capital of France?")
    print(response)

if __name__ == "__main__":
    main()
Why this is useful:
• Orchestration made easy: PolyAgent handles the complexity of talking to MCP servers, while Ollama takes care of running the model.
• Local & cloud execution: switch between local and cloud environments with no extra setup.
• Multiple models supported: from gpt-oss:120b to Kimi K2, run the models you need (see the sketch after this list).
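Swapping models is just a matter of changing the OllamaProvider tag. Here's a minimal sketch; the exact Ollama tag for Kimi K2 (written here as "kimi-k2") is an assumption, so check `ollama list` for the tags actually available on your machine:

from polymcp.polyagent import PolyAgent, OllamaProvider

# Same agent setup as above, different model.
# "kimi-k2" is an assumed tag -- substitute whatever Ollama reports.
agent = PolyAgent(
    llm_provider=OllamaProvider(model="kimi-k2"),
    mcp_servers=["http://localhost:8000/mcp"],
)
print(agent.run("Summarize the tools this MCP server exposes."))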
It’s a quick way to streamline model execution in your projects, whether on your own hardware or in the cloud.
Looking forward to hearing how you use this!