Show HN: PolyMCP Skills – Scalable Tool Organization for MCP-Based AI Agents
1 point
1 hour ago
| 1 comment
Hi everyone, I added a skills system to PolyMCP to solve a common issue with MCP servers at scale.

When the number of tools grows:

• Agents burn tokens loading raw tool schemas
• Tool discovery becomes noisy
• Different agents need different subsets of tools
• Orchestration logic leaks into prompts

Skills are curated, structured sets of MCP tools with documentation. Agents load only the skills they need instead of full tool schemas.

Skills are generated by discovering tools from MCP servers and auto-categorizing them.
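Conceptually, a skill is just a named, documented subset of tools. A purely illustrative sketch (this is not PolyMCP's actual on-disk format, and the tool names are only examples from the Playwright MCP server):

# Illustrative only: a skill as a named, documented subset of MCP tools.
# PolyMCP's real generated format may differ.
browser_skill = {
    "name": "browser_automation",
    "description": "Navigate pages, click elements, and extract content via Playwright MCP.",
    "tools": ["browser_navigate", "browser_click", "browser_snapshot"],
}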

Example: generate skills from a Playwright MCP server:

polymcp skills generate --servers "npx @playwright/mcp@latest"

HTTP MCP servers:

polymcp skills generate \
  --servers "http://localhost:8000/mcp" \
  --output ./mcp_skills

Stdio MCP servers:

polymcp skills generate \
  --stdio \
  --servers "npx -y @playwright/mcp@latest" \
  --output ./mcp_skills

Enable skills in an agent:

agent = UnifiedPolyAgent(
    llm_provider=llm,
    skills_enabled=True,
    skills_dir="./mcp_skills",
)
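From there the agent pulls in only the skills relevant to a task instead of every tool schema. A rough usage sketch (the run() call and whether it is sync or async are assumptions, check the repo for the actual API):

# Hypothetical usage sketch; method name and call style are assumptions,
# not confirmed PolyMCP API.
result = agent.run("Open https://example.com and return the page title")
print(result)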

Benefits:

• Smaller agent context
• Scales to large tool sets
• Reusable capabilities across agents
• Tool access control without prompt changes
• Works with HTTP and stdio MCP servers

Repo: https://github.com/poly-mcp/Polymcp

storystarling
33 minutes ago
Curious how this integrates with LangGraph, specifically if these skills map to subgraphs. I've been managing context bloat by filtering tools manually, so I am wondering if the dynamic loading here actually lowers latency compared to that standard approach.
justvugg
23 minutes ago
Hi, thanks for the comment. Skills don’t map to LangGraph subgraphs; they just scope tool schemas per task. Latency gains come from smaller prompts (no large MCP schema serialization). If you already filter tools manually, the main benefit is simpler, reusable orchestration.