OpenSkills – Stop bloating your LLM context with unused agent instructions
Hello HN,

I’ve been building AI agents lately and kept running into the "context bloat" problem: when an agent has 20+ skills, stuffing every system prompt, reference doc, and tool definition into a single request quickly hits token limits and degrades model performance (the "lost in the middle" effect).

To solve this, I built OpenSkills, an open-source SDK that implements a Progressive Disclosure Architecture for agent skills.

The Core Concept: Instead of loading everything upfront, OpenSkills splits a skill into three layers:

Layer 1 (Metadata): Lightweight tags and triggers (always loaded, for discovery).

Layer 2 (Instruction): The core SKILL.md prompt (loaded only when the skill is matched).

Layer 3 (Resources): Heavy reference docs or scripts that are conditionally loaded based on the specific conversation context.
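In practice, a skill maps onto a single SKILL.md file. The layout below is my guess at the shape of such a file to illustrate the three layers — not the project's exact schema:

```markdown
---
# Layer 1: metadata, always in context
name: finance
triggers: [tax, invoice, vat]

# Layer 3: resources, each gated on its own keywords
resources:
  tax-code.pdf: [tax, compliance]
---

# Layer 2: the core instruction, loaded only when a trigger matches
You are a finance assistant. Follow local tax regulations...
```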

Why this matters:

Scalability: You can have hundreds of skills without overwhelming the LLM's context window.

Markdown-First: Skills are defined in a simple SKILL.md format. It’s human-readable, git-friendly, and easy for the LLM to parse.

Conditional Resources: For example, a "Finance Skill" only pulls in the tax-code.pdf reference if the query actually mentions tax compliance.
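To make the conditional loading concrete, here is a minimal, self-contained sketch of the idea in plain Python — an illustration of progressive disclosure, not the OpenSkills API (all names here are mine):

```python
from dataclasses import dataclass, field

@dataclass
class Skill:
    """A skill whose heavier layers are fetched only on demand."""
    name: str
    triggers: list[str]               # Layer 1: always held in context
    instruction_path: str             # Layer 2: loaded on trigger match
    # Layer 3: resource path -> keywords that justify loading it
    resource_paths: dict[str, list[str]] = field(default_factory=dict)

    def matches(self, query: str) -> bool:
        q = query.lower()
        return any(t in q for t in self.triggers)

def build_context(skills: list[Skill], query: str,
                  read_text=lambda p: open(p).read()) -> str:
    """Assemble a prompt containing only the layers this query needs."""
    q = query.lower()
    parts = []
    for skill in skills:
        if not skill.matches(query):
            continue                  # only Layer 1 metadata was ever held
        parts.append(read_text(skill.instruction_path))        # Layer 2
        for path, keywords in skill.resource_paths.items():
            if any(k in q for k in keywords):                  # Layer 3
                parts.append(read_text(path))
    return "\n\n".join(parts)
```

With this sketch, a "weather" query against a finance skill contributes zero tokens beyond its metadata, while a "tax compliance" query pulls in both the instruction and the tax-code reference.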

Key Features:

Python 3.10+ SDK.

Automatic skill matching and invocation.

Support for script execution (via [INVOKE:script_name] syntax).

Multimodal support (Images via URL/base64).
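For the script-execution feature, the model's output presumably embeds directives like [INVOKE:script_name] that the runtime picks up. A sketch of extracting them — my illustration of the syntax, not the SDK's internals:

```python
import re

# Matches [INVOKE:script_name] directives in model output.
INVOKE_RE = re.compile(r"\[INVOKE:([A-Za-z0-9_\-]+)\]")

def extract_invocations(llm_output: str) -> list[str]:
    """Return script names the model asked to run, in order of appearance."""
    return INVOKE_RE.findall(llm_output)
```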

GitHub: https://github.com/twwch/OpenSkills

PyPI: pip install openskills-sdk
