Ask HN: Is OpenAPI enough for LLM-based API integrations?
1 point | 1 hour ago | 0 comments
We've been running into a recurring issue while using LLMs (Cursor, agents, etc.) to build real API integrations.

OpenAPI is excellent at describing request/response shapes, but it doesn't capture the execution semantics that matter in production, such as:
- retries and backoff rules
- idempotency guarantees
- auth and token refresh behavior
- SDK-specific constraints
- what must not be done during execution
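For a concrete sense of what's missing: none of these rules have a standard home in an OpenAPI document today, yet a client has to enforce them at call time. A minimal Python sketch, where the operation names, rule fields, and `call_with_rules` helper are all invented for illustration:

```python
import random
import time

# Hypothetical per-operation execution rules that OpenAPI has no
# standard field for: retry policy, idempotency, required safeguards.
RULES = {
    "createPayment": {
        "idempotent": False,   # never blindly retry a non-idempotent call
        "max_retries": 0,
        "requires_idempotency_key": True,
    },
    "getPayment": {
        "idempotent": True,
        "max_retries": 3,
        "backoff_base_s": 0.5,  # exponential backoff with jitter
    },
}

def call_with_rules(op, send):
    """Execute send() under the operation's declared execution rules."""
    rules = RULES[op]
    # Only idempotent operations earn retry attempts.
    attempts = 1 + (rules["max_retries"] if rules["idempotent"] else 0)
    for attempt in range(attempts):
        try:
            return send()
        except ConnectionError:
            if attempt == attempts - 1:
                raise
            delay = rules["backoff_base_s"] * (2 ** attempt)
            time.sleep(delay + random.uniform(0, delay / 2))
```

The point of the sketch: an LLM generating this wrapper needs the rules table from somewhere, and today it's usually scraped from prose docs or guessed.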

In addition, large OpenAPI specs tend to be hard for LLMs to consume incrementally: they don't chunk cleanly by operation or behavior, and important constraints get lost when context is truncated.
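To illustrate the chunking problem, here is a naive per-operation splitter (field names follow OpenAPI 3.x; the splitting strategy itself is our assumption, not an existing tool):

```python
def chunk_by_operation(spec: dict) -> dict:
    """Split an OpenAPI document into one retrievable chunk per operation,
    carrying along the top-level context each chunk still depends on."""
    chunks = {}
    for path, methods in spec.get("paths", {}).items():
        for method, op in methods.items():
            op_id = op.get("operationId", f"{method.upper()} {path}")
            chunks[op_id] = {
                "path": path,
                "method": method,
                "operation": op,
                # Shared context that would otherwise be truncated away:
                "servers": spec.get("servers", []),
                "security": spec.get("security", []),
            }
    return chunks
```

Even this toy version shows the failure mode: anything referenced via `$ref` in `components`, or stated only in prose descriptions, is not pulled into the chunk, which is exactly where constraints get lost.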

We've open-sourced an experiment called "Wreken spec": a small, explicit file that lives alongside OpenAPI / SDKs and encodes execution rules and constraints in a way that's intentionally:
- operation-scoped
- chunkable / retrievable independently
- designed for machines (LLMs), not humans

This is very early, and we're not confident this is the right abstraction.

We'd love feedback on:
- whether this should exist as a separate file at all
- whether this belongs in OpenAPI extensions instead
- whether we're reinventing something that already exists
- failure modes or scaling issues we may be overlooking

Spec + examples: https://gitlab.com/swytchcode/wrekenfile

Context / motivation: https://wreken.com

Genuinely looking for critique—happy to be told this is the wrong direction.
