MCP tools with dependent types
69 points | 9 hours ago | 6 comments | vlaaad.github.io
dvse
5 hours ago
This is already supported via listChanged. The problem is that >90% of clients currently don’t implement it, including Anthropic’s: https://modelcontextprotocol.io/clients
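
Roughly, the flow on the wire looks like this (message shapes follow the MCP spec; treat it as a sketch rather than the exact traffic of any particular client):

    // During initialization the server advertises that its tool list can change:
    const initializeResult = {
      capabilities: {
        tools: { listChanged: true },
      },
    };

    // When the tool list changes, the server notifies the client, which is
    // then expected to re-issue tools/list:
    const toolListChangedNotification = {
      jsonrpc: "2.0",
      method: "notifications/tools/list_changed",
    };
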
vlaaad
8 hours ago
I was considering making an MCP SEP (specification enhancement proposal) — https://modelcontextprotocol.io/community/sep-guidelines, though I'm curious if other MCP tinkerers feel the issue exists, should be solved like that, etc. What do you think?
jonfw
4 hours ago
I have a blog post here that has an example of dynamically changing the tool list: https://jonwoodlief.com/rest3-mcp.html.

In this situation, I would have a tool called "request ability to edit GLTF". This would trigger an addition to the tool list specifically for your desired GLTF. The server would then send the "tool list changed" notification, and the LLM would have access.

If you want to do it without the tool-list-changed notification ability, I'd have two tools: "get schema for GLTF" and "edit GLTF with schema". If you note that the get-schema tool is a dependency of the edit tool, the LLM could probably plumb that together on its own fairly well.
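
A rough sketch of that two-tool variant, written as the tool definitions a server might return from tools/list (the tool names and properties are invented for illustration; inputSchema is plain JSON Schema per MCP):

    const tools = [
      {
        name: "get_gltf_schema",
        description:
          "Return the JSON schema for a specific glTF file. " +
          "Call this first and follow the returned schema when calling edit_gltf.",
        inputSchema: {
          type: "object",
          properties: {
            path: { type: "string", description: "Path to the glTF file" },
          },
          required: ["path"],
        },
      },
      {
        name: "edit_gltf",
        description:
          "Apply an edit to a glTF file. The `edit` argument must conform to the " +
          "schema returned by get_gltf_schema for the same path.",
        inputSchema: {
          type: "object",
          properties: {
            path: { type: "string" },
            edit: {
              type: "object",
              description: "Edit payload; its shape comes from get_gltf_schema",
            },
          },
          required: ["path", "edit"],
        },
      },
    ];

The dependency only lives in the descriptions, so it's a soft hint rather than something the protocol enforces, which is the gap the article points at.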

You could probably also support this workflow using sampling.

spullara
3 hours ago
Do any of the clients support this? I have some dynamic MCPs, and it doesn't seem like claude.ai supports them, for example.
matt-smith
5 hours ago
The Arazzo specification[0] (from OpenAPI contributors) aims to solve the dependent-arguments issue by introducing the concept of "runtime expressions"[1] within a series of independent tool calls that compose a workflow.
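
A trimmed example of what that can look like, written here as a JSON-style object rather than the usual YAML; the operation ids, step names, and parameter are invented for illustration, with field names taken from the Arazzo 1.0 spec:

    const workflow = {
      workflowId: "getSchemaThenEdit",
      steps: [
        {
          stepId: "getSchema",
          operationId: "getGltfSchema",
          outputs: {
            // Runtime expression: extract the schema from this step's response body.
            gltfSchema: "$response.body#/schema",
          },
        },
        {
          stepId: "applyEdit",
          operationId: "editGltf",
          parameters: [
            // Runtime expression: feed the previous step's output into this call.
            {
              name: "schema",
              in: "query",
              value: "$steps.getSchema.outputs.gltfSchema",
            },
          ],
        },
      ],
    };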

[0] - https://www.openapis.org/arazzo-specification
[1] - https://spec.openapis.org/arazzo/v1.0.1.html#runtime-express...

LudwigNagasena
7 hours ago
> there is no way to tell the AI agent “for this argument, look up a JSON schema using this other tool”

There is a description field, and it seems sufficient for most cases. You can also dynamically change your tools using the `listChanged` capability.

vlaaad
7 hours ago
Sure, but the need for accuracy will only increase; there is a difference between suggesting that an LLM put a schema in its context before calling the tool and forcing the LLM to conform to a structured output schema returned dynamically from a tool.

We already have 100% reliable structured outputs when building chatbots against LLM APIs directly; I don't want to lose that.

WithinReason
5 hours ago
And LLMs will get more accurate. What happens when the LLM uses the wrong parameters? If it's an immediate error, it will just try again; no need for protocol changes, just better LLMs.
vlaaad
5 hours ago
The difference between 99% reliability and 100% reliability is huge in this case.
WithinReason
5 hours ago
I misunderstood the problem, then. I thought it would take only a few seconds for the LLM to issue the call, see the error, and fix the call.
jtbayly
5 hours ago
Last time I used Gemini CLI it still couldn’t consistently edit a file. That was just a few weeks ago. In fact, it would go into a loop attempting the same edit, burning through many thousands of tokens and calls in the process, re-reading the file, attempting the same edit, rinse, repeat until I stopped it.

I didn’t find it entertaining.

wahnfrieden
5 hours ago
Big waste of context
nmilo
6 hours ago
I don't think this is a protocol issue; the LLMs simply weren't RLHFed to do that.
vlaaad
6 hours ago
Not true: structured outputs enforce output formats with 100% reliability. For example, https://platform.openai.com/docs/guides/structured-outputs says: "Structured Outputs is a feature that ensures the model will always generate responses that adhere to your supplied JSON Schema, so you don't need to worry about the model omitting a required key, or hallucinating an invalid enum value"
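
For example, with the OpenAI SDK in TypeScript (the model name and schema below are placeholders), decoding is constrained so the reply always matches the supplied schema:

    import OpenAI from "openai";

    const openai = new OpenAI();

    const completion = await openai.chat.completions.create({
      model: "gpt-4o-2024-08-06",
      messages: [
        { role: "user", content: "Rotate the wheel node 90 degrees around Y." },
      ],
      response_format: {
        type: "json_schema",
        json_schema: {
          name: "gltf_edit",
          strict: true, // constrained decoding: output always matches the schema
          schema: {
            type: "object",
            properties: {
              node: { type: "string" },
              rotationDegrees: { type: "number" },
            },
            required: ["node", "rotationDegrees"],
            additionalProperties: false,
          },
        },
      },
    });

    // The message content is guaranteed to parse against the schema above.
    const edit = JSON.parse(completion.choices[0].message.content!);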