Structured Outputs Create False Confidence
79 points
5 hours ago
| 26 comments
| boundaryml.com
| HN
kgeist
25 minutes ago
[-]
Just a week ago, I rewrote our RAG pipeline to use structured outputs, and the tests showed no significant difference in quality after a few tweaks (under vLLM). What helped was that we have a pipeline where another LLM automatically scores 'question-expected answer' pairs, so what we did was: tweak the schema/prompt => evaluate => tweak again, until we got good results in most cases, just like with free-form prompts.

Several issues were found:

1. A model may sometimes get stuck generating whitespace at the end forever (the JSON schema allows it), which can lock up the entire vLLM instance. The solution was to use xgrammar, which has a handy feature that disallows whitespace outside of strings.

2. In some cases I had to fiddle with metainformation like minItems/maxItems for arrays, or the model would either hallucinate or refuse to generate anything.

3. Inference engines may reorder the fields during generation, which can impact quality due to the autoregressive nature of LLMs (e.g., the "calculation" field must come before the "result" field). Make sure the fields are not reordered.

4. Field names must be as descriptive as possible, to guide the model to generate expected data in the expected form. For example, "durationInMilliseconds" instead of just "duration".

Basically, you can't expect a model to give you good results out of the box with structured outputs if the schema is poorly designed or underspecified.
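
A minimal sketch of the kind of schema tuning described above; the field names, array bounds, and descriptions are illustrative, not from the original pipeline:

```python
import json

# Illustrative JSON Schema applying the advice above: descriptive field
# names, explicit array bounds, and "calculation" declared before "result"
# so the model generates its working first (field order matters because
# generation is autoregressive).
extraction_schema = {
    "type": "object",
    "properties": {
        "calculation": {
            "type": "string",
            "description": "Step-by-step working, generated before the result",
        },
        "result": {"type": "number"},
        "durationInMilliseconds": {"type": "integer", "minimum": 0},
        "lineItems": {
            "type": "array",
            "items": {"type": "string"},
            "minItems": 1,   # guards against empty-array refusals
            "maxItems": 20,  # guards against runaway generation
        },
    },
    "required": ["calculation", "result"],
    "additionalProperties": False,
}

# Python dicts preserve insertion order, so the serialized schema keeps
# "calculation" ahead of "result" when sent to the inference engine.
serialized = json.dumps(extraction_schema, indent=2)
```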

reply
simonw
3 hours ago
[-]
I'm not 100% convinced by this post. I'd like to see a more extensive formal eval that demonstrates that structured outputs from different providers reduce the quality of data extraction results.

Assuming this holds up, I wonder if a good workaround for this problem - the problem that turning on structured outputs makes errors more likely - would be to do this:

1. Prompt the LLM "extract numbers from this receipt, return data in this JSON format: ..." - without using the structured output mechanism.

2. If the returned JSON does indeed fit the schema then great, you're finished! But if it doesn't...

3. Round-trip the response from the previous call through the LLM again, this time with structured outputs configured. This should give you back the higher quality extracted data in the exact format you want.
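
The three steps above can be sketched like this; the two `call_*` callables are stand-ins for whatever LLM client you use, not a real provider API:

```python
import json
from typing import Any, Callable


def extract_with_fallback(
    call_freeform: Callable[[str], str],
    call_structured: Callable[[str], str],
    prompt: str,
    validate: Callable[[Any], bool],
) -> Any:
    """Step 1: ask for JSON without constrained decoding.
    Step 2: if it parses and fits the schema, done.
    Step 3: otherwise round-trip the free-form answer through a
    structured-outputs call to coerce it into shape."""
    raw = call_freeform(prompt)
    try:
        data = json.loads(raw)
        if validate(data):
            return data
    except json.JSONDecodeError:
        pass
    # Round-trip: feed the free-form answer back with structured outputs on.
    fixed = call_structured(f"Reformat as JSON matching the schema:\n{raw}")
    return json.loads(fixed)
```

With stubs in place of real model calls, a truncated first response falls through to the structured round-trip, while a valid first response is returned directly.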

reply
ramraj07
50 minutes ago
[-]
Isn't it better to put it in an agent loop, with the structured-output JSON just specified as a tool? The function call can then return a summary of the parsed input. We can add a validation step in the system prompt, asking the LLM to verify it has provided inputs correctly. This allows the LLM to self-reflect and correct itself if needed.
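
One way to sketch that loop; the `llm` stub and its response shape are hypothetical stand-ins for a real tool-calling SDK:

```python
from typing import Callable, Optional


def agent_loop(
    llm: Callable[[list], dict],
    validate: Callable[[dict], Optional[str]],
    messages: list,
    max_turns: int = 3,
) -> dict:
    """Drive the model until its tool-call arguments pass validation.
    `llm` returns {"tool_args": {...}} (a stand-in for a real tool-call
    response); `validate` returns an error string, or None if valid."""
    for _ in range(max_turns):
        response = llm(messages)
        args = response["tool_args"]
        error = validate(args)
        if error is None:
            return args  # schema-valid tool input
        # Feed the validation error back so the model can self-correct.
        messages.append({"role": "tool", "content": f"Invalid input: {error}"})
    raise RuntimeError("model failed to produce valid tool input")
```
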
reply
hellovai
2 hours ago
[-]
(one of the creators of BAML here) yep! exactly!

that workaround we've found works quite well, but the problem is that it's not sufficient to just retry in the case of failed schema matches (it's both inefficient and, imo, incorrect).

Take these two scenarios for example:

Scenario 1. My system is designed to output receipts, but the user does something malicious and gives me an invoice. During step 2, it fails to fit the schema, but then you try step 3, and now you have a receipt! It's close, but your business logic is not expecting that. Often when schema alignment fails, it's because the schema was ambiguous or the input was not valid.

Scenario 2. I ask the LLM to produce this schema:

    class Person {
      name string
      past_jobs string[]
    }
However, the person has only ever worked one job, so the LLM outputs: { "name": "Vaibhav", "past_jobs": "Google" }. Technically, since you know you expect an array, you could just transform the string -> string[].
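
A toy version of that coercion (not BAML's actual implementation, just the idea of aligning a parsed value to the expected type instead of rejecting it):

```python
def align(value, expected_type):
    """Coerce a parsed JSON value toward an expected schema type.
    Toy illustration of schema-aligned parsing: a lone string where
    an array of strings is expected gets wrapped rather than rejected."""
    if expected_type == "string[]":
        if isinstance(value, list):
            return value
        if isinstance(value, str):
            return [value]  # "Google" -> ["Google"]
    if expected_type == "string" and isinstance(value, str):
        return value
    raise ValueError(f"cannot align {value!r} to {expected_type}")


# The model's slightly-off output from the scenario above:
person = {"name": "Vaibhav", "past_jobs": "Google"}
aligned = {
    "name": align(person["name"], "string"),
    "past_jobs": align(person["past_jobs"], "string[]"),
}
```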

that's the algorithm we created: schema-aligned parsing. More here if you're interested: https://boundaryml.com/blog/schema-aligned-parsing

Benchmark-wise, when we last tested, it seems to help on top of every model (especially the smaller ones): https://www.reddit.com/r/LocalLLaMA/comments/1esd9xc/beating...

Hope this helps with some of the ambiguities in the post :)

reply
joatmon-snoo
25 minutes ago
[-]
(author here) To be more specific, here's a benchmark that we ran last year, where we compared schema-aligned parsing against constrained decoding (then called "Function Calling (Strict)", the orange ƒ): https://boundaryml.com/blog/sota-function-calling
reply
kemiller
2 hours ago
[-]
That is more or less what BAML does
reply
refulgentis
2 hours ago
[-]
I understand this but A) then they should have done it here B) the idea that you can't get CoT x JSON without sacrificing JSON formatting is flat out wrong with ~any 2025 model. (i.e. reasoning models and their APIs specifically enable this)
reply
throw-qqqqq
3 hours ago
[-]
Interesting! .TXT has the opposite conclusion, that structured output improves performance:

https://blog.dottxt.ai/say-what-you-mean.html

https://blog.dottxt.ai/prompt-efficiency.html

This also matches my own experiences.

reply
flagos10
2 hours ago
[-]
Same for me. Using structured output was much better than without.
reply
Der_Einzige
2 hours ago
[-]
Yup. I instantly linked these because the multiple papers that claim structured outputs harm quality are not just wrong, but fatally damaging to the whole AI ecosystem, especially AI agents.

There are places where structured outputs harm creativity, but usually that's a decoding-time problem, which is similarly solved with better sampling, like they talk about in this paper: https://arxiv.org/abs/2410.01103

Claims of harmed reasoning performance are really evidence that 1. Your structured generation backend is bad or 2. Some shenanigans/interactions with temperature/samplers (this is the most common by far) or 3. You are bad at benchmarking.

reply
supermdguy
3 hours ago
[-]
If your output schema doesn’t capture all correct outputs, that’s a problem with your schema, not the LLM. A human using a data entry tool would run into the same issue. Letting the LLM output whatever it wants just means you have to deal with ambiguities manually, instead of teaching the LLM what to do.

I usually start by adding an error type that will be overused by the LLM, and use that to gain visibility into the types of ambiguities that come up in real-world data. Then over time you can build a more correct schema and better prompts that help the LLM deal with ambiguities the way you want it to.
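
A stdlib-only sketch of that "start with an error escape hatch" idea; the field names are made up for illustration:

```python
import json
from dataclasses import dataclass
from typing import Optional


@dataclass
class ExtractionResult:
    """Schema with an explicit error variant: the LLM fills either the
    data fields or `error` with a reason, so ambiguous inputs surface
    as inspectable errors instead of silent guesses."""
    total: Optional[float] = None
    currency: Optional[str] = None
    error: Optional[str] = None  # e.g. "no total listed on receipt"


def parse_result(raw: str) -> ExtractionResult:
    result = ExtractionResult(**json.loads(raw))
    if result.error is None and result.total is None:
        # Neither data nor an error: treat as a schema violation.
        raise ValueError("model returned neither a total nor an error")
    return result
```

Logging the `error` strings over real traffic is what gives you the visibility to tighten the schema and prompts later.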

Also, a lot of the chain of thought issues are solved by using a reasoning model (which allows chain of thought that isn’t included in the output) or by using an agentic loop with a tool call to return output.

reply
dhruvbird
3 hours ago
[-]
This ^^^^

While the provided schema has a "quantity" field, it doesn't mention the units.

<code>
    class Item(BaseModel):
        name: str
        price: float = Field(description="per-unit item price")
        quantity: float = Field(default=1, description="If not specified, assume 1")

    class Receipt(BaseModel):
        establishment_name: str
        date: str = Field(description="YYYY-MM-DD")
        total: float = Field(description="The total amount of the receipt")
        currency: str = Field(description="The currency used for everything on the receipt")
        items: list[Item] = Field(description="The items on the receipt")
</code>

There needs to be a better evaluation, and a better schema that captures the full details of what is expected to be extracted.

> What kind of error should it return if there's no total listed on the receipt? Should it even return an error or is it OK for it to return total = null?

Additionally, the schema allows optional fields, so the LLM is free to skip missing fields if they are specified as such.

reply
pizzathyme
4 hours ago
[-]
The very first example, which is held up as an error, is actually arguably correct. If you asked a human (me) how many bananas were purchased, they clearly purchased one banana.

Yes the banana weighs 0.4 pounds. But the question was not to return the weight or the quantity, the question was to return the quantity.

It seems like more instructions are needed in the prompt that the author is not even aware of.

reply
banandys
2 hours ago
[-]
A very common peeled banana weight is 100g (“metric banana”). This is convenient for calorie counting. 0.4lbs for a single banana as the peeled weight is probably around 125g.

https://www.reddit.com/r/dataisbeautiful/comments/bs741l/oc_...

reply
raw_anon_1111
31 minutes ago
[-]
From what I have found text -> structured text works well. I do a lot of call center based projects where I need to get intents (what API I need to call to fulfill the user’s request) and add slots (the variable part of the message like addresses).

Even Amazon’s cheapest and fastest model does that well - Nova Lite.

But even without using his framework, he did give me an obvious-in-hindsight method of handling image understanding.

I should have used a more advanced model to describe the image as free text and then used a cheap model to convert text to JSON.

I also had the problem that my process hallucinated that it understood the “image” contained in a Mac .DS_Store file

reply
dcastm
4 hours ago
[-]
While I agree that you must be careful when using structured outputs, the article doesn't provide good arguments:

1. In the examples provided, the author compares freeform CoT + JSON output vs. non-CoT structured output. This is unfair and biases the results towards what they wanted to show. These days, you don't need to include a "reasoning" field in the schema as mentioned in the article; you can just use thinking tokens (e.g., reasoning_effort for OpenAI models). You get the best of both worlds: freeform reasoning and structured output. I tested this, and the results were very similar for both.

2. Let Me Speak Freely? had several methodological issues. I address some of them (and .txt's rebuttal) here: https://dylancastillo.co/posts/say-what-you-mean-sometimes.h...

3. There's no silver bullet. Structured outputs might improve or worsen your results depending on the use case. What you really need to do is run your evals and make a decision based on the data.

reply
Der_Einzige
2 hours ago
[-]
BTW, the structured outputs debate is significantly more complicated than even your own post implies.

You aren't testing structured outputs+model alone, you are testing

1. The structured-outputs backend used. There are at least four major free ones: Outlines, XGrammar, lm-format-enforcer, and Guidance. OpenAI, Anthropic, Google, and Grok will all have different ones. They all do things SIGNIFICANTLY differently. That's at least eight different backends to compare.

2. The settings used for each structured-output backend. Oh, you didn't know that there are often 5+ settings for how they handle subtle stuff like whitespace? Better figure out what these settings do and how to tweak them!

3. The model's underlying sampling settings, i.e. any default temperature, top_p/top_k, etc. Remember that the ORDER in which samplers are applied matters here! Hugging Face Transformers and vLLM have opposite defaults on whether temperature is applied before sampling or after!

4. The model, and don't forget about differences around quants/variants of the model!

Almost no one who does any kind of this analysis even talks about these additional factors, including academics.

Sometimes it feels like I'm the only one in this world who actually uses this feature at the extremes of its capabilities.

reply
armcat
2 hours ago
[-]
I really like BAML but this post seems a little too much like a BAML funnel. Here are three methods that worked for me consistently since constrained sampling first came out:

1. Add a validation step (using a mini model) right at the beginning - sub-second response times; the validation will either emit True/False or emit a function call

2. Use a sequence of (1) large model without structured outputs for reasoning/parsing, chained to (2) small model for constrained sampling/structured output

3. Keep your Pydantic models/schemas flat (not too nested and not too many enumerations) and "help" the model in the system prompt as much as you can

reply
Oras
1 hour ago
[-]
I would like to see a real example; the one given assumes you want a float but the model assigns an int.

What if you put “float” instead of int to get the required number?

Also the post is missing another use case, enums in structured data. I’ve been using it successfully for a few months now and it’s doing a fantastic job.

reply
swe_dima
4 hours ago
[-]
OpenAI structured outputs are pretty stable for me. Gemini sometimes responds with a completely different structure. Gemini 3 flash with grounding sometimes returns json inside ```json...``` causing parsing errors.
reply
euazOn
3 hours ago
[-]
In case you're using OpenRouter, check out their new Response Healing feature that claims to solve exactly this issue.

https://openrouter.ai/announcements/response-healing-reduce-...

reply
codegladiator
4 hours ago
[-]
https://github.com/josdejong/jsonrepair

might be useful ( i am not the author )

reply
A_SIGINT
3 hours ago
[-]
> Chain-of-thought is crippled by structured outputs

I don't know if this is true. Libraries such as Pydantic AI, and I would assume the model-provider SDKs, stream different events. If CoT is needed, then a <think> section is emitted, and the structured response occurs later when the model begins its final response.

Structured outputs can be quite reliable if used correctly. For example, I designed an AST structure that allows me to reliably generate SQL. The model has tools to inspect data-points, view their value distributions (quartiles, medians, etc). Then once I get the AST structure back I can perform semantic validation easily (just walk the tree like a compiler). Once semantic validation passes (or forces a re-prompt with the error), I can just walk the tree again to generate SQL. This helps me reliably generate SQL where I know it won't fail during execution, and have a lot of control over what data-points are used together, and ensuring valid values are used for them.

I think the trick is just generating the right schema to model your problem, and understanding the depth of an answer that might come back.
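
A miniature version of that pattern; the node types, the semantic-validation walk, and the SQL-generation walk are all illustrative, not the commenter's actual schema:

```python
from dataclasses import dataclass


# Tiny AST the LLM would emit as structured output; walking it twice
# gives semantic validation first, then SQL generation.
@dataclass
class Column:
    name: str


@dataclass
class Filter:
    column: Column
    op: str
    value: str


@dataclass
class Query:
    table: str
    columns: list
    where: Filter


KNOWN_TABLES = {"receipts": {"total", "currency"}}
KNOWN_OPS = {"=", "<", ">"}


def validate(q: Query) -> None:
    """Semantic check, like a compiler pass over the tree."""
    cols = KNOWN_TABLES.get(q.table)
    if cols is None:
        raise ValueError(f"unknown table {q.table}")
    for c in q.columns + [q.where.column]:
        if c.name not in cols:
            raise ValueError(f"unknown column {c.name}")
    if q.where.op not in KNOWN_OPS:
        raise ValueError(f"unknown operator {q.where.op}")


def to_sql(q: Query) -> str:
    """Second walk: emit SQL only after the tree passed validation."""
    validate(q)
    cols = ", ".join(c.name for c in q.columns)
    return (f"SELECT {cols} FROM {q.table} "
            f"WHERE {q.where.column.name} {q.where.op} {q.where.value}")
```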

reply
michaelgiba
3 hours ago
[-]
It’s not surprising that there could be a very slight quality drop off for making the model return its answer in a constrained way. You’re essentially forcing the model to express the actual answer it wants to express in a constrained language.

However, I would say two things: 1. I doubt this quality drop couldn't be mitigated by first letting the model answer in its regular language and then doing a second constrained step to convert that into structured output. 2. For smaller models, I have seen instances where the constrained sampling of structured outputs actually HELPS output quality. If you can sufficiently encode information in the structure of the output, it can help the model; it effectively lets you encode simple branching mechanisms to execute at sample time.

reply
altmanaltman
2 hours ago
[-]
> You’re essentially forcing the model to express the actual answer it wants to express in a constrained language.

You surely aren't implying that the model is sentient or has any "desire" to give an answer, right?

And how is that different from prompting in general? Isn't using English already a constraint? And isn't that what it is designed for: to work with prompts that provide limits within which to determine the output text? There is no "real" answer that you suppress by changing your prompt.

So I don't think it's a plausible explanation to say this happens because we are "making" the model return its answer in a "constrained language" at all.

reply
Aurornis
3 hours ago
[-]
Does anyone have more benchmarks or evals with data on this topic? The claimed 20% accuracy reduction is significant.

Structured output was one of the lesser known topics that AI consultants and course writers got a lot of mileage out of because it felt like magic. A lot of management people would use ChatGPT but didn’t know how to bridge the text output into a familiar API format, so using a trick to turn it into JSON felt like the missing link. Now that I think about it, I don’t recall seeing any content actually evaluating the impact of constrained output on quality though.

This blog post blurs the lines between output quality reduction and incorrect error handling, though. I’d like to see some more thorough benchmarking that doesn’t try to include obvious schema issues in the quality reduction measurements.

reply
crystal_revenge
3 hours ago
[-]
(repeating an earlier comment). The team behind Outlines has repeatedly provided evaluations that show constrained decoding improves the outputs:

- https://blog.dottxt.ai/performance-gsm8k.html

- https://blog.dottxt.ai/oss-v-gpt4.html

- https://blog.dottxt.ai/say-what-you-mean.html

reply
cmews
4 hours ago
[-]
Structured outputs work well depending on the task. The example output in the blog post doesn't say anything because we are missing the prompt/schema definition. Also, the quantity is quite ambiguous, because "bananas" as a term appears only once on the receipt.

I would love some more detailed and reproducible examples, because the claims don’t make sense for all use cases I had.

reply
rybosome
3 hours ago
[-]
I have heard this argument before, but never actually seen concrete evals.

The argument goes that because we are intentionally constraining the model - I believe OAI's method is a softmax (I think, rusty on my ML math) to get tokens sorted by probability, then taking the first that aligns with the current state machine - we get less creativity.
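
That mask-then-renormalize step can be sketched like this; the token strings and logit values are made up for illustration:

```python
import math


def constrained_next_token(logits: dict, allowed: set) -> str:
    """One constrained-decoding step: drop (mask) every token the
    grammar's state machine forbids, renormalize the survivors with a
    softmax, and take the most probable remaining token."""
    masked = {tok: logit for tok, logit in logits.items() if tok in allowed}
    if not masked:
        raise ValueError("grammar allows no tokens at this position")
    z = sum(math.exp(logit) for logit in masked.values())
    probs = {tok: math.exp(logit) / z for tok, logit in masked.items()}
    return max(probs, key=probs.get)


# The raw model might prefer prose ("Sure"), but at the start of a JSON
# object only the opening tokens are legal under the grammar:
logits = {"Sure": 5.0, "{": 2.0, "[": 1.0}
```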

Maybe, but a one-off vibes example is hardly proof. I still use structured output regularly.

Oh, and tool calling is almost certainly implemented atop structured output. After all, it forces the model to respond with JSON matching a schema for the tool arguments. I struggle to believe that this is adequate for tool calling but inadequate for general-purpose use.

reply
crystal_revenge
3 hours ago
[-]
> but never actually seen concrete evals.

The team behind the Outlines library has produced several sets of evals and repeatedly shown the opposite: that constrained decoding improves model performance (including examples of "CoT" which the post claims isn't possible). [0,1]

There was a paper that claimed constrained decoding hurt performance, but it had some fundamental errors which they also wrote about [2].

People get weirdly superstitious about constrained decoding, as though it's somehow "limiting the model," when it's just as simple as applying a conditional probability distribution to the logits. I also suspect this post largely exists to justify the fact that BAML parses the results (since the post is written by them).

0. https://blog.dottxt.ai/performance-gsm8k.html

1. https://blog.dottxt.ai/oss-v-gpt4.html

2. https://blog.dottxt.ai/say-what-you-mean.html

reply
Der_Einzige
2 hours ago
[-]
To be fair, there is "real harm" from constraining LLM outputs related to, for example, forcing lipograms of the letter "E": a model responds with misspellings of words (deleting the E) rather than words that genuinely don't contain the letter "E" at all. This is why some authors propose special decoders to fix that diversity problem. See this paper and most of what it cites for examples: https://arxiv.org/abs/2410.01103

This is independent of the "quality" or "reasoning" problem, which simply does not exist/happen when using structured generation.

Edit (to respond):

I am claiming that there is no harm to reasoning, not claiming that CoT reasoning before structured generation isn't happening.

reply
crystal_revenge
1 hour ago
[-]
> "reasoning" problem which simply does not exist/happen when using structured generation

The first article demonstrates exactly how to implement structured generation with CoT. Do you mean “reasoning” other than traditional CoT (like DeepSeek)? I’ll have to look for a reference, but I recall the Outlines team also handling this latter case.

reply
NitpickLawyer
4 hours ago
[-]
A third alternative is to use the best of both worlds. Have the model respond free-form, then use that response + structured-output APIs to ask it for JSON. More expensive, but better overall results. (And you can cross-check your heuristic parsing against the structured output, and retry/alert on mismatches.)
reply
theoli
2 hours ago
[-]
I am doing this with good success parsing receipts with ministral3:14b. The first prompt describes the data being sought and asks for it to be put at the end of the response. The format tends to vary between JSON, bulleted lists, and name: value pairs. I was never able to find a good way to get just JSON.

The second pass is configured for structured output via guided decoding, and is asked to just put the field values from the analyzer's response into JSON fitting a specified schema.

I have processed several hundred receipts this way with very high accuracy; 99.7% of extracted fields are correct. Unfortunately it still needs human review because I can't seem to get a VLM to see the errors in the very few examples that have errors. But this setup does save a lot of time.

reply
sebazzz
2 hours ago
[-]
If this analysis is sound, I wonder if it can be mitigated by using tools instead of structured outputs.
reply
machinationu
4 hours ago
[-]
or tell it to output the data at the end as markdown, and then do a second pass with a cheaper model to build the structured output

also, XML works much better than JSON; many model prompting guides say this

reply
dzrmb
5 hours ago
[-]
Interesting read and perspective. I had very good results with structured outputs, for text, images, and tool calling alike. Also, a lot of SDKs use it, including the Vercel AI SDK.

Thanks for sharing

reply
noreplydev
3 hours ago
[-]
I don’t know if 0.42 should be the quantity
reply
TZubiri
2 hours ago
[-]
They worked fine for me. Keep working at it until results are positive instead of rabbit holing into a failure mode with a blog post.

It's usually more productive to write about how LLMs work rather than how they don't. In this case especially, there are improvements that can be made to the schema without forfeiting the idea of schemas altogether.

reply
villgax
2 hours ago
[-]
Skill problem not an LLM problem
reply
refulgentis
2 hours ago
[-]
"CoT x JSON means you can't get JSON" is 2024.

Every model has built-in segmentation between reasoning/CoT + JSON.

reply
Veen
3 hours ago
[-]
Doesn't the Claude API's recently introduced ability to combine extended thinking with structured outputs overcome this issue? You get the unconstrained(ish) generation in the extended thinking blocks and then structured formatting informed by that thinking in the final output.
reply
Der_Einzige
2 hours ago
[-]
No, structured outputs do NOT degrade output quality, at least not in the ways you claim. How many times do we have to debunk this FUD, old man?

https://blog.dottxt.ai/say-what-you-mean.html

The blog post is doubly bad because any "failures" involving images and image understanding can't necessarily be traced back to structured generation at all!!!

reply
mikert89
4 hours ago
[-]
please, i can't take any more anti-AI hot takes.
reply
emp17344
3 hours ago
[-]
Sounds like a you problem. I’m all for people investigating the boundaries of model capability - if you take that as a personal attack, you’re going to have a bad time over the next few years.
reply
Leynos
2 hours ago
[-]
While I suspect "hot take" is apt, this wasn't exactly anti-AI. Rather, the author is advocating for their particular way of doing genAI output parsing: as opposed to constrained-decoding structured output, they advocate unconstrained decoding with a permissive parsing framework.
reply
swiftcoder
4 hours ago
[-]
How is this an "anti AI hot take"? It's discussing using one type of LLM output versus another...
reply