I'm also really glad that you're helping more people understand this feature, how it works, and how to use it effectively. I strongly believe that structured outputs are one of the most underrated features in LLM engines, and people should be using this feature more.
Constrained non-determinism means that we can reliably use LLMs as part of a larger pipeline or process (such as an agent with tool-calling) and we won't have failures due to syntax errors or erroneous "Sure! Here's your output formatted as JSON with no other text or preamble" messages thrown in.
Your LLM output might not be correct. But grammars ensure that your LLM output is at least _syntactically_ correct. It's not everything, but it's not nothing.
And especially if we want to get away from cloud deployments and run effective local models, grammars are an incredibly valuable piece of this. For a practical example, I often think of Jart's simple LLM-based spam filter running on a Raspberry Pi [0]:
> llamafile -m TinyLlama-1.1B-Chat-v1.0.f16.gguf \
>   --grammar 'root ::= "yes" | "no"' --temp 0 -c 0 \
>   --no-display-prompt --log-disable -p "<|user|>
> Can you say for certain that the following email is spam? ...
Even though it's a tiny piece of hardware, by including a grammar that constrains the output to be exactly "yes" or "no" (the system cannot produce anything else), she can run a very small model on very limited hardware and it's still useful. It might not correctly identify spam, but it will never break for syntactic reasons, which is a great boost to the usefulness of small, local models.
* [0]: https://justine.lol/matmul/
You can build that into your structure, just as you would allow error values to be returned from any other system.
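For instance, a minimal sketch in JSON Schema terms (the field names here are illustrative, not from any particular guide): the schema itself can admit an explicit error variant, so the grammar lets the model fail gracefully instead of forcing a bogus answer.

```python
# Sketch: a schema whose grammar admits either a real answer or an explicit
# error object. Field names ("answer", "error") are made up for illustration.
schema = {
    "oneOf": [
        {"type": "object",
         "properties": {"answer": {"type": "string"}},
         "required": ["answer"]},
        {"type": "object",
         "properties": {"error": {"type": "string"}},
         "required": ["error"]},
    ]
}
```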
Some libraries:
- Outlines, a nice library for structured generation (a minimal usage sketch follows this list)
  - https://github.com/dottxt-ai/outlines
- Guidance (already covered by FlyingLawnmower in this thread), another nice library
  - https://github.com/guidance-ai/guidance
- XGrammar, a less featureful but really well-optimized constrained generation library
  - https://github.com/mlc-ai/xgrammar
  - This one has a lot of cool technical aspects that make it an interesting project
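To make the Outlines entry concrete, here's a minimal sketch of the yes/no constraint from the spam-filter example above, using the 0.x-style Outlines API (newer releases changed the interface, so check the docs; the model name is just an example):

```python
import outlines

# Any transformers-compatible model works; this one is just an example.
model = outlines.models.transformers("TinyLlama/TinyLlama-1.1B-Chat-v1.0")

# Equivalent to the GBNF grammar 'root ::= "yes" | "no"': decoding can
# only ever produce one of these two strings.
generator = outlines.generate.choice(model, ["yes", "no"])
answer = generator("Can you say for certain that the following email is spam? ...")
```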
Some papers:
- Efficient Guided Generation for Large Language Models
  - By the Outlines authors, probably the first real LLM constrained generation paper
  - https://arxiv.org/abs/2307.09702
- Automata-based constraints for language model decoding
  - A much more technical paper about constrained generation and implementation
  - https://arxiv.org/abs/2407.08103
- Pitfalls, Subtleties, and Techniques in Automata-Based Subword-Level Constrained Generation
  - A bit of self-promotion. We show where constrained generation can go wrong and discuss some techniques for the practitioner
  - https://openreview.net/pdf?id=DFybOGeGDS
Some blog posts:
- Fast, High-Fidelity LLM Decoding with Regex Constraints
  - Discusses adhering to the canonical tokenization (i.e., not just the constraint, but also what would be produced by the tokenizer)
  - https://vivien000.github.io/blog/journal/llm-decoding-with-regex-constraints.html
- Coalescence: making LLM inference 5x faster
  - Also from the Outlines team
  - This is about skipping inference during constrained generation if you know there is only one valid token (common in the canonical tokenization setting; see the sketch below)
  - https://blog.dottxt.ai/coalescence.html

Proceeds to list all the libraries already listed in the guide.
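On the coalescence idea referenced above, a minimal sketch of the trick; the automaton API here is hypothetical (allowed_tokens is made up for illustration):

```python
def decode_step(model, automaton, state, ids):
    # Hypothetical automaton API: which tokens does the grammar allow next?
    allowed = automaton.allowed_tokens(state)
    if len(allowed) == 1:
        # Coalescence: when exactly one token is valid, skip the forward
        # pass entirely and emit that token for free.
        return allowed[0]
    # Otherwise run the model and pick the best grammar-permitted token.
    logits = model(ids)
    return max(allowed, key=lambda t: logits[t])
```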
Automata-based constraints are fun.
That might not matter to you, but it can be 2-3x slower sometimes.
(I get why you need structured generation for smaller LLMs, that makes sense.)
With that said, the model is pretty good at it.
The model is like: "Here is what I came up with... ```{json}``` and this is why I am proud of it!"
That said:
- Constrained generation yields a different distribution from what a raw LLM would provide, and this can be pathologically bad. My go-to example is LLMs' preference for including ellipses in long, structured objects. Constrained generation forces closing quotes (or whatever it takes to recover from that error according to the schema), yielding output that satisfies the schema but is still invalid as data, since the ellipsis truncated the content. Resampling, by contrast, tends to repeat till the LLM fully generates the data in question, always yielding a valid result which also adheres to the schema. It can get much worse than that.
- The unconstrained "method" has a few possible implementations. Increasing context length by complaining about schema errors is almost always worse, from an end-quality perspective, than just retrying till the schema passes. Effective context windows are precious, and current models bias heavily toward earlier data fed into them. In a low-error regime you might get away with a "try it again" response in a single chat, but in a high-error regime you'll get better results at a lower cost by literally re-sending the same prompt till the model doesn't cause errors (a sketch of that loop follows).
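A minimal sketch of that "re-send the same prompt" loop; generate is a stand-in for your inference call, and jsonschema is just one way to check the schema:

```python
import json
from jsonschema import ValidationError, validate

def retry_same_prompt(generate, prompt, schema, max_tries=5):
    for _ in range(max_tries):
        raw = generate(prompt)  # identical prompt every attempt: no error feedback
        try:
            obj = json.loads(raw)
            validate(obj, schema)  # raises ValidationError on schema mismatch
            return obj
        except (json.JSONDecodeError, ValidationError):
            continue  # don't append the failure to the context; just resample
    raise RuntimeError("no schema-valid output after max_tries attempts")
```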
Another way to do this is to use a hybrid approach. You perform unconstrained generation first, and then constrained generation on the failures.
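A sketch of that hybrid; sample_free and sample_constrained are stand-ins for unconstrained and grammar-constrained inference calls:

```python
import json

def hybrid_generate(prompt, schema, sample_free, sample_constrained, tries=3):
    # Pass 1: unconstrained sampling, which tends to preserve output quality.
    for _ in range(tries):
        try:
            return json.loads(sample_free(prompt))
        except json.JSONDecodeError:
            continue
    # Pass 2: only the stubborn failures pay the constrained-decoding cost
    # (and take on its distribution shift).
    return json.loads(sample_constrained(prompt, schema))
```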
If the authors or readers are interested in some of the more technical details of how we optimized guidance & llguidance, we wrote up a little paper about it here: https://guidance-ai.github.io/llguidance/llg-go-brrr
edit: Somehow that link doesn't work... It's the diagram on the "constrained method" page
Every commercial model provider is adding structured outputs, so we'll keep updating the guide.
What have folks tried?
So maybe an interesting file to have the LLM generate, instead of the final file, is a program that creates the final file? Now there is the problem of security, of course: the program the LLM generates would need to be sandboxed properly and time-constrained to prevent DoS attacks or explosive output sizes, not to mention the CPU usage of the final result. But quality-wise, would it be better?
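As a very rough sketch of the time and output bounds mentioned above (this is not a real sandbox; actual isolation needs containers, seccomp, or similar):

```python
import subprocess
import tempfile

def run_generated_program(code: str, timeout_s: float = 5.0, max_bytes: int = 1_000_000):
    # NOTE: a subprocess is NOT a security boundary. This only bounds
    # wall-clock time and output size; real sandboxing is still required.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        result = subprocess.run(
            ["python3", path],
            capture_output=True, text=True, timeout=timeout_s,
        )
    except subprocess.TimeoutExpired:
        raise RuntimeError("generated program exceeded its time budget")
    if len(result.stdout) > max_bytes:
        raise RuntimeError("generated program exceeded its output budget")
    return result.stdout
```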
XML is better for code, and for code parts in particular I enforce a `<![CDATA[` section, so the LLM is pretty free to emit anything without escaping.
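Illustrating the idea (the element names are made up; the only real constraint is the CDATA wrapper, inside which `<`, `&`, and quotes need no escaping):

```xml
<response>
  <code><![CDATA[
    if (a < b && ptr != NULL) { emit("<done>"); }  // no XML escaping needed here
  ]]></code>
</response>
```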
The OpenAI API lets you do regex-constrained structured output, and it's much better than JSON for code.
A nitpick: that's probably a good idea and I've used it before, but that's not really a lenient JSON parser; it's a Python literal parser, and they happen to be close enough that it's useful.
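For the distinction being drawn here, a sketch of the usual fallback pattern, assuming the "lenient parser" in question is ast.literal_eval or similar:

```python
import ast
import json

def parse_loose(text: str):
    # Strict JSON first: handles true/false/null, which Python literals don't.
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        pass
    # Python-literal fallback: accepts single quotes, True/False/None, and
    # trailing commas -- close to lenient JSON, but it's really Python syntax.
    return ast.literal_eval(text)
```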
Code is an example of a mixed case. Getting any mechanistically parsable output from a model is another. Sure, you can format it after the generation, but you still need the generation to be parsable for that. In many cases, using the required format right away will also provide the context for better replies.
Sure, the guide presents some alternatives, but they're nowhere near as useful as real enforced structured output.
I get that some people will run their own models or whatever and will be able to use some of the other techniques, but that's the remaining 1%.
As they point out - this might impact results where deep reasoning is required.
So you might be better off taking the unconstrained approach with feedback.
ESPECIALLY with situations where deep reasoning is required, since those are likely to correlate with longer JSON outputs and therefore more failure points.