▲ storystarling 32 minutes ago [-]
I'm curious what the actual inference unit economics look like compared to standard autoregressive models. Parallel decoding helps with latency, but does the total compute cost per token make it viable for production workloads yet?