TaxCalcBench: Evaluating Frontier Models on the Tax Calculation Task
70 points | 2 days ago | 11 comments | arxiv.org | HN
ofrzeta
2 days ago
"Calculating US personal income taxes is a task that requires building an understanding of vast amounts of English text and using that knowledge to carefully compute results. ... Our experiment shows that state-of-the-art models succeed in calculating less than a third of federal income tax returns even on this simplified sample set."

Unsurprisingly. Sometimes I feel like I am in a madhouse. Or in an alchemist's laboratory.

stared
2 days ago
A bare model may lack a lot.

Yet a week ago I used Claude Code for my personal finances (not taxes) - I downloaded over a year’s worth of my bank account data. Since I pay for most things by card, if I buy lunch, it’s there.

With a single prompt (and about 10 minutes), it produced an analysis. It solved all the technical issues by itself (e.g., realizing it wasn’t CSV but TSV) and ran quite a few different explorations with Pandas. It was able to write an overview, find items that were likely misclassified, etc.

Everything I checked by hand was correct.

So, instead of pursuing a project to write an AI tool for personal finance, I ended up concluding: “just use Claude Code.” As a side note, I used 14 months of data due to my mistake - I wanted to analyze 2 months of data, since I didn’t believe it would handle a larger set, but I misclicked the year. The file was 350 KB.
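For what it's worth, the non-model parts of that workflow are simple. Here is a stdlib-only sketch of the delimiter-detection and aggregation steps; the column layout and sample rows are invented, not a real bank export:

```python
import csv
import io
from collections import defaultdict

def load_transactions(text):
    """Detect the delimiter (the export may be TSV despite a .csv name)
    and parse rows of (date, merchant, amount)."""
    dialect = csv.Sniffer().sniff(text, delimiters=",\t")
    reader = csv.reader(io.StringIO(text), dialect)
    return [(date, merchant, float(amount)) for date, merchant, amount in reader]

def spend_by_merchant(rows):
    """Total spend per merchant: the simplest useful aggregation."""
    totals = defaultdict(float)
    for _, merchant, amount in rows:
        totals[merchant] += amount
    return dict(totals)

# Made-up sample rows, tab-separated like the export described above.
sample = (
    "2024-01-03\tCafe Luna\t12.50\n"
    "2024-01-04\tCafe Luna\t9.00\n"
    "2024-01-05\tGrocer\t41.20\n"
)
print(spend_by_merchant(load_transactions(sample)))
```

The point of the anecdote stands: an agent figured out the equivalent of `csv.Sniffer` on its own.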

jasonjmcghee
2 days ago
I hear you, but I'd also rather someone else assume the liability if possible. (Assuming there's a company backing the model)

So until there's umbrella AI insurance...

stared
2 days ago
Exploratory data analysis is one thing; there the risk is low. If something doesn't work, it simply doesn't, and small omissions don't matter much.

As of now, I would not use automatic AI to make any financial decisions with direct consequences, unless the system is tested and benchmarked against accountants.

michaelrbock
2 days ago
Hi, author of the paper + repo here. This kind of dataset is particularly hard to come by, so we're really proud to be open-sourcing it.

Let me know if you have any questions, happy to discuss!

antiloper
2 days ago
> For example, in the prompt for this experiment, the model is bootstrapped with the correct Form 1040 lines and short instructions as part of its context.

Given that only short instructions are in context, I would not have expected even a frontier model to score well on this benchmark. For better results, I'd think that giving the model access to the entire tax code is required (which likely requires RAG due to its sheer size).
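As a sketch of that retrieval step, here is a naive token-overlap ranker standing in for the embedding index a real RAG system would use; the snippet texts are invented placeholders, not actual tax code:

```python
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

def retrieve(query, documents, k=2):
    """Rank documents by naive token overlap with the query; a crude
    stand-in for embedding-based retrieval over the full tax code."""
    q = Counter(tokenize(query))
    scored = sorted(
        documents,
        key=lambda doc: -sum(min(q[t], c) for t, c in Counter(tokenize(doc)).items()),
    )
    return scored[:k]

# Invented placeholder snippets, not actual statute text.
SNIPPETS = [
    "The standard deduction for a single filer is set annually.",
    "Capital gains are taxed at preferential rates in some cases.",
    "Estimated tax payments may be required quarterly.",
]
print(retrieve("What is the standard deduction for a single filer?", SNIPPETS, k=1))
```

A production system would swap the overlap score for vector similarity, but the prompt-assembly shape is the same: retrieve the relevant sections, then ask the model to compute.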

michaelrbock
2 days ago
We tested models with knowledge cutoffs in 2025, so we expect them to have knowledge of Tax Year 2024 forms in their weights. We also tested models with the ability to do web search to fetch any other forms they think necessary: https://github.com/column-tax/tax-calc-bench

That all being said, we agree, and it's why we built our internal tax coding agent, Iris: https://www.columntax.com/blog/introducing-iris-our-ai-tax-d... (it pulls in just the right tax form context on a per-line basis to turn the tax law into code).

anticensor
2 days ago
This topic is so American. In almost any other country, you wouldn't need to consult a tax expert to prepare a personal tax return.
anticensor
2 days ago
Whereas almost every other country tries to make it easier to file taxes, even when the underlying tax schedule is complex.
Rudybega
2 days ago
I wonder if you could dramatically improve these results with some relatively simple scaffolding and tool access.

If a ton of these mistakes are genuinely simple calculation errors, it seems like giving the models access to a calculator tool would help a fair bit.
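As a sketch of what such a calculator tool could look like on the tool side (the function and its whitelist of operators are my own invention, not anything from the paper), a safe arithmetic evaluator over Python's `ast`:

```python
import ast
import operator

# Whitelisted binary operators; everything else is rejected.
OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
}

def calculate(expression):
    """Safely evaluate basic arithmetic: the kind of tool call that could
    absorb a model's raw-arithmetic mistakes."""
    def ev(node):
        if isinstance(node, ast.Expression):
            return ev(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
            return -ev(node.operand)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return ev(ast.parse(expression, mode="eval"))
```

The agent would emit an expression string as a tool call and get back an exact number, instead of doing the multiplication in-context.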

Lionga
2 days ago
The problem is that they don't understand what or how to calculate, not the actual act of adding or multiplying. I tried asking ChatGPT to calculate taxes for three countries, in two of which I already file taxes. For the two I know, ChatGPT gave wildly wrong numbers (not even the right ballpark), so I knew I couldn't trust the numbers for the third, which was the one I was actually interested in.
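That matches the shape of the failure: once the applicable schedule is known, the arithmetic itself is trivial. A sketch with an entirely made-up bracket schedule (not any real country's rates):

```python
def progressive_tax(income, brackets):
    """Apply marginal rates. `brackets` is a list of (upper_bound, rate)
    pairs, ending with float('inf') as the top bound."""
    tax, lower = 0.0, 0.0
    for upper, rate in brackets:
        if income <= lower:
            break
        tax += (min(income, upper) - lower) * rate
        lower = upper
    return tax

# Illustrative schedule only.
DEMO_BRACKETS = [(10_000, 0.10), (40_000, 0.20), (float("inf"), 0.30)]
print(progressive_tax(50_000, DEMO_BRACKETS))
```

Ten lines cover the computation; the hard part is everything the benchmark actually tests, i.e. deciding which incomes, deductions, and schedules apply.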
sails
2 days ago
I feel like we are already there. I would imagine that if you set Claude Code or Codex on this task, running in the CLI, you would see a huge improvement, and that is before you start adding task-specific guardrails.

I'm surprised they haven't tried this; I'm running my own return in parallel against my accountant in this way.

throwaway13337
2 days ago
Useful.

I wonder what an average accountant would score.

I know LLMs have helped me identify many mistakes accountants have made on my behalf. Some mistakes that could have cost me a lot of money if not caught.

topaz0
2 days ago
Given that they're restricting to very simple situations, I'd expect accountants to score 100%.
daft_pink
2 days ago
I think AI has problems with law-related tasks like taxes because there are so many terms of art. Taxes are essentially just law, and because legislatures, regulators, and courts eventually define words in very specific, narrow ways, sometimes differently from one code section to another, AI has a lot of trouble applying these very narrow definitions.

Honestly, I think humans have trouble with this as well.

i_dont_know_
2 days ago
I'm actually quite surprised.

From another article today, I discovered the IRS has a GitHub repo with (what seem to be) XML versions of tax questions... surely some combination of an LLM and structured data querying could solve this? https://github.com/IRS-Public/direct-file/tree/main
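As a sketch of the structured-querying half of that idea, using Python's stdlib XML parser; the element names below are guesses in the spirit of such a repo, not its actual schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment; a real direct-file schema will differ.
SAMPLE = """\
<questions>
  <question id="filing-status">
    <text>What is your filing status?</text>
    <option value="single"/>
    <option value="married-filing-jointly"/>
  </question>
</questions>
"""

def list_questions(xml_text):
    """Map each question id to its allowed answer values."""
    root = ET.fromstring(xml_text)
    return {
        q.get("id"): [o.get("value") for o in q.findall("option")]
        for q in root.findall("question")
    }

print(list_questions(SAMPLE))
```

With the questions and allowed answers machine-readable, the LLM could be confined to filling in values rather than reconstructing the tax logic itself.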

hodgehog11
2 days ago
Am I missing something, or did they only assess this on Google and Anthropic models? If so, all I can ascertain from this is that the latest Gemini models outperformed Claude on this particular task, which should surprise no one. What about GPT-5? Open-weight models?
topaz0
2 days ago
Somebody posted the up-to-date leaderboard upthread: https://news.ycombinator.com/item?id=45603113
jgalt212
2 days ago
I'm surprised that no LLM has yet found any unresolved cycles in the US tax code.
anticensor
2 days ago
Oh, you mean infinite/zero-tax glitches.
jgalt212
1 day ago
Yes
mrfelipppe
1 day ago
This is awesome!