Autoresearch for SAT Solvers
145 points | 12 hours ago | 11 comments | github.com
stefanpie
11 hours ago
Prof. Cunxi Yu and his students at UMD are working on this exact topic and have published a paper on agents for improving SAT solvers [1].

I believe they are extending this idea to EDA / chip-design tools and algorithms, which also involve computationally hard problems. They have an accepted paper on this for logic synthesis, which will come out soon.

[1] "Autonomous Code Evolution Meets NP-Completeness", https://arxiv.org/abs/2509.07367

chaisan
9 hours ago
nice. EDA is indeed one of the top applications of SAT
ericpauley
11 hours ago
It should be noted that MaxSAT 2024 did not include Z3, as is true of many competitions. It’s possible (I’d argue likely) that the agent picked up techniques from Z3 or some other non-competing solver, rather than actually discovering a novel approach.
throw-qqqqq
1 hour ago
Z3 is capable (it’s an SMT solver, not just SAT), but it’s not very fast at pure Boolean satisfiability and not at all competitive with modern SOTA SAT solvers. Try comparing it to, e.g., Chaff or Glucose.
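If you want to see the gap yourself, here's a rough harness (assumes the z3-solver and python-sat pip packages; the instance path is a placeholder, not from the thread):

    # Contrast Z3 (an SMT solver) with a dedicated SAT solver (Glucose via
    # python-sat) on the same DIMACS CNF file.
    import time
    from pysat.formula import CNF
    from pysat.solvers import Glucose3
    from z3 import Bool, Not, Or, Solver

    cnf = CNF(from_file="instance.cnf")  # any DIMACS CNF; path is illustrative

    # Dedicated SAT solver.
    g = Glucose3(bootstrap_with=cnf.clauses)
    t0 = time.time()
    print("glucose:", g.solve(), f"{time.time() - t0:.2f}s")

    # Z3: encode each DIMACS literal as Bool / Not(Bool).
    xs = {v: Bool(f"x{v}") for v in range(1, cnf.nv + 1)}
    s = Solver()
    for clause in cnf.clauses:
        s.add(Or([xs[l] if l > 0 else Not(xs[-l]) for l in clause]))
    t0 = time.time()
    print("z3:", s.check(), f"{time.time() - t0:.2f}s")

On hard pure-SAT instances the dedicated solver usually wins by a wide margin.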
jmalicki
11 hours ago
Or, for that matter, even from later versions of the same solvers that were in its training data!
ericpauley
11 hours ago
True. I’d be curious whether a combination of matching the competition date to the training cutoff and censoring web searches could yield a more precise evaluation.
chaisan
9 hours ago
as it's from 2024 (MaxSAT was not held in 2025), it's quite likely all the solvers are in the training data. so the interesting part here is the instances for which we actually got better costs than what is currently known (in the best-cost.csv file).
ericpauley
2 hours ago
As GP noted, the issue is that even better versions than those that competed in MaxSAT are likely in the training data or web resources.
dooglius
10 hours ago
Is Z3 competitive in SAT competitions? My impression was that it is popular due to the theories, the Python API, and the level of support from MSR.
ericpauley
9 hours ago
Funnily enough, this was precisely the question I had after posting this (and the topic of an LLM disagreement discussed in another thread). Turns out it isn't, but the sibling comment adds another confounding factor.
CJefferson
6 hours ago
One problem here is it's very easy to overtune to a past problem set -- even accidentally. You can often significantly improve performance just by changing your random number generator seed until you happen to pick the right assignment for the first few variables of some of the harder problems.

It would be interesting to take the resulting solver and apply it to an unknown data set.
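A caricature of the seed-overtuning loop described above, where run_solver is a hypothetical stand-in for any randomised solver:

    def tune_seed(run_solver, instances, seeds=range(1000)):
        # run_solver(instance, seed) -> cost reached on that instance.
        # Grid-search the RNG seed on a FIXED benchmark set and keep
        # whichever seed happens to score best.
        best_seed, best_total = None, float("inf")
        for seed in seeds:
            total = sum(run_solver(inst, seed) for inst in instances)
            if total < best_total:
                best_seed, best_total = seed, total
        return best_seed  # "progress", but only on this instance set

Nothing about the returned seed generalises; on a fresh instance set it is just another random seed.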

MrToadMan
7 hours ago
Not as many changes to the files under library/ as I expected to see. Most changes seem to be under a single ‘add stuff’ commit. If some of the solvers are randomised, then repeatedly running them and recording the best solution found will continually improve results over time and give the illusion of the agent making algorithmic advances, won’t it?
chaisan
6 hours ago
yeah, ofc. but on any problem larger than 40 variables, the gains from random restarts or initializations will quickly plateau
chaisan
6 hours ago
and it would take an algo change to the solver to jump to the next local optimum
MrToadMan
3 hours ago
I guess my point was that I don't see many algo changes in the commit history, which is a shame if they've been lost; library/* files are largely unchanged from the initial commits. But each time the agent runs, it has access to the best solutions found so far and can start from there, often using randomisation, which the agent claims helps it escape local minima (e.g. 'simulated annealing as a universal improver'). It would be nice to see how its learnt knowledge performs when applied to unseen problems in a restricted timeframe.
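For concreteness, a naive sketch of that restart-and-record pattern (assignments as {var: bool} dicts over DIMACS-style signed literals; none of this is taken from the repo):

    import math, random

    def anneal(clauses, weights, start, steps=100_000, t0=2.0):
        # Simulated annealing over truth assignments, minimising the
        # weighted sum of unsatisfied clauses, restarted from the best
        # assignment recorded so far ('start': {var: bool}).
        def cost(a):
            return sum(w for cl, w in zip(clauses, weights)
                       if not any(a[abs(l)] == (l > 0) for l in cl))
        cur, cur_cost = dict(start), cost(start)
        best, best_cost = dict(start), cur_cost
        for i in range(steps):
            t = t0 * (1 - i / steps) + 1e-9    # linear cooling schedule
            v = random.choice(list(cur))       # pick a variable to flip
            cur[v] = not cur[v]
            new_cost = cost(cur)               # naive O(#clauses) rescore
            accept = (new_cost <= cur_cost or
                      random.random() < math.exp((cur_cost - new_cost) / t))
            if accept:
                cur_cost = new_cost
                if new_cost < best_cost:
                    best, best_cost = dict(cur), new_cost
            else:
                cur[v] = not cur[v]            # reject the move: flip back
        return best, best_cost

Re-running this from each run's recorded best keeps nudging the cost down without touching library/*, which is exactly the ambiguity raised above.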
FernandoDe79440
54 minutes ago
fine tuning a small model usually beats prompting a large one for specific tasks imo
gsnedders
10 hours ago
What counts as “our cost”? How long it takes to find the MaxSAT?
chaisan
9 hours ago
the sum of the weights of the unsatisfied clauses. we want to reduce this number
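concretely, a few-line sketch (DIMACS-style signed literals; the encoding is just illustrative):

    def cost(clauses, weights, assignment):
        # Sum the weights of clauses with no true literal under the
        # assignment; literal 3 means x3, -3 means NOT x3.
        return sum(w for cl, w in zip(clauses, weights)
                   if not any(assignment[abs(l)] == (l > 0) for l in cl))

    # (x1 OR NOT x2) with weight 4, (x2) with weight 1
    print(cost([[1, -2], [2]], [4, 1], {1: False, 2: True}))  # -> 4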
ktimespi
9 hours ago
sounds like AlphaDev [1] might be a better approach for a problem like this.

[1] https://github.com/google-deepmind/alphadev

chaisan
6 hours ago
somewhat
cerved
7 hours ago
Would be nice to try this on LCG (CP-SAT) solvers
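For reference, a minimal sketch of the kind of LCG solver meant here (assumes the ortools pip package; the toy clauses are illustrative):

    from ortools.sat.python import cp_model

    model = cp_model.CpModel()
    x = model.NewBoolVar("x")
    y = model.NewBoolVar("y")
    model.AddBoolOr([x, y.Not()])   # clause: x OR NOT y
    model.AddBoolOr([x.Not(), y])   # clause: NOT x OR y

    solver = cp_model.CpSolver()
    status = solver.Solve(model)
    if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
        print(solver.BooleanValue(x), solver.BooleanValue(y))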
Dennis118753882
4 hours ago
we've been running something similar in prod. latency is the real bottleneck not accuracy
ClawVorpal21355
8 hours ago
anyone else finding that agent architectures are way more expensive than expected?
chaisan
6 hours ago
wrt. token usage?
balinha_8864
9 hours ago
interesting results but the eval methodology seems a bit optimistic
chaisan
9 hours ago
it's just comparing the cost of the best solution found to the best known cost we had before. O(N). why optimistic?
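roughly this, though the column names of best-cost.csv are assumed here rather than taken from the repo:

    import csv

    def improvements(best_known_csv, our_costs):
        # our_costs: {instance_name: best cost our solver reached}.
        # One O(N) pass comparing against the previously best-known cost.
        wins = []
        with open(best_known_csv) as f:
            for row in csv.DictReader(f):  # assumed columns: instance, cost
                name, known = row["instance"], float(row["cost"])
                if name in our_costs and our_costs[name] < known:
                    wins.append((name, known, our_costs[name]))
        return wins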
yorwba
4 hours ago
If you have showdead on, you can see that this account posts generic one-liners: https://news.ycombinator.com/threads?id=balinha_8864
big-chungus4
2 hours ago
Is that bad?