
And even with CDCL and 2-WL a new implementation won't come anywhere near the performance of the best. The engineering precision involved in the ~3k core lines of minisat/Glucose is astounding: pre-processing, in-processing, optimisations, multiple heuristics (VSIDS, LBD, aggressive restarts), domain-specific cache-aware generational garbage collection...
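
(For anyone who hasn't implemented it: 2-WL is two-watched-literal propagation, the trick that makes unit propagation cheap. A minimal Python sketch with my own simplified data layout; a real solver like minisat stores watch lists as cache-friendly arrays, not dicts of Python lists:)

    # Toy sketch of two-watched-literal unit propagation (not cache-aware).
    # Literals are nonzero ints (+v / -v); assign maps var -> True/False;
    # watches is a defaultdict(list) mapping a literal to the clauses watching it.

    def lit_value(lit, assign):
        v = assign.get(abs(lit))
        return None if v is None else (v if lit > 0 else not v)

    def propagate(assigned_lit, watches, assign, trail):
        """assigned_lit just became true, so -assigned_lit just became false."""
        false_lit = -assigned_lit
        pending = watches[false_lit]
        watches[false_lit] = []
        while pending:
            clause = pending.pop()
            # Invariant: the two watched literals sit at clause[0] and clause[1].
            if clause[0] == false_lit:
                clause[0], clause[1] = clause[1], clause[0]
            if lit_value(clause[0], assign) is True:
                watches[false_lit].append(clause)        # clause already satisfied
                continue
            for i in range(2, len(clause)):              # look for a replacement watch
                if lit_value(clause[i], assign) is not False:
                    clause[1], clause[i] = clause[i], clause[1]
                    watches[clause[1]].append(clause)
                    break
            else:
                watches[false_lit].append(clause)        # still watching false_lit
                if lit_value(clause[0], assign) is None:
                    assign[abs(clause[0])] = clause[0] > 0   # unit: imply clause[0]
                    trail.append(clause[0])
                else:
                    watches[false_lit].extend(pending)       # restore the rest
                    return clause                            # conflict
        return None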

Competitive SAT solvers are little diamonds of engineering.



I feel like I both agree and disagree. It is true that beating glucose or lingeling at the SAT competition would require a lot more engineering than just plain CDCL + 2 literal watching. At the same time, I think a lot of the new stuff (which in my book is everything that's happened in SAT post 2007ish) is primarily focused on winning the SAT competition by taking a sort of kitchen-sink approach.
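
(To make "plain CDCL" concrete: the core loop really is small, and everything discussed above lives inside a handful of calls. A schematic Python sketch; propagate/analyze/backjump/pick_branch_literal are placeholders for the real machinery, not any particular solver's API:)

    # Schematic CDCL loop. The solver object's methods stand in for the actual
    # machinery: 2-WL propagation, 1-UIP conflict analysis, non-chronological
    # backjumping and a branching heuristic such as VSIDS.

    def cdcl_solve(solver):
        while True:
            conflict = solver.propagate()                # unit propagation (2-WL)
            if conflict is not None:
                if solver.decision_level == 0:
                    return "UNSAT"                       # conflict without decisions
                learned, backjump_level = solver.analyze(conflict)   # 1-UIP clause
                solver.add_clause(learned)
                solver.backjump(backjump_level)          # non-chronological backtrack
            else:
                if solver.all_variables_assigned():
                    return "SAT"
                lit = solver.pick_branch_literal()       # e.g. VSIDS
                solver.decide(lit)                       # open a new decision level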

For instance, CryptoMiniSat was the first to implement XOR detection along with Gaussian elimination. They were using SAT on cryptographic problems, so it made sense to specialize for that domain. But now there's a bunch of crypto-based problems in the SAT competition, and many of the other solvers also implement Gaussian elimination. Yet I'd guess that most users of SAT solvers wouldn't see a noticeable performance difference if you removed Gaussian elimination from the solvers.
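
(For anyone wondering what Gaussian elimination buys you here: XOR constraints are linear equations over GF(2), so a block of them can be reduced directly instead of being encoded as exponentially many CNF clauses. A toy sketch; the packed bitsets and incremental updates an actual solver uses are not shown:)

    # Toy Gaussian elimination over GF(2). Each row is (set_of_vars, rhs),
    # representing x_a XOR x_b XOR ... = rhs.

    def gauss_gf2(rows, num_vars):
        rows = [(set(vs), rhs) for vs, rhs in rows]
        pivot_row = 0
        for var in range(1, num_vars + 1):
            # Find a row still containing this variable to use as the pivot.
            for r in range(pivot_row, len(rows)):
                if var in rows[r][0]:
                    rows[pivot_row], rows[r] = rows[r], rows[pivot_row]
                    break
            else:
                continue
            pvars, prhs = rows[pivot_row]
            # Eliminate the variable elsewhere (XOR of rows = symmetric difference).
            for r in range(len(rows)):
                if r != pivot_row and var in rows[r][0]:
                    vs, rhs = rows[r]
                    rows[r] = (vs ^ pvars, rhs ^ prhs)
            pivot_row += 1
        # An empty left-hand side with rhs = 1 means 0 = 1: the XOR block is unsatisfiable.
        for vs, rhs in rows:
            if not vs and rhs:
                return None
        return rows   # reduced system; single-variable rows are forced assignments

    # e.g. gauss_gf2([({1, 2}, 1), ({2, 3}, 0), ({1, 3}, 0)], 3) returns None (UNSAT)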

Another good example is Alan Mishchenko's ABC, which wins the hardware model checking competition every year but uses a fairly ancient version of MiniSAT as its SAT solving engine. It's not that a faster solver wouldn't make ABC faster, but that the real smarts in model checking are in what queries you send to the solver, not in squeezing an extra 2X or 3X performance out of the SAT solver using the same old dumb queries.
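
(To make the "smart queries" point concrete: bounded model checking asks the solver a series of questions of the form "is a bad state reachable in exactly k steps?". A rough sketch of that outer loop; the encode_* functions and solve() are placeholders, and the real cleverness in a tool like ABC lives in what they produce, not in the solver call:)

    # Rough sketch of a bounded model checking (BMC) outer loop.
    # encode_init, encode_trans, encode_bad and solve() are placeholders for
    # the problem-specific encoding and whatever SAT backend is plugged in.

    def bmc(encode_init, encode_trans, encode_bad, solve, max_bound):
        for k in range(max_bound + 1):
            cnf = []
            cnf += encode_init(step=0)                 # I(s_0)
            for i in range(k):
                cnf += encode_trans(step=i)            # T(s_i, s_{i+1})
            cnf += encode_bad(step=k)                  # Bad(s_k)
            model = solve(cnf)
            if model is not None:
                return ("counterexample", k, model)    # bad state reachable in k steps
        return ("no counterexample up to bound", max_bound)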


Most things I listed are pre-2007; minisat 2.2 had all of them except literal blocks distance (LBD), which, I think, has proven itself since glucose replaced minisat as the base solver in the hack track.
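
(For the curious: the literal blocks distance of a learned clause is just the number of distinct decision levels among its literals; glucose's insight was that low-LBD "glue" clauses are the ones worth keeping. A one-liner sketch, assuming a level map from variable to decision level:)

    # Literal blocks distance (LBD): the number of distinct decision levels
    # among the literals of a clause. 'level' maps var -> decision level.

    def lbd(clause, level):
        return len({level[abs(lit)] for lit in clause})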

You're probably right about the kitchen-sink approach; however, without it we probably wouldn't have MapleSat, which uses machine learning to replace the VSIDS heuristic. A bold move, but one that has been replicated this year.
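
(For reference, as I understand it MapleSat's LRB heuristic frames branching as a multi-armed bandit problem and scores variables with an exponential recency-weighted average of how often they contribute to conflicts. A very simplified sketch of the idea; the exact reward definition, alpha decay and extensions are in the MapleSat papers and are not reproduced here:)

    # Very simplified sketch in the spirit of an ERWA-based branching score:
    # variables that keep participating in recent conflicts get higher scores.
    # Names and the exact reward are my own simplification, not MapleSat's code.

    def on_unassign(var, scores, participated, assigned_at, conflicts_so_far, alpha=0.4):
        interval = conflicts_so_far - assigned_at[var]   # conflicts seen while assigned
        if interval > 0:
            reward = participated[var] / interval        # share of conflicts var helped produce
            scores[var] = (1 - alpha) * scores[var] + alpha * reward

    def pick_branch_variable(scores, unassigned_vars):
        # Branch on the unassigned variable with the highest score.
        return max(unassigned_vars, key=lambda v: scores[v])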

I checked (I got curious about what ABC does): ABC uses minisat 2.2 by default, with satoko (for 3 years) and glucose 3.0 (for 2 years) available as options. I don't know which configuration they used for the competitions. I think I saw "bmc -g" in some of the slides, which would indicate glucose. However, I also saw "abc super_solve", for which the name gives no information. (Maybe not from the same years.)

Indeed, smart application of SAT solvers does more than raw performance. Though every little bit helps. And are you going to tell scientists what they should research? :)




