Consistent speed-up of incremental solver (2) used in non-incremental way #7203
-
TL;DR: I've observed consistent and substantial speed-ups in Z3 (4.8.12) when the incremental solver is used in a non-incremental way. I'm working on a symbolic execution engine and noticed unexpected speed-ups from issuing a single push before each query and a single pop after it. To confirm the result from the solver's perspective, I extracted the queries to SMT2. Without push/pop, resetting between queries:

```smt2
(declare-fun n_args_0 () (Array (_ BitVec 32) (_ BitVec 8)))
(assert (let ((a!1 (concat (select n_args_0 #x00000003)
                           (concat (select n_args_0 #x00000002)
                                   (concat (select n_args_0 #x00000001)
                                           (select n_args_0 #x00000000))))))
          (not (not (bvult a!1 #x00000002)))))
(check-sat)
(reset)
(declare-fun n_args_0 () (Array (_ BitVec 32) (_ BitVec 8)))
(assert (let ((a!1 (concat (select n_args_0 #x00000003)
                           (concat (select n_args_0 #x00000002)
                                   (concat (select n_args_0 #x00000001)
                                           (select n_args_0 #x00000000))))))
          (bvult a!1 #x00000002)))
(assert (let ((a!1 (concat (select n_args_0 #x00000003)
                           (concat (select n_args_0 #x00000002)
                                   (concat (select n_args_0 #x00000001)
                                           (select n_args_0 #x00000000))))))
          (not (bvslt #x00000000 a!1))))
(check-sat)
(reset)
```
...

The same queries, each wrapped in (push)/(pop):

```smt2
(push)
(declare-fun n_args_0 () (Array (_ BitVec 32) (_ BitVec 8)))
(assert (let ((a!1 (concat (select n_args_0 #x00000003)
                           (concat (select n_args_0 #x00000002)
                                   (concat (select n_args_0 #x00000001)
                                           (select n_args_0 #x00000000))))))
          (not (not (bvult a!1 #x00000002)))))
(check-sat)
(pop)
(push)
(declare-fun n_args_0 () (Array (_ BitVec 32) (_ BitVec 8)))
(assert (let ((a!1 (concat (select n_args_0 #x00000003)
                           (concat (select n_args_0 #x00000002)
                                   (concat (select n_args_0 #x00000001)
                                           (select n_args_0 #x00000000))))))
          (bvult a!1 #x00000002)))
(assert (let ((a!1 (concat (select n_args_0 #x00000003)
                           (concat (select n_args_0 #x00000002)
                                   (concat (select n_args_0 #x00000001)
                                           (select n_args_0 #x00000000))))))
          (not (bvslt #x00000000 a!1))))
(check-sat)
(pop)
```
...

If my high-level understanding of Z3's incremental behaviour is correct, ... Results: ... A couple of questions: ...
Examples (shortened): basic-smt2.txt, push-pop-smt2.txt. I'd be very grateful for any help - thank you!
-
I was wrong about (3) in the general case - https://stackoverflow.com/a/40427658. Unsure how relevant that is here, though, given these aren't "pure bit-vector problems"; I don't see delegation to a SAT solver when running a handful of the queries verbosely. An incremental engine that doesn't garbage-collect lemmas would explain the speed-up of surrounding queries with (push)/(pop).
-
If we profile the slow version, we see that it spends quite some overhead in setting up a fresh state.
The effect of "reset" is to clear all state that was associated with the previous commands.
There is also different behavior with respect to pre-processing: a check-sat from scratch uses a richer set of pre-processing simplifications, which have their own overhead. This is not directly visible in the stack below, but may in general play a role (some of the queries do take notable time, for reasons that are not visible from the global analysis).