Solving the "Seven Segment Search" Puzzle with Z3
This week I stumbled upon someone wondering whether the second part of the recent Advent of Code puzzle “Seven Segment Search” can be expressed as a constraint satisfaction problem. As attested by the replies: yes, it can. However, I think the question deserves a more extensive discussion than just a few comments in a thread. This post tries to provide a more instructive answer and to raise awareness of the tradeoffs and solver misuses that some solutions put up with.
I assume that the reader is familiar with mathematical notation and
- just struggles to express the posed problem in a formal, declarative way, or
- is interested in seeing how the SMT solver Z3 can be used to express and solve the problem in several logics. It takes only a few steps to get from a quantifier-laden high-level formulation to what is effectively propositional logic.
The Problem Statement
A functioning seven-segment display is supposed to represent digits as follows:
By associating each segment with a character, we can clearly describe which segments are supposed to light up for each digit:
| Digit | Segments |
|-------|----------|
| 0 | abcefg |
| 1 | cf |
| 2 | acdeg |
| 3 | acdfg |
| 4 | bcdf |
| 5 | abdfg |
| 6 | abdefg |
| 7 | acf |
| 8 | abcdefg |
| 9 | abcdfg |
The crux of the Seven Segment Search puzzle is that we are faced with a seven-segment display whose wiring got mixed up.
As a result, instead of turning on segments c and f to display a 1, our display may turn on segments a and b instead.
We don’t get to see what the wrong wiring looks like, though.
All we can observe is a sequence of patterns, and our task is to make sense of it.
That is, to find out which digit each pattern represents:
Once we’ve figured out how to map the observable patterns to the digits they were originally intended to represent, we can use this knowledge to read the number shown on a four-digit seven-segment display that uses the very same wiring:
The decoded number 5353 is the solution to this problem instance. However, Advent of Code is about programming, so – to make people solve the puzzle programmatically – there are actually 200 independent instances that need to be solved and their 4-digit numbers summed up.
Puzzle Input File
The puzzle input file consists of 200 lines – each of which encodes an independent problem instance. The first part of a line describes the ten (unordered) patterns that can be observed on the malfunctioning seven-segment display. The second part represents the four-digit seven-segment display that needs to be decoded. The above problem instance is in fact the original introductory example. It may appear as follows in the input file:
acedgfb cdfbe gcdfa fbcad dab cefabd cdfgeb eafb cagedb ab | cdfeb fcadb cdfeb cdbaf
Formalising a Problem Instance
If you give it some thought, it is easy to come up with efficient procedures to solve any problem instance of this puzzle by exploiting domain-specifics like the patterns' numbers of segments.
For example, since 1 is the only digit that is displayed by exactly two segments, any observed pattern with just two lit segments must represent it.
However, in general, such procedures may require significant alterations, and a re-analysis of which puzzle aspects can be exploited, even if seemingly small variations are introduced.
Declarative approaches, which merely rely on a description of what a solution to a problem is, rather than how to find it, tend to be less prone to this. Therefore, instead of investing in a solution procedure tailored to a specific problem from the start, it may be sensible to first express the problem in a declarative formalism for which generic solvers exist. If, at some point, the tradeoff between flexibility and performance becomes problematic, one can still look into designing a problem-specific, imperative procedure.
In the following, we will use first-order logic to express the puzzle in a formal, declarative way. This is a reasonably high-level logic which allows us to conveniently express the relations between the problem’s entities, and is amenable to automated theorem proving.
Characterisation in First-Order Logic
Let us start by formalising the thing we know: how each digit maps to a set of segments on a (functioning) seven-segment display. In first-order logic, sets are characterised by predicates. For example, if the domain of discourse is $\mathbb{Z}$, predicate $\mathit{neg}(x) := x<0$ characterises the set of negative integers. Accordingly, to characterise the segments of each digit $d$, we could define 10 predicates $\mathit{segment}_d(s)$. However, it is probably more convenient to let one binary predicate $$ \mathit{digitSegment}:\underset{\overbrace{\\{0,1,2,3,4,5,6,7,8,9\\}}}{\mathit{Digit}}\times \underset{\overbrace{\\{a,b,c,d,e,f,g\\}}}{\mathit{Segment}} $$ characterise the digit’s segments. That is, require the following to hold $$ \tag{1}\mathit{digitSegment}(d,s) \iff s \text{ is a segment of } d $$ for all digits $d$ and segments $s$.
We’d like to have a similar characterisation of the mapping of digits to segments on the broken seven-segment display, but that can’t be stated directly as it depends on the (unknown) permutation of segments, or wires, if you will. Therefore, to first model the permutation, we introduce an uninterpreted function $$ \mathit{Perm}:\mathit{Segment}\to\mathit{Segment} $$ but restrict the possible interpretations of $\mathit{Perm}$ to permutations only. This is achieved by requiring the function to be bijective: $$ \tag{2}\forall s,s'\in\mathit{Segment}\ldotp s = s' \iff \mathit{Perm}(s) = \mathit{Perm}(s') $$
Based on that we can now characterise the permuted digit segments $$ \mathit{PermDigitSegment}:\mathit{Digit}\times \mathit{Segment} $$ by specifying that $\mathit{Perm}(s)$ must be a permuted segment of $d$ iff $s$ is a segment of $d$ on the functioning display: $$ \tag{3}\mathit{PermDigitSegment}(d,\mathit{Perm}(s)) \iff \mathit{digitSegment}(d,s) $$
Note that so far we’ve only formalised aspects that are common to all problem instances. Even the permutation $\mathit{Perm}$, which differs from instance to instance, could be introduced without referring to instance-specific details.
What distinguishes an instance are the ten patterns that can be observed on the (malfunctioning) display, i.e. the first part of each line of the input file. Just as $\mathit{digitSegment}$ characterises the segments behind each possible digit, the idea here is to introduce a predicate $$ \mathit{patternSegment}: \underset{\overbrace{\\{0,1,2,3,4,5,6,7,8,9\\}}}{\mathit{Index}} \times \mathit{Segment} $$ to characterise the segments behind each of the ten observable patterns. That is, assert for all indices $i$ and segments $s$ that $$ \tag{4}\mathit{patternSegment}(i,s) \iff s \text{ is a segment of the $i$-th pattern}. $$
The only thing that remains to be formalised is the relation between the observed patterns and the other “objects”. That’s the actual puzzle. What we know from the puzzle description is that each of the observable patterns matches the permuted segments of some digit. Therefore, there must be a “decoding function” $$ \mathit{Idx2dig}: \mathit{Index} \to \mathit{Digit} $$ which maps each observed pattern – more precisely its index $i$ – in such a way to a digit $d$ that the permuted segments of $d$ correspond to the observed pattern. Similar to $(3)$, we can constrain $\mathit{Idx2dig}$ to behave like this by specifying that $s$ must be a permuted segment of digit $\mathit{Idx2dig}(i)$ iff $s$ is a segment of the $i$-th observed pattern $$ \tag{5}\mathit{PermDigitSegment}(\mathit{Idx2dig}(i),s) \iff \mathit{patternSegment}(i,s) $$ for all indices $i$ and segments $s$.
The Characterisation at a Glance
Overall, we end up with the following constraints $$ \begin{aligned} \mathit{digitSegment}(d,s) &\iff s \text{ is a segment of } d\\ s = s' &\iff \mathit{Perm}(s) = \mathit{Perm}(s')\\ \mathit{PermDigitSegment}(d,\mathit{Perm}(s)) &\iff \mathit{digitSegment}(d,s)\\ \mathit{patternSegment}(i,s) &\iff s \text{ is a segment of the $i$-th pattern}\\ \mathit{PermDigitSegment}(\mathit{Idx2dig}(i),s) &\iff \mathit{patternSegment}(i,s) \end{aligned} $$ for all $d\in\mathit{Digit}$, $s,s'\in\mathit{Segment}$, and indices $i\in\mathit{Index}$.
If we now manage to find an interpretation of the uninterpreted symbols that satisfies all these constraints, the particular puzzle instance will be solved. We can then simply use $\mathit{Idx2dig}$ to map the patterns on the malfunctioning four-digit seven-segment display back to digits, or use $\mathit{Perm}$ to undo the permutation of segments.
Solving the Puzzle via Z3
While there are many solvers, formalisms, and technologies that we can leverage to obtain a satisfying interpretation of the above constraints, this post illustrates how to do it with the SMT solver Z3. More precisely, with its Python bindings.
Domains
To express the above predicates we first have to introduce the domains, or Sorts, our values will be from.
Finite domains of unrelated values can be created via EnumSort, and that’s exactly the kind of values we’re dealing with in the puzzle.
Since we will also need to convert between these values and their Python counterparts – int for digits and indices, and str for segments – we accompany each domain with corresponding mappings:
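A sketch of such declarations follows; the helper names (DIGITS, SEGMENT_OF, INT_OF_DIGIT, DIGIT_SEGMENTS) are assumptions of this sketch, not the original listing:

```python
from z3 import *  # the usual z3py wildcard import

# The three finite domains, created as enumeration sorts.
Digit, DIGITS = EnumSort('Digit', [f'd{i}' for i in range(10)])
Index, INDICES = EnumSort('Index', [f'i{i}' for i in range(10)])
Segment, SEGMENTS = EnumSort('Segment', list('abcdefg'))

# Mappings between Z3 values and their Python counterparts.
SEGMENT_OF = dict(zip('abcdefg', SEGMENTS))               # 'c' -> Z3 segment value c
INT_OF_DIGIT = {str(d): i for i, d in enumerate(DIGITS)}  # Z3 digit value d3 -> 3

# The known digit-to-segment mapping of a functioning display (see the table above).
DIGIT_SEGMENTS = ['abcefg', 'cf', 'acdeg', 'acdfg', 'bcdf',
                  'abdfg', 'abdefg', 'acf', 'abcdefg', 'abcdfg']
```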
Of course it is possible to use IntSort and StringSort to model digits, indices and segments instead of introducing dedicated finite domains, and some of the suggested approaches do resort to this.
However, when doing so one must be aware of the implications.
For example, to exploit problem-specifics, one of the posted solutions features integer addition in its constraints. The result of this is that the characterisation ends up in a more complex fragment of first-order logic than necessary – in quantifier-free linear integer arithmetic (QF_LIA). This, in turn, forces SMT solvers to employ more complex techniques than necessary to solve the puzzle. However, if higher-level modelling better captures the semantics of the problem, it may pay off to use a more expressive (sub-)logic – even if reduction to a less expressive one is possible. One should just be careful to not add such complexity inadvertently. Otherwise, one can quickly end up expressing a decidable problem in terms of an undecidable one.
Solving the Puzzle Incrementally
To solve the overall puzzle, we have to solve the 200 independent problem instances described in the input file and combine their solutions. Although it is possible to construct and solve the instances' constraints independently, as most of the suggested solutions do, it is more efficient to avoid starting from scratch 200 times. Closer inspection of constraints $(1)–(5)$ shows that the instances' formalisations only differ in $(4)$, i.e. the definition of $\mathit{patternSegment}$. Therefore, a simple way to avoid starting from scratch is by first adding the core constraints $(1)–(3),(5)$ to the solver’s stack of constraints and then iteratively checking satisfiability with each of the 200 variants of $(4)$ swapped in at the top of the stack.
The following encoding-agnostic procedure implements the suggested approach.
It uses the scope management operations push and pop to replace the definition of $\mathit{patternSegment}$ between satisfiability checks.
When a satisfying interpretation – a so-called model – is found, we can inspect it to learn how the observed patterns map to digits:
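A sketch of such a procedure might look as follows, assuming the input has already been parsed into (patterns, outputs) pairs and that the encoder provides the interface shown further below:

```python
def solve_puzzle(encoder, instances):
    # `instances` is the parsed input: a list of (patterns, outputs) pairs holding the
    # ten observed patterns and the four patterns shown on the four-digit display.
    solver = SolverFor(encoder.logic())        # or simply Solver(), if no logic fits
    solver.add(encoder.encode_core())          # constraints (1)-(3) and (5)

    total = 0
    for patterns, outputs in instances:
        solver.push()                                   # open a scope for this instance
        solver.add(encoder.encode_variant(patterns))    # swap in constraint (4)
        assert solver.check() == sat
        idx2dig = encoder.interpret(solver.model())     # digit behind the i-th pattern
        solver.pop()                                    # drop the instance-specific constraints

        # Decode the four-digit display; patterns match up to the order of their segments.
        keys = [frozenset(p) for p in patterns]
        value = 0
        for out in outputs:
            value = 10 * value + idx2dig[keys.index(frozenset(out))]
        total += value
    return total
```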
The procedure is encoding-agnostic, in the sense that it only expects the characterisation code to implement the following self-explanatory interface:
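For illustration, the interface could be captured as an abstract base class along these lines (the method names and signatures are assumptions of this sketch):

```python
from abc import ABC, abstractmethod
from typing import List

class PuzzleEncoder(ABC):
    @abstractmethod
    def logic(self) -> str:
        """Name of the (sub-)logic the produced constraints fall into."""

    @abstractmethod
    def encode_core(self) -> List[BoolRef]:
        """Constraints (1)-(3) and (5), which are shared by all problem instances."""

    @abstractmethod
    def encode_variant(self, patterns: List[str]) -> List[BoolRef]:
        """Constraint (4) for the ten observed patterns of one instance."""

    @abstractmethod
    def interpret(self, model: ModelRef) -> List[int]:
        """For each index i, the digit represented by the i-th observed pattern."""
```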
As you can hopefully see, using an SMT solver incrementally is pretty straightforward in the context of this puzzle.
Although we’ve just started, our SMT-based puzzle solver is almost finished already.
It merely remains to provide a concrete implementation of PuzzleEncoder.
High-level Encoding
The most obvious solution is to just use the means Z3 provides to express the characterisation we came up with above.
It is handy to keep the symbols that we use in our encoding around, e.g. to reference them in the encoding functions, or to look up their interpretation later. Therefore, we declare these symbols as members of the encoder. What may catch you by surprise is that, following the SMT-LIB standard, there is no special way to create a predicate in Z3. Instead, predicates are understood as functions with a Boolean result:
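A possible set of declarations is sketched below; the member names and the logic string are guesses, not the original listing:

```python
class HighLevelEncoder(PuzzleEncoder):
    def __init__(self):
        # Predicates are simply uninterpreted functions with a Boolean result sort.
        self.digit_segment = Function('digitSegment', Digit, Segment, BoolSort())
        self.perm = Function('Perm', Segment, Segment)
        self.perm_digit_segment = Function('PermDigitSegment', Digit, Segment, BoolSort())
        self.pattern_segment = Function('patternSegment', Index, Segment, BoolSort())
        self.idx2dig = Function('Idx2dig', Index, Digit)

    def logic(self):
        return 'UF'  # quantified formulas over uninterpreted functions (assumed choice)
```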
Besides the standard logics, Z3 supports several others. However, instead of guesswork, I find it easiest to just look up the strings that map to supported (sub-)logics.
With the Python bindings, the expressions that represent our core constraints look very similar to the original ones.
What stands out is that, in contrast to our formalisation, the variables we quantify over must be created beforehand.
Furthermore, in code, the right-hand side of $(1)$ is a bit less readable than the plain $s \text{ is a segment of }d$:
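Continuing the class sketched above, the quantified core constraints might be encoded like this:

```python
    # (inside class HighLevelEncoder)
    def encode_core(self):
        d = Const('d', Digit)
        s = Const('s', Segment)
        t = Const('t', Segment)
        i = Const('i', Index)
        return [
            # (1): "s is a segment of d" becomes a disjunction over all known pairs
            ForAll([d, s], self.digit_segment(d, s) == Or([
                And(d == DIGITS[n], Or([s == SEGMENT_OF[c] for c in segs]))
                for n, segs in enumerate(DIGIT_SEGMENTS)])),
            # (2): Perm is a bijection
            ForAll([s, t], (s == t) == (self.perm(s) == self.perm(t))),
            # (3): PermDigitSegment(d, Perm(s)) <=> digitSegment(d, s)
            ForAll([d, s],
                   self.perm_digit_segment(d, self.perm(s)) == self.digit_segment(d, s)),
            # (5): PermDigitSegment(Idx2dig(i), s) <=> patternSegment(i, s)
            ForAll([i, s],
                   self.perm_digit_segment(self.idx2dig(i), s) == self.pattern_segment(i, s)),
        ]
```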
Since the constraints $(1)$ and $(4)$ have the same form, encode_variant looks a lot like the corresponding part of encode_core:
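Along the same lines, a sketch of encode_variant:

```python
    # (inside class HighLevelEncoder)
    def encode_variant(self, patterns):
        i = Const('i', Index)
        s = Const('s', Segment)
        # (4): same shape as (1), with the observed patterns on the right-hand side
        return [ForAll([i, s], self.pattern_segment(i, s) == Or([
            And(i == INDICES[n], Or([s == SEGMENT_OF[c] for c in pat]))
            for n, pat in enumerate(patterns)]))]
```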
When our constraints are determined to be satisfiable, the returned model contains – among other things – the information about how $\mathit{Idx2dig}$ maps indices to digits.
Since only the encoder needs to know how exactly the encoding works, i.e. solve_puzzle shouldn’t have to deal with the declared symbols, interpret looks up in the model what each input is mapped to and returns the findings as a plain list of integers.
The integer at index $i$ denotes the digit encoded by the $i$-th observed pattern:
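A corresponding sketch of interpret, which evaluates $\mathit{Idx2dig}$ on every index in the model:

```python
    # (inside class HighLevelEncoder)
    def interpret(self, model):
        # Evaluate Idx2dig on each index and translate the resulting digit value to an int.
        return [INT_OF_DIGIT[str(model.evaluate(self.idx2dig(INDICES[i]),
                                                model_completion=True))]
                for i in range(10)]
```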
At this point you can give our puzzle solver a try.
Just make sure to pass an instance of HighLevelEncoder to solve_puzzle.
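For example, a hypothetical driver (the parsing helper and file name are assumptions) might read:

```python
def parse_line(line):
    # Split a puzzle line into the ten observed patterns and the four output patterns.
    patterns, outputs = line.split('|')
    return patterns.split(), outputs.split()

with open('input.txt') as f:
    instances = [parse_line(line) for line in f]

print(solve_puzzle(HighLevelEncoder(), instances))
```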
This naïve solution isn’t exactly fast, taking roughly 30s, but coming up with it didn’t require much thought beyond the original formalisation.
Let’s see whether this can be improved by switching to a less expressive (sub-)logic.
Mid-level Encoding
Although quantifiers facilitate a concise characterisation, they are also a source of complexity – especially in the context of small finite domains. Therefore, in the next step, we will bring our constraints into a quantifier-free fragment of first-order logic.
Dropping the quantifiers does not entail any changes to the declared symbols, but the new encoder should communicate that the constraints it produces are free of quantifiers:
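One way to sketch this is to reuse the high-level encoder's declarations and merely override the advertised logic; the subclassing and the logic string are assumptions:

```python
class MidLevelEncoder(HighLevelEncoder):
    # Same declared symbols as before; only the advertised logic changes.
    def logic(self):
        return 'QF_UF'  # quantifier-free formulas over uninterpreted functions
```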
The approach to get rid of a forall quantifier is simple: just explicitly enumerate the values and assert the nested constraint for each. This leaves us with an increased number of constraints but spares Z3 the necessity of dealing with quantifiers:
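A sketch of the enumerated core constraints, continuing the class above and reusing the helpers assumed earlier:

```python
    # (inside class MidLevelEncoder)
    def encode_core(self):
        constraints = []
        # (1): enumerate all (digit, segment) pairs; "s is a segment of d" is a plain bool
        for n, segs in enumerate(DIGIT_SEGMENTS):
            for c in 'abcdefg':
                constraints.append(self.digit_segment(DIGITS[n], SEGMENT_OF[c]) == (c in segs))
        # (2): bijectivity of Perm, spelled out for every pair of segments
        for s in SEGMENTS:
            for t in SEGMENTS:
                constraints.append((s == t) == (self.perm(s) == self.perm(t)))
        # (3) and (5): likewise instantiated for all combinations
        for d in DIGITS:
            for s in SEGMENTS:
                constraints.append(
                    self.perm_digit_segment(d, self.perm(s)) == self.digit_segment(d, s))
        for i in INDICES:
            for s in SEGMENTS:
                constraints.append(
                    self.perm_digit_segment(self.idx2dig(i), s) == self.pattern_segment(i, s))
        return constraints
```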
Aside from the substitution of quantification by iteration, the code is effectively the same as in our first encoder.
I find this version to be even more readable than the previous one, mostly because it is so easy to express $s \text{ is a segment of }d$ for a concrete pair $(d,s)$.
The rest of the encoder does not provide any new insights and is only shown for the sake of completeness:
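For completeness, a sketch of the remaining method; interpret can stay exactly as in the high-level encoder:

```python
    # (inside class MidLevelEncoder)
    def encode_variant(self, patterns):
        # (4): enumerated just like (1), based on the ten observed patterns
        return [self.pattern_segment(INDICES[n], SEGMENT_OF[c]) == (c in pat)
                for n, pat in enumerate(patterns) for c in 'abcdefg']

    # interpret is inherited unchanged from HighLevelEncoder
```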
Now, try running solve_puzzle with this new encoder.
It turns out that moving to a quantifier-free fragment of first-order logic reduces the runtime significantly (to ~6s).
One might wonder whether going even lower will yield similar performance gains.
Low-level Encoding
Similar to quantifiers, uninterpreted functions introduce some complexity but do not add any expressivity that is essential to our characterisation. If our constraints were free of both quantifiers and uninterpreted functions they’d be effectively propositional. In fact, Z3 wouldn’t even reach for SMT procedures but directly employ its SAT solver.
Now, how do we get rid of the uninterpreted functions? Since all of our functions have finite domains, it is possible to introduce symbolic values to replace each possible function application in our constraints. That is, for each function and input, we introduce a variable to denote the result. This of course impacts the symbols we declare. For example, where we previously used an uninterpreted function $$ \mathit{digitSegment}:\mathit{Digit}\times\mathit{Segment}\to\mathbb{B} $$ to represent the predicate $\mathit{digitSegment}:\mathit{Digit}\times\mathit{Segment}$, we now have a Boolean variable for each pair $(d,s)\in\mathit{Digit}\times\mathit{Segment}$:
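The declarations might then be sketched as plain dictionaries of variables; the names and the QF_FD logic string are assumptions:

```python
class LowLevelEncoder(PuzzleEncoder):
    def __init__(self):
        # One Boolean variable per pair replaces each Boolean-valued function application,
        # and one enum-sorted constant replaces each application of Perm and Idx2dig.
        self.digit_segment = {(d, c): Bool(f'digitSegment_{d}_{c}')
                              for d in range(10) for c in 'abcdefg'}
        self.perm_digit_segment = {(d, c): Bool(f'permDigitSegment_{d}_{c}')
                                   for d in range(10) for c in 'abcdefg'}
        self.pattern_segment = {(i, c): Bool(f'patternSegment_{i}_{c}')
                                for i in range(10) for c in 'abcdefg'}
        self.perm = {c: Const(f'perm_{c}', Segment) for c in 'abcdefg'}
        self.idx2dig = {i: Const(f'idx2dig_{i}', Digit) for i in range(10)}

    def logic(self):
        return 'QF_FD'  # assumed: Z3's finite-domain logic, handled by its SAT-based engine
```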
We can now use the freshly introduced variables within our constraints, in place of the original function applications. This does complicate constraints where we previously had nested function applications, such as $(3)$ and $(5)$. Here, the idea is similar to the alternative formulation of $(3)$: we constrain the result of the outer function application depending on the result of the nested function application. However, without uninterpreted functions, some constraint simplification opportunities may become more obvious, too. Since the domain and value range of $\mathit{perm}$ are equal, the bijectivity constraint can be simplified to “distinct applications of $\mathit{perm}$ return distinct segments”:
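A sketch of the resulting core constraints:

```python
    # (inside class LowLevelEncoder)
    def encode_core(self):
        constraints = []
        # (1): digitSegment is fully determined by the known mapping
        for d, segs in enumerate(DIGIT_SEGMENTS):
            for c in 'abcdefg':
                constraints.append(self.digit_segment[(d, c)] == (c in segs))
        # (2): bijectivity boils down to pairwise distinct results of perm
        constraints.append(Distinct(*self.perm.values()))
        # (3): couple the outer "application" with every possible result of the inner one
        for d in range(10):
            for c in 'abcdefg':
                for t in 'abcdefg':
                    constraints.append(Implies(
                        self.perm[c] == SEGMENT_OF[t],
                        self.perm_digit_segment[(d, t)] == self.digit_segment[(d, c)]))
        # (5): same idea for the variables standing in for Idx2dig
        for i in range(10):
            for d in range(10):
                constraints.append(Implies(
                    self.idx2dig[i] == DIGITS[d],
                    And([self.perm_digit_segment[(d, c)] == self.pattern_segment[(i, c)]
                         for c in 'abcdefg'])))
        return constraints
```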
As with the previous encodings, the rest of the code holds no surprises and is merely listed for the sake of completeness:
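And, for the sake of completeness, a sketch of the remaining methods:

```python
    # (inside class LowLevelEncoder)
    def encode_variant(self, patterns):
        # (4): patternSegment is fully determined by the ten observed patterns
        return [self.pattern_segment[(i, c)] == (c in pat)
                for i, pat in enumerate(patterns) for c in 'abcdefg']

    def interpret(self, model):
        # Read off the digit-valued variables and translate them back to ints.
        return [INT_OF_DIGIT[str(model.evaluate(self.idx2dig[i], model_completion=True))]
                for i in range(10)]
```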
This is where we stop tweaking the encoding.
You will find that running solve_puzzle with an instance of LowLevelEncoder again reduces the runtime significantly (to ~1s).
Do Try This at Home
Interestingly, if the LowLevelEncoder is used, each check in solve_puzzle takes only about 500µs.
So why does solve_puzzle take 1s? That’s an order of magnitude longer than 200 times 500µs!
Well, running a profiler shows that most time is wasted in the bindings – specifically in ExprRef.__eq__.
There are several things you can do to squeeze out better execution times:
- Now that you’ve seen how to express the constraints with the Python bindings, give the bindings for C++ – or some other language with less overhead than Python – a try.
- Avoid recreating the constraints for each variant. They have the same form anyway. Instead, try to come up with a way to leverage solving under assumptions, i.e. delete encode_variant and rather communicate the observed patterns by passing appropriate assumptions to the check function (a minimal sketch of the mechanism follows below).
- Alternatively, instead of solving each of the 200 problem instances separately, try to combine them all into a single set of constraints. A single invocation of check shall suffice to solve the complete puzzle.
- Assuming you do implement the above suggestion, try feeding the constraints to a dedicated SAT solver for another performance gain. Have a look at this section from a previous post, if you need some guidance on how to do this.
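In case the second suggestion sounds abstract, here is a minimal, puzzle-independent sketch of the mechanism behind solving under assumptions: guard an optional constraint with a fresh Boolean literal and activate it per check() call.

```python
from z3 import Bool, Implies, Not, Solver

s = Solver()
x = Bool('x')
guard = Bool('variant_0')
s.add(Not(x))              # core constraint
s.add(Implies(guard, x))   # only effective when `guard` is assumed

print(s.check(guard))      # unsat: x clashes with Not(x) under the assumption
print(s.check())           # sat: the guarded constraint stays switched off
```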