Playing Hard Mastermind Games with a SAT-based AI

Back in the day, Mastermind was a popular two-player code-breaking game, and many variations of it still exist as both standalone games and puzzles within other games. Although it is difficult for a human player to make optimal guesses of the secret code, or at least guesses that do not conflict with the provided clues, the setting is usually simple enough for an AI to find such candidates via explicit exploration of the game tree.

However, such approaches become infeasible when the number of possibilities for secret codes grows into the millions. This post illustrates where standard approaches break down, and how finding consistent candidates can instead be tackled with SAT solving – yielding an AI that can handle Mastermind instances that are orders of magnitude harder.

Introduction

Every now and then, my friends and I gather to play Mansions of Madness – a cooperative board game. In line with its Lovecraftian setting, the game is often unforgiving, so we try to optimise our every move. Besides the core gameplay, solving various puzzles is often needed to advance. One of these puzzles is in fact a Mastermind variant, and, as some players argue, the instances may be rather difficult to reason about – resulting in wasted turns and a decreased chance of winning the scenario. This is how the problem of making educated guesses in Mastermind piqued my interest.

Rules of Mastermind

In the classic Mastermind game, a secret code $s$ is a row of four pegs, where each peg is of one of six colours. For example, given the colours $ \textcolor{#272822}● \textcolor{#f92672}● \textcolor{#a6e22e}● \textcolor{#f4bf75}● \textcolor{#66d9ef}● \textcolor{#ae81ff}● $, the following sequence would be a possible secret code:

$$ \tag{1} s=\textcolor{#a6e22e}●\textcolor{#272822}{●●}\textcolor{#f92672}{●} $$

This secret is chosen by, and known only to, the codemaker. In every round, the codebreaker makes a guess, and receives a pair of numbers as feedback/clue from the codemaker:

  1. The number of full matches states how many of the pegs coincide with the secret.
  2. The number of partial or symbol matches states how many additional full matches can be achieved by reordering the pegs.

For example, let $s$ be the secret from $(1)$ and $g=\textcolor{#a6e22e}{●}\textcolor{#f92672}{●●}\textcolor{#ae81ff}{●}$ be the codebreaker’s guess. The codemaker’s feedback to that guess would be $(1, 1)$, since

  • only one peg matches exactly (the green one in the first position)
  • even though there are two pink pegs, only one of them can become a full match if put in the last position – we can’t put both there

The following code computes such feedback in two phases. First, we get rid of all full matches from both the guess and secret, and then try to match the remaining symbols:

def eval_guess(secret, guess):
    # Count full matches (position & symbol)
    guess_rest = [a for (a, b) in zip(guess, secret) if a != b]
    secret_rest = [b for (a, b) in zip(guess, secret) if a != b]
    full_match_count = len(guess) - len(guess_rest)

    # Count partial matches (symbol only)
    symbol_match_count = 0
    for symbol in guess_rest:
        try:
            secret_rest.remove(symbol)
            symbol_match_count += 1
        except ValueError:
            pass
    return Feedback(full_match_count, symbol_match_count)
mastermind.py
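
As a quick sanity check, we can reproduce the $(1,1)$ feedback from the example above. This is a hypothetical snippet, not part of mastermind.py; since eval_guess only relies on symbol equality, plain strings serve as stand-ins for the coloured pegs:

# Hypothetical sanity check for the example above
secret = ("green", "black", "black", "pink")    # the secret s from (1)
guess  = ("green", "pink",  "pink",  "purple")  # the codebreaker's guess g
print(eval_guess(secret, guess))                # one full match, one symbol match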

Example Playthrough

A full playthrough of classic Mastermind, i.e. with codes of length four, and six colours/symbols $\Sigma=\{ \textcolor{#272822}●, \textcolor{#f92672}●, \textcolor{#a6e22e}●, \textcolor{#f4bf75}●, \textcolor{#66d9ef}●, \textcolor{#ae81ff}● \}$ to choose from, may look as follows.

The game starts with the codemaker picking a secret $s\in\Sigma^4$. To keep things simple, let this be the familiar combination $s=\textcolor{#a6e22e}●\textcolor{#272822}{●●}\textcolor{#f92672}{●}$ from $(1)$. Without questioning the reasoning of the codebreaker, the following table outlines possible guesses and the received feedback for every round.

Round Guess Feedback
1 $\textcolor{#272822}●\textcolor{#ae81ff}{●●}\textcolor{#66d9ef}{●}$ $(0, 1)$
2 $\textcolor{#f92672}●\textcolor{#f4bf75}{●}\textcolor{#272822}{●●}$ $(1, 2)$
3 $\textcolor{#f4bf75}●\textcolor{#a6e22e}{●}\textcolor{#272822}{●}\textcolor{#f92672}{●}$ $(2, 1)$
4 $\textcolor{#a6e22e}●\textcolor{#272822}{●●}\textcolor{#f92672}{●}$ $(4, 0)$

This is in fact the log of an actual game played by the SAT-based AI that we will arrive at by the end of this post.

The corresponding game loop amounts to a few lines of code. It requires a potential player or AI to be able to give a first guess at the beginning of the game (initial_guess()), and to incorporate feedback for subsequent guesses (make_guess(feedback)):

class Game:
    def __init__(self, secret, player):
        self.secret = secret
        self.player = player

    def run(self):
        guess = self.player.initial_guess()
        num_guesses = 1

        # Guess-Feedback loop until exact match is guessed
        while True:
            feedback = eval_guess(self.secret, guess)
            print("{} --> {}".format(guess, feedback))
            if feedback.full_matches == NUM_POSITIONS:
                break
            guess = self.player.make_guess(feedback)
            num_guesses += 1
        return guess, num_guesses
mastermind.py
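
For illustration, a complete game could be wired up along the following lines – a hypothetical driver that assumes the constants and player classes discussed below (e.g. ExplicitConsistentAi) are available from mastermind.py:

# Hypothetical driver; assumes the classes and constants from mastermind.py are in scope
secret = (2, 0, 0, 1)  # the running example s, in the integer encoding introduced below
final_guess, num_guesses = Game(secret, ExplicitConsistentAi()).run()
print("solved in {} rounds".format(num_guesses))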

Where’s the Difficulty?

With four positions and six colours there are only $6^4=1\,296$ possible combinations to choose from, so keeping the set of all combinations in memory and working with it is perfectly feasible when solving classic Mastermind instances. However, there is nothing hindering us from using even longer sequences or introducing further colours. The code length $|s|$ and set of colours/symbols $\Sigma$ are exactly the parameters that are typically varied to arrive at the different variations of Mastermind.

For example, Grand Mastermind keeps the secret length of the original but uses $25$ symbols ($5$ shapes, $5$ colours), which allows for $25^4=390\,625$ different codes. Overall, most Mastermind variants feature a number of combinations in these orders of magnitude (thousands to hundreds of thousands), rendering them amenable to analysis via methods that explicitly explore the game tree. At the time of writing, the Wikipedia page lists only one exception: Mastermind Secret Search uses $26$ letters as symbols to form words of up to six characters, which gives rise to $26^6=308\,915\,776$ possible secrets.

But how can we approach playing Mastermind variants with $6^{16}=2\,821\,109\,907\,456$ or more combinations without resorting to blind guessing? Just to give you an idea: even iterating through all these combinations will take forever and is out of the question for a game AI – at an optimistic enumeration rate of, say, $10^7$ candidates per second, a single sweep over all $6^{16}\approx 2.8\cdot 10^{12}$ combinations would already take roughly $2.8\cdot 10^{5}$ seconds, i.e. more than three days. Therefore, even when not aiming for optimal play but merely reasonable guesses, explicit and enumerative methods without backtracking or learning from conflicts won’t get us far. However, they work well for most practical Mastermind variants and are a good place to start getting a feel for the problem domain.

Explicit Approaches

In contrast to symbolic methods for search and reachability checking, which work with implicit representations of state sets, explicit methods actually construct each state (here: secret codes) separately and store each in an accessible form in memory.

In the following implementations, we represent secrets as tuples of integers. That is, the symbol set $\Sigma=\{ \textcolor{#272822}●, \textcolor{#f92672}●, \textcolor{#a6e22e}●, \textcolor{#f4bf75}●, \textcolor{#66d9ef}●, \textcolor{#ae81ff}● \}$ is treated as $\Sigma=\{0,1,2,3,4,5\}$, and the secret $s=\textcolor{#a6e22e}●\textcolor{#272822}{●●}\textcolor{#f92672}{●}$ is just the tuple $s=(2,0,0,1)$ to us. Given $|s|$ and $|\Sigma|$, we can now easily enumerate all possible secrets with Python’s itertools.product function:

def all_secrets():
    return product(range(NUM_SYMBOLS), repeat=NUM_POSITIONS)
mastermind.py
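
In the classic setting this generates the $1\,296$ tuples in lexicographic order, which a quick (hypothetical) spot check confirms:

# Hypothetical spot check, assuming NUM_POSITIONS = 4 and NUM_SYMBOLS = 6
from itertools import islice
print(list(islice(all_secrets(), 3)))  # [(0, 0, 0, 0), (0, 0, 0, 1), (0, 0, 0, 2)]
print(sum(1 for _ in all_secrets()))   # 1296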

Refinement of Consistent Candidates

The simplest approach to playing Mastermind is probably to iterate over all_secrets() and commit to each element in turn, since we must eventually encounter the code chosen by the codemaker. However, this approach completely ignores the feedback received from the codemaker, and will obviously lead to many wasted rounds for the codebreaker.

The next best thing we can try is to avoid making guesses that are inconsistent with the accumulated feedback. For this, we merely have to check that whatever we assume the secret to be, it generates the same feedback for the previous guesses as the actual secret does. For every guess-feedback pair from previous rounds the following function must return $\mathit{true}$:

def is_consistent(secret, guess, feedback):
    return feedback == eval_guess(secret, guess)
mastermind.py
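
For instance, the true secret of the example playthrough has to pass this check for every round. Here is a hypothetical check for round 1, using the integer encoding from above and assuming that Feedback is the named tuple returned by eval_guess:

# Round 1 of the playthrough: guess (black, purple, purple, blue) with feedback (0, 1)
assert is_consistent((2, 0, 0, 1), (0, 5, 5, 4), Feedback(0, 1))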

This is exactly the approach suggested by Shapiro in 1983, and a naïve implementation is provided with ExplicitConsistentAi:

class ExplicitConsistentAi(Player):
    def __init__(self):
        self.last_guess = None
        self.candidates = list(all_secrets())

    def initial_guess(self):
        self.last_guess = self.candidates[0]
        return self.last_guess

    def make_guess(self, feedback):
        self.candidates = [c for c in self.candidates
                           if is_consistent(c, self.last_guess, feedback)
                           and c != self.last_guess]
        return self.initial_guess()
mastermind.py

self.candidates stores all candidates that are consistent with the received feedback. Prior to the first guess there is no feedback, so initially all possible secrets are consistent. Although the implementation always returns the first element, there is nothing wrong with picking a different one.

Unless a guess happens to match the secret exactly, we will receive feedback $f$ that renders some elements of self.candidates inconsistent with $f$. Therefore, after every round, such candidates and the last (wrong) guess are removed from self.candidates.

Even though the approach does not result in optimal play, and many instances can be won in fewer turns, it is simple and solves a classic Mastermind instance in $5.765$ turns on average. This is surprisingly close to the theoretical minimum of $4.34$ turns.

Lazy Enumeration of Consistent Candidates

A downside of the above method is that it keeps a list of all consistent candidates in memory. Since the number of possible secrets grows exponentially in the length of the secret, this may already hinder us from solving slightly harder Mastermind variations, e.g. with $6^8=1\,679\,616$ possible secrets.

LazyExplicitConsistentAi is a more practical implementation of the previous approach. It enumerates the consistent combinations but only needs one candidate to be explicitly present in memory at a time, which results in a small memory footprint:

class LazyExplicitConsistentAi(Player):
    def __init__(self):
        self.guesses = list()
        self.feedbacks = list()
        self.candidates = (c for c in all_secrets()
                           if is_consistent_with_history(c, self.guesses, self.feedbacks))

    def initial_guess(self):
        self.guesses.append(next(self.candidates))
        return self.guesses[-1]

    def make_guess(self, feedback):
        self.feedbacks.append(feedback)
        return self.initial_guess()
mastermind.py

Unlike the previous approach, which incrementally refines the set of candidates w.r.t. the latest feedback, a lazily generated candidate is checked w.r.t. all of the received feedback:

def is_consistent_with_history(secret, guesses, feedbacks):
    for guess, feedback in zip(guesses, feedbacks):
        if not is_consistent(secret, guess, feedback):
            return False
    return True
mastermind.py
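
As a hypothetical sanity check, the true secret of the example playthrough is consistent with the complete history of the first three rounds:

# Guesses and feedback from rounds 1-3 of the example playthrough (integer encoding)
guesses   = [(0, 5, 5, 4), (1, 3, 0, 0), (3, 2, 0, 1)]
feedbacks = [Feedback(0, 1), Feedback(1, 2), Feedback(2, 1)]
assert is_consistent_with_history((2, 0, 0, 1), guesses, feedbacks)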

Minimax-based Guessing

Since my original motivation for looking into this topic was to learn how to play the Mastermind games in Mansions of Madness optimally, I feel I should outline the typical approach to that as well, even though the central topic of this post is the (symbolic) computation of consistent guesses for hard Mastermind instances.

Instead of just picking some consistent candidate, one can also analyse how promising the various candidates are and pick the best one – depending on some heuristic or notion of quality.

The first and probably most popular method for picking a “good” candidate was suggested by Knuth in 1977. The general idea is that the best guess should minimise the set of consistent candidates – no matter the feedback. By assuming that the least helpful feedback is returned for each guess, and picking the (not necessarily consistent) guess that yields the smallest set of consistent candidates in this worst case, Knuth effectively implements a “shallow” Minimax rule. While a tree-like illustration is most helpful for Minimax on longer games, here I find the table-oriented argument from the overview to be more apt.

Consider committing some candidate $c$ and, in turn, receiving some feedback $f$. The following table illustrates the possible outcomes depending on the chosen $c$ and the codemaker’s feedback $f$ for the very first guess:

Feedback $\backslash$ Guess   $(0,0,0,0)$   $(0,0,0,1)$   $(0,0,1,0)$   $(0,0,1,1)$   $\dots$ 
$(0,0)$ $625$ $256$ $256$ $256$ $\dots$
$(0,1)$ $0$ $308$ $308$ $256$ $\dots$
$(0,2)$ $0$ $61$ $61$ $96$ $\dots$
$(0,3)$ $0$ $0$ $0$ $16$ $\dots$
$(0,4)$ $0$ $0$ $0$ $1$ $\dots$
$(1,0)$ $500$ $317$ $317$ $256$ $\dots$
$(1,1)$ $0$ $156$ $156$ $208$ $\dots$
$(1,2)$ $0$ $27$ $27$ $36$ $\dots$
$(1,3)$ $0$ $0$ $0$ $0$ $\dots$
$(2,0)$ $150$ $123$ $123$ $114$ $\dots$
$(2,1)$ $0$ $24$ $24$ $32$ $\dots$
$(2,2)$ $0$ $3$ $3$ $4$ $\dots$
$(3,0)$ $20$ $20$ $20$ $20$ $\dots$
$(4,0)$ $1$ $1$ $1$ $1$ $\dots$

For each possible guess, there is a column which states the size of the resulting set of consistent candidates for each possible feedback. To make this clearer, consider picking $c=(0,0,0,0)$ for the first guess. The worst that could happen next would be receiving the feedback $(0,0)$, since this would still leave us with a set of $625$ candidates that are consistent with the feedback. In contrast, the best feedback would clearly be $(4,0)$, as it would imply that our guess was equal to the secret code and we just won.
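
Each column can be reproduced by partitioning all secrets according to the feedback they would generate for the chosen guess. A hypothetical snippet, reusing the helpers from mastermind.py:

# Distribution of feedbacks over all secrets for the guess c = (0, 0, 0, 0)
from collections import Counter
column = Counter(eval_guess(s, (0, 0, 0, 0)) for s in all_secrets())
print(column[Feedback(0, 0)], column[Feedback(1, 0)], column[Feedback(4, 0)])  # 625 500 1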

To develop the idea further, finding the best guess amounts to constructing such a table, identifying the worst case of each column, and picking the column/candidate with the smallest worst case. According to the provided table, a best initial guess would be $(0,0,1,1)$, which guarantees to bring the set of candidates down to a size of at most $256$. ExplicitMinimaxAi implements this method to determine the best guess:

class ExplicitMinimaxAi(Player):
    def __init__(self):
        self.candidates = list(all_secrets())
        self.not_candidates = list()
        self.guesses = list()

    def initial_guess(self):
        # Pick the (not necessarily consistent) candidate with the best worst feedback
        # w.r.t. the size of the set of consistent candidates (smaller = better)
        not_tried_secrets = (c for c in chain(self.candidates, self.not_candidates)
                             if c not in self.guesses)

        best_c = None
        best_size = math.inf
        for c in not_tried_secrets:
            # Partition consistent candidates by feedback
            feedback_occs = {f: 0 for f in all_feedbacks()}
            for s in self.candidates:
                f = eval_guess(s, c)
                feedback_occs[f] += 1

            # Find worst feedback for this c
            max_size = max(feedback_occs.values())

            # Identify the c that has the best worst feedback
            if max_size < best_size:
                best_size = max_size
                best_c = c

        self.guesses.append(best_c)
        return self.guesses[-1]

    def make_guess(self, feedback):
        self.not_candidates.extend(c for c in self.candidates if not is_consistent(c, self.guesses[-1], feedback))
        self.candidates = [c for c in self.candidates if is_consistent(c, self.guesses[-1], feedback)]
        return self.initial_guess()
mastermind.py

Note that the best move may even be to gather further information and purposefully make a guess that is inconsistent with the received feedback. However, if there are both a consistent and an inconsistent candidate that are equally good, one should of course prefer the consistent one – it might match the secret after all. That’s why we also keep the inconsistent candidates (self.not_candidates) around.

The Minimax-based approach solves any classic Mastermind instance within five guesses, and needs $4.476$ rounds on average. Unfortunately, the runtime complexity of this and similar methods that optimise rigorously is quadratic in the number of candidates. Considering that even iterating over all possible secrets of a hard Mastermind instance takes too long for a game AI, aiming for optimal guesses is quite a stretch.

SAT-based Approach

So far we’ve seen that the Minimax-based approach is well-suited for Mastermind instances with small numbers of possible secrets, and that the lazy enumeration of consistent candidates is not optimal, but cheaper to compute, and on average not much worse. However, despite the small memory footprint, the core problem is that enumerative methods don’t scale to larger Mastermind instances.

It would be nice if we could at least manage to compute consistent candidates for the hard instances, but even this problem is known to be NP-complete. Luckily, we can reduce it to another well-studied NP-complete problem – the Boolean satisfiability problem (SAT) – and leverage the highly tuned solvers that exist for it.

All we have to do is devise a logical characterisation of the candidates that are consistent with the feedback received so far and use a SAT solver to acquire such a candidate. Let’s see what kind of variables and constraints we need for this.

Encoding the Symbol Mapping

Since we want to guess some secret $s$, we first of all need variables that indicate which symbol occurs at which position. To this end, we introduce the Boolean variables $$ s_{\mathit{pos},\mathit{sym}} $$ where $\mathit{pos}\in \{0,\dots,|s|-1\}, \mathit{sym}\in\Sigma$.

The intended semantics is that an assignment $s_{1,2}\mapsto\mathit{true}$ denotes symbol $2$ to be at index $1$ of $s$. However, since there are no constraints yet, a solver might just as well choose $s_{1,2}$ and $s_{1,3}$ to be $\mathit{true}$, which suggests both $2$ and $3$ to be at index $1$ of the secret.

To characterise that each position of the tuple $s$ holds exactly one symbol, we need to add a corresponding cardinality constraint for each position. Assuming the classic Mastermind setting, i.e. $|s|=4$ and $|\Sigma|=6$, we add the constraints:

$$ \begin{aligned} s_{0,0} + s_{0,1} + s_{0,2} + s_{0,3} + s_{0,4} + s_{0,5} &= 1,\\ s_{1,0} + s_{1,1} + s_{1,2} + s_{1,3} + s_{1,4} + s_{1,5} &= 1,\\ s_{2,0} + s_{2,1} + s_{2,2} + s_{2,3} + s_{2,4} + s_{2,5} &= 1,\\ s_{3,0} + s_{3,1} + s_{3,2} + s_{3,3} + s_{3,4} + s_{3,5} &= 1\\ \end{aligned} $$

If we were to query the satisfiability of our SAT instance now, we would get a satisfying assignment with four of the $s_{\mathit{pos},\mathit{sym}}$ variables set to $\mathit{true}$ – one for each $\mathit{pos}$.

Using the Python bindings for Z3 – a popular solver for SAT and SMT instances – the initialisation of SymbolicConsistentAi creates the discussed variables and constraints:

class SymbolicConsistentAi(Player):
    def __init__(self):
        self.last_guess = None
        self.solver = SolverFor("QF_FD")  # Solver for quantifier-free constraints over finite domains

        # Possible values at secret positions, i.e. s_0_2 denotes position 0 of secret having symbol 2
        self.secret_vars = [[Bool("s_{}_{}".format(pos, sym)) for sym in range(NUM_SYMBOLS)]
                            for pos in range(NUM_POSITIONS)]

        # Exactly one symbol on each position
        for pos in range(NUM_POSITIONS):
            coeffs = [(sym, 1) for sym in self.secret_vars[pos]]
            self.solver.add(PbEq(coeffs, 1))
mastermind.py
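
At this point the solver holds exactly the four cardinality constraints from above, so any model assigns one symbol to each position. A standalone sketch (hypothetical, merely mirroring the initialisation) makes this concrete:

# Hypothetical standalone check of the exactly-one constraints (4 positions, 6 symbols)
from z3 import Bool, PbEq, SolverFor, is_true, sat

s_vars = [[Bool("s_{}_{}".format(pos, sym)) for sym in range(6)] for pos in range(4)]
solver = SolverFor("QF_FD")
for pos in range(4):
    solver.add(PbEq([(v, 1) for v in s_vars[pos]], 1))

assert solver.check() == sat
model = solver.model()
# Prints some 4-tuple over {0, ..., 5}, e.g. (0, 0, 0, 0)
print(tuple(sym for pos in range(4) for sym in range(6) if is_true(model.eval(s_vars[pos][sym]))))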

Encoding Full Matches

Initially, there is no feedback to constrain the set of consistent candidates, and picking a random candidate is unproblematic. Any subsequent guess, however, should at least be consistent with the feedback accumulated up to that point. To characterise such candidates, we next consider which constraints some feedback $f$ for a committed candidate $c$ imposes.

Consider the first part of the feedback: the number of full matches. This is clearly another cardinality constraint, but it does not constrain the $s_{\mathit{pos},\mathit{sym}}$ variables directly. Instead, we need auxiliary variables $$ \mathit{fm}_\mathit{pos} $$ where $\mathit{pos}\in \{0,\dots,|s|-1\}$, to identify the positions in which we have a full match and to count them. These are the variables for which we create the constraint. Assuming the classic Mastermind setting and some feedback $f=(2,1)$, we simply add $$ \mathit{fm_0} + \mathit{fm_1} + \mathit{fm_2} + \mathit{fm_3} = 2 $$

Although the $\mathit{fm}_\mathit{pos}$ variables are now constrained, they are not yet related to any $s_{\mathit{pos},\mathit{sym}}$. Intuitively, a variable $\mathit{fm}_\mathit{pos}$ should be $\mathit{true}$ if and only if the symbol at index $\mathit{pos}$ of the committed candidate equals the symbol at index $\mathit{pos}$ of the secret – that’s what a full match is. With this in mind, and assuming the candidate was $c=(3,2,0,1)$, the following constraints establish the missing link:

$$ \begin{aligned} s_{0,3} &= \mathit{fm}_0,\\ s_{1,2} &= \mathit{fm}_1,\\ s_{2,0} &= \mathit{fm}_2,\\ s_{3,1} &= \mathit{fm}_3 \end{aligned} $$

The first part of make_guess mirrors the discussed idea, but introduces fresh variables $\mathit{fm}_\mathit{pos}$ in every iteration since the feedback of different rounds is unrelated:

def make_guess(self, feedback):
    # Fresh "full match" variables for each position
    fm = [FreshBool("fm_{}".format(pos)) for pos in range(NUM_POSITIONS)]

    # Full matches must be consistent with feedback
    fm_coeffs = [(fm_pos, 1) for fm_pos in fm]
    self.solver.add(PbEq(fm_coeffs, feedback.full_matches))

    # Possible full matches
    for pos in range(NUM_POSITIONS):
        self.solver.add(self.secret_vars[pos][self.last_guess[pos]] == fm[pos])
mastermind.py

Encoding Symbol Matches

What remains is the creation of constraints which enforce consistency w.r.t. the second part of the feedback: the number of symbol matches. Unlike the full match flag that we introduced for every position, a symbol match relates two positions – one from the guess with one from the secret. To this end, we introduce the Boolean variables $$ \mathit{sm}_{\mathit{src},\mathit{dst}} $$ where $\mathit{src},\mathit{dst}\in \{0,\dots,|s|-1\}$. A true literal $\mathit{sm}_{2,3}$ shall denote that the symbol at index $2$ of the guess can be moved to index $3$ to achieve a full match.

With this in mind, we can easily construct a corresponding cardinality constraint for some feedback $f=(2,1)$:

$$ \mathit{sm}_{0,1} +\mathit{sm}_{0,2} + \dots + \mathit{sm}_{3,2} = 1 $$

Note that no constraint should have a variable of the form $\mathit{sm}_{\mathit{pos},\mathit{pos}}$, as such a case would actually describe a full match – not a symbol match.

To characterise the intended relation between the $\mathit{sm}_{\mathit{src},\mathit{dst}}$ variables and the $\mathit{s}_{\mathit{pos},\mathit{sym}}$ variables, we define a matching scheme that takes existing full matches into consideration and prioritises low indices (a worked example follows the list). A variable $\mathit{sm}_{\mathit{src},\mathit{dst}}$ should be $\mathit{true}$ if and only if all of the following conditions are met:

  • There is no full match at position $\mathit{src}$: $$ \neg \mathit{fm}_\mathit{src} $$
  • The symbol $\mathit{sym}$ at index $\mathit{src}$ of the guess is the same as the symbol at index $\mathit{dst}$ of the secret: $$ s_{\mathit{dst},\mathit{sym}} $$
  • There is no full match at position $\mathit{dst}$: $$ \neg \mathit{fm}_\mathit{dst} $$
  • No smaller (higher priority) position has matched the symbol at $\mathit{dst}$: $$ \bigwedge_{\substack{0\leq\mathit{prev}<\mathit{src}\\ \mathit{prev} \neq \mathit{dst}}} \neg \mathit{sm}_{\mathit{prev},\mathit{dst}} $$
  • The symbol at index $\mathit{src}$ has not matched any symbol on a smaller (higher priority) position: $$ \bigwedge_{\substack{0\leq\mathit{prev}<\mathit{dst}\\ \mathit{prev} \neq \mathit{src}}} \neg \mathit{sm}_{\mathit{src},\mathit{prev}} $$
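
To see the scheme in action, take the round-3 guess $c=(3,2,0,1)$ from the playthrough against the secret $s=(2,0,0,1)$: positions $2$ and $3$ are full matches, so only $\mathit{src},\mathit{dst}\in\{0,1\}$ remain for symbol matches. The symbol $3$ at index $0$ of the guess occurs nowhere in the secret, so all $\mathit{sm}_{0,\mathit{dst}}$ are $\mathit{false}$, whereas $\mathit{sm}_{1,0}$ is $\mathit{true}$ because the guess’s symbol $2$ at index $1$ also occurs at index $0$ of the secret. This yields exactly one symbol match – in line with the feedback $(2,1)$ from the playthrough.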

The rest of the make_guess method – started in the previous section – implements the proposed encoding:

# Fresh "symbol match" variables for each pair of positions
sm = [[FreshBool("sm_{}_{}".format(src_pos, dst_pos)) for dst_pos in range(NUM_POSITIONS)]
      for src_pos in range(NUM_POSITIONS)]

# Symbol matches must be consistent with feedback
sm_coeffs = [(sm[src_pos][dst_pos], 1) for (src_pos, dst_pos) in
             permutations(range(NUM_POSITIONS), 2)]
self.solver.add(PbEq(sm_coeffs, feedback.symbol_matches))

# Possible symbol matches
for src_pos in range(NUM_POSITIONS):
    match_exprs = list()
    for dst_pos in range(NUM_POSITIONS):
        if src_pos == dst_pos:
            continue

        lhs = list()
        lhs.append(Not(fm[src_pos]))
        lhs.append(self.secret_vars[dst_pos][self.last_guess[src_pos]])
        lhs.append(Not(fm[dst_pos]))
        # No previous position in the guess has a symbol match with the current dst_pos
        lhs.extend([Not(sm[prev_pos][dst_pos]) for prev_pos in range(src_pos) if dst_pos != prev_pos])
        # src_pos has no symbol match with a previous dst_pos
        lhs.extend([Not(sm[src_pos][prev_pos]) for prev_pos in range(dst_pos) if src_pos != prev_pos])

        rhs = sm[src_pos][dst_pos]
        match_exprs.append(And(lhs) == rhs)
    self.solver.add(And(match_exprs))

return self.initial_guess()
mastermind.py

Note that we treat the $\mathit{sm}_{\mathit{src},\mathit{dst}}$ variables similarly to the feedback-specific $\mathit{fm}_\mathit{pos}$ variables: we introduce fresh ones for every feedback.

Computing a Guess

We now have all the constraints together for the $s_{\mathit{pos},\mathit{sym}}$ variables to characterise the code sequences that are consistent with the received feedback. To actually compute a guess, we just need to find an assignment that satisfies these constraints. Luckily, we can delegate this task to a SAT solver.

To construct the guess, we identify the $s_{\mathit{pos},\mathit{sym}}$ variables that are set to $\mathit{true}$:

def initial_guess(self):
    status = self.solver.check()
    assert status == sat  # the accumulated constraints are always satisfiable during a valid game

    # Extract guess from solution to constraints
    m = self.solver.model()
    guess = list()
    for pos in range(NUM_POSITIONS):
        for sym in range(NUM_SYMBOLS):
            if is_true(m.eval(self.secret_vars[pos][sym])):
                guess.append(sym)
                break  # exactly one symbol per position is true
    assert (len(guess) == NUM_POSITIONS)

    self.last_guess = tuple(guess)
    return self.last_guess
mastermind.py

That’s all there is to it. Feel free to experiment with the provided implementation of the discussed methods. Using the SymbolicConsistentAi you can now easily play Mastermind games with huge numbers of possible secret codes. On average, it takes only $10.51$ guesses to find the secret among the $2\,821\,109\,907\,456$ possible combinations of the six-colour variant with codes of length $16$ (see the experiments below).
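
To reproduce such a run yourself, something along the following lines should do – a hypothetical driver, assuming NUM_POSITIONS and NUM_SYMBOLS in mastermind.py are set to $16$ and $6$:

# Hypothetical driver for a hard instance; assumes NUM_POSITIONS = 16, NUM_SYMBOLS = 6
import random
secret = tuple(random.randrange(NUM_SYMBOLS) for _ in range(NUM_POSITIONS))
_, num_guesses = Game(secret, SymbolicConsistentAi()).run()
print("cracked a 16-peg code in {} guesses".format(num_guesses))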

Experiments

For reference, I’ve run some experiments to measure the performance on an i7-7700K CPU, using Python 3.8 and Z3 v4.8.7. The following table provides both a qualitative and quantitative comparison of the lazy enumeration AI and the SAT-based AI, with measurement cells formatted as (Explicit / Symbolic):

$|s|,|\Sigma|$   Games Played   Avg. Turns   Max. Turns   Avg. Game Time 
4, 6 all 1 296 5.765 / 4.659 9 / 7 2ms / 17ms
5, 6 all 7 776 6.218 / 5.100 11 / 8 11ms / 34ms
6, 6 all 46 656 6.735 / 5.572 12 / 9 71ms / 60ms
7, 6 10 000 7.301 / 6.072 11 / 9 466ms / 101ms
8, 6 1 000 7.874 / 6.560 12 / 10 3s / 161ms
9, 6 10 000 ? / 7.237 ? / 11 ? / 263ms
10, 6 10 000 ? / 7.901 ? / 12 ? / 413ms
16, 6 1 000 ? / 10.51 ? / 15 ? / 6.6s

As is to be expected, lazy enumeration works well for Mastermind variants with small numbers of possible secrets but becomes unusably slow on harder instances. Such measurements were aborted and are denoted by ? in the table. Furthermore, once the average game times went into the seconds, I switched from evaluating the performance for each possible secret to a fixed number of secrets that are evenly spaced over all possible combinations. Accordingly, for those non-exhaustive runs, the measured maximal number of turns is not a guaranteed worst case for all instances.

Although both approaches implement the same idea, i.e. make guesses that are consistent with the feedback, the qualitative results differ. This is because our implementation of Shapiro’s idea picks the smallest consistent candidate, whereas the SAT-based approach picks whichever consistent candidate the solver happens to find. The latter appears to be advantageous.

Do Try This at Home!

As usual, there is a lot of room for further improvement and experimentation. Here are some points you might want to explore on your own (easiest first):

  • There are many low-hanging fruits regarding performance improvements. Python is convenient for prototyping but its interpreted nature slows down the execution. Try alternative Python implementations like PyPy to get a significant speedup of the explicit approaches for free, or port the approaches to your favourite compiled language. Furthermore, even though Z3 has tons of features and is a joy to use, I expect dedicated SAT solvers like Lingeling to be even faster at solving the resulting constraints.
  • Try to devise a SAT/SMT-based solver for a Mastermind variation with different forms of feedback. For example, Number Mastermind is played with numbers and the secret’s sum is an additional clue.
  • The proposed logical characterisation is just what seemed most straightforward to me. Try reformulating the encoding to achieve even better solver performance – I might have overlooked a thing or two.
  • Investigate whether it is feasible to implement the Minimax-style approach (or some other heuristic) symbolically. Depending on the chosen approach, it might help to look into MAX-SAT or (approximate) model counting.