Can Solitaire Be Solved by Algorithms or AI?

Learn whether solitaire can be solved by algorithms, how AI solvers work, and what they reveal about strategy.

Can Solitaire Be Solved With Algorithms? The Complete Answer

Yes — and no, depending on what "solved" means and which variant is in question. Computational solvers have exhaustively analysed every deal in FreeCell's standard numbered sets: of the original 32,000 Microsoft deals, exactly one (deal #11,982) is unwinnable, and of the extended one-million-deal set, only eight are. The same solvers have established the winnability floors described throughout this strategy cluster: the proportion of Klondike, Spider, and Forty Thieves deals that are intrinsically solvable has been estimated with high confidence through automated exhaustive search across millions of randomly generated deals. In that sense, solitaire has been extensively solved algorithmically — the mathematics of which deals are winnable has been largely established, and automated solvers can find winning move sequences for most winnable deals they encounter.

In another sense, solitaire has not been fully solved algorithmically, and the unsolved aspects are genuinely hard computational problems. Klondike's winnability analysis is not exact — it remains a probability range (approximately 79–91% winnable) rather than a precise figure — because the face-down card uncertainty in Klondike creates a hidden information problem that makes complete exhaustive analysis computationally intractable at scale. Even in fully visible variants like FreeCell, determining whether a specific position is winnable is computationally hard: the generalised version of the game has been shown to be NP-hard, and no polynomial-time algorithm is known for deciding winnability. The existence of efficient algorithms that solve all solitaire variants perfectly and in reasonable time is not guaranteed by any current mathematical result, and several variants remain open research problems in computer science.

For the practical solitaire player, the relationship between algorithms and solitaire strategy is most useful in two directions: understanding what algorithmic solvers have established about deal winnability (which directly informs the win rate expectations in our win odds guide), and understanding which algorithmic techniques map onto human strategic thinking — because the most effective human solitaire strategy turns out to apply a simplified version of the same heuristic search that automated solvers use.

What Is Solitaire and How Algorithms Approach It

From an algorithmic perspective, solitaire is a single-player search problem: starting from an initial state (the shuffled deal), the solver must find a sequence of legal moves that transforms the initial state into the win state (all cards on foundations in correct order). The state space — the total number of distinct board positions the game can pass through — is astronomically large. A single Klondike game has a state space estimated in the billions to trillions of distinct positions when all possible move sequences from all possible initial deals are considered. Forty Thieves, with its 80-card two-deck tableau, has a proportionally larger state space that makes exhaustive search computationally expensive even for modern hardware.
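This framing can be made concrete with a deliberately tiny model. The sketch below is an illustrative assumption, not any real variant's rules: cards are bare ranks, suits are ignored, and the only legal move is promoting the next needed rank from a tableau pile to a single foundation.

```python
from dataclasses import dataclass
from typing import Iterator, Tuple

@dataclass(frozen=True)          # frozen -> hashable, so states can be memoised
class State:
    tableau: Tuple[Tuple[int, ...], ...]  # piles of card ranks (suits omitted)
    foundation: int                        # how many cards are already promoted

def legal_moves(state: State) -> Iterator[State]:
    """Yield successor states; a real variant would enforce its own build rules."""
    for i, pile in enumerate(state.tableau):
        if pile and pile[-1] == state.foundation + 1:
            tableau = list(state.tableau)
            tableau[i] = pile[:-1]         # promote the next needed rank
            yield State(tuple(tableau), state.foundation + 1)

def is_win(state: State, total_cards: int) -> bool:
    return state.foundation == total_cards
```

Everything a solver needs is here: a hashable state, a move generator, and a goal test. The three algorithmic approaches differ only in the order in which they visit these states.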

The core algorithmic approaches applied to solitaire search are: exhaustive depth-first search, which explores all possible move sequences systematically until either a winning path is found or all paths are confirmed losing; heuristic best-first search, which prioritises moves based on estimated position quality scores and explores promising paths first; and Monte Carlo simulation, which runs thousands of random playouts from the current position and uses the proportion of playouts that reach the win state as an estimate of the position's winnability probability. Each approach has different tradeoffs between computational cost and solution quality, and each maps onto a different level of human strategic thinking.

Key Rules of Algorithm Behaviour: What Different Solvers Achieve

Exhaustive depth-first search: exact answers, high computational cost. Depth-first search (DFS) explores every possible move sequence from the initial position, backtracking when dead ends are reached, until either a winning path is confirmed or all paths are confirmed to lead to losing positions. For FreeCell, where the complete information makes the state space fully enumerable, DFS can conclusively determine whether any given deal is winnable and find a winning sequence if one exists. The unwinnable FreeCell deals — deal #11,982 in the original 32,000-deal Microsoft set, plus a handful more (including the well-known #146,692) in the extended million-deal set — were identified by DFS solvers that confirmed, by exhaustive exploration, that no legal move sequence from those starting positions leads to a win. For Klondike, DFS faces the hidden information problem: face-down cards could be any of several possible cards, and the solver must either assume specific hidden card values (which produces conclusions conditional on those assumptions) or branch over all possible hidden card assignments (which multiplies the search space by the number of possible hidden card configurations, making complete exhaustive search intractable for large samples). This is why Klondike's winnability rate is a range rather than a precise figure — it is the result of probabilistic sampling across many random deals rather than complete exhaustive analysis of all possible deals.
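A minimal sketch of that procedure, under the assumption that states are hashable and that a `legal_moves` generator and `is_win` goal test are supplied (the `seen` set acts as a simple transposition table so proven-dead positions are never re-explored):

```python
def solve_dfs(state, legal_moves, is_win, seen=None):
    """Return a winning sequence of states from `state`, or None if none exists."""
    if seen is None:
        seen = set()                 # states already explored (or proven dead)
    if is_win(state):
        return [state]
    seen.add(state)
    for nxt in legal_moves(state):
        if nxt in seen:
            continue                 # skip repeats: avoids cycles and rework
        path = solve_dfs(nxt, legal_moves, is_win, seen)
        if path is not None:
            return [state] + path
    return None                      # every continuation exhausted: a dead end
```

On a toy game whose states are integers, `solve_dfs(0, lambda n: [n + 1] if n < 3 else [], lambda n: n == 3)` returns the winning path `[0, 1, 2, 3]`, while an unreachable goal returns `None` after exhausting every continuation — the same style of confirmation that identified FreeCell's unwinnable deals.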

Heuristic best-first search: fast near-optimal solutions, no completeness guarantee. Best-first search uses an evaluation function — a heuristic score that estimates the quality of a board position — to prioritise which states to explore first. A high-quality heuristic for solitaire might score positions based on the number of foundation cards, the number of uncovered face-down cards, the number of legal moves available, and the degree of suit consolidation in the tableau sequences. Best-first search with a good heuristic finds winning solutions dramatically faster than DFS on winnable deals, but it cannot guarantee that it has found the optimal (shortest) solution, and it may fail to find any solution on deals where the winning path requires temporarily moving away from what the heuristic considers a good position. The heuristic best-first approach is most directly analogous to human expert strategy: both use a position quality assessment function (the heuristic) to guide move selection, both explore promising paths first, and both may miss solutions that require counter-intuitive intermediate positions that look worse before they look better.
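The same interface supports a best-first sketch. Here `score` is an assumed stand-in for the position-quality heuristic described above, `limit` caps the number of expansions, and the tie-breaking counter keeps the heap from ever comparing raw state objects:

```python
import heapq
import itertools

def solve_best_first(start, legal_moves, is_win, score, limit=100_000):
    """Expand the highest-scoring frontier state first; return a path or None."""
    counter = itertools.count()                 # tie-breaker for equal scores
    frontier = [(-score(start), next(counter), start, [start])]
    seen = {start}
    while frontier and limit > 0:
        limit -= 1
        _, _, state, path = heapq.heappop(frontier)
        if is_win(state):
            return path
        for nxt in legal_moves(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(
                    frontier, (-score(nxt), next(counter), nxt, path + [nxt])
                )
    return None            # budget exhausted or no reachable win: no guarantee
```

Note the two caveats from the paragraph above made literal: the expansion budget means the search can give up without proving a deal unwinnable, and the greedy ordering can starve paths that must look worse before they look better.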

Monte Carlo simulation: probabilistic estimates, suitable for hidden information. Monte Carlo methods run large numbers of random playouts from the current position — making legal moves at random until the game ends in a win or a loss — and use the win proportion across all playouts as a probability estimate of the position's winnability. Monte Carlo is particularly suitable for Klondike and other hidden information variants because it naturally handles the uncertainty: each playout randomly assigns hidden card values consistent with the currently known distribution, explores a random move sequence from there, and contributes its win/loss result to the estimate. The resulting win probability estimate is not an exact answer — it is a sample average with statistical uncertainty — but it is computationally tractable and produces calibrated confidence intervals that improve as more playouts are added. Monte Carlo simulation is the computational analogue of the human probabilistic assessment described in our probability strategy guide: both estimate position winnability from a sample of possible outcomes rather than from complete enumeration.
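The playout loop itself is short. This sketch assumes the same `legal_moves`/`is_win` interface; a real Klondike estimator would also resample the hidden-card assignment at the start of each playout, which is omitted here. The fixed seed keeps the estimate reproducible:

```python
import random

def estimate_win_probability(state, legal_moves, is_win,
                             playouts=1000, max_depth=200, seed=0):
    """Fraction of uniformly random playouts from `state` that reach a win."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(playouts):
        s = state
        for _ in range(max_depth):
            if is_win(s):
                wins += 1
                break
            moves = list(legal_moves(s))
            if not moves:
                break                    # stuck position: count as a loss
            s = rng.choice(moves)        # uniformly random legal move
    return wins / playouts
```

The returned value is a sample mean, so its statistical uncertainty shrinks roughly as 1/sqrt(playouts): quadrupling the playout count halves the width of the confidence interval.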

Strategy Tips: What Algorithm Research Teaches Human Players

The heuristic search model explains why the forced scan sequence works. The forced scan sequence — Foundation → Uncover → Pure build → Empty column → Stock last — is a human-executable heuristic that assigns priority weights to move types in the same way an algorithmic heuristic assigns priority scores to board positions. Foundation moves are prioritised because they advance the win condition directly; uncovering moves are prioritised because they expand the state space of future winnable positions; stock draws are deprioritised because they consume finite resources whose value is highest when the tableau is fully evaluated. This priority ordering is not arbitrary — it corresponds closely to the move-priority orderings that empirically trained solitaire heuristics assign to the same move types. The forced scan sequence is, in effect, a hand-executable approximation of the best-first heuristic that automated solvers use.
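Expressed as code, the forced scan sequence is nothing more than a fixed priority table over move categories. The category labels and the dictionary-based move representation here are illustrative assumptions, not part of any published solver:

```python
# Lower number = scanned first, per the forced scan sequence.
SCAN_PRIORITY = {
    "foundation": 0,    # advances the win condition directly
    "uncover": 1,       # reveals a face-down card
    "pure_build": 2,    # same-suit tableau consolidation
    "empty_column": 3,  # creates or fills an empty column
    "stock": 4,         # draw from stock only after the tableau is evaluated
}

def best_move(candidate_moves):
    """Pick the candidate whose category comes earliest in the forced scan."""
    return min(candidate_moves, key=lambda m: SCAN_PRIORITY[m["category"]])
```

Given candidates in the categories `stock`, `pure_build`, and `uncover`, `best_move` selects the `uncover` move, exactly as the scan prescribes.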

The undo function enables human hypothesis testing analogous to algorithmic branching. Automated solvers explore multiple branches of the move tree simultaneously, backtracking from dead ends to try alternative paths. Human players cannot explore branches simultaneously, but the undo function in online solitaire provides an approximation: the player can make a move, observe its consequences several moves later, and undo back to the branch point to try an alternative if the first path proves unpromising. This speculative branching — making a move to test a hypothesis about what it enables, then undoing if the hypothesis is disconfirmed — is the human-playable equivalent of the backtracking step in DFS. Players who use undo systematically for hypothesis testing rather than only for mistake correction are applying a human-scale version of the exhaustive search that makes DFS solvers effective on complex positions.
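A minimal model of this discipline is a snapshot stack: record the position before a speculative move, and pop back to the branch point if the hypothesis fails. This is an illustrative abstraction, not any particular app's undo implementation:

```python
class UndoableGame:
    """Speculative play with manual backtracking, mirroring DFS's undo step."""

    def __init__(self, state):
        self.state = state
        self.history = []                     # stack of branch-point snapshots

    def play(self, move_fn):
        self.history.append(self.state)       # remember where we branched
        self.state = move_fn(self.state)      # commit the speculative move

    def undo(self):
        self.state = self.history.pop()       # hypothesis disconfirmed: back up
```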

Circular dependency identification is the human-executable version of dead-end detection. When a DFS solver reaches a position from which no winning path exists, it detects this by exhausting all possible continuations and finding that none lead to a win — a computationally expensive process. Human players can detect a subset of dead-end positions much more quickly by identifying circular dependencies: configurations where card A cannot move before card B and card B cannot move before card A, with no external resolution available. This circular dependency check is a human-executable dead-end detection heuristic that, when positive, provides the same information as the algorithmic dead-end detection without requiring exhaustive search. Developing the circular dependency identification habit is the primary practical benefit of understanding how algorithmic solvers work — it converts an expensive computational process into a fast human-executable check that produces the same resignation decision more efficiently.
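That check maps directly onto cycle detection in a blocking graph. In this sketch, `blocks[a]` is the set of cards that must move before card `a` can; encoding a real tableau into this graph is assumed rather than shown:

```python
def has_circular_dependency(blocks):
    """True if the blocking graph contains a cycle (three-colour DFS check)."""
    WHITE, GREY, BLACK = 0, 1, 2                # unvisited / in progress / done
    nodes = set(blocks) | {d for deps in blocks.values() for d in deps}
    colour = dict.fromkeys(nodes, WHITE)

    def visit(node):
        colour[node] = GREY
        for dep in blocks.get(node, ()):
            if colour[dep] == GREY:
                return True                     # back edge: a cycle exists
            if colour[dep] == WHITE and visit(dep):
                return True
        colour[node] = BLACK
        return False

    return any(colour[n] == WHITE and visit(n) for n in nodes)
```

For example, `has_circular_dependency({'A': {'B'}, 'B': {'A'}})` is `True`: card A waits on B and B waits on A, so the position is dead without any exhaustive search.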

FreeCell's complete information makes it the variant where algorithmic and human strategy converge most closely. In FreeCell, where all cards are face-up from the first move, human players can in principle compute the same complete move tree that an automated solver explores. The practical difference is computational capacity: a solver can explore millions of positions per second while a human player can evaluate perhaps five to ten positions per minute. But the structure of the problem — complete information, deterministic outcomes, fully enumerable state space — is identical between human and algorithmic play. This is why FreeCell is the best variant for developing the deepest strategic thinking: its complete information environment gives human players access to the full position quality assessment that heuristic solvers use, enabling the conditional move sequencing and full expected value evaluation described at the expert level in the probability strategy guide. Playing FreeCell with systematic undo-based hypothesis testing is the closest a human player comes to running an algorithmic solver manually.

Common Mistakes Players Make About Algorithms and Solitaire

Believing that the existence of algorithmic solvers means every deal is solvable. Algorithmic solvers find winning paths on winnable deals — they do not create winning paths where none exist. The rare unwinnable FreeCell deals were identified by algorithmic solvers precisely because the solvers exhausted all possible paths and found none leading to a win. In variants with higher unwinnable rates (Forty Thieves at 40–60%, Spider 4-Suit at 45–60%), algorithmic solvers confirm that a large proportion of random deals have no winning path, which means no algorithm — however sophisticated — can win those games. The existence of powerful solvers does not change the structure of the deal space; it only allows us to analyse that structure more precisely.

Assuming that a solver's winning path is the only winning path. Most winnable solitaire deals have multiple winning paths — multiple distinct move sequences that all lead to the win condition. A solver that finds one winning path has not found the only path; it has found one path out of potentially hundreds or thousands. This matters for human play because the solver's specific path may be highly non-intuitive — requiring a move that looks actively harmful several steps before its benefit becomes apparent — while other winning paths for the same deal are more accessible to human pattern recognition. The existence of a solver-found winning path is confirmation that a deal is winnable; it is not necessarily a guide to how a human player should approach the same deal most efficiently.

Treating algorithmic win rates as applicable to casual play. When solvers estimate that roughly 79–91% of Klondike deals are winnable under optimal play, that figure assumes perfect information handling, optimal move selection at every decision point, and no human cognitive limitations. Casual human play achieves 15–25% in Klondike and strategic play achieves 35–45%, but the algorithmic ceiling is approachable only by a solver with access to the full state space. The correct interpretation of algorithmic win rate data for human players is not as a personal benchmark but as a structural reference: it tells the player how much of the gap between their current win rate and the theoretical ceiling is caused by deal mathematics versus strategy quality. For the complete framework on using this structural data, see our deal quality guide.

Best Free Solitaire Games for Understanding the Algorithm Connection

FreeCell is the optimal game for experiencing the algorithm-strategy connection because its complete information makes the relationship between systematic search and correct play directly observable. A FreeCell player who methodically traces the move tree — evaluating all legal moves, selecting the highest-priority path, using undo to test alternative branches, and identifying circular dependencies that confirm dead ends — is performing a simplified version of the heuristic best-first search that automated solvers use. Spider Solitaire introduces the hidden information dimension that makes full algorithmic analysis intractable and that requires the probabilistic estimation approach of Monte Carlo methods — translated into human play as the probability-weighted move evaluation described in the probability strategy guide. Together, FreeCell (complete information, deterministic, heuristic search) and Spider (partial information, probabilistic, Monte Carlo estimation) cover the two main algorithmic paradigms that solitaire AI research explores, and developing strong play in both provides the deepest understanding of how algorithmic and human solitaire strategy relate.

Frequently Asked Questions

What is the closest human-executable approximation of an automated solver?

The human-executable algorithm that most closely approximates what automated solvers do combines three components: the forced scan sequence as a move-priority heuristic (Foundation → Uncover → Pure build → Empty column → Stock), undo-based hypothesis testing as a backtracking mechanism, and circular dependency identification as a dead-end detection shortcut. Together these three components implement a simplified heuristic best-first search with backtracking and partial dead-end detection — the same three components that define the most effective automated solvers, scaled to human cognitive capacity. A player who develops all three as consistent habits is executing the closest human approximation of the algorithm that maximises win rate across the full deal distribution.

Which solitaire variants are easiest and hardest for algorithms to solve?

FreeCell is easiest for algorithms for the same reasons it is most tractable for human strategic analysis: complete information eliminates the hidden card branching problem, near-100% winnability means almost all positions have winning paths to find, and four free cells plus eight columns provide enough staging flexibility that the solution path rarely requires counter-intuitive regressive moves. The generalised version of FreeCell is still NP-hard — meaning that no polynomial-time algorithm is known — but in practice, well-implemented heuristic solvers find winning paths for almost all FreeCell deals within milliseconds. Spider 1-Suit is similarly tractable due to its single-suit constraint. At the difficult end, Klondike's hidden information makes it harder for algorithms to solve definitively than its apparent simplicity suggests, and Forty Thieves' large two-deck state space with restricted build rules makes it computationally expensive even when full information is assumed.

Can all variants of Solitaire be solved by algorithms?

Not all variants of Solitaire have been solved by algorithms. While games like FreeCell have been exhaustively analysed, confirming that virtually all deals are winnable, variants like Klondike and Spider have not been fully solved. This is due to hidden information and the sheer number of possible positions: a 52-card deck alone has roughly 8 × 10^67 distinct orderings, and each deal branches into an enormous tree of move sequences, making a complete algorithmic solution impractical. However, algorithms can still provide strategies for these games, helping players improve their chances of winning.

What are some common mistakes players make regarding algorithms in Solitaire?

One common mistake is assuming that algorithms can guarantee a win in every game of Solitaire. While algorithms can analyze and suggest optimal moves, they cannot account for every possible game state, especially in complex variants. Another mistake is relying solely on algorithms without understanding the underlying strategies. Players may overlook fundamental principles like card sequencing or tableau management, which are crucial for success. Lastly, some players may misinterpret algorithm outputs, thinking they indicate a guaranteed win rather than a recommended strategy.

How can I apply algorithm research to improve my Solitaire game?

To apply algorithm research to your Solitaire game, start by studying the strategies that successful algorithms use. For example, algorithms often prioritize uncovering face-down cards and maintaining flexibility in moves. Practice these strategies in your gameplay. Additionally, analyze your past games to identify patterns in your mistakes. Use software or apps that simulate algorithmic play to see how different moves impact the game outcome. Finally, consider joining online forums or communities where players discuss algorithmic strategies, allowing you to learn from others' experiences and insights.