From 0583a2535855ab8d6d971dc7c54ac4e21dd11d93 Mon Sep 17 00:00:00 2001
From: Dhruv Goel <83488876+Dhruvgoel3829@users.noreply.github.com>
Date: Wed, 2 Oct 2024 18:56:48 +0530
Subject: [PATCH 01/12] game theory folder added

---
 maths/Game Theory/placeholder | 1 +
 1 file changed, 1 insertion(+)
 create mode 100644 maths/Game Theory/placeholder

diff --git a/maths/Game Theory/placeholder b/maths/Game Theory/placeholder
new file mode 100644
index 000000000000..8b137891791f
--- /dev/null
+++ b/maths/Game Theory/placeholder
@@ -0,0 +1 @@
+

From 02e2eb88f4e7b4a7bc9f9e780d90cee2a5eba5f3 Mon Sep 17 00:00:00 2001
From: Dhruv Goel <83488876+Dhruvgoel3829@users.noreply.github.com>
Date: Wed, 2 Oct 2024 18:57:25 +0530
Subject: [PATCH 02/12] Create placeholder

---
 maths/Game Theory/minimax/placeholder | 1 +
 1 file changed, 1 insertion(+)
 create mode 100644 maths/Game Theory/minimax/placeholder

diff --git a/maths/Game Theory/minimax/placeholder b/maths/Game Theory/minimax/placeholder
new file mode 100644
index 000000000000..8b137891791f
--- /dev/null
+++ b/maths/Game Theory/minimax/placeholder
@@ -0,0 +1 @@
+

From c383eeb10ebcbfc0db0962b50312760af20700ec Mon Sep 17 00:00:00 2001
From: Dhruv Goel <83488876+Dhruvgoel3829@users.noreply.github.com>
Date: Wed, 2 Oct 2024 18:57:49 +0530
Subject: [PATCH 03/12] Add files via upload

---
 maths/Game Theory/minimax/README.md  |  30 ++++++
 maths/Game Theory/minimax/minimax.py | 147 +++++++++++++++++++++++++++
 2 files changed, 177 insertions(+)
 create mode 100644 maths/Game Theory/minimax/README.md
 create mode 100644 maths/Game Theory/minimax/minimax.py

diff --git a/maths/Game Theory/minimax/README.md b/maths/Game Theory/minimax/README.md
new file mode 100644
index 000000000000..b6f73f301e49
--- /dev/null
+++ b/maths/Game Theory/minimax/README.md
@@ -0,0 +1,30 @@
+
+# Minimax Algorithm
+
+A decision-making algorithm for two-player games that minimizes the maximum possible loss. (This is a simple, recursive implementation of the MiniMax algorithm in Python.)
+
+MiniMax is used in decision theory, game theory, statistics and philosophy. It can be applied to two-player games with perfect information about the game states, such as Tic Tac Toe. That means MiniMax cannot be used in games that involve randomness, such as dice games, because it has to be fully aware of all possible moves and states before it can settle on the best move to play.
+
+The following implementation is written for the cubes/sticks game: the user sets an initial number of cubes available on a table. Both players (the user and the PC running MiniMax) take turns picking up cubes off the table in groups of 1, 2 or K, where K is also set by the user. The player who picks up the last remaining cubes from the table in a single take wins the game.
+
+MiniMax is implemented for the PC player and always assumes that the opponent (the user) also plays optimally. It is fully aware of the remaining cubes and of its valid moves in every state, so it recursively expands the whole game tree; since there are three possible moves (1, 2, K), every internal node of the tree has exactly three children, one per option.
+
+The game is over when there are no cubes left on the table, or when the cube count becomes negative. The negative case exists because MiniMax expands the whole tree without checking whether all three options are actually allowed in a given state.
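+
+As a small illustration of the underlying win/lose recursion, here is a self-contained sketch (the helper below is for this README only and is not part of minimax.py). It answers "can the side to move force a win?" for the cubes game, and it sidesteps the negative-cube case entirely by generating only legal moves:
+
+```python
+def can_force_win(cubes: int, k: int) -> bool:
+    """Return True if the player to move can force a win with moves {1, 2, k}."""
+    if cubes == 0:
+        return False  # the previous player took the last cube and won
+    legal = [take for take in (1, 2, k) if take <= cubes]
+    # The side to move wins if at least one move leaves the opponent in a
+    # losing position; this is the MAX step of minimax with win/lose values.
+    return any(not can_force_win(cubes - take, k) for take in legal)
+
+print(can_force_win(4, 3))  # False: with 4 cubes and K = 3 the side to move loses
+print(can_force_win(5, 3))  # True: taking 1 cube leaves the opponent on 4 cubes
+```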
+In a better implementation we could handle that scenario explicitly, as is already done on the user's side. In any case, if MiniMax's move leads to a negative number of cubes, it loses the game.
+
+Evaluation starts at the leaves of the tree. The players alternate during gameplay, so each layer of the tree belongs to the current player (MAX or MIN). The evaluation function assigns a high/positive value if player MAX wins and a low/negative value if he loses (evaluation happens from MiniMax's perspective, so MiniMax is the MAX player). Once all leaves have been evaluated, the recursive implementation propagates their values up layer by layer until the root of the tree is evaluated as well. The MAX player tries to steer the root towards the highest possible value, assuming that the MIN player (the user) will do its best to steer it towards the lowest possible value. When the root gets its value, the MAX player (who plays first) knows which move leads to victory, or at least to the least painful loss.
+
+So the goal of MiniMax is to minimize the possible loss in the worst-case scenario, from the algorithm's perspective.
+
+
+
+/// The minimax.py file contains an example implementation with a detailed explanation ///
+
+
+
+
+## Acknowledgements
+
+ - [Original Author](https://github.com/savvasio)
+ - [Wiki](https://en.wikipedia.org/wiki/Minimax)
+ - [Video Explanation](https://www.youtube.com/watch?v=l-hh51ncgDI)
+
diff --git a/maths/Game Theory/minimax/minimax.py b/maths/Game Theory/minimax/minimax.py
new file mode 100644
index 000000000000..8cbf01798455
--- /dev/null
+++ b/maths/Game Theory/minimax/minimax.py
@@ -0,0 +1,147 @@
+# ==================== 0. Evaluation & Utilities ==================
+
+# If the amount of cubes on the table is 0, the last player to pick up cubes off the table is the winner.
+# State evaluation is set on the MAX player's perspective (PC), so if he wins he gets eval +100. If he loses, his eval is set to -100.
+# In states with a negative amount of cubes availiable on the table, the last person played is the loser.
+# If the current state is not final, we don't care on the current evaluation so we simply initialise it to 0.
+
+def evaluate(state, player):
+    if(state == 0):
+        if(-player == MAX):
+            return +100
+        else:
+            return -100
+    elif(state < 0):
+        if(-player == MAX):
+            return -100
+        else:
+            return +100
+    else:
+        return 0
+
+def gameOver(remainingCubes, player):
+    if(remainingCubes == 0):
+        if(player == MAX): # If MAX's turn led to 0 cubes on the table
+            print('='*20)
+            print('Im sorry, you lost!')
+            print('='*20)
+        else:
+            print('='*69)
+            print('Hey congrats! You won MiniMax. Didnt see that coming!')
+            print('='*69)
+        return True
+
+# M input validation
+def validateM(message):
+    while True:
+        try:
+            inp = input(message)
+            if(inp == 'q' or inp == 'Q'): quit() # Exit tha game
+            M = int(inp)
+        except ValueError:
+            print('Try again with an integer!')
+            continue
+        else:
+            if(M >= 4): # We can not accept less than 4
+                return M
+            else:
+                print('Please try again with an integer bigger than 3.')
+                continue
+
+# K input validation
+def validateK(message):
+    while True:
+        try:
+            inp = input(message)
+            if(inp == 'q' or inp == 'Q'): quit()
+            K = int(inp)
+        except ValueError:
+            print('Try again with an integer!')
+            continue
+        if(K > 2) and (K < M): # acceptable K limits are 2+1 & M-1 respectively.
+            return K
+        else:
+            print(f'You need to insert an integer in the range of 3 to {M-1}!')
+
+# Game play input validation
+# Input is considered valid only if its one of the 3 availiable options and does not cause a negative amount of cubes on the table.
+def validateInput(message):
+    while True:
+        try:
+            inp = input(message)
+            if(inp == 'q' or inp == 'Q'): quit()
+            inp = int(inp) # in the cause of not integer input it causes an error
+        except ValueError:
+            print(f'Try again with an integer!')
+            continue
+        if(inp in choices):
+            if(M - inp >=0):
+                return inp # Accepted input
+            else:
+                print(f'There are no {inp} availiable cubes. Try to pick up less..')
+        else:
+            print(f'Wrong choice, try again. Availiable options are: 1 or 2 or {K}: ')
+
+def plural(choice):
+    if(choice == 1):
+        return 'cube'
+    else:
+        return 'cubes'
+
+# ==================== 1. MiniMax for the optimal choice from MAX ==================
+# It recursively expands the whole tree and returns the list [score, move],
+# meaning the pair of best score tighten to the actual move that caused it.
+def MiniMax(state, player):
+    if(state <= 0): # Base case that will end recursion
+        return [evaluate(state, player), 0] # We really do not care on the move at this point
+
+    availiableChoices=[]
+    for i in range(len(choices)): # for every availiable choice/branch of the tree 1, 2 ή K
+        score, move = MiniMax(state - choices[i], -player) # Again we dont care on the move here
+        availiableChoices.append(score)
+
+    if (player == MAX):
+        score = max(availiableChoices)
+        move = [i for i, value in enumerate(availiableChoices) if value == score]
+        # move list consists of all indexes where min or max shows up but we will
+        # use only the 1st one.
+        return [score, move[0]]
+    else:
+        score = min(availiableChoices)
+        move = [i for i, value in enumerate(availiableChoices) if value == score]
+        return [score, move[0]]
+
+
+# ====================== 2. MAIN EXECUTION ======================
+print('+'*126)
+print('INSTUCTIONS: There are M availiable cubes on the table. Both players are allowed to remove 1, 2 or K cubes at the same time.')
+print('You will set the M & K variables. Since tree prunning has not been implemented, its Minimax after all, we suggest you set M < 20 for the execution to be smooth.')
+print('Press q to exit the game.')
+print('The player who removes the last cube off the table will be the winner. The first player is the PC. Good luck!')
+print('+'*126)
+
+MAX = +1
+MIN = -1
+M = validateM('Please insert an initial number of cubes (M) availiable on the table: ') # M = state/depth/remainingCubes
+K = validateK('Please insert an integer K, 2 < K < M, that will act as the 3rd option for the ammount of cubes both players can get off the table: ')
+choices = [1, 2, K]
+
+print(f'\nThe game begins with {M} cubes availiable on the table and each player can pick 1, 2 ή {K}:')
+while(M > 0):
+    # ===== PC's turn =====
+    print('Please wait for the PC to make its mind..')
+    score, move = MiniMax(M, MAX)
+    M = M - choices[move]
+
+    print(f'\nPc chose to remove {choices[move]} {plural(choices[move])} off the table. Remaining cubes are {M}.')
+    if((gameOver(M, MAX))): break # Game over check
+
+    # ===== Παίζει ο χρήστης =====
+    else:
+        userChoice = validateInput(f'\nHow many cubes would you like to pick up (1, 2 ή {K}): ')
+        # In valid the game goes on. In any other case it gets stacked on the validation function till a proper input is given.
+
+        M = M - int(userChoice)
+        print(f'\nYou chose to remove {userChoice} {plural(int(userChoice))} from the table. Remaining cubes are {M}.')
+        if((gameOver(M, MIN))): break # Game over check.
+
\ No newline at end of file

From e1a670190b85c396115fa7ff65fc7d09699e2f0e Mon Sep 17 00:00:00 2001
From: Dhruv Goel <83488876+Dhruvgoel3829@users.noreply.github.com>
Date: Wed, 2 Oct 2024 18:59:42 +0530
Subject: [PATCH 04/12] Delete maths/Game Theory/placeholder

---
 maths/Game Theory/placeholder | 1 -
 1 file changed, 1 deletion(-)
 delete mode 100644 maths/Game Theory/placeholder

diff --git a/maths/Game Theory/placeholder b/maths/Game Theory/placeholder
deleted file mode 100644
index 8b137891791f..000000000000
--- a/maths/Game Theory/placeholder
+++ /dev/null
@@ -1 +0,0 @@
-

From 27ff760be82803f409a0b3be53f6f2085acfb74f Mon Sep 17 00:00:00 2001
From: Dhruv Goel <83488876+Dhruvgoel3829@users.noreply.github.com>
Date: Wed, 2 Oct 2024 19:01:10 +0530
Subject: [PATCH 05/12] Create readme.md

---
 maths/Game Theory/AlphaBetaPruning/readme.md | 14 ++++++++++++++
 1 file changed, 14 insertions(+)
 create mode 100644 maths/Game Theory/AlphaBetaPruning/readme.md

diff --git a/maths/Game Theory/AlphaBetaPruning/readme.md b/maths/Game Theory/AlphaBetaPruning/readme.md
new file mode 100644
index 000000000000..b5a728105449
--- /dev/null
+++ b/maths/Game Theory/AlphaBetaPruning/readme.md
@@ -0,0 +1,14 @@
+# Alpha-Beta Pruning
+
+An optimization technique for the minimax algorithm that reduces the number of nodes evaluated by eliminating branches that cannot affect the final decision (essentially an upgrade of the minimax algorithm).
+
+As we saw with the minimax search algorithm, the number of game states it has to examine grows exponentially with the depth of the tree. We cannot eliminate the exponent, but we can roughly cut it in half: there is a technique that lets us compute the correct minimax decision without checking every node of the game tree, and this technique is called pruning. Because it uses two threshold parameters, alpha and beta, it is known as alpha-beta pruning (or the Alpha-Beta Algorithm). Alpha-beta pruning can be applied at any depth of the tree, and sometimes it prunes not just leaves but entire sub-trees. The two parameters can be defined as:
+
+1. Alpha: the best (highest-value) choice found so far at any point along the path of the Maximizer. The initial value of alpha is -∞.
+2. Beta: the best (lowest-value) choice found so far at any point along the path of the Minimizer. The initial value of beta is +∞.
+
+Alpha-beta pruning returns the same move as the standard minimax algorithm, but it removes all the nodes that cannot affect the final decision and only slow the algorithm down. By pruning these nodes, it makes the algorithm faster.
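+
+To make the idea concrete, here is a minimal, self-contained sketch of alpha-beta pruning (illustrative only; it is not the Tic-Tac-Toe implementation in alphabetapruning.py, and the tuple-based tree encoding is used just for this example). Alpha and beta are tightened as children are evaluated, and the remaining children are skipped as soon as alpha ≥ beta:
+
+```python
+def alphabeta(tree, alpha, beta, maximizing):
+    # A tree is either a leaf score (a number) or a tuple of child trees.
+    if not isinstance(tree, tuple):
+        return tree  # leaf: static evaluation
+    if maximizing:
+        best = float("-inf")
+        for child in tree:
+            best = max(best, alphabeta(child, alpha, beta, False))
+            alpha = max(alpha, best)
+            if alpha >= beta:
+                break  # beta cut-off: the minimizing parent will never allow this branch
+        return best
+    else:
+        best = float("inf")
+        for child in tree:
+            best = min(best, alphabeta(child, alpha, beta, True))
+            beta = min(beta, best)
+            if alpha >= beta:
+                break  # alpha cut-off: the maximizing parent already has a better option
+        return best
+
+# MAX root with three MIN children: their values are 3, 6 and 2, so the result is 6.
+# While exploring the last MIN node, its first leaf (2) drops beta to 2, which is
+# already below alpha (6), so the remaining leaf 7 is pruned and never evaluated.
+tree = ((3, 5), (6, 9), (2, 7))
+print(alphabeta(tree, float("-inf"), float("inf"), True))  # 6
+```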
+## Acknowledgements + + - [Original Author](https://github.com/anmolchandelCO180309) + - [Wiki](https://en.wikipedia.org/wiki/Alpha%E2%80%93beta_pruning) + +#### /// The alphabetapruning.py file has a Tic-Tac-Toe game implemented with a good explanation /// From 3580f822312d6864ae280ceb534ce54115baa59a Mon Sep 17 00:00:00 2001 From: Dhruv Goel <83488876+Dhruvgoel3829@users.noreply.github.com> Date: Wed, 2 Oct 2024 19:01:38 +0530 Subject: [PATCH 06/12] Add files via upload --- .../AlphaBetaPruning/alphabetapruning.py | 194 ++++++++++++++++++ 1 file changed, 194 insertions(+) create mode 100644 maths/Game Theory/AlphaBetaPruning/alphabetapruning.py diff --git a/maths/Game Theory/AlphaBetaPruning/alphabetapruning.py b/maths/Game Theory/AlphaBetaPruning/alphabetapruning.py new file mode 100644 index 000000000000..82ab468a8d6c --- /dev/null +++ b/maths/Game Theory/AlphaBetaPruning/alphabetapruning.py @@ -0,0 +1,194 @@ +from random import choice +from math import inf + +board = [[0, 0, 0], + [0, 0, 0], + [0, 0, 0]] + +def Gameboard(board): + chars = {1: 'X', -1: 'O', 0: ' '} + for x in board: + for y in x: + ch = chars[y] + print(f'| {ch} |', end='') + print('\n' + '---------------') + print('===============') + +def Clearboard(board): + for x, row in enumerate(board): + for y, col in enumerate(row): + board[x][y] = 0 + +def winningPlayer(board, player): + conditions = [[board[0][0], board[0][1], board[0][2]], + [board[1][0], board[1][1], board[1][2]], + [board[2][0], board[2][1], board[2][2]], + [board[0][0], board[1][0], board[2][0]], + [board[0][1], board[1][1], board[2][1]], + [board[0][2], board[1][2], board[2][2]], + [board[0][0], board[1][1], board[2][2]], + [board[0][2], board[1][1], board[2][0]]] + + if [player, player, player] in conditions: + return True + + return False + +def gameWon(board): + return winningPlayer(board, 1) or winningPlayer(board, -1) + +def printResult(board): + if winningPlayer(board, 1): + print('X has won! ' + '\n') + + elif winningPlayer(board, -1): + print('O\'s have won! ' + '\n') + + else: + print('Draw' + '\n') + +def blanks(board): + blank = [] + for x, row in enumerate(board): + for y, col in enumerate(row): + if board[x][y] == 0: + blank.append([x, y]) + + return blank + +def boardFull(board): + if len(blanks(board)) == 0: + return True + return False + +def setMove(board, x, y, player): + board[x][y] = player + +def playerMove(board): + e = True + moves = {1: [0, 0], 2: [0, 1], 3: [0, 2], + 4: [1, 0], 5: [1, 1], 6: [1, 2], + 7: [2, 0], 8: [2, 1], 9: [2, 2]} + while e: + try: + move = int(input('Enter a number between 1-9: ')) + if move < 1 or move > 9: + print('Invalid Move! Try again!') + elif not (moves[move] in blanks(board)): + print('Invalid Move! 
Try again!') + else: + setMove(board, moves[move][0], moves[move][1], 1) + Gameboard(board) + e = False + except(KeyError, ValueError): + print('Enter a number!') + +def getScore(board): + if winningPlayer(board, 1): + return 10 + + elif winningPlayer(board, -1): + return -10 + + else: + return 0 + +def abminimax(board, depth, alpha, beta, player): + row = -1 + col = -1 + if depth == 0 or gameWon(board): + return [row, col, getScore(board)] + + else: + for cell in blanks(board): + setMove(board, cell[0], cell[1], player) + score = abminimax(board, depth - 1, alpha, beta, -player) + if player == 1: + # X is always the max player + if score[2] > alpha: + alpha = score[2] + row = cell[0] + col = cell[1] + + else: + if score[2] < beta: + beta = score[2] + row = cell[0] + col = cell[1] + + setMove(board, cell[0], cell[1], 0) + + if alpha >= beta: + break + + if player == 1: + return [row, col, alpha] + + else: + return [row, col, beta] + +def o_comp(board): + if len(blanks(board)) == 9: + x = choice([0, 1, 2]) + y = choice([0, 1, 2]) + setMove(board, x, y, -1) + Gameboard(board) + + else: + result = abminimax(board, len(blanks(board)), -inf, inf, -1) + setMove(board, result[0], result[1], -1) + Gameboard(board) + +def x_comp(board): + if len(blanks(board)) == 9: + x = choice([0, 1, 2]) + y = choice([0, 1, 2]) + setMove(board, x, y, 1) + Gameboard(board) + + else: + result = abminimax(board, len(blanks(board)), -inf, inf, 1) + setMove(board, result[0], result[1], 1) + Gameboard(board) + +def makeMove(board, player, mode): + if mode == 1: + if player == 1: + playerMove(board) + + else: + o_comp(board) + else: + if player == 1: + o_comp(board) + else: + x_comp(board) + +def pvc(): + while True: + try: + order = int(input('Enter to play 1st or 2nd: ')) + if not (order == 1 or order == 2): + print('Please pick 1 or 2') + else: + break + except(KeyError, ValueError): + print('Enter a number') + + Clearboard(board) + if order == 2: + currentPlayer = -1 + else: + currentPlayer = 1 + + while not (boardFull(board) or gameWon(board)): + makeMove(board, currentPlayer, 1) + currentPlayer *= -1 + + printResult(board) + +# Driver Code +print("=================================================") +print("TIC-TAC-TOE using MINIMAX with ALPHA-BETA Pruning") +print("=================================================") +pvc() From e328c5e5df0a8e21ec304c7ad13110455d28f697 Mon Sep 17 00:00:00 2001 From: Dhruv Goel <83488876+Dhruvgoel3829@users.noreply.github.com> Date: Wed, 2 Oct 2024 19:02:22 +0530 Subject: [PATCH 07/12] removed junk --- maths/Game Theory/minimax/placeholder | 1 - 1 file changed, 1 deletion(-) delete mode 100644 maths/Game Theory/minimax/placeholder diff --git a/maths/Game Theory/minimax/placeholder b/maths/Game Theory/minimax/placeholder deleted file mode 100644 index 8b137891791f..000000000000 --- a/maths/Game Theory/minimax/placeholder +++ /dev/null @@ -1 +0,0 @@ - From 3fe75e0f391cf8b7adb66908a10805fb0daebf77 Mon Sep 17 00:00:00 2001 From: Dhruv Goel <83488876+Dhruvgoel3829@users.noreply.github.com> Date: Wed, 2 Oct 2024 19:03:43 +0530 Subject: [PATCH 08/12] Create README.md --- .../MonteCarloTreeSearch (MCTS)/README.md | 16 ++++++++++++++++ 1 file changed, 16 insertions(+) create mode 100644 maths/Game Theory/MonteCarloTreeSearch (MCTS)/README.md diff --git a/maths/Game Theory/MonteCarloTreeSearch (MCTS)/README.md b/maths/Game Theory/MonteCarloTreeSearch (MCTS)/README.md new file mode 100644 index 000000000000..3af1d8c95fda --- /dev/null +++ b/maths/Game Theory/MonteCarloTreeSearch 
(MCTS)/README.md @@ -0,0 +1,16 @@ + +# Monte Carlo Tree Search (MCTS) + +A heuristic search algorithm used for decision-making processes, particularly in games like Go. + +The focus of MCTS is on the analysis of the most promising moves, expanding the search tree based on random sampling of the search space. The application of Monte Carlo tree search in games is based on many playouts, also called roll-outs. In each playout, the game is played out to the very end by selecting moves at random. The final game result of each playout is then used to weight the nodes in the game tree so that better nodes are more likely to be chosen in future playouts. + +#### In the monte_carlo_tree_search.py file there is a minimal implementation of Monte Carlo tree search (MCTS) in Python 3 + +#### In the tictactoe.py file tere is an example implementation of the abstract Node class for use in MCTS + + +## Acknowledgements + + - [Original Author](https://gist.github.com/qpwo/c538c6f73727e254fdc7fab81024f6e1) + - [Wiki](https://en.wikipedia.org/wiki/Monte_Carlo_tree_search) From 057380d165a7a49f6f93eb95b8dcc717abbe1d4d Mon Sep 17 00:00:00 2001 From: Dhruv Goel <83488876+Dhruvgoel3829@users.noreply.github.com> Date: Wed, 2 Oct 2024 19:04:09 +0530 Subject: [PATCH 09/12] Add files via upload --- .../monte_carlo_tree_search.py | 134 ++++++++++++++++++ .../MonteCarloTreeSearch (MCTS)/tictactoe.py | 129 +++++++++++++++++ 2 files changed, 263 insertions(+) create mode 100644 maths/Game Theory/MonteCarloTreeSearch (MCTS)/monte_carlo_tree_search.py create mode 100644 maths/Game Theory/MonteCarloTreeSearch (MCTS)/tictactoe.py diff --git a/maths/Game Theory/MonteCarloTreeSearch (MCTS)/monte_carlo_tree_search.py b/maths/Game Theory/MonteCarloTreeSearch (MCTS)/monte_carlo_tree_search.py new file mode 100644 index 000000000000..19a06dd63d17 --- /dev/null +++ b/maths/Game Theory/MonteCarloTreeSearch (MCTS)/monte_carlo_tree_search.py @@ -0,0 +1,134 @@ +""" +A minimal implementation of Monte Carlo tree search (MCTS) in Python 3 +Luke Harold Miles, July 2019, Public Domain Dedication +See also https://en.wikipedia.org/wiki/Monte_Carlo_tree_search +https://gist.github.com/qpwo/c538c6f73727e254fdc7fab81024f6e1 +""" +from abc import ABC, abstractmethod +from collections import defaultdict +import math + + +class MCTS: + "Monte Carlo tree searcher. First rollout the tree then choose a move." + + def __init__(self, exploration_weight=1): + self.Q = defaultdict(int) # total reward of each node + self.N = defaultdict(int) # total visit count for each node + self.children = dict() # children of each node + self.exploration_weight = exploration_weight + + def choose(self, node): + "Choose the best successor of node. (Choose a move in the game)" + if node.is_terminal(): + raise RuntimeError(f"choose called on terminal node {node}") + + if node not in self.children: + return node.find_random_child() + + def score(n): + if self.N[n] == 0: + return float("-inf") # avoid unseen moves + return self.Q[n] / self.N[n] # average reward + + return max(self.children[node], key=score) + + def do_rollout(self, node): + "Make the tree one layer better. 
(Train for one iteration.)" + path = self._select(node) + leaf = path[-1] + self._expand(leaf) + reward = self._simulate(leaf) + self._backpropagate(path, reward) + + def _select(self, node): + "Find an unexplored descendent of `node`" + path = [] + while True: + path.append(node) + if node not in self.children or not self.children[node]: + # node is either unexplored or terminal + return path + unexplored = self.children[node] - self.children.keys() + if unexplored: + n = unexplored.pop() + path.append(n) + return path + node = self._uct_select(node) # descend a layer deeper + + def _expand(self, node): + "Update the `children` dict with the children of `node`" + if node in self.children: + return # already expanded + self.children[node] = node.find_children() + + def _simulate(self, node): + "Returns the reward for a random simulation (to completion) of `node`" + invert_reward = True + while True: + if node.is_terminal(): + reward = node.reward() + return 1 - reward if invert_reward else reward + node = node.find_random_child() + invert_reward = not invert_reward + + def _backpropagate(self, path, reward): + "Send the reward back up to the ancestors of the leaf" + for node in reversed(path): + self.N[node] += 1 + self.Q[node] += reward + reward = 1 - reward # 1 for me is 0 for my enemy, and vice versa + + def _uct_select(self, node): + "Select a child of node, balancing exploration & exploitation" + + # All children of node should already be expanded: + assert all(n in self.children for n in self.children[node]) + + log_N_vertex = math.log(self.N[node]) + + def uct(n): + "Upper confidence bound for trees" + return self.Q[n] / self.N[n] + self.exploration_weight * math.sqrt( + log_N_vertex / self.N[n] + ) + + return max(self.children[node], key=uct) + + +class Node(ABC): + """ + A representation of a single board state. + MCTS works by constructing a tree of these Nodes. + Could be e.g. a chess or checkers board state. + """ + + @abstractmethod + def find_children(self): + "All possible successors of this board state" + return set() + + @abstractmethod + def find_random_child(self): + "Random successor of this board state (for more efficient simulation)" + return None + + @abstractmethod + def is_terminal(self): + "Returns True if the node has no children" + return True + + @abstractmethod + def reward(self): + "Assumes `self` is terminal node. 1=win, 0=loss, .5=tie, etc" + return 0 + + @abstractmethod + def __hash__(self): + "Nodes must be hashable" + return 123456789 + + @abstractmethod + def __eq__(node1, node2): + "Nodes must be comparable" + return True \ No newline at end of file diff --git a/maths/Game Theory/MonteCarloTreeSearch (MCTS)/tictactoe.py b/maths/Game Theory/MonteCarloTreeSearch (MCTS)/tictactoe.py new file mode 100644 index 000000000000..1dd906b4ae43 --- /dev/null +++ b/maths/Game Theory/MonteCarloTreeSearch (MCTS)/tictactoe.py @@ -0,0 +1,129 @@ +""" +An example implementation of the abstract Node class for use in MCTS + +If you run this file then you can play against the computer. + +A tic-tac-toe board is represented as a tuple of 9 values, each either None, +True, or False, respectively meaning 'empty', 'X', and 'O'. 
+ +The board is indexed by row: +0 1 2 +3 4 5 +6 7 8 + +For example, this game board +O - X +O X - +X - - +corrresponds to this tuple: +(False, None, True, False, True, None, True, None, None) +""" + +from collections import namedtuple +from random import choice +from monte_carlo_tree_search import MCTS, Node + +_TTTB = namedtuple("TicTacToeBoard", "tup turn winner terminal") + +# Inheriting from a namedtuple is convenient because it makes the class +# immutable and predefines __init__, __repr__, __hash__, __eq__, and others +class TicTacToeBoard(_TTTB, Node): + def find_children(board): + if board.terminal: # If the game is finished then no moves can be made + return set() + # Otherwise, you can make a move in each of the empty spots + return { + board.make_move(i) for i, value in enumerate(board.tup) if value is None + } + + def find_random_child(board): + if board.terminal: + return None # If the game is finished then no moves can be made + empty_spots = [i for i, value in enumerate(board.tup) if value is None] + return board.make_move(choice(empty_spots)) + + def reward(board): + if not board.terminal: + raise RuntimeError(f"reward called on nonterminal board {board}") + if board.winner is board.turn: + # It's your turn and you've already won. Should be impossible. + raise RuntimeError(f"reward called on unreachable board {board}") + if board.turn is (not board.winner): + return 0 # Your opponent has just won. Bad. + if board.winner is None: + return 0.5 # Board is a tie + # The winner is neither True, False, nor None + raise RuntimeError(f"board has unknown winner type {board.winner}") + + def is_terminal(board): + return board.terminal + + def make_move(board, index): + tup = board.tup[:index] + (board.turn,) + board.tup[index + 1 :] + turn = not board.turn + winner = _find_winner(tup) + is_terminal = (winner is not None) or not any(v is None for v in tup) + return TicTacToeBoard(tup, turn, winner, is_terminal) + + def to_pretty_string(board): + to_char = lambda v: ("X" if v is True else ("O" if v is False else " ")) + rows = [ + [to_char(board.tup[3 * row + col]) for col in range(3)] for row in range(3) + ] + return ( + "\n 1 2 3\n" + + "\n".join(str(i + 1) + " " + " ".join(row) for i, row in enumerate(rows)) + + "\n" + ) + + +def play_game(): + tree = MCTS() + board = new_tic_tac_toe_board() + print(board.to_pretty_string()) + while True: + row_col = input("enter row,col: ") + row, col = map(int, row_col.split(",")) + index = 3 * (row - 1) + (col - 1) + if board.tup[index] is not None: + raise RuntimeError("Invalid move") + board = board.make_move(index) + print(board.to_pretty_string()) + if board.terminal: + break + # You can train as you go, or only at the beginning. + # Here, we train as we go, doing fifty rollouts each turn. 
+ for _ in range(50): + tree.do_rollout(board) + board = tree.choose(board) + print(board.to_pretty_string()) + if board.terminal: + break + + +def _winning_combos(): + for start in range(0, 9, 3): # three in a row + yield (start, start + 1, start + 2) + for start in range(3): # three in a column + yield (start, start + 3, start + 6) + yield (0, 4, 8) # down-right diagonal + yield (2, 4, 6) # down-left diagonal + + +def _find_winner(tup): + "Returns None if no winner, True if X wins, False if O wins" + for i1, i2, i3 in _winning_combos(): + v1, v2, v3 = tup[i1], tup[i2], tup[i3] + if False is v1 is v2 is v3: + return False + if True is v1 is v2 is v3: + return True + return None + + +def new_tic_tac_toe_board(): + return TicTacToeBoard(tup=(None,) * 9, turn=True, winner=None, terminal=False) + + +if __name__ == "__main__": + play_game() \ No newline at end of file From b1b3826895b8a3d2979aee9ac8dc834ce7475906 Mon Sep 17 00:00:00 2001 From: "pre-commit-ci[bot]" <66853113+pre-commit-ci[bot]@users.noreply.github.com> Date: Wed, 2 Oct 2024 13:57:17 +0000 Subject: [PATCH 10/12] [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci --- .../AlphaBetaPruning/alphabetapruning.py | 84 +++++---- .../monte_carlo_tree_search.py | 3 +- .../MonteCarloTreeSearch (MCTS)/tictactoe.py | 3 +- maths/Game Theory/minimax/minimax.py | 168 +++++++++++------- 4 files changed, 160 insertions(+), 98 deletions(-) diff --git a/maths/Game Theory/AlphaBetaPruning/alphabetapruning.py b/maths/Game Theory/AlphaBetaPruning/alphabetapruning.py index 82ab468a8d6c..b817347b32d8 100644 --- a/maths/Game Theory/AlphaBetaPruning/alphabetapruning.py +++ b/maths/Game Theory/AlphaBetaPruning/alphabetapruning.py @@ -1,51 +1,57 @@ from random import choice from math import inf -board = [[0, 0, 0], - [0, 0, 0], - [0, 0, 0]] +board = [[0, 0, 0], [0, 0, 0], [0, 0, 0]] + def Gameboard(board): - chars = {1: 'X', -1: 'O', 0: ' '} + chars = {1: "X", -1: "O", 0: " "} for x in board: for y in x: ch = chars[y] - print(f'| {ch} |', end='') - print('\n' + '---------------') - print('===============') + print(f"| {ch} |", end="") + print("\n" + "---------------") + print("===============") + def Clearboard(board): for x, row in enumerate(board): for y, col in enumerate(row): board[x][y] = 0 + def winningPlayer(board, player): - conditions = [[board[0][0], board[0][1], board[0][2]], - [board[1][0], board[1][1], board[1][2]], - [board[2][0], board[2][1], board[2][2]], - [board[0][0], board[1][0], board[2][0]], - [board[0][1], board[1][1], board[2][1]], - [board[0][2], board[1][2], board[2][2]], - [board[0][0], board[1][1], board[2][2]], - [board[0][2], board[1][1], board[2][0]]] + conditions = [ + [board[0][0], board[0][1], board[0][2]], + [board[1][0], board[1][1], board[1][2]], + [board[2][0], board[2][1], board[2][2]], + [board[0][0], board[1][0], board[2][0]], + [board[0][1], board[1][1], board[2][1]], + [board[0][2], board[1][2], board[2][2]], + [board[0][0], board[1][1], board[2][2]], + [board[0][2], board[1][1], board[2][0]], + ] if [player, player, player] in conditions: return True return False + def gameWon(board): return winningPlayer(board, 1) or winningPlayer(board, -1) + def printResult(board): if winningPlayer(board, 1): - print('X has won! ' + '\n') + print("X has won! " + "\n") elif winningPlayer(board, -1): - print('O\'s have won! ' + '\n') + print("O's have won! 
" + "\n") else: - print('Draw' + '\n') + print("Draw" + "\n") + def blanks(board): blank = [] @@ -56,32 +62,44 @@ def blanks(board): return blank + def boardFull(board): if len(blanks(board)) == 0: return True return False + def setMove(board, x, y, player): board[x][y] = player + def playerMove(board): e = True - moves = {1: [0, 0], 2: [0, 1], 3: [0, 2], - 4: [1, 0], 5: [1, 1], 6: [1, 2], - 7: [2, 0], 8: [2, 1], 9: [2, 2]} + moves = { + 1: [0, 0], + 2: [0, 1], + 3: [0, 2], + 4: [1, 0], + 5: [1, 1], + 6: [1, 2], + 7: [2, 0], + 8: [2, 1], + 9: [2, 2], + } while e: try: - move = int(input('Enter a number between 1-9: ')) + move = int(input("Enter a number between 1-9: ")) if move < 1 or move > 9: - print('Invalid Move! Try again!') + print("Invalid Move! Try again!") elif not (moves[move] in blanks(board)): - print('Invalid Move! Try again!') + print("Invalid Move! Try again!") else: setMove(board, moves[move][0], moves[move][1], 1) Gameboard(board) e = False - except(KeyError, ValueError): - print('Enter a number!') + except (KeyError, ValueError): + print("Enter a number!") + def getScore(board): if winningPlayer(board, 1): @@ -93,6 +111,7 @@ def getScore(board): else: return 0 + def abminimax(board, depth, alpha, beta, player): row = -1 col = -1 @@ -127,6 +146,7 @@ def abminimax(board, depth, alpha, beta, player): else: return [row, col, beta] + def o_comp(board): if len(blanks(board)) == 9: x = choice([0, 1, 2]) @@ -139,6 +159,7 @@ def o_comp(board): setMove(board, result[0], result[1], -1) Gameboard(board) + def x_comp(board): if len(blanks(board)) == 9: x = choice([0, 1, 2]) @@ -151,6 +172,7 @@ def x_comp(board): setMove(board, result[0], result[1], 1) Gameboard(board) + def makeMove(board, player, mode): if mode == 1: if player == 1: @@ -164,16 +186,17 @@ def makeMove(board, player, mode): else: x_comp(board) + def pvc(): while True: try: - order = int(input('Enter to play 1st or 2nd: ')) + order = int(input("Enter to play 1st or 2nd: ")) if not (order == 1 or order == 2): - print('Please pick 1 or 2') + print("Please pick 1 or 2") else: break - except(KeyError, ValueError): - print('Enter a number') + except (KeyError, ValueError): + print("Enter a number") Clearboard(board) if order == 2: @@ -187,6 +210,7 @@ def pvc(): printResult(board) + # Driver Code print("=================================================") print("TIC-TAC-TOE using MINIMAX with ALPHA-BETA Pruning") diff --git a/maths/Game Theory/MonteCarloTreeSearch (MCTS)/monte_carlo_tree_search.py b/maths/Game Theory/MonteCarloTreeSearch (MCTS)/monte_carlo_tree_search.py index 19a06dd63d17..28a23f347a28 100644 --- a/maths/Game Theory/MonteCarloTreeSearch (MCTS)/monte_carlo_tree_search.py +++ b/maths/Game Theory/MonteCarloTreeSearch (MCTS)/monte_carlo_tree_search.py @@ -4,6 +4,7 @@ See also https://en.wikipedia.org/wiki/Monte_Carlo_tree_search https://gist.github.com/qpwo/c538c6f73727e254fdc7fab81024f6e1 """ + from abc import ABC, abstractmethod from collections import defaultdict import math @@ -131,4 +132,4 @@ def __hash__(self): @abstractmethod def __eq__(node1, node2): "Nodes must be comparable" - return True \ No newline at end of file + return True diff --git a/maths/Game Theory/MonteCarloTreeSearch (MCTS)/tictactoe.py b/maths/Game Theory/MonteCarloTreeSearch (MCTS)/tictactoe.py index 1dd906b4ae43..d16d602714d0 100644 --- a/maths/Game Theory/MonteCarloTreeSearch (MCTS)/tictactoe.py +++ b/maths/Game Theory/MonteCarloTreeSearch (MCTS)/tictactoe.py @@ -25,6 +25,7 @@ _TTTB = namedtuple("TicTacToeBoard", "tup turn winner 
terminal") + # Inheriting from a namedtuple is convenient because it makes the class # immutable and predefines __init__, __repr__, __hash__, __eq__, and others class TicTacToeBoard(_TTTB, Node): @@ -126,4 +127,4 @@ def new_tic_tac_toe_board(): if __name__ == "__main__": - play_game() \ No newline at end of file + play_game() diff --git a/maths/Game Theory/minimax/minimax.py b/maths/Game Theory/minimax/minimax.py index 8cbf01798455..13cf0cdfabb6 100644 --- a/maths/Game Theory/minimax/minimax.py +++ b/maths/Game Theory/minimax/minimax.py @@ -1,67 +1,74 @@ # ==================== 0. Evaluation & Utilities ================== # If the amount of cubes on the table is 0, the last player to pick up cubes off the table is the winner. -# State evaluation is set on the MAX player's perspective (PC), so if he wins he gets eval +100. If he loses, his eval is set to -100. +# State evaluation is set on the MAX player's perspective (PC), so if he wins he gets eval +100. If he loses, his eval is set to -100. # In states with a negative amount of cubes availiable on the table, the last person played is the loser. # If the current state is not final, we don't care on the current evaluation so we simply initialise it to 0. + def evaluate(state, player): - if(state == 0): - if(-player == MAX): - return +100 + if state == 0: + if -player == MAX: + return +100 else: return -100 - elif(state < 0): - if(-player == MAX): - return -100 + elif state < 0: + if -player == MAX: + return -100 else: return +100 else: - return 0 + return 0 + def gameOver(remainingCubes, player): - if(remainingCubes == 0): - if(player == MAX): # If MAX's turn led to 0 cubes on the table - print('='*20) - print('Im sorry, you lost!') - print('='*20) - else: - print('='*69) - print('Hey congrats! You won MiniMax. Didnt see that coming!') - print('='*69) + if remainingCubes == 0: + if player == MAX: # If MAX's turn led to 0 cubes on the table + print("=" * 20) + print("Im sorry, you lost!") + print("=" * 20) + else: + print("=" * 69) + print("Hey congrats! You won MiniMax. Didnt see that coming!") + print("=" * 69) return True + # M input validation def validateM(message): while True: try: inp = input(message) - if(inp == 'q' or inp == 'Q'): quit() # Exit tha game + if inp == "q" or inp == "Q": + quit() # Exit tha game M = int(inp) except ValueError: - print('Try again with an integer!') + print("Try again with an integer!") continue else: - if(M >= 4): # We can not accept less than 4 + if M >= 4: # We can not accept less than 4 return M else: - print('Please try again with an integer bigger than 3.') + print("Please try again with an integer bigger than 3.") continue + # K input validation def validateK(message): while True: try: inp = input(message) - if(inp == 'q' or inp == 'Q'): quit() + if inp == "q" or inp == "Q": + quit() K = int(inp) except ValueError: - print('Try again with an integer!') + print("Try again with an integer!") continue - if(K > 2) and (K < M): # acceptable K limits are 2+1 & M-1 respectively. + if (K > 2) and (K < M): # acceptable K limits are 2+1 & M-1 respectively. return K else: - print(f'You need to insert an integer in the range of 3 to {M-1}!') + print(f"You need to insert an integer in the range of 3 to {M-1}!") + # Game play input validation # Input is considered valid only if its one of the 3 availiable options and does not cause a negative amount of cubes on the table. 
@@ -69,79 +76,108 @@ def validateInput(message): while True: try: inp = input(message) - if(inp == 'q' or inp == 'Q'): quit() - inp = int(inp) # in the cause of not integer input it causes an error + if inp == "q" or inp == "Q": + quit() + inp = int(inp) # in the cause of not integer input it causes an error except ValueError: - print(f'Try again with an integer!') + print(f"Try again with an integer!") continue - if(inp in choices): - if(M - inp >=0): - return inp # Accepted input + if inp in choices: + if M - inp >= 0: + return inp # Accepted input else: - print(f'There are no {inp} availiable cubes. Try to pick up less..') + print(f"There are no {inp} availiable cubes. Try to pick up less..") else: - print(f'Wrong choice, try again. Availiable options are: 1 or 2 or {K}: ') + print(f"Wrong choice, try again. Availiable options are: 1 or 2 or {K}: ") + def plural(choice): - if(choice == 1): - return 'cube' + if choice == 1: + return "cube" else: - return 'cubes' + return "cubes" + # ==================== 1. MiniMax for the optimal choice from MAX ================== -# It recursively expands the whole tree and returns the list [score, move], +# It recursively expands the whole tree and returns the list [score, move], # meaning the pair of best score tighten to the actual move that caused it. def MiniMax(state, player): - if(state <= 0): # Base case that will end recursion - return [evaluate(state, player), 0] # We really do not care on the move at this point - - availiableChoices=[] - for i in range(len(choices)): # for every availiable choice/branch of the tree 1, 2 ή K - score, move = MiniMax(state - choices[i], -player) # Again we dont care on the move here + if state <= 0: # Base case that will end recursion + return [ + evaluate(state, player), + 0, + ] # We really do not care on the move at this point + + availiableChoices = [] + for i in range( + len(choices) + ): # for every availiable choice/branch of the tree 1, 2 ή K + score, move = MiniMax( + state - choices[i], -player + ) # Again we dont care on the move here availiableChoices.append(score) - if (player == MAX): + if player == MAX: score = max(availiableChoices) move = [i for i, value in enumerate(availiableChoices) if value == score] - # move list consists of all indexes where min or max shows up but we will - # use only the 1st one. + # move list consists of all indexes where min or max shows up but we will + # use only the 1st one. return [score, move[0]] else: score = min(availiableChoices) move = [i for i, value in enumerate(availiableChoices) if value == score] return [score, move[0]] - + # ====================== 2. MAIN EXECUTION ====================== -print('+'*126) -print('INSTUCTIONS: There are M availiable cubes on the table. Both players are allowed to remove 1, 2 or K cubes at the same time.') -print('You will set the M & K variables. Since tree prunning has not been implemented, its Minimax after all, we suggest you set M < 20 for the execution to be smooth.') -print('Press q to exit the game.') -print('The player who removes the last cube off the table will be the winner. The first player is the PC. Good luck!') -print('+'*126) +print("+" * 126) +print( + "INSTUCTIONS: There are M availiable cubes on the table. Both players are allowed to remove 1, 2 or K cubes at the same time." +) +print( + "You will set the M & K variables. Since tree prunning has not been implemented, its Minimax after all, we suggest you set M < 20 for the execution to be smooth." 
+) +print("Press q to exit the game.") +print( + "The player who removes the last cube off the table will be the winner. The first player is the PC. Good luck!" +) +print("+" * 126) MAX = +1 MIN = -1 -M = validateM('Please insert an initial number of cubes (M) availiable on the table: ') # M = state/depth/remainingCubes -K = validateK('Please insert an integer K, 2 < K < M, that will act as the 3rd option for the ammount of cubes both players can get off the table: ') +M = validateM( + "Please insert an initial number of cubes (M) availiable on the table: " +) # M = state/depth/remainingCubes +K = validateK( + "Please insert an integer K, 2 < K < M, that will act as the 3rd option for the ammount of cubes both players can get off the table: " +) choices = [1, 2, K] -print(f'\nThe game begins with {M} cubes availiable on the table and each player can pick 1, 2 ή {K}:') -while(M > 0): +print( + f"\nThe game begins with {M} cubes availiable on the table and each player can pick 1, 2 ή {K}:" +) +while M > 0: # ===== PC's turn ===== - print('Please wait for the PC to make its mind..') + print("Please wait for the PC to make its mind..") score, move = MiniMax(M, MAX) M = M - choices[move] - print(f'\nPc chose to remove {choices[move]} {plural(choices[move])} off the table. Remaining cubes are {M}.') - if((gameOver(M, MAX))): break # Game over check + print( + f"\nPc chose to remove {choices[move]} {plural(choices[move])} off the table. Remaining cubes are {M}." + ) + if gameOver(M, MAX): + break # Game over check # ===== Παίζει ο χρήστης ===== - else: - userChoice = validateInput(f'\nHow many cubes would you like to pick up (1, 2 ή {K}): ') + else: + userChoice = validateInput( + f"\nHow many cubes would you like to pick up (1, 2 ή {K}): " + ) # In valid the game goes on. In any other case it gets stacked on the validation function till a proper input is given. - M = M - int(userChoice) - print(f'\nYou chose to remove {userChoice} {plural(int(userChoice))} from the table. Remaining cubes are {M}.') - if((gameOver(M, MIN))): break # Game over check. - \ No newline at end of file + M = M - int(userChoice) + print( + f"\nYou chose to remove {userChoice} {plural(int(userChoice))} from the table. Remaining cubes are {M}." + ) + if gameOver(M, MIN): + break # Game over check. From ca661e40e3fce124f26e44b983d2383c1d3288a9 Mon Sep 17 00:00:00 2001 From: Dhruv Goel <83488876+Dhruvgoel3829@users.noreply.github.com> Date: Wed, 2 Oct 2024 19:31:18 +0530 Subject: [PATCH 11/12] deleted failing directories --- .../MonteCarloTreeSearch (MCTS)/README.md | 16 --- .../monte_carlo_tree_search.py | 135 ------------------ .../MonteCarloTreeSearch (MCTS)/tictactoe.py | 130 ----------------- 3 files changed, 281 deletions(-) delete mode 100644 maths/Game Theory/MonteCarloTreeSearch (MCTS)/README.md delete mode 100644 maths/Game Theory/MonteCarloTreeSearch (MCTS)/monte_carlo_tree_search.py delete mode 100644 maths/Game Theory/MonteCarloTreeSearch (MCTS)/tictactoe.py diff --git a/maths/Game Theory/MonteCarloTreeSearch (MCTS)/README.md b/maths/Game Theory/MonteCarloTreeSearch (MCTS)/README.md deleted file mode 100644 index 3af1d8c95fda..000000000000 --- a/maths/Game Theory/MonteCarloTreeSearch (MCTS)/README.md +++ /dev/null @@ -1,16 +0,0 @@ - -# Monte Carlo Tree Search (MCTS) - -A heuristic search algorithm used for decision-making processes, particularly in games like Go. - -The focus of MCTS is on the analysis of the most promising moves, expanding the search tree based on random sampling of the search space. 
The application of Monte Carlo tree search in games is based on many playouts, also called roll-outs. In each playout, the game is played out to the very end by selecting moves at random. The final game result of each playout is then used to weight the nodes in the game tree so that better nodes are more likely to be chosen in future playouts. - -#### In the monte_carlo_tree_search.py file there is a minimal implementation of Monte Carlo tree search (MCTS) in Python 3 - -#### In the tictactoe.py file tere is an example implementation of the abstract Node class for use in MCTS - - -## Acknowledgements - - - [Original Author](https://gist.github.com/qpwo/c538c6f73727e254fdc7fab81024f6e1) - - [Wiki](https://en.wikipedia.org/wiki/Monte_Carlo_tree_search) diff --git a/maths/Game Theory/MonteCarloTreeSearch (MCTS)/monte_carlo_tree_search.py b/maths/Game Theory/MonteCarloTreeSearch (MCTS)/monte_carlo_tree_search.py deleted file mode 100644 index 28a23f347a28..000000000000 --- a/maths/Game Theory/MonteCarloTreeSearch (MCTS)/monte_carlo_tree_search.py +++ /dev/null @@ -1,135 +0,0 @@ -""" -A minimal implementation of Monte Carlo tree search (MCTS) in Python 3 -Luke Harold Miles, July 2019, Public Domain Dedication -See also https://en.wikipedia.org/wiki/Monte_Carlo_tree_search -https://gist.github.com/qpwo/c538c6f73727e254fdc7fab81024f6e1 -""" - -from abc import ABC, abstractmethod -from collections import defaultdict -import math - - -class MCTS: - "Monte Carlo tree searcher. First rollout the tree then choose a move." - - def __init__(self, exploration_weight=1): - self.Q = defaultdict(int) # total reward of each node - self.N = defaultdict(int) # total visit count for each node - self.children = dict() # children of each node - self.exploration_weight = exploration_weight - - def choose(self, node): - "Choose the best successor of node. (Choose a move in the game)" - if node.is_terminal(): - raise RuntimeError(f"choose called on terminal node {node}") - - if node not in self.children: - return node.find_random_child() - - def score(n): - if self.N[n] == 0: - return float("-inf") # avoid unseen moves - return self.Q[n] / self.N[n] # average reward - - return max(self.children[node], key=score) - - def do_rollout(self, node): - "Make the tree one layer better. 
(Train for one iteration.)" - path = self._select(node) - leaf = path[-1] - self._expand(leaf) - reward = self._simulate(leaf) - self._backpropagate(path, reward) - - def _select(self, node): - "Find an unexplored descendent of `node`" - path = [] - while True: - path.append(node) - if node not in self.children or not self.children[node]: - # node is either unexplored or terminal - return path - unexplored = self.children[node] - self.children.keys() - if unexplored: - n = unexplored.pop() - path.append(n) - return path - node = self._uct_select(node) # descend a layer deeper - - def _expand(self, node): - "Update the `children` dict with the children of `node`" - if node in self.children: - return # already expanded - self.children[node] = node.find_children() - - def _simulate(self, node): - "Returns the reward for a random simulation (to completion) of `node`" - invert_reward = True - while True: - if node.is_terminal(): - reward = node.reward() - return 1 - reward if invert_reward else reward - node = node.find_random_child() - invert_reward = not invert_reward - - def _backpropagate(self, path, reward): - "Send the reward back up to the ancestors of the leaf" - for node in reversed(path): - self.N[node] += 1 - self.Q[node] += reward - reward = 1 - reward # 1 for me is 0 for my enemy, and vice versa - - def _uct_select(self, node): - "Select a child of node, balancing exploration & exploitation" - - # All children of node should already be expanded: - assert all(n in self.children for n in self.children[node]) - - log_N_vertex = math.log(self.N[node]) - - def uct(n): - "Upper confidence bound for trees" - return self.Q[n] / self.N[n] + self.exploration_weight * math.sqrt( - log_N_vertex / self.N[n] - ) - - return max(self.children[node], key=uct) - - -class Node(ABC): - """ - A representation of a single board state. - MCTS works by constructing a tree of these Nodes. - Could be e.g. a chess or checkers board state. - """ - - @abstractmethod - def find_children(self): - "All possible successors of this board state" - return set() - - @abstractmethod - def find_random_child(self): - "Random successor of this board state (for more efficient simulation)" - return None - - @abstractmethod - def is_terminal(self): - "Returns True if the node has no children" - return True - - @abstractmethod - def reward(self): - "Assumes `self` is terminal node. 1=win, 0=loss, .5=tie, etc" - return 0 - - @abstractmethod - def __hash__(self): - "Nodes must be hashable" - return 123456789 - - @abstractmethod - def __eq__(node1, node2): - "Nodes must be comparable" - return True diff --git a/maths/Game Theory/MonteCarloTreeSearch (MCTS)/tictactoe.py b/maths/Game Theory/MonteCarloTreeSearch (MCTS)/tictactoe.py deleted file mode 100644 index d16d602714d0..000000000000 --- a/maths/Game Theory/MonteCarloTreeSearch (MCTS)/tictactoe.py +++ /dev/null @@ -1,130 +0,0 @@ -""" -An example implementation of the abstract Node class for use in MCTS - -If you run this file then you can play against the computer. - -A tic-tac-toe board is represented as a tuple of 9 values, each either None, -True, or False, respectively meaning 'empty', 'X', and 'O'. 
- -The board is indexed by row: -0 1 2 -3 4 5 -6 7 8 - -For example, this game board -O - X -O X - -X - - -corrresponds to this tuple: -(False, None, True, False, True, None, True, None, None) -""" - -from collections import namedtuple -from random import choice -from monte_carlo_tree_search import MCTS, Node - -_TTTB = namedtuple("TicTacToeBoard", "tup turn winner terminal") - - -# Inheriting from a namedtuple is convenient because it makes the class -# immutable and predefines __init__, __repr__, __hash__, __eq__, and others -class TicTacToeBoard(_TTTB, Node): - def find_children(board): - if board.terminal: # If the game is finished then no moves can be made - return set() - # Otherwise, you can make a move in each of the empty spots - return { - board.make_move(i) for i, value in enumerate(board.tup) if value is None - } - - def find_random_child(board): - if board.terminal: - return None # If the game is finished then no moves can be made - empty_spots = [i for i, value in enumerate(board.tup) if value is None] - return board.make_move(choice(empty_spots)) - - def reward(board): - if not board.terminal: - raise RuntimeError(f"reward called on nonterminal board {board}") - if board.winner is board.turn: - # It's your turn and you've already won. Should be impossible. - raise RuntimeError(f"reward called on unreachable board {board}") - if board.turn is (not board.winner): - return 0 # Your opponent has just won. Bad. - if board.winner is None: - return 0.5 # Board is a tie - # The winner is neither True, False, nor None - raise RuntimeError(f"board has unknown winner type {board.winner}") - - def is_terminal(board): - return board.terminal - - def make_move(board, index): - tup = board.tup[:index] + (board.turn,) + board.tup[index + 1 :] - turn = not board.turn - winner = _find_winner(tup) - is_terminal = (winner is not None) or not any(v is None for v in tup) - return TicTacToeBoard(tup, turn, winner, is_terminal) - - def to_pretty_string(board): - to_char = lambda v: ("X" if v is True else ("O" if v is False else " ")) - rows = [ - [to_char(board.tup[3 * row + col]) for col in range(3)] for row in range(3) - ] - return ( - "\n 1 2 3\n" - + "\n".join(str(i + 1) + " " + " ".join(row) for i, row in enumerate(rows)) - + "\n" - ) - - -def play_game(): - tree = MCTS() - board = new_tic_tac_toe_board() - print(board.to_pretty_string()) - while True: - row_col = input("enter row,col: ") - row, col = map(int, row_col.split(",")) - index = 3 * (row - 1) + (col - 1) - if board.tup[index] is not None: - raise RuntimeError("Invalid move") - board = board.make_move(index) - print(board.to_pretty_string()) - if board.terminal: - break - # You can train as you go, or only at the beginning. - # Here, we train as we go, doing fifty rollouts each turn. 
- for _ in range(50): - tree.do_rollout(board) - board = tree.choose(board) - print(board.to_pretty_string()) - if board.terminal: - break - - -def _winning_combos(): - for start in range(0, 9, 3): # three in a row - yield (start, start + 1, start + 2) - for start in range(3): # three in a column - yield (start, start + 3, start + 6) - yield (0, 4, 8) # down-right diagonal - yield (2, 4, 6) # down-left diagonal - - -def _find_winner(tup): - "Returns None if no winner, True if X wins, False if O wins" - for i1, i2, i3 in _winning_combos(): - v1, v2, v3 = tup[i1], tup[i2], tup[i3] - if False is v1 is v2 is v3: - return False - if True is v1 is v2 is v3: - return True - return None - - -def new_tic_tac_toe_board(): - return TicTacToeBoard(tup=(None,) * 9, turn=True, winner=None, terminal=False) - - -if __name__ == "__main__": - play_game() From ad651d47a3c520ef966528dae1b116aac2670127 Mon Sep 17 00:00:00 2001 From: Dhruv Goel <83488876+Dhruvgoel3829@users.noreply.github.com> Date: Wed, 2 Oct 2024 19:31:33 +0530 Subject: [PATCH 12/12] Delete maths/Game Theory/AlphaBetaPruning directory --- .../AlphaBetaPruning/alphabetapruning.py | 218 ------------------ maths/Game Theory/AlphaBetaPruning/readme.md | 14 -- 2 files changed, 232 deletions(-) delete mode 100644 maths/Game Theory/AlphaBetaPruning/alphabetapruning.py delete mode 100644 maths/Game Theory/AlphaBetaPruning/readme.md diff --git a/maths/Game Theory/AlphaBetaPruning/alphabetapruning.py b/maths/Game Theory/AlphaBetaPruning/alphabetapruning.py deleted file mode 100644 index b817347b32d8..000000000000 --- a/maths/Game Theory/AlphaBetaPruning/alphabetapruning.py +++ /dev/null @@ -1,218 +0,0 @@ -from random import choice -from math import inf - -board = [[0, 0, 0], [0, 0, 0], [0, 0, 0]] - - -def Gameboard(board): - chars = {1: "X", -1: "O", 0: " "} - for x in board: - for y in x: - ch = chars[y] - print(f"| {ch} |", end="") - print("\n" + "---------------") - print("===============") - - -def Clearboard(board): - for x, row in enumerate(board): - for y, col in enumerate(row): - board[x][y] = 0 - - -def winningPlayer(board, player): - conditions = [ - [board[0][0], board[0][1], board[0][2]], - [board[1][0], board[1][1], board[1][2]], - [board[2][0], board[2][1], board[2][2]], - [board[0][0], board[1][0], board[2][0]], - [board[0][1], board[1][1], board[2][1]], - [board[0][2], board[1][2], board[2][2]], - [board[0][0], board[1][1], board[2][2]], - [board[0][2], board[1][1], board[2][0]], - ] - - if [player, player, player] in conditions: - return True - - return False - - -def gameWon(board): - return winningPlayer(board, 1) or winningPlayer(board, -1) - - -def printResult(board): - if winningPlayer(board, 1): - print("X has won! " + "\n") - - elif winningPlayer(board, -1): - print("O's have won! " + "\n") - - else: - print("Draw" + "\n") - - -def blanks(board): - blank = [] - for x, row in enumerate(board): - for y, col in enumerate(row): - if board[x][y] == 0: - blank.append([x, y]) - - return blank - - -def boardFull(board): - if len(blanks(board)) == 0: - return True - return False - - -def setMove(board, x, y, player): - board[x][y] = player - - -def playerMove(board): - e = True - moves = { - 1: [0, 0], - 2: [0, 1], - 3: [0, 2], - 4: [1, 0], - 5: [1, 1], - 6: [1, 2], - 7: [2, 0], - 8: [2, 1], - 9: [2, 2], - } - while e: - try: - move = int(input("Enter a number between 1-9: ")) - if move < 1 or move > 9: - print("Invalid Move! Try again!") - elif not (moves[move] in blanks(board)): - print("Invalid Move! 
Try again!") - else: - setMove(board, moves[move][0], moves[move][1], 1) - Gameboard(board) - e = False - except (KeyError, ValueError): - print("Enter a number!") - - -def getScore(board): - if winningPlayer(board, 1): - return 10 - - elif winningPlayer(board, -1): - return -10 - - else: - return 0 - - -def abminimax(board, depth, alpha, beta, player): - row = -1 - col = -1 - if depth == 0 or gameWon(board): - return [row, col, getScore(board)] - - else: - for cell in blanks(board): - setMove(board, cell[0], cell[1], player) - score = abminimax(board, depth - 1, alpha, beta, -player) - if player == 1: - # X is always the max player - if score[2] > alpha: - alpha = score[2] - row = cell[0] - col = cell[1] - - else: - if score[2] < beta: - beta = score[2] - row = cell[0] - col = cell[1] - - setMove(board, cell[0], cell[1], 0) - - if alpha >= beta: - break - - if player == 1: - return [row, col, alpha] - - else: - return [row, col, beta] - - -def o_comp(board): - if len(blanks(board)) == 9: - x = choice([0, 1, 2]) - y = choice([0, 1, 2]) - setMove(board, x, y, -1) - Gameboard(board) - - else: - result = abminimax(board, len(blanks(board)), -inf, inf, -1) - setMove(board, result[0], result[1], -1) - Gameboard(board) - - -def x_comp(board): - if len(blanks(board)) == 9: - x = choice([0, 1, 2]) - y = choice([0, 1, 2]) - setMove(board, x, y, 1) - Gameboard(board) - - else: - result = abminimax(board, len(blanks(board)), -inf, inf, 1) - setMove(board, result[0], result[1], 1) - Gameboard(board) - - -def makeMove(board, player, mode): - if mode == 1: - if player == 1: - playerMove(board) - - else: - o_comp(board) - else: - if player == 1: - o_comp(board) - else: - x_comp(board) - - -def pvc(): - while True: - try: - order = int(input("Enter to play 1st or 2nd: ")) - if not (order == 1 or order == 2): - print("Please pick 1 or 2") - else: - break - except (KeyError, ValueError): - print("Enter a number") - - Clearboard(board) - if order == 2: - currentPlayer = -1 - else: - currentPlayer = 1 - - while not (boardFull(board) or gameWon(board)): - makeMove(board, currentPlayer, 1) - currentPlayer *= -1 - - printResult(board) - - -# Driver Code -print("=================================================") -print("TIC-TAC-TOE using MINIMAX with ALPHA-BETA Pruning") -print("=================================================") -pvc() diff --git a/maths/Game Theory/AlphaBetaPruning/readme.md b/maths/Game Theory/AlphaBetaPruning/readme.md deleted file mode 100644 index b5a728105449..000000000000 --- a/maths/Game Theory/AlphaBetaPruning/readme.md +++ /dev/null @@ -1,14 +0,0 @@ -# Alpha-Beta Pruning - -An optimization technique for the minimax algorithm that reduces the number of nodes evaluated by eliminating branches that won't affect the final decision (basically an upgrade of minimax algorithm) - -As we have seen in the minimax search algorithm that the number of game states it has to examine are exponential in depth of the tree. Since we cannot eliminate the exponent, but we can cut it to half. Hence there is a technique by which without checking each node of the game tree we can compute the correct minimax decision, and this technique is called pruning. This involves two threshold parameter Alpha and beta for future expansion, so it is called alpha-beta pruning. It is also called as Alpha-Beta Algorithm. Alpha-beta pruning can be applied at any depth of a tree, and sometimes it not only prunes the tree leaves but also entire sub-tree. The two-parameter can be defined as: - -1. 
Alpha: The best (highest-value) choice we have found so far at any point along the path of Maximizer. The initial value of alpha is -∞. -2. Beta: The best (lowest-value) choice we have found so far at any point along the path of Minimizer. The initial value of beta is +∞. The Alpha-beta pruning to a standard minimax algorithm returns the same move as the standard algorithm does, but it removes all the nodes which are not really affecting the final decision but making algorithm slow. Hence by pruning these nodes, it makes the algorithm fast. -## Acknowledgements - - - [Original Author](https://github.com/anmolchandelCO180309) - - [Wiki](https://en.wikipedia.org/wiki/Alpha%E2%80%93beta_pruning) - -#### /// The alphabetapruning.py file has a Tic-Tac-Toe game implemented with a good explanation ///