{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Deep Otello AI\n", "\n", "The game reversi is a very good game to apply deep learning methods to.\n", "\n", "Othello also known as reversi is a board game first published in 1883 by eiter Lewis Waterman or John W. Mollet in England (each one was denouncing the other as fraud).\n", "It is a strickt turn based zero-sum game with a clear Markov chain and now hidden states like in card games with an unknown distribution of cards or unknown player allegiance.\n", "There is like for the game go only one set of stones with two colors which is much easier to abstract than chess with its 6 unique pieces.\n", "The game has a symmetrical game board wich allows to play with rotating the state around an axis to allow for a breaking of sequences or interesting ANN architectures, quadruple the data generation by simulation or interesting test cases where a symetry in turns should be observable if the AI reaches an \"objective\" policy." ] }, { "cell_type": "markdown", "source": [ "\n", "## Content\n", "\n", "* [The game rules](#the-game-rules) A short overview over the rules of the game.\n", "* [Some common Otello strategies](#some-common-otello-strategies) introduces some easy approaches to a classic Otello AI and defines some behavioral expectations.\n", "* [Initial design decisions](#initial-design-decisions) an explanation about some initial design decision and assumptions\n", "* [Imports and dependencies](#imports-and-dependencies) explains what libraries where used" ], "metadata": { "collapsed": false } }, { "cell_type": "markdown", "source": [ "\n", "## The game rules\n", "\n", "Othello is played on a board with 8 x 8 fields for two player.\n", "The board geometry is equal to a chess game.\n", "The game is played with game stones that are black on one siede and white on the other.\n", "![Othello game board example](reversi_example.png)\n", "The player take turns.\n", "A player places a stone with his or her color up on the game board.\n", "The player can only place stones when he surrounds a number of stones with the opponents color with the new stone and already placed stones of his color.\n", "Those surrounded stones can either be horizontally, vertically and/or diagonally be placed.\n", "All stones thus surrounded will be flipped to be of the players color.\n", "Turns are only possible if the player is also changing the color of the opponents stones. If a player can't act he is skipped.\n", "The game ends if both players can't act. The player with the most stones wins.\n", "If the score is counted in detail unclaimed fields go to the player with more stones of his or her color on the board.\n", "The game begins with four stones places in the center of the game. Each player gets two. 
They are placed diagonally to each other.\n", "\n", "\n", "![Startaufstellung.png](Startaufstellung.png)\n", "\n", "## Some common Othello strategies\n", "\n", "As can easily be understood, the placement of stones on the board is always a careful balance of attack and defence.\n", "If a player occupies huge homogeneous stretches of the board, they can be attacked more easily.\n", "The board's corners provide safety: territory held from a corner is impossible to lose. But since a corner can only be reached if the enemy is forced to allow it, or accepts the cost of giving the player a stable base, it is difficult to obtain.\n", "There are some texts on Othello computer strategies which implement greedy algorithms for Reversi based on a modified score for each field.\n", "These values are score modifiers for a traditional greedy algorithm.\n", "If a player's stone has captured such a field, the score reached is multiplied by the modifier.\n", "The total score is the score reached by the player minus the score of the enemy.\n", "The modifiers change in the course of the game and converge towards one. This gives some indication of what to expect from an Othello AI.\n", "\n", "![ComputerPossitionScore](computer-score.png)\n" ], "metadata": { "collapsed": false } }, { "cell_type": "markdown", "source": [ "## Initial design decisions\n", "\n", "At the beginning of this project I made some design decisions.\n", "The first one was that I did not want to use a gym library, because it limits the accessible data formats.\n", "I chose to implement the whole game as entries in a stack of numpy arrays, to make interfacing with a neural network easier and to be able to use scipy pattern recognition tools to implement some game mechanics for a fast simulation cycle.\n", "I chose to ignore player colors as far as I could; instead a player perspective was used, which allows changing the perspective by flipping the sign (multiplying with -1).\n", "The array format should also allow for data multiplication, or the breaking of strict sequences, by flipping the game along one of the four axes (horizontal, vertical, transpose along both diagonals); a short sketch of these array manipulations is given in the board-creation section below.\n", "\n", "I wanted to implement different agents as classes that act on those game stacks.\n", "\n", "Since computation time is critical, all computationally heavy results are saved.\n", "The analysis of those results can then be repeated in real time. If a recalculation of such a section is required, the save file can be deleted and the code executed again." ], "metadata": { "collapsed": false } }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "\n", "%load_ext blackcellmagic" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Imports and dependencies\n", "\n", "The following direct dependencies were used for this project:\n", "```toml\n", "jupyter = \"^1.0.0\"\n", "matplotlib = \"^3.6.3\"\n", "numpy = \"^1.24.1\"\n", "pytest = \"^7.2.1\"\n", "python = \"3.10.*\"\n", "scipy = \"^1.10.0\"\n", "tqdm = \"^4.64.1\"\n", "jupyterlab = \"^3.6.1\"\n", "torchvision = \"^0.14.1\"\n", "torchaudio = \"^0.13.1\"\n", "```\n", "* `Jupyter` and `jupyterlab` in PyCharm were used as the IDE; IPython was used to implement this code.\n", "* `matplotlib` was used for visualisation and statistics.\n", "* `numpy` was used for array support and mathematical functions.\n", "* `tqdm` was used for progress bars.\n", "* `scipy` contains fast pattern recognition tools for images.
It was used to make an initial estimation about where possible turns should be.\n", "* `torch` supplied the ANN functionalities." ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "import itertools\n", "import numpy as np\n", "import abc\n", "from typing import Final\n", "from scipy.ndimage import binary_dilation\n", "import matplotlib.pyplot as plt\n", "from abc import ABC" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Constants\n", "\n", "Some general constants needed to be defined. Such as board game size and Player and Enemy representations. Also, directional offsets and the initial placement of blocks." ] }, { "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [], "source": [ "BOARD_SIZE: Final[int] = 8 # defines the board side length as 8\n", "PLAYER: Final[int] = 1 # defines the number symbolising the player as 1\n", "ENEMY: Final[int] = -1 # defines the number symbolising the enemy as -1\n", "EXAMPLE_STACK_SIZE: Final[int] = 1000 # defines the game stack size for examples\n", "IMPOSSIBLE: Final[np.ndarray] = np.array([-1, -1], dtype=int)\n", "IMPOSSIBLE.setflags(write=False)\n", "SIMULATE_TURNS: Final[int] = 70" ] }, { "cell_type": "markdown", "source": [ "The directions array contains all the numerical offsets needed to move along one of the 8 directions in a 2 dimensional grid. This will allow an iteration over the game board.\n", "![8-directions.png](8-directions.png \"Offset in 8 directions\")" ], "metadata": { "collapsed": false } }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "data": { "text/plain": "array([[-1, -1],\n [-1, 0],\n [-1, 1],\n [ 0, -1],\n [ 0, 1],\n [ 1, -1],\n [ 1, 0],\n [ 1, 1]])" }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "DIRECTIONS: Final[np.ndarray] = np.array(\n", " [[i, j] for i in range(-1, 2) for j in range(-1, 2) if j != 0 or i != 0],\n", " dtype=int,\n", ")\n", "DIRECTIONS.setflags(write=False)\n", "DIRECTIONS" ] }, { "cell_type": "markdown", "source": [ "Another constant needed is the initial start square at the center of the board." ], "metadata": { "collapsed": false } }, { "cell_type": "code", "execution_count": 5, "outputs": [ { "data": { "text/plain": "array([[-1, 1],\n [ 1, -1]])" }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "START_SQUARE: Final[np.ndarray] = np.array(\n", " [[ENEMY, PLAYER], [PLAYER, ENEMY]], dtype=int\n", ")\n", "START_SQUARE.setflags(write=False)\n", "START_SQUARE" ], "metadata": { "collapsed": false } }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Creating new boards\n", "\n", "The first function implemented and tested is a function to generate the starting environment as a stack of games.\n", "As described above I simply placed a 2 by 2 square in the center of an empty stack of boards." 
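, "\n", "Because the boards are plain numpy arrays, the perspective change and the symmetry augmentation mentioned in the design decisions reduce to a few vectorised operations. The following is only a sketch; `boards` stands for any n x 8 x 8 stack such as the one produced by `get_new_games` below.\n", "\n", "```python\n", "import numpy as np\n", "\n", "boards = np.zeros((4, 8, 8), dtype=int)       # placeholder stack of four boards\n", "enemy_view = boards * -1                      # swap player and enemy perspective\n", "flipped_h = np.flip(boards, axis=2)           # mirror along the vertical axis\n", "flipped_v = np.flip(boards, axis=1)           # mirror along the horizontal axis\n", "transposed = np.transpose(boards, (0, 2, 1))  # mirror along the main diagonal\n", "```"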
] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "data": { "text/plain": "array([[ 0, 0, 0, 0, 0, 0, 0, 0],\n [ 0, 0, 0, 0, 0, 0, 0, 0],\n [ 0, 0, 0, 0, 0, 0, 0, 0],\n [ 0, 0, 0, -1, 1, 0, 0, 0],\n [ 0, 0, 0, 1, -1, 0, 0, 0],\n [ 0, 0, 0, 0, 0, 0, 0, 0],\n [ 0, 0, 0, 0, 0, 0, 0, 0],\n [ 0, 0, 0, 0, 0, 0, 0, 0]])" }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "def get_new_games(number_of_games: int) -> np.ndarray:\n", " \"\"\"Generates a stack of initialised game boards.\n", "\n", " Args:\n", " number_of_games: The size of the board stack.\n", "\n", " Returns: The generates stack of games as a stack n x 8 x 8.\n", "\n", " \"\"\"\n", " empty = np.zeros([number_of_games, BOARD_SIZE, BOARD_SIZE], dtype=int)\n", " empty[:, 3:5, 3:5] = START_SQUARE\n", " return empty\n", "\n", "\n", "get_new_games(1)[0]" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "test_number_of_games = 3\n", "assert get_new_games(test_number_of_games).shape == (\n", " test_number_of_games,\n", " BOARD_SIZE,\n", " BOARD_SIZE,\n", ")\n", "np.testing.assert_equal(\n", " get_new_games(test_number_of_games).sum(axis=1),\n", " np.zeros(\n", " [\n", " test_number_of_games,\n", " 8,\n", " ]\n", " ),\n", ")\n", "np.testing.assert_equal(\n", " get_new_games(test_number_of_games).sum(axis=2),\n", " np.zeros(\n", " [\n", " test_number_of_games,\n", " 8,\n", " ]\n", " ),\n", ")\n", "assert np.all(get_new_games(test_number_of_games)[:, 3:4, 3:4] != 0)\n", "del test_number_of_games" ] }, { "cell_type": "markdown", "source": [ "## Visualisation tools\n", "\n", "In this section a visualisation help was implemented for debugging of the game and a proper display of the results.\n", "For this visualisation ChatGPT was used as a prompted code generator that was later reviewed and refactored by hand to integrate seamlessly into the project as a whole.\n", "White stones represent the player, black stones the enemy. A single plot can be used as a subplot when the `ax` argument is used." ], "metadata": { "collapsed": false } }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "data": { "text/plain": "
", "image/png": "iVBORw0KGgoAAAANSUhEUgAAASIAAAEiCAYAAABdvt+2AAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjYuMywgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/P9b71AAAACXBIWXMAAA9hAAAPYQGoP6dpAAAdq0lEQVR4nO3de3BU5f0G8OckG1dIsivEYFizQEIsMcHwA0RLMyhBQQKh0HFocUIBRcslAuq0lWBtaRUWx7ZDBQwUwqVDuNkR6jByEeRSUS5BodyChktZSCKMhV0S6prsnt8fx90mkE32bPacN8l5PjNndHfP2e/7kvDw7tnzvkeSZVkGEZFAUaIbQETEICIi4RhERCQcg4iIhGMQEZFwDCIiEo5BRETCMYiISDiT3gV9Ph8qKioQHx8PSZL0Lk9EOpFlGTdv3oTNZkNUVNNjHt2DqKKiAna7Xe+yRCSI0+lEcnJyk/voHkTx8fGB/+/YVd/at6oAyAAkoGOScWqLrs++i6ktuv6tSuW/9f/OB6N7EPk/jnXsCoyv0Ld2STJQcwWItQH5l41TW3R99t2YfV9rU8IolFMwPFlNRMIxiIhIOAYREQnHICIi4RhERCQcg4iIhGMQEZFwDCIiEk51EO3fvx+jRo2CzWaDJEnYsmWLBs0iIiNRHUQ1NTXo06cPlixZokV7iMiAVE/xyM3NRW5urhZtISKD0nyumcfjgcfjCTx2u91alySiNkbzk9UOhwNWqzWwcQkQIrqd5kFUWFgIl8sV2JxOp9YliaiN0fyjmdlshtls1roMEbVhvI6IiIRTPSKqrq5GeXl54PGFCxdw7NgxdO7cGd26dYto44jIGFQHUWlpKXJycgKPX3nlFQDAxIkTsXr16og1jIiMQ3UQDR48GLIsa9EWIjIoniMiIuEYREQkHIOIiIRjEBGRcAwiIhKOQUREwjGIiEg4BhERCSfJOl+d6Ha7YbVaAUm5H7eeblUCsg+QooCOXY1TW3R99t2Yfa+pACADLpcLFoulyX3FBRERGUIoQaT5MiBBcURkmPrsuzH77h8RhUJYEHVMAvIv61uzJBmouaL8QIxUW3R99t2YfV9rU4IwFDxZTUTCMYiISDgGEREJxyAiIuEYREQkHIOIiIRjEBGRcAwiIhJOVRA5HA4MGDAA8fHx6NKlC8aMGYOzZ89q1TYiMghVQbRv3z4UFBTg4MGD+Oijj1BbW4thw4ahpqZGq/YRkQGomuKxffv2Bo9Xr16NLl264OjRo3jsscci2jAiMo4WzTVzuVwAgM6dOwfdx+PxwOPxBB673e6WlCSidijsk9U+nw8vvfQSsrOz0bt376D7ORwOWK3WwGa328MtSUTtVNhBVFBQgJMnT2LDhg1N7ldYWAiXyxXYnE5nuCWJqJ0K66PZiy++iK1bt2L//v1ITk5ucl+z2Qyz2RxW44jIGFQFkSzLmDFjBjZv3oy9e/ciJSVFq3YRkYGoCqKCggKsW7cO//jHPxAfH4+qqioAgNVqRYcOHTRpIBG1f6rOERUVFcHlcmHw4MHo2rVrYNu4caNW7SMiA1D90YyIKNI414yIhGMQEZFwDCIiEo5BRETCMYiISDgGEREJxyAiIuEYREQknCTrfJWi2+2G1WoFJCDWpmdl5T7csg+QopR7gRultuj67Lsx+15TAUBW1i2zWCxN7isuiIjIEEIJohat0NgiHBEZpj77bsy++0dEoRAWRB2TgPzL+tYsSQZqrig/ECPVFl2ffTdm39falCAMBU9WE5FwDCIiEo5BRETCMYiISDgGEREJxyAiIuEYREQknOrF87OysmCxWGCxWDBw4EBs27ZNq7YRkUGoCqLk5GQsWLAAR48eRWlpKYYMGYLRo0fj1KlTWrWPiAxA1ZXVo0aNavB43rx5KCoqwsGDB5GZmRnRhhGRcYQ9xcPr9eK9995DTU0NBg4cGMk2EZHBqA6iEydOYODAgfj2228RFxeHzZs3IyMjI+j+Ho8HHo8n8NjtdofXUiJqt1R/a9arVy8cO3YMhw4dwrRp0zBx4kScPn066P4OhwNWqzWw2e32FjWYiNof1UF01113IS0tDf3794fD4UCfPn3wl7/8Jej+hYWFcLlcgc3pdLaowUTU/rR4GRCfz9fgo9ftzGYzzGZzS8sQUTumKogKCwuRm5uLbt264ebNm1i3bh327t2LHTt2aNU+IjIAVUF09epVTJgwAZWVlbBarcjKysKOHTswdOhQrdpHRAagKoiKi4u1agcRGRjnmhGRcAwiIhKOQUREwjGIiEg4BhERCccgIiLhGEREJByDiIiEk2RZDvHu1JHhdrthtVoBCYi16VmZ90Bn39l3PdVUAJABl8sFi8XS5L7igoiIDCGUIGrx7PuwcURkmPrsuzH77h8RhUJYEHVMAvIv61uzJBmouaL8QIxUW3R99t2YfV9rU4IwFOJGRNRmmBGLRKTBBDPq4ME1lMODGl1q11YDrnLA5wGizIA1DYiJ06U06YhBRI3qigfxGKaiN0YgEamQ6n3BKsOHaziPk/gQ+7EUlTgT0drXTwOnlwLODwH3eTQc3kuAJRWwjwAypgKdgi+XTm0Ig4gaSEAP5GMZMjEMXtQiGjF37CMhCl2QhscxDUMwE6ewEyWYgm9wsUW13ReAf04BrnwESCZArmtkJxlwnwNOFwGnFgH3DwUGLQMsKS0qTYLxOiIKyMZkzMVppCMHABoNofr8r6cjB3NxCtmYHHbtshXAexlAxR7lcaMhVI//9Yo9ynFlK8IuTa0Ag4gAALmYgwlYgRjc3WwA3S4aMYhBB0zACuRijuran88D9r8AeL9tPoBuJ9cpx+1/QXkfapsYRIRsTMYYKH+LJUhhvYf/uDGYh2w8F/JxZSuA0t+EVfIOpb8ByriIaJvEIDK4BPTAOCyCHOoFH82QIWMcFiEBPZrd130BODAjImUDDryovC+1LQwig8vHMkTDFPZI6HYSJEQjBvlY1uy+/5wC+FR+FGuOr055X2pbWhRECxYsgCRJeOmllyLUHNJTVzyITAxTfU6oOdGIQSaGIQnpQfe5flr5dkztOaHmyHXK+16P7BUFpLGwg+jIkSNYtmwZsrKyItke0tFjmAovajV5by9q8TimBX399FLlK3otSCbl631qO8IKourqauTn52P58uXo1KlTpNtEOumNEREfDflFIwa9kRv0deeHkR8N+cl1gHObNu9N2ggriAoKCjBy5Eg8+eSTkW4P6cSMOCQiVdMaiegJM2LveP67m99fMa0h9zllegi1DaoHxxs2bMDnn3+OI0eOhLS/x+OBx+MJPHa73WpLkgYS0bPBtA0tSIhCItIAHG/wvPscQp6VHTZZmaN27/9pXIciQtVvotPpxKxZs1BSUoK77747pGMcDgesVmtgs9vtYTWUIssEs7A6Pk8jO2pArzrUcqqC6OjRo7h69Sr69esHk8kEk8mEffv24Z133oHJZILX673jmMLCQrhcrsDmdDoj1ngKXx30+VvaWJ0ofTJQtzrUcqo+mj3xxBM4ce
JEg+eeffZZpKen49VXX0V0dPQdx5jNZpjN/I1oba6hHDJ8mn48U2bpl9/xvDUNgARtP55J39ehNkFVEMXHx6N3794NnouNjUVCQsIdz1Pr5kENruE8ukC7v63XcK7RdYti4pSlPNznNCsNS0+uW9SW8MpqAzuJDzW9jugkgn+Hbh+h7XVE9uBXDlAr1OJfhb1790agGSTCfizFEMzU5L2jEYN9CH5VYcZUZT0hLch1QEbwaympFeKIyMAqcQansDPioyIvanEKO1GFsqD7dMpQFjWL9KhIMinv2+nByL4vaYtBZHAlmAIvaiM6+96LWpSg+Zmng5YBUREOoiiT8r7UtjCIDO4bXMQGzIzo7PsNmBHSsrGWFCA7wh/Pshdz2di2iEFEOIBibMFrABD2yMh/3BbMwQGsDPm49OeBh98Mq+QdBswD0sNfrZYE4uL5BADYhvlw42uMwyJEw6RqMqwXtfCiFhswQ1UI+fV7Deh4n7JImq9O3WRYyaR8HMtezBBqyzgiooADKMZcZKAMygr2zZ3E9r9ehj2Yi8ywQsgv/Xlg7GnApqzb3+xJbP/rthzlOIZQ28YRETXwDS7iHTxV775muXdMkFWumD6Hk9iGfShq8tsxNSwpwMid9e5rtq2RCbKScrGiPVf5ip7fjrUPDCJqVCXOYCNmYSNm6X6n104ZQPY7yv/zTq/GIMmyrPWCDA243W5YrVZAAmJtelZW7sMt+wApSrkXuFFqi67Pvhuz7zUVUJZjcblgsVia3FdcEBGRIYQSROI+mnFEZJj67Lsx++4fEYVCWBB1TALyL+tbsyQZqLmi/ECMVFt0ffbdmH1fa1OCMBQ8WU3NEnnCWO8T5SQGg4gaFfgK/cPvF7q//Sv0VGUpj4ypyrdckfS/SwdGIBGpjVw6cB4n8SH2YykqwRuYtQcMImrAfUG5U+qVj5SLBhu9yllWru85XaQs5XH/UGWiaUvneCWgB/KxDJkYBi9qG726W0IUuiANj2MahmAmTmEnSjAlpLlt1HrxymoKKFsBvJcBVCgXVjc71cL/esUe5biyFeHXzsZkzMVppEO5tLq5KSb+19ORg7k4hWzw0uq2jEFEAIDP5wH7XwC836q/8aFcpxy3/wXlfdTKxRxMwArE4G7VN3yMRgxi0AETsAK5mKO+OLUKDCJC2Qqg9DeRea/S3wBlxaHvn43JGAMlvcJdisR/3BjMQzaeC+s9SCwGkcG5Lyiz3iPpwIvK+zYnAT0wDosiuijbOCxCAnpE5P1IPwwig/vnFGXpjUjy1Snv25x8LEM0TBFdlC0aMcgHl2hsa1QF0dy5cyFJUoMtPT1dq7aRxq6fVr4dU3tOqDlynfK+15v4Zr0rHkQmhqk+J9ScaMQgE8OQBP5etiWqR0SZmZmorKwMbJ988okW7SIdnF6q7S19Tge/iQcew1RNb2X0OHgbj7ZE9a+hyWRCUlKSFm0hnTk/jPxoyE+uU9YTCqY3RkR8NOQXjRj0Ri42YpYm70+Rp3pE9NVXX8FmsyE1NRX5+fm4dOmSFu0ijX138/srpjXkPqdMD7mdGXFIRKqmtRPRE2bEalqDIkdVED366KNYvXo1tm/fjqKiIly4cAGDBg3CzZs3gx7j8XjgdrsbbCTeHSsfakFW5qjd7vYVH7UgIQqJGt5OmyJL1Uez3Nz/3cc3KysLjz76KLp3745NmzZh8uTGr2x1OBz4/e9/37JWUsT5POLqmGDWpbZedajlWvTP0j333IMf/OAHKC9v5J+97xUWFsLlcgU2p9PZkpIUIVE6/R1trE4d9ElBvepQy7UoiKqrq3Hu3Dl07Rp8xSWz2QyLxdJgI/GsaUCELt8JTvq+zm2uoRwyfJqWVmbpB/8HkloXVUH0y1/+Evv27cPFixfx6aef4ic/+Qmio6PxzDPPaNU+0khMnLKUh5YsPRtft8iDGlyDtmfKr+Ec1y1qQ1QF0eXLl/HMM8+gV69e+OlPf4qEhAQcPHgQiYmJWrWPNGQfoe11RPbc4K+fxIeaXkd0Ek1cO0Ctjqpfww0bNmjVDhIgY6qynpAW5DrlvmPB7MdSDMFMTWpHIwb70MTVlNTqcK6ZgXXKUBY1i/SoSDIp79vUzQ8rcQansDPioyIvanEKOyN200fSB4PI4AYtU+4dH0lRJuV9m1OCKfCiNqKz772oRQlCmHFLrQqDyOAsKUB2hD+eZS8ObdnYb3ARGzAzorPvN2AGl41tgxhEhPTngYffjMx7DZgHpKtYtfUAirEFrwFA2CMj/3FbMAcHsDKs9yCxuHg+AQD6vQZ0vE9ZJM1Xp24yrGRSPo5lL1YXQn7bMB9ufI1xWIRomFRNhvWiFl7UYgNmMITaMI6IKCD9eWDsacCmrF/f7Els/+u2HOW4cELI7wCKMRcZKIOycn9zJ7H9r5dhD+YikyHUxnFERA1YUoCRO+vd12xbIxNkJeViRXuu8hV9U9+OqfENLuIdPFXvvma5d0yQVa6YPoeT2IZ9KOK3Y+0Eg4ga1SkDyH5H+X+97/RaiTPYiFnYiFm806tBSLIsa70YRANutxtWqxWQgFibnpWV+3DLPkCKUu4FbpTaouuz78bse00FlKVgXK5m55iKCyIiMoRQgkjcRzOOiAxTn303Zt/9I6JQCAuijklA/mV9a5YkAzVXlB+IkWqLrs++G7Pva21KEIaCX98TkXAMIiISjkFERMIxiIhIOAYREQnHICIi4RhERCQcg4iIhFMdRFeuXMH48eORkJCADh064KGHHkJpaakWbSMig1B1ZfX169eRnZ2NnJwcbNu2DYmJifjqq6/QqVMnrdpHRAagKojeeust2O12rFq1KvBcSkoIixMTETVB1UezDz74AA8//DDGjh2LLl26oG/fvli+fHmTx3g8Hrjd7gYbEVF9qoLo/PnzKCoqwgMPPIAdO3Zg2rRpmDlzJtasWRP0GIfDAavVGtjsdnuLG01E7YuqIPL5fOjXrx/mz5+Pvn374he/+AVeeOEFLF26NOgxhYWFcLlcgc3pdLa40UTUvqgKoq5duyIjI6PBcw8++CAuXboU9Biz2QyLxdJgIyKqT1UQZWdn4+zZsw2e+/LLL9G9e/eINoqIjEVVEL388ss4ePAg5s+fj/Lycqxbtw5//etfUVBQoFX7iMgAVAXRgAEDsHnzZqxfvx69e/fGG2+8gYULFyI/P1+r9hGRAaheKjYvLw95eXlatIWIDIpzzYhIOAYREQnHICIi4RhERCQcg4iIhGMQEZFwDCIiEo5BRETCSbIsy3oWdLvdsFqtgATE2vSsrNyHW/YBUpRyL3Cj1BZdn303Zt9rKgDIgMvlanayu7ggIiJDCCWIVE/xiBiOiAxTn303Zt/9I6JQCAuijklA/mV9a5YkAzVXlB+IkWqLrs++G7Pva21KEIaCJ6uJSDgGEREJxyAiIuEYREQkHIOIiIRjEBGRcAwiIhKOQUREwqkKoh49ekCSpDs23k6IiFpC1ZXVR44cgdfrDTw+efIkhg4dirFjx0a8YURkHKqCKDExscHjBQsWoGfPnnj88ccj2igiM
paw55p99913WLt2LV555RVIkhR0P4/HA4/HE3jsdrvDLUlE7VTYJ6u3bNmCGzduYNKkSU3u53A4YLVaA5vdbg+3JBG1U2EHUXFxMXJzc2GzNb2WR2FhIVwuV2BzOp3hliSidiqsj2b//ve/sWvXLrz//vvN7ms2m2E2m8MpQ0QGEdaIaNWqVejSpQtGjhwZ6fYQkQGpDiKfz4dVq1Zh4sSJMJnELfBIRO2H6iDatWsXLl26hOeee06L9hCRAake0gwbNgw6r7dPRO0c55oRkXAMIiISjkFERMIxiIhIOAYREQnHICIi4RhERCScJOt8UZDb7YbVagUkILbp+bIRx3ugs+/su35qKgDIgMvlgsViaXJfcUFERIYQShCJmyzGEZFh6rPvxuy7f0QUCmFB1DEJyL+sb82SZKDmivIDMVJt0fXZd2P2fa1NCcJQ8GQ1EQnHICIi4RhERCQcg4iIhGMQEZFwDCIiEo5BRETCMYiISDhVQeT1evH6668jJSUFHTp0QM+ePfHGG29wDWsiahFVV1a/9dZbKCoqwpo1a5CZmYnS0lI8++yzsFqtmDlzplZtJKJ2TlUQffrppxg9enTgxoo9evTA+vXrcfjwYU0aR0TGoOqj2Y9+9CPs3r0bX375JQDg+PHj+OSTT5Cbm6tJ44jIGFSNiGbPng2324309HRER0fD6/Vi3rx5yM/PD3qMx+OBx+MJPHa73eG3lojaJVUjok2bNqGkpATr1q3D559/jjVr1uCPf/wj1qxZE/QYh8MBq9Ua2Ox2e4sbTUTti6og+tWvfoXZs2dj3LhxeOihh/Dzn/8cL7/8MhwOR9BjCgsL4XK5ApvT6Wxxo4mofVH10ezWrVuIimqYXdHR0fD5fEGPMZvNMJvN4bWOiAxBVRCNGjUK8+bNQ7du3ZCZmYkvvvgCf/7zn/Hcc89p1T4iMgBVQbRo0SK8/vrrmD59Oq5evQqbzYYpU6bgt7/9rVbtIyIDUBVE8fHxWLhwIRYuXKhRc4jIiDjXjIiEYxARkXAMIiISjkFERMIxiIhIOAYREQnHICIi4RhERCScJOu8zqvL5cI999wDQLkft55uVQGQAUhAxyTj1BZdn30XU1t0ff9972/cuAGr1drkvroH0eXLl7kUCJGBOJ1OJCcnN7mP7kHk8/lQUVGB+Ph4SJKk6li32w273Q6n0wmLxaJRC1tnffbdeLVF129pbVmWcfPmTdhstjtW7bidqrlmkRAVFdVsOjbHYrEI+aVoDfXZd+PVFl2/JbWb+0jmx5PVRCQcg4iIhGtTQWQ2m/G73/1O2IqPIuuz78arLbq+nrV1P1lNRHS7NjUiIqL2iUFERMIxiIhIOAYREQnXpoLos88+Q3R0NEaOHKlbzUmTJkGSpMCWkJCA4cOH41//+pdubaiqqsKMGTOQmpoKs9kMu92OUaNGYffu3ZrWrd/3mJgY3HfffRg6dChWrlzZ5L3stKhffxs+fLjmtZuqX15ernntqqoqzJo1C2lpabj77rtx3333ITs7G0VFRbh165ZmdSdNmoQxY8bc8fzevXshSRJu3LihSd02FUTFxcWYMWMG9u/fj4qKCt3qDh8+HJWVlaisrMTu3bthMpmQl5enS+2LFy+if//++Pjjj/H222/jxIkT2L59O3JyclBQUKB5fX/fL168iG3btiEnJwezZs1CXl4e6urqdKtff1u/fr3mdZuqn5KSomnN8+fPo2/fvti5cyfmz5+PL774Ap999hl+/etfY+vWrdi1a5em9UXQfYpHuKqrq7Fx40aUlpaiqqoKq1evxpw5c3SpbTabkZSkTF1OSkrC7NmzMWjQIFy7dg2JiYma1p4+fTokScLhw4cRGxsbeD4zM1OXG1vW7/v999+Pfv364Yc//CGeeOIJrF69Gs8//7xu9UUQUX/69OkwmUwoLS1t8DNPTU3F6NGj0R6vuGkzI6JNmzYhPT0dvXr1wvjx47Fy5UohP5Dq6mqsXbsWaWlpSEhI0LTWf/7zH2zfvh0FBQUNfiH9/Mup6G3IkCHo06cP3n//fSH127NvvvkGO3fuDPozB6B6snhb0GaCqLi4GOPHjwegDJddLhf27dunS+2tW7ciLi4OcXFxiI+PxwcffICNGzc2O6O4pcrLyyHLMtLT0zWtE4709HRcvHhR8zr1/+z92/z58zWvG6z+2LFjNa3n/5n36tWrwfP33ntvoA2vvvqqpm1o7M88NzdX05pt4qPZ2bNncfjwYWzevBkAYDKZ8LOf/QzFxcUYPHiw5vVzcnJQVFQEALh+/Treffdd5Obm4vDhw+jevbtmdVvzEFyWZV3+Za7/Z+/XuXNnzesGqx9slKK1w4cPw+fzIT8/Hx6PR9Najf2ZHzp0KDAQ0EKbCKLi4mLU1dXBZrMFnpNlGWazGYsXLw55qYFwxcbGIi0tLfB4xYoVsFqtWL58Od58803N6j7wwAOQJAllZWWa1QjXmTNnND9pC9z5Z683veunpaVBkiScPXu2wfOpqakAgA4dOmjehsb6fPnyZU1rtvqPZnV1dfjb3/6GP/3pTzh27FhgO378OGw2m67foPhJkoSoqCj897//1bRO586d8dRTT2HJkiWoqam543Wtvkptzscff4wTJ07g6aefFlK/PUtISMDQoUOxePHiRn/m7VWrHxFt3boV169fx+TJk+8Y+Tz99NMoLi7G1KlTNW2Dx+NBVVUVAOWj2eLFi1FdXY1Ro0ZpWhcAlixZguzsbDzyyCP4wx/+gKysLNTV1eGjjz5CUVERzpw5o2l9f9+9Xi++/vprbN++HQ6HA3l5eZgwYYKmtevXr89kMuHee+/VvLYo7777LrKzs/Hwww9j7ty5yMrKQlRUFI4cOYKysjL0799fdBMjT27l8vLy5BEjRjT62qFDh2QA8vHjxzWrP3HiRBnK8uMyADk+Pl4eMGCA/Pe//12zmrerqKiQCwoK5O7du8t33XWXfP/998s//vGP5T179mhat37fTSaTnJiYKD/55JPyypUrZa/Xq2nt2+vX33r16qV5bX/90aNH61LrdhUVFfKLL74op6SkyDExMXJcXJz8yCOPyG+//bZcU1OjWd1gfd6zZ48MQL5+/bomdbkMCBEJ1+rPERFR+8cgIiLhGEREJByDiIiEYxARkXAMIiISjkFERMIxiIhIOAYREQnHICIi4RhERCQcg4iIhPt/kWo4zMTZT44AAAAASUVORK5CYII=\n" }, "metadata": {}, "output_type": "display_data" } ], "source": [ "def plot_othello_board(board, ax=None) -> None:\n", " \"\"\"Plots a single otello board.\n", "\n", " If a matplot axis object is given the board will be plotted into that axis. 
If not an axis object will be generated.\n", " The image generated will be shown directly.\n", "\n", " Args:\n", " board: The bord that should be plotted. Only a single games is allowed. A numpy array of the form 8x8 is expected.\n", " ax: If needed a matplotlib axis object can be defined that is used to place the board as a sublot into a bigger context.\n", " \"\"\"\n", " assert board.shape == (8, 8)\n", " plot_all = False\n", " if ax is None:\n", " fig_size = 3\n", " plot_all = True\n", " fig, ax = plt.subplots(figsize=(fig_size, fig_size))\n", "\n", " ax.set_facecolor(\"#66FF00\")\n", " for x_pos, y_pos in itertools.product(range(BOARD_SIZE), range(BOARD_SIZE)):\n", " if board[x_pos, y_pos] == -1:\n", " color = \"white\"\n", " elif board[x_pos, y_pos] == 1:\n", " color = \"black\"\n", " else:\n", " continue\n", " ax.scatter(y_pos, x_pos, s=300 if plot_all else 150, c=color)\n", " for x_pos in range(-1, 8):\n", " ax.axhline(x_pos + 0.5, color=\"black\", lw=2)\n", " ax.axvline(x_pos + 0.5, color=\"black\", lw=2)\n", " ax.set_xlim(-0.5, 7.5)\n", " ax.set_ylim(7.5, -0.5)\n", " ax.set_xticks(np.arange(8))\n", " ax.set_xticklabels(list(\"ABCDEFGH\"))\n", " ax.set_yticks(np.arange(8))\n", " ax.set_yticklabels(list(\"12345678\"))\n", " if plot_all:\n", " plt.tight_layout()\n", " plt.show()\n", "\n", "\n", "plot_othello_board(get_new_games(1)[0])" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [], "source": [ "def plot_othello_boards(boards: np.ndarray) -> None:\n", " \"\"\"Plots multiple boards into subplots.\n", "\n", " The plots are shown directly.\n", "\n", " Args:\n", " boards: Plots the boards given into subplots. The maximum number of boards accepted is 70.\n", " \"\"\"\n", " assert len(boards.shape) == 3\n", " assert boards.shape[1:] == (BOARD_SIZE, BOARD_SIZE)\n", " assert boards.shape[0] < 70\n", "\n", " plots_per_row = 4\n", " rows = int(np.ceil(boards.shape[0] / plots_per_row))\n", " fig, axs = plt.subplots(rows, plots_per_row, figsize=(12, 3 * rows))\n", " for game_index, ax in enumerate(axs.flatten()):\n", " if game_index >= boards.shape[0]:\n", " fig.delaxes(ax)\n", " else:\n", " plot_othello_board(boards[game_index], ax)\n", " plt.tight_layout()\n", " plt.show()" ] }, { "cell_type": "markdown", "source": [ "## Find possible actions to take\n", "\n", "The frist step in the implementation of an AI like this is to get an overview over the possible actions that can be taken in a situation.\n", "Here was the design choice taken to first find fields that are empty and have at least one neighbouring enemy stone.\n", "This was implemented with element wise check for a stone and a binary dilation marking all fields neighboring an enemy stone.\n", "For that the `SURROUNDING` mask was used. Both aries are then element wise combined using and.\n", "The resulting array contains all filed where a turn could potentially be made. Those are then check in detail.\n", "The previous element wise operations on the numpy array increase the spead for this operation dramatically.\n", "\n", "The check for a possible turn is done in detail by following each direction step by step as long as there are enemy stones in that direction.\n", "If the board end is reached or en empty filed before reaching a field occupied by the player that direction does not surround enemy stones.\n", "If one direction surrounds enemy stone a turn is possible.\n", "This detailed step is implemented as a recursion and need to go at leas one step to return True." 
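, "\n", "Condensed, the pre-filter described above is a single vectorised expression. The snippet below is only a sketch with small placeholder inputs; the actual `SURROUNDING` mask, the detailed recursive check and the full `get_possible_turns` implementation follow in the next cells.\n", "\n", "```python\n", "import numpy as np\n", "from scipy.ndimage import binary_dilation\n", "\n", "ENEMY = -1\n", "SURROUNDING = np.ones((1, 3, 3), dtype=int)\n", "SURROUNDING[0, 1, 1] = 0                     # a field is not its own neighbour\n", "\n", "boards = np.zeros((1, 8, 8), dtype=int)      # placeholder one-game stack\n", "boards[:, 3:5, 3:5] = [[ENEMY, 1], [1, ENEMY]]\n", "\n", "# empty fields that touch at least one enemy stone are candidates for a turn\n", "candidates = (boards == 0) & binary_dilation(boards == ENEMY, SURROUNDING)\n", "```"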
], "metadata": { "collapsed": false } }, { "cell_type": "code", "execution_count": 10, "metadata": { "tags": [] }, "outputs": [ { "data": { "text/plain": "array([[[1, 1, 1],\n [1, 0, 1],\n [1, 1, 1]]])" }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "SURROUNDING: Final = np.array(\n", " [[[1, 1, 1], [1, 0, 1], [1, 1, 1]]]\n", ") # defines the binary dilation mask to check if a field is next to an enemy stones\n", "SURROUNDING" ] }, { "cell_type": "code", "execution_count": 23, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "9.43 ms ± 1 ms per loop (mean ± std. dev. of 7 runs, 100 loops each)\n", "1 s ± 179 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\n" ] }, { "data": { "text/plain": "array([[[False, False, False, False, False, False, False, False],\n [False, False, False, False, False, False, False, False],\n [False, False, False, True, False, False, False, False],\n [False, False, True, False, False, False, False, False],\n [False, False, False, False, False, True, False, False],\n [False, False, False, False, True, False, False, False],\n [False, False, False, False, False, False, False, False],\n [False, False, False, False, False, False, False, False]]])" }, "execution_count": 23, "metadata": {}, "output_type": "execute_result" } ], "source": [ "def _recursive_steps(board, rec_direction, rec_position, step_one=True) -> bool:\n", " \"\"\"Check if a player can place a stone on the board specified in the direction specified and direction specified.\n", "\n", " Args:\n", " board: The board that should be checked for a playable action.\n", " rec_direction: The direction that should be checked.\n", " rec_position: The position that should be checked.\n", " step_one: Defines if the call of this function is the firs or not. 
Should be kept to the default value for proper functionality.\n", "\n", " Returns:\n", " True if a turn is possible for possition and direction on the board defined.\n", " \"\"\"\n", " rec_position = rec_position + rec_direction\n", " if np.any((rec_position >= BOARD_SIZE) | (rec_position < 0)):\n", " return False\n", " next_field = board[tuple(rec_position.tolist())]\n", " if next_field == 0:\n", " return False\n", " if next_field == -1:\n", " return _recursive_steps(board, rec_direction, rec_position, step_one=False)\n", " if next_field == 1:\n", " return not step_one\n", "\n", "\n", "def get_possible_turns(boards: np.ndarray) -> np.ndarray:\n", " \"\"\"Analyses a stack of boards.\n", "\n", " Args:\n", " boards: A stack of boards to check.\n", "\n", " Returns:\n", " A stack of game boards containing boolean values showing where turns are possible for the player.\n", " \"\"\"\n", " assert len(boards.shape) == 3, \"The number fo input dimensions does not fit.\"\n", " assert boards.shape[1:] == (\n", " BOARD_SIZE,\n", " BOARD_SIZE,\n", " ), \"The input dimensions do not fit.\"\n", "\n", " _poss_turns = boards == 0 # checks where fields are empty.\n", " _poss_turns &= binary_dilation(\n", " boards == -1, SURROUNDING\n", " ) # checks where fields are next to an enemy filed an empty\n", " for game, idx, idy in itertools.product(\n", " range(boards.shape[0]), range(BOARD_SIZE), range(BOARD_SIZE)\n", " ):\n", " position = idx, idy\n", " if _poss_turns[game, idx, idy]:\n", " _poss_turns[game, idx, idy] = any(\n", " _recursive_steps(boards[game, :, :], direction, position)\n", " for direction in DIRECTIONS\n", " )\n", " return _poss_turns\n", "\n", "\n", "# some simple testing to ensure the function works after simple changes\n", "# this testing is complete, its more of a smoke-test\n", "test_array = get_new_games(3)\n", "expected_result = np.zeros_like(test_array, dtype=bool)\n", "expected_result[:, 4, 5] = expected_result[:, 2, 3] = True\n", "expected_result[:, 5, 4] = expected_result[:, 3, 2] = True\n", "np.testing.assert_equal(get_possible_turns(test_array), expected_result)\n", "\n", "\n", "%timeit get_possible_turns(get_new_games(10)) # checks turn possibility evaluation time for 10 initial games\n", "%timeit get_possible_turns(get_new_games(EXAMPLE_STACK_SIZE)) # check turn possibility evaluation time for EXAMPLE_STACK_SIZE initial games\n", "\n", "# shows a singe game\n", "get_possible_turns(get_new_games(3))[:1]" ] }, { "cell_type": "markdown", "source": [ "Besides the ability to generate an array of possible turns there needs to be a functions that check if a given turn is possible.\n", "On is needed for the action space validation. The other is for validating a players turn." ], "metadata": { "collapsed": false } }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def move_possible(board: np.ndarray, move: np.ndarray) -> bool:\n", " \"\"\"Checks if a turn is possible.\n", "\n", " Checks if a turn is possible. If no turn is possible to input array [-1, -1] is expected.\n", "\n", " Args:\n", " board: A board where it should be checkt if a turn is possible.\n", " move: The move that should be taken. Expected is the index of the filed where a stone should be placed [x, y]. 
If no placement is possible [-1, -1] is expected as an input.\n", "\n", " Returns:\n", " True if the move is possible\n", " \"\"\"\n", " if np.all(move == -1):\n", " return not np.any(get_possible_turns(np.reshape(board, (1, 8, 8))))\n", " return any(\n", " _recursive_steps(board[:, :], direction, move) for direction in DIRECTIONS\n", " )\n", "\n", "\n", "# Some testing for this function and the underlying recursive functions that are called.\n", "assert move_possible(get_new_games(1)[0], np.array([2, 3])) is True\n", "assert move_possible(get_new_games(1)[0], np.array([3, 2])) is True\n", "assert move_possible(get_new_games(1)[0], np.array([2, 2])) is False\n", "assert move_possible(np.zeros((8, 8)), np.array([3, 2])) is False\n", "assert move_possible(np.ones((8, 8)) * 1, np.array([-1, -1])) is True\n", "assert move_possible(np.ones((8, 8)) * -1, np.array([-1, -1])) is True\n", "assert move_possible(np.ones((8, 8)) * 0, np.array([-1, -1])) is True" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def moves_possible(boards: np.ndarray, moves: np.ndarray) -> np.ndarray:\n", " \"\"\"Checks if a stack of moves can be executed on a stack of boards.\n", "\n", " Args:\n", " boards: A board where the next stone should be placed.\n", " moves: A stack stones to be placed. Each move is formatted as an array in the form of [x, y] if no turn is possible the value [-1, -1] is expected.\n", "\n", " Returns:\n", " An array marking for each and every game and move in the stack if the move can be executed.\n", " \"\"\"\n", " arr_moves_possible = np.zeros(boards.shape[0], dtype=bool)\n", " for game in range(boards.shape[0]):\n", " if np.all(\n", " moves[game] == -1\n", " ): # can be all or any. All should be faster since most times neither value will be -1.\n", " arr_moves_possible[game] = not np.any(\n", " get_possible_turns(np.reshape(boards[game], (1, 8, 8)))\n", " )\n", " else:\n", " arr_moves_possible[game] = any(\n", " _recursive_steps(boards[game, :, :], direction, moves[game])\n", " for direction in DIRECTIONS\n", " )\n", " return arr_moves_possible\n", "\n", "\n", "np.testing.assert_array_equal(\n", " moves_possible(np.ones((3, 8, 8)) * 1, np.array([[-1, -1]] * 3)),\n", " np.array([True] * 3),\n", ")\n", "\n", "np.testing.assert_array_equal(\n", " moves_possible(get_new_games(3), np.array([[2, 3], [3, 2], [3, 2]])),\n", " np.array([True] * 3),\n", ")\n", "np.testing.assert_array_equal(\n", " moves_possible(get_new_games(3), np.array([[2, 2], [1, 1], [0, 0]])),\n", " np.array([False] * 3),\n", ")\n", "np.testing.assert_array_equal(\n", " moves_possible(np.ones((3, 8, 8)) * -1, np.array([[-1, -1]] * 3)),\n", " np.array([True] * 3),\n", ")\n", "np.testing.assert_array_equal(\n", " moves_possible(np.zeros((3, 8, 8)), np.array([[-1, -1]] * 3)),\n", " np.array([True] * 3),\n", ")" ] }, { "cell_type": "markdown", "source": [ "## Reword functions\n", "\n", "For any kind of reinforcement learning is a reword function needed.\n", "For otello this would be the final score, the information who won or changes to the score.\n", "A combination of those three would also be possible.\n", "It is probably not be possible to weight the current score to high in a reword function since that would be to close to a classic greedy algorithm.\n", "But some direct influence would increase the learning speed.\n", "In the next section are all three reword functions implemented to be combined and weight later on as needed." 
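, "\n", "As an illustration of how such a combination could look later on, here is a minimal sketch. The helper name and the weight values are placeholders, not tuned choices; the actual reward functions are implemented in the next cell.\n", "\n", "```python\n", "import numpy as np\n", "\n", "def combined_reward(boards: np.ndarray, w_win: float = 1.0, w_score: float = 0.1) -> np.ndarray:\n", "    # weighted sum of the who-won signal and the raw stone difference per game\n", "    stone_diff = np.sum(boards, axis=(1, 2))   # positive if the player leads\n", "    who_won = np.sign(stone_diff)              # 1 win, -1 loss, 0 draw\n", "    return w_win * who_won + w_score * stone_diff\n", "```"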
], "metadata": { "collapsed": false } }, { "cell_type": "code", "execution_count": 24, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "177 µs ± 3.97 µs per loop (mean ± std. dev. of 7 runs, 10,000 loops each)\n", "29.7 µs ± 106 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)\n", "31.2 µs ± 269 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)\n" ] } ], "source": [ "def final_boards_evaluation(boards: np.ndarray) -> np.ndarray:\n", " \"\"\"Evaluates the board at the end of the game.\n", "\n", " All unused fields are added to the score of the player that has more stones with his color up.\n", " This score only applies to the end of the game.\n", " Normally the score is represented by the number of stones each player has.\n", " In this case the score was combined by building the difference.\n", "\n", " Args:\n", " boards: A stack of game bords ot the end of the game.\n", "\n", " Returns:\n", " the combined score for both player.\n", " \"\"\"\n", " score1, score2 = np.sum(boards == 1, axis=(1, 2)), np.sum(boards == -1, axis=(1, 2))\n", " player_1_won = score1 > score2\n", " player_2_won = score1 < score2\n", " score1_final = 64 - score2[player_1_won]\n", " score2_final = 64 - score1[player_2_won]\n", " score1[player_1_won] = score1_final\n", " score2[player_2_won] = score2_final\n", " return score1 - score2\n", "\n", "\n", "def evaluate_boards(boards: np.ndarray) -> np.ndarray:\n", " \"\"\"Counts the stones each player has on the board.\n", "\n", " Args:\n", " boards: A stack of boards for evaluation.\n", "\n", " Returns:\n", " the combined score for both player.\n", " \"\"\"\n", " return np.sum(boards, axis=(1, 2))\n", "\n", "\n", "def evaluate_who_won(boards: np.ndarray) -> np.ndarray:\n", " \"\"\"Checks who won or is winning a game.\n", "\n", " Args:\n", " boards: A stack of boards for evaluation.\n", "\n", " Returns:\n", " The information who won for both player. 1 meaning the player won, -1 means the opponent lost. 0 represents a patt.\n", " \"\"\"\n", " return np.sign(np.sum(boards, axis=(1, 2)))\n", "\n", "\n", "_boards = get_new_games(EXAMPLE_STACK_SIZE)\n", "%timeit final_boards_evaluation(_boards)\n", "%timeit evaluate_boards(_boards)\n", "%timeit evaluate_who_won(_boards)" ], "metadata": { "collapsed": false } }, { "cell_type": "markdown", "source": [ "## Execute a chosen action\n", "\n", "After an evaluation what turns are possible there needs to be a function that executes a turn.\n", "This next sections does that." ], "metadata": { "collapsed": false } }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "class InvalidTurn(ValueError):\n", " \"\"\"\n", " This error is thrown if a given turn is not valid.\n", " \"\"\"" ] }, { "cell_type": "code", "execution_count": 28, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "95.1 ms ± 3.5 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)\n" ] }, { "data": { "text/plain": "
", "image/png": "iVBORw0KGgoAAAANSUhEUgAAASIAAAEiCAYAAABdvt+2AAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjYuMywgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/P9b71AAAACXBIWXMAAA9hAAAPYQGoP6dpAAAdqElEQVR4nO3de3BU5f0/8PdJNi4QsivEYFiyQEIsMcHw5aalGZCoIIFQ7Di0OqGCAgUJF3XaSqy2tAqLY9uhCgYK4dIh3OyIdRi5y6WiXAJCwRA0CGUhpDg27JJQV5I9vz8Ou78EctmzOec8m5z3a+aMbHbP+TwPG9885/YcSZZlGUREAkWJbgAREYOIiIRjEBGRcAwiIhKOQUREwjGIiEg4BhERCccgIiLhLEYX9Pv9qKioQFxcHCRJMro8ERlElmVcv34dDocDUVHNj3kMD6KKigo4nU6jyxKRIG63G0lJSc1+xvAgiouLC/65U3dja9+oBCADkIBOieapLbo++y6mtuj6N64o/63//3xTDA+iwO5Yp+7AxApjaxcnATWXgVgHkHfJPLVF12ffzdn3dQ4ljEI5BMOD1UQkHIOIiIRjEBGRcAwiIhKOQUREwjGIiEg4BhERCccgIiLhVAfRgQMHMG7cODgcDkiShA8++ECHZhGRmagOopqaGvTv3x9Lly7Voz1EZEKqb/HIyclBTk6OHm0hIpPS/V4zn88Hn88XfO31evUuSURtjO4Hq10uF+x2e3DhFCBEdDvdg6igoAAejye4uN1uvUsSURuj+66Z1WqF1WrVuwwRtWG8joiIhFM9IqqurkZ5eXnw9fnz53HixAl07doVPXv21LRxRGQOqoOopKQE2dnZwdcvvfQSAGDSpElYs2aNZg0jIvNQHUQjRoyALMt6tIWITIrHiIhIOAYREQnHICIi4RhERCQcg4iIhGMQEZFwDCIiEo5BRETCSbLBVyd6vV7Y7XZAUp7HbaQbVwDZD0hRQKfu5qktuj77bs6+11QAkAGPxwObzdbsZ8UFERGZQihBpPs0IE3iiMg09dl3c/Y9MCIKhbAg6pQI5F0ytmZxElBzWflCzFRbdH323Zx9X+dQgjAUPFhNRMIxiIhIOAYREQnHICIi4RhERCQcg4iIhGMQEZFwDCIiEk5VELlcLgwZMgRxcXHo1q0bnnjiCZw9e1avthGRSagKov379yM/Px+HDh3Crl27cPPmTYwaNQo1NTV6tY+ITEDVLR7bt29v8HrNmjXo1q0bjh07huHDh2vaMCIyj1bda+bxeAAAXbt2bfIzPp8PPp8v+Nrr9bamJBG1Q2EfrPb7/XjhhReQlZWFfv36Nfk5l8sFu90eXJxOZ7gliaidCjuI8vPzcfr0aWzcuLHZzxUUFMDj8QQXt9sdbkkiaqfC2jWbNWsWtm7digMHDiApKanZz1qtVlit1rAaR0TmoCqIZFnG7NmzsWXLFuzbtw/Jycl6tYuITERVEOXn52P9+vX4xz/+gbi4OFRWVgIA7HY7OnbsqEsDiaj9U3WMqLCwEB6PByNGjED37t2Dy6ZNm/RqHxGZgOpdMyIirfFeMyISjkFERMIxiIhIOAYREQnHICIi4RhERCQcg4iIhGMQEZFwkmzwVYperxd2ux2QgFiHkZWV53DLfkCKUp4Fbpbaouuz7+bse00FAFmZt8xmszX7WXFBRESmEEoQtWqGxlbhiMg09dl3c/Y9MCIKhbAg6pQI5F0ytmZxElBzWflCzFS7tfVvVgOecsDvA6KsgD0ViOlsTG0t8HsXU3+dQwnCUIgbEVFEqyoFSpcB7o8A79do+C+bBNhSAOcYIH0G0CVdVCupvWAQUQPe88A/pwOXdwGSBZBrG/mQDHjPAaWFwBfvAD1GAsOWAzbOk0dh4ul7CipbCbyXDlTsVV43GkL1BN6v2KusV7ZS3/ZR+8UgIgDA8QXAgWlA3XctB9Dt5FplvQPTlO0QqcUgIpStBEpe1WZbJa8CZUXabIvMg0Fkct7zwMHZ2m7z4Cxlu0ShYhCZ3D+nA36Vu2It8dcq2yUKlerJ8zMzM2Gz2WCz2TB06FBs27ZNr7aRzqpKlbNjao8JtUSuVbZbdUbb7VL7pSqIkpKSsGjRIhw7dgwlJSV45JFHMH78eHzxxRd6tY90VLpMOUWvB8minN4nCoWqX8Nx48Y1eL1gwQIUFhbi0KFDyMjI0LRhpD/3R9qPhgLkWsDNwTKFKOx/D+vq6vDee++hpqYGQ4cO1bJNZIDvr9+6YlpH3nPK7SFELVEdRKdOncLQoUPx3XffoXPnztiyZQvS05u+xt/n88Hn8wVfe73e8FpKmvKeQ8g3JIZNVu5RI2qJ6rNmffv2xYkTJ3D48GE8//zzmDRpEkpLS5v8vMvlgt1uDy5Op7NVDSZt+H0tf6Yt1aG2TXUQ3XXXXUhNTcWgQYPgcrnQv39//OUvf2ny8wUFBfB4PMHF7Xa3qsGkjShr+6pDbVurz5n4/f4Gu163s1qtsFr52xhp7KkAJOi7eybdqkPUAlVBVFBQgJycHPTs2RPXr1/H+vXrsW/fPuzYsUOv9pFOYjorU3l4z+lXw9ZH3bxFZF6qgujq1at45plncOXKFdjtdmRmZmLHjh0YOXKkXu0jHTnHKNf66HEKX7IAzhztt0vtk6ogKiri3YztSfoMZT4hPci1QPrz+myb2h/ea2ZiXdKVSc20vrpasijb7XK/ttul9otBZHLDlgNRGgdRlEXZLlGoGEQmZ0sGsjTePctawmljSR0GESFtKjD4DW22NWQBkDZFm22ReTCICAAw8DfA8BVAdAf1x4wki7Le8JXAgFf0aR+1bwwiCkqbCkwoBRzZyuuWAinwviNbWY8jIQoXHydEDdiSgbE76z3XbFsjN8hKysWKzhzlFD3PjlFrMYioUV3Sgay3lT+39kmvRC2RZFnWezKIBrxeL+x2OyABsQ4jK/MZ6Ow7+26kmgooU8F4PLDZbM1+VlwQEZEphBJE4nbNOCIyTX323Zx9D4yIQiEsiDolAnmXjK1ZnATUXFa+EDPVFl2ffTdn39c5lCAMBQ9WU4tEHqzmgXJzYBBRo4Kn7z+6Ncn+7afvU5RpRNJnKGfY2kttEoNBRA14zytPab28S7lgsdG5imTl2qLSQmUakR4jlZtcW3t/mcjaJBavrKagspXAe+lAxV7ldUsTpgXer9irrFe2sm3WJvEYRAQAOL4AODANqPtO/YyNcq2y3oFpynbaUm2KDAwiQtlKoORVbbZV8ipQpmIiT5G1KXIwiEzOex44OFvbbR6cpWw3kmtTZGEQmdw/pwN+jSfP99cq243k2hRZWhVEixYtgiRJeOGFFzRqDhmpqlQ5Q6X1UzzkWmW7VWciszZFnrCD6OjRo1i+fDkyMzO1bA8ZqHSZ9hPnB0gW5RR7JNamyBNWEFVXVyMvLw8rVqxAly5dtG4TGcT9kT7PNAOU7bq3RWZtijxhBVF+fj7Gjh2Lxx57TOv2kEG+v37rqmUdec8pt2hEUm2KTKoHxxs3bsTx48dx9OjRkD
7v8/ng8/mCr71er9qSpIM7Zl3Ug6zcJyay9j3/p3Md0oSqEZHb7cbcuXNRXFyMDh06hLSOy+WC3W4PLk6nM6yGkrb8vpY/o1cdkbUpMqkKomPHjuHq1asYOHAgLBYLLBYL9u/fj7fffhsWiwV1dXV3rFNQUACPxxNc3G63Zo2n8EVZxdURWZsik6pds0cffRSnTp1q8LNnn30WaWlpePnllxEdHX3HOlarFVYrfyMijT0VgAR9d5GkW3UiqDZFJlVBFBcXh379+jX4WWxsLOLj4+/4OUW2mM7KdBrec/rVsPVpfO4gkbUpMvHKahNzjtH3Wh5nTmTWpsjT6l+Fffv2adAMEiF9hjKnjx7kWuWZZ5FYmyIPR0Qm1iVdmVhM65GJZFG229yDF0XWpsjDIDK5YcuBKI3DIMqibDeSa1NkYRCZnC0ZyNJ4FylrSWhTt4qsTZGFQURImwoMfkObbQ1ZAKRNaRu1KXIwiAgAMPA3wPAVQHQH9cdtJIuy3vCVwIBX2lZtigwMIgpKmwpMKAUc2crrlkIh8L4jW1mvNaMRkbVJPD5OiBqwJQNjd9Z7tti2Rm5SlZQLBp05ymlyrc5QiaxNYjGIqFFd0oGst5U/G/20VZG1SQxJlmW9J2RowOv1wm63AxIQ6zCysvIcbtkPSFHKs8DNUlt0ffbdnH2vqYAyHYvHA5vN1uxnxQUREZlCKEEkbteMIyLT1Gffzdn3wIgoFMKCqFMikHfJ2JrFSUDNZeULMVNt0fXZd3P2fZ1DCcJQ8GA1tUjkAWMrYpGAVFhgRS18+Abl8KHGmOJkGAYRNSp4Cv2jWxPd334KPUWZyiN9hnKWS0vdcT+GYwb6YQwSkAKp3uVuMvz4Bl/jND7CASzDFfABZu0Bg4ga8J5XnpR6eZdy0WCjj/yRlet7SguVqTx6jFRuNG3tPV7x6I08LEcGRqEONxGNmDs+IyEK3ZCKh/E8HsEcfIGdKMZ0fIsLrStOQvHKagoqWwm8lw5U7FVet/TcscD7FXuV9cpWhl87C1MwH6VIg3JpdWMhVF/g/TRkYz6+QBZ4aXVbxiAiAMDxBcCBaUDdd+offCjXKusdmKZsR60cvIJnsBIx6NBiAN0uGjGIQUc8g5XIAW82a6sYRISylUDJq9psq+RVoKwo9M9nYQqegJJeEqSwagbWewILkIXnwtoGicUgMjnveeDgbG23eXCWst2WxKM3nsI7kDV6nIcMGU/hHcSjtybbI+MwiEzun9MBv8bPoPfXKtttSR6WIxqWsEdCt5MgIRoxyAOnaGxrVAXR/PnzIUlSgyUtLU2vtpHOqkqVs2Nqjwm1RK5VtlvVzJn17rgfGRil+phQS6IRgwyMQiL4e9mWqB4RZWRk4MqVK8Hlk08+0aNdZIDSZfo+0qe0sOn3h2MG6nBTl9p1uImHwcd4tCWqfw0tFgsSExP1aAsZzP2R9qOhALlWmU+oKf0wRvPRUEA0YtAPOdiEubpsn7SnekT01VdfweFwICUlBXl5ebh48aIe7SKdfX/91hXTOvKeU24PuZ0VnZGAFF1rJ6APrIjVtQZpR1UQPfTQQ1izZg22b9+OwsJCnD9/HsOGDcP169ebXMfn88Hr9TZYSLw7Zj7Ug6zco3a7BPRpcNuGHiREIQGputYg7ajaNcvJ+f/P8c3MzMRDDz2EXr16YfPmzZgypfErW10uF37/+9+3rpWkOb9PXB0LrIbUNqoOtV6r/lm6++678YMf/ADl5Y38s3dLQUEBPB5PcHG73a0pSRqJMuj/0cbq1MKYFDSqDrVeq4Kouroa586dQ/fuTc+4ZLVaYbPZGiwknj0V0OjynaZJt+rc5huUQ4Zf19LKXfpN/wNJkUVVEP3yl7/E/v37ceHCBXz66af4yU9+gujoaDz99NN6tY90EtNZmcpDT7Y+jc9b5EMNvoG+R8q/wTnOW9SGqAqiS5cu4emnn0bfvn3x05/+FPHx8Th06BASEhL0ah/pyDlG3+uInDlNv38aH+l6HdFpNHPtAEUcVb+GGzdu1KsdJED6DGU+IT3Itcpzx5pyAMvwCOboUjsaMdiPZq6mpIjDe81MrEu6MqmZ1qMiyaJst7mHH17BGXyBnZqPiupwE19gJypRpul2SV8MIpMbthyI0jiIoizKdltSjOmow01N776vw00UI4Q7bimiMIhMzpYMZGm8e5a1JLRpY7/FBWzEHE3vvt+I2Zw2tg1iEBHSpgKD39BmW0MWAGkqZm09iCJ8gN8AQNgjo8B6H+AVHMSqsLZBYnHyfAIADPwN0OleZZI0f626m2Eli7I7lrVEXQgFbMNCePEfPIV3EA2Lqpth63ATdbiJjZjNEGrDOCKioLSpwIRSwKHMX9/iQezA+45sZb1wQijgIIowH+kogzJzf0sHsQPvl2Ev5iODIdTGcUREDdiSgbE76z3XbFsjN8hKysWKzhzlFH1zZ8fU+BYX8DYer/dcs5w7bpBVrpg+h9PYhv0o5NmxdoJBRI3qkg5kva382egnvV7BGWzCXGzCXD7p1SQkWZb1ngyiAa/XC7vdDkhArMPIyspzuGU/IEUpzwI3S23R9dl3c/a9pgLKVDAeT4v3mIoLIiIyhVCCSNyuGUdEpqnPvpuz74ERUSiEBVGnRCDvkrE1i5OAmsvKF2Km2qLrs+/m7Ps6hxKEoeDpeyISjkFERMIxiIhIOAYREQnHICIi4RhERCQcg4iIhGMQEZFwqoPo8uXLmDhxIuLj49GxY0c88MADKCkp0aNtRGQSqq6srqqqQlZWFrKzs7Ft2zYkJCTgq6++QpcuXfRqHxGZgKogevPNN+F0OrF69ergz5KTQ5icmIioGap2zT788EMMHjwYEyZMQLdu3TBgwACsWLGi2XV8Ph+8Xm+DhYioPlVB9PXXX6OwsBD33XcfduzYgeeffx5z5szB2rVrm1zH5XLBbrcHF6fT2epGE1H7oiqI/H4/Bg4ciIULF2LAgAH4xS9+gWnTpmHZsmVNrlNQUACPxxNc3G53qxtNRO2LqiDq3r070tPTG/zs/vvvx8WLF5tcx2q1wmazNViIiOpTFURZWVk4e/Zsg599+eWX6NWrl6aNIiJzURVEL774Ig4dOoSFCxeivLwc69evx1//+lfk5+fr1T4iMgFVQTRkyBBs2bIFGzZsQL9+/fD6669j8eLFyMvL06t9RGQCqqeKzc3NRW5urh5tISKT4r1mRCQcg4iIhGMQEZFwDCIiEo5BRETCMYiISDgGEREJxyAiIuEkWZZlIwt6vV7Y7XZAAmIdRlZWnsMt+wEpSnkWuFlqi67Pvpuz7zUVAGTA4/G0eLO7uCAiIlMIJYhU3+KhGY6ITFOffTdn3wMjolAIC6JOiUDeJWNrFicBNZeVL8RMtUXXZ9/N2fd1DiUIQ8GD1UQkHIOIiIRjEBGRcAwiIhKOQUREwjGIiEg4BhERCccgIiLhVAVR7969IUnSHQsfJ0REraHqyuqjR4+irq4u+Pr06dMYOXIkJkyYoHnDiMg8VAVRQkJCg9eLFi1Cnz598PDDD2vaKCIyl7DvNfv++
++xbt06vPTSS5AkqcnP+Xw++Hy+4Guv1xtuSSJqp8I+WP3BBx/g2rVrmDx5crOfc7lcsNvtwcXpdIZbkojaqbCDqKioCDk5OXA4mp/Lo6CgAB6PJ7i43e5wSxJROxXWrtm///1v7N69G++//36Ln7VarbBareGUISKTCGtEtHr1anTr1g1jx47Vuj1EZEKqg8jv92P16tWYNGkSLBZxEzwSUfuhOoh2796Nixcv4rnnntOjPURkQqqHNKNGjYLB8+0TUTvHe82ISDgGEREJxyAiIuEYREQkHIOIiIRjEBGRcAwiIhJOkg2+KMjr9cJutwMSENv8/bKa4zPQ2Xf23Tg1FQBkwOPxwGazNftZcUFERKYQShCJu1mMIyLT1Gffzdn3wIgoFMKCqFMikHfJ2JrFSUDNZeULMVNt0fXZd3P2fZ1DCcJQ8GA1EQnHICIi4RhERCQcg4iIhGMQEZFwDCIiEo5BRETCMYiISDhVQVRXV4fXXnsNycnJ6NixI/r06YPXX3+dc1gTUauourL6zTffRGFhIdauXYuMjAyUlJTg2Wefhd1ux5w5c/RqIxG1c6qC6NNPP8X48eODD1bs3bs3NmzYgCNHjujSOCIyB1W7Zj/60Y+wZ88efPnllwCAkydP4pNPPkFOTo4ujSMic1A1Ipo3bx68Xi/S0tIQHR2Nuro6LFiwAHl5eU2u4/P54PP5gq+9Xm/4rSWidknViGjz5s0oLi7G+vXrcfz4caxduxZ//OMfsXbt2ibXcblcsNvtwcXpdLa60UTUvqgKol/96leYN28ennrqKTzwwAP4+c9/jhdffBEul6vJdQoKCuDxeIKL2+1udaOJqH1RtWt248YNREU1zK7o6Gj4/f4m17FarbBareG1johMQVUQjRs3DgsWLEDPnj2RkZGBzz//HH/+85/x3HPP6dU+IjIBVUH0zjvv4LXXXsPMmTNx9epVOBwOTJ8+Hb/97W/1ah8RmYCqIIqLi8PixYuxePFinZpDRGbEe82ISDgGEREJxyAiIuEYREQkHIOIiIRjEBGRcAwiIhKOQUREwkmywfO8ejwe3H333QCU53Eb6UYlABmABHRKNE9t0fXZdzG1RdcPPPf+2rVrsNvtzX7W8CC6dOkSpwIhMhG3242kpKRmP2N4EPn9flRUVCAuLg6SJKla1+v1wul0wu12w2az6dTCyKzPvpuvtuj6ra0tyzKuX78Oh8Nxx6wdt1N1r5kWoqKiWkzHlthsNiG/FJFQn303X23R9VtTu6VdsgAerCYi4RhERCRcmwoiq9WK3/3ud8JmfBRZn303X23R9Y2sbfjBaiKi27WpERERtU8MIiISjkFERMIxiIhIuDYVRJ999hmio6MxduxYw2pOnjwZkiQFl/j4eIwePRr/+te/DGtDZWUlZs+ejZSUFFitVjidTowbNw579uzRtW79vsfExODee+/FyJEjsWrVqmafZadH/frL6NGjda/dXP3y8nLda1dWVmLu3LlITU1Fhw4dcO+99yIrKwuFhYW4ceOGbnUnT56MJ5544o6f79u3D5Ik4dq1a7rUbVNBVFRUhNmzZ+PAgQOoqKgwrO7o0aNx5coVXLlyBXv27IHFYkFubq4htS9cuIBBgwbh448/xltvvYVTp05h+/btyM7ORn5+vu71A32/cOECtm3bhuzsbMydOxe5ubmora01rH79ZcOGDbrXba5+cnKyrjW//vprDBgwADt37sTChQvx+eef47PPPsOvf/1rbN26Fbt379a1vgiG3+IRrurqamzatAklJSWorKzEmjVr8MorrxhS22q1IjFRuXU5MTER8+bNw7Bhw/DNN98gISFB19ozZ86EJEk4cuQIYmNjgz/PyMgw5MGW9fveo0cPDBw4ED/84Q/x6KOPYs2aNZg6daph9UUQUX/mzJmwWCwoKSlp8J2npKRg/PjxaI9X3LSZEdHmzZuRlpaGvn37YuLEiVi1apWQL6S6uhrr1q1Damoq4uPjda313//+F9u3b0d+fn6DX8iAwHQqRnvkkUfQv39/vP/++0Lqt2fffvstdu7c2eR3DkD1zeJtQZsJoqKiIkycOBGAMlz2eDzYv3+/IbW3bt2Kzp07o3PnzoiLi8OHH36ITZs2tXhHcWuVl5dDlmWkpaXpWiccaWlpuHDhgu516v/dB5aFCxfqXrep+hMmTNC1XuA779u3b4Of33PPPcE2vPzyy7q2obG/85ycHF1rtolds7Nnz+LIkSPYsmULAMBiseBnP/sZioqKMGLECN3rZ2dno7CwEABQVVWFd999Fzk5OThy5Ah69eqlW91IHoLLsmzIv8z1/+4Dunbtqnvdpuo3NUrR25EjR+D3+5GXlwefz6drrcb+zg8fPhwcCOihTQRRUVERamtr4XA4gj+TZRlWqxVLliwJeaqBcMXGxiI1NTX4euXKlbDb7VixYgXeeOMN3ered999kCQJZWVlutUI15kzZ3Q/aAvc+XdvNKPrp6amQpIknD17tsHPU1JSAAAdO3bUvQ2N9fnSpUu61oz4XbPa2lr87W9/w5/+9CecOHEiuJw8eRIOh8PQMygBkiQhKioK//vf/3St07VrVzz++ONYunQpampq7nhfr1OpLfn4449x6tQpPPnkk0Lqt2fx8fEYOXIklixZ0uh33l5F/Iho69atqKqqwpQpU+4Y+Tz55JMoKirCjBkzdG2Dz+dDZWUlAGXXbMmSJaiursa4ceN0rQsAS5cuRVZWFh588EH84Q9/QGZmJmpra7Fr1y4UFhbizJkzutYP9L2urg7/+c9/sH37drhcLuTm5uKZZ57RtXb9+vVZLBbcc889utcW5d1330VWVhYGDx6M+fPnIzMzE1FRUTh69CjKysowaNAg0U3UnhzhcnNz5TFjxjT63uHDh2UA8smTJ3WrP2nSJBnK9OMyADkuLk4eMmSI/Pe//123mrerqKiQ8/Pz5V69esl33XWX3KNHD/nHP/6xvHfvXl3r1u+7xWKRExIS5Mcee0xetWqVXFdXp2vt2+vXX/r27at77UD98ePHG1LrdhUVFfKsWbPk5ORkOSYmRu7cubP84IMPym+99ZZcU1OjW92m+rx3714ZgFxVVaVLXU4DQkTCRfwxIiJq/xhERCQcg4iIhGMQEZFwDCIiEo5BRETCMYiISDgGEREJxyAiIuEYREQkHIOIiIRjEBGRcP8P3ZHAPKDQyJ0AAAAASUVORK5CYII=\n" }, "metadata": {}, "output_type": "display_data" } ], "source": [ "def do_moves(boards: np.ndarray, moves: np.ndarray) -> np.ndarray:\n", " \"\"\"Executes a single move on a stack o Othello boards.\n", "\n", " Args:\n", " boards: A stack of Othello boards where the next stone should be placed.\n", " moves: A stack of 
stone placement orders for the game. Formatted as coordinates in an array [x, y] of the place where the stone should be placed. Should contain [-1,-1] if no new placement is possible.\n", "\n", " Returns:\n", " The new state of the board.\n", " \"\"\"\n", "\n", " def _do_directional_move(\n", " board: np.ndarray, rec_move: np.ndarray, rev_direction, step_one=True\n", " ) -> bool:\n", " \"\"\"Changes the color of enemy stones in one direction.\n", "\n", " This function works recursive. The argument step_one should always be used in its default value.\n", "\n", " Args:\n", " board: A bord on which a stone was placed.\n", " rec_move: The position on the board in x and y where this function is called from. Will be moved by recursive called.\n", " rev_direction: The position where the stone was placed. Inside this recursion it will also be the last step that was checked.\n", " step_one: Set to true if this is the first step in the recursion. False later on.\n", "\n", " Returns:\n", " True if a stone could be flipped.\n", " All changes are made on the view of the numpy array and therefore not included in the return value.\n", " \"\"\"\n", " rec_position = rec_move + rev_direction\n", " if np.any((rec_position >= 8) | (rec_position < 0)):\n", " return False\n", " next_field = board[tuple(rec_position.tolist())]\n", " if next_field == 0:\n", " return False\n", " if next_field == 1:\n", " return not step_one\n", " if next_field == -1:\n", " if _do_directional_move(board, rec_position, rev_direction, step_one=False):\n", " board[tuple(rec_position.tolist())] = 1\n", " return True\n", " return False\n", "\n", " def _do_move(_board: np.ndarray, move: np.ndarray) -> None:\n", " \"\"\"Executes a turn on a board.\n", "\n", " Args:\n", " _board: The game board on wich to place a stone.\n", " move: The coordinates of a stone that should be placed. Should be formatted as an array of the form [x, y]. The value [-1, -1] is expected if no turn is possible.\n", "\n", " Returns:\n", " All changes are made on the view of the numpy array.\n", " \"\"\"\n", " if np.all(move == -1):\n", " if not move_possible(_board, move):\n", " raise InvalidTurn(\"An action should be taken. A turn is possible.\")\n", " return\n", "\n", " # noinspection PyTypeChecker\n", " if _board[tuple(move.tolist())] != 0:\n", " raise InvalidTurn(\"This turn is not possible.\")\n", "\n", " action = False\n", " for direction in DIRECTIONS:\n", " if _do_directional_move(_board, move, direction):\n", " action = True\n", " if not action:\n", " raise InvalidTurn(\"This turn is not possible.\")\n", "\n", " # noinspection PyTypeChecker\n", " _board[tuple(move.tolist())] = 1\n", "\n", " boards = boards.copy()\n", " for game in range(boards.shape[0]):\n", " _do_move(boards[game], moves[game])\n", " return boards\n", "\n", "\n", "%timeit do_moves(get_new_games(EXAMPLE_STACK_SIZE), np.array([[2, 3]] * EXAMPLE_STACK_SIZE))[0]\n", "plot_othello_board(\n", " do_moves(\n", " get_new_games(EXAMPLE_STACK_SIZE), np.array([[2, 3]] * EXAMPLE_STACK_SIZE)\n", " )[0]\n", ")" ], "metadata": { "collapsed": false } }, { "cell_type": "markdown", "source": [ "## An abstract reversi game policy\n", "\n", "For an easy use of policies an abstract class containing the policy generation / requests an action in an inherited instance of this class.\n", "This class filters the policy to only propose valid actions. Inherited instance do not need to care about this." 
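, "\n", "Roughly, the intended contract looks like the sketch below: a subclass only fills in the unfiltered internal policy, while callers always go through `get_policy`, which masks out invalid fields. The subclass shown here is a made-up example, not part of the project; the real base class and a first concrete policy follow in the next cells.\n", "\n", "```python\n", "class ConstantPolicy(GamePolicy):  # hypothetical example subclass\n", "    @property\n", "    def policy_name(self) -> str:\n", "        return 'constant'\n", "\n", "    def _internal_policy(self, boards: np.ndarray) -> np.ndarray:\n", "        # same score for every field; get_policy() later masks the invalid ones\n", "        return np.ones_like(boards, dtype=float)\n", "```"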
], "metadata": { "collapsed": false } }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "class GamePolicy(ABC):\n", " \"\"\"\n", " A game policy. Proposes where to place a stone next.\n", " \"\"\"\n", "\n", " @property\n", " @abc.abstractmethod\n", " def policy_name(self) -> str:\n", " \"\"\"The name of this policy\"\"\"\n", " raise NotImplementedError()\n", "\n", " @abc.abstractmethod\n", " def _internal_policy(self, boards: np.ndarray) -> np.ndarray:\n", " \"\"\"The internal policy is an unfiltered policy. It should only be called from inside this function\n", "\n", " Args:\n", " boards: A board where a policy should be calculated for.\n", "\n", " Returns:\n", " The policy for this board. Should have the same size as the boards array.\n", " \"\"\"\n", " raise NotImplementedError()\n", "\n", " def get_policy(\n", " self, boards: np.ndarray, epsilon: float = 1\n", " ) -> tuple[np.ndarray, np.ndarray]:\n", " assert len(boards.shape) == 3\n", " assert boards.shape == (BOARD_SIZE, BOARD_SIZE)\n", "\n", " # todo possibly change this function to only validate the purpose turn and\n", "\n", " policies = self._internal_policy(boards)\n", " raw_policy = policies.copy()\n", " if epsilon < 1:\n", " policies = policies + np.random.rand(*boards.shape)\n", "\n", " # todo talk to team about backpropagation epsilon for greedy factor\n", "\n", " possible_turns = get_possible_turns(boards)\n", " policies[possible_turns == False] = -1.0\n", " max_indices = [\n", " np.unravel_index(policy.argmax(), policy.shape) for policy in policies\n", " ]\n", " policy_vector = np.array(max_indices)\n", " max_policy = policy_vector\n", " no_turn_possible = np.all(policy_vector == 0, 1) & (policies[:, 0, 0] == -1.0)\n", "\n", " policy_vector[no_turn_possible] = IMPOSSIBLE\n", " max_policy[no_turn_possible] = 0\n", " return policy_vector, raw_policy" ] }, { "cell_type": "markdown", "source": [ "## A first policy" ], "metadata": { "collapsed": false } }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "class RandomPolicy(GamePolicy):\n", " @property\n", " def policy_name(self) -> str:\n", " return \"random\"\n", "\n", " def internal_policy(self, boards: np.ndarray) -> np.ndarray:\n", " random_values = np.random.rand(*boards.shape)\n", " return random_values\n", " # return np.argmax(random_values, (1, 2))\n", "\n", "\n", "rnd_policy = RandomPolicy()\n", "assert rnd_policy.policy_name == \"random\"\n", "rnd_policy_result = rnd_policy.get_policy(get_new_games(1))\n", "assert np.any((5 >= rnd_policy_result) & (rnd_policy_result >= 3))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def single_turn(\n", " current_boards: np, policy: GamePolicy\n", ") -> tuple[np.ndarray, np.ndarray]:\n", " policy_results = policy.get_policy(current_boards)\n", "\n", " assert np.all(moves_possible(current_boards, policy_results)), (\n", " current_boards[(moves_possible(current_boards, policy_results) == False)],\n", " policy_results[(moves_possible(current_boards, policy_results) == False)],\n", " np.where(moves_possible(current_boards, policy_results) == False),\n", " )\n", "\n", " return do_moves(current_boards, policy_results), policy_results\n", "\n", "\n", "%timeit single_turn(get_new_games(100), RandomPolicy())\n", "single_turn(get_new_games(100), RandomPolicy())[0]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "\n", "\n", "\n", "def simulate_game(\n", " 
nr_of_games: int,\n", " policies: tuple[GamePolicy, GamePolicy],\n", ") -> tuple[np.ndarray, np.ndarray]:\n", "\n", " board_history_stack = np.zeros((SIMULATE_TURNS, nr_of_games, 8, 8))\n", " action_history_stack = np.zeros((SIMULATE_TURNS, nr_of_games, 2))\n", " current_boards = get_new_games(nr_of_games)\n", " for turn_index in range(SIMULATE_TURNS):\n", " policy_index = turn_index % 2\n", " policy = policies[policy_index]\n", " board_history_stack[turn_index] = current_boards\n", " if policy_index == 0:\n", " current_boards = current_boards * -1\n", " current_boards, action_taken = single_turn(current_boards, policy)\n", " action_history_stack[turn_index] = action_taken\n", "\n", " if policy_index == 0:\n", " current_boards = current_boards * -1\n", "\n", " return board_history_stack, action_history_stack\n", "\n", "\n", "%timeit simulate_game(100, (RandomPolicy(), RandomPolicy()))\n", "simulate_game(10, (RandomPolicy(), RandomPolicy()))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "\n", "\n", "def create_test_game():\n", " test_array = [\n", " [\n", " [0, 0, 0, 0, 0, 0, 0, 0],\n", " [0, 0, 0, 0, 0, 0, 0, 0],\n", " [0, 0, 0, 0, 0, 0, 0, 0],\n", " [0, 0, 0, 1, 2, 0, 0, 0],\n", " [0, 0, 0, 2, 1, 0, 0, 0],\n", " [0, 0, 0, 0, 0, 0, 0, 0],\n", " [0, 0, 0, 0, 0, 0, 0, 0],\n", " [0, 0, 0, 0, 0, 0, 0, 0],\n", " ],\n", " [\n", " [0, 0, 0, 0, 0, 0, 0, 0],\n", " [0, 0, 0, 0, 0, 0, 0, 0],\n", " [0, 0, 0, 2, 0, 0, 0, 0],\n", " [0, 0, 0, 2, 2, 0, 0, 0],\n", " [0, 0, 0, 2, 1, 0, 0, 0],\n", " [0, 0, 0, 0, 0, 0, 0, 0],\n", " [0, 0, 0, 0, 0, 0, 0, 0],\n", " [0, 0, 0, 0, 0, 0, 0, 0],\n", " ],\n", " [\n", " [0, 0, 0, 0, 0, 0, 0, 0],\n", " [0, 0, 0, 0, 0, 0, 0, 0],\n", " [0, 0, 0, 2, 0, 0, 0, 0],\n", " [0, 0, 0, 2, 2, 0, 0, 0],\n", " [0, 0, 1, 1, 1, 0, 0, 0],\n", " [0, 0, 0, 0, 0, 0, 0, 0],\n", " [0, 0, 0, 0, 0, 0, 0, 0],\n", " [0, 0, 0, 0, 0, 0, 0, 0],\n", " ],\n", " [\n", " [0, 0, 0, 0, 0, 0, 0, 0],\n", " [0, 0, 0, 0, 0, 0, 0, 0],\n", " [0, 0, 0, 2, 0, 0, 0, 0],\n", " [0, 0, 0, 2, 2, 0, 0, 0],\n", " [0, 0, 2, 1, 1, 0, 0, 0],\n", " [0, 2, 0, 0, 0, 0, 0, 0],\n", " [0, 0, 0, 0, 0, 0, 0, 0],\n", " [0, 0, 0, 0, 0, 0, 0, 0],\n", " ],\n", " [\n", " [0, 0, 0, 0, 0, 0, 0, 0],\n", " [0, 0, 0, 1, 0, 0, 0, 0],\n", " [0, 0, 0, 1, 0, 0, 0, 0],\n", " [0, 0, 0, 1, 2, 0, 0, 0],\n", " [0, 0, 2, 1, 1, 0, 0, 0],\n", " [0, 2, 0, 0, 0, 0, 0, 0],\n", " [0, 0, 0, 0, 0, 0, 0, 0],\n", " [0, 0, 0, 0, 0, 0, 0, 0],\n", " ],\n", " [\n", " [0, 0, 0, 0, 0, 0, 0, 0],\n", " [0, 0, 2, 1, 0, 0, 0, 0],\n", " [0, 0, 0, 2, 0, 0, 0, 0],\n", " [0, 0, 0, 1, 2, 0, 0, 0],\n", " [0, 0, 2, 1, 1, 0, 0, 0],\n", " [0, 2, 0, 0, 0, 0, 0, 0],\n", " [0, 0, 0, 0, 0, 0, 0, 0],\n", " [0, 0, 0, 0, 0, 0, 0, 0],\n", " ],\n", " [\n", " [0, 0, 0, 0, 0, 0, 0, 0],\n", " [0, 0, 2, 1, 0, 0, 0, 0],\n", " [0, 0, 0, 2, 0, 0, 0, 0],\n", " [0, 0, 0, 1, 2, 0, 0, 0],\n", " [0, 0, 2, 2, 2, 2, 0, 0],\n", " [0, 2, 0, 0, 0, 0, 0, 0],\n", " [0, 0, 0, 0, 0, 0, 0, 0],\n", " [0, 0, 0, 0, 0, 0, 0, 0],\n", " ],\n", " [\n", " [0, 0, 0, 0, 0, 0, 0, 0],\n", " [0, 0, 2, 1, 0, 0, 0, 0],\n", " [0, 0, 0, 2, 0, 0, 0, 0],\n", " [0, 0, 0, 1, 1, 1, 0, 0],\n", " [0, 0, 2, 2, 2, 2, 0, 0],\n", " [0, 2, 0, 0, 0, 0, 0, 0],\n", " [0, 0, 0, 0, 0, 0, 0, 0],\n", " [0, 0, 0, 0, 0, 0, 0, 0],\n", " ],\n", " [\n", " [0, 0, 0, 0, 0, 0, 0, 0],\n", " [0, 0, 2, 1, 0, 0, 0, 0],\n", " [0, 0, 0, 2, 0, 2, 0, 0],\n", " [0, 0, 0, 1, 2, 2, 0, 0],\n", " 
[0, 0, 2, 2, 2, 2, 0, 0],\n", " [0, 2, 0, 0, 0, 0, 0, 0],\n", " [0, 0, 0, 0, 0, 0, 0, 0],\n", " [0, 0, 0, 0, 0, 0, 0, 0],\n", " ],\n", " [\n", " [0, 0, 0, 0, 0, 0, 0, 0],\n", " [0, 0, 2, 1, 0, 0, 0, 0],\n", " [0, 0, 0, 2, 0, 2, 0, 0],\n", " [0, 0, 0, 1, 2, 2, 0, 0],\n", " [0, 0, 2, 2, 1, 2, 0, 0],\n", " [0, 2, 0, 0, 0, 1, 0, 0],\n", " [0, 0, 0, 0, 0, 0, 0, 0],\n", " [0, 0, 0, 0, 0, 0, 0, 0],\n", " ],\n", " [\n", " [0, 0, 0, 0, 0, 0, 0, 0],\n", " [0, 0, 2, 1, 0, 0, 0, 0],\n", " [0, 0, 0, 2, 0, 2, 0, 0],\n", " [0, 0, 0, 1, 2, 2, 0, 0],\n", " [0, 0, 2, 2, 1, 2, 0, 0],\n", " [0, 2, 0, 0, 0, 2, 0, 0],\n", " [0, 0, 0, 0, 0, 2, 0, 0],\n", " [0, 0, 0, 0, 0, 0, 0, 0],\n", " ],\n", " [\n", " [0, 0, 0, 0, 0, 0, 0, 0],\n", " [0, 0, 2, 1, 0, 0, 0, 0],\n", " [0, 0, 0, 2, 0, 2, 0, 0],\n", " [0, 0, 0, 1, 2, 2, 0, 0],\n", " [0, 1, 1, 1, 1, 2, 0, 0],\n", " [0, 2, 0, 0, 0, 2, 0, 0],\n", " [0, 0, 0, 0, 0, 2, 0, 0],\n", " [0, 0, 0, 0, 0, 0, 0, 0],\n", " ],\n", " [\n", " [0, 0, 0, 0, 0, 0, 0, 0],\n", " [0, 0, 2, 1, 0, 0, 0, 0],\n", " [0, 0, 0, 2, 0, 2, 0, 0],\n", " [0, 0, 0, 1, 2, 2, 0, 0],\n", " [2, 2, 2, 2, 2, 2, 0, 0],\n", " [0, 2, 0, 0, 0, 2, 0, 0],\n", " [0, 0, 0, 0, 0, 2, 0, 0],\n", " [0, 0, 0, 0, 0, 0, 0, 0],\n", " ],\n", " [\n", " [0, 0, 0, 0, 0, 0, 0, 0],\n", " [0, 0, 2, 1, 0, 0, 0, 0],\n", " [0, 0, 0, 2, 0, 2, 0, 0],\n", " [0, 0, 0, 1, 1, 1, 1, 0],\n", " [2, 2, 2, 2, 2, 2, 0, 0],\n", " [0, 2, 0, 0, 0, 2, 0, 0],\n", " [0, 0, 0, 0, 0, 2, 0, 0],\n", " [0, 0, 0, 0, 0, 0, 0, 0],\n", " ],\n", " [\n", " [0, 0, 0, 0, 0, 0, 0, 0],\n", " [0, 0, 2, 1, 0, 0, 0, 0],\n", " [0, 0, 0, 2, 0, 2, 0, 0],\n", " [0, 0, 0, 1, 1, 1, 1, 0],\n", " [2, 2, 2, 1, 2, 2, 0, 0],\n", " [0, 2, 0, 1, 0, 2, 0, 0],\n", " [0, 0, 0, 0, 0, 2, 0, 0],\n", " [0, 0, 0, 0, 0, 0, 0, 0],\n", " ],\n", " [\n", " [0, 0, 0, 0, 0, 0, 0, 0],\n", " [0, 0, 2, 1, 0, 0, 0, 0],\n", " [0, 0, 0, 2, 2, 2, 0, 0],\n", " [0, 0, 0, 2, 2, 1, 1, 0],\n", " [2, 2, 2, 1, 2, 2, 0, 0],\n", " [0, 2, 0, 1, 0, 2, 0, 0],\n", " [0, 0, 0, 0, 0, 2, 0, 0],\n", " [0, 0, 0, 0, 0, 0, 0, 0],\n", " ],\n", " [\n", " [0, 0, 0, 0, 0, 0, 0, 0],\n", " [0, 0, 2, 1, 0, 1, 0, 0],\n", " [0, 0, 0, 2, 2, 1, 0, 0],\n", " [0, 0, 0, 2, 2, 1, 1, 0],\n", " [2, 2, 2, 1, 2, 2, 0, 0],\n", " [0, 2, 0, 1, 0, 2, 0, 0],\n", " [0, 0, 0, 0, 0, 2, 0, 0],\n", " [0, 0, 0, 0, 0, 0, 0, 0],\n", " ],\n", " [\n", " [0, 0, 0, 0, 0, 0, 0, 0],\n", " [0, 0, 2, 1, 0, 1, 0, 0],\n", " [0, 0, 0, 2, 2, 2, 2, 0],\n", " [0, 0, 0, 2, 2, 2, 1, 0],\n", " [2, 2, 2, 1, 2, 2, 0, 0],\n", " [0, 2, 0, 1, 0, 2, 0, 0],\n", " [0, 0, 0, 0, 0, 2, 0, 0],\n", " [0, 0, 0, 0, 0, 0, 0, 0],\n", " ],\n", " [\n", " [0, 0, 0, 0, 0, 0, 0, 0],\n", " [0, 0, 2, 1, 0, 1, 0, 0],\n", " [0, 0, 0, 2, 1, 2, 2, 0],\n", " [0, 0, 0, 2, 2, 1, 1, 0],\n", " [2, 2, 2, 1, 1, 1, 1, 0],\n", " [0, 2, 0, 1, 0, 2, 0, 0],\n", " [0, 0, 0, 0, 0, 2, 0, 0],\n", " [0, 0, 0, 0, 0, 0, 0, 0],\n", " ],\n", " [\n", " [0, 0, 0, 0, 0, 0, 0, 0],\n", " [0, 0, 2, 1, 0, 1, 0, 0],\n", " [0, 0, 0, 2, 1, 2, 2, 0],\n", " [0, 0, 0, 2, 2, 1, 2, 0],\n", " [2, 2, 2, 2, 2, 2, 2, 2],\n", " [0, 2, 0, 1, 0, 2, 0, 0],\n", " [0, 0, 0, 0, 0, 2, 0, 0],\n", " [0, 0, 0, 0, 0, 0, 0, 0],\n", " ],\n", " [\n", " [0, 0, 0, 0, 0, 0, 0, 0],\n", " [0, 0, 2, 1, 0, 1, 0, 0],\n", " [0, 0, 0, 2, 1, 2, 2, 0],\n", " [0, 0, 0, 2, 1, 1, 2, 0],\n", " [2, 2, 2, 2, 1, 2, 2, 2],\n", " [0, 2, 0, 1, 1, 2, 0, 0],\n", " [0, 0, 0, 0, 0, 2, 0, 0],\n", " [0, 0, 0, 0, 0, 0, 0, 0],\n", " ],\n", " [\n", " [0, 0, 0, 0, 2, 0, 0, 0],\n", " [0, 0, 2, 2, 0, 2, 0, 0],\n", " [0, 0, 0, 2, 1, 2, 2, 0],\n", " [0, 0, 0, 2, 1, 1, 2, 0],\n", " [2, 
2, 2, 2, 1, 2, 2, 2],\n", " [0, 2, 0, 1, 1, 2, 0, 0],\n", " [0, 0, 0, 0, 0, 2, 0, 0],\n", " [0, 0, 0, 0, 0, 0, 0, 0],\n", " ],\n", " ]\n", " test_array = np.array(test_array)\n", "\n", " # swap 2 for -1; 2 was only used for homogeneous formatting and easier readability while coding\n", " test_array[test_array == 2] = -1\n", "\n", " # validate that the stone count grows by exactly one per board\n", " assert np.all(\n", " np.count_nonzero(test_array, axis=(1, 2))\n", " == np.arange(4, 4 + test_array.shape[0])\n", " )\n", "\n", " # validate that only one new stone is placed per turn (exactly one field changes its empty/occupied state)\n", " zero_array = test_array == 0\n", " diff = zero_array != np.roll(zero_array, 1, axis=0)\n", " turns = np.where(diff[1:])\n", " arr = np.array(turns)[0]\n", " assert len(arr) == len(set(arr))\n", "\n", " return test_array" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plot_othello_boards(create_test_game()[-3:])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "array = create_test_game()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Sources\n", "\n", "* Game rules and example board images [https://en.wikipedia.org/wiki/Reversi](https://en.wikipedia.org/wiki/Reversi)\n", "* Game rules and example game images [https://de.wikipedia.org/wiki/Othello_(Spiel)](https://de.wikipedia.org/wiki/Othello_(Spiel))\n", "* Game strategy examples [https://de.wikipedia.org/wiki/Computer-Othello](https://de.wikipedia.org/wiki/Computer-Othello)\n", "* Image for 8 directions [https://www.researchgate.net/journal/EURASIP-Journal-on-Image-and-Video-Processing-1687-5281](https://www.researchgate.net/journal/EURASIP-Journal-on-Image-and-Video-Processing-1687-5281)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.8" } }, "nbformat": 4, "nbformat_minor": 4 }