{ "cells": [ { "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "# Initialschätzung von Kurswechselpositionen eines Segelboots auf einer Karte anhang con Wind, Start und Zielpunkt\n", "\n", "## Motivation\n", "\n", "Ziel dieser Semester abschließenden schriftlichen Ausarbeitung im Fach \"Maschine Learning\" an der Fachhochschule Südwestfalen ist das Generieren einer Heatmap von Kurswechselpositionen eines Segelbootes zu einer Karte abhängig von Wind und der Zielpostion. Dies soll das Finden einer guten Route vereinfachen, indem die Qualität einer ersten Route, die danach über ein Quotientenabstiegsverfahren optimiert werden soll verbessern. Da ein solches Quotientenabstiegsverfahren sehr gerne in einem Lokalen minimum festhängt, müssen mehrere routen gefunden und optimiert werden. Hier soll untersucht werden, ob dies durch eine Ersteinschätzung der Lage durch KI verbessert werden kann.\n", "\n", "Eingesetzt werden soll die so erstellte KI in dem Segelroboter des [Sailing Team Darmstadt e.V.](https://www.st-darmstadt.de/) Einer Hochschulgruppe an der TU-Darmstadt welche den [\"roBOOTer\"](https://www.st-darmstadt.de/ueber-uns/boote/prototyp-ii/) ein vollautonomes Segelboot welches eines Tages den Atlantik überqueren soll. [Eine technische Herausforderung welche zuerst von einem norwegischen Team erfolgreich abgeschlossen wurde](https://www.microtransat.org/)." ] }, { "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "## Inhaltsverzeichnis\n", "\n", "1. Einleitung\n", " 1.1. Situation\n", " \n", " 1.2. Vorgehen zur unterstützenden KI\n", "\n", "2. Vorbereitungen\n", "\n", " 2.1. Imports\n", " \n", " 2.2. Parameter und Settings\n", "3. Szenarien und Routen Generieren\n", "4. Daten betrachten und Filtern\n", "5. KI Modell erstellen\n", "6. Training\n", "7. Analyse der KI\n", "8. 
Ausblick\n", " " ] }, { "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "## Einleitung\n", "\n", "### Situation\n", "\n", "Eine Routenplanung für ein Segelboot hat ein Problem, welches man sonst so eher nicht kennt. Eine relativ freie Fläche auf der Sich das Schiff bewegen kann. Dies verändert die Wegfindung wie man sie von der Straße kennt fundamental.\n", "\n", "Navigiert man auf Straßen, hat man zumindest nach einer ersten abstraction relativ wenige Freiheitsgrade für den Weg.\n", "Die Richtung kann nur an Kreuzungen gewechselt werden und dort nur in Richtungen in die es Straßen gibt. Beim Segeln auf dem freien Meer ist jeder Ort ein potenzieller Wendepunkt von dem aus Potenziell in jede Richtung gesegelt werden kann.\n", "\n", "Dennoch ist es oft auch ohne Hindernisse zwischen Boot und Ziel oft nicht möglich das Ziel direkt anzufahren das sich die Maximalgeschwindigkeiten relativ zur Windrichtung verändern.\n", "Das folgende Diagramm zeigt die Segelgeschwindigkeiten an einem Katamaran.\n", "\n", "\"Ship\n", "\n", "Da der roBOOTer anders als an Katamaran nicht auf Geschwindigkeit, sondern auf mechanische Belastbarkeit ausgelegt wurde hat der Fahrtwind einen geringeren einfluss auf das Fahrtverhalten des Segelboots dies und eine andere Maximalgeschwindigkeit sorgen für ein etwas anderes Fahrverhalten. Die ungefähre Form der Kurven trifft aber auch auf den roBOOTer zu. Man kann deutlich erkennen das auch, wenn man nicht direkt gegen den Wind fahren kann man schräg gegen den wind immer noch erstaunlich schnell ist.\n", "\n", "Das aktuelle Verfahren zum Finden einer Route läuft folgendermaßen ab:\n", "\n", "Eine direkte Route wird berechnet. Die Route wird an jedem Hindernisse geteilt und rechts und links um jedes hindernis herum gelegt. Bei folgenden hindernissen werden die Routen wieder geteilt somit erhält man $2^n$ Vorschläge für Routen wobei $n$ die Anzahl der Hindernisse auf der Route ist. 
Jeder Abschnitt der Route wird noch einmal zerteilt, um der Route mehr Flexibilität zu geben.\n", "\n", "Die Routen werden dann simuliert, um die Kosten der Route zu berechnen. Die so simulierte Route wird danach über die Kosten in einem Gradientenabstiegsverfahren optimiert.\n", "\n", "Das ganze oben beschriebene Verfahren ist relativ schnell sehr rechenaufwendig und findet nicht immer ein Ergebnis. Wird kein Ergebnis gefunden wird eine mehr oder weniger zufällige Route optimiert.\n", "\n", "Diese Ausarbeitung soll wenigstens bei der alternativen Routenfindung helfen. Im idealfall kann es aber auch genutzt werden, um die auswahl der Routen um Hindernisse frühzeitig zu reduzieren und den Rechenaufwand unter $2^n$ zu senken wobei $n$ die Anzahl von Hindernissen auf der Route ist.\n", "\n", "### Vorgehen zur unterstützenden KI\n", "\n", "#### Eingaben und Ausgeben\n", "\n", "Die Algorithm zur Wegfindung vom Sailing Team Darmstadt e.V. arbeiten intern mit Polygonen als Hindernissen. Diese werden durch die Shapely Bibliothek implementiert. Da eine variable Anzahl an Polygonen mit einer variablen Form und Position eine Relative komplexer Input muss dieser in eine normierte Form gebracht werden. Ein binärfärbens Bild ist dafür die einfachste Form.\n", "\n", "Für den Computer spielen sowohl Zentrierung, Skalierung und Ausrichtung der Karte keine Rolle.\n", "Wir rotieren also die Karte immer so das der Wind von *Norden* kommt und das Boot / die Startposition in der *Mitte* der Karte liegt. Da distanz Liner ist, wird davon ausgegangen das Scenario einfach skaliert passend skaliert werden kann.\n", "\n", "Die nächste eingabe ist die Zielposition relativ zum Startpunkt. Diese kann entweder durch ein einzelnes Pixel in einem zweiten Farbkanal oder aber in abstrakterer Form an die KI übergeben werden.\n", "\n", "Als ausgabe wird eine Heatmap erwartet. Zwei alternative Heatmaps sind relative einfach denkbar.\n", "\n", "1. Eine Headmap der Kurswechselpositionen\n", "2. 
Eine Headmap des Kursverlaufes\n", "\n", "Headmaps sind in gewisser Weise Bilder. Das Problem wird daher wie ein Bild zu Bild KI Problem betrachtet. Diese werden normalerweise durch ANNs gelöst.\n", "\n", "Um eine ANN zu trenntieren gibt es immer die Wahl zwischen drei Primären prinzipien. Dem unüberwachten Lernen, dem reinforcement Learning und dem überwachten Lernen. Letzteres ist dabei meist am einfachsten wenn auch nicht immer möglich.\n", "\n", "Der Wegfindealgorithmus des Sailing Team Darmstadt e.V. ist zwar noch in der Entwicklung, funktioniert aber hinreichend gut, um auf einem normalen PC Scenarios mit Routen zu paaren oder auch diese zu *labeln*, um beim KI lingo zu bleiben. Um anpassungsfähig an andere Scenarios zu sein wird eine große Menge unterschiedlicher Scenarios und Routen benötigt.\n", "Da das Haupteinsatzgebiet das Meer ist gehen wir von einer Insellandschaft oder Küstenlandschaft aus.\n", "\n", "Zum Finden von Scenarios gibt es zwei Möglichkeiten.\n", "\n", "1. Das Auswählen von umgebungen von der Weltkarte und das Bestimmen eines Zielpunktes.\n", "2. Das Generieren von künstlichen Scenarios.\n", " \n", "Hier wird die Annahme getroffen das sich ANNs von einem Datensatz auf dem anderen Übertragen lassen.\n", "Der Aufwand für künstliche Scenarios wird hierbei als geringer eingestuft und daher gewählt." ] }, { "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "## Vorbereitungen\n", "\n", "Folgende Python Bibliotheken werden verwendet:\n", "\n", "1. `Tensorflow`\n", " Die `Tensorflow` Bibliothek ist das Werkzeug welches verwendet wurde, um neuronale Netz zu modellieren, zu trainieren, zu analysieren und auszuführen.\n", "\n", "2. `pyrate`\n", " Die `Pyrate` Bibliothek ist Teil des ROS Operating Systems, welches den roBOOTer betreibt. Kann Routen zu Scenarios finden.\n", "\n", "3. 
`Shapley`\n", " Die `shapley` Bibliothek wird genutzt, um geometrische Körper zu generieren, zu mergen und an den Roboter zum Labeln weiterzugeben.\n", "\n", "4. `pandas`\n", " Die `pandas` Bibliothek verwaltet, speichert und analysiert daten.\n", "\n", "5. `numpy`\n", " Eine Bibliothek um Mathematische operations an multidimensionalen Arrays auszuführen.\n", "\n", "6. `matplotlib`\n", " Wird genutzt um Diagramme zu plotted.\n", "\n", "6. `PIL`\n", " Eine Library um Bilder manuell zu zeichnen.\n", "\n", "7. `humanize`\n", " Konvertiert Zahlen, Daten und Zeitabstände in ein für menschen einfach leserliches Format.\n", "\n", "8. `tqdm`\n", " Fügt einen Fortschrittsbalken zu vielen Problemen hinzu." ] }, { "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "#### Imports\n", "Importiert die Imports the necessary packages from python and pypi." ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "ExecuteTime": { "end_time": "2022-07-15T18:59:16.416888Z", "start_time": "2022-07-15T18:59:12.921020Z" }, "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "import sys\n", "\n", "# Pins the python version executing the Jupyter Notebook\n", "assert sys.version_info.major == 3\n", "assert sys.version_info.minor == 10\n", "\n", "import os\n", "from typing import Optional, Final, Literal\n", "import glob\n", "import pickle\n", "\n", "from tqdm.notebook import tqdm\n", "import matplotlib.pyplot as plt\n", "import numpy as np\n", "import pandas as pd\n", "from PIL import ImageDraw, Image\n", "from shapely.geometry import Polygon, Point, LineString\n", "from shapely.ops import unary_union\n", "import tensorflow as tf\n", "import humanize" ] }, { "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "Definiert den Pfad an dem das Jupyter Notebook ausgeführt werden soll.\n", "Importiert die pyrate module. Wird nur ausgeführt, wenn innerhalb des Pyrate Containers ausgeführt." 
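, "\n", "The guarded import in the next cell follows a simple pattern: a container-only dependency is imported only when an environment variable marks the right environment. A minimal, self-contained sketch of that pattern (the helper name `inside_pyrate_container` is purely illustrative, not part of the pyrate code base):\n", "\n", "```python\n", "import os\n", "\n", "def inside_pyrate_container() -> bool:\n", "    # The PYRATE environment variable is only set inside the container.\n", "    return os.getenv('PYRATE') is not None\n", "\n", "print(inside_pyrate_container())\n", "```\n"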
] }, { "cell_type": "code", "execution_count": null, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "# Import route generation if started in the docker container\n", "if os.getenv(\"PYRATE\"):\n", " %cd /pyrate/\n", " import experiments\n", " from pyrate.plan.nearplanner.timing_frame import TimingFrame\n", "\n", "# Protection against multi exection\n", "if not os.path.exists(\"experiments\"):\n", " %cd ../" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "if os.getenv(\"PYRATE\"):\n", " # Sets the maximum number of optimization steps that can be performed to find a route.\n", " # Significantly lowered for more speed.\n", " experiments.optimization_param.n_iter_grad = 50\n", "\n", " # Disables verbose outputs from the pyrate library.\n", " experiments.optimization_param.verbose = False" ] }, { "cell_type": "code", "execution_count": null, "outputs": [], "source": [ "# The scale the route should lie in. Only a mathematical limit.\n", "SIZE_ROUTE: Final[int] = 100\n", "\n", "# The outer limit in with the goal need to be palced.\n", "# Should be smaller than\n", "SIZE_INNER: Final[int] = 75\n", "assert SIZE_ROUTE > SIZE_INNER, \"The goal should be well inside the limit placed \"\n", "\n", "# The minimum destance from the start that should\n", "MIN_DESTINATION_DISTANCE: Final[int] = 25\n", "assert (\n", " SIZE_INNER > MIN_DESTINATION_DISTANCE\n", "), \"The goal should be well closer to the outer limit the\"\n", "\n", "# The size the ANN input has. Equal to the image size. 
Should be a power of two ($2^n$) to be easily compatible with ANNs.\n", "IMG_SIZE: Final[int] = 128\n", "\n", "# The size an image should be scaled to in order to be easily visible by eye.\n", "IMG_SHOW_SIZE: Final[int] = 400\n", "\n", "# The number of files that should be read to train the ANNs\n", "NUMBER_OF_FILES_LIMIT: Final[int] = 1000\n", "\n", "# Flags: suppress plots / regenerate the collected data set.\n", "NO_SHOW = False\n", "GENERATE_NEW = True" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%%\n" } } }, { "cell_type": "markdown", "source": [ "## Generating Scenarios and Routes\n", "\n", "To train the neural network, data sets are required. For estimating routes, a map with obstacles and a corresponding route is needed. Here the design decision was made not to select the maps but to generate them.\n", "\n", "### Generating Maps\n", "\n", "For the Sailing Team Darmstadt, a map is a set of static and dynamic obstacles. Static obstacles are islands, landmasses, shallows, and restricted zones. Dynamic obstacles are other participants in ship traffic and weather events.\n", "This AI is restricted to static obstacles. A scenario is therefore a set of obstacle polygons.\n", "To make generating the polygons easier to control and to have greater statistical control over the generation process, all generated base polygons are defined as sections on a circumcircle and are randomly distributed over the map.\n", "\n", "A single polygon is generated as follows:\n", "1. The number of edges/corners is fixed.\n", "2. A log-normally distributed radius is chosen at random.\n", "3. n angles are marked off on the circumcircle.\n", "4. The angles are sorted so that the polygon does not self-intersect.\n", "5. The points given by radius and angles are converted into the Cartesian coordinate system.\n", "6. 
The random offset (the polygon's center) is added.\n", "7. From the `np.ndarray` generated this way, a `shapely.geometry.Polygon` is created.\n", "\n", "In this way a fixed number of polygons is generated.\n", "If a random seed is set via `np.random.seed` before generating the first polygon of a scenario, each seed yields a unique set of polygons, provided all other parameters agree. This polygon set very likely contains overlapping polygons, which is a problem for the algorithm of the Sailing Team Darmstadt e.V. The Shapely library provides a union function that builds unions of polygons where possible. This yields a reduced set of polygons, which can later be passed to a solver." ], "metadata": { "collapsed": false, "pycharm": { "name": "#%% md\n" } } }, { "cell_type": "code", "execution_count": null, "outputs": [], "source": [ "# https://stackoverflow.com/questions/16444719/python-numpy-complex-numbers-is-there-a-function-for-polar-to-rectangular-co\n", "def polar_to_cartesian(\n", "    radii: np.ndarray,\n", "    angles: np.ndarray,\n", "):\n", "    \"\"\"Transforms polar coordinates into cartesian coordinates.\n", "\n", "    Args:\n", "        radii: An array of radii.\n", "        angles: An array of angles, given as fractions of a full turn.\n", "\n", "    Returns:\n", "        An array of cartesian points, encoded as complex numbers.\n", "    \"\"\"\n", "    return radii * np.exp(2j * angles * np.pi)" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%%\n" } } }, { "cell_type": "code", "execution_count": null, "outputs": [], "source": [ "def random_polygon(\n", "    radius_mean: float = 2,\n", "    radius_sigma: float = 1.5,\n", "):\n", "    \"\"\"Generates a random polygon with 3 to 9 corners whose circumcircle radius is drawn from a log-normal distribution.\n", "\n", "    Args:\n", "        radius_mean: The mean (of the underlying normal distribution) of the radius defining the circumcircle.\n", "        radius_sigma: The sigma (of the underlying normal distribution) of the radius defining the circumcircle.\n", "\n", "    Returns:\n", "        A single 
polygon.\n", "    \"\"\"\n", "    number_of_corners = np.random.randint(3, 10)\n", "    array = polar_to_cartesian(\n", "        np.random.lognormal(radius_mean, radius_sigma),\n", "        np.sort(np.random.rand(number_of_corners)),\n", "    )\n", "    offset = np.random.randint(low=-SIZE_ROUTE, high=SIZE_ROUTE, size=(2,))\n", "    return_values = np.zeros((number_of_corners, 2), dtype=float)\n", "    return_values[:] = offset\n", "    return_values[:, :] += np.array((np.real(array), np.imag(array))).T\n", "    return Polygon(return_values)\n", "\n", "\n", "np.random.seed(42)\n", "random_polygon()" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%%\n" } } }, { "cell_type": "code", "execution_count": null, "outputs": [], "source": [ "def generate_obstacles(\n", "    seed: Optional[int] = None,\n", "    number_of_polygons: int = 40,\n", "    radius_mean: float = 2,\n", "    radius_sigma: float = 1,\n", ") -> dict[str, Polygon]:\n", "    \"\"\"Generates a set of obstacles from a union of random polygons.\n", "\n", "    The union means that if generated polygons overlap, a single polygon covering their union is returned.\n", "\n", "    Args:\n", "        seed: A seed to generate a set of obstacles from.\n", "        number_of_polygons: The number of polygons that should be drawn.\n", "        radius_mean: The average radius defining a circumcircle of an obstacle polygon.\n", "        radius_sigma: The variance of the radius defining a circumcircle of an obstacle polygon.\n", "\n", "    Returns:\n", "        A dict of unified obstacles, keyed by their index.\n", "    \"\"\"\n", "    if seed is not None:\n", "        np.random.seed(seed)\n", "    polygons = []\n", "    for _ in range(number_of_polygons):\n", "        poly = random_polygon(radius_mean, radius_sigma)\n", 
" if poly.contains(Point(0, 0)):\n", " continue\n", " if poly.exterior.distance(Point(0, 0)) < 1:\n", " continue\n", " polygons.append(poly)\n", " polygon_list = list(unary_union(polygons).geoms)\n", " return {str(i): p for i, p in enumerate(polygon_list)}" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%%\n" } } }, { "cell_type": "code", "execution_count": null, "outputs": [], "source": [ "def generate_destination(\n", " obstacles: dict[str, Polygon],\n", " seed: Optional[int] = None,\n", ") -> Point:\n", " \"\"\"Generates for a map.\n", "\n", " Can be used to generate a valid destination for list of obstacles.\n", " Args:\n", " obstacles: A list of obstacles.\n", " seed: The seed determining the point.\n", "\n", " Returns:\n", " A goal that should be reached by the ship.\n", " \"\"\"\n", " # sets the seed\n", " if seed is not None:\n", " np.random.seed(seed)\n", "\n", " # generates the point\n", " point: Optional[Point] = None\n", " while (\n", " point is None\n", " or abs(point.x) < MIN_DESTINATION_DISTANCE\n", " or abs(point.y) < MIN_DESTINATION_DISTANCE\n", " or any(obstacle.contains(point) for obstacle in obstacles.values())\n", " ):\n", " point = Point(np.random.randint(-SIZE_INNER, SIZE_INNER, size=(2,), dtype=int))\n", " return point\n", "\n", "\n", "print(generate_destination(generate_obstacles(42), 42))" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%%\n" } } }, { "cell_type": "code", "execution_count": null, "outputs": [], "source": [ "def plot_situation(\n", " obstacles: dict[str, Polygon],\n", " destination: Point,\n", " obstacle_color: str | None = None,\n", " route=None,\n", " legend: bool = True,\n", " title: str | None = None,\n", ") -> None:\n", " \"\"\"PLots the obstacles into a matplotlib plot.\n", "\n", " Args:\n", " obstacles: A list of obstacles.\n", " destination: The destination that should be reached by the boat.\n", " obstacle_color: The color the obstacles should have. 
Can be None.\n", "            If None, all obstacles get different colors.\n", "        route: The route that should be plotted.\n", "        legend: If true, plots a legend.\n", "        title: The title of the plot.\n", "    Returns:\n", "        None\n", "    \"\"\"\n", "    plt.axis([-SIZE_ROUTE, SIZE_ROUTE, -SIZE_ROUTE, SIZE_ROUTE])\n", "\n", "    # Sets a title if one is demanded\n", "    if title:\n", "        plt.title(title)\n", "\n", "    # Plots the obstacles.\n", "    if obstacles:\n", "        for polygon in obstacles.values():\n", "            if obstacle_color is not None:\n", "                plt.fill(*polygon.exterior.xy, color=obstacle_color, label=\"Obstacle\")\n", "            else:\n", "                plt.fill(*polygon.exterior.xy)\n", "\n", "    # Plots the wind direction\n", "    # https://www.geeksforgeeks.org/matplotlib-pyplot-arrow-in-python/\n", "    plt.arrow(\n", "        0,\n", "        +int(SIZE_ROUTE * 0.9),\n", "        0,\n", "        -int(SIZE_ROUTE * 0.1),\n", "        head_width=10,\n", "        width=4,\n", "        label=\"Wind (3Bft)\",\n", "    )\n", "\n", "    # Plots the route.\n", "    if route is not None:\n", "        if isinstance(route, np.ndarray):\n", "            plt.plot(route[:, 0], route[:, 1], color=\"BLUE\", marker=\".\")\n", "        elif isinstance(route, TimingFrame):\n", "            plt.plot(\n", "                route.points[:, 0], route.points[:, 1], color=\"BLUE\", marker=\".\"\n", "            )\n", "        else:\n", "            raise TypeError(\"Unsupported route type.\")\n", "\n", "    # Plots the destination and the start position.\n", "    if destination:\n", "        plt.scatter(*destination.xy, marker=\"X\", color=\"green\", label=\"Destination\")\n", "        plt.scatter(0, 0, marker=\"o\", color=\"green\", label=\"Start\")\n", "\n", "    if legend:\n", "        # https://stackoverflow.com/questions/13588920/stop-matplotlib-repeating-labels-in-legend\n", "        handles, labels = plt.gca().get_legend_handles_labels()\n", "        by_label = dict(zip(labels, handles))\n", "        plt.legend(by_label.values(), by_label.keys())\n", "    return None" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%%\n" } } }, { "cell_type": 
"code", "execution_count": null, "outputs": [], "source": [ "if not NO_SHOW:\n", " plt.figure(figsize=(17.5, 25))\n", " for seed in tqdm(range(12)):\n", " plt.subplot(4, 3, seed + 1)\n", " generated_obstacles = generate_obstacles(seed)\n", " generated_destination = generate_destination(generated_obstacles, seed)\n", " route_generated = None\n", "\n", " # noinspection PyBroadException\n", " try:\n", " route_generated, _ = experiments.generate_route(\n", " position=Point(0, 0),\n", " goal=generated_destination,\n", " obstacles=generated_obstacles,\n", " wind=(18, 180),\n", " )\n", " except Exception:\n", " route_generated = None\n", "\n", " plot_situation(\n", " obstacles=generated_obstacles,\n", " destination=generated_destination,\n", " obstacle_color=\"RED\",\n", " route=route_generated,\n", " title=f\"Seed: {seed}, Cost: {route_generated.cost:.3f}\"\n", " if route_generated\n", " else f\"Seed: {seed}\",\n", " legend=seed == 0,\n", " )\n", " plt.show()" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%%\n" } } }, { "cell_type": "code", "execution_count": null, "outputs": [], "source": [ "def generate_image_from_map(\n", " obstacles: dict[str, Polygon],\n", " destination: Point,\n", " route=None,\n", " route_type: Literal[\"line\", \"dot\"] = \"dot\",\n", ") -> Image:\n", " \"\"\"Generate an image from the map.\n", "\n", " Can be used to feed an ANN.\n", " - Obstacles are marked as reed.\n", " - The destination is marked as green.\n", " - The points where the route will likely change are blue.\n", "\n", " Args:\n", " obstacles: A dict of obstacles as shapely Polygons. Keyed as a string.\n", " destination: A destination that should be navigated to.\n", " route: The calculated route that should be followed.\n", " route_type: How the route is drawn. 
If 'line' is selected, the complete route is drawn.\n", "            If 'dot' is selected, the turning points are drawn in.\n", "    \"\"\"\n", "    img = Image.new(\n", "        \"RGB\",\n", "        (IMG_SIZE, IMG_SIZE),\n", "        \"#000000\",\n", "    )\n", "    draw = ImageDraw.Draw(img)\n", "    for polygon in obstacles.values():\n", "        draw.polygon(\n", "            list(\n", "                (np.dstack(polygon.exterior.xy).reshape((-1)) + SIZE_ROUTE)\n", "                / (2 * SIZE_ROUTE)\n", "                * IMG_SIZE\n", "            ),\n", "            fill=\"#FF0000\",\n", "            outline=\"#FF0000\",\n", "        )\n", "    if os.getenv(\"PYRATE\"):\n", "        if isinstance(route, TimingFrame):\n", "            route = route.points\n", "    if route is not None:\n", "        route = ((route + SIZE_ROUTE) / (2 * SIZE_ROUTE) * IMG_SIZE).astype(int)\n", "        if route_type == \"line\":\n", "            draw.line([tuple(point) for point in route], fill=(0, 0, 0xFF))\n", "        elif route_type == \"dot\":\n", "            for point in route[1:]:\n", "                img.putpixel(tuple(point), (0, 0, 0xFF))\n", "        else:\n", "            raise ValueError(\"Route type unknown.\")\n", "    img.putpixel(\n", "        (\n", "            int((destination.x + SIZE_ROUTE) / (2 * SIZE_ROUTE) * IMG_SIZE),\n", "            int((destination.y + SIZE_ROUTE) / (2 * SIZE_ROUTE) * IMG_SIZE),\n", "        ),\n", "        (0, 0xFF, 0),\n", "    )\n", "    return img" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%%\n" } } }, { "cell_type": "code", "execution_count": null, "outputs": [], "source": [ "def generate_example_image(route_type: Literal[\"line\", \"dot\"]):\n", "    \"\"\"\n", "    Generates an example image with the seed 42.\n", "\n", "    Args:\n", "        route_type: How the route is drawn. 
If 'line' is selected, the complete route is drawn.\n", "            If 'dot' is selected, the turning points are drawn in.\n", "\n", "    Returns:\n", "        The example image.\n", "    \"\"\"\n", "    obstacles = generate_obstacles(42)\n", "    destination = generate_destination(obstacles, 42)\n", "    try:\n", "        route, _ = experiments.generate_route(\n", "            position=Point(0, 0),\n", "            goal=destination,\n", "            obstacles=obstacles,\n", "            wind=(18, 180),\n", "        )\n", "    except Exception:\n", "        route = None\n", "    return generate_image_from_map(\n", "        obstacles=obstacles,\n", "        destination=destination,\n", "        route=route,\n", "        route_type=route_type,\n", "    )" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%%\n" } } }, { "cell_type": "code", "execution_count": null, "outputs": [], "source": [ "generate_example_image(route_type=\"dot\").resize(\n", "    (IMG_SHOW_SIZE, IMG_SHOW_SIZE), Image.Resampling.BICUBIC\n", ")" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%%\n" } } }, { "cell_type": "code", "execution_count": null, "outputs": [], "source": [ "generate_example_image(route_type=\"line\").resize(\n", "    (IMG_SHOW_SIZE, IMG_SHOW_SIZE), Image.Resampling.BICUBIC\n", ")" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%%\n" } } }, { "cell_type": "code", "execution_count": null, "outputs": [], "source": [ "if not NO_SHOW:\n", "    for seed in tqdm([42]):\n", "        plt.figure(figsize=(8, 8))\n", "        wind_dir = 180\n", "        generated_obstacles = generate_obstacles(seed)\n", "        generated_destination = generate_destination(generated_obstacles, seed)\n", "        route_generated = None\n", "        try:\n", "            route_generated, _ = experiments.generate_route(\n", "                position=Point(0, 0),\n", "                goal=generated_destination,\n", "                obstacles=generated_obstacles,\n", "                wind=(18, wind_dir),\n", "            )\n", "        except Exception:\n", "            route_generated = None\n", "        plot_situation(\n", "            obstacles=generated_obstacles,\n", "            destination=generated_destination,\n", "            obstacle_color=\"RED\",\n", "            
route=route_generated,\n", "            title=f\"Seed: {seed}, Cost: {route_generated.cost:.3f}\"\n", "            if route_generated\n", "            else f\"Seed: {seed}\",\n", "            legend=seed == 0,\n", "        )\n", "    plt.show()" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%%\n" } } }, { "cell_type": "code", "execution_count": null, "outputs": [], "source": [ "def generate_all_to_series(\n", "    seed: Optional[int] = None, image: bool = False\n", ") -> pd.Series:\n", "    \"\"\"Generates everything and aggregates all data into a `pd.Series`.\n", "\n", "    Args:\n", "        seed: The seed that should be used to generate map and destination.\n", "        image: Whether an image should be generated now, or postponed to save memory.\n", "\n", "    Returns:\n", "        A `pd.Series` containing the following:\n", "        - The seed that generated the map.\n", "        - The destination's x coordinate.\n", "        - The destination's y coordinate.\n", "        - A dict of obstacle polygons.\n", "        - The route generated for this map by the roBOOTer navigation system.\n", "        - Optionally the image containing all the information.\n", "          It can be generated at a later date without any loss of accuracy.\n", "    \"\"\"\n", "    obstacles = generate_obstacles(seed)\n", "    destination = generate_destination(obstacles, seed)\n", "\n", "    try:\n", "        route, _ = experiments.generate_route(\n", "            position=Point(0, 0),\n", "            goal=destination,\n", "            obstacles=obstacles,\n", "            wind=(18, 180),\n", "        )\n", "    except Exception:\n", "        route = None\n", "    return pd.Series(\n", "        data={\n", "            \"seed\": str(seed),\n", "            \"obstacles\": obstacles,\n", "            \"destination_x\": destination.x,\n", "            \"destination_y\": destination.y,\n", "            \"image\": generate_image_from_map(obstacles, destination, route)\n", "            if image\n", "            else pd.NA,\n", "            \"route\": route.points if route else pd.NA,\n", "            \"cost\": route.cost if route else pd.NA,\n", "        },\n", "        name=str(seed),\n", "    )" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%%\n" } } }, { "cell_type": "code", "execution_count": null, 
"outputs": [], "source": [ "if not NO_SHOW:\n", " df = pd.DataFrame(\n", " [generate_all_to_series(i, image=False) for i in tqdm(range(2))]\n", " ).set_index(\"seed\")\n", " df.to_pickle(\"test.pickle\")\n", " df" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%%\n" } } }, { "cell_type": "markdown", "source": [ "https://programtalk.com/python-examples/PIL.ImageDraw.Draw.polygon/)\n", "https://stackoverflow.com/questions/3654289/scipy-create-2d-polygon-mask" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%% md\n" } } }, { "cell_type": "code", "execution_count": null, "outputs": [], "source": [ "if os.getenv(\"PYRATE\"):\n", " save_frequency = int(os.getenv(\"save_frequency\", \"50\"))\n", " start_seed = int(os.getenv(\"seed_start\", \"0\"))\n", " continues = bool(os.getenv(\"continues\", \"false\"))\n", "\n", " files = glob.glob(\"data/*.pickle\")\n", " seed_groups = {int(file[9:-7]) for file in files}\n", " for next_seeds in range(start_seed, 1_000_000, save_frequency):\n", " if next_seeds in seed_groups:\n", " continue\n", " print(f\"Start generating routes for seed: {next_seeds}\")\n", " tmp_pickle_str: str = f\"data/tmp_{next_seeds:010}.pickle\"\n", " pd.DataFrame().to_pickle(tmp_pickle_str)\n", " df = pd.DataFrame(\n", " [\n", " generate_all_to_series(i, image=False)\n", " for i in tqdm(range(next_seeds, next_seeds + save_frequency, 1))\n", " ]\n", " ).set_index(\"seed\")\n", " pickle_to_file = f\"data/raw_{next_seeds:010}.pickle\"\n", " df.to_pickle(pickle_to_file)\n", " os.remove(tmp_pickle_str)\n", " if not continues:\n", " break" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%%\n" } } }, { "cell_type": "code", "execution_count": null, "outputs": [], "source": [ "DATA_COLLECTION_PATH: Final[str] = \"data/collected.pickle\"\n", "if os.path.exists(DATA_COLLECTION_PATH) and not GENERATE_NEW:\n", " collected_data = pd.read_pickle(DATA_COLLECTION_PATH)\n", "else:\n", " collected_data = pd.concat(\n", " [\n", " 
pd.read_pickle(filename)\n", "            for filename in tqdm(glob.glob(\"data/raw_*.pickle\")[:NUMBER_OF_FILES_LIMIT])\n", "        ]\n", "    )\n", "    number_of_maps = len(collected_data.index)\n", "    print(f\"{number_of_maps: 10} maps collected\")\n", "    collected_data.dropna(subset=[\"route\"], inplace=True)\n", "    number_of_routes = len(collected_data.index)\n", "    print(f\"{number_of_routes: 10} routes collected\")\n", "    collected_data.to_pickle(DATA_COLLECTION_PATH)\n", "collected_data" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%%\n" } } }, { "cell_type": "markdown", "source": [ "### Find and drop all routes that exit the map" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%% md\n" } } }, { "cell_type": "code", "execution_count": null, "outputs": [], "source": [ "def check_route_in_bounds(route):\n", "\n", "    # easier to debug in multiple lines\n", "    if route is None:\n", "        return False\n", "    if route is pd.NA:\n", "        return False\n", "    if not isinstance(route, np.ndarray):\n", "        return False\n", "    if (np.abs(route) > SIZE_ROUTE).any():\n", "        return False\n", "    return True\n", "\n", "\n", "data_before = len(collected_data.index)\n", "\n", "df_filter = collected_data[\"route\"].apply(check_route_in_bounds)\n", "filtered = collected_data[~df_filter]\n", "collected_data = collected_data[df_filter]\n", "\n", "data_after = len(collected_data.index)\n", "\n", "print(\n", "    f\"{data_before} - {data_before-data_after} = {data_after} sets of data remaining.\"\n", ")\n", "del data_before, data_after, filtered, df_filter" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%%\n" } } }, { "cell_type": "markdown", "source": [ "### Find and drop all routes with errors\n" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%% md\n" } } }, { "cell_type": "code", "execution_count": null, "outputs": [], "source": [ "def check_route_self_crossing(route):\n", "    if isinstance(route, float):\n", "        print(route)\n", "    return not 
LineString(route).is_simple\n", "\n", "\n", "data_before = len(collected_data.index)\n", "collected_data = collected_data[\n", " ~collected_data[\"route\"].mapply(check_route_self_crossing)\n", "]\n", "data_after = len(collected_data.index)\n", "print(\n", " f\"{data_before} - {data_before-data_after} = {data_after} sets of data remaining.\"\n", ")\n", "del data_before, data_after" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%%\n" } } }, { "cell_type": "markdown", "source": [ "# distribution over costs and points in routes!" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%% md\n" } } }, { "cell_type": "code", "execution_count": null, "outputs": [], "source": [ "QUANTILE_LIMIT: Final[float] = 0.95\n", "if \"DATA_UPPER_LIMIT_QUANTIL\" not in locals():\n", " DATA_UPPER_LIMIT_QUANTIL: Final[float] = collected_data[\"cost\"].quantile(\n", " QUANTILE_LIMIT\n", " )\n", " OVER_QUANTILE: Final[int] = int(len(collected_data.index) * (1 - QUANTILE_LIMIT))\n", "print(\n", " f\"{OVER_QUANTILE} entries over the {QUANTILE_LIMIT} quantile at {DATA_UPPER_LIMIT_QUANTIL:.3f}\"\n", ")" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%%\n" } } }, { "cell_type": "code", "execution_count": null, "outputs": [], "source": [ "collected_data[\"cost\"].plot.hist(bins=10, log=False) # find a drop limit\n", "plt.axvline(x=DATA_UPPER_LIMIT_QUANTIL, color=\"RED\", label=\"95% quantile\")\n", "plt.legend()\n", "plt.show()" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%%\n" } } }, { "cell_type": "code", "execution_count": null, "outputs": [], "source": [ "plt.figure(figsize=(15, 25))\n", "for count, (seed, row) in tqdm(\n", " enumerate(\n", " collected_data[collected_data[\"cost\"] > DATA_UPPER_LIMIT_QUANTIL]\n", " .sort_values(\"cost\")\n", " .iloc[0 :: max(1, int(OVER_QUANTILE / 12))]\n", " .iloc[:12]\n", " .iterrows()\n", " ),\n", " total=12,\n", "):\n", " plt.subplot(5, 3, count + 1)\n", " plot_situation(\n", " destination=Point(row.destination_x, 
row.destination_y),\n", " obstacles=row.generated_obstacles,\n", " obstacle_color=\"RED\",\n", " route=row.route_generated,\n", " title=f\"Cost: {row.cost}\",\n", " )\n", "plt.show()" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%%\n" } } }, { "cell_type": "code", "execution_count": null, "outputs": [], "source": [ "collected_data = collected_data.loc[collected_data[\"cost\"] < DATA_UPPER_LIMIT_QUANTIL]\n", "collected_data" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%%\n" } } }, { "cell_type": "code", "execution_count": null, "outputs": [], "source": [ "collected_data[\"cost\"].plot.hist(log=True)\n", "plt.show()" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%%\n" } } }, { "cell_type": "code", "execution_count": null, "outputs": [], "source": [ "collected_data[collected_data[\"cost\"] < DATA_UPPER_LIMIT_QUANTIL]\n", "\n", "plt.figure(figsize=(17.5, 25))\n", "for count, (seed, row) in enumerate(\n", " collected_data[collected_data[\"cost\"] < DATA_UPPER_LIMIT_QUANTIL]\n", " .sort_values(\"cost\")\n", " .iloc[1:600:51]\n", " .iterrows()\n", "):\n", " plt.subplot(4, 3, count + 1)\n", " plot_situation(\n", " destination=Point(row.destination_x, row.destination_y),\n", " obstacles=row.generated_obstacles,\n", " obstacle_color=\"RED\",\n", " route=row.route_generated,\n", " title=f\"Cost: {row.cost:.3f}\",\n", " legend=count == 0,\n", " )\n", "plt.show()\n", "del seed" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%%\n" } } }, { "cell_type": "markdown", "source": [ "# Visualize Complexity" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%% md\n" } } }, { "cell_type": "code", "execution_count": null, "outputs": [], "source": [ "def get_route_points(data):\n", " df = data[\"route\"].apply(lambda r: r.shape[0] - 1)\n", " df.name = \"route complexity\"\n", " return df\n", "\n", "\n", "route_points = get_route_points(collected_data)" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%%\n" } } 
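}, { "cell_type": "markdown", "source": [ "The complexity measure computed above is simply the number of legs in a route: a route stored as an array of n points has n - 1 straight segments between consecutive course-change positions. A minimal sketch with a toy route array (illustrative data, not taken from the generated set):\n", "\n", "```python\n", "import numpy as np\n", "\n", "# toy route with 4 points -> 3 legs between consecutive points\n", "toy_route = np.array([[0.0, 0.0], [10.0, 5.0], [20.0, 0.0], [30.0, 10.0]])\n", "complexity = toy_route.shape[0] - 1\n", "print(complexity)  # 3\n", "```" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%% md\n" } } 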
}, { "cell_type": "code", "execution_count": null, "outputs": [], "source": [ "route_points.plot.hist()\n", "plt.show()" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%%\n" } } }, { "cell_type": "code", "execution_count": null, "outputs": [], "source": [ "routes_before = len(collected_data.index)\n", "collected_data = collected_data[route_points <= 15]\n", "routes_after = len(collected_data.index)\n", "print(\n", " f\"{routes_before} - {routes_before - routes_after} = {routes_after} \"\n", " f\"routes with at most 15 course changes remain.\"\n", ")" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%%\n" } } }, { "cell_type": "code", "execution_count": null, "outputs": [], "source": [ "get_route_points(collected_data).plot.hist(bins=13)\n", "plt.show()" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%%\n" } } }, { "cell_type": "code", "execution_count": null, "outputs": [], "source": [ "get_route_points(collected_data).value_counts().sort_index()" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%%\n" } } }, { "cell_type": "markdown", "source": [ "# Dropping routes that are too easy" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%% md\n" } } }, { "cell_type": "code", "execution_count": null, "outputs": [], "source": [ "LIMIT_SIMPLE_CASES = 0.05\n", "values = get_route_points(collected_data).value_counts().sort_index()\n", "chance_limit = (\n", " (len(collected_data.index) * LIMIT_SIMPLE_CASES * (1 - LIMIT_SIMPLE_CASES))\n", " / values.get(1, 1)\n", " if 1 in values.index\n", " else 1\n", ")\n", "print(\n", " f\"Limiting simple cases to {LIMIT_SIMPLE_CASES * 100:.1f}% of the total routes. 
Reducing simple routes to {(chance_limit * 100):.1f}% of their volume.\"\n", ")" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%%\n" } } }, { "cell_type": "code", "execution_count": null, "outputs": [], "source": [ "collected_data = collected_data[\n", " (\n", " (get_route_points(collected_data) > 1)\n", " | (np.random.random(len(collected_data.index)) < chance_limit)\n", " )\n", "]\n", "get_route_points(collected_data).plot.hist(bins=13)\n", "plt.show()" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%%\n" } } }, { "cell_type": "code", "execution_count": null, "outputs": [], "source": [ "get_route_points(collected_data).value_counts().sort_index()" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%%\n" } } }, { "cell_type": "code", "execution_count": null, "outputs": [], "source": [ "collected_data" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%%\n" } } }, { "cell_type": "code", "execution_count": null, "outputs": [], "source": [ "del chance_limit" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%%\n" } } }, { "cell_type": "markdown", "source": [ "# Memory consumption" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%% md\n" } } }, { "cell_type": "code", "execution_count": null, "outputs": [], "source": [ "collected_data" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%%\n" } } }, { "cell_type": "code", "execution_count": null, "outputs": [], "source": [ "def generate_image_maps(row, route_type: Literal[\"dot\", \"line\"]):\n", " img = np.expand_dims(\n", " np.asarray(\n", " generate_image_from_map(\n", " obstacles=row.generated_obstacles,\n", " destination=Point(row.destination_x, row.destination_y),\n", " route=row.route_generated,\n", " route_type=route_type,\n", " seed=row.name,\n", " )\n", " ),\n", " axis=0,\n", " )\n", " img = img // 0xFF\n", " return img\n", "\n", "\n", "generated = collected_data.head().apply(generate_image_maps, axis=1, args=(\"dot\",))\n", 
"humanize.naturalsize(generated.memory_usage(deep=True))" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%%\n" } } }, { "cell_type": "code", "execution_count": null, "outputs": [], "source": [ "if \"image\" in collected_data.columns:\n", " del collected_data[\"image\"]" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%%\n" } } }, { "cell_type": "code", "execution_count": null, "outputs": [], "source": [ "DATA_WITH_IMG_PATH: Final[str] = \"data/collected_and_filtered.pickle\"\n", "if os.path.exists(DATA_WITH_IMG_PATH) and not GENERATE_NEW:\n", " collected_data = pd.read_pickle(DATA_WITH_IMG_PATH)\n", "else:\n", " collected_data.to_pickle(DATA_WITH_IMG_PATH)" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%%\n" } } }, { "cell_type": "code", "execution_count": null, "outputs": [], "source": [ "image_series = collected_data.progress_apply(\n", " generate_image_maps, axis=1, args=(\"line\",)\n", ")\n", "\n", "# collected_data[\"image_lines\"] = collected_data.apply(\n", "# generate_image_maps, axis=1, args=(\"line\",)\n", "# )" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%%\n" } } }, { "cell_type": "code", "execution_count": null, "outputs": [], "source": [ "collected_routes = np.concatenate(image_series)\n", "del image_series" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%%\n" } } }, { "cell_type": "code", "execution_count": null, "outputs": [], "source": [ "humanize.naturalsize(sys.getsizeof(collected_routes))" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%%\n" } } }, { "cell_type": "code", "execution_count": null, "outputs": [], "source": [ "collected_routes.dtype" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%%\n" } } }, { "cell_type": "code", "execution_count": null, "outputs": [], "source": [ "memory = sorted(\n", " [\n", " (x, sys.getsizeof(globals().get(x)))\n", " for x in dir()\n", " if not x.startswith(\"_\") and x not in sys.modules\n", " ],\n", " key=lambda x: 
x[1],\n", " reverse=True,\n", ")\n", "memory = {name: humanize.naturalsize(mem) for name, mem in memory[:10]}\n", "memory" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%%\n" } } }, { "cell_type": "code", "execution_count": null, "outputs": [], "source": [ "COLLECTED_ROUTES_DUMP = \"data/collected_routes_np.pickle\"\n", "with open(COLLECTED_ROUTES_DUMP, \"wb\") as f:\n", " pickle.dump(collected_routes, f)\n", "\n", "# with open(COLLECTED_ROUTES_DUMP,'rb') as f: collected_routes = pickle.load(f)" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%%\n" } } }, { "cell_type": "markdown", "source": [ "[Pix2Pix Tensorflow](https://www.tensorflow.org/tutorials/generative/pix2pix)" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%% md\n" } } }, { "cell_type": "code", "execution_count": null, "outputs": [], "source": [ "# Source: https://www.tensorflow.org/tutorials/generative/pix2pix\n", "def downsample(filters, size, apply_batchnorm=True):\n", " initializer = tf.random_normal_initializer(mean=0.0, stddev=0.02)\n", "\n", " result = tf.keras.Sequential()\n", " result.add(\n", " tf.keras.layers.Conv2D(\n", " filters,\n", " size,\n", " strides=2,\n", " padding=\"same\",\n", " kernel_initializer=initializer,\n", " use_bias=False,\n", " )\n", " )\n", "\n", " if apply_batchnorm:\n", " result.add(tf.keras.layers.BatchNormalization())\n", "\n", " result.add(tf.keras.layers.LeakyReLU())\n", "\n", " return result\n", "\n", "\n", "downsample(64, 4)" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%%\n" } } }, { "cell_type": "code", "execution_count": null, "outputs": [], "source": [ "collected_routes[0].shape" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%%\n" } } }, { "cell_type": "code", "execution_count": null, "outputs": [], "source": [ "tf.expand_dims(collected_routes[0], 0).shape" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%%\n" } } }, { "cell_type": "code", "execution_count": null, "outputs": [], 
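"source": [ "# A small, pure-Python sanity check (illustrative only, no TensorFlow\n", "# needed): every downsample block above is a stride-2 convolution with\n", "# padding='same', so one pass halves the spatial resolution, rounding up.\n", "import math\n", "\n", "\n", "def downsampled_size(size, stride=2):\n", "    # output size of a strided convolution with padding='same'\n", "    return math.ceil(size / stride)\n", "\n", "\n", "size = 128\n", "for _ in range(3):\n", "    size = downsampled_size(size)\n", "print(size)  # 128 -> 64 -> 32 -> 16\n" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%%\n" } } }, { "cell_type": "code", "execution_count": null, "outputs": [], 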
"source": [ "down_model = downsample(3, 4)\n", "tf.cast(tf.expand_dims(collected_routes[1], 0), \"float16\", name=None)\n", "\n", "down_result = down_model(\n", " tf.cast(tf.expand_dims(collected_routes[1], 0), \"float16\", name=None)\n", ")\n", "print(down_result.shape)" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%%\n" } } }, { "cell_type": "code", "execution_count": null, "outputs": [], "source": [ "# Source: https://www.tensorflow.org/tutorials/generative/pix2pix\n", "def upsample(filters, size, apply_dropout=False):\n", " initializer = tf.random_normal_initializer(0.0, 0.02)\n", "\n", " result = tf.keras.Sequential()\n", " result.add(\n", " tf.keras.layers.Conv2DTranspose(\n", " filters,\n", " size,\n", " strides=2,\n", " padding=\"same\",\n", " kernel_initializer=initializer,\n", " use_bias=False,\n", " )\n", " )\n", "\n", " result.add(tf.keras.layers.BatchNormalization())\n", "\n", " if apply_dropout:\n", " result.add(tf.keras.layers.Dropout(0.5))\n", "\n", " result.add(tf.keras.layers.ReLU())\n", "\n", " return result" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%%\n" } } }, { "cell_type": "code", "execution_count": null, "outputs": [], "source": [ "up_model = upsample(3, 4)\n", "up_result = up_model(down_result)\n", "up_result.shape" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%%\n" } } }, { "cell_type": "code", "execution_count": null, "outputs": [], "source": [ "def model_generator():\n", "\n", " inputs = tf.keras.layers.Input(shape=[IMG_SIZE, IMG_SIZE, 2])\n", "\n", " # down_stack = [\n", " # downsample(64, 4, apply_batchnorm=False), # (batch_size, 64, 64, 128)\n", " # downsample(128, 4), # (batch_size, 8, 8, 512)\n", " # downsample(512, 4), # (batch_size, 4, 4, 512)\n", " # downsample(512, 4), # (batch_size, 2, 2, 512)\n", " # downsample(512, 4), # (batch_size, 1, 1, 512)\n", " # downsample(512, 4), # (batch_size, 1, 1, 512)\n", " # downsample(512, 4), # (batch_size, 1, 1, 512)\n", " # ]\n", " #\n", " # 
up_stack = [\n", " # upsample(512, 4, apply_dropout=True), # (batch_size, 4, 4, 1024)\n", " # upsample(512, 4, apply_dropout=True), # (batch_size, 4, 4, 1024)\n", " # upsample(512, 4, apply_dropout=True), # (batch_size, 4, 4, 1024)\n", " # upsample(512, 4), # (batch_size, 16, 16, 1024)\n", " # upsample(128, 4), # (batch_size, 32, 32, 512)\n", " # upsample(64, 4), # (batch_size, 64, 64, 256)\n", " # ]\n", "\n", " down_stack = [\n", " downsample(64, 4, apply_batchnorm=False), # (batch_size, 64, 64, 128)\n", " downsample(128, 4), # (batch_size, 8, 8, 512)\n", " downsample(256, 4), # (batch_size, 4, 4, 512)\n", " downsample(256, 4), # (batch_size, 2, 2, 512)\n", " downsample(256, 4), # (batch_size, 1, 1, 512)\n", " downsample(512, 4), # (batch_size, 1, 1, 512)\n", " downsample(512, 4), # (batch_size, 1, 1, 512)\n", " ]\n", "\n", " up_stack = [\n", " upsample(512, 4, apply_dropout=True), # (batch_size, 4, 4, 1024)\n", " upsample(256, 4, apply_dropout=True), # (batch_size, 4, 4, 1024)\n", " upsample(256, 4, apply_dropout=True), # (batch_size, 4, 4, 1024)\n", " upsample(256, 4), # (batch_size, 16, 16, 1024)\n", " upsample(128, 4), # (batch_size, 32, 32, 512)\n", " upsample(64, 4), # (batch_size, 64, 64, 256)\n", " ]\n", "\n", " initializer = tf.random_normal_initializer(0.0, 0.02)\n", " last = tf.keras.layers.Conv2DTranspose(\n", " 1,\n", " 4,\n", " strides=2,\n", " padding=\"same\",\n", " kernel_initializer=initializer,\n", " activation=\"tanh\",\n", " ) # (batch_size, 256, 256, 3)\n", "\n", " x = inputs\n", "\n", " # Down sampling through the model\n", " skips = []\n", " for down in down_stack:\n", " x = down(x)\n", " skips.append(x)\n", "\n", " skips = reversed(skips[:-1])\n", "\n", " # Up sampling and establishing the skip connections\n", " for up, skip in zip(up_stack, skips):\n", " x = up(x)\n", " x = tf.keras.layers.Concatenate()([x, skip])\n", "\n", " x = last(x)\n", "\n", " return tf.keras.Model(inputs=inputs, outputs=x)\n", "\n", "\n", "generator = 
model_generator()\n", "tf.keras.utils.plot_model(generator, show_shapes=True, dpi=64)" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%%\n" } } }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2022-07-15T18:58:57.587312Z", "start_time": "2022-07-15T18:58:57.587312Z" }, "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "!pip install pydot" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2022-07-15T18:58:57.588314Z", "start_time": "2022-07-15T18:58:57.588314Z" }, "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "!pip install pydotplus" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2022-07-15T18:58:57.589313Z", "start_time": "2022-07-15T18:58:57.589313Z" }, "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "generator.compile(\n", " optimizer=tf.keras.optimizers.RMSprop(), # Optimizer\n", " # Loss function to minimize\n", " loss=\"mean_squared_error\",\n", " # tf.keras.losses.SparseCategoricalCrossentropy(),\n", " # List of metrics 
to monitor\n", " metrics=[\n", " \"binary_crossentropy\",\n", " \"mean_squared_error\",\n", " \"mean_absolute_error\",\n", " ], # root_mean_squared_error\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2022-07-15T18:58:57.592314Z", "start_time": "2022-07-15T18:58:57.592314Z" }, "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "early_stop = tf.keras.callbacks.EarlyStopping(\n", " monitor=\"mean_squared_error\",\n", " min_delta=0.0005,\n", " patience=2,\n", " verbose=0,\n", " mode=\"auto\",\n", " restore_best_weights=True,\n", ")\n", "\n", "tf_board = tf.keras.callbacks.TensorBoard(\n", " log_dir=\"./log_dir\",\n", " histogram_freq=100,\n", " write_graph=False,\n", " write_images=False,\n", " write_steps_per_second=True,\n", " update_freq=\"epoch\",\n", " profile_batch=(20, 40),\n", " embeddings_freq=0,\n", " embeddings_metadata=None,\n", ")\n", "\n", "# reduce the learning rate when the training loss stops improving\n", "reduce_learning_rate = tf.keras.callbacks.ReduceLROnPlateau(\n", " monitor=\"loss\", factor=0.2, patience=5, min_lr=0.0001, verbose=1\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2022-07-15T18:58:57.594314Z", "start_time": "2022-07-15T18:58:57.594314Z" }, "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "plt.figure(figsize=(17.5, 25))\n", "np_array = np.flip(collected_routes[1, :, :, :], axis=0)\n", "\n", "for channel in tqdm(range(3)):\n", " plt.subplot(1, 4, channel + 1)\n", " plt.imshow(np_array[:, :, channel], interpolation=\"nearest\")\n", "plt.subplot(1, 4, 4)\n", "plt.imshow(0x88 * np_array[:, :, 0] + 0xFF * np_array[:, :, 2], interpolation=\"nearest\")\n", "plt.show()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2022-07-15T18:58:57.596313Z", "start_time": "2022-07-15T18:58:57.596313Z" }, "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "collected_routes[:, :, :, :2].shape" ] }, { "cell_type": "code", 
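"execution_count": null, "outputs": [], "source": [ "# Hedged sketch with dummy data: the training setup below slices the\n", "# stacked images so that the first two channels (presumably map and\n", "# destination) form the network input and the third channel (the route)\n", "# is the target.\n", "import numpy as np\n", "\n", "dummy_routes = np.zeros((4, 64, 64, 3), dtype=np.uint8)  # stand-in for collected_routes\n", "model_input = dummy_routes[:, :, :, :2]\n", "model_target = dummy_routes[:, :, :, 2]\n", "print(model_input.shape, model_target.shape)  # (4, 64, 64, 2) (4, 64, 64)\n" ], "metadata": { "pycharm": { "name": "#%%\n" } } }, { "cell_type": "code", 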
"execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2022-07-15T18:58:57.624444Z", "start_time": "2022-07-15T18:58:57.598317Z" }, "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "train_dataset = tf.data.Dataset.from_tensor_slices(\n", " (collected_routes[:, :, :, :2], collected_routes[:, :, :, 2])\n", ")\n", "# test_dataset = tf.data.Dataset.from_tensor_slices((test_examples, test_labels))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2022-07-15T18:58:57.626439Z", "start_time": "2022-07-15T18:58:57.626439Z" }, "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "train_dataset" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2022-07-15T18:58:57.628442Z", "start_time": "2022-07-15T18:58:57.628442Z" }, "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "BATCH_SIZE = 64\n", "SHUFFLE_BUFFER_SIZE = 100\n", "# train_dataset = train_dataset.shuffle(SHUFFLE_BUFFER_SIZE).batch(BATCH_SIZE)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2022-07-15T18:58:57.629441Z", "start_time": "2022-07-15T18:58:57.629441Z" }, "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "train_dataset = train_dataset.batch(BATCH_SIZE)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2022-07-15T18:58:57.630444Z", "start_time": "2022-07-15T18:58:57.630444Z" }, "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "history = generator.fit(\n", " train_dataset,\n", " epochs=20,\n", " # no batch_size here: the dataset is already batched above\n", " use_multiprocessing=True,\n", " workers=5,\n", " callbacks=[early_stop, tf_board],\n", " # tqdm_callback,\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": 
"2022-07-15T18:58:57.632443Z", "start_time": "2022-07-15T18:58:57.632443Z" }, "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "plt.plot(history.history[\"loss\"])" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2022-07-15T18:58:57.633443Z", "start_time": "2022-07-15T18:58:57.633443Z" }, "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "collected_routes[0:1, :, :, :2].shape" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2022-07-15T18:58:57.634439Z", "start_time": "2022-07-15T18:58:57.634439Z" }, "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "predicted = generator.predict(\n", " collected_routes[:100, :, :, :2],\n", " batch_size=None,\n", " verbose=\"auto\",\n", " steps=None,\n", " callbacks=None,\n", " max_queue_size=10,\n", " workers=3,\n", " use_multiprocessing=True,\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2022-07-15T18:58:57.635444Z", "start_time": "2022-07-15T18:58:57.635444Z" }, "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "predicted.shape" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2022-07-15T18:58:57.637443Z", "start_time": "2022-07-15T18:58:57.637443Z" }, "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "plt.imshow(predicted[1, :, :, 0], interpolation=\"nearest\")\n", "plt.show()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2022-07-15T18:58:57.639225Z", "start_time": "2022-07-15T18:58:57.639225Z" }, "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "for pos in range(5):\n", " plt.imshow(\n", " predicted[pos, :, :, 0] * 0xFF + collected_routes[pos, :, :, 0] * 20,\n", " interpolation=\"nearest\",\n", " )\n", " plt.show()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { 
"end_time": "2022-07-15T18:58:57.641233Z", "start_time": "2022-07-15T18:58:57.640226Z" }, "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "# tf.keras.utils.plot_model(generator)" ] }, { "cell_type": "raw", "metadata": { "ExecuteTime": { "end_time": "2022-07-11T16:47:19.020872Z", "start_time": "2022-07-11T16:47:17.607427Z" }, "pycharm": { "name": "#%% raw\n" } }, "source": [ "!pip install pydot pydotplus graphviz" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "```bibtex\n", "@article{article,\n", "author = {Jang, Hoyun and Lee, Inwon and Seo, Hyoungseock},\n", "year = {2017},\n", "month = {09},\n", "pages = {4109-4117},\n", "title = {Effectiveness of CFRP rudder aspect ratio for scale model catamaran racing yacht test},\n", "volume = {31},\n", "journal = {Journal of Mechanical Science and Technology},\n", "doi = {10.1007/s12206-017-0807-8}\n", "}\n", "```" ] }, { "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "Feedback received on the topic proposal:\n", "\n", "I would also lean toward option 1, but I agree that the topic is very broad. Couldn't you pick out a sub-area? I don't know much about sailing, so let me briefly summarize what you are planning:\n", "\n", "- You generate training data with the existing but slow gradient-descent algorithm. I assume these are local routes in a relatively small map section. Does the runtime allow you to compute a large number of routes?\n", "- You then have a map as input and a list of turning points as output.\n", "- Why do you want to compute a heatmap from that? I have not yet understood this step.\n", "- If you want to train a heatmap from a map and have enough examples for it, GANs could be helpful: https://arxiv.org/abs/1611.07004\n", "\n", "I would advise you to reduce the problem so that it remains manageable within the scope of the module. Everything else can be saved for later work. The 2nd topic is also fine, but perhaps not quite as exciting. I leave the decision to you.\n", "\n", "Kind regards\n", "Heiner Giefers" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.2" }, "toc": { "base_numbering": 1, "nav_menu": {}, "number_sections": true, "sideBar": true, "skip_h1_title": false, "title_cell": "Table of Contents", "title_sidebar": "Contents", "toc_cell": false, "toc_position": {}, "toc_section_display": true, "toc_window_display": false } }, "nbformat": 4, "nbformat_minor": 1 }