|
18 | 18 | " \n",
|
19 | 19 | "## MPI real world example problem\n",
|
20 | 20 | "\n",
|
21 |
| - "In a previous lesson we have seen *multi-processing* being used to solve the generation of the Julia set. An alternative approach is to use *message passing*.\n", |
| 21 | + "The problem we will attempt to solve is constructing a fractal. This kind of problem is often described as \"embarrassingly parallel\", meaning that each element of the result has no dependency on any of the other elements, so we can solve the problem in parallel without too much difficulty. Let's get started by creating a new script - `fractal.py`:\n", |
22 | 22 | "\n",
|
23 |
| - "As mentioned earlier, this is a relatively simple problem to parallelise. If we consider running the program with multiple processes, all we need to do to divide the work is to divide the complex grid up between the processes. Thinking back to previous sections, we covered an MPI function that can achieve this - the `scatter` method of the MPI communicator.\n", |
| 23 | + "### Setting up our serial problem\n", |
| 24 | + "\n", |
| 25 | + "Let's first think about our problem in serial - we want to construct the [Julia set](https://en.wikipedia.org/wiki/Julia_set) fractal, so we need to create a grid of complex numbers to operate over. We can create a simple function to do this:\n", |
| 26 | + "\n", |
| 27 | + "```python\n", |
| 28 | + "# fractal.py\n", |
| 29 | + "import numpy as np\n", |
| 30 | + "\n", |
| 31 | + "def complex_grid(extent, n_cells):\n", |
| 32 | + "    # 2 * n_cells sample points covering [-extent, extent)\n", |
| 33 | + "    mesh_range = np.arange(-extent, extent, extent / n_cells)\n", |
| 34 | + "\n", |
| 35 | + "    # Combine an imaginary axis and a real axis into a 2D complex grid\n", |
| 36 | + "    x, y = np.meshgrid(mesh_range * 1j, mesh_range)\n", |
| | + "    z = x + y\n", |
| | + "\n", |
| | + "    return z\n", |
| 37 | + "```\n", |
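| | + "\n", |
| | + "As a quick sanity check (the values below are just an example), note that because the spacing is `extent / n_cells` across the interval `[-extent, extent)`, the grid has `2 * n_cells` points along each side:\n", |
| | + "\n", |
| | + "```python\n", |
| | + "# Example values, not part of the final script\n", |
| | + "grid = complex_grid(2, 2000)\n", |
| | + "print(grid.shape)   # (4000, 4000) - 2 * n_cells points per side\n", |
| | + "print(grid[0, 0])   # (-2-2j) - one corner of the complex plane\n", |
| | + "```\n", |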
| 38 | + "\n", |
| 39 | + "Now, we can create a function that will calculate the Julia set convergence for each element in the complex grid:\n", |
| 40 | + "\n", |
| 41 | + "```python\n", |
| 42 | + "import warnings\n", |
| 43 | + "\n", |
| 44 | + "...\n", |
| 45 | + "\n", |
| 46 | + "def julia_set(grid, num_iter, c):\n", |
| 47 | + "\n", |
| 48 | + "    # Tracks how many iterations each element survives before diverging\n", |
| 49 | + "    fractal = np.zeros(np.shape(grid))\n", |
| 50 | + "\n", |
| 51 | + "    # Iterate through the operation z := z**2 + c.\n", |
| 52 | + "    for _ in range(num_iter):\n", |
| 53 | + "        # Suppress the overflow warnings raised once elements diverge\n", |
| 54 | + "        with warnings.catch_warnings():\n", |
| 55 | + "            warnings.simplefilter(\"ignore\")\n", |
| 56 | + "            grid = grid ** 2 + c\n", |
| 57 | + "            index = np.abs(grid) < np.inf\n", |
| 58 | + "        fractal[index] = fractal[index] + 1\n", |
| 59 | + "\n", |
| | + "    return fractal\n", |
| 60 | + "```\n", |
| 61 | + "\n", |
| 62 | + "This function calculates how many iterations it takes (if ever) for each element of the complex grid to diverge to infinity under the iteration `z = z**2 + c`. The function itself is not the focus of this exercise so much as a way to make the computer perform some work! Let's use these functions to set up our problem in serial, without any parallelism:\n", |
| 63 | + "\n", |
| 64 | + "```python\n", |
| 65 | + "...\n", |
| 67 | + "\n", |
| 68 | + "c = -0.8 - 0.22 * 1j\n", |
| 69 | + "extent = 2\n", |
| 70 | + "cells = 2000\n", |
| 71 | + "\n", |
| 72 | + "grid = complex_grid(extent, cells)\n", |
| 73 | + "fractal = julia_set(grid, 80, c)\n", |
| 74 | + "```\n", |
| 75 | + "\n", |
| 76 | + "If we run the Python script (`python fractal.py`) it takes a few seconds to complete (this will vary depending on your machine), so we can already see that we are making our computer work reasonably hard with just a few lines of code. If we use the `time` command we can get a simple overview of how much time and how many resources are being used:\n", |
| 77 | + "\n", |
| 78 | + "```\n", |
| 79 | + "$ time python fractal.py\n", |
| 80 | + "python fractal.py  5.96s user 3.37s system 123% cpu 7.558 total\n", |
| 81 | + "```\n", |
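| | + "\n", |
| | + "In this output, `user` is CPU time spent in the program itself, `system` is CPU time the kernel spent on its behalf, and the percentage is the combined CPU time divided by the elapsed wall-clock time - here (5.96 + 3.37) / 7.558 gives the reported 123%.\n", |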
| 82 | + "\n", |
| 85 | + "````{note}\n", |
| 86 | + "We can also visualise the Julia set with the code snippet:\n", |
| 87 | + "\n", |
| 88 | + "```python\n", |
| 89 | + "import matplotlib.pyplot as plt\n", |
| 90 | + "\n", |
| 91 | + "...\n", |
| 92 | + "\n", |
| 93 | + "plt.imshow(fractal, extent=[-extent, extent, -extent, extent], aspect='equal')\n", |
| 94 | + "plt.show()\n", |
| 95 | + "```\n", |
| 96 | + "\n", |
| | + "but doing so will affect the numbers reported when we time the script, so remember to remove the plotting code before measuring how long the function takes.\n", |
| | + "````\n", |
| 97 | + "\n", |
| 98 | + "### Download the complete serial file\n", |
| 99 | + "[Download complete serial fractal example file](complete_files/fractal_complete.py)\n", |
| 100 | + "\n", |
| 101 | + "### Parallelising our serial problem\n", |
| 102 | + "\n", |
| 103 | + "Next we are going to solve the Julia set problem in parallel using *message passing*. As mentioned earlier, this is a relatively simple problem to parallelise. If we consider running the program with multiple processes, all we need to do is divide the complex grid up between the processes. Thinking back to previous sections, we covered an MPI function that can achieve this - the `scatter` method of the MPI communicator.\n", |
24 | 104 | "\n",
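| | + "Before we change the fractal code itself, it may help to see the idea in isolation. Below is a minimal sketch of `scatter` dividing an array between processes, assuming `mpi4py` is installed; the array name, the chunking with `np.array_split`, and the script name are illustrative, not part of the lesson's files:\n", |
| | + "\n", |
| | + "```python\n", |
| | + "# scatter_sketch.py - a minimal, illustrative example (assumed names)\n", |
| | + "from mpi4py import MPI\n", |
| | + "import numpy as np\n", |
| | + "\n", |
| | + "comm = MPI.COMM_WORLD\n", |
| | + "rank = comm.Get_rank()\n", |
| | + "n_ranks = comm.Get_size()\n", |
| | + "\n", |
| | + "if rank == 0:\n", |
| | + "    # The root process builds the full workload and splits it\n", |
| | + "    # into one (roughly equal) chunk per process\n", |
| | + "    work = np.arange(8.0)\n", |
| | + "    chunks = np.array_split(work, n_ranks)\n", |
| | + "else:\n", |
| | + "    chunks = None\n", |
| | + "\n", |
| | + "# Each process receives its own chunk of the work\n", |
| | + "my_chunk = comm.scatter(chunks, root=0)\n", |
| | + "print(f\"Rank {rank} received {my_chunk}\")\n", |
| | + "```\n", |
| | + "\n", |
| | + "Running this with, for example, `mpirun -n 4 python scatter_sketch.py` would show each rank printing a different slice of the array.\n", |
| | + "\n", |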
|
25 | 105 | "We can directly take the example from the previous chapter and apply it to the complex mesh creation function:\n",
|
26 | 106 | "\n",
|
|