@@ -259,7 +259,7 @@ method that will give us the indices where a given condition is True.
 Even if this first version does not use nested loops, it is far from optimal
 because of the use of the four `argwhere` calls that may be quite slow. We can
 instead factorize the rules into cells that will survive (stay at 1) and cells
-that will give birth. For doing this, we can take advantage of Numpy boolean
+that will give birth. For doing this, we can take advantage of NumPy boolean
 capability and write quite naturally:

 .. note::
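A minimal sketch of this boolean factorization, assuming the grid is an integer array `Z` padded with a one-cell border of zeros (the name, the padding and the helper function below are assumptions):

.. code:: python

   import numpy as np

   def iterate(Z):
       # Count the eight neighbours of every interior cell
       N = (Z[0:-2, 0:-2] + Z[0:-2, 1:-1] + Z[0:-2, 2:] +
            Z[1:-1, 0:-2]                 + Z[1:-1, 2:] +
            Z[2:  , 0:-2] + Z[2:  , 1:-1] + Z[2:  , 2:])
       # Rules as boolean arrays: dead cells that are born, live cells that survive
       birth   = (N == 3) & (Z[1:-1, 1:-1] == 0)
       survive = ((N == 2) | (N == 3)) & (Z[1:-1, 1:-1] == 1)
       Z[...] = 0
       Z[1:-1, 1:-1][birth | survive] = 1
       return Z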
@@ -442,17 +442,17 @@ actually computes the sequence :math:`f_c(f_c(f_c ...)))`. The vectorization of
 such code is not totally straightforward because the internal `return` implies a
 differential processing of the element. Once it has diverged, we don't need to
 iterate any more and we can safely return the iteration count at
-divergence. The problem is to then do the same in numpy. But how?
+divergence. The problem is to then do the same in NumPy. But how?

-Numpy implementation
+NumPy implementation
 ++++++++++++++++++++

 The trick is to search at each iteration values that have not yet
 diverged and update relevant information for these values and only
 these values. Because we start from :math:`Z = 0`, we know that each
 value will be updated at least once (when they're equal to :math:`0`,
 they have not yet diverged) and will stop being updated as soon as
-they've diverged. To do that, we'll use numpy fancy indexing with the
+they've diverged. To do that, we'll use NumPy fancy indexing with the
 `less(x1,x2)` function that returns the truth value of `(x1 < x2)`
 element-wise.

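A minimal sketch of this fancy-indexing idea (the function name, signature and dtypes below are assumptions):

.. code:: python

   import numpy as np

   def mandelbrot_numpy(xmin, xmax, ymin, ymax, xn, yn, maxiter, horizon=2.0):
       X = np.linspace(xmin, xmax, xn, dtype=np.float32)
       Y = np.linspace(ymin, ymax, yn, dtype=np.float32)
       C = X + Y[:, None] * 1j                    # grid of candidate c values
       N = np.zeros(C.shape, dtype=np.int64)      # iteration count at divergence
       Z = np.zeros(C.shape, dtype=np.complex64)
       for n in range(maxiter):
           I = np.less(abs(Z), horizon)           # True where not yet diverged
           N[I] = n                               # update only those points
           Z[I] = Z[I] ** 2 + C[I]
       N[N == maxiter - 1] = 0                    # points that never diverged
       return Z, N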
@@ -484,14 +484,14 @@ Here is the benchmark:
 1 loops, best of 3: 1.15 sec per loop


-Faster numpy implementation
+Faster NumPy implementation
 +++++++++++++++++++++++++++

 The gain is roughly a 5x factor, not as much as we could have
 expected. Part of the problem is that the `np.less` function implies
 :math:`xn \times yn` tests at every iteration while we know that some
 values have already diverged. Even if these tests are performed at the
-C level (through numpy), the cost is nonetheless
+C level (through NumPy), the cost is nonetheless
 significant. Another approach proposed by `Dan Goodman
 <https://thesamovar.wordpress.com/>`_ is to work on a dynamic array at
 each iteration that stores only the points which have not yet
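One possible sketch of that dynamic-array idea, where the working arrays shrink at every iteration so that only non-diverged points are processed (names, signature and dtypes are assumptions, not the referenced author's exact code):

.. code:: python

   import numpy as np

   def mandelbrot_dynamic(xmin, xmax, ymin, ymax, xn, yn, maxiter, horizon=2.0):
       X = np.linspace(xmin, xmax, xn, dtype=np.float32)
       Y = np.linspace(ymin, ymax, yn, dtype=np.float32)
       C = (X + Y[:, None] * 1j).astype(np.complex64).ravel()  # flattened grid
       N = np.zeros(C.size, dtype=np.int64)    # escape iteration, full grid
       idx = np.arange(C.size)                 # flat indices of points still iterating
       Z = np.zeros(C.size, dtype=np.complex64)
       Cw = C.copy()                           # working copy that shrinks each step
       for n in range(maxiter):
           if idx.size == 0:
               break
           Z = Z * Z + Cw
           diverged = np.abs(Z) > horizon
           N[idx[diverged]] = n + 1            # record when each point escapes
           keep = ~diverged                    # drop diverged points from the workload
           idx, Z, Cw = idx[keep], Z[keep], Cw[keep]
       return N.reshape(yn, xn)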
@@ -574,7 +574,7 @@ We now want to measure the fractal dimension of the Mandelbrot set using the
 <https://en.wikipedia.org/wiki/Minkowski–Bouligand_dimension>`_. To do that, we
 need to do box-counting with a decreasing box size (see figure below). As you
 can imagine, we cannot use pure Python because it would be way too slow. The goal of
-the exercise is to write a function using numpy that takes a two-dimensional
+the exercise is to write a function using NumPy that takes a two-dimensional
 float array and returns the dimension. We'll consider values in the array to be
 normalized (i.e. all values are between 0 and 1).

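One possible box-counting sketch for this exercise (the binarization threshold and the choice of power-of-two box sizes are assumptions):

.. code:: python

   import numpy as np

   def fractal_dimension(Z, threshold=0.5):
       Z = (Z > threshold).astype(int)            # binarize the normalized input
       p = int(np.log2(min(Z.shape)))
       sizes = 2 ** np.arange(p, 1, -1)           # decreasing box sizes
       counts = []
       for k in sizes:
           # Sum values inside non-overlapping k x k boxes
           S = np.add.reduceat(
               np.add.reduceat(Z, np.arange(0, Z.shape[0], k), axis=0),
               np.arange(0, Z.shape[1], k), axis=1)
           # Count boxes that are neither empty nor completely full
           counts.append(len(np.where((S > 0) & (S < k * k))[0]))
       # The dimension is minus the slope of log(count) versus log(size)
       coeffs = np.polyfit(np.log(sizes), np.log(counts), 1)
       return -coeffs[0]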
@@ -602,7 +602,7 @@ References

 * `How To Quickly Compute the Mandelbrot Set in Python <https://www.ibm.com/developerworks/community/blogs/jfp/entry/How_To_Compute_Mandelbrodt_Set_Quickly?lang=en>`_, Jean Francois Puget, 2015.
 * `My Christmas Gift: Mandelbrot Set Computation In Python <https://www.ibm.com/developerworks/community/blogs/jfp/entry/My_Christmas_Gift?lang=en>`_, Jean Francois Puget, 2015.
-* `Fast fractals with Python and Numpy<https://thesamovar.wordpress.com/2009/03/22/fast-fractals-with-python-and-numpy/>`_, Dan Goodman, 2009.
+* `Fast fractals with Python and NumPy <https://thesamovar.wordpress.com/2009/03/22/fast-fractals-with-python-and-numpy/>`_, Dan Goodman, 2009.
 * `Renormalizing the Mandelbrot Escape <http://linas.org/art-gallery/escape/escape.html>`_, Linas Vepstas, 1997.


@@ -724,7 +724,7 @@ To complete the picture, we can also create a `Flock` object:

 Using this approach, we can have up to 50 boids until the computation
 time becomes too slow for a smooth animation. As you may have guessed,
-we can do much better using numpy, but let me first point out the main
+we can do much better using NumPy, but let me first point out the main
 problem with this Python implementation. If you look at the code, you
 will certainly notice there is a lot of redundancy. More precisely, we
 do not exploit the fact that the Euclidean distance is reflexive, that
@@ -736,10 +736,10 @@ caching the result for the other functions. In the end, we are
 computing :math:`3n^2` distances instead of :math:`\frac{n^2}{2}`.


-Numpy implementation
+NumPy implementation
 ++++++++++++++++++++

-As you might expect, the numpy implementation takes a different approach and
+As you might expect, the NumPy implementation takes a different approach and
 we'll gather all our boids into a `position` array and a `velocity` array:
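A minimal sketch of what these arrays and a shared, computed-once distance matrix might look like (the number of boids and the interaction radii below are assumptions):

.. code:: python

   import numpy as np

   n = 50
   position = np.random.uniform(0.0, 1.0, (n, 2))     # one row per boid: x, y
   velocity = np.random.uniform(-0.5, 0.5, (n, 2))    # one row per boid: vx, vy

   # Pairwise differences via broadcasting, then a single n x n distance matrix
   dx = np.subtract.outer(position[:, 0], position[:, 0])
   dy = np.subtract.outer(position[:, 1], position[:, 1])
   distance = np.hypot(dx, dy)

   # Boolean masks reused by all three rules (separation, alignment, cohesion)
   mask_separation = (distance > 0) & (distance < 0.05)
   mask_alignment  = (distance > 0) & (distance < 0.25)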