illustrated below (reading from left to right, top to bottom). Once this is
done, we can ascend the gradient from the starting node. You can check on the
figure that this leads to the shortest path.

.. admonition:: **Figure**
   :class: legend

   Value iteration algorithm on a simple maze. Once the entrance has been
   reached, it is easy to find the shortest path by ascending the value
   gradient.

.. image:: data/value-iteration-1.pdf
   :width: 19%
References
++++++++++

* `Animating Sand as a Fluid <https://www.cs.ubc.ca/%7Erbridson/docs/zhu-siggraph05-sandfluid.pdf>`_,
  Yongning Zhu & Robert Bridson, 2005.


Blue noise sampling
-------------------
Blue noise refers to sample sets that have a random yet uniform distribution
and no spectral bias. Such noise is very useful in a variety of graphics
applications such as rendering, dithering and stippling. Many different
methods have been proposed to achieve such noise; the simplest is certainly
the DART method.

.. admonition:: **Figure 10**
   :class: legend

   Detail of "The Starry Night", Vincent van Gogh, 1889. The detail has been
   resampled using Voronoi cells whose centers are obtained from a blue noise
   sampling.
.. image:: data/mosaic.png
   :width: 100%


DART method
+++++++++++
The DART method is one of the earliest and simplest methods. It works by
sequentially drawing uniform random points and accepting only those that lie
at a minimum distance from every previously accepted sample. This sequential
method is therefore extremely slow because each new candidate needs to be
tested against all previously accepted candidates: the more points you accept,
the slower the method becomes. Let's consider the unit surface and a minimum
radius `r` to be enforced between each point.

Knowing that the densest packing of circles in the plane is the hexagonal
lattice of the bee's honeycomb, we know this density is :math:`d =
\frac{1}{6}\pi\sqrt{3}` (in fact `I learned it
<https://en.wikipedia.org/wiki/Circle_packing>`_ while writing this book).
Considering circles of radius :math:`r`, we can thus pack at most
:math:`\frac{d}{\pi r^2} = \frac{\sqrt{3}}{6r^2} = \frac{1}{2r^2\sqrt{3}}` of
them. This is the theoretical upper limit for the number of discs we can pack
onto the surface, but we will likely not reach it because of the random
placements. Furthermore, because a lot of points will be rejected after a few
have been accepted, we need to set a limit on the number of successive failed
trials before stopping the whole process.
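Just to get a feeling for the numbers involved, we can do a quick
back-of-the-envelope check of this bound for the radius `r = 0.025` that will
be used below:

.. code:: python

   import math

   r = 0.025
   d = math.pi * math.sqrt(3) / 6      # densest packing density (≈ 0.9069)
   print(int(d / (math.pi * r**2)))    # upper bound on the disc count (≈ 461)

No method can exceed this number of points for this radius.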
Here is a straightforward (and very slow) pure Python implementation:

.. code:: python

   import math
   import random

   def DART_sampling(width=1.0, height=1.0, r=0.025, k=100):
       def distance(p0, p1):
           dx, dy = p0[0]-p1[0], p0[1]-p1[1]
           return math.hypot(dx, dy)

       points = []
       i = 0
       last_success = 0
       while True:
           x = random.uniform(0, width)
           y = random.uniform(0, height)
           # Reject the candidate if it is too close to any accepted point
           accept = True
           for p in points:
               if distance(p, (x, y)) < r:
                   accept = False
                   break
           if accept:
               points.append((x, y))
               last_success = i
           elif i - last_success > k:
               # Too many successive failed trials: stop the whole process
               break
           i += 1
       return points
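A minimal usage example (the exact number of accepted points varies from run
to run and stays well below the theoretical bound computed above):

.. code:: python

   points = DART_sampling(width=1.0, height=1.0, r=0.025, k=100)
   print(len(points))

The slowness of this pure Python version is what motivates the alternatives
below.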
Vectorizing the DART method is left as an exercise for the reader. The idea is
to pre-compute enough uniform random samples as well as their paired
distances, and to test them for sequential inclusion. A possible sketch is
given below.
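Here is one possible sketch of such a vectorization; the size `n` of the
pre-computed candidate pool is an arbitrary choice, and this is not
necessarily the approach taken in the `DART-sampling-numpy.py
<code/DART-sampling-numpy.py>`_ solution:

.. code:: python

   import numpy as np
   from scipy.spatial.distance import cdist

   def DART_sampling_numpy(width=1.0, height=1.0, r=0.025, k=100, n=2500):
       # Pre-compute a pool of uniform random candidates and all their
       # pairwise distances (memory grows as n**2).
       P = np.random.uniform(0, [width, height], (n, 2))
       D = cdist(P, P)
       accepted, last_success = [], 0
       for i in range(n):
           # Candidate i is kept only if it lies at least r away from
           # every previously accepted candidate.
           if not accepted or D[i, accepted].min() >= r:
               accepted.append(i)
               last_success = i
           elif i - last_success > k:
               break
       return P[accepted]

The speed-up over the pure Python version is real but modest, which is
precisely the motivation for the next section.
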
Bridson method
++++++++++++++

While the vectorization of the previous method poses no real difficulty, the
speed improvement is not so good and the quality remains low and dependent on
the `k` parameter. The higher it is, the better, since it basically governs
how hard we try to insert a new sample. But when there is already a large
number of accepted samples, only chance allows us to find a position where a
new sample can be inserted. We could increase the `k` value, but this would
make the method even slower without any guarantee of quality. It's time to
think outside the box, and luckily enough, Robert Bridson did that for us and
proposed a simple yet efficient method:

**Step 0**. *Initialize an n-dimensional background grid for storing samples
and accelerating spatial searches. We pick the cell size to be bounded by
r/√n, so that each grid cell will contain at most one sample, and thus the
grid can be implemented as a simple n-dimensional array of integers: the
default −1 indicates no sample, a non-negative integer gives the index of the
sample located in a cell.*

**Step 1**. *Select the initial sample, x0, randomly chosen uniformly from the
domain. Insert it into the background grid, and initialize the “active list”
(an array of sample indices) with this index (zero).*

**Step 2**. *While the active list is not empty, choose a random index from it
(say i). Generate up to k points chosen uniformly from the spherical annulus
between radius r and 2r around xi. For each point in turn, check if it is
within distance r of existing samples (using the background grid to only test
nearby samples). If a point is adequately far from existing samples, emit it
as the next sample and add it to the active list. If after k attempts no such
point is found, instead remove i from the active list.*

Implementation poses no real problem and is left as an exercise for the reader
(a possible sketch is given below). Note that not only is this method fast, it
also offers better quality (more samples) than the DART method, even with a
high `k` parameter.
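For reference, here is one possible two-dimensional (and deliberately
non-vectorized) sketch of these three steps; it is not necessarily the
approach taken in the `Bridson-sampling.py <code/Bridson-sampling.py>`_
solution:

.. code:: python

   import math
   import random

   def Bridson_sampling(width=1.0, height=1.0, r=0.025, k=30):
       # Step 0: background grid with cell size r/sqrt(2) (n=2 here), so
       # that each cell contains at most one sample.
       cellsize = r / math.sqrt(2)
       cols = int(math.ceil(width / cellsize))
       rows = int(math.ceil(height / cellsize))
       grid = [[-1] * cols for _ in range(rows)]

       def grid_coords(p):
           return (min(int(p[0] / cellsize), cols - 1),
                   min(int(p[1] / cellsize), rows - 1))

       def fits(p):
           # Test p against samples in the 5x5 cell neighborhood only.
           gx, gy = grid_coords(p)
           for y in range(max(gy - 2, 0), min(gy + 3, rows)):
               for x in range(max(gx - 2, 0), min(gx + 3, cols)):
                   j = grid[y][x]
                   if j != -1:
                       q = samples[j]
                       if (p[0] - q[0])**2 + (p[1] - q[1])**2 < r * r:
                           return False
           return True

       # Step 1: initial sample, chosen uniformly from the domain.
       p0 = (random.uniform(0, width), random.uniform(0, height))
       samples = [p0]
       gx, gy = grid_coords(p0)
       grid[gy][gx] = 0
       active = [0]

       # Step 2: grow the sample set from the active list.
       while active:
           i = random.choice(active)
           px, py = samples[i]
           for _ in range(k):
               # Candidate drawn from the annulus between r and 2r around i.
               theta = random.uniform(0, 2 * math.pi)
               radius = random.uniform(r, 2 * r)
               p = (px + radius * math.cos(theta),
                    py + radius * math.sin(theta))
               if 0 <= p[0] < width and 0 <= p[1] < height and fits(p):
                   samples.append(p)
                   gx, gy = grid_coords(p)
                   grid[gy][gx] = len(samples) - 1
                   active.append(len(samples) - 1)
                   break
           else:
               # No valid candidate after k attempts: retire sample i.
               active.remove(i)
       return samples

On the unit square with `r = 0.025`, this returns noticeably more samples than
the DART method and runs much faster.
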
.. admonition:: **Figure**
   :class: legend

   Comparison of uniform, grid-jittered and Bridson sampling.

.. image:: data/sampling.png
   :width: 100%


Sources
+++++++

* `DART-sampling-python.py <code/DART-sampling-python.py>`_
* `DART-sampling-numpy.py <code/DART-sampling-numpy.py>`_ (solution to the exercise)
* `Bridson-sampling.py <code/Bridson-sampling.py>`_ (solution to the exercise)
* `sampling.py <code/sampling.py>`_
* `mosaic.py <code/mosaic.py>`_
* `voronoi.py <code/voronoi.py>`_
References
++++++++++

* Jose Esteve, 2012.
* `Poisson Disk Sampling <http://devmag.org.za/2009/05/03/poisson-disk-sampling/>`_,
  Herman Tulleken, 2009.
* `Fast Poisson Disk Sampling in Arbitrary Dimensions <http://www.cs.ubc.ca/~rbridson/docs/bridson-siggraph07-poissondisk.pdf>`_,
  Robert Bridson, SIGGRAPH, 2007.


Conclusion
----------
The last example we've been studying is a nice illustration of why it is more
important to vectorize the problem than to vectorize the code (and to do so
too early). In this specific case we were lucky enough to have the work done
for us, but it won't always be the case, and the temptation might then be high
to vectorize the first solution we've found. I hope you're now convinced it is
generally a good idea to look for alternative solutions once you've found one.
You'll (almost) always improve speed by vectorizing your code, but in the
process, you may miss huge improvements.