
Commit 7592870

Update antialiasing chapter

This addresses some of the criticisms of the prior text included in the camera update, which discussed square versus non-square pixels. Resolves #695

1 parent debf21d

File tree

4 files changed (+59, -56 lines)

books/RayTracingInOneWeekend.html

Lines changed: 59 additions & 56 deletions
@@ -577,9 +577,6 @@
 To help navigate the pixel grid, we'll use a vector from the left edge to the right edge
 ($\mathbf{V_u}$), and a vector from the upper edge to the lower edge ($\mathbf{V_v}$).
 
-Our pixel grid will be inset from the viewport edges by half the pixel-to-pixel distance, so each
-pixel will gather light from the area around it.
-
 Our pixel grid will be inset from the viewport edges by half the pixel-to-pixel distance.
 This way, our viewport area is evenly divided into width × height identical regions.
 Here's what our viewport and pixel grid look like:
@@ -1810,22 +1807,50 @@
 
 Antialiasing
 ====================================================================================================
-When a real camera takes a picture, there are usually no jaggies along edges because the edge pixels
-are a blend of some foreground and some background. We can get the same effect by averaging a bunch
-of samples inside each pixel. We will not bother with stratification. This is controversial, but is
-usual for my programs. For some ray tracers it is critical, but the kind of general one we are
-writing doesn’t benefit very much from it and it makes the code uglier. We abstract the camera class
-a bit so we can make a cooler camera later.
+If you zoom into the rendered images so far, you might notice the harsh "stair step" nature of edges
+in our rendered images.
+This stair-stepping is commonly referred to as "aliasing", or "jaggies".
+When a real camera takes a picture, there are usually no jaggies along edges, because the edge
+pixels are a blend of some foreground and some background.
+Consider that unlike our rendered images, a true image of the world is continuous.
+Put another way, the world (and any true image of it) has effectively infinite resolution.
+We can get the same effect by averaging a bunch of samples for each pixel.
+
+With a single ray through the center of each pixel, we are performing what is commonly called _point
+sampling_.
+The problem with point sampling can be illustrated by rendering a small checkerboard far away.
+If this checkerboard consists of an 8×8 grid of black and white tiles, but only four rays hit
+it, then all four rays might intersect only white tiles, or only black, or some odd combination.
+In the real world, when we perceive a checkerboard far away with our eyes, we perceive it as a gray
+color, instead of sharp points of black and white.
+That's because our eyes are naturally doing what we want our ray tracer to do: integrate the
+(continuous function of) light falling on a particular (discrete) region of our rendered image.
+
+Clearly we don't gain anything by just resampling the same ray through the pixel center multiple
+times -- we'd just get the same result each time.
+Instead, we want to sample the light falling _around_ the pixel, and then integrate those samples to
+approximate the true continuous result.
+So, how do we integrate the light falling around the pixel?
+
+We'll adopt the simplest model: sampling the square region centered at the pixel that extends
+halfway to each of the four neighboring pixels.
+This is not the optimal approach, but it is the most straight-forward.
+(See [_A Pixel is Not a Little Square_][square-pixels] for a deeper dive into this topic.)
+
+![Figure [pixel-samples]: Pixel samples](../images/fig-1.08-pixel-samples.jpg)
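(To make "averaging a bunch of samples for each pixel" concrete, here is a hedged sketch of the per-pixel loop this chapter builds toward; it is not part of this commit's diff, and `get_ray()`, `ray_color()`, and the multi-sample `write_color()` are the pieces introduced below:)

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
// Sketch: accumulate samples_per_pixel randomly offset rays for pixel (i,j),
// then let write_color() divide by the sample count to form the average.
for (int j = 0; j < image_height; j++) {
    for (int i = 0; i < image_width; i++) {
        color pixel_color(0, 0, 0);
        for (int sample = 0; sample < samples_per_pixel; sample++) {
            ray r = get_ray(i, j);               // random sample within pixel (i,j)
            pixel_color += ray_color(r, world);  // sum the sampled light
        }
        write_color(std::cout, pixel_color, samples_per_pixel);
    }
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~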
+
 
 Some Random Number Utilities
 -----------------------------
-One thing we need is a random number generator that returns real random numbers. We need a function
-that returns a canonical random number which by convention returns a random real in the range
-$0 ≤ r < 1$. The “less than” before the 1 is important as we will sometimes take advantage of that.
+We're going to need a random number generator that returns real random numbers.
+This function should return a canonical random number, which by convention falls in the range
+$0 ≤ n < 1$.
+The “less than” before the 1 is important, as we will sometimes take advantage of that.
 
-A simple approach to this is to use the `rand()` function that can be found in `<cstdlib>`. This
-function returns a random integer in the range 0 and `RAND_MAX`. Hence we can get a real random
-number as desired with the following code snippet, added to `rtweekend.h`:
+A simple approach to this is to use the `rand()` function that can be found in `<cstdlib>`, which
+returns a random integer in the range 0 to `RAND_MAX`.
+Hence we can get a real random number as desired with the following code snippet, added to
+`rtweekend.h`:
 
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
 #include <cstdlib>
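// (The hunk's shown context ends here. A plausible completion of this
//  rtweekend.h snippet, consistent with the convention described above;
//  a hedged sketch, not necessarily the exact committed code.)

inline double random_double() {
    // Returns a random real in [0,1).
    return rand() / (RAND_MAX + 1.0);
}

inline double random_double(double min, double max) {
    // Returns a random real in [min,max).
    return min + (max - min) * random_double();
}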
@@ -1864,13 +1889,10 @@
 For a single pixel composed of multiple samples, we'll select samples from the area surrounding the
 pixel and average the resulting light (color) values together.
 
-![Figure [pixel-samples]: Pixel samples](../images/fig-1.08-pixel-samples.jpg)
-
-Now to update the `write_color()` function to account for the number of samples we use.
-We need to find the average across all of the samples that we take.
-We'll just add the full color from each iteration, and then finish with a single division (by the
-number of samples) at the end, before writing out the color.
-
+First we'll update the `write_color()` function to account for the number of samples we use: we need
+to find the average across all of the samples that we take.
+To do this, we'll add the full color from each iteration, and then finish with a single division (by
+the number of samples) at the end, before writing out the color.
 To ensure that the color components of the final result remain within the proper $[0,1]$ bounds,
 we'll add and use a small helper function: `interval::clamp(x)`.
 
@@ -1890,6 +1912,9 @@
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 [Listing [clamp]: <kbd>[interval.h]</kbd> The interval::clamp() utility function]
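(The body of that listing lies above this hunk's shown context. For reference, a hedged sketch of what `interval::clamp()` plausibly looks like, assuming the `interval` class's public `min`/`max` members from earlier in the book:)

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
class interval {
  public:
    double min, max;

    ...

    double clamp(double x) const {
        // Constrain x to lie within [min, max].
        if (x < min) return min;
        if (x > max) return max;
        return x;
    }
};
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~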
 
+And here's the updated `write_color()` function that takes the sum total of all light for the pixel
+and the number of samples involved:
+
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
 void write_color(std::ostream &out, color pixel_color, int samples_per_pixel) {
     auto r = pixel_color.x();
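    // (The hunk's shown context ends at this point. What follows is a hedged
    //  sketch of the rest of the function, consistent with the description
    //  above; it is not necessarily the exact committed code.)
    auto g = pixel_color.y();
    auto b = pixel_color.z();

    // Divide the color by the number of samples to get the average.
    auto scale = 1.0 / samples_per_pixel;
    r *= scale;
    g *= scale;
    b *= scale;

    // Write the translated [0,255] value of each color component, clamped to [0.000, 0.999].
    static const interval intensity(0.000, 0.999);
    out << static_cast<int>(256 * intensity.clamp(r)) << ' '
        << static_cast<int>(256 * intensity.clamp(g)) << ' '
        << static_cast<int>(256 * intensity.clamp(b)) << '\n';
}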
@@ -1911,40 +1936,12 @@
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 [Listing [write-color-clamped]: <kbd>[color.h]</kbd> The multi-sample write_color() function]
 
-So now we need to gather multiple samples for each pixel.
-We'll select samples from the area around the pixel center.
-Which begs the question: _what's the area surrounding the pixel_?
-
-It is very common to think of pixels as the square region around each pixel location, such that
-these squares perfectly tile the pixel grid.
-However, there's no real reason why this must be the case.
-In fact, there's nothing that says that pixels have to sample all of the image area, or even that
-they cannot have their regions overlap in some way.
-
-For example, consider the human eye: our "pixels" (photoreceptor cells) are randomly scattered with
-varying density, respond to different wavelengths, cover differently sized areas, and frequently
-overlap each other.
-(See figure 1 of [_Cell Populations of the Retina_](https://iovs.arvojournals.org/article.aspx?articleid=2188127)
-for an example of the different photoreceptor types in the human retina. There are many more than
-the single rod + three cone types that most people think of.)
-There's much more to say about this, but if you want to learn more about image sampling and
-reconstruction, see the excellent technical memo
-[_A Pixel is Not a Little Square, A Pixel is Not a Little Square, A Pixel is Not a Little Square_](https://www.researchgate.net/publication/244986797),
-by Alvy Ray Smith, for a fantastic introduction.
-
-In spite of all this, we will proceed with the trivial square pixel model.
-It may not be the best model, but it's the simplest.
-Still, we'll design our code in a way that makes it easy for you to play with different pixel types,
-and throw in a circular pixel model to experiment with if you wish.
-
-Ok, let's update the camera class to define and use a new `camera::get_ray(i,j)` function, and to
-incorporate multiple samples per pixel.
-This function will use a new helper function `pixel_sample_square()`, which generates a random
-sample point within the square pixel centered at the origin.
-This random sample will then be transformed back to the particular pixel we're sampling.
-(In addition, we've included in our source code (but don't use) the function `pixel_sample_disk()`
-for you to play with if you'd like to experiment with non-square pixels.
-This depends on the function `random_in_unit_disk()` defined later in this book.)
+Now let's update the camera class to define and use a new `camera::get_ray(i,j)` function, which
+will generate different samples for each pixel.
+This function will use a new helper function `pixel_sample_square()` that generates a random sample
+point within the unit square centered at the origin.
+We then transform the random sample from this ideal square back to the particular pixel we're
+currently sampling.
 
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
 class camera {
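    // (The hunk's shown context ends at the class declaration. Below is a
    //  hedged sketch of the two new functions described above; the member
    //  names center, pixel00_loc, pixel_delta_u, and pixel_delta_v are
    //  assumed from the book's camera class, not taken from this diff.)

    ray get_ray(int i, int j) const {
        // Get a randomly-sampled camera ray for the pixel at location i,j.
        auto pixel_center = pixel00_loc + (i * pixel_delta_u) + (j * pixel_delta_v);
        auto pixel_sample = pixel_center + pixel_sample_square();

        auto ray_origin = center;
        auto ray_direction = pixel_sample - ray_origin;

        return ray(ray_origin, ray_direction);
    }

    vec3 pixel_sample_square() const {
        // Returns a random point in the square surrounding a pixel at the origin.
        auto px = -0.5 + random_double();
        auto py = -0.5 + random_double();
        return (px * pixel_delta_u) + (py * pixel_delta_v);
    }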
@@ -2019,6 +2016,11 @@
 
 </div>
 
+(In addition to the new `pixel_sample_square()` function above, we've included in our source code
+(but don't use) the function `pixel_sample_disk()` for you to play with if you'd like to
+experiment with non-square pixels.
+This depends on the function `random_in_unit_disk()` defined later in this book.)
+
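(For reference, a hedged sketch of what such a `pixel_sample_disk()` helper might look like; the `radius` parameter and the reuse of `pixel_delta_u`/`pixel_delta_v` are assumptions, and `random_in_unit_disk()` is only defined later in the book:)

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
vec3 pixel_sample_disk(double radius) const {
    // Sample a point from the disk of given radius around a pixel at the origin.
    auto p = radius * random_in_unit_disk();
    return (p[0] * pixel_delta_u) + (p[1] * pixel_delta_v);
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~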
 <div class='together'>
 Main is updated to set the new camera parameter.
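(A hedged sketch of that change in `main()`; the sample count of 100 and the surrounding camera setup are assumptions, since the updated `main()` is not shown in this diff:)

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
camera cam;

cam.aspect_ratio      = 16.0 / 9.0;
cam.image_width       = 400;
cam.samples_per_pixel = 100;   // new: number of random samples for each pixel

cam.render(world);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~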
 
@@ -4032,6 +4034,7 @@
 [discussions]: https://github.com/RayTracing/raytracing.github.io/discussions/
 [gfx-codex]: https://graphicscodex.com/
 [repo]: https://github.com/RayTracing/raytracing.github.io/
+[square-pixels]: https://www.researchgate.net/publication/244986797
 
 
 
images/fig-1.04-pixel-grid.jpg (-2.59 KB)

images/fig-1.x1-rand-vec.jpg (-86.4 KB; binary file not shown)

images/fig-1.x2-rand-unitvec.jpg (-84 KB; binary file not shown)

0 commit comments
