Commit 730c56c
Merge branch 'master' into dev-patch
2 parents edf3d44 + 4552ec3

File tree

3 files changed: +128 −126 lines changed

books/RayTracingInOneWeekend.html

Lines changed: 40 additions & 39 deletions
@@ -17,7 +17,7 @@
 ====================================================================================================
 
 I’ve taught many graphics classes over the years. Often I do them in ray tracing, because you are
-forced to write all the code but you can still get cool images with no API. I decided to adapt my
+forced to write all the code, but you can still get cool images with no API. I decided to adapt my
 course notes into a how-to, to get you to a cool program as quickly as possible. It will not be a
 full-featured ray tracer, but it does have the indirect lighting which has made ray tracing a staple
 in movies. Follow these steps, and the architecture of the ray tracer you produce will be good for
@@ -64,7 +64,7 @@
 ====================================================================================================
 
 Whenever you start a renderer, you need a way to see an image. The most straightforward way is to
-write it to a file. The catch is, there are so many formats and many of those are complex. I always
+write it to a file. The catch is, there are so many formats. Many of those are complex. I always
 start with a plain text ppm file. Here’s a nice description from Wikipedia:
 
 ![](../images/img.ppm-example.jpg)
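For context on the plain-text ppm format this hunk discusses, a minimal stand-alone writer looks roughly like the sketch below. This is not the book's listing; the 200×100 size and the simple color gradient are just illustrative.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
#include <iostream>

int main() {
    const int image_width  = 200;
    const int image_height = 100;

    // PPM header: "P3" means plain-text ASCII colors, then width, height, and max color value.
    std::cout << "P3\n" << image_width << ' ' << image_height << "\n255\n";

    // Rows top to bottom, pixels left to right; each pixel is an RGB triple in 0..255.
    for (int j = image_height - 1; j >= 0; --j) {
        for (int i = 0; i < image_width; ++i) {
            double r = double(i) / image_width;
            double g = double(j) / image_height;
            double b = 0.2;
            std::cout << int(255.99 * r) << ' '
                      << int(255.99 * g) << ' '
                      << int(255.99 * b) << '\n';
        }
    }
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Redirecting standard output to a file (for example `./a.out > image.ppm`) produces an image any ppm-aware viewer can open.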
@@ -355,9 +355,9 @@
 ====================================================================================================
 
 <div class='together'>
-The one thing that all ray tracers have is a ray class, and a computation of what color is seen
-along a ray. Let’s think of a ray as a function $\mathbf{p}(t) = \mathbf{a} + t \vec{\mathbf{b}}$.
-Here $\mathbf{p}$ is a 3D position along a line in 3D. $\mathbf{a}$ is the ray origin and
+The one thing that all ray tracers have is a ray class and a computation of what color is seen along
+a ray. Let’s think of a ray as a function $\mathbf{p}(t) = \mathbf{a} + t \vec{\mathbf{b}}$. Here
+$\mathbf{p}$ is a 3D position along a line in 3D. $\mathbf{a}$ is the ray origin, and
 $\vec{\mathbf{b}}$ is the ray direction. The ray parameter $t$ is a real number (`double` in the
 code). Plug in a different $t$ and $p(t)$ moves the point along the ray. Add in negative $t$ and you
 can go anywhere on the 3D line. For positive $t$, you get only the parts in front of $\mathbf{a}$,
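A sketch of the ray class this hunk describes, assuming the book's `vec3` type; the member names here are illustrative rather than the book's exact listing:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
#include "vec3.h"   // assumes the book's vec3 header

// A ray is p(t) = a + t*b: origin a, direction b, real parameter t.
class ray {
  public:
    ray() {}
    ray(const vec3& origin, const vec3& direction) : orig(origin), dir(direction) {}

    vec3 origin() const    { return orig; }
    vec3 direction() const { return dir; }

    // Point along the ray: negative t is behind the origin, positive t is in front of it.
    vec3 at(double t) const { return orig + t * dir; }

  private:
    vec3 orig;
    vec3 dir;
};
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~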
@@ -411,7 +411,7 @@
 often, so I’ll stick with a 200×100 image. I’ll put the “eye” (or camera center if you think of a
 camera) at $(0,0,0)$. I will have the y-axis go up, and the x-axis to the right. In order to respect
 the convention of a right handed coordinate system, into the screen is the negative z-axis. I will
-traverse the screen from the lower left hand corner and use two offset vectors along the screen
+traverse the screen from the lower left hand corner, and use two offset vectors along the screen
 sides to move the ray endpoint across the screen. Note that I do not make the ray direction a unit
 length vector because I think not doing that makes for simpler and slightly faster code.
 
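A sketch of the ray generation this hunk describes, sitting inside the per-pixel loops: the eye is at the origin, the lower left corner of the screen is at z = −1, and two offset vectors span the screen. The variable names and the 4×2 screen size follow the 200×100 setup above but are assumptions, not the book's exact listing.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
// Screen geometry for a 200x100 image: eye at the origin, screen plane at z = -1.
vec3 lower_left_corner(-2.0, -1.0, -1.0);
vec3 horizontal(4.0, 0.0, 0.0);   // offset vector along the screen's width
vec3 vertical(0.0, 2.0, 0.0);     // offset vector along the screen's height
vec3 origin(0.0, 0.0, 0.0);

// Inside the pixel loops: u and v run from 0 to 1 across the screen.
// The direction is deliberately left un-normalized, as the text explains.
double u = double(i) / image_width;
double v = double(j) / image_height;
ray r(origin, lower_left_corner + u*horizontal + v*vertical - origin);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~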
@@ -535,8 +535,8 @@
 </div>
 
 <div class='together'>
-The rules of vector algebra are all that we would want here, and if we expand that equation and
-move all the terms to the left hand side we get:
+The rules of vector algebra are all that we would want here. If we expand that equation and move all
+the terms to the left hand side we get:
 
 $$ t^2 \vec{\mathbf{b}}\cdot\vec{\mathbf{b}}
 + 2t \vec{\mathbf{b}} \cdot \vec{(\mathbf{a}-\mathbf{c})}
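The equation in this hunk is the usual quadratic in $t$; a sketch of the corresponding discriminant test, written as a free function over the book's `vec3` and `ray` types (the coefficients follow directly from the expansion above):
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
// Does the ray hit a sphere with the given center and radius?
// Solves t^2 (b.b) + 2t b.(a-c) + (a-c).(a-c) - r^2 = 0 and checks the discriminant.
bool hit_sphere(const vec3& center, double radius, const ray& r) {
    vec3 oc = r.origin() - center;                 // (a - c)
    double a = dot(r.direction(), r.direction());  // b . b
    double b = 2.0 * dot(oc, r.direction());       // 2 b . (a - c)
    double c = dot(oc, oc) - radius*radius;        // (a - c).(a - c) - r^2
    double discriminant = b*b - 4*a*c;
    return discriminant > 0;                       // real roots mean the ray hits the sphere
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~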
@@ -715,11 +715,11 @@
 
 
 Now, how about several spheres? While it is tempting to have an array of spheres, a very clean
-solution is the make an “abstract class” for anything a ray might hit and make both a sphere and a
+solution is the make an “abstract class” for anything a ray might hit, and make both a sphere and a
 list of spheres just something you can hit. What that class should be called is something of a
 quandary -- calling it an “object” would be good if not for “object oriented” programming. “Surface”
 is often used, with the weakness being maybe we will want volumes. “hittable” emphasizes the member
-function that unites them. I don’t love any of these but I will go with “hittable”.
+function that unites them. I don’t love any of these, but I will go with “hittable”.
 
 <div class='together'>
 This `hittable` abstract class will have a hit function that takes in a ray. Most ray tracers have
@@ -864,11 +864,12 @@
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 [Listing [normals-point-against]: <kbd>[sphere.h]</kbd> Remembering the side of the surface]
 
-The decision whether to have normals always point out or always point against the ray is based on
-whether you want to determine the side of the surface at the time of geometry or at the time of
-coloring. In this book we have more material types than we have geometry types, so we'll go for
-less work and put the determination at geometry time. This is simply a matter of preference and
-you'll see both implementations in the literature.
+We can set things up so that normals always point “outward” from the surface, or always point
+against the incident ray. This decision is determined by whether you want to determine the side of
+the surface at the time of geometry intersection or at the time of coloring. In this book we have
+more material types than we have geometry types, so we'll go for less work and put the determination
+at geometry time. This is simply a matter of preference, and you'll see both implementations in the
+literature.
 
 We add the `front_face` bool to the `hit_record` struct. I know that we’ll also want motion blur at
 some point, so I’ll also add a time input variable.
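A sketch of the `hit_record` bookkeeping this hunk discusses, with the side of the surface determined at geometry time. The `hittable` interface and `front_face` flag are the book's; the exact field layout here is illustrative, and the motion-blur time parameter mentioned above is omitted.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
struct hit_record {
    vec3 p;
    vec3 normal;
    double t;
    bool front_face;

    // Flip the outward normal so the stored normal always points against the incident ray,
    // and remember which side was hit so shading can tell inside from outside later.
    inline void set_face_normal(const ray& r, const vec3& outward_normal) {
        front_face = dot(r.direction(), outward_normal) < 0;
        normal = front_face ? outward_normal : -outward_normal;
    }
};
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~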
@@ -1178,7 +1179,7 @@
 
 When a real camera takes a picture, there are usually no jaggies along edges because the edge pixels
 are a blend of some foreground and some background. We can get the same effect by averaging a bunch
-of samples inside each pixel. We will not bother with stratification, which is controversial but is
+of samples inside each pixel. We will not bother with stratification. This is controversial, but is
 usual for my programs. For some ray tracers it is critical, but the kind of general one we are
 writing doesn’t benefit very much from it and it makes the code uglier. We abstract the camera class
 a bit so we can make a cooler camera later.
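A sketch of the per-pixel averaging described above (no stratification), assuming a `camera` class with a `get_ray(u, v)` method and a `random_double()` helper as in the book; the shading function is called `ray_color` here only for clarity.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
// Average many jittered samples inside each pixel to smooth the jaggies along edges.
vec3 color(0, 0, 0);
for (int s = 0; s < samples_per_pixel; ++s) {
    double u = (i + random_double()) / image_width;
    double v = (j + random_double()) / image_height;
    ray r = cam.get_ray(u, v);
    color += ray_color(r, world);
}
color /= double(samples_per_pixel);   // the pixel is the mean of its samples
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~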
@@ -1385,7 +1386,7 @@
 Lambertian.)
 
 (Reader Vassillen Chizhov proved that the lazy hack is indeed just a lazy hack and is inaccurate.
-The correct representation of ideal Lambertian isn't much more work and is presented at the end of
+The correct representation of ideal Lambertian isn't much more work, and is presented at the end of
 the chapter.)
 
 <div class='together'>
@@ -1394,7 +1395,7 @@
 sphere with a center at $(p - \vec{N})$ is considered _inside_ the surface, whereas the sphere with
 center $(p + \vec{N})$ is considered _outside_ the surface. Select the tangent unit radius sphere
 that is on the same side of the surface as the ray origin. Pick a random point $s$ inside this unit
-radius sphere and send a ray from the hit point $p$ to the random point $s$ (this is the vector
+radius sphere, and send a ray from the hit point $p$ to the random point $s$ (this is the vector
 $(s-p)$):
 
 ![Figure [rand-vector]: Generating a random diffuse bounce ray](../images/fig.rand-vector.jpg)
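A sketch of the rejection method commonly used to pick the random point $s$, and of the bounce ray it produces. The book presents its own version; the `random_double(min, max)` helper and the variable names here are assumptions.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
// Pick a random point inside the unit sphere by rejection sampling:
// draw points in the unit cube until one falls inside the sphere.
vec3 random_in_unit_sphere() {
    while (true) {
        vec3 p = vec3(random_double(-1,1), random_double(-1,1), random_double(-1,1));
        if (dot(p, p) < 1) return p;
    }
}

// The diffuse bounce: the target is p + N + (random point), and the new ray goes from p toward it.
vec3 target = rec.p + rec.normal + random_in_unit_sphere();
ray scattered(rec.p, target - rec.p);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~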
@@ -1869,7 +1870,7 @@
 
 <div class='together'>
 For the Lambertian (diffuse) case we already have, it can either scatter always and attenuate by its
-reflectance $R$, or it can scatter with no attenuation but absorb the fraction $1-R$ of the rays. Or
+reflectance $R$, or it can scatter with no attenuation but absorb the fraction $1-R$ of the rays, or
 it could be a mixture of those strategies. For Lambertian materials we get this simple class:
 
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
@@ -2090,7 +2091,7 @@
 
 Clear materials such as water, glass, and diamonds are dielectrics. When a light ray hits them, it
 splits into a reflected ray and a refracted (transmitted) ray. We’ll handle that by randomly
-choosing between reflection or refraction and only generating one scattered ray per interaction.
+choosing between reflection or refraction, and only generating one scattered ray per interaction.
 
 <div class='together'>
 The hardest part to debug is the refracted ray. I usually first just have all the light refract if
@@ -2208,7 +2209,7 @@
 
 <div class='together'>
 That definitely doesn't look right. One troublesome practical issue is that when the ray is in the
-material with the higher refractive index, there is no real solution to Snell’s law and thus there
+material with the higher refractive index, there is no real solution to Snell’s law, and thus there
 is no refraction possible. If we refer back to Snell's law and the derivation of $\sin\theta'$:
 
 $$ \sin\theta' = \frac{\eta}{\eta'} \cdot \sin\theta $$
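A sketch of the vector form of refraction that goes with this Snell's law discussion, assuming a unit-length incident direction `uv` and a unit normal `n` pointing against the incident ray; the component naming is one common convention, not necessarily the book's exact listing.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
// Refract a unit direction uv about unit normal n, with etai_over_etat = eta / eta'.
// Builds the refracted direction from its components perpendicular and parallel to n.
vec3 refract(const vec3& uv, const vec3& n, double etai_over_etat) {
    double cos_theta = dot(-uv, n);
    vec3 r_out_perp = etai_over_etat * (uv + cos_theta * n);          // length = sin(theta')
    vec3 r_out_parallel = -sqrt(fabs(1.0 - dot(r_out_perp, r_out_perp))) * n;
    return r_out_perp + r_out_parallel;
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~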
@@ -2221,8 +2222,8 @@
 
 $$ \frac{1.5}{1.0} \cdot \sin\theta > 1.0 $$
 
-The equality between the two sides of the equation is broken and a solution cannot exist. If a
-solution does not exist the glass cannot refract and must reflect the ray:
+The equality between the two sides of the equation is broken, and a solution cannot exist. If a
+solution does not exist, the glass cannot refract, and therefore must reflect the ray:
 
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
 if(etai_over_etat * sin_theta > 1.0) {
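The diff context shows only the first line of that check; a fuller sketch of the reflect-or-refract decision, assuming `unit_direction` is the normalized incoming ray direction and `reflect`/`refract` helpers as in the book (the surrounding scatter logic is abbreviated):
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
double cos_theta = fmin(dot(-unit_direction, rec.normal), 1.0);
double sin_theta = sqrt(1.0 - cos_theta * cos_theta);

if (etai_over_etat * sin_theta > 1.0) {
    // sin(theta') would exceed 1, so Snell's law has no real solution: the ray must reflect.
    vec3 reflected = reflect(unit_direction, rec.normal);
    scattered = ray(rec.p, reflected);
} else {
    // A refracted ray exists.
    vec3 refracted = refract(unit_direction, rec.normal, etai_over_etat);
    scattered = ray(rec.p, refracted);
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~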
@@ -2386,8 +2387,8 @@
 
 <div class='together'>
 An interesting and easy trick with dielectric spheres is to note that if you use a negative radius,
-the geometry is unaffected but the surface normal points inward, so it can be used as a bubble
-to make a hollow glass sphere:
+the geometry is unaffected, but the surface normal points inward. This can be used as a bubble to
+make a hollow glass sphere:
 
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
 world.add(make_shared<sphere>(vec3(0,0,-1), 0.5, make_shared<lambertian>(vec3(0.1, 0.2, 0.5))));
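The listing is cut off by the diff context; the trick it describes pairs an outer dielectric sphere with a slightly smaller sphere of negative radius at the same center. A sketch, where the center, radii, and index of refraction are illustrative values rather than necessarily the book's exact scene:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
// Outer glass sphere, plus a slightly smaller sphere of *negative* radius inside it.
// The negative radius flips the normal inward, turning the pair into a hollow glass shell.
world.add(make_shared<sphere>(vec3(-1,0,-1),  0.5,  make_shared<dielectric>(1.5)));
world.add(make_shared<sphere>(vec3(-1,0,-1), -0.45, make_shared<dielectric>(1.5)));
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~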
@@ -2606,9 +2607,9 @@
 
 <div class="together">
 A real camera has a complicated compound lens. For our code we could simulate the order: sensor,
-then lens, then aperture, and figure out where to send the rays and flip the image once computed
-(the image is projected upside down on the film). Graphics people usually use a thin lens
-approximation:
+then lens, then aperture. Then we could figure out where to send the rays, and flip the image after
+it's computed (the image is projected upside down on the film). Graphics people, however, usually
+use a thin lens approximation:
 
 ![Figure [cam-lens]: Camera lens model](../images/fig.cam-lens.jpg)
 
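A sketch of the ray generation that the thin lens approximation leads to: ray origins are jittered over a disk around the lens center, so points away from the focus plane blur. The camera fields (`origin`, `lower_left_corner`, `horizontal`, `vertical`, the basis vectors `u`, `v`, and `lens_radius`) and the `random_in_unit_disk()` helper are assumed to exist as in the book.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
// Offset the ray origin within the lens aperture; rays still converge at the focus plane,
// so geometry off that plane receives defocus blur ("depth of field").
ray camera::get_ray(double s, double t) {
    vec3 rd = lens_radius * random_in_unit_disk();   // random point on the lens disk
    vec3 offset = u * rd.x() + v * rd.y();           // expressed in the camera's own basis
    return ray(origin + offset,
               lower_left_corner + s*horizontal + t*vertical - origin - offset);
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~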
@@ -2803,35 +2804,35 @@
 </div>
 
 An interesting thing you might note is the glass balls don’t really have shadows which makes them
-look like they are floating. This is not a bug (you don’t see glass balls much in real life, where
-they also look a bit strange and indeed seem to float on cloudy days). A point on the big sphere
+look like they are floating. This is not a bug -- you don’t see glass balls much in real life, where
+they also look a bit strange, and indeed seem to float on cloudy days. A point on the big sphere
 under a glass ball still has lots of light hitting it because the sky is re-ordered rather than
 blocked.
 
 You now have a cool ray tracer! What next?
 
-1. Lights. You can do this explicitly, by sending shadow rays to lights, or it can be done
+1. Lights -- You can do this explicitly, by sending shadow rays to lights, or it can be done
 implicitly by making some objects emit light, biasing scattered rays toward them, and then
 downweighting those rays to cancel out the bias. Both work. I am in the minority in favoring
 the latter approach.
 
-2. Triangles. Most cool models are in triangle form. The model I/O is the worst and almost
+2. Triangles -- Most cool models are in triangle form. The model I/O is the worst and almost
 everybody tries to get somebody else’s code to do this.
 
-3. Surface textures. This lets you paste images on like wall paper. Pretty easy and a good thing
+3. Surface Textures -- This lets you paste images on like wall paper. Pretty easy and a good thing
 to do.
 
-4. Solid textures. Ken Perlin has his code online. Andrew Kensler has some very cool info at his
+4. Solid textures -- Ken Perlin has his code online. Andrew Kensler has some very cool info at his
 blog.
 
-5. Volumes and media. Cool stuff and will challenge your software architecture. I favor making
+5. Volumes and Media -- Cool stuff and will challenge your software architecture. I favor making
 volumes have the hittable interface and probabilistically have intersections based on density.
 Your rendering code doesn’t even have to know it has volumes with that method.
 
-6. Parallelism. Run $N$ copies of your code on $N$ cores with different random seeds. Average the
-$N$ runs. This averaging can also be done hierarchically where $N/2$ pairs can be averaged to
-get $N/4$ images, and pairs of those can be averaged. That method of parallelism should extend
-well into the thousands of cores with very little coding.
+6. Parallelism -- Run $N$ copies of your code on $N$ cores with different random seeds. Average
+the $N$ runs. This averaging can also be done hierarchically where $N/2$ pairs can be averaged
+to get $N/4$ images, and pairs of those can be averaged. That method of parallelism should
+extend well into the thousands of cores with very little coding.
 
 Have fun, and please send me your cool images!
 
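Item 6 in the list above (averaging $N$ independently seeded runs) can be done as a tiny post-process over the per-run images. A sketch, assuming each run wrote a plain-text P3 ppm of the same dimensions; note that averaging final gamma-corrected, quantized ppm values is only an approximation of averaging the underlying samples.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

// Average N plain-text P3 ppm images (one per independently seeded run) into a single ppm.
// Usage: average run1.ppm run2.ppm ... > averaged.ppm
int main(int argc, char** argv) {
    std::vector<long> sum;
    int width = 0, height = 0, maxval = 255;
    int runs = argc - 1;

    for (int i = 1; i < argc; ++i) {
        std::ifstream in(argv[i]);
        std::string magic;
        in >> magic >> width >> height >> maxval;       // "P3", dimensions, max color value
        if (sum.empty()) sum.assign(3 * width * height, 0);
        for (long& channel : sum) {
            int v;
            in >> v;
            channel += v;                               // accumulate per-channel totals
        }
    }

    std::cout << "P3\n" << width << ' ' << height << '\n' << maxval << '\n';
    for (std::size_t k = 0; k < sum.size(); ++k)
        std::cout << sum[k] / runs << ((k % 3 == 2) ? '\n' : ' ');
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~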