17 | 17 | ====================================================================================================
18 | 18 |
19 | 19 | I’ve taught many graphics classes over the years. Often I do them in ray tracing, because you are
20 |    | -forced to write all the code but you can still get cool images with no API. I decided to adapt my
   | 20 | +forced to write all the code, but you can still get cool images with no API. I decided to adapt my
21 | 21 | course notes into a how-to, to get you to a cool program as quickly as possible. It will not be a
22 | 22 | full-featured ray tracer, but it does have the indirect lighting which has made ray tracing a staple
23 | 23 | in movies. Follow these steps, and the architecture of the ray tracer you produce will be good for
33 | 33 | don’t need to. However, I suggest you do, because it’s fast, portable, and most production movie and
34 | 34 | video game renderers are written in C++. Note that I avoid most “modern features” of C++, but
35 | 35 | inheritance and operator overloading are too useful for ray tracers to pass on. I do not provide the
36 |    | -code online, but the code is real and I show all of it except for a few straightforward operators in
37 |    | -the `vec3` class. I am a big believer in typing in code to learn it, but when code is available I
   | 36 | +code online, but the code is real, and I show all of it except for a few straightforward operators
   | 37 | +in the `vec3` class. I am a big believer in typing in code to learn it, but when code is available I
38 | 38 | use it, so I only practice what I preach when the code is not available. So don’t ask!
39 | 39 |
40 | 40 | I have left that last part in because it is funny what a 180 I have done. Several readers ended up
64 | 64 | ====================================================================================================
65 | 65 |
66 | 66 | Whenever you start a renderer, you need a way to see an image. The most straightforward way is to
67 |    | -write it to a file. The catch is, there are so many formats and many of those are complex. I always
   | 67 | +write it to a file. The catch is, there are so many formats, and many of those are complex. I always
68 | 68 | start with a plain text ppm file. Here’s a nice description from Wikipedia:
69 | 69 |
70 | 70 | 
140 | 140 |
141 | 141 | <div class='together'>
142 | 142 | Hooray! This is the graphics “hello world”. If your image doesn’t look like that, open the output
143 |     | -file in a text editor and see what it looks like. It should start something like this:
    | 143 | +file in a text editor, and see what it looks like. It should start something like this:
144 | 144 |
145 | 145 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
146 | 146 | P3
171 | 171 | loop or other problem.
172 | 172 |
173 | 173 | <div class='together'>
174 |     | -Our program outputs the image to the standard output stream (`std::cout`), so leave that alone and
    | 174 | +Our program outputs the image to the standard output stream (`std::cout`), so leave that alone, and
175 | 175 | instead write to the error output stream (`std::cerr`):
176 | 176 |
177 | 177 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
357 | 357 | <div class='together'>
358 | 358 | The one thing that all ray tracers have is a ray class, and a computation of what color is seen
359 | 359 | along a ray. Let’s think of a ray as a function $\mathbf{p}(t) = \mathbf{a} + t \vec{\mathbf{b}}$.
360 |     | -Here $\mathbf{p}$ is a 3D position along a line in 3D. $\mathbf{a}$ is the ray origin and
    | 360 | +Here $\mathbf{p}$ is a 3D position along a line in 3D. $\mathbf{a}$ is the ray origin, and
361 | 361 | $\vec{\mathbf{b}}$ is the ray direction. The ray parameter $t$ is a real number (`double` in the
362 | 362 | code). Plug in a different $t$ and $p(t)$ moves the point along the ray. Add in negative $t$ and you
363 | 363 | can go anywhere on the 3D line. For positive $t$, you get only the parts in front of $\mathbf{a}$,
401 | 401 | </div>
402 | 402 |
403 | 403 | Now we are ready to turn the corner and make a ray tracer. At the core, the ray tracer sends rays
404 |     | -through pixels and computes the color seen in the direction of those rays. The involved steps are
    | 404 | +through pixels, and computes the color seen in the direction of those rays. The involved steps are
405 | 405 | (1) calculate the ray from the eye to the pixel, (2) determine which objects the ray intersects, and
406 | 406 | (3) compute a color for that intersection point. When first developing a ray tracer, I always do a
407 | 407 | simple camera for getting the code up and running. I also make a simple `color(ray)` function that
411 | 411 | often, so I’ll stick with a 200×100 image. I’ll put the “eye” (or camera center if you think of a
412 | 412 | camera) at $(0,0,0)$. I will have the y-axis go up, and the x-axis to the right. In order to respect
413 | 413 | the convention of a right handed coordinate system, into the screen is the negative z-axis. I will
414 |     | -traverse the screen from the lower left hand corner and use two offset vectors along the screen
    | 414 | +traverse the screen from the lower left hand corner, and use two offset vectors along the screen
415 | 415 | sides to move the ray endpoint across the screen. Note that I do not make the ray direction a unit
416 | 416 | length vector because I think not doing that makes for simpler and slightly faster code.
417 | 417 |
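The screen traversal described above can be sketched as follows. The stripped-down `vec3` and the names `lower_left_corner`, `horizontal`, and `vertical` stand in for the book’s actual classes and variables, under the stated assumption of a 200×100 image with the eye at the origin and the screen at $z = -1$:

```cpp
// Minimal 3-vector; the book's vec3 has many more operators.
struct vec3 {
    double x, y, z;
};
vec3 operator+(vec3 a, vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
vec3 operator*(double t, vec3 a) { return {t * a.x, t * a.y, t * a.z}; }

// Screen spans x in [-2,2] and y in [-1,1] at z = -1 (into the screen
// is the negative z-axis, keeping the coordinate system right handed).
const vec3 origin{0.0, 0.0, 0.0};
const vec3 lower_left_corner{-2.0, -1.0, -1.0};
const vec3 horizontal{4.0, 0.0, 0.0};
const vec3 vertical{0.0, 2.0, 0.0};

// u and v in [0,1] sweep the screen starting at the lower left corner.
// The direction is deliberately not normalized, as the text explains.
vec3 ray_direction(double u, double v) {
    return lower_left_corner + u * horizontal + v * vertical;
}
```

The center of the screen, `ray_direction(0.5, 0.5)`, points straight down the negative z-axis.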
535 | 535 | </div>
536 | 536 |
537 | 537 | <div class='together'>
538 |     | -The rules of vector algebra are all that we would want here, and if we expand that equation and
    | 538 | +The rules of vector algebra are all that we would want here, and if we expand that equation, and
539 | 539 | move all the terms to the left hand side we get:
540 | 540 |
541 | 541 | $$ t^2 \vec{\mathbf{b}}\cdot\vec{\mathbf{b}}
715 | 715 |
716 | 716 |
717 | 717 | Now, how about several spheres? While it is tempting to have an array of spheres, a very clean
718 |     | -solution is the make an “abstract class” for anything a ray might hit and make both a sphere and a
    | 718 | +solution is to make an “abstract class” for anything a ray might hit, and make both a sphere and a
719 | 719 | list of spheres just something you can hit. What that class should be called is something of a
720 | 720 | quandary -- calling it an “object” would be good if not for “object oriented” programming. “Surface”
721 | 721 | is often used, with the weakness being maybe we will want volumes. “hittable” emphasizes the member
722 |     | -function that unites them. I don’t love any of these but I will go with “hittable”.
    | 722 | +function that unites them. I don’t love any of these, but I will go with “hittable”.
723 | 723 |
724 | 724 | <div class='together'>
725 | 725 | This `hittable` abstract class will have a hit function that takes in a ray. Most ray tracers have
864 | 864 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
865 | 865 | [Listing [normals-point-against]: <kbd>[sphere.h]</kbd> Remembering the side of the surface]
866 | 866 |
867 |     | -The decision whether to have normals always point out or always point against the ray is based on
    | 867 | +The decision whether to have normals always point out or to always point against the ray is based on
868 | 868 | whether you want to determine the side of the surface at the time of geometry or at the time of
869 | 869 | coloring. In this book we have more material types than we have geometry types, so we'll go for
870 |     | -less work and put the determination at geometry time. This is simply a matter of preference and
    | 870 | +less work, and put the determination at geometry time. This is simply a matter of preference, and
871 | 871 | you'll see both implementations in the literature.
872 | 872 |
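Geometry-time determination boils down to one dot product: if the ray and the outward normal point in opposite directions, the ray hit the front face. A sketch, with a minimal stand-in `vec3`:

```cpp
struct vec3 {
    double x, y, z;
};
double dot(vec3 a, vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
vec3 negate(vec3 a) { return {-a.x, -a.y, -a.z}; }

struct hit_record {
    vec3 normal;
    bool front_face;

    // Geometry-time side determination: the geometry computes the outward
    // normal; we store a normal that always points against the ray, and
    // remember which side of the surface was hit.
    void set_face_normal(vec3 ray_direction, vec3 outward_normal) {
        front_face = dot(ray_direction, outward_normal) < 0;
        normal = front_face ? outward_normal : negate(outward_normal);
    }
};
```

With this, the coloring code can always assume the stored normal faces the incoming ray, and `front_face` still tells it which side it is on.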
873 | 873 | We add the `front_face` bool to the `hit_record` struct. I know that we’ll also want motion blur at

1029 | 1029 |
1030 | 1030 | We'll use shared pointers in our code, because it allows multiple geometries to share a common
1031 | 1031 | instance (for example, a bunch of spheres that all use the same texture map material), and because
1032 |      | -it makes memory management automatic and easier to reason about.
     | 1032 | +it makes memory management automatic, and easier to reason about.
1033 | 1033 |
1034 | 1034 | `std::shared_ptr` is included with the `<memory>` header.
1035 | 1035 |
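A minimal sketch of the sharing described above; the `material` and `sphere` structs here are placeholders for the real classes the book builds later:

```cpp
#include <memory>

// A stand-in material; the book's real material classes come later.
struct material {
    double reflectance;
};

// A sphere that shares ownership of its material with any other geometry
// holding the same pointer. The reference count tracks every owner, and
// the material is freed automatically when the last shared_ptr goes away.
struct sphere {
    std::shared_ptr<material> mat;
};
```

Copying a `std::shared_ptr` (as when several spheres are given the same material) just bumps the reference count; no manual `delete` is ever needed.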
1178 | 1178 |
1179 | 1179 | When a real camera takes a picture, there are usually no jaggies along edges because the edge pixels
1180 | 1180 | are a blend of some foreground and some background. We can get the same effect by averaging a bunch
1181 |      | -of samples inside each pixel. We will not bother with stratification, which is controversial but is
     | 1181 | +of samples inside each pixel. We will not bother with stratification, which is controversial, but is
1182 | 1182 | usual for my programs. For some ray tracers it is critical, but the kind of general one we are
1183 |      | -writing doesn’t benefit very much from it and it makes the code uglier. We abstract the camera class
1184 |      | -a bit so we can make a cooler camera later.
     | 1183 | +writing doesn’t benefit very much from it, and it makes the code uglier. We abstract the camera
     | 1184 | +class a bit so we can make a cooler camera later.
1185 | 1185 |
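The averaging idea can be sketched like this. The `shade` function and the `rand()`-based `random_double` are stand-ins (the text builds a proper canonical random number helper next), but the jitter-and-average structure is the point:

```cpp
#include <cstdlib>

// Stand-in canonical RNG returning a real in [0,1).
inline double random_double() {
    return rand() / (RAND_MAX + 1.0);
}

struct color { double r, g, b; };

// Hypothetical per-ray shading hook; any function of (u, v) works here.
color shade(double u, double v) {
    return {u, v, 0.2};
}

// Average ns jittered samples inside pixel (i, j) of an nx-by-ny image.
// Each sample lands at a slightly different spot inside the pixel, so
// edge pixels blend foreground and background instead of aliasing.
color pixel_color(int i, int j, int nx, int ny, int ns) {
    color sum{0, 0, 0};
    for (int s = 0; s < ns; ++s) {
        double u = (i + random_double()) / nx;  // jitter within the pixel
        double v = (j + random_double()) / ny;
        color c = shade(u, v);
        sum.r += c.r; sum.g += c.g; sum.b += c.b;
    }
    return {sum.r / ns, sum.g / ns, sum.b / ns};
}
```
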
1186 | 1186 | One thing we need is a random number generator that returns real random numbers. We need a function
1187 | 1187 | that returns a canonical random number which by convention returns random real in the range
1384 | 1384 | ideal diffuse surfaces. (I used to do it as a lazy hack that approximates mathematically ideal
1385 | 1385 | Lambertian.)
1386 | 1386 |
1387 |      | -(Reader Vassillen Chizhov proved that the lazy hack is indeed just a lazy hack and is inaccurate.
1388 |      | -The correct representation of ideal Lambertian isn't much more work and is presented at the end of
     | 1387 | +(Reader Vassillen Chizhov proved that the lazy hack is indeed just a lazy hack, and is inaccurate.
     | 1388 | +The correct representation of ideal Lambertian isn't much more work, and is presented at the end of
1389 | 1389 | the chapter.)
1390 | 1390 |
1391 | 1391 | <div class='together'>
1394 | 1394 | sphere with a center at $(p - \vec{N})$ is considered _inside_ the surface, whereas the sphere with
1395 | 1395 | center $(p + \vec{N})$ is considered _outside_ the surface. Select the tangent unit radius sphere
1396 | 1396 | that is on the same side of the surface as the ray origin. Pick a random point $s$ inside this unit
1397 |      | -radius sphere and send a ray from the hit point $p$ to the random point $s$ (this is the vector
     | 1397 | +radius sphere, and send a ray from the hit point $p$ to the random point $s$ (this is the vector
1398 | 1398 | $(s-p)$):
1399 | 1399 |
1400 | 1400 | ![Figure [rand-vector]: Generating a random diffuse bounce ray](../images/fig.rand-vector.jpg)
1404 | 1404 | <div class='together'>
1405 | 1405 | We need a way to pick a random point in a unit radius sphere. We’ll use what is usually the easiest
1406 | 1406 | algorithm: a rejection method. First, pick a random point in the unit cube where x, y, and z all
1407 |      | -range from -1 to +1. Reject this point and try again if the point is outside the sphere.
     | 1407 | +range from -1 to +1. Reject this point, and try again if the point is outside the sphere.
1408 | 1408 |
1409 | 1409 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
1410 | 1410 | class vec3 {
1869 | 1869 |
1870 | 1870 | <div class='together'>
1871 | 1871 | For the Lambertian (diffuse) case we already have, it can either scatter always and attenuate by its
1872 |      | -reflectance $R$, or it can scatter with no attenuation but absorb the fraction $1-R$ of the rays. Or
     | 1872 | +reflectance $R$, or it can scatter with no attenuation but absorb the fraction $1-R$ of the rays, or
1873 | 1873 | it could be a mixture of those strategies. For Lambertian materials we get this simple class:
1874 | 1874 |
1875 | 1875 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
2030 | 2030 | </div>
2031 | 2031 |
2032 | 2032 | <div class='together'>
2033 |      | -We can also randomize the reflected direction by using a small sphere and choosing a new endpoint
     | 2033 | +We can also randomize the reflected direction by using a small sphere, and choosing a new endpoint
2034 | 2034 | for the ray:
2035 | 2035 |
2036 | 2036 | ![Figure [reflect-fuzzy]: Generating fuzzed reflection rays](../images/fig.reflect-fuzzy.jpg)
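The figure’s construction can be sketched as follows: take the mirror reflection, then offset its endpoint by a random point in a sphere of radius `fuzz`. The helper vector types are placeholders for the book’s `vec3`:

```cpp
#include <cstdlib>

struct vec3 { double x, y, z; };
vec3 operator+(vec3 a, vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
vec3 operator-(vec3 a, vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
vec3 operator*(double t, vec3 a) { return {t * a.x, t * a.y, t * a.z}; }
double dot(vec3 a, vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

inline double random_double() { return rand() / (RAND_MAX + 1.0); }

// Rejection method: random point inside the unit sphere.
vec3 random_in_unit_sphere() {
    while (true) {
        vec3 p{2 * random_double() - 1, 2 * random_double() - 1,
               2 * random_double() - 1};
        if (dot(p, p) < 1) return p;
    }
}

// Mirror reflection of v about the unit normal n.
vec3 reflect(vec3 v, vec3 n) { return v - 2 * dot(v, n) * n; }

// Fuzzed reflection: perturb the mirror direction by a small sphere of
// radius fuzz. fuzz = 0 is a perfect mirror; larger values look brushed.
vec3 fuzzy_reflect(vec3 v, vec3 n, double fuzz) {
    return reflect(v, n) + fuzz * random_in_unit_sphere();
}
```
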
2090 | 2090 |
2091 | 2091 | Clear materials such as water, glass, and diamonds are dielectrics. When a light ray hits them, it
2092 | 2092 | splits into a reflected ray and a refracted (transmitted) ray. We’ll handle that by randomly
2093 |      | -choosing between reflection or refraction and only generating one scattered ray per interaction.
     | 2093 | +choosing between reflection or refraction, and only generating one scattered ray per interaction.
2094 | 2094 |
2095 | 2095 | <div class='together'>
2096 | 2096 | The hardest part to debug is the refracted ray. I usually first just have all the light refract if
2107 | 2107 |
2108 | 2108 | Is that right? Glass balls look odd in real life. But no, it isn’t right. The world should be
2109 | 2109 | flipped upside down and no weird black stuff. I just printed out the ray straight through the middle
2110 |      | -of the image and it was clearly wrong. That often does the job.
     | 2110 | +of the image, and it was clearly wrong. That often does the job.
2111 | 2111 |
2112 | 2112 | <div class='together'>
2113 | 2113 | The refraction is described by Snell’s law:
2208 | 2208 |
2209 | 2209 | <div class='together'>
2210 | 2210 | That definitely doesn't look right. One troublesome practical issue is that when the ray is in the
2211 |      | -material with the higher refractive index, there is no real solution to Snell’s law and thus there
     | 2211 | +material with the higher refractive index, there is no real solution to Snell’s law, and thus there
2212 | 2212 | is no refraction possible. If we refer back to Snell's law and the derivation of $\sin\theta'$:
2213 | 2213 |
2214 | 2214 | $$ \sin\theta' = \frac{\eta}{\eta'} \cdot \sin\theta $$
2221 | 2221 |
2222 | 2222 | $$ \frac{1.5}{1.0} \cdot \sin\theta > 1.0 $$
2223 | 2223 |
2224 |      | -The equality between the two sides of the equation is broken and a solution cannot exist. If a
2225 |      | -solution does not exist the glass cannot refract and must reflect the ray:
     | 2224 | +The equality between the two sides of the equation is broken, and a solution cannot exist. If a
     | 2225 | +solution does not exist the glass cannot refract, and must reflect the ray:
2226 | 2226 |
2227 | 2227 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
2228 | 2228 | if(etai_over_etat * sin_theta > 1.0) {
2386 | 2386 |
2387 | 2387 | <div class='together'>
2388 | 2388 | An interesting and easy trick with dielectric spheres is to note that if you use a negative radius,
2389 |      | -the geometry is unaffected but the surface normal points inward, so it can be used as a bubble
     | 2389 | +the geometry is unaffected, but the surface normal points inward, so it can be used as a bubble
2390 | 2390 | to make a hollow glass sphere:
2391 | 2391 |
2392 | 2392 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
2507 | 2507 |
2508 | 2508 | Remember that `vup`, `v`, and `w` are all in the same plane. Note that, like before when our fixed
2509 | 2509 | camera faced -Z, our arbitrary view camera faces -w. And keep in mind that we can -- but we don’t
2510 |      | -have to -- use world up (0,1,0) to specify vup. This is convenient and will naturally keep your
     | 2510 | +have to -- use world up (0,1,0) to specify vup. This is convenient, and will naturally keep your
2511 | 2511 | camera horizontally level until you decide to experiment with crazy camera angles.
2512 | 2512 |
2513 | 2513 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
2606 | 2606 |
2607 | 2607 | <div class="together">
2608 | 2608 | A real camera has a complicated compound lens. For our code we could simulate the order: sensor,
2609 |      | -then lens, then aperture, and figure out where to send the rays and flip the image once computed
2610 |      | -(the image is projected upside down on the film). Graphics people usually use a thin lens
     | 2609 | +then lens, then aperture. Then we could figure out where to send the rays, and flip the image once
     | 2610 | +computed (the image is projected upside down on the film). Graphics people usually use a thin lens
2611 | 2611 | approximation:
2612 | 2612 |
2613 | 2613 | ![Figure [cam-lens]: Camera lens model](../images/fig.cam-lens.jpg)
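In the thin lens approximation, defocus blur comes from starting each ray at a random point on a disk around the eye rather than at a single point. The core piece is a 2D variant of the earlier rejection method; `random_double` is the same stand-in helper as before:

```cpp
#include <cstdlib>

struct vec3 { double x, y, z; };
double dot(vec3 a, vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

inline double random_double() { return rand() / (RAND_MAX + 1.0); }

// Rejection method again, but in 2D: a random point on the lens disk.
// The camera scales this by the aperture radius and offsets the ray
// origin with it, so points on the focus plane stay sharp and
// everything else blurs.
vec3 random_in_unit_disk() {
    while (true) {
        vec3 p{2 * random_double() - 1, 2 * random_double() - 1, 0};
        if (dot(p, p) < 1) return p;
    }
}
```
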
2804 | 2804 |
2805 | 2805 | An interesting thing you might note is the glass balls don’t really have shadows which makes them
2806 | 2806 | look like they are floating. This is not a bug (you don’t see glass balls much in real life, where
2807 |      | -they also look a bit strange and indeed seem to float on cloudy days). A point on the big sphere
     | 2807 | +they also look a bit strange, and indeed seem to float on cloudy days). A point on the big sphere
2808 | 2808 | under a glass ball still has lots of light hitting it because the sky is re-ordered rather than
2809 | 2809 | blocked.
2810 | 2810 |
2816 | 2816 | 2. Biasing scattered rays toward them, and then downweighting those rays to cancel out the bias.
2817 | 2817 | Both work. I am in the minority in favoring the latter approach.
2818 | 2818 |
2819 |      | - 3. Triangles. Most cool models are in triangle form. The model I/O is the worst and almost
     | 2819 | + 3. Triangles. Most cool models are in triangle form. The model I/O is the worst, and almost
2820 | 2820 | everybody tries to get somebody else’s code to do this.
2821 | 2821 |
2822 |      | - 4. Surface textures. This lets you paste images on like wall paper. Pretty easy and a good thing
     | 2822 | + 4. Surface textures. This lets you paste images on like wall paper. Pretty easy, and a good thing
2823 | 2823 | to do.
2824 | 2824 |
2825 | 2825 | 5. Solid textures. Ken Perlin has his code online. Andrew Kensler has some very cool info at his
2826 | 2826 | blog.
2827 | 2827 |
2828 |      | - 6. Volumes and media. Cool stuff and will challenge your software architecture. I favor making
     | 2828 | + 6. Volumes and media. Cool stuff, and will challenge your software architecture. I favor making
2829 | 2829 | volumes have the hittable interface and probabilistically have intersections based on density.
2830 | 2830 | Your rendering code doesn’t even have to know it has volumes with that method.
2831 | 2831 |