33 | 33 | don’t need to. However, I suggest you do, because it’s fast, portable, and most production movie and
34 | 34 | video game renderers are written in C++. Note that I avoid most “modern features” of C++, but
35 | 35 | inheritance and operator overloading are too useful for ray tracers to pass on. I do not provide the
36 |    | -code online, but the code is real, and I show all of it except for a few straightforward operators
37 |    | -in the `vec3` class. I am a big believer in typing in code to learn it, but when code is available I
   | 36 | +code online, but the code is real and I show all of it except for a few straightforward operators in
   | 37 | +the `vec3` class. I am a big believer in typing in code to learn it, but when code is available I
38 | 38 | use it, so I only practice what I preach when the code is not available. So don’t ask!
39 | 39 |
40 | 40 | I have left that last part in because it is funny what a 180 I have done. Several readers ended up

64 | 64 | ====================================================================================================
65 | 65 |
66 | 66 | Whenever you start a renderer, you need a way to see an image. The most straightforward way is to
67 |    | -write it to a file. The catch is, there are so many formats, and many of those are complex. I always
   | 67 | +write it to a file. The catch is, there are so many formats. Many of those are complex. I always
68 | 68 | start with a plain text ppm file. Here’s a nice description from Wikipedia:
69 | 69 |
70 | 70 | 

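Editor's note: the hunk above introduces the plain-text PPM (P3) format. As a rough sketch of the format only (the tiny 2x2 image and the `make_ppm` helper are my own stand-ins, not the book's listing), the file is just a text header followed by one RGB triple per pixel:

```cpp
#include <sstream>
#include <string>

// Build a tiny 2x2 plain-text PPM (P3) image as a string.
// Header: magic number, width and height, then the maximum color value.
std::string make_ppm() {
    std::ostringstream out;
    out << "P3\n2 2\n255\n";        // P3, 2x2 pixels, colors in 0..255
    out << "255 0 0\n0 255 0\n";    // row 1: red, green
    out << "0 0 255\n255 255 0\n";  // row 2: blue, yellow
    return out.str();
}
```

Writing this string to a `.ppm` file yields an image many viewers can open directly.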
140 | 140 |
141 | 141 | <div class='together'>
142 | 142 | Hooray! This is the graphics “hello world”. If your image doesn’t look like that, open the output
143 |     | -file in a text editor, and see what it looks like. It should start something like this:
    | 143 | +file in a text editor and see what it looks like. It should start something like this:
144 | 144 |
145 | 145 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
146 | 146 | P3

171 | 171 | loop or other problem.
172 | 172 |
173 | 173 | <div class='together'>
174 |     | -Our program outputs the image to the standard output stream (`std::cout`), so leave that alone, and
    | 174 | +Our program outputs the image to the standard output stream (`std::cout`), so leave that alone and
175 | 175 | instead write to the error output stream (`std::cerr`):
176 | 176 |
177 | 177 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++

355 | 355 | ====================================================================================================
356 | 356 |
357 | 357 | <div class='together'>
358 |     | -The one thing that all ray tracers have is a ray class, and a computation of what color is seen
359 |     | -along a ray. Let’s think of a ray as a function $\mathbf{p}(t) = \mathbf{a} + t \vec{\mathbf{b}}$.
360 |     | -Here $\mathbf{p}$ is a 3D position along a line in 3D. $\mathbf{a}$ is the ray origin, and
    | 358 | +The one thing that all ray tracers have is a ray class and a computation of what color is seen along
    | 359 | +a ray. Let’s think of a ray as a function $\mathbf{p}(t) = \mathbf{a} + t \vec{\mathbf{b}}$. Here
    | 360 | +$\mathbf{p}$ is a 3D position along a line in 3D. $\mathbf{a}$ is the ray origin, and
361 | 361 | $\vec{\mathbf{b}}$ is the ray direction. The ray parameter $t$ is a real number (`double` in the
362 | 362 | code). Plug in a different $t$ and $p(t)$ moves the point along the ray. Add in negative $t$ and you
363 | 363 | can go anywhere on the 3D line. For positive $t$, you get only the parts in front of $\mathbf{a}$,

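Editor's note: the function $\mathbf{p}(t) = \mathbf{a} + t \vec{\mathbf{b}}$ described in the hunk above maps directly onto a small class. A sketch, using a minimal stand-in `vec3` (the book's real `vec3` has many more operators, and its method names may differ):

```cpp
// Minimal stand-in for the book's vec3 class.
struct vec3 { double x, y, z; };
vec3 operator+(vec3 u, vec3 v) { return {u.x + v.x, u.y + v.y, u.z + v.z}; }
vec3 operator*(double t, vec3 v) { return {t * v.x, t * v.y, t * v.z}; }

// A ray p(t) = a + t*b: origin a, direction b, real parameter t.
class ray {
  public:
    ray(vec3 origin, vec3 direction) : orig(origin), dir(direction) {}
    vec3 origin() const { return orig; }
    vec3 direction() const { return dir; }
    vec3 at(double t) const { return orig + t * dir; }  // the point p(t)
  private:
    vec3 orig, dir;
};
```

Negative `t` reaches points behind the origin; positive `t` stays in front of it, exactly as the text says.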
401 | 401 | </div>
402 | 402 |
403 | 403 | Now we are ready to turn the corner and make a ray tracer. At the core, the ray tracer sends rays
404 |     | -through pixels, and computes the color seen in the direction of those rays. The involved steps are
    | 404 | +through pixels and computes the color seen in the direction of those rays. The involved steps are
405 | 405 | (1) calculate the ray from the eye to the pixel, (2) determine which objects the ray intersects, and
406 | 406 | (3) compute a color for that intersection point. When first developing a ray tracer, I always do a
407 | 407 | simple camera for getting the code up and running. I also make a simple `color(ray)` function that

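Editor's note: the simple `color(ray)` function mentioned above can be as little as a background gradient. A sketch of that idea, blending white and light blue by the height of the unit ray direction (stand-in `vec3`; the blend values are illustrative):

```cpp
#include <cmath>

// Minimal stand-in for the book's vec3.
struct vec3 { double x, y, z; };
vec3 operator+(vec3 a, vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
vec3 operator*(double t, vec3 v) { return {t * v.x, t * v.y, t * v.z}; }
vec3 unit_vector(vec3 v) {
    double len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return {v.x/len, v.y/len, v.z/len};
}

// Step (3) for an empty scene: a background that blends white and light
// blue based on the height (y) of the unit ray direction.
vec3 ray_color(vec3 direction) {
    vec3 u = unit_vector(direction);
    double t = 0.5 * (u.y + 1.0);           // map y in [-1,1] to t in [0,1]
    return (1.0 - t) * vec3{1.0, 1.0, 1.0}  // white at the bottom...
         + t * vec3{0.5, 0.7, 1.0};         // ...blue at the top
}
```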
535 | 535 | </div>
536 | 536 |
537 | 537 | <div class='together'>
538 |     | -The rules of vector algebra are all that we would want here, and if we expand that equation, and
539 |     | -move all the terms to the left hand side we get:
    | 538 | +The rules of vector algebra are all that we would want here. If we expand that equation and move all
    | 539 | +the terms to the left hand side we get:
540 | 540 |
541 | 541 | $$ t^2 \vec{\mathbf{b}}\cdot\vec{\mathbf{b}}
542 | 542 |      + 2t \vec{\mathbf{b}} \cdot \vec{(\mathbf{a}-\mathbf{c})}

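Editor's note: the expanded equation is a quadratic in $t$, so a real root (a hit) exists exactly when the discriminant is non-negative. A hedged sketch of that hit test with a stand-in `vec3` (the function names are mine, not the book's listing):

```cpp
// Minimal stand-in for the book's vec3.
struct vec3 { double x, y, z; };
vec3 operator-(vec3 a, vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
double dot(vec3 u, vec3 v) { return u.x*v.x + u.y*v.y + u.z*v.z; }

// Does the ray p(t) = origin + t*dir hit the sphere (center, radius)?
// The quadratic is t^2 (b.b) + 2t b.(a-c) + (a-c).(a-c) - r^2 = 0,
// so a real solution exists iff the discriminant B^2 - 4AC >= 0.
bool hit_sphere(vec3 center, double radius, vec3 origin, vec3 dir) {
    vec3 oc = origin - center;            // the (a - c) term
    double A = dot(dir, dir);
    double B = 2.0 * dot(dir, oc);
    double C = dot(oc, oc) - radius * radius;
    return B*B - 4*A*C >= 0;
}
```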
864 | 864 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
865 | 865 | [Listing [normals-point-against]: <kbd>[sphere.h]</kbd> Remembering the side of the surface]
866 | 866 |
867 |     | -The decision whether to have normals always point out or to always point against the ray is based on
868 |     | -whether you want to determine the side of the surface at the time of geometry or at the time of
869 |     | -coloring. In this book we have more material types than we have geometry types, so we'll go for
870 |     | -less work, and put the determination at geometry time. This is simply a matter of preference, and
871 |     | -you'll see both implementations in the literature.
    | 867 | +We can set things up so that normals always point “outward” from the surface, or always point
    | 868 | +against the incident ray. This decision is determined by whether you want to determine the side of
    | 869 | +the surface at the time of geometry intersection or at the time of coloring. In this book we have
    | 870 | +more material types than we have geometry types, so we'll go for less work and put the determination
    | 871 | +at geometry time. This is simply a matter of preference, and you'll see both implementations in the
    | 872 | +literature.
872 | 873 |
873 | 874 | We add the `front_face` bool to the `hit_record` struct. I know that we’ll also want motion blur at
874 | 875 | some point, so I’ll also add a time input variable.

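Editor's note: the geometry-time determination described in this hunk is done by comparing the ray direction against the outward normal. A sketch (the `front_face` field follows the text; the rest of the struct is a stand-in, not the book's full `hit_record`):

```cpp
// Minimal stand-in for the book's vec3.
struct vec3 { double x, y, z; };
vec3 operator-(vec3 v) { return {-v.x, -v.y, -v.z}; }
double dot(vec3 u, vec3 v) { return u.x*v.x + u.y*v.y + u.z*v.z; }

// Geometry-time side determination: the stored normal always points
// against the incident ray, and front_face records which side was hit.
struct hit_record {
    vec3 normal;
    bool front_face;

    void set_face_normal(vec3 ray_dir, vec3 outward_normal) {
        front_face = dot(ray_dir, outward_normal) < 0;  // ray hits outside
        normal = front_face ? outward_normal : -outward_normal;
    }
};
```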
1029 | 1030 |
1030 | 1031 | We'll use shared pointers in our code, because it allows multiple geometries to share a common
1031 | 1032 | instance (for example, a bunch of spheres that all use the same texture map material), and because
1032 |      | -it makes memory management automatic, and easier to reason about.
     | 1033 | +it makes memory management automatic and easier to reason about.
1033 | 1034 |
1034 | 1035 | `std::shared_ptr` is included with the `<memory>` header.
1035 | 1036 |

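Editor's note: a small self-contained example of the sharing the hunk describes. The `material` and `sphere` structs here are illustrative stand-ins, not the book's classes:

```cpp
#include <memory>
#include <vector>

// Stand-in types: one material instance shared by several spheres, freed
// automatically only when the last sphere referencing it goes away.
struct material { double reflectance = 0.5; };
struct sphere   { std::shared_ptr<material> mat; };

std::vector<sphere> make_scene() {
    auto shared_mat = std::make_shared<material>();
    std::vector<sphere> world;
    for (int i = 0; i < 3; ++i)
        world.push_back(sphere{shared_mat});  // all three share one instance
    return world;
}
```

Because all three spheres hold the same `shared_ptr`, mutating the material through one of them is visible through the others, and no manual `delete` is ever needed.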
1178 | 1179 |
1179 | 1180 | When a real camera takes a picture, there are usually no jaggies along edges because the edge pixels
1180 | 1181 | are a blend of some foreground and some background. We can get the same effect by averaging a bunch
1181 |      | -of samples inside each pixel. We will not bother with stratification, which is controversial, but is
     | 1182 | +of samples inside each pixel. We will not bother with stratification. This is controversial, but is
1182 | 1183 | usual for my programs. For some ray tracers it is critical, but the kind of general one we are
1183 |      | -writing doesn’t benefit very much from it, and it makes the code uglier. We abstract the camera
1184 |      | -class a bit so we can make a cooler camera later.
     | 1184 | +writing doesn’t benefit very much from it and it makes the code uglier. We abstract the camera class
     | 1185 | +a bit so we can make a cooler camera later.
1185 | 1186 |
1186 | 1187 | One thing we need is a random number generator that returns real random numbers. We need a function
1187 | 1188 | that returns a canonical random number which by convention returns random real in the range

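Editor's note: the canonical random number mentioned at the end of this hunk conventionally lies in [0,1), 0 inclusive and 1 exclusive. One common implementation using `<random>` (a sketch; the book's own helper may be written differently):

```cpp
#include <random>

// Canonical random real in [0, 1). Static objects avoid re-seeding the
// engine on every call.
inline double random_double() {
    static std::uniform_real_distribution<double> distribution(0.0, 1.0);
    static std::mt19937 generator;
    return distribution(generator);
}

// Random real in [min, max), built on the canonical version.
inline double random_double(double min, double max) {
    return min + (max - min) * random_double();
}
```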
1384 | 1385 | ideal diffuse surfaces. (I used to do it as a lazy hack that approximates mathematically ideal
1385 | 1386 | Lambertian.)
1386 | 1387 |
1387 |      | -(Reader Vassillen Chizhov proved that the lazy hack is indeed just a lazy hack, and is inaccurate.
     | 1388 | +(Reader Vassillen Chizhov proved that the lazy hack is indeed just a lazy hack and is inaccurate.
1388 | 1389 | The correct representation of ideal Lambertian isn't much more work, and is presented at the end of
1389 | 1390 | the chapter.)
1390 | 1391 |

1404 | 1405 | <div class='together'>
1405 | 1406 | We need a way to pick a random point in a unit radius sphere. We’ll use what is usually the easiest
1406 | 1407 | algorithm: a rejection method. First, pick a random point in the unit cube where x, y, and z all
1407 |      | -range from -1 to +1. Reject this point, and try again if the point is outside the sphere.
     | 1408 | +range from -1 to +1. Reject this point and try again if the point is outside the sphere.
1408 | 1409 |
1409 | 1410 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
1410 | 1411 | class vec3 {

2030 | 2031 | </div>
2031 | 2032 |
2032 | 2033 | <div class='together'>
2033 |      | -We can also randomize the reflected direction by using a small sphere, and choosing a new endpoint
     | 2034 | +We can also randomize the reflected direction by using a small sphere and choosing a new endpoint
2034 | 2035 | for the ray:
2035 | 2036 |
2036 | 2037 | ![Figure [reflect-fuzzy]: Generating fuzzed reflection rays](../images/fig.reflect-fuzzy.jpg)

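Editor's note: the fuzzed reflection in this hunk perturbs the mirror direction by a point in a small sphere scaled by a fuzz factor. A sketch with stand-in types and helper names of my own (the book's `metal` material wires this into its scatter function):

```cpp
#include <random>

// Minimal stand-in for the book's vec3.
struct vec3 { double x, y, z; };
vec3 operator+(vec3 a, vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
vec3 operator*(double t, vec3 v) { return {t * v.x, t * v.y, t * v.z}; }
double dot(vec3 u, vec3 v) { return u.x*v.x + u.y*v.y + u.z*v.z; }

// Rejection method: random point strictly inside the unit sphere.
vec3 random_in_unit_sphere() {
    static std::mt19937 gen;
    static std::uniform_real_distribution<double> dist(-1.0, 1.0);
    while (true) {
        vec3 p{dist(gen), dist(gen), dist(gen)};
        if (dot(p, p) < 1.0) return p;
    }
}

// Fuzzed reflection: move the reflected ray's endpoint by a point in a
// sphere of radius `fuzz` (fuzz = 0 gives a perfect mirror).
vec3 fuzz_reflect(vec3 reflected, double fuzz) {
    return reflected + fuzz * random_in_unit_sphere();
}
```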
2107 | 2108 |
2108 | 2109 | Is that right? Glass balls look odd in real life. But no, it isn’t right. The world should be
2109 | 2110 | flipped upside down and no weird black stuff. I just printed out the ray straight through the middle
2110 |      | -of the image, and it was clearly wrong. That often does the job.
     | 2111 | +of the image and it was clearly wrong. That often does the job.
2111 | 2112 |
2112 | 2113 | <div class='together'>
2113 | 2114 | The refraction is described by Snell’s law:

2222 | 2223 | $$ \frac{1.5}{1.0} \cdot \sin\theta > 1.0 $$
2223 | 2224 |
2224 | 2225 | The equality between the two sides of the equation is broken, and a solution cannot exist. If a
2225 |      | -solution does not exist the glass cannot refract, and must reflect the ray:
     | 2226 | +solution does not exist, the glass cannot refract, and therefore must reflect the ray:
2226 | 2227 |
2227 | 2228 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
2228 | 2229 | if(etai_over_etat * sin_theta > 1.0) {

2386 | 2387 |
2387 | 2388 | <div class='together'>
2388 | 2389 | An interesting and easy trick with dielectric spheres is to note that if you use a negative radius,
2389 |      | -the geometry is unaffected, but the surface normal points inward, so it can be used as a bubble
2390 |      | -to make a hollow glass sphere:
     | 2390 | +the geometry is unaffected, but the surface normal points inward. This can be used as a bubble to
     | 2391 | +make a hollow glass sphere:
2391 | 2392 |
2392 | 2393 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
2393 | 2394 | world.add(make_shared<sphere>(vec3(0,0,-1), 0.5, make_shared<lambertian>(vec3(0.1, 0.2, 0.5))));

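Editor's note: why a negative radius flips the normal: the sphere code computes the outward normal as $(\mathbf{p} - \mathbf{c})/r$, so negating $r$ negates the normal while leaving the geometry ($|\mathbf{p} - \mathbf{c}| = |r|$) unchanged. A tiny numeric check with stand-in types:

```cpp
// Minimal stand-in for the book's vec3.
struct vec3 { double x, y, z; };

// The sphere's normal at hit point p: (p - center) / radius.
// With radius < 0, this points inward even though |p - center| is unchanged.
vec3 outward_normal(vec3 p, vec3 center, double radius) {
    return {(p.x - center.x) / radius,
            (p.y - center.y) / radius,
            (p.z - center.z) / radius};
}
```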
2507 | 2508 |
2508 | 2509 | Remember that `vup`, `v`, and `w` are all in the same plane. Note that, like before when our fixed
2509 | 2510 | camera faced -Z, our arbitrary view camera faces -w. And keep in mind that we can -- but we don’t
2510 |      | -have to -- use world up (0,1,0) to specify vup. This is convenient, and will naturally keep your
     | 2511 | +have to -- use world up (0,1,0) to specify vup. This is convenient and will naturally keep your
2511 | 2512 | camera horizontally level until you decide to experiment with crazy camera angles.
2512 | 2513 |
2513 | 2514 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++

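Editor's note: the relationships among `vup`, `u`, `v`, and `w` in this hunk come down to two cross products: `w` opposes the view direction, `u` is perpendicular to both `vup` and `w`, and `v` completes the orthonormal basis. A sketch with stand-in types (parameter names assumed from the text):

```cpp
#include <cmath>

// Minimal stand-in for the book's vec3.
struct vec3 { double x, y, z; };
vec3 operator-(vec3 a, vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
double dot(vec3 u, vec3 v) { return u.x*v.x + u.y*v.y + u.z*v.z; }
vec3 cross(vec3 u, vec3 v) {
    return {u.y*v.z - u.z*v.y, u.z*v.x - u.x*v.z, u.x*v.y - u.y*v.x};
}
vec3 unit_vector(vec3 v) {
    double len = std::sqrt(dot(v, v));
    return {v.x/len, v.y/len, v.z/len};
}

// The camera faces -w; u points right and v points up in the image plane.
struct basis { vec3 u, v, w; };
basis camera_basis(vec3 lookfrom, vec3 lookat, vec3 vup) {
    vec3 w = unit_vector(lookfrom - lookat);  // opposite the view direction
    vec3 u = unit_vector(cross(vup, w));      // perpendicular to vup and w
    vec3 v = cross(w, u);                     // already unit length
    return {u, v, w};
}
```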
2606 | 2607 |
2607 | 2608 | <div class="together">
2608 | 2609 | A real camera has a complicated compound lens. For our code we could simulate the order: sensor,
2609 |      | -then lens, then aperture. Then we could figure out where to send the rays, and flip the image once
2610 |      | -computed (the image is projected upside down on the film). Graphics people usually use a thin lens
2611 |      | -approximation:
     | 2610 | +then lens, then aperture. Then we could figure out where to send the rays, and flip the image after
     | 2611 | +it's computed (the image is projected upside down on the film). Graphics people, however, usually
     | 2612 | +use a thin lens approximation:
2612 | 2613 |
2613 | 2614 | ![Figure [cam-lens]: Camera lens model](../images/fig.cam-lens.jpg)
2614 | 2615 |

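Editor's note: under the thin lens approximation, defocus blur comes from starting rays at random points on a disk around `lookfrom` rather than at a single point. A sketch of the disk sampling by rejection (the `vec2` stand-in and function name are mine):

```cpp
#include <random>

struct vec2 { double x, y; };

// Rejection method: random point in the unit disk (the z = 0 plane).
// A thin-lens camera offsets each ray origin by a point like this,
// scaled by the lens radius.
vec2 random_in_unit_disk() {
    static std::mt19937 gen;
    static std::uniform_real_distribution<double> dist(-1.0, 1.0);
    while (true) {
        vec2 p{dist(gen), dist(gen)};
        if (p.x*p.x + p.y*p.y < 1.0) return p;
    }
}
```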
2803 | 2804 | </div>
2804 | 2805 |
2805 | 2806 | An interesting thing you might note is the glass balls don’t really have shadows which makes them
2806 |      | -look like they are floating. This is not a bug (you don’t see glass balls much in real life, where
2807 |      | -they also look a bit strange, and indeed seem to float on cloudy days). A point on the big sphere
     | 2807 | +look like they are floating. This is not a bug -- you don’t see glass balls much in real life, where
     | 2808 | +they also look a bit strange, and indeed seem to float on cloudy days. A point on the big sphere
2808 | 2809 | under a glass ball still has lots of light hitting it because the sky is re-ordered rather than
2809 | 2810 | blocked.
