====================================================================================================

I’ve taught many graphics classes over the years. Often I do them in ray tracing, because you are
forced to write all the code, but you can still get cool images with no API. I decided to adapt my
course notes into a how-to, to get you to a cool program as quickly as possible. It will not be a
full-featured ray tracer, but it does have the indirect lighting which has made ray tracing a staple
in movies. Follow these steps, and the architecture of the ray tracer you produce will be good for

====================================================================================================

Whenever you start a renderer, you need a way to see an image. The most straightforward way is to
write it to a file. The catch is, there are so many formats. Many of those are complex. I always
start with a plain text ppm file. Here’s a nice description from Wikipedia:



====================================================================================================

<div class='together'>
The one thing that all ray tracers have is a ray class and a computation of what color is seen along
a ray. Let’s think of a ray as a function $\mathbf{p}(t) = \mathbf{a} + t \vec{\mathbf{b}}$. Here
$\mathbf{p}$ is a 3D position along a line in 3D. $\mathbf{a}$ is the ray origin, and
$\vec{\mathbf{b}}$ is the ray direction. The ray parameter $t$ is a real number (`double` in the
code). Plug in a different $t$ and $\mathbf{p}(t)$ moves the point along the ray. Add in negative
$t$ and you can go anywhere on the 3D line. For positive $t$, you get only the parts in front of
$\mathbf{a}$,

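In code, that function might be wrapped up like the sketch below; the `vec3` struct here is a
bare-bones stand-in for a full vector class:

```cpp
#include <cassert>

// Bare-bones stand-in for the book's vec3 class.
struct vec3 { double x, y, z; };
vec3 operator+(const vec3& u, const vec3& v) { return {u.x + v.x, u.y + v.y, u.z + v.z}; }
vec3 operator*(double t, const vec3& v) { return {t * v.x, t * v.y, t * v.z}; }

// A ray p(t) = a + t*b: an origin a plus a scaled direction b.
class ray {
  public:
    ray(const vec3& origin, const vec3& direction) : orig(origin), dir(direction) {}

    const vec3& origin() const    { return orig; }
    const vec3& direction() const { return dir; }

    // The position along the ray at parameter t; negative t is behind the origin.
    vec3 at(double t) const { return orig + t * dir; }

  private:
    vec3 orig, dir;
};
```

Note that nothing here requires `dir` to be unit length, which matters for the discussion below.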
often, so I’ll stick with a 200×100 image. I’ll put the “eye” (or camera center if you think of a
camera) at $(0,0,0)$. I will have the y-axis go up, and the x-axis to the right. In order to respect
the convention of a right-handed coordinate system, into the screen is the negative z-axis. I will
traverse the screen from the lower left-hand corner, and use two offset vectors along the screen
sides to move the ray endpoint across the screen. Note that I do not make the ray direction a unit
length vector, because I think not doing that makes for simpler and slightly faster code.

</div>

<div class='together'>
The rules of vector algebra are all that we would want here. If we expand that equation and move all
the terms to the left hand side we get:

$$ t^2 \vec{\mathbf{b}}\cdot\vec{\mathbf{b}}
   + 2t \vec{\mathbf{b}} \cdot \vec{(\mathbf{a}-\mathbf{c})}
   + \vec{(\mathbf{a}-\mathbf{c})} \cdot \vec{(\mathbf{a}-\mathbf{c})} - r^2 = 0 $$

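The sign of the discriminant of that quadratic in $t$ tells us whether the ray hits the sphere at
all. A self-contained sketch (the `vec3` and `dot` here are stand-ins for the book's versions):

```cpp
#include <cassert>

struct vec3 { double x, y, z; };
double dot(const vec3& u, const vec3& v) { return u.x * v.x + u.y * v.y + u.z * v.z; }
vec3 operator-(const vec3& u, const vec3& v) { return {u.x - v.x, u.y - v.y, u.z - v.z}; }

// Does a ray with origin `a` and direction `b` hit the sphere (center, radius)?
// A positive discriminant means two real roots, i.e. the ray pierces the sphere.
bool hit_sphere(const vec3& center, double radius,
                const vec3& origin, const vec3& direction) {
    vec3 oc = origin - center;                       // (a - c)
    double A = dot(direction, direction);            // b . b
    double B = 2.0 * dot(direction, oc);             // 2 b . (a - c)
    double C = dot(oc, oc) - radius * radius;        // (a-c).(a-c) - r^2
    return B * B - 4 * A * C > 0;                    // discriminant test
}
```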


Now, how about several spheres? While it is tempting to have an array of spheres, a very clean
solution is to make an “abstract class” for anything a ray might hit, and make both a sphere and a
list of spheres just something you can hit. What that class should be called is something of a
quandary -- calling it an “object” would be good if not for “object oriented” programming. “Surface”
is often used, with the weakness being maybe we will want volumes. “hittable” emphasizes the member
function that unites them. I don’t love any of these, but I will go with “hittable”.

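As an illustrative sketch of such an interface (the `t_min`/`t_max` hit interval and the
`hit_record` follow the common pattern; treat the exact names and signature as assumptions, not the
book's listing):

```cpp
#include <cassert>
#include <memory>
#include <vector>

struct vec3 { double x, y, z; };  // stand-in for the book's vec3
struct ray  { vec3 orig, dir; };  // stand-in for the ray class

// Everything a ray can hit records the same facts about the hit.
struct hit_record {
    double t;     // ray parameter at the hit
    vec3 p;       // hit point
    vec3 normal;  // surface normal at the hit
};

class hittable {
  public:
    virtual ~hittable() = default;
    // Return true if the ray hits this object for some t in (t_min, t_max),
    // filling `rec` with the closest such hit.
    virtual bool hit(const ray& r, double t_min, double t_max,
                     hit_record& rec) const = 0;
};

// A list of hittables is itself hittable: keep the closest hit seen so far
// by shrinking the upper end of the interval as hits are found.
class hittable_list : public hittable {
  public:
    std::vector<std::shared_ptr<hittable>> objects;

    bool hit(const ray& r, double t_min, double t_max,
             hit_record& rec) const override {
        hit_record temp;
        bool hit_anything = false;
        double closest = t_max;
        for (const auto& obj : objects) {
            if (obj->hit(r, t_min, closest, temp)) {
                hit_anything = true;
                closest = temp.t;
                rec = temp;
            }
        }
        return hit_anything;
    }
};
```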
<div class='together'>
This `hittable` abstract class will have a hit function that takes in a ray. Most ray tracers have

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [normals-point-against]: <kbd>[sphere.h]</kbd> Remembering the side of the surface]

We can set things up so that normals always point “outward” from the surface, or always point
against the incident ray. This decision depends on whether you want to determine the side of the
surface at the time of geometry intersection or at the time of coloring. In this book we have more
material types than we have geometry types, so we'll go for less work and put the determination at
geometry time. This is simply a matter of preference, and you'll see both implementations in the
literature.

We add the `front_face` bool to the `hit_record` struct. I know that we’ll also want motion blur at
some point, so I’ll also add a time input variable.

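That bookkeeping can be sketched as a small helper on the hit record (the `vec3`, `dot`, and `ray`
here are stand-ins for the book's classes):

```cpp
#include <cassert>

struct vec3 { double x, y, z; };
vec3 operator-(const vec3& v) { return {-v.x, -v.y, -v.z}; }
double dot(const vec3& u, const vec3& v) { return u.x * v.x + u.y * v.y + u.z * v.z; }
struct ray { vec3 orig, dir; const vec3& direction() const { return dir; } };

struct hit_record {
    vec3 p;
    vec3 normal;
    double t;
    bool front_face;

    // `outward_normal` always points out of the surface. Store instead a
    // normal that points against the incident ray, and remember in
    // `front_face` which side of the surface the ray hit.
    void set_face_normal(const ray& r, const vec3& outward_normal) {
        front_face = dot(r.direction(), outward_normal) < 0;
        normal = front_face ? outward_normal : -outward_normal;
    }
};
```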

When a real camera takes a picture, there are usually no jaggies along edges because the edge pixels
are a blend of some foreground and some background. We can get the same effect by averaging a bunch
of samples inside each pixel. We will not bother with stratification. This is controversial, but is
usual for my programs. For some ray tracers it is critical, but the kind of general one we are
writing doesn’t benefit very much from it and it makes the code uglier. We abstract the camera class
a bit so we can make a cooler camera later.

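The sampling loop can be sketched like this; the `ray_color` callback is a hypothetical stand-in
for shooting a camera ray through the jittered screen position and shading it:

```cpp
#include <cassert>
#include <cstdlib>

// Canonical random number in [0,1).
double random_double() { return rand() / (RAND_MAX + 1.0); }

struct color { double r, g, b; };

// Average `samples_per_pixel` jittered samples inside pixel (i, j) of an
// nx-by-ny image. Each sample perturbs the pixel position by up to one
// pixel width/height, then the resulting colors are averaged.
color sample_pixel(int i, int j, int nx, int ny, int samples_per_pixel,
                   color (*ray_color)(double u, double v)) {
    color sum{0, 0, 0};
    for (int s = 0; s < samples_per_pixel; ++s) {
        double u = (i + random_double()) / nx;  // jittered horizontal fraction
        double v = (j + random_double()) / ny;  // jittered vertical fraction
        color c = ray_color(u, v);
        sum.r += c.r; sum.g += c.g; sum.b += c.b;
    }
    sum.r /= samples_per_pixel;
    sum.g /= samples_per_pixel;
    sum.b /= samples_per_pixel;
    return sum;
}
```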
Lambertian.)

(Reader Vassillen Chizhov proved that the lazy hack is indeed just a lazy hack and is inaccurate.
The correct representation of ideal Lambertian isn't much more work, and is presented at the end of
the chapter.)

<div class='together'>
There are two unit radius spheres tangent to the hit point $p$ of a surface, with centers at
$(p + \vec{N})$ and $(p - \vec{N})$, where $\vec{N}$ is the normal of the surface. The sphere with a
center at $(p - \vec{N})$ is considered _inside_ the surface, whereas the sphere with center
$(p + \vec{N})$ is considered _outside_ the surface. Select the tangent unit radius sphere that is
on the same side of the surface as the ray origin. Pick a random point $s$ inside this unit radius
sphere, and send a ray from the hit point $p$ to the random point $s$ (this is the vector $(s-p)$):

![Figure [rand-vector]: Generating a random diffuse bounce ray](../images/fig.rand-vector.jpg)

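Picking that random point is commonly done by rejection sampling; a self-contained sketch:

```cpp
#include <cassert>
#include <cstdlib>

struct vec3 { double x, y, z; };
double length_squared(const vec3& v) { return v.x * v.x + v.y * v.y + v.z * v.z; }
double random_double(double lo, double hi) {
    return lo + (hi - lo) * (rand() / (RAND_MAX + 1.0));
}

// Rejection sampling: draw points uniformly in the unit cube until one
// falls inside the unit sphere. Simple, and uniform over the ball.
vec3 random_in_unit_sphere() {
    while (true) {
        vec3 p{random_double(-1, 1), random_double(-1, 1), random_double(-1, 1)};
        if (length_squared(p) < 1) return p;
    }
}
```

The bounce direction $(s - p)$ is then $\vec{N} + \texttt{random\_in\_unit\_sphere()}$, since the
selected tangent sphere is centered one unit along the normal from the hit point.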


<div class='together'>
For the Lambertian (diffuse) case we already have, it can either scatter always and attenuate by its
reflectance $R$, or it can scatter with no attenuation but absorb the fraction $1-R$ of the rays, or
it could be a mixture of those strategies. For Lambertian materials we get this simple class:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++

Clear materials such as water, glass, and diamonds are dielectrics. When a light ray hits them, it
splits into a reflected ray and a refracted (transmitted) ray. We’ll handle that by randomly
choosing between reflection or refraction, and only generating one scattered ray per interaction.

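For reference, the refraction direction can be computed from Snell's law by splitting the refracted
ray into parts parallel and perpendicular to the normal. A self-contained sketch (the vector helpers
are stand-ins, and the function assumes refraction is actually possible, i.e. $\sin\theta' \le 1$):

```cpp
#include <cassert>
#include <cmath>

struct vec3 { double x, y, z; };
vec3 operator+(const vec3& u, const vec3& v) { return {u.x + v.x, u.y + v.y, u.z + v.z}; }
vec3 operator*(double t, const vec3& v) { return {t * v.x, t * v.y, t * v.z}; }
vec3 operator-(const vec3& v) { return {-v.x, -v.y, -v.z}; }
double dot(const vec3& u, const vec3& v) { return u.x * v.x + u.y * v.y + u.z * v.z; }
double length_squared(const vec3& v) { return dot(v, v); }

// Refract a unit-length direction `uv` about unit normal `n`, where
// etai_over_etat is the ratio of refractive indices across the boundary.
vec3 refract(const vec3& uv, const vec3& n, double etai_over_etat) {
    double cos_theta = dot(-uv, n);
    vec3 r_out_parallel = etai_over_etat * (uv + cos_theta * n);
    vec3 r_out_perp = -std::sqrt(1.0 - length_squared(r_out_parallel)) * n;
    return r_out_parallel + r_out_perp;
}
```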
<div class='together'>
The hardest part to debug is the refracted ray. I usually first just have all the light refract if


<div class='together'>
That definitely doesn't look right. One troublesome practical issue is that when the ray is in the
material with the higher refractive index, there is no real solution to Snell’s law, and thus there
is no refraction possible. If we refer back to Snell's law and the derivation of $\sin\theta'$:

$$ \sin\theta' = \frac{\eta}{\eta'} \cdot \sin\theta $$


$$ \frac{1.5}{1.0} \cdot \sin\theta > 1.0 $$

The equality between the two sides of the equation is broken, and a solution cannot exist. If a
solution does not exist, the glass cannot refract, and therefore must reflect the ray:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
if(etai_over_etat * sin_theta > 1.0) {


<div class='together'>
An interesting and easy trick with dielectric spheres is to note that if you use a negative radius,
the geometry is unaffected, but the surface normal points inward. This can be used as a bubble to
make a hollow glass sphere:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
world.add(make_shared<sphere>(vec3(0,0,-1), 0.5, make_shared<lambertian>(vec3(0.1, 0.2, 0.5))));


<div class="together">
A real camera has a complicated compound lens. For our code we could simulate the order: sensor,
then lens, then aperture. Then we could figure out where to send the rays, and flip the image after
it's computed (the image is projected upside down on the film). Graphics people, however, usually
use a thin lens approximation:

![Figure [cam-lens]: Camera lens model](../images/fig.cam-lens.jpg)

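One practical consequence of the thin lens model is defocus blur: camera ray origins are sampled
over a disk (the aperture) rather than from a single point. A sketch of the usual rejection
sampler (stand-in `vec3`; the camera maps this disk onto its own basis vectors):

```cpp
#include <cassert>
#include <cstdlib>

struct vec3 { double x, y, z; };
double random_double() { return rand() / (RAND_MAX + 1.0); }

// Rejection sampling on the z = 0 plane: draw points in the enclosing
// square until one lands inside the unit disk.
vec3 random_in_unit_disk() {
    while (true) {
        vec3 p{2 * random_double() - 1, 2 * random_double() - 1, 0};
        if (p.x * p.x + p.y * p.y < 1) return p;
    }
}
```

Each camera ray then starts at the lens center offset by `lens_radius * random_in_unit_disk()` and
is aimed at the corresponding point on the plane of focus, so only that plane renders sharp.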
</div>

An interesting thing you might note is that the glass balls don’t really have shadows, which makes
them look like they are floating. This is not a bug -- you don’t see glass balls much in real life,
where they also look a bit strange, and indeed seem to float on cloudy days. A point on the big
sphere under a glass ball still has lots of light hitting it because the sky is re-ordered rather
than blocked.

  2. Biasing scattered rays toward them, and then downweighting those rays to cancel out the bias.
     Both work. I am in the minority in favoring the latter approach.

  3. Triangles. Most cool models are in triangle form. The model I/O is the worst, and almost
     everybody tries to get somebody else’s code to do this.

  4. Surface textures. This lets you paste images on like wall paper. Pretty easy, and a good thing
     to do.

  5. Solid textures. Ken Perlin has his code online. Andrew Kensler has some very cool info at his
     blog.

  6. Volumes and media. Cool stuff, and will challenge your software architecture. I favor making
     volumes have the hittable interface and probabilistically have intersections based on density.
     Your rendering code doesn’t even have to know it has volumes with that method.