Disclaimer

The content of the original book series is licensed under the CC0 license, which permits copying, modification, and distribution. The explanations are copied directly and have been annotated with Rust-specific details where appropriate. Any sentences that include words such as I, me, or mine are from the original authors, Peter Shirley, Trevor David Black or Steve Hollasch. Once again, thank you for this outstanding book series.


Overview

I’ve taught many graphics classes over the years. Often I do them in ray tracing, because you are forced to write all the code, but you can still get cool images with no API. I decided to adapt my course notes into a how-to, to get you to a cool program as quickly as possible. It will not be a full-featured ray tracer, but it does have the indirect lighting which has made ray tracing a staple in movies. Follow these steps, and the architecture of the ray tracer you produce will be good for extending to a more extensive ray tracer if you get excited and want to pursue that.

When somebody says “ray tracing” it could mean many things. What I am going to describe is technically a path tracer, and a fairly general one. While the code will be pretty simple (let the computer do the work!) I think you’ll be very happy with the images you can make.

I’ll take you through writing a ray tracer in the order I do it, along with some debugging tips. By the end, you will have a ray tracer that produces some great images. You should be able to do this in a weekend. If you take longer, don’t worry about it. I use C++ as the driving language, but you don’t need to.1 However, I suggest you do, because it’s fast, portable, and most production movie and video game renderers are written in C++. Note that I avoid most “modern features” of C++, but inheritance and operator overloading are too useful for ray tracers to pass on.2

I do not provide the code online, but the code is real and I show all of it except for a few straightforward operators in the vec3 class. I am a big believer in typing in code to learn it, but when code is available I use it, so I only practice what I preach when the code is not available. So don’t ask! I have left that last part in because it is funny what a 180 I have done. Several readers ended up with subtle errors that were helped when we compared code. So please do type in the code, but you can find the finished source for each book in the RayTracing project on GitHub.

A note on the implementing code for these books — our philosophy for the included code prioritizes the following goals:

  • The code should implement the concepts covered in the books.
  • We use C++, but as simple as possible. Our programming style is very C-like, but we take advantage of modern features where it makes the code easier to use or understand.
  • Our coding style continues the style established from the original books as much as possible, for continuity.
  • Line length is kept to 96 characters per line, to keep lines consistent between the codebase and code listings in the books.3

The code thus provides a baseline implementation, with tons of improvements left for the reader to enjoy. There are endless ways one can optimize and modernize the code; we prioritize the simple solution.

We assume a little bit of familiarity with vectors (like dot product and vector addition). If you don’t know that, do a little review. If you need that review, or to learn it for the first time, check out the online Graphics Codex by Morgan McGuire, Fundamentals of Computer Graphics by Steve Marschner and Peter Shirley, or Computer Graphics: Principles and Practice by J.D. Foley and Andy Van Dam.

See the project README file for information about this project, the repository on GitHub, directory structure, building & running, and how to make or reference corrections and contributions.

See our Further Reading wiki page for additional project related resources.

These books have been formatted to print well directly from your browser. We also include PDFs of each book with each release, in the “Assets” section.

If you want to communicate with us, feel free to send us an email at:

Finally, if you run into problems with your implementation, have general questions, or would like to share your own ideas or work, see the GitHub Discussions forum on the GitHub project.

Thanks to everyone who lent a hand on this project. You can find them in the acknowledgments section at the end of this book.

Let’s get on with it!


  1. This is where Rust comes into play: a fast, reliable language with better ergonomics. This is an unbiased opinion, of course *cough*.

  2. Inheritance is not supported in Rust. In many places simple composition and traits will do the trick.

  3. rustfmt will be used instead; it ships with the standard Rust toolchain and is integrated with most IDEs.

The PPM Image Format

Whenever you start a renderer, you need a way to see an image. The most straightforward way is to write it to a file. The catch is, there are so many formats. Many of those are complex. I always start with a plain text ppm file. Here’s a nice description from Wikipedia:

Figure 1: PPM Example
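
The figure isn't reproduced here, but a minimal plain-text PPM in the spirit of the Wikipedia example looks like this. The header gives the format (P3 means ASCII pixel data), the image width and height, and the maximum color value; one RGB triplet per pixel follows (this 3×2 image is red, green, blue across the top and yellow, white, black across the bottom):

P3
3 2
255
255 0 0     0 255 0      0 0 255
255 255 0   255 255 255  0 0 0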


Let’s make some C++ code to output such a thing:1

fn main() {
    // Image

    const IMAGE_WIDTH: u32 = 256;
    const IMAGE_HEIGHT: u32 = 256;

    // Render

    println!("P3");
    println!("{IMAGE_WIDTH} {IMAGE_HEIGHT}");
    println!("255");

    for j in 0..IMAGE_HEIGHT {
        for i in 0..IMAGE_WIDTH {
            let r = i as f64 / (IMAGE_WIDTH - 1) as f64;
            let g = j as f64 / (IMAGE_HEIGHT - 1) as f64;
            let b = 0.0;

            let ir = (255.999 * r) as i32;
            let ig = (255.999 * g) as i32;
            let ib = (255.999 * b) as i32;

            println!("{ir} {ig} {ib}");
        }
    }
}

Listing 1: [main.rs] Creating your first image
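
If you're following along in Rust, this listing can live in a fresh Cargo binary package (an assumed setup; the package name code matches the use code::... imports that appear in later listings):

cargo new code
cd code
# replace the contents of src/main.rs with the listing above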


There are some things to note in this code:

  1. The pixels are written out in rows.
  2. Every row of pixels is written out left to right.
  3. These rows are written out from top to bottom.
  4. By convention, each of the red/green/blue components is represented internally by a real-valued variable that ranges from 0.0 to 1.0. These must be scaled to integer values between 0 and 255 before we print them out.
  5. Red goes from fully off (black) to fully on (bright red) from left to right, and green goes from fully off at the top (black) to fully on at the bottom (bright green). Adding red and green light together makes yellow, so we should expect the bottom right corner to be yellow.

  1. It is Rust code of course. This won't be annotated anymore.

Creating an Image File

Because the file is written to the standard output stream, you'll need to redirect it to an image file. Typically this is done from the command-line by using the > redirection operator.

On Windows, you'd get the debug build from CMake running this command:1

cargo b

Then run your newly-built program like so:2

cargo r > image.ppm

Later, it will be better to run optimized builds for speed. In that case, you would build like this:

cargo b -r

and would run the optimized program like this:

cargo r -r > image.ppm

The examples above assume that you are building with CMake, using the same approach as the CMakeLists.txt file in the included source. Use whatever build environment (and language) you're most comfortable with.

On Mac or Linux, release build, you would launch the program like this:3

cargo r -r > image.ppm

Complete building and running instructions can be found in the project README.

Opening the output file (in ToyViewer on my Mac, but try it in your favorite image viewer and Google “ppm viewer” if your viewer doesn’t support it) shows this result:

Image 1: First PPM image


Hooray! This is the graphics “hello world”. If your image doesn’t look like that, open the output file in a text editor and see what it looks like. It should start something like this:

P3
256 256
255
0 0 0
1 0 0
2 0 0
3 0 0
4 0 0
5 0 0
6 0 0
7 0 0
8 0 0
9 0 0
10 0 0
11 0 0
12 0 0
...

Listing 2: First image output


If your PPM file doesn't look like this, then double-check your formatting code. If it does look like this but fails to render, then you may have line-ending differences or something similar that is confusing your image viewer. To help debug this, you can find a file test.ppm in the images directory of the GitHub project. This should help to ensure that your viewer can handle the PPM format, and serves as a comparison against your generated PPM file.

Some readers have reported problems viewing their generated files on Windows. In this case, the problem is often that the PPM is written out as UTF-16, often from PowerShell. If you run into this problem, see Discussion 1114 for help with this issue.

If everything displays correctly, then you're pretty much done with system and IDE issues — everything in the remainder of this series uses this same simple mechanism for generated rendered images.

If you want to produce other image formats, I am a fan of stb_image.h, a header-only image library available on GitHub at https://github.com/nothings/stb.4


  1. We are not using CMake; we are relying on the standard cargo build tool.

  2. You can skip the building step since it is part of the run command (or r for short).

  3. It is the same as for Windows.

  4. A Rust crate alternative: https://crates.io/crates/stb_image.

Adding a Progress Indicator

Before we continue, let's add a progress indicator to our output. This is a handy way to track the progress of a long render, and also to possibly identify a run that's stalled out due to an infinite loop or other problem.

Our program outputs the image to the standard output stream (std::cout), so leave that alone and instead write to the logging output stream (std::clog):1

diff --git a/src/main.rs b/src/main.rs
index af636bc..00cad27 100644
--- a/src/main.rs
+++ b/src/main.rs
@@ -1,26 +1,29 @@
 fn main() {
     // Image
 
     const IMAGE_WIDTH: u32 = 256;
     const IMAGE_HEIGHT: u32 = 256;
 
     // Render
 
+    env_logger::init();
     println!("P3");
     println!("{IMAGE_WIDTH} {IMAGE_HEIGHT}");
     println!("255");
 
     for j in 0..IMAGE_HEIGHT {
+        log::info!("Scanlines remaining: {}", IMAGE_HEIGHT - j);
         for i in 0..IMAGE_WIDTH {
             let r = i as f64 / (IMAGE_WIDTH - 1) as f64;
             let g = j as f64 / (IMAGE_HEIGHT - 1) as f64;
             let b = 0.0;
 
             let ir = (255.999 * r) as i32;
             let ig = (255.999 * g) as i32;
             let ib = (255.999 * b) as i32;
 
             println!("{ir} {ig} {ib}");
         }
     }
+    log::info!("Done.");
 }

Listing 3: [main.rs] Main render loop with progress reporting


Now when running, you'll see a running count of the number of scanlines remaining. Hopefully this runs so fast that you don't even see it! Don't worry — you'll have lots of time in the future to watch a slowly updating progress line as we expand our ray tracer.
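
The diff above calls env_logger::init() and log::info!, so both crates have to be declared as dependencies. A minimal Cargo.toml sketch (the version numbers are assumptions; pick whatever is current):

[dependencies]
log = "0.4"
env_logger = "0.11"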


  1. The log crate with the env_logger implementation is a good alternative to std::clog. Run RUST_LOG=info cargo r > image.ppm for the log output.

The vec3 Class 1

Almost all graphics programs have some class(es) for storing geometric vectors and colors. In many systems these vectors are 4D (3D position plus a homogeneous coordinate for geometry, or RGB plus an alpha transparency component for colors). For our purposes, three coordinates suffice. We’ll use the same class vec3 for colors, locations, directions, offsets, whatever. Some people don’t like this because it doesn’t prevent you from doing something silly, like subtracting a position from a color. They have a good point, but we’re going to always take the “less code” route when not obviously wrong. In spite of this, we do declare two aliases for vec3: point3 and color. Since these two types are just aliases for vec3, you won't get warnings if you pass a color to a function expecting a point3, and nothing is stopping you from adding a point3 to a color, but it makes the code a little bit easier to read and to understand.

We define the vec3 class in the top half of a new vec3.h header file, and define a set of useful vector utility functions in the bottom half:

use std::{
    fmt::Display,
    ops::{Add, AddAssign, Div, DivAssign, Index, IndexMut, Mul, MulAssign, Neg, Sub},
};

#[derive(Debug, Default, Clone, Copy)]
pub struct Vec3 {
    pub e: [f64; 3],
}

pub type Point3 = Vec3;

impl Vec3 {
    pub fn new(e0: f64, e1: f64, e2: f64) -> Self {
        Self { e: [e0, e1, e2] }
    }

    pub fn x(&self) -> f64 {
        self.e[0]
    }

    pub fn y(&self) -> f64 {
        self.e[1]
    }

    pub fn z(&self) -> f64 {
        self.e[2]
    }

    pub fn length(&self) -> f64 {
        f64::sqrt(self.length_squared())
    }

    pub fn length_squared(&self) -> f64 {
        self.e[0] * self.e[0] + self.e[1] * self.e[1] + self.e[2] * self.e[2]
    }
}

impl Neg for Vec3 {
    type Output = Self;

    fn neg(self) -> Self::Output {
        Self::Output {
            e: self.e.map(|e| -e),
        }
    }
}

impl Index<usize> for Vec3 {
    type Output = f64;

    fn index(&self, index: usize) -> &Self::Output {
        &self.e[index]
    }
}

impl IndexMut<usize> for Vec3 {
    fn index_mut(&mut self, index: usize) -> &mut Self::Output {
        &mut self.e[index]
    }
}

impl AddAssign for Vec3 {
    fn add_assign(&mut self, rhs: Self) {
        self.e[0] += rhs.e[0];
        self.e[1] += rhs.e[1];
        self.e[2] += rhs.e[2];
    }
}

impl MulAssign<f64> for Vec3 {
    fn mul_assign(&mut self, rhs: f64) {
        self.e[0] *= rhs;
        self.e[1] *= rhs;
        self.e[2] *= rhs;
    }
}

impl DivAssign<f64> for Vec3 {
    fn div_assign(&mut self, rhs: f64) {
        self.mul_assign(1.0 / rhs);
    }
}

impl Display for Vec3 {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        write!(f, "{} {} {}", self.e[0], self.e[1], self.e[2])
    }
}

impl Add for Vec3 {
    type Output = Self;

    fn add(self, rhs: Self) -> Self::Output {
        Self::Output {
            e: [
                self.e[0] + rhs.e[0],
                self.e[1] + rhs.e[1],
                self.e[2] + rhs.e[2],
            ],
        }
    }
}

impl Sub for Vec3 {
    type Output = Self;

    fn sub(self, rhs: Self) -> Self::Output {
        Self::Output {
            e: [
                self.e[0] - rhs.e[0],
                self.e[1] - rhs.e[1],
                self.e[2] - rhs.e[2],
            ],
        }
    }
}

impl Mul for Vec3 {
    type Output = Self;

    fn mul(self, rhs: Self) -> Self::Output {
        Self::Output {
            e: [
                self.e[0] * rhs.e[0],
                self.e[1] * rhs.e[1],
                self.e[2] * rhs.e[2],
            ],
        }
    }
}

impl Mul<f64> for Vec3 {
    type Output = Self;

    fn mul(self, rhs: f64) -> Self::Output {
        Self::Output {
            e: [self.e[0] * rhs, self.e[1] * rhs, self.e[2] * rhs],
        }
    }
}

impl Mul<Vec3> for f64 {
    type Output = Vec3;

    fn mul(self, rhs: Vec3) -> Self::Output {
        rhs.mul(self)
    }
}

impl Div<f64> for Vec3 {
    type Output = Self;

    fn div(self, rhs: f64) -> Self::Output {
        self * (1.0 / rhs)
    }
}

#[inline]
pub fn dot(u: Vec3, v: Vec3) -> f64 {
    u.e[0] * v.e[0] + u.e[1] * v.e[1] + u.e[2] * v.e[2]
}

#[inline]
pub fn cross(u: Vec3, v: Vec3) -> Vec3 {
    Vec3::new(
        u.e[1] * v.e[2] - u.e[2] * v.e[1],
        u.e[2] * v.e[0] - u.e[0] * v.e[2],
        u.e[0] * v.e[1] - u.e[1] * v.e[0],
    )
}

#[inline]
pub fn unit_vector(v: Vec3) -> Vec3 {
    v / v.length()
}

Listing 4: [vec3.rs] vec3 definitions and helper functions
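
A quick way to sanity-check these utilities is a small test module (a hypothetical addition to vec3.rs, not part of the book's code), run with cargo test:

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn vec3_basics() {
        let u = Vec3::new(1.0, 2.0, 3.0);
        let v = Vec3::new(4.0, 5.0, 6.0);
        assert_eq!(dot(u, v), 32.0); // 1*4 + 2*5 + 3*6
        let w = cross(u, v);         // (2*6 - 3*5, 3*4 - 1*6, 1*5 - 2*4)
        assert_eq!((w.x(), w.y(), w.z()), (-3.0, 6.0, -3.0));
        assert!((unit_vector(v).length() - 1.0).abs() < 1e-12);
    }
}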


We use double here, but some ray tracers use float. double has greater precision and range, but is twice the size compared to float. This increase in size may be important if you're programming in limited memory conditions (such as hardware shaders). Either one is fine — follow your own tastes.
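
If you want to experiment with single precision in this Rust port, one low-effort option (an assumption on my part; the book's code hard-codes f64) is a crate-wide alias used throughout vec3.rs:

pub type Float = f64; // swap in f32 to halve the storage per component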


  1. There are no classes in Rust. They are replaced with structs.

Color Utility Functions

Using our new vec3 class, we'll create a new color.h header file and define a utility function that writes a single pixel's color out to the standard output stream.

use crate::vec3::Vec3;

pub type Color = Vec3;

pub fn write_color(mut out: impl std::io::Write, pixel_color: Color) -> std::io::Result<()> {
    let r = pixel_color.x();
    let g = pixel_color.y();
    let b = pixel_color.z();

    let rbyte = (255.999 * r) as i32;
    let gbyte = (255.999 * g) as i32;
    let bbyte = (255.999 * b) as i32;

    writeln!(out, "{rbyte} {gbyte} {bbyte}")
}

Listing 5: [color.rs] color utility functions
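
For the use code::... imports in the next diff to resolve, the shared modules sit in a library crate alongside main.rs. A sketch of the assumed src/lib.rs at this point (module names inferred from the import paths):

pub mod color;
pub mod vec3;
// later chapters add: ray, hittable, sphere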


Now we can change our main to use both of these:

diff --git a/src/main.rs b/src/main.rs
index 00cad27..bb37ee6 100644
--- a/src/main.rs
+++ b/src/main.rs
@@ -1,29 +1,30 @@
-fn main() {
+use code::color::{Color, write_color};
+
+fn main() -> Result<(), Box<dyn std::error::Error>> {
     // Image
 
     const IMAGE_WIDTH: u32 = 256;
     const IMAGE_HEIGHT: u32 = 256;
 
     // Render
 
     env_logger::init();
     println!("P3");
     println!("{IMAGE_WIDTH} {IMAGE_HEIGHT}");
     println!("255");
 
     for j in 0..IMAGE_HEIGHT {
         log::info!("Scanlines remaining: {}", IMAGE_HEIGHT - j);
         for i in 0..IMAGE_WIDTH {
-            let r = i as f64 / (IMAGE_WIDTH - 1) as f64;
-            let g = j as f64 / (IMAGE_HEIGHT - 1) as f64;
-            let b = 0.0;
-
-            let ir = (255.999 * r) as i32;
-            let ig = (255.999 * g) as i32;
-            let ib = (255.999 * b) as i32;
-
-            println!("{ir} {ig} {ib}");
+            let pixel_color = Color::new(
+                i as f64 / (IMAGE_WIDTH - 1) as f64,
+                j as f64 / (IMAGE_HEIGHT - 1) as f64,
+                0.0,
+            );
+            write_color(std::io::stdout(), pixel_color)?;
         }
     }
     log::info!("Done.");
+
+    Ok(())
 }

Listing 6: [main.rs] Final code for the first PPM image


And you should get the exact same picture as before.

The ray Class

The one thing that all ray tracers have is a ray class and a computation of what color is seen along a ray. Let’s think of a ray as a function \( \mathbf{P} (t) = \mathbf{A} + t \mathbf{b} \). Here \( \mathbf{P} \) is a 3D position along a line in 3D. \( \mathbf{A} \) is the ray origin and \( \mathbf{b} \) is the ray direction. The ray parameter \( t \) is a real number (double in the code). Plug in a different \( t \) and \( \mathbf{P} (t) \) moves the point along the ray. Add in negative \( t \) values and you can go anywhere on the 3D line. For positive \( t \), you get only the parts in front of \( \mathbf{A} \), and this is what is often called a half-line or a ray.

Figure 2: Linear interpolation


We can represent the idea of a ray as a class, and represent the function \( \mathbf{P} (t) \) as a function that we'll call ray::at(t):

use crate::vec3::{Point3, Vec3};

#[derive(Debug, Default, Clone, Copy)]
pub struct Ray {
    origin: Point3,
    direction: Vec3,
}

impl Ray {
    pub fn new(origin: Point3, direction: Vec3) -> Self {
        Self { origin, direction }
    }

    pub fn origin(&self) -> Point3 {
        self.origin
    }

    pub fn direction(&self) -> Vec3 {
        self.direction
    }

    pub fn at(&self, t: f64) -> Point3 {
        self.origin + t * self.direction
    }
}

Listing 7: [ray.rs] The ray class
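
A quick sanity check of at() (a hypothetical snippet, not from the book's code):

let r = Ray::new(Point3::new(2.0, 3.0, 4.0), Vec3::new(1.0, 0.0, 0.0));
assert_eq!(r.at(2.0).x(), 4.0);  // 2 + 2*1: two units along the ray
assert_eq!(r.at(-1.0).x(), 1.0); // a negative t walks behind the origin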


(For those unfamiliar with C++, the functions ray::origin() and ray::direction() both return an immutable reference to their members. Callers can either just use the reference directly, or make a mutable copy depending on their needs.) 1


  1. The careful reader may have noticed that, in the Rust approach, both the Vec3 and Ray structs implement Copy. Returning copies this way is generally more idiomatic than returning a reference and cloning it whenever a mutable copy is needed.

Sending Rays Into the Scene

Now we are ready to turn the corner and make a ray tracer. At its core, a ray tracer sends rays through pixels and computes the color seen in the direction of those rays. The involved steps are

  1. Calculate the ray from the “eye” through the pixel,
  2. Determine which objects the ray intersects, and
  3. Compute a color for the closest intersection point.

When first developing a ray tracer, I always do a simple camera for getting the code up and running.

I’ve often gotten into trouble using square images for debugging because I transpose 𝑥 and 𝑦 too often, so we’ll use a non-square image. A square image has a 1∶1 aspect ratio, because its width is the same as its height. Since we want a non-square image, we'll choose 16∶9 because it's so common. A 16∶9 aspect ratio means that the ratio of image width to image height is 16∶9. Put another way, given an image with a 16∶9 aspect ratio,

\[ width\,/\,height=16\,/\,9=1.7778 \]

For a practical example, an image 800 pixels wide by 400 pixels high has a 2∶1 aspect ratio.

The image's aspect ratio can be determined from the ratio of its width to its height. However, since we have a given aspect ratio in mind, it's easier to set the image's width and the aspect ratio, and then use these to calculate its height. This way, we can scale the image up or down by changing the image width, and it won't throw off our desired aspect ratio. We do have to make sure that when we solve for the image height the resulting height is at least 1.
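
For example, with the 400-pixel width and the ideal 16∶9 ratio used below, the arithmetic works out to

\[ height = \lfloor 400\,/\,(16/9) \rfloor = \lfloor 225.0 \rfloor = 225, \]

so the rendered image will be 400×225.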

In addition to setting up the pixel dimensions for the rendered image, we also need to set up a virtual viewport through which to pass our scene rays. The viewport is a virtual rectangle in the 3D world that contains the grid of image pixel locations. If pixels are spaced the same distance horizontally as they are vertically, the viewport that bounds them will have the same aspect ratio as the rendered image. The distance between two adjacent pixels is called the pixel spacing, and square pixels is the standard.

To start things off, we'll choose an arbitrary viewport height of 2.0, and scale the viewport width to give us the desired aspect ratio. Here's a snippet of what this code will look like:

use code::color::{Color, write_color};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Image

    const ASPECT_RATIO: f64 = 16.0 / 9.0;
    const IMAGE_WIDTH: i32 = 400;

    // Calculate the image height, and ensure that it's at least 1.
    const IMAGE_HEIGHT: i32 = {
        let image_height = (IMAGE_WIDTH as f64 / ASPECT_RATIO) as i32;
        if image_height < 1 { 1 } else { image_height }
    };

    // Viewport widths less than one are ok since they are real valued.
    let viewport_height = 2.0;
    let viewport_width = viewport_height * (IMAGE_WIDTH as f64) / (IMAGE_HEIGHT as f64);

    // Render

    env_logger::init();
    println!("P3");
    println!("{IMAGE_WIDTH} {IMAGE_HEIGHT}");
    println!("255");

    for j in 0..IMAGE_HEIGHT {
        log::info!("Scanlines remaining: {}", IMAGE_HEIGHT - j);
        for i in 0..IMAGE_WIDTH {
            let pixel_color = Color::new(
                i as f64 / (IMAGE_WIDTH - 1) as f64,
                j as f64 / (IMAGE_HEIGHT - 1) as f64,
                0.0,
            );
            write_color(std::io::stdout(), pixel_color)?;
        }
    }
    log::info!("Done.");

    Ok(())
}

Listing 8: Rendered image setup


If you're wondering why we don't just use aspect_ratio when computing viewport_width, it's because the value set to aspect_ratio is the ideal ratio; it may not be the actual ratio between image_width and image_height. If image_height were allowed to be real valued—rather than just an integer—then it would be fine to use aspect_ratio. But the actual ratio between image_width and image_height can vary based on two parts of the code. First, image_height is rounded down to the nearest integer, which can increase the ratio. Second, we don't allow image_height to be less than one, which can also change the actual aspect ratio.

Note that aspect_ratio is an ideal ratio, which we approximate as best as possible with the integer-based ratio of image width over image height. In order for our viewport proportions to exactly match our image proportions, we use the calculated image aspect ratio to determine our final viewport width.

Next we will define the camera center: a point in 3D space from which all scene rays will originate (this is also commonly referred to as the eye point). The vector from the camera center to the viewport center will be orthogonal to the viewport. We'll initially set the distance between the viewport and the camera center point to be one unit. This distance is often referred to as the focal length.

For simplicity we'll start with the camera center at \( (0,0,0) \). We'll also have the y-axis go up, the x-axis to the right, and the negative z-axis pointing in the viewing direction. (This is commonly referred to as right-handed coordinates.)

Figure 3: Camera geometry


Now the inevitable tricky part. While our 3D space has the conventions above, this conflicts with our image coordinates, where we want to have the zeroth pixel in the top-left and work our way down to the last pixel at the bottom right. This means that our image coordinate Y-axis is inverted: Y increases going down the image.

As we scan our image, we will start at the upper left pixel (pixel \( 0,0 \)), scan left-to-right across each row, and then scan row-by-row, top-to-bottom. To help navigate the pixel grid, we'll use a vector from the left edge to the right edge (\( \mathbf{V_u} \)), and a vector from the upper edge to the lower edge (\( \mathbf{V_v} \)).

Our pixel grid will be inset from the viewport edges by half the pixel-to-pixel distance. This way, our viewport area is evenly divided into width × height identical regions. Here's what our viewport and pixel grid look like:

Figure 4: Viewport and pixel grid


In this figure, we have the viewport, the pixel grid for a 7×5 resolution image, the viewport upper left corner \( \mathbf{Q} \), the pixel \( \mathbf{P_{0,0}} \) location, the viewport vector \( \mathbf{V_u} \) (viewport_u), the viewport vector \( \mathbf{V_v} \) (viewport_v), and the pixel delta vectors \( \mathbf{\Delta u} \) and \( \mathbf{\Delta v} \).

Drawing from all of this, here's the code that implements the camera. We'll stub in a function ray_color(const ray& r) that returns the color for a given scene ray — which we'll set to always return black for now.

diff --git a/src/main.rs b/src/main.rs
index bb37ee6..8104ae8 100644
--- a/src/main.rs
+++ b/src/main.rs
@@ -1,30 +1,65 @@
-use code::color::{Color, write_color};
+use code::{
+    color::{Color, write_color},
+    ray::Ray,
+    vec3::{Point3, Vec3},
+};
+
+fn ray_color(r: Ray) -> Color {
+    Color::new(0.0, 0.0, 0.0)
+}
 
 fn main() -> Result<(), Box<dyn std::error::Error>> {
     // Image
 
-    const IMAGE_WIDTH: u32 = 256;
-    const IMAGE_HEIGHT: u32 = 256;
+    const ASPECT_RATIO: f64 = 16.0 / 9.0;
+    const IMAGE_WIDTH: i32 = 400;
+
+    // Calculate the image height, and ensure that it's at least 1.
+    const IMAGE_HEIGHT: i32 = {
+        let image_height = (IMAGE_WIDTH as f64 / ASPECT_RATIO) as i32;
+        if image_height < 1 { 1 } else { image_height }
+    };
+
+    // Camera
+
+    let focal_length = 1.0;
+    let viewport_height = 2.0;
+    let viewport_width = viewport_height * (IMAGE_WIDTH as f64) / (IMAGE_HEIGHT as f64);
+    let camera_center = Point3::new(0.0, 0.0, 0.0);
+
+    // Calculate the vectors across the horizontal and down the vertical viewport edges.
+    let viewport_u = Vec3::new(viewport_width, 0.0, 0.0);
+    let viewport_v = Vec3::new(0.0, -viewport_height, 0.0);
+
+    // Calculate the horizontal and vertical delta vectors from pixel to pixel.
+    let pixel_delta_u = viewport_u / IMAGE_WIDTH as f64;
+    let pixel_delta_v = viewport_v / IMAGE_HEIGHT as f64;
+
+    // Calculate the location of the upper left pixel.
+    let viewport_upper_left =
+        camera_center - Vec3::new(0.0, 0.0, focal_length) - viewport_u / 2.0 - viewport_v / 2.0;
+    let pixel00_loc = viewport_upper_left + 0.5 * (pixel_delta_u + pixel_delta_v);
 
     // Render
 
     env_logger::init();
     println!("P3");
     println!("{IMAGE_WIDTH} {IMAGE_HEIGHT}");
     println!("255");
 
     for j in 0..IMAGE_HEIGHT {
         log::info!("Scanlines remaining: {}", IMAGE_HEIGHT - j);
         for i in 0..IMAGE_WIDTH {
-            let pixel_color = Color::new(
-                i as f64 / (IMAGE_WIDTH - 1) as f64,
-                j as f64 / (IMAGE_HEIGHT - 1) as f64,
-                0.0,
-            );
+            let pixel_center =
+                pixel00_loc + (i as f64) * pixel_delta_u + (j as f64) * pixel_delta_v;
+            let ray_direction = pixel_center - camera_center;
+            let r = Ray::new(camera_center, ray_direction);
+
+            let pixel_color = ray_color(r);
             write_color(std::io::stdout(), pixel_color)?;
         }
     }
     log::info!("Done.");
 
     Ok(())
 }

Listing 9: [main.rs] Creating scene rays


Notice that in the code above, I didn't make ray_direction a unit vector, because I think not doing that makes for simpler and slightly faster code.

Now we'll fill in the ray_color(ray) function to implement a simple gradient. This function will linearly blend white and blue depending on the height of the \( y \) coordinate after scaling the ray direction to unit length (so \( -1.0 < y < 1.0 \)). Because we're looking at the 𝑦 height after normalizing the vector, you'll notice a horizontal gradient to the color in addition to the vertical gradient.

I'll use a standard graphics trick to linearly scale \( 0.0 \leq a \leq 1.0 \). When \( a = 1.0 \), I want blue. When \( a = 0.0 \), I want white. In between, I want a blend. This forms a “linear blend”, or “linear interpolation”. This is commonly referred to as a lerp between two values. A lerp is always of the form

\[ \mathit{blendedValue} = (1 - a) \cdot \mathit{startValue} + a \cdot \mathit{endValue}, \]

with \( a \) going from zero to one.
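
Written as a tiny helper, the lerp looks like this (an illustrative sketch; the code in the next listing simply inlines the expression in ray_color):

fn lerp(a: f64, start: Color, end: Color) -> Color {
    // blendedValue = (1 - a) * startValue + a * endValue
    (1.0 - a) * start + a * end
}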

Putting all this together, here's what we get:

diff --git a/src/main.rs b/src/main.rs
index 8104ae8..f31dc16 100644
--- a/src/main.rs
+++ b/src/main.rs
@@ -1,65 +1,67 @@
 use code::{
     color::{Color, write_color},
     ray::Ray,
-    vec3::{Point3, Vec3},
+    vec3::{Point3, Vec3, unit_vector},
 };
 
 fn ray_color(r: Ray) -> Color {
-    Color::new(0.0, 0.0, 0.0)
+    let unit_direction = unit_vector(r.direction());
+    let a = 0.5 * (unit_direction.y() + 1.0);
+    (1.0 - a) * Color::new(1.0, 1.0, 1.0) + a * Color::new(0.5, 0.7, 1.0)
 }
 
 fn main() -> Result<(), Box<dyn std::error::Error>> {
     // Image
 
     const ASPECT_RATIO: f64 = 16.0 / 9.0;
     const IMAGE_WIDTH: i32 = 400;
 
     // Calculate the image height, and ensure that it's at least 1.
     const IMAGE_HEIGHT: i32 = {
         let image_height = (IMAGE_WIDTH as f64 / ASPECT_RATIO) as i32;
         if image_height < 1 { 1 } else { image_height }
     };
 
     // Camera
 
     let focal_length = 1.0;
     let viewport_height = 2.0;
     let viewport_width = viewport_height * (IMAGE_WIDTH as f64) / (IMAGE_HEIGHT as f64);
     let camera_center = Point3::new(0.0, 0.0, 0.0);
 
     // Calculate the vectors across the horizontal and down the vertical viewport edges.
     let viewport_u = Vec3::new(viewport_width, 0.0, 0.0);
     let viewport_v = Vec3::new(0.0, -viewport_height, 0.0);
 
     // Calculate the horizontal and vertical delta vectors from pixel to pixel.
     let pixel_delta_u = viewport_u / IMAGE_WIDTH as f64;
     let pixel_delta_v = viewport_v / IMAGE_HEIGHT as f64;
 
     // Calculate the location of the upper left pixel.
     let viewport_upper_left =
         camera_center - Vec3::new(0.0, 0.0, focal_length) - viewport_u / 2.0 - viewport_v / 2.0;
     let pixel00_loc = viewport_upper_left + 0.5 * (pixel_delta_u + pixel_delta_v);
 
     // Render
 
     env_logger::init();
     println!("P3");
     println!("{IMAGE_WIDTH} {IMAGE_HEIGHT}");
     println!("255");
 
     for j in 0..IMAGE_HEIGHT {
         log::info!("Scanlines remaining: {}", IMAGE_HEIGHT - j);
         for i in 0..IMAGE_WIDTH {
             let pixel_center =
                 pixel00_loc + (i as f64) * pixel_delta_u + (j as f64) * pixel_delta_v;
             let ray_direction = pixel_center - camera_center;
             let r = Ray::new(camera_center, ray_direction);
 
             let pixel_color = ray_color(r);
             write_color(std::io::stdout(), pixel_color)?;
         }
     }
     log::info!("Done.");
 
     Ok(())
 }

Listing 10: [main.rs] Rendering a blue-to-white gradient


In our case this produces:

Image 2: A blue-to-white gradient depending on ray Y coordinate


Adding a Sphere

Let’s add a single object to our ray tracer. People often use spheres in ray tracers because calculating whether a ray hits a sphere is relatively simple.

Ray-Sphere Intersection

The equation for a sphere of radius \( r \) that is centered at the origin is an important mathematical equation:

\[ x^2 + y^2 + z^2 = r^2 \]

You can also think of this as saying that if a given point \( (x, y, z) \) is on the surface of the sphere, then \( x^2 + y^2 + z^2 = r^2 \). If a given point \( (x, y, z) \) is inside the sphere, then \( x^2 + y^2 + z^2 < r^2 \), and if a given point \( (x, y, z) \) is outside the sphere, then \( x^2 + y^2 + z^2 > r^2 \).

If we want to allow the sphere center to be at an arbitrary point \( (C_x, C_y, C_z) \), then the equation becomes a lot less nice:

\[ (C_x - x)^2 + (C_y - y)^2 + (C_z - z)^2 = r^2 \]

In graphics, you almost always want your formulas to be in terms of vectors so that all the \( x/y/z \) stuff can be simply represented using a vec3 class. You might note that the vector from point \( \mathbf{P} = (x, y, z) \) to center \( \mathbf{C} = (C_x, C_y, C_z) \) is \( (\mathbf{C} - \mathbf{P}) \).

If we use the definition of the dot product:

\[ (\mathbf{C} - \mathbf{P}) \cdot (\mathbf{C} - \mathbf{P}) = (C_x - x)^2 + (C_y - y)^2 + (C_z - z)^2 \]

Then we can rewrite the equation of the sphere in vector form as:

\[ (\mathbf{C} - \mathbf{P}) \cdot (\mathbf{C} - \mathbf{P}) = r^2 \]

We can read this as “any point \( \mathbf{P} \) that satisfies this equation is on the sphere”. We want to know if our ray \( \mathbf{P}(t) = \mathbf{Q} + t \mathbf{d} \) ever hits the sphere anywhere. If it does hit the sphere, there is some \( t \) for which \( \mathbf{P}(t) \) satisfies the sphere equation. So we are looking for any \( t \) where this is true:

\[ (\mathbf{C} - \mathbf{P}(t)) \cdot (\mathbf{C} - \mathbf{P}(t)) = r^2 \]

which can be found by replacing \( \mathbf{P}(t) \) with its expanded form:

\[ (\mathbf{C} - (\mathbf{Q} + t \mathbf{d})) \cdot (\mathbf{C} - (\mathbf{Q} + t \mathbf{d})) = r^2 \]

We have three vectors on the left dotted by three vectors on the right. If we solved for the full dot product we would get nine vectors. You can definitely go through and write everything out, but we don't need to work that hard. If you remember, we want to solve for \( t \), so we'll separate the terms based on whether there is a \( t \) or not:

\[ (-t \mathbf{d} + (\mathbf{C} - \mathbf{Q})) \cdot (-t \mathbf{d} + (\mathbf{C} - \mathbf{Q})) = r^2 \]

And now we follow the rules of vector algebra to distribute the dot product:

\[ t^2 \mathbf{d} \cdot \mathbf{d} - 2 t \mathbf{d} \cdot (\mathbf{C} - \mathbf{Q}) + (\mathbf{C} - \mathbf{Q}) \cdot (\mathbf{C} - \mathbf{Q}) = r^2 \]

Move the square of the radius over to the left hand side:

\[ t^2 \mathbf{d} \cdot \mathbf{d} - 2 t \mathbf{d} \cdot (\mathbf{C} - \mathbf{Q}) + (\mathbf{C} - \mathbf{Q}) \cdot (\mathbf{C} - \mathbf{Q}) - r^2 = 0 \]

It's hard to make out what exactly this equation is, but the vectors and \( r \) in that equation are all constant and known. Furthermore, the only vectors that we have are reduced to scalars by dot product. The only unknown is \( t \), and we have a \( t^2 \), which means that this equation is quadratic. You can solve for a quadratic equation \( ax^2 + bx + c = 0 \) by using the quadratic formula:

\[ \frac{-b \pm \sqrt{b^2 - 4ac}}{2a} \]

So solving for \( t \) in the ray-sphere intersection equation gives us these values for \( a \), \( b \), and \( c \):

\[ a = \mathbf{d} \cdot \mathbf{d} \] \[ b = -2 \mathbf{d} \cdot (\mathbf{C} - \mathbf{Q}) \] \[ c = (\mathbf{C} - \mathbf{Q}) \cdot (\mathbf{C} - \mathbf{Q}) - r^2 \]
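
As a concrete example (the numbers come from the scene used in the next section, so treat this purely as an illustration): for a sphere with \( \mathbf{C} = (0,0,-1) \) and \( r = 0.5 \), and the central ray with \( \mathbf{Q} = (0,0,0) \) and \( \mathbf{d} = (0,0,-1) \), we get \( a = 1 \), \( b = -2 \), and \( c = 0.75 \). The discriminant is \( (-2)^2 - 4 \cdot 1 \cdot 0.75 = 1 > 0 \), so \( t = (2 \pm 1)/2 \), giving hits at \( t = 0.5 \) (the front of the sphere) and \( t = 1.5 \) (the back).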

Using all of the above you can solve for \( t \), but there is a square root part that can be either positive (meaning two real solutions), negative (meaning no real solutions), or zero (meaning one real solution). In graphics, the algebra almost always relates very directly to the geometry. What we have is:

Figure 5: Ray-sphere intersection results


Creating Our First Raytraced Image

If we take that math and hard-code it into our program, we can test our code by placing a small sphere at \( −1 \) on the z-axis and then coloring red any pixel that intersects it.

diff --git a/src/main.rs b/src/main.rs
index f31dc16..e3d9091 100644
--- a/src/main.rs
+++ b/src/main.rs
@@ -1,67 +1,81 @@
 use code::{
     color::{Color, write_color},
     ray::Ray,
-    vec3::{Point3, Vec3, unit_vector},
+    vec3::{Point3, Vec3, dot, unit_vector},
 };
 
+fn hit_sphere(center: Point3, radius: f64, r: Ray) -> bool {
+    let oc = center - r.origin();
+    let a = dot(r.direction(), r.direction());
+    let b = -2.0 * dot(r.direction(), oc);
+    let c = dot(oc, oc) - radius * radius;
+    let discriminant = b * b - 4.0 * a * c;
+
+    discriminant >= 0.0
+}
+
 fn ray_color(r: Ray) -> Color {
+    if hit_sphere(Point3::new(0.0, 0.0, -1.0), 0.5, r) {
+        return Color::new(1.0, 0.0, 0.0);
+    }
+
     let unit_direction = unit_vector(r.direction());
     let a = 0.5 * (unit_direction.y() + 1.0);
     (1.0 - a) * Color::new(1.0, 1.0, 1.0) + a * Color::new(0.5, 0.7, 1.0)
 }
 
 fn main() -> Result<(), Box<dyn std::error::Error>> {
     // Image
 
     const ASPECT_RATIO: f64 = 16.0 / 9.0;
     const IMAGE_WIDTH: i32 = 400;
 
     // Calculate the image height, and ensure that it's at least 1.
     const IMAGE_HEIGHT: i32 = {
         let image_height = (IMAGE_WIDTH as f64 / ASPECT_RATIO) as i32;
         if image_height < 1 { 1 } else { image_height }
     };
 
     // Camera
 
     let focal_length = 1.0;
     let viewport_height = 2.0;
     let viewport_width = viewport_height * (IMAGE_WIDTH as f64) / (IMAGE_HEIGHT as f64);
     let camera_center = Point3::new(0.0, 0.0, 0.0);
 
     // Calculate the vectors across the horizontal and down the vertical viewport edges.
     let viewport_u = Vec3::new(viewport_width, 0.0, 0.0);
     let viewport_v = Vec3::new(0.0, -viewport_height, 0.0);
 
     // Calculate the horizontal and vertical delta vectors from pixel to pixel.
     let pixel_delta_u = viewport_u / IMAGE_WIDTH as f64;
     let pixel_delta_v = viewport_v / IMAGE_HEIGHT as f64;
 
     // Calculate the location of the upper left pixel.
     let viewport_upper_left =
         camera_center - Vec3::new(0.0, 0.0, focal_length) - viewport_u / 2.0 - viewport_v / 2.0;
     let pixel00_loc = viewport_upper_left + 0.5 * (pixel_delta_u + pixel_delta_v);
 
     // Render
 
     env_logger::init();
     println!("P3");
     println!("{IMAGE_WIDTH} {IMAGE_HEIGHT}");
     println!("255");
 
     for j in 0..IMAGE_HEIGHT {
         log::info!("Scanlines remaining: {}", IMAGE_HEIGHT - j);
         for i in 0..IMAGE_WIDTH {
             let pixel_center =
                 pixel00_loc + (i as f64) * pixel_delta_u + (j as f64) * pixel_delta_v;
             let ray_direction = pixel_center - camera_center;
             let r = Ray::new(camera_center, ray_direction);
 
             let pixel_color = ray_color(r);
             write_color(std::io::stdout(), pixel_color)?;
         }
     }
     log::info!("Done.");
 
     Ok(())
 }

Listing 11: [main.rs] Rendering a red sphere


What we get is this:

Image 3: A simple red sphere


Shading with Surface Normals

First, let’s get ourselves a surface normal so we can shade. This is a vector that is perpendicular to the surface at the point of intersection.

We have a key design decision to make for normal vectors in our code: whether normal vectors will have an arbitrary length, or will be normalized to unit length.

It is tempting to skip the expensive square root operation involved in normalizing the vector, in case it's not needed. In practice, however, there are three important observations. First, if a unit-length normal vector is ever required, then you might as well do it up front once, instead of over and over again “just in case” for every location where unit-length is required. Second, we do require unit-length normal vectors in several places. Third, if you require normal vectors to be unit length, then you can often efficiently generate that vector with an understanding of the specific geometry class, in its constructor, or in the hit() function. For example, sphere normals can be made unit length simply by dividing by the sphere radius, avoiding the square root entirely.

Given all of this, we will adopt the policy that all normal vectors will be of unit length.

For a sphere, the outward normal is in the direction of the hit point minus the center:

Figure 6: Sphere surface-normal geometry


On the earth, this means that the vector from the earth’s center to you points straight up. Let’s throw that into the code now, and shade it. We don’t have any lights or anything yet, so let’s just visualize the normals with a color map. A common trick used for visualizing normals (because it’s easy and somewhat intuitive to assume \( \mathbf{n} \) is a unit length vector — so each component is between \( −1 \) and \( 1 \)) is to map each component to the interval from \( 0 \) to \( 1 \), and then map \( (x, y, z) \) to \( (red, green, blue) \). For the normal, we need the hit point, not just whether we hit or not (which is all we're calculating at the moment). We only have one sphere in the scene, and it's directly in front of the camera, so we won't worry about negative values of \( t \) yet. We'll just assume the closest hit point (smallest \( t \)) is the one that we want. These changes in the code let us compute and visualize \( \mathbf{n} \):

diff --git a/src/main.rs b/src/main.rs
index e3d9091..405ca4b 100644
--- a/src/main.rs
+++ b/src/main.rs
@@ -1,81 +1,82 @@
 use code::{
     color::{Color, write_color},
     ray::Ray,
     vec3::{Point3, Vec3, dot, unit_vector},
 };
 
-fn hit_sphere(center: Point3, radius: f64, r: Ray) -> bool {
+fn hit_sphere(center: Point3, radius: f64, r: Ray) -> Option<f64> {
     let oc = center - r.origin();
     let a = dot(r.direction(), r.direction());
     let b = -2.0 * dot(r.direction(), oc);
     let c = dot(oc, oc) - radius * radius;
     let discriminant = b * b - 4.0 * a * c;
 
-    discriminant >= 0.0
+    (discriminant >= 0.0).then(|| (-b - f64::sqrt(discriminant)) / (2.0 * a))
 }
 
 fn ray_color(r: Ray) -> Color {
-    if hit_sphere(Point3::new(0.0, 0.0, -1.0), 0.5, r) {
-        return Color::new(1.0, 0.0, 0.0);
+    if let Some(t) = hit_sphere(Point3::new(0.0, 0.0, -1.0), 0.5, r) {
+        let n = unit_vector(r.at(t) - Vec3::new(0.0, 0.0, -1.0));
+        return 0.5 * Color::new(n.x() + 1.0, n.y() + 1.0, n.z() + 1.0);
     }
 
     let unit_direction = unit_vector(r.direction());
     let a = 0.5 * (unit_direction.y() + 1.0);
     (1.0 - a) * Color::new(1.0, 1.0, 1.0) + a * Color::new(0.5, 0.7, 1.0)
 }
 
 fn main() -> Result<(), Box<dyn std::error::Error>> {
     // Image
 
     const ASPECT_RATIO: f64 = 16.0 / 9.0;
     const IMAGE_WIDTH: i32 = 400;
 
     // Calculate the image height, and ensure that it's at least 1.
     const IMAGE_HEIGHT: i32 = {
         let image_height = (IMAGE_WIDTH as f64 / ASPECT_RATIO) as i32;
         if image_height < 1 { 1 } else { image_height }
     };
 
     // Camera
 
     let focal_length = 1.0;
     let viewport_height = 2.0;
     let viewport_width = viewport_height * (IMAGE_WIDTH as f64) / (IMAGE_HEIGHT as f64);
     let camera_center = Point3::new(0.0, 0.0, 0.0);
 
     // Calculate the vectors across the horizontal and down the vertical viewport edges.
     let viewport_u = Vec3::new(viewport_width, 0.0, 0.0);
     let viewport_v = Vec3::new(0.0, -viewport_height, 0.0);
 
     // Calculate the horizontal and vertical delta vectors from pixel to pixel.
     let pixel_delta_u = viewport_u / IMAGE_WIDTH as f64;
     let pixel_delta_v = viewport_v / IMAGE_HEIGHT as f64;
 
     // Calculate the location of the upper left pixel.
     let viewport_upper_left =
         camera_center - Vec3::new(0.0, 0.0, focal_length) - viewport_u / 2.0 - viewport_v / 2.0;
     let pixel00_loc = viewport_upper_left + 0.5 * (pixel_delta_u + pixel_delta_v);
 
     // Render
 
     env_logger::init();
     println!("P3");
     println!("{IMAGE_WIDTH} {IMAGE_HEIGHT}");
     println!("255");
 
     for j in 0..IMAGE_HEIGHT {
         log::info!("Scanlines remaining: {}", IMAGE_HEIGHT - j);
         for i in 0..IMAGE_WIDTH {
             let pixel_center =
                 pixel00_loc + (i as f64) * pixel_delta_u + (j as f64) * pixel_delta_v;
             let ray_direction = pixel_center - camera_center;
             let r = Ray::new(camera_center, ray_direction);
 
             let pixel_color = ray_color(r);
             write_color(std::io::stdout(), pixel_color)?;
         }
     }
     log::info!("Done.");
 
     Ok(())
 }

Listing 12: [main.rs] Rendering surface normals on a sphere


And that yields this picture:

Image 4: A sphere colored according to its normal vectors


Simplifying the Ray-Sphere Intersection Code

Let’s revisit the ray-sphere function:

use code::{
    color::{Color, write_color},
    ray::Ray,
    vec3::{Point3, Vec3, dot, unit_vector},
};

fn hit_sphere(center: Point3, radius: f64, r: Ray) -> Option<f64> {
    let oc = center - r.origin();
    let a = dot(r.direction(), r.direction());
    let b = -2.0 * dot(r.direction(), oc);
    let c = dot(oc, oc) - radius * radius;
    let discriminant = b * b - 4.0 * a * c;

    (discriminant >= 0.0).then(|| (-b - f64::sqrt(discriminant)) / (2.0 * a))
}

fn ray_color(r: Ray) -> Color {
    if let Some(t) = hit_sphere(Point3::new(0.0, 0.0, -1.0), 0.5, r) {
        let n = unit_vector(r.at(t) - Vec3::new(0.0, 0.0, -1.0));
        return 0.5 * Color::new(n.x() + 1.0, n.y() + 1.0, n.z() + 1.0);
    }

    let unit_direction = unit_vector(r.direction());
    let a = 0.5 * (unit_direction.y() + 1.0);
    (1.0 - a) * Color::new(1.0, 1.0, 1.0) + a * Color::new(0.5, 0.7, 1.0)
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Image

    const ASPECT_RATIO: f64 = 16.0 / 9.0;
    const IMAGE_WIDTH: i32 = 400;

    // Calculate the image height, and ensure that it's at least 1.
    const IMAGE_HEIGHT: i32 = {
        let image_height = (IMAGE_WIDTH as f64 / ASPECT_RATIO) as i32;
        if image_height < 1 { 1 } else { image_height }
    };

    // Camera

    let focal_length = 1.0;
    let viewport_height = 2.0;
    let viewport_width = viewport_height * (IMAGE_WIDTH as f64) / (IMAGE_HEIGHT as f64);
    let camera_center = Point3::new(0.0, 0.0, 0.0);

    // Calculate the vectors across the horizontal and down the vertical viewport edges.
    let viewport_u = Vec3::new(viewport_width, 0.0, 0.0);
    let viewport_v = Vec3::new(0.0, -viewport_height, 0.0);

    // Calculate the horizontal and vertical delta vectors from pixel to pixel.
    let pixel_delta_u = viewport_u / IMAGE_WIDTH as f64;
    let pixel_delta_v = viewport_v / IMAGE_HEIGHT as f64;

    // Calculate the location of the upper left pixel.
    let viewport_upper_left =
        camera_center - Vec3::new(0.0, 0.0, focal_length) - viewport_u / 2.0 - viewport_v / 2.0;
    let pixel00_loc = viewport_upper_left + 0.5 * (pixel_delta_u + pixel_delta_v);

    // Render

    env_logger::init();
    println!("P3");
    println!("{IMAGE_WIDTH} {IMAGE_HEIGHT}");
    println!("255");

    for j in 0..IMAGE_HEIGHT {
        log::info!("Scanlines remaining: {}", IMAGE_HEIGHT - j);
        for i in 0..IMAGE_WIDTH {
            let pixel_center =
                pixel00_loc + (i as f64) * pixel_delta_u + (j as f64) * pixel_delta_v;
            let ray_direction = pixel_center - camera_center;
            let r = Ray::new(camera_center, ray_direction);

            let pixel_color = ray_color(r);
            write_color(std::io::stdout(), pixel_color)?;
        }
    }
    log::info!("Done.");

    Ok(())
}

Listing 13: [main.rs] Ray-sphere intersection code (before)


First, recall that a vector dotted with itself is equal to the squared length of that vector.

Second, notice how the equation for \( b \) has a factor of negative two in it. Consider what happens to the quadratic equation if \( b = 2h \):

\[ \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}\] \[= \frac{-(-2h) \pm \sqrt{(-2h)^2 - 4ac}}{2a}\] \[= \frac{2h \pm 2 \sqrt{h^2 - ac}}{2a}\] \[= \frac{h \pm \sqrt{h^2 - ac}}{a}\]

This simplifies nicely, so we'll use it. So solving for \( h \):

\[b = -2 \mathbf{d} \cdot (\mathbf{C} - \mathbf{Q}) \] \[b = -2h \] \[h = \frac{b}{-2} = \mathbf{d} \cdot (\mathbf{C} - \mathbf{Q}) \]

Using these observations, we can now simplify the sphere-intersection code to this:

diff --git a/src/main.rs b/src/main.rs
index 405ca4b..1f26e3d 100644
--- a/src/main.rs
+++ b/src/main.rs
@@ -1,82 +1,82 @@
 use code::{
     color::{Color, write_color},
     ray::Ray,
     vec3::{Point3, Vec3, dot, unit_vector},
 };
 
 fn hit_sphere(center: Point3, radius: f64, r: Ray) -> Option<f64> {
     let oc = center - r.origin();
-    let a = dot(r.direction(), r.direction());
-    let b = -2.0 * dot(r.direction(), oc);
-    let c = dot(oc, oc) - radius * radius;
-    let discriminant = b * b - 4.0 * a * c;
+    let a = r.direction().length_squared();
+    let h = dot(r.direction(), oc);
+    let c = oc.length_squared() - radius * radius;
+    let discriminant = h * h - a * c;
 
-    (discriminant >= 0.0).then(|| (-b - f64::sqrt(discriminant)) / (2.0 * a))
+    (discriminant >= 0.0).then(|| (h - f64::sqrt(discriminant)) / a)
 }
 
 fn ray_color(r: Ray) -> Color {
     if let Some(t) = hit_sphere(Point3::new(0.0, 0.0, -1.0), 0.5, r) {
         let n = unit_vector(r.at(t) - Vec3::new(0.0, 0.0, -1.0));
         return 0.5 * Color::new(n.x() + 1.0, n.y() + 1.0, n.z() + 1.0);
     }
 
     let unit_direction = unit_vector(r.direction());
     let a = 0.5 * (unit_direction.y() + 1.0);
     (1.0 - a) * Color::new(1.0, 1.0, 1.0) + a * Color::new(0.5, 0.7, 1.0)
 }
 
 fn main() -> Result<(), Box<dyn std::error::Error>> {
     // Image
 
     const ASPECT_RATIO: f64 = 16.0 / 9.0;
     const IMAGE_WIDTH: i32 = 400;
 
     // Calculate the image height, and ensure that it's at least 1.
     const IMAGE_HEIGHT: i32 = {
         let image_height = (IMAGE_WIDTH as f64 / ASPECT_RATIO) as i32;
         if image_height < 1 { 1 } else { image_height }
     };
 
     // Camera
 
     let focal_length = 1.0;
     let viewport_height = 2.0;
     let viewport_width = viewport_height * (IMAGE_WIDTH as f64) / (IMAGE_HEIGHT as f64);
     let camera_center = Point3::new(0.0, 0.0, 0.0);
 
     // Calculate the vectors across the horizontal and down the vertical viewport edges.
     let viewport_u = Vec3::new(viewport_width, 0.0, 0.0);
     let viewport_v = Vec3::new(0.0, -viewport_height, 0.0);
 
     // Calculate the horizontal and vertical delta vectors from pixel to pixel.
     let pixel_delta_u = viewport_u / IMAGE_WIDTH as f64;
     let pixel_delta_v = viewport_v / IMAGE_HEIGHT as f64;
 
     // Calculate the location of the upper left pixel.
     let viewport_upper_left =
         camera_center - Vec3::new(0.0, 0.0, focal_length) - viewport_u / 2.0 - viewport_v / 2.0;
     let pixel00_loc = viewport_upper_left + 0.5 * (pixel_delta_u + pixel_delta_v);
 
     // Render
 
     env_logger::init();
     println!("P3");
     println!("{IMAGE_WIDTH} {IMAGE_HEIGHT}");
     println!("255");
 
     for j in 0..IMAGE_HEIGHT {
         log::info!("Scanlines remaining: {}", IMAGE_HEIGHT - j);
         for i in 0..IMAGE_WIDTH {
             let pixel_center =
                 pixel00_loc + (i as f64) * pixel_delta_u + (j as f64) * pixel_delta_v;
             let ray_direction = pixel_center - camera_center;
             let r = Ray::new(camera_center, ray_direction);
 
             let pixel_color = ray_color(r);
             write_color(std::io::stdout(), pixel_color)?;
         }
     }
     log::info!("Done.");
 
     Ok(())
 }

Listing 14: [main.rs] Ray-sphere intersection code (after)


An Abstraction for Hittable Objects

Now, how about more than one sphere? While it is tempting to have an array of spheres, a very clean solution is to make an “abstract class” for anything a ray might hit, and make both a sphere and a list of spheres just something that can be hit. What that class should be called is something of a quandary — calling it an “object” would be good if not for “object oriented” programming. “Surface” is often used, with the weakness being maybe we will want volumes (fog, clouds, stuff like that). “hittable” emphasizes the member function that unites them. I don’t love any of these, but we'll go with “hittable”.

This hittable abstract class will have a hit function that takes in a ray. 1 Most ray tracers have found it convenient to add a valid interval for hits \( t_{min} \) to \( t_{max} \), so the hit only “counts” if \( t_{min} < t < t_{max} \). For the initial rays this is positive \( t \), but as we will see, it can simplify our code to have an interval \( t_{min} \) to \( t_{max} \). One design question is whether to do things like compute the normal if we hit something. We might end up hitting something closer as we do our search, and we will only need the normal of the closest thing. I will go with the simple solution and compute a bundle of stuff I will store in some structure. Here’s the abstract class:

use crate::{
    ray::Ray,
    vec3::{Point3, Vec3},
};

#[derive(Debug, Default, Clone, Copy)]
pub struct HitRecord {
    pub p: Point3,
    pub normal: Vec3,
    pub t: f64,
}

pub trait Hittable {
    fn hit(&self, r: Ray, ray_tmin: f64, ray_tmax: f64) -> Option<HitRecord>;
}

Listing 15: [hittable.rs] The hittable class


And here’s the sphere:

use crate::{
    hittable::{HitRecord, Hittable},
    ray::Ray,
    vec3::{Point3, dot},
};

#[derive(Debug, Clone, Copy)]
pub struct Sphere {
    center: Point3,
    radius: f64,
}

impl Sphere {
    pub fn new(center: Point3, radius: f64) -> Self {
        Self {
            center,
            radius: f64::max(0.0, radius),
        }
    }
}

impl Hittable for Sphere {
    fn hit(&self, r: Ray, ray_tmin: f64, ray_tmax: f64) -> Option<HitRecord> {
        let oc = self.center - r.origin();
        let a = r.direction().length_squared();
        let h = dot(r.direction(), oc);
        let c = oc.length_squared() - self.radius * self.radius;

        let discriminant = h * h - a * c;
        if discriminant < 0.0 {
            return None;
        }

        let sqrtd = f64::sqrt(discriminant);

        // Find the nearest root that lies in the acceptable range.
        let mut root = (h - sqrtd) / a;
        if root <= ray_tmin || ray_tmax <= root {
            root = (h + sqrtd) / a;
            if root <= ray_tmin || ray_tmax <= root {
                return None;
            }
        }

        let t = root;
        let p = r.at(t);
        let rec = HitRecord {
            t,
            p,
            normal: (p - self.center) / self.radius,
        };

        Some(rec)
    }
}

Listing 16: [sphere.rs] The sphere class


(Note here that we use the C++ standard function std::fmax(), which returns the maximum of the two floating-point arguments. Similarly, we will later use std::fmin(), which returns the minimum of the two floating-point arguments.) 2


  1. A simple Rust trait will be used instead.

  2. The Rust standard library provides the functions f64::max() and f64::min().

Front Faces Versus Back Faces

The second design decision for normals is whether they should always point out. At present, the normal found will always be in the direction of the center to the intersection point (the normal points out). If the ray intersects the sphere from the outside, the normal points against the ray. If the ray intersects the sphere from the inside, the normal (which always points out) points with the ray. Alternatively, we can have the normal always point against the ray. If the ray is outside the sphere, the normal will point outward, but if the ray is inside the sphere, the normal will point inward.

Possible directions for sphere surface-normal geometry

Figure 7: Possible directions for sphere surface-normal geometry


We need to choose one of these possibilities because we will eventually want to determine which side of the surface that the ray is coming from. This is important for objects that are rendered differently on each side, like the text on a two-sided sheet of paper, or for objects that have an inside and an outside, like glass balls.

If we decide to have the normals always point out, then we will need to determine which side the ray is on when we color it. We can figure this out by comparing the ray with the normal. If the ray and the normal face in the same direction, the ray is inside the object; if the ray and the normal face in opposite directions, then the ray is outside the object. This can be determined by taking the dot product of the two vectors: if their dot product is positive, the ray is inside the sphere.

use crate::{
    ray::Ray,
    vec3::{Point3, Vec3, dot},
};

#[derive(Debug, Default, Clone, Copy)]
pub struct HitRecord {
    pub p: Point3,
    pub normal: Vec3,
    pub t: f64,
    pub front_face: bool,
}

impl HitRecord {
    pub fn set_face_normal(&mut self, r: Ray, outward_normal: Vec3) {
        // Sets the hit record normal vector.
        // NOTE: the parameter `outward_normal` is assumed to have unit length.

        let normal;
        let front_face;
        if dot(r.direction(), outward_normal) > 0.0 {
            // ray is inside the sphere
            normal = -outward_normal;
            front_face = false;
        } else {
            // ray is outside the sphere
            normal = outward_normal;
            front_face = true;
        }

        self.front_face = front_face;
        self.normal = normal;
    }
}

pub trait Hittable {
    fn hit(&self, r: Ray, ray_tmin: f64, ray_tmax: f64) -> Option<HitRecord>;
}

Listing 17: Comparing the ray and the normal


If we decide to have the normals always point against the ray, we won't be able to use the dot product to determine which side of the surface the ray is on. Instead, we would need to store that information:

use crate::{
    ray::Ray,
    vec3::{Point3, Vec3, dot},
};

#[derive(Debug, Default, Clone, Copy)]
pub struct HitRecord {
    pub p: Point3,
    pub normal: Vec3,
    pub t: f64,
    pub front_face: bool,
}

impl HitRecord {
    pub fn set_face_normal(&mut self, r: Ray, outward_normal: Vec3) {
        // Sets the hit record normal vector.
        // NOTE: the parameter `outward_normal` is assumed to have unit length.

        let normal;
        let front_face;
        if dot(r.direction(), outward_normal) > 0.0 {
            // ray is inside the sphere
            normal = -outward_normal;
            front_face = false;
        } else {
            // ray is outside the sphere
            normal = outward_normal;
            front_face = true;
        }

        self.front_face = front_face;
        self.normal = normal;
    }
}

pub trait Hittable {
    fn hit(&self, r: Ray, ray_tmin: f64, ray_tmax: f64) -> Option<HitRecord>;
}

Listing 18: Remembering the side of the surface


We can set things up so that normals always point “outward” from the surface, or always point against the incident ray. This decision is determined by whether you want to determine the side of the surface at the time of geometry intersection or at the time of coloring. In this book we have more material types than we have geometry types, so we'll go for less work and put the determination at geometry time. This is simply a matter of preference, and you'll see both implementations in the literature.
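
To make that trade-off concrete, here is a hypothetical snippet of coloring-time code (materials are only introduced later in the book, so shade() is purely illustrative) showing how shading can simply read the stored flag instead of re-deriving the side with a dot product:

use code::{color::Color, hittable::HitRecord};

// Purely illustrative: the side was already decided at geometry time,
// so coloring code just reads the flag.
fn shade(rec: &HitRecord) -> Color {
    if rec.front_face {
        // The ray hit the surface from outside.
        Color::new(1.0, 0.0, 0.0)
    } else {
        // The ray hit the surface from inside, e.g. within a glass ball.
        Color::new(0.0, 0.0, 1.0)
    }
}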

We add the front_face bool to the hit_record class. We'll also add a function to solve this calculation for us: set_face_normal(). For convenience we will assume that the vector passed to the new set_face_normal() function is of unit length. We could always normalize the parameter explicitly, but it's more efficient if the geometry code does this, as it's usually easier when you know more about the specific geometry.

diff --git a/src/hittable.rs b/src/hittable.rs
index b8a3fcf..8ced826 100644
--- a/src/hittable.rs
+++ b/src/hittable.rs
@@ -1,15 +1,30 @@
 use crate::{
     ray::Ray,
-    vec3::{Point3, Vec3},
+    vec3::{Point3, Vec3, dot},
 };
 
 #[derive(Debug, Default, Clone, Copy)]
 pub struct HitRecord {
     pub p: Point3,
     pub normal: Vec3,
     pub t: f64,
+    pub front_face: bool,
+}
+
+impl HitRecord {
+    pub fn set_face_normal(&mut self, r: Ray, outward_normal: Vec3) {
+        // Sets the hit record normal vector.
+        // NOTE: the parameter `outward_normal` is assumed to have unit length.
+
+        self.front_face = dot(r.direction(), outward_normal) < 0.0;
+        self.normal = if self.front_face {
+            outward_normal
+        } else {
+            -outward_normal
+        };
+    }
 }
 
 pub trait Hittable {
     fn hit(&self, r: Ray, ray_tmin: f64, ray_tmax: f64) -> Option<HitRecord>;
 }

Listing 19: [hittable.rs] Adding front-face tracking to hit_record


And then we add the surface side determination to the class:

diff --git a/src/sphere.rs b/src/sphere.rs
index aa651e9..86d3cbb 100644
--- a/src/sphere.rs
+++ b/src/sphere.rs
@@ -1,56 +1,57 @@
 use crate::{
     hittable::{HitRecord, Hittable},
     ray::Ray,
     vec3::{Point3, dot},
 };
 
 #[derive(Debug, Clone, Copy)]
 pub struct Sphere {
     center: Point3,
     radius: f64,
 }
 
 impl Sphere {
     pub fn new(center: Point3, radius: f64) -> Self {
         Self {
             center,
             radius: f64::max(0.0, radius),
         }
     }
 }
 
 impl Hittable for Sphere {
     fn hit(&self, r: Ray, ray_tmin: f64, ray_tmax: f64) -> Option<HitRecord> {
         let oc = self.center - r.origin();
         let a = r.direction().length_squared();
         let h = dot(r.direction(), oc);
         let c = oc.length_squared() - self.radius * self.radius;
 
         let discriminant = h * h - a * c;
         if discriminant < 0.0 {
             return None;
         }
 
         let sqrtd = f64::sqrt(discriminant);
 
         // Find the nearest root that lies in the acceptable range.
         let mut root = (h - sqrtd) / a;
         if root <= ray_tmin || ray_tmax <= root {
             root = (h + sqrtd) / a;
             if root <= ray_tmin || ray_tmax <= root {
                 return None;
             }
         }
 
         let t = root;
         let p = r.at(t);
-        let rec = HitRecord {
+        let mut rec = HitRecord {
             t,
             p,
-            normal: (p - self.center) / self.radius,
             ..Default::default()
         };
+        let outward_normal = (p - self.center) / self.radius;
+        rec.set_face_normal(r, outward_normal);
 
         Some(rec)
     }
 }

Listing 20: [sphere.rs] The sphere class with normal determination
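
A quick Rust aside: the ..Default::default() in this listing is struct update syntax. Every field that is not written explicitly is taken from a default-constructed HitRecord, so (ignoring the set_face_normal() call that follows) the construction above is equivalent to this sketch, assuming t and p are bound as in Sphere::hit:

// Equivalent to `HitRecord { t, p, ..Default::default() }`.
let mut rec = HitRecord::default();
rec.t = t;
rec.p = p;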


A List of Hittable Objects

We have a generic object called a hittable that the ray can intersect with. We now add a class that stores a list of hittables:

use std::rc::Rc;

use crate::{
    hittable::{HitRecord, Hittable},
    ray::Ray,
};

#[derive(Default)]
pub struct HittableList {
    pub objects: Vec<Rc<dyn Hittable>>,
}

impl HittableList {
    pub fn new() -> Self {
        Self::default()
    }

    pub fn clear(&mut self) {
        self.objects.clear();
    }

    pub fn add(&mut self, object: Rc<dyn Hittable>) {
        self.objects.push(object);
    }
}

impl Hittable for HittableList {
    fn hit(&self, r: Ray, ray_tmin: f64, ray_tmax: f64) -> Option<HitRecord> {
        self.objects
            .iter()
            .filter_map(|obj| obj.hit(r, ray_tmin, ray_tmax))
            .min_by(|a, b| a.t.partial_cmp(&b.t).expect("no NaN value"))
    }
}

Listing 21: [hittable_list.rs] The hittable_list class
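
Note that this iterator version tests every object against the full (ray_tmin, ray_tmax) interval and then takes the minimum t, whereas the C++ original shrinks the upper bound to the closest hit found so far, letting later objects reject early. A fold-based sketch of that shrinking variant (hit_closest() is hypothetical; the book's code keeps the filter_map version above) would look like:

use std::rc::Rc;

use crate::{
    hittable::{HitRecord, Hittable},
    ray::Ray,
};

fn hit_closest(
    objects: &[Rc<dyn Hittable>],
    r: Ray,
    ray_tmin: f64,
    ray_tmax: f64,
) -> Option<HitRecord> {
    objects.iter().fold(None, |closest: Option<HitRecord>, obj| {
        // Only accept hits strictly closer than the best one found so far.
        let t_max = closest.map_or(ray_tmax, |rec| rec.t);
        obj.hit(r, ray_tmin, t_max).or(closest)
    })
}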


Some New C++ Features 1

The hittable_list class code uses some C++ features that may trip you up if you're not normally a C++ programmer: vector, shared_ptr, and make_shared. 2

shared_ptr<type> is a pointer to some allocated type, with reference-counting semantics. Every time you assign its value to another shared pointer (usually with a simple assignment), the reference count is incremented. As shared pointers go out of scope (like at the end of a block or function), the reference count is decremented. Once the count goes to zero, the object is safely deleted. 3
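
The Rc type that our Rust code uses has exactly this counting behavior, and we can watch the count change directly. A small demonstration, separate from the renderer:

use std::rc::Rc;

fn main() {
    let a = Rc::new(0.37);
    assert_eq!(Rc::strong_count(&a), 1);
    {
        let b = Rc::clone(&a); // assignment-like clone: count goes to 2
        assert_eq!(Rc::strong_count(&b), 2);
    } // b goes out of scope: count drops back to 1
    assert_eq!(Rc::strong_count(&a), 1);
} // a is dropped: count reaches 0 and the value is freed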

Typically, a shared pointer is first initialized with a newly-allocated object, something like this:

#![allow(unused)]
fn main() {
let double_ptr: Rc<f64> = Rc::new(0.37);
let vec3_ptr: Rc<Vec3> = Rc::new(Vec3::new(1.414214, 2.718281, 1.618034));
let sphere_ptr: Rc<Sphere> = Rc::new(Sphere::new(Point3::new(0.0, 0.0, 0.0), 1.0));
}

Listing 22: An example allocation using shared_ptr


make_shared<thing>(thing_constructor_params ...) allocates a new instance of type thing, using the constructor parameters. It returns a shared_ptr<thing>. 4

Since the type can be automatically deduced by the return type of make_shared<type>(...), the above lines can be more simply expressed using C++'s auto type specifier: 5

#![allow(unused)]
fn main() {
let double_ptr = Rc::new(0.37);
let vec3_ptr = Rc::new(Vec3::new(1.414214, 2.718281, 1.618034));
let sphere_ptr = Rc::new(Sphere::new(Point3::new(0.0, 0.0, 0.0), 1.0));
}

Listing 23: An example allocation using shared_ptr with auto type


We'll use shared pointers in our code, because it allows multiple geometries to share a common instance (for example, a bunch of spheres that all use the same color material), and because it makes memory management automatic and easier to reason about.

std::shared_ptr is included with the <memory> header.6

The second C++ feature you may be unfamiliar with is std::vector. This is a generic array-like collection of an arbitrary type. Above, we use a collection of pointers to hittable. std::vector automatically grows as more values are added: objects.push_back(object) adds a value to the end of the std::vector member variable objects.
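
The Rust counterpart is Vec, which we already used as Vec<Rc<dyn Hittable>> in Listing 21. A minimal illustration of the same automatic growth, separate from the renderer:

fn main() {
    let mut objects: Vec<f64> = Vec::new();
    objects.push(0.37); // grows automatically, like objects.push_back(object)
    objects.push(1.414214);
    assert_eq!(objects.len(), 2);
}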

std::vector is included with the <vector> header. 7

Finally, the using statements in listing 21 tell the compiler that we'll be getting shared_ptr and make_shared from the std library, so we don't need to prefix these with std:: every time we reference them.


  1. This chapter can be safely skipped if the code from the last chapter is already clear.

  2. The Rust equivalents are Vec, Rc, and the reference-counting smart pointer's plain new method (Rc::new).

  3. Here we use Rc. In contrast to the C++ shared_ptr, the contained value of an Rc is immutable. Wrapping the value in a cell-based type such as RefCell would allow for interior mutability. However, in this case all objects are created on startup and do not change over the course of the program's lifetime, which is why a plain Rc is sufficient.

  4. Rust has no constructors; instead, it is conventional to build structs with a new method, with implementations of traits like Default or From, or with any other method that returns Self. So technically something like make_shared does not exist in Rust.

  5. The type annotations in the previous listing were not necessary; Rust's let declarations can infer the types in this case.

  6. Rc is found in std::rc::Rc.

  7. Vec is included in Rust's std prelude and therefore does not have to be imported to be used.

Common Constants and Utility Functions

We need some math constants that we conveniently define in their own header file. For now we only need infinity, but we will also throw our own definition of pi in there, which we will need later. We'll also throw common useful constants and future utility functions in here. This new header, rtweekend.h, will be our general main header file. 1

pub use log::*;

// Rust Std usings

pub use std::rc::Rc;

// Constants

pub const INFINITY: f64 = f64::INFINITY;
pub const PI: f64 = std::f64::consts::PI;

// Common Headers

pub use crate::{color::*, ray::*, vec3::*};

Listing 24: [prelude.rs] The rtweekend.h common header


Program files will include rtweekend.h first, so all other header files (where the bulk of our code will reside) can implicitly assume that rtweekend.h has already been included. Header files still need to explicitly include any other necessary header files. We'll make some updates with these assumptions in mind.

// nothing changes

Listing 25: [color.rs] Assume rtweekend.h inclusion for color.h 2


diff --git a/src/hittable.rs b/src/hittable.rs
index 8ced826..a7aab5a 100644
--- a/src/hittable.rs
+++ b/src/hittable.rs
@@ -1,30 +1,27 @@
-use crate::{
-    ray::Ray,
-    vec3::{Point3, Vec3, dot},
-};
+use crate::prelude::*;
 
 #[derive(Debug, Default, Clone, Copy)]
 pub struct HitRecord {
     pub p: Point3,
     pub normal: Vec3,
     pub t: f64,
     pub front_face: bool,
 }
 
 impl HitRecord {
     pub fn set_face_normal(&mut self, r: Ray, outward_normal: Vec3) {
         // Sets the hit record normal vector.
         // NOTE: the parameter `outward_normal` is assumed to have unit length.
 
         self.front_face = dot(r.direction(), outward_normal) < 0.0;
         self.normal = if self.front_face {
             outward_normal
         } else {
             -outward_normal
         };
     }
 }
 
 pub trait Hittable {
     fn hit(&self, r: Ray, ray_tmin: f64, ray_tmax: f64) -> Option<HitRecord>;
 }

Listing 26: [hittable.rs] Assume rtweekend.h inclusion for hittable.h


diff --git a/src/hittable_list.rs b/src/hittable_list.rs
index 95c24e4..7841161 100644
--- a/src/hittable_list.rs
+++ b/src/hittable_list.rs
@@ -1,34 +1,32 @@
-use std::rc::Rc;
-
 use crate::{
     hittable::{HitRecord, Hittable},
-    ray::Ray,
+    prelude::*,
 };
 
 #[derive(Default)]
 pub struct HittableList {
     pub objects: Vec<Rc<dyn Hittable>>,
 }
 
 impl HittableList {
     pub fn new() -> Self {
         Self::default()
     }
 
     pub fn clear(&mut self) {
         self.objects.clear();
     }
 
     pub fn add(&mut self, object: Rc<dyn Hittable>) {
         self.objects.push(object);
     }
 }
 
 impl Hittable for HittableList {
     fn hit(&self, r: Ray, ray_tmin: f64, ray_tmax: f64) -> Option<HitRecord> {
         self.objects
             .iter()
             .filter_map(|obj| obj.hit(r, ray_tmin, ray_tmax))
             .min_by(|a, b| a.t.partial_cmp(&b.t).expect("no NaN value"))
     }
 }

Listing 27: [hittable_list.rs] Assume rtweekend.h inclusion for hittable_list.h


diff --git a/src/sphere.rs b/src/sphere.rs
index 86d3cbb..9de9f72 100644
--- a/src/sphere.rs
+++ b/src/sphere.rs
@@ -1,57 +1,56 @@
 use crate::{
     hittable::{HitRecord, Hittable},
-    ray::Ray,
-    vec3::{Point3, dot},
+    prelude::*,
 };
 
 #[derive(Debug, Clone, Copy)]
 pub struct Sphere {
     center: Point3,
     radius: f64,
 }
 
 impl Sphere {
     pub fn new(center: Point3, radius: f64) -> Self {
         Self {
             center,
             radius: f64::max(0.0, radius),
         }
     }
 }
 
 impl Hittable for Sphere {
     fn hit(&self, r: Ray, ray_tmin: f64, ray_tmax: f64) -> Option<HitRecord> {
         let oc = self.center - r.origin();
         let a = r.direction().length_squared();
         let h = dot(r.direction(), oc);
         let c = oc.length_squared() - self.radius * self.radius;
 
         let discriminant = h * h - a * c;
         if discriminant < 0.0 {
             return None;
         }
 
         let sqrtd = f64::sqrt(discriminant);
 
         // Find the nearest root that lies in the acceptable range.
         let mut root = (h - sqrtd) / a;
         if root <= ray_tmin || ray_tmax <= root {
             root = (h + sqrtd) / a;
             if root <= ray_tmin || ray_tmax <= root {
                 return None;
             }
         }
 
         let t = root;
         let p = r.at(t);
         let mut rec = HitRecord {
             t,
             p,
             ..Default::default()
         };
         let outward_normal = (p - self.center) / self.radius;
         rec.set_face_normal(r, outward_normal);
 
         Some(rec)
     }
 }

Listing 28: [sphere.rs] Assume rtweekend.h inclusion for sphere.h


// nothing changes

Listing 29: [vec3.rs] Assume rtweekend.h inclusion for vec3.h


And now the new main:

diff --git a/src/main.rs b/src/main.rs
index 1f26e3d..59c000b 100644
--- a/src/main.rs
+++ b/src/main.rs
@@ -1,82 +1,74 @@
-use code::{
-    color::{Color, write_color},
-    ray::Ray,
-    vec3::{Point3, Vec3, dot, unit_vector},
-};
-
-fn hit_sphere(center: Point3, radius: f64, r: Ray) -> Option<f64> {
-    let oc = center - r.origin();
-    let a = r.direction().length_squared();
-    let h = dot(r.direction(), oc);
-    let c = oc.length_squared() - radius * radius;
-    let discriminant = h * h - a * c;
-
-    (discriminant >= 0.0).then(|| (h - f64::sqrt(discriminant)) / a)
-}
+use code::{hittable::Hittable, hittable_list::HittableList, prelude::*, sphere::Sphere};
 
-fn ray_color(r: Ray) -> Color {
-    if let Some(t) = hit_sphere(Point3::new(0.0, 0.0, -1.0), 0.5, r) {
-        let n = unit_vector(r.at(t) - Vec3::new(0.0, 0.0, -1.0));
-        return 0.5 * Color::new(n.x() + 1.0, n.y() + 1.0, n.z() + 1.0);
+fn ray_color(r: Ray, world: &impl Hittable) -> Color {
+    if let Some(rec) = world.hit(r, 0.0, INFINITY) {
+        return 0.5 * (rec.normal + Color::new(1.0, 1.0, 1.0));
     }
 
     let unit_direction = unit_vector(r.direction());
     let a = 0.5 * (unit_direction.y() + 1.0);
     (1.0 - a) * Color::new(1.0, 1.0, 1.0) + a * Color::new(0.5, 0.7, 1.0)
 }
 
 fn main() -> Result<(), Box<dyn std::error::Error>> {
     // Image
 
     const ASPECT_RATIO: f64 = 16.0 / 9.0;
     const IMAGE_WIDTH: i32 = 400;
 
     // Calculate the image height, and ensure that it's at least 1.
     const IMAGE_HEIGHT: i32 = {
         let image_height = (IMAGE_WIDTH as f64 / ASPECT_RATIO) as i32;
         if image_height < 1 { 1 } else { image_height }
     };
 
+    // World
+
+    let mut world = HittableList::new();
+
+    world.add(Rc::new(Sphere::new(Point3::new(0.0, 0.0, -1.0), 0.5)));
+    world.add(Rc::new(Sphere::new(Point3::new(0.0, -100.5, -1.0), 100.0)));
+
     // Camera
 
     let focal_length = 1.0;
     let viewport_height = 2.0;
     let viewport_width = viewport_height * (IMAGE_WIDTH as f64) / (IMAGE_HEIGHT as f64);
     let camera_center = Point3::new(0.0, 0.0, 0.0);
 
     // Calculate the vectors across the horizontal and down the vertical viewport edges.
     let viewport_u = Vec3::new(viewport_width, 0.0, 0.0);
     let viewport_v = Vec3::new(0.0, -viewport_height, 0.0);
 
     // Calculate the horizontal and vertical delta vectors from pixel to pixel.
     let pixel_delta_u = viewport_u / IMAGE_WIDTH as f64;
     let pixel_delta_v = viewport_v / IMAGE_HEIGHT as f64;
 
     // Calculate the location of the upper left pixel.
     let viewport_upper_left =
         camera_center - Vec3::new(0.0, 0.0, focal_length) - viewport_u / 2.0 - viewport_v / 2.0;
     let pixel00_loc = viewport_upper_left + 0.5 * (pixel_delta_u + pixel_delta_v);
 
     // Render
 
     env_logger::init();
     println!("P3");
     println!("{IMAGE_WIDTH} {IMAGE_HEIGHT}");
     println!("255");
 
     for j in 0..IMAGE_HEIGHT {
-        log::info!("Scanlines remaining: {}", IMAGE_HEIGHT - j);
+        info!("Scanlines remaining: {}", IMAGE_HEIGHT - j);
         for i in 0..IMAGE_WIDTH {
             let pixel_center =
                 pixel00_loc + (i as f64) * pixel_delta_u + (j as f64) * pixel_delta_v;
             let ray_direction = pixel_center - camera_center;
             let r = Ray::new(camera_center, ray_direction);
 
-            let pixel_color = ray_color(r);
+            let pixel_color = ray_color(r, &world);
             write_color(std::io::stdout(), pixel_color)?;
         }
     }
-    log::info!("Done.");
+    info!("Done.");
 
     Ok(())
 }

Listing 30: [main.rs] The new main with hittables


This yields a picture that is really just a visualization of where the spheres are located along with their surface normal. This is often a great way to view any flaws or specific characteristics of a geometric model.

 Resulting render of normals-colored sphere with ground

Image 5: Resulting render of normals-colored sphere with ground



  1. In Rust it is common to create a prelude module for common types, which we will do here instead. Note, however, that there are currently no plans to support custom preludes as a language feature; instead we need to import the prelude explicitly with use crate::prelude::*.

  2. There is no need to pull the whole prelude into color.rs just for the Vec3 struct. The listing is still included to match the numbering of the original book series.

An Interval Class

Before we continue, we'll implement an interval class to manage real-valued intervals with a minimum and a maximum. We'll end up using this class quite often as we proceed.

#[derive(Debug, Clone, Copy)]
pub struct Interval {
    pub min: f64,
    pub max: f64,
}

impl Default for Interval {
    fn default() -> Self {
        Self::EMPTY
    }
}

impl Interval {
    pub const EMPTY: Self = Self {
        min: f64::INFINITY,
        max: f64::NEG_INFINITY,
    };

    pub const UNIVERSE: Self = Self {
        min: f64::NEG_INFINITY,
        max: f64::INFINITY,
    };

    pub fn new(min: f64, max: f64) -> Self {
        Self { min, max }
    }

    pub fn size(&self) -> f64 {
        self.max - self.min
    }

    pub fn contains(&self, x: f64) -> bool {
        self.min <= x && x <= self.max
    }

    pub fn surrounds(&self, x: f64) -> bool {
        self.min < x && x < self.max
    }
}

Listing 31: [interval.rs] Introducing the new interval class
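
The distinction between contains() (a closed comparison) and surrounds() (an open comparison) is deliberate: the sphere code will shortly use surrounds() to reject roots that fall exactly on the interval boundary. A quick illustration under the definitions above, assuming the crate name code from the main.rs listings:

use code::interval::Interval;

fn main() {
    let unit = Interval::new(0.0, 1.0);
    assert!(unit.contains(0.0)); // closed: min <= x <= max
    assert!(!unit.surrounds(0.0)); // open: min < x < max
    assert!(unit.surrounds(0.5));

    // EMPTY has min > max, so it contains no value at all.
    assert!(!Interval::EMPTY.contains(0.0));
}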


diff --git a/src/prelude.rs b/src/prelude.rs
index fcd4621..2ec9487 100644
--- a/src/prelude.rs
+++ b/src/prelude.rs
@@ -1,14 +1,14 @@
 pub use log::*;
 
 // Rust Std usings
 
 pub use std::rc::Rc;
 
 // Constants
 
 pub const INFINITY: f64 = f64::INFINITY;
 pub const PI: f64 = std::f64::consts::PI;
 
 // Common Headers
 
-pub use crate::{color::*, ray::*, vec3::*};
+pub use crate::{color::*, interval::Interval, ray::*, vec3::*};

Listing 32: [prelude.rs] Including the new interval class


diff --git a/src/hittable.rs b/src/hittable.rs
index a7aab5a..1b65b92 100644
--- a/src/hittable.rs
+++ b/src/hittable.rs
@@ -1,27 +1,27 @@
 use crate::prelude::*;
 
 #[derive(Debug, Default, Clone, Copy)]
 pub struct HitRecord {
     pub p: Point3,
     pub normal: Vec3,
     pub t: f64,
     pub front_face: bool,
 }
 
 impl HitRecord {
     pub fn set_face_normal(&mut self, r: Ray, outward_normal: Vec3) {
         // Sets the hit record normal vector.
         // NOTE: the parameter `outward_normal` is assumed to have unit length.
 
         self.front_face = dot(r.direction(), outward_normal) < 0.0;
         self.normal = if self.front_face {
             outward_normal
         } else {
             -outward_normal
         };
     }
 }
 
 pub trait Hittable {
-    fn hit(&self, r: Ray, ray_tmin: f64, ray_tmax: f64) -> Option<HitRecord>;
+    fn hit(&self, r: Ray, ray_t: Interval) -> Option<HitRecord>;
 }

Listing 33: [hittable.rs] hittable::hit() using interval


diff --git a/src/hittable_list.rs b/src/hittable_list.rs
index 7841161..4647aa5 100644
--- a/src/hittable_list.rs
+++ b/src/hittable_list.rs
@@ -1,32 +1,32 @@
 use crate::{
     hittable::{HitRecord, Hittable},
     prelude::*,
 };
 
 #[derive(Default)]
 pub struct HittableList {
     pub objects: Vec<Rc<dyn Hittable>>,
 }
 
 impl HittableList {
     pub fn new() -> Self {
         Self::default()
     }
 
     pub fn clear(&mut self) {
         self.objects.clear();
     }
 
     pub fn add(&mut self, object: Rc<dyn Hittable>) {
         self.objects.push(object);
     }
 }
 
 impl Hittable for HittableList {
-    fn hit(&self, r: Ray, ray_tmin: f64, ray_tmax: f64) -> Option<HitRecord> {
+    fn hit(&self, r: Ray, ray_t: Interval) -> Option<HitRecord> {
         self.objects
             .iter()
-            .filter_map(|obj| obj.hit(r, ray_tmin, ray_tmax))
+            .filter_map(|obj| obj.hit(r, ray_t))
             .min_by(|a, b| a.t.partial_cmp(&b.t).expect("no NaN value"))
     }
 }

Listing 34: [hittable_list.rs] hittable_list::hit() using interval


diff --git a/src/sphere.rs b/src/sphere.rs
index 9de9f72..a2710b4 100644
--- a/src/sphere.rs
+++ b/src/sphere.rs
@@ -1,56 +1,56 @@
 use crate::{
     hittable::{HitRecord, Hittable},
     prelude::*,
 };
 
 #[derive(Debug, Clone, Copy)]
 pub struct Sphere {
     center: Point3,
     radius: f64,
 }
 
 impl Sphere {
     pub fn new(center: Point3, radius: f64) -> Self {
         Self {
             center,
             radius: f64::max(0.0, radius),
         }
     }
 }
 
 impl Hittable for Sphere {
-    fn hit(&self, r: Ray, ray_tmin: f64, ray_tmax: f64) -> Option<HitRecord> {
+    fn hit(&self, r: Ray, ray_t: Interval) -> Option<HitRecord> {
         let oc = self.center - r.origin();
         let a = r.direction().length_squared();
         let h = dot(r.direction(), oc);
         let c = oc.length_squared() - self.radius * self.radius;
 
         let discriminant = h * h - a * c;
         if discriminant < 0.0 {
             return None;
         }
 
         let sqrtd = f64::sqrt(discriminant);
 
         // Find the nearest root that lies in the acceptable range.
         let mut root = (h - sqrtd) / a;
-        if root <= ray_tmin || ray_tmax <= root {
+        if !ray_t.surrounds(root) {
             root = (h + sqrtd) / a;
-            if root <= ray_tmin || ray_tmax <= root {
+            if !ray_t.surrounds(root) {
                 return None;
             }
         }
 
         let t = root;
         let p = r.at(t);
         let mut rec = HitRecord {
             t,
             p,
             ..Default::default()
         };
         let outward_normal = (p - self.center) / self.radius;
         rec.set_face_normal(r, outward_normal);
 
         Some(rec)
     }
 }

Listing 35: [sphere.rs] sphere using interval


diff --git a/src/main.rs b/src/main.rs
index 59c000b..a8d3932 100644
--- a/src/main.rs
+++ b/src/main.rs
@@ -1,74 +1,74 @@
 use code::{hittable::Hittable, hittable_list::HittableList, prelude::*, sphere::Sphere};
 
 fn ray_color(r: Ray, world: &impl Hittable) -> Color {
-    if let Some(rec) = world.hit(r, 0.0, INFINITY) {
+    if let Some(rec) = world.hit(r, Interval::new(0.0, INFINITY)) {
         return 0.5 * (rec.normal + Color::new(1.0, 1.0, 1.0));
     }
 
     let unit_direction = unit_vector(r.direction());
     let a = 0.5 * (unit_direction.y() + 1.0);
     (1.0 - a) * Color::new(1.0, 1.0, 1.0) + a * Color::new(0.5, 0.7, 1.0)
 }
 
 fn main() -> Result<(), Box<dyn std::error::Error>> {
     // Image
 
     const ASPECT_RATIO: f64 = 16.0 / 9.0;
     const IMAGE_WIDTH: i32 = 400;
 
     // Calculate the image height, and ensure that it's at least 1.
     const IMAGE_HEIGHT: i32 = {
         let image_height = (IMAGE_WIDTH as f64 / ASPECT_RATIO) as i32;
         if image_height < 1 { 1 } else { image_height }
     };
 
     // World
 
     let mut world = HittableList::new();
 
     world.add(Rc::new(Sphere::new(Point3::new(0.0, 0.0, -1.0), 0.5)));
     world.add(Rc::new(Sphere::new(Point3::new(0.0, -100.5, -1.0), 100.0)));
 
     // Camera
 
     let focal_length = 1.0;
     let viewport_height = 2.0;
     let viewport_width = viewport_height * (IMAGE_WIDTH as f64) / (IMAGE_HEIGHT as f64);
     let camera_center = Point3::new(0.0, 0.0, 0.0);
 
     // Calculate the vectors across the horizontal and down the vertical viewport edges.
     let viewport_u = Vec3::new(viewport_width, 0.0, 0.0);
     let viewport_v = Vec3::new(0.0, -viewport_height, 0.0);
 
     // Calculate the horizontal and vertical delta vectors from pixel to pixel.
     let pixel_delta_u = viewport_u / IMAGE_WIDTH as f64;
     let pixel_delta_v = viewport_v / IMAGE_HEIGHT as f64;
 
     // Calculate the location of the upper left pixel.
     let viewport_upper_left =
         camera_center - Vec3::new(0.0, 0.0, focal_length) - viewport_u / 2.0 - viewport_v / 2.0;
     let pixel00_loc = viewport_upper_left + 0.5 * (pixel_delta_u + pixel_delta_v);
 
     // Render
 
     env_logger::init();
     println!("P3");
     println!("{IMAGE_WIDTH} {IMAGE_HEIGHT}");
     println!("255");
 
     for j in 0..IMAGE_HEIGHT {
         info!("Scanlines remaining: {}", IMAGE_HEIGHT - j);
         for i in 0..IMAGE_WIDTH {
             let pixel_center =
                 pixel00_loc + (i as f64) * pixel_delta_u + (j as f64) * pixel_delta_v;
             let ray_direction = pixel_center - camera_center;
             let r = Ray::new(camera_center, ray_direction);
 
             let pixel_color = ray_color(r, &world);
             write_color(std::io::stdout(), pixel_color)?;
         }
     }
     info!("Done.");
 
     Ok(())
 }

Listing 36: [main.rs] The new main using interval


Moving Camera Code Into Its Own Class

Before continuing, now is a good time to consolidate our camera and scene-render code into a single new class: the camera class. The camera class will be responsible for two important jobs:

  1. Construct and dispatch rays into the world.
  2. Use the results of these rays to construct the rendered image.

In this refactoring, we'll collect the ray_color() function, along with the image, camera, and render sections of our main program. The new camera class will contain two public methods initialize() and render(), plus two private helper methods get_ray() and ray_color().

Ultimately, the camera will follow the simplest usage pattern that we could think of: it will be default constructed with no arguments, then the owning code will modify the camera's public variables through simple assignment, and finally everything is initialized by a call to the initialize() function. This pattern is chosen instead of the owner calling a constructor with a ton of parameters or defining and calling a bunch of setter methods. Instead, the owning code only needs to set what it explicitly cares about. Finally, we could either have the owning code call initialize(), or just have the camera call this function automatically at the start of render(). We'll use the second approach. 1

After main creates a camera and sets default values, it will call the render() method. The render() method will prepare the camera for rendering and then execute the render loop.

Here's the skeleton of our new camera class:

use crate::{hittable::Hittable, prelude::*};

pub struct Camera {
    /// Ratio of image width over height
    pub aspect_ratio: f64,
    /// Rendered image width in pixel count
    pub image_width: i32,

    /// Rendered image height
    image_height: i32,
    /// Camera center
    center: Point3,
    /// Location of pixel 0, 0
    pixel00_loc: Point3,
    /// Offset to pixel to the right
    pixel_delta_u: Vec3,
    /// Offset to pixel below
    pixel_delta_v: Vec3,
}

impl Default for Camera {
    fn default() -> Self {
        Self {
            aspect_ratio: 1.0,
            image_width: 100,
            image_height: Default::default(),
            center: Default::default(),
            pixel00_loc: Default::default(),
            pixel_delta_u: Default::default(),
            pixel_delta_v: Default::default(),
        }
    }
}

impl Camera {
    pub fn with_aspect_ratio(mut self, aspect_ratio: f64) -> Self {
        self.aspect_ratio = aspect_ratio;

        self
    }

    pub fn with_image_width(mut self, image_width: i32) -> Self {
        self.image_width = image_width;

        self
    }

    pub fn render(&mut self, world: &impl Hittable) -> std::io::Result<()> {
        self.initialize();

        println!("P3");
        println!("{} {}", self.image_width, self.image_height);
        println!("255");

        for j in 0..self.image_height {
            info!("Scanlines remaining: {}", self.image_height - j);
            for i in 0..self.image_width {
                let pixel_center = self.pixel00_loc
                    + (i as f64) * self.pixel_delta_u
                    + (j as f64) * self.pixel_delta_v;
                let ray_direction = pixel_center - self.center;
                let r = Ray::new(self.center, ray_direction);

                let pixel_color = Self::ray_color(r, world);
                write_color(std::io::stdout(), pixel_color)?;
            }
        }
        info!("Done.");

        Ok(())
    }

    fn initialize(&mut self) {
        self.image_height = {
            let image_height = (self.image_width as f64 / self.aspect_ratio) as i32;
            if image_height < 1 { 1 } else { image_height }
        };

        self.center = Point3::new(0.0, 0.0, 0.0);

        // Determine viewport dimensions.
        let focal_length = 1.0;
        let viewport_height = 2.0;
        let viewport_width =
            viewport_height * (self.image_width as f64) / (self.image_height as f64);

        // Calculate the vectors across the horizontal and down the vertical viewport edges.
        let viewport_u = Vec3::new(viewport_width, 0.0, 0.0);
        let viewport_v = Vec3::new(0.0, -viewport_height, 0.0);

        // Calculate the horizontal and vertical delta vectors from pixel to pixel.
        self.pixel_delta_u = viewport_u / self.image_width as f64;
        self.pixel_delta_v = viewport_v / self.image_height as f64;

        // Calculate the location of the upper left pixel.
        let viewport_upper_left =
            self.center - Vec3::new(0.0, 0.0, focal_length) - viewport_u / 2.0 - viewport_v / 2.0;
        self.pixel00_loc = viewport_upper_left + 0.5 * (self.pixel_delta_u + self.pixel_delta_v);
    }

    fn ray_color(r: Ray, world: &impl Hittable) -> Color {
        if let Some(rec) = world.hit(r, Interval::new(0.0, INFINITY)) {
            return 0.5 * (rec.normal + Color::new(1.0, 1.0, 1.0));
        }

        let unit_direction = unit_vector(r.direction());
        let a = 0.5 * (unit_direction.y() + 1.0);
        (1.0 - a) * Color::new(1.0, 1.0, 1.0) + a * Color::new(0.5, 0.7, 1.0)
    }
}

Listing 37: [camera.rs] The camera class skeleton


To begin with, let's fill in the ray_color() function from main.cc:

use crate::{hittable::Hittable, prelude::*};

pub struct Camera {
    /// Ratio of image width over height
    pub aspect_ratio: f64,
    /// Rendered image width in pixel count
    pub image_width: i32,

    /// Rendered image height
    image_height: i32,
    /// Camera center
    center: Point3,
    /// Location of pixel 0, 0
    pixel00_loc: Point3,
    /// Offset to pixel to the right
    pixel_delta_u: Vec3,
    /// Offset to pixel below
    pixel_delta_v: Vec3,
}

impl Default for Camera {
    fn default() -> Self {
        Self {
            aspect_ratio: 1.0,
            image_width: 100,
            image_height: Default::default(),
            center: Default::default(),
            pixel00_loc: Default::default(),
            pixel_delta_u: Default::default(),
            pixel_delta_v: Default::default(),
        }
    }
}

impl Camera {
    pub fn with_aspect_ratio(mut self, aspect_ratio: f64) -> Self {
        self.aspect_ratio = aspect_ratio;

        self
    }

    pub fn with_image_width(mut self, image_width: i32) -> Self {
        self.image_width = image_width;

        self
    }

    pub fn render(&mut self, world: &impl Hittable) -> std::io::Result<()> {
        self.initialize();

        println!("P3");
        println!("{} {}", self.image_width, self.image_height);
        println!("255");

        for j in 0..self.image_height {
            info!("Scanlines remaining: {}", self.image_height - j);
            for i in 0..self.image_width {
                let pixel_center = self.pixel00_loc
                    + (i as f64) * self.pixel_delta_u
                    + (j as f64) * self.pixel_delta_v;
                let ray_direction = pixel_center - self.center;
                let r = Ray::new(self.center, ray_direction);

                let pixel_color = Self::ray_color(r, world);
                write_color(std::io::stdout(), pixel_color)?;
            }
        }
        info!("Done.");

        Ok(())
    }

    fn initialize(&mut self) {
        self.image_height = {
            let image_height = (self.image_width as f64 / self.aspect_ratio) as i32;
            if image_height < 1 { 1 } else { image_height }
        };

        self.center = Point3::new(0.0, 0.0, 0.0);

        // Determine viewport dimensions.
        let focal_length = 1.0;
        let viewport_height = 2.0;
        let viewport_width =
            viewport_height * (self.image_width as f64) / (self.image_height as f64);

        // Calculate the vectors across the horizontal and down the vertical viewport edges.
        let viewport_u = Vec3::new(viewport_width, 0.0, 0.0);
        let viewport_v = Vec3::new(0.0, -viewport_height, 0.0);

        // Calculate the horizontal and vertical delta vectors from pixel to pixel.
        self.pixel_delta_u = viewport_u / self.image_width as f64;
        self.pixel_delta_v = viewport_v / self.image_height as f64;

        // Calculate the location of the upper left pixel.
        let viewport_upper_left =
            self.center - Vec3::new(0.0, 0.0, focal_length) - viewport_u / 2.0 - viewport_v / 2.0;
        self.pixel00_loc = viewport_upper_left + 0.5 * (self.pixel_delta_u + self.pixel_delta_v);
    }

    fn ray_color(r: Ray, world: &impl Hittable) -> Color {
        if let Some(rec) = world.hit(r, Interval::new(0.0, INFINITY)) {
            return 0.5 * (rec.normal + Color::new(1.0, 1.0, 1.0));
        }

        let unit_direction = unit_vector(r.direction());
        let a = 0.5 * (unit_direction.y() + 1.0);
        (1.0 - a) * Color::new(1.0, 1.0, 1.0) + a * Color::new(0.5, 0.7, 1.0)
    }
}

Listing 38: [camera.rs] The camera::ray_color function


Now we move almost everything from the main() function into our new camera class. The only thing remaining in the main() function is the world construction. Here's the camera class with newly migrated code:

use crate::{hittable::Hittable, prelude::*};

pub struct Camera {
    /// Ratio of image width over height
    pub aspect_ratio: f64,
    /// Rendered image width in pixel count
    pub image_width: i32,

    /// Rendered image height
    image_height: i32,
    /// Camera center
    center: Point3,
    /// Location of pixel 0, 0
    pixel00_loc: Point3,
    /// Offset to pixel to the right
    pixel_delta_u: Vec3,
    /// Offset to pixel below
    pixel_delta_v: Vec3,
}

impl Default for Camera {
    fn default() -> Self {
        Self {
            aspect_ratio: 1.0,
            image_width: 100,
            image_height: Default::default(),
            center: Default::default(),
            pixel00_loc: Default::default(),
            pixel_delta_u: Default::default(),
            pixel_delta_v: Default::default(),
        }
    }
}

impl Camera {
    pub fn with_aspect_ratio(mut self, aspect_ratio: f64) -> Self {
        self.aspect_ratio = aspect_ratio;

        self
    }

    pub fn with_image_width(mut self, image_width: i32) -> Self {
        self.image_width = image_width;

        self
    }

    pub fn render(&mut self, world: &impl Hittable) -> std::io::Result<()> {
        self.initialize();

        println!("P3");
        println!("{} {}", self.image_width, self.image_height);
        println!("255");

        for j in 0..self.image_height {
            info!("Scanlines remaining: {}", self.image_height - j);
            for i in 0..self.image_width {
                let pixel_center = self.pixel00_loc
                    + (i as f64) * self.pixel_delta_u
                    + (j as f64) * self.pixel_delta_v;
                let ray_direction = pixel_center - self.center;
                let r = Ray::new(self.center, ray_direction);

                let pixel_color = Self::ray_color(r, world);
                write_color(std::io::stdout(), pixel_color)?;
            }
        }
        info!("Done.");

        Ok(())
    }

    fn initialize(&mut self) {
        self.image_height = {
            let image_height = (self.image_width as f64 / self.aspect_ratio) as i32;
            if image_height < 1 { 1 } else { image_height }
        };

        self.center = Point3::new(0.0, 0.0, 0.0);

        // Determine viewport dimensions.
        let focal_length = 1.0;
        let viewport_height = 2.0;
        let viewport_width =
            viewport_height * (self.image_width as f64) / (self.image_height as f64);

        // Calculate the vectors across the horizontal and down the vertical viewport edges.
        let viewport_u = Vec3::new(viewport_width, 0.0, 0.0);
        let viewport_v = Vec3::new(0.0, -viewport_height, 0.0);

        // Calculate the horizontal and vertical delta vectors from pixel to pixel.
        self.pixel_delta_u = viewport_u / self.image_width as f64;
        self.pixel_delta_v = viewport_v / self.image_height as f64;

        // Calculate the location of the upper left pixel.
        let viewport_upper_left =
            self.center - Vec3::new(0.0, 0.0, focal_length) - viewport_u / 2.0 - viewport_v / 2.0;
        self.pixel00_loc = viewport_upper_left + 0.5 * (self.pixel_delta_u + self.pixel_delta_v);
    }

    fn ray_color(r: Ray, world: &impl Hittable) -> Color {
        if let Some(rec) = world.hit(r, Interval::new(0.0, INFINITY)) {
            return 0.5 * (rec.normal + Color::new(1.0, 1.0, 1.0));
        }

        let unit_direction = unit_vector(r.direction());
        let a = 0.5 * (unit_direction.y() + 1.0);
        (1.0 - a) * Color::new(1.0, 1.0, 1.0) + a * Color::new(0.5, 0.7, 1.0)
    }
}

Listing 39: [camera.rs] The working camera class


And here's the much reduced main:

diff --git a/src/main.rs b/src/main.rs
index a8d3932..27377f1 100644
--- a/src/main.rs
+++ b/src/main.rs
@@ -1,74 +1,15 @@
-use code::{hittable::Hittable, hittable_list::HittableList, prelude::*, sphere::Sphere};
-
-fn ray_color(r: Ray, world: &impl Hittable) -> Color {
-    if let Some(rec) = world.hit(r, Interval::new(0.0, INFINITY)) {
-        return 0.5 * (rec.normal + Color::new(1.0, 1.0, 1.0));
-    }
-
-    let unit_direction = unit_vector(r.direction());
-    let a = 0.5 * (unit_direction.y() + 1.0);
-    (1.0 - a) * Color::new(1.0, 1.0, 1.0) + a * Color::new(0.5, 0.7, 1.0)
-}
-
-fn main() -> Result<(), Box<dyn std::error::Error>> {
-    // Image
-
-    const ASPECT_RATIO: f64 = 16.0 / 9.0;
-    const IMAGE_WIDTH: i32 = 400;
-
-    // Calculate the image height, and ensure that it's at least 1.
-    const IMAGE_HEIGHT: i32 = {
-        let image_height = (IMAGE_WIDTH as f64 / ASPECT_RATIO) as i32;
-        if image_height < 1 { 1 } else { image_height }
-    };
-
-    // World
+use code::{camera::Camera, hittable_list::HittableList, prelude::*, sphere::Sphere};
 
+fn main() -> std::io::Result<()> {
     let mut world = HittableList::new();
 
     world.add(Rc::new(Sphere::new(Point3::new(0.0, 0.0, -1.0), 0.5)));
     world.add(Rc::new(Sphere::new(Point3::new(0.0, -100.5, -1.0), 100.0)));
 
-    // Camera
-
-    let focal_length = 1.0;
-    let viewport_height = 2.0;
-    let viewport_width = viewport_height * (IMAGE_WIDTH as f64) / (IMAGE_HEIGHT as f64);
-    let camera_center = Point3::new(0.0, 0.0, 0.0);
-
-    // Calculate the vectors across the horizontal and down the vertical viewport edges.
-    let viewport_u = Vec3::new(viewport_width, 0.0, 0.0);
-    let viewport_v = Vec3::new(0.0, -viewport_height, 0.0);
-
-    // Calculate the horizontal and vertical delta vectors from pixel to pixel.
-    let pixel_delta_u = viewport_u / IMAGE_WIDTH as f64;
-    let pixel_delta_v = viewport_v / IMAGE_HEIGHT as f64;
-
-    // Calculate the location of the upper left pixel.
-    let viewport_upper_left =
-        camera_center - Vec3::new(0.0, 0.0, focal_length) - viewport_u / 2.0 - viewport_v / 2.0;
-    let pixel00_loc = viewport_upper_left + 0.5 * (pixel_delta_u + pixel_delta_v);
-
-    // Render
-
     env_logger::init();
-    println!("P3");
-    println!("{IMAGE_WIDTH} {IMAGE_HEIGHT}");
-    println!("255");
-
-    for j in 0..IMAGE_HEIGHT {
-        info!("Scanlines remaining: {}", IMAGE_HEIGHT - j);
-        for i in 0..IMAGE_WIDTH {
-            let pixel_center =
-                pixel00_loc + (i as f64) * pixel_delta_u + (j as f64) * pixel_delta_v;
-            let ray_direction = pixel_center - camera_center;
-            let r = Ray::new(camera_center, ray_direction);
-
-            let pixel_color = ray_color(r, &world);
-            write_color(std::io::stdout(), pixel_color)?;
-        }
-    }
-    info!("Done.");
 
-    Ok(())
+    Camera::default()
+        .with_aspect_ratio(16.0 / 9.0)
+        .with_image_width(400)
+        .render(&world)
 }

Listing 40: [main.rs] The new main, using the new camera


Running this newly refactored program should give us the same rendered image as before.


  1. The idiomatic Rust solution for this type of problem is the builder pattern. Important parameters can be set either directly with struct access or via a convenient method chain in the owning code.
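
For instance, since aspect_ratio and image_width are public fields, both of the following sketches configure the same camera (using the Camera type from Listing 39; neither snippet is part of the book's code):

#![allow(unused)]
fn main() {
// Style 1: direct struct access on a default-constructed camera.
let mut cam = Camera::default();
cam.aspect_ratio = 16.0 / 9.0;
cam.image_width = 400;

// Style 2: the method chain that main.rs uses.
let cam = Camera::default()
    .with_aspect_ratio(16.0 / 9.0)
    .with_image_width(400);
}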

Antialiasing

If you zoom into the rendered images so far, you might notice the harsh “stair step” nature of edges in our rendered images. This stair-stepping is commonly referred to as “aliasing”, or “jaggies”. When a real camera takes a picture, there are usually no jaggies along edges, because the edge pixels are a blend of some foreground and some background. Consider that unlike our rendered images, a true image of the world is continuous. Put another way, the world (and any true image of it) has effectively infinite resolution. We can get the same effect by averaging a bunch of samples for each pixel.

With a single ray through the center of each pixel, we are performing what is commonly called point sampling. The problem with point sampling can be illustrated by rendering a small checkerboard far away. If this checkerboard consists of an 8×8 grid of black and white tiles, but only four rays hit it, then all four rays might intersect only white tiles, or only black, or some odd combination. In the real world, when we perceive a checkerboard far away with our eyes, we perceive it as a gray color, instead of sharp points of black and white. That's because our eyes are naturally doing what we want our ray tracer to do: integrate the (continuous function of) light falling on a particular (discrete) region of our rendered image.

Clearly we don't gain anything by just resampling the same ray through the pixel center multiple times — we'd just get the same result each time. Instead, we want to sample the light falling around the pixel, and then integrate those samples to approximate the true continuous result. So, how do we integrate the light falling around the pixel?

We'll adopt the simplest model: sampling the square region centered at the pixel that extends halfway to each of the four neighboring pixels. This is not the optimal approach, but it is the most straightforward. (See A Pixel is Not a Little Square for a deeper dive into this topic.)

Pixel samples

Figure 8: Pixel samples


Some Random Number Utilities 1

We're going to need a random number generator that returns real random numbers. This function should return a canonical random number, which by convention falls in the range \( 0 \le n < 1 \). The “less than” before the \( 1 \) is important, as we will sometimes take advantage of that.
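
As a concrete example of why the strict upper bound matters (random_index() is a hypothetical helper, built on the random_double() defined just below): scaling a canonical random number by a length n and truncating always produces a valid index, because the product is strictly less than n.

// Always yields an index in 0..n, since random_double() < 1.0 strictly.
fn random_index(n: usize) -> usize {
    (random_double() * n as f64) as usize
}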

A simple approach to this is to use the std::rand() function that can be found in <cstdlib>, which returns a random integer in the range 0 to RAND_MAX. Hence we can get a real random number as desired with the following code snippet, added to rtweekend.h:

// Utility Functions

#[inline]
pub fn degrees_to_radians(degrees: f64) -> f64 {
    degrees.to_radians()
}

#[inline]
pub fn random_double() -> f64 {
    // Returns a random real in [0,1). A u32 keeps the numerator non-negative,
    // mirroring std::rand() / (RAND_MAX + 1.0).
    rand::random::<u32>() as f64 / (u32::MAX as f64 + 1.0)
}

#[inline]
pub fn random_double_range(min: f64, max: f64) -> f64 {
    // Returns a random real in [min,max).
    min + (max - min) * random_double()
}

Listing 41: [prelude.rs] random_double() functions

C++ did not traditionally have a standard random number generator, but newer versions of C++ have addressed this issue with the <random> header (if imperfectly according to some experts). 2 If you want to use this, you can obtain a random number with the conditions we need as follows:

// Utility Functions

#[inline]
pub fn degrees_to_radians(degrees: f64) -> f64 {
    degrees.to_radians()
}

#[inline]
pub fn random_double() -> f64 {
    // Returns a random real in [0,1).
    rand::random()
}

#[inline]
pub fn random_double_range(min: f64, max: f64) -> f64 {
    // Returns a random real in [min,max).
    rand::random_range(min..max)
}

Listing 42: [prelude.rs] random_double() functions


  1. The utility functions, such as degrees_to_radians, were not mentioned in chapter 6.7 since Rust already provides this functionality by default. The utility functions for randomness described in this chapter will also not be included in the code, as they only wrap functions implemented in the Rust rand crate (see listing 42).

  2. As far as I am aware, the Rust implementation is perfectly fine.

Generating Pixels with Multiple Samples

For a single pixel composed of multiple samples, we'll select samples from the area surrounding the pixel and average the resulting light (color) values together.

First we'll update the write_color() function to account for the number of samples we use: we need to find the average across all of the samples that we take. To do this, we'll add the full color from each iteration, and then finish with a single division (by the number of samples) at the end, before writing out the color. To ensure that the color components of the final result remain within the proper \( [0,1] \) bounds, we'll add and use a small helper function: interval::clamp(x). 1 2

diff --git a/src/interval.rs b/src/interval.rs
index 509fddf..482d922 100644
--- a/src/interval.rs
+++ b/src/interval.rs
@@ -1,39 +1,43 @@
 #[derive(Debug, Clone, Copy)]
 pub struct Interval {
     pub min: f64,
     pub max: f64,
 }
 
 impl Default for Interval {
     fn default() -> Self {
         Self::EMPTY
     }
 }
 
 impl Interval {
     pub const EMPTY: Self = Self {
         min: f64::INFINITY,
         max: f64::NEG_INFINITY,
     };
 
     pub const UNIVERSE: Self = Self {
         min: f64::NEG_INFINITY,
         max: f64::INFINITY,
     };
 
-    pub fn new(min: f64, max: f64) -> Self {
+    pub const fn new(min: f64, max: f64) -> Self {
         Self { min, max }
     }
 
     pub fn size(&self) -> f64 {
         self.max - self.min
     }
 
     pub fn contains(&self, x: f64) -> bool {
         self.min <= x && x <= self.max
     }
 
     pub fn surrounds(&self, x: f64) -> bool {
         self.min < x && x < self.max
     }
+
+    pub const fn clamp(&self, x: f64) -> f64 {
+        x.clamp(self.min, self.max)
+    }
 }

Listing 43: [interval.rs] The interval::clamp() utility function


Here's the updated write_color() function that incorporates the interval clamping function:

diff --git a/src/color.rs b/src/color.rs
index c645ca2..1615d55 100644
--- a/src/color.rs
+++ b/src/color.rs
@@ -1,15 +1,18 @@
-use crate::vec3::Vec3;
+use crate::prelude::*;
 
 pub type Color = Vec3;
 
 pub fn write_color(mut out: impl std::io::Write, pixel_color: Color) -> std::io::Result<()> {
     let r = pixel_color.x();
     let g = pixel_color.y();
     let b = pixel_color.z();
 
-    let rbyte = (255.999 * r) as i32;
-    let gbyte = (255.999 * g) as i32;
-    let bbyte = (255.999 * b) as i32;
+    // Translate the [0,1] component values to the byte range [0,255].
+    const INTENSITY: Interval = Interval::new(0.000, 0.999);
+    let rbyte = (256.0 * INTENSITY.clamp(r)) as i32;
+    let gbyte = (256.0 * INTENSITY.clamp(g)) as i32;
+    let bbyte = (256.0 * INTENSITY.clamp(b)) as i32;
 
+    // Write out the pixel color components.
     writeln!(out, "{rbyte} {gbyte} {bbyte}")
 }

Listing 44: [color.rs] The multi-sample write_color() function


Now let's update the camera class to define and use a new camera::get_ray(i,j) function, which will generate different samples for each pixel. This function will use a new helper function sample_square() that generates a random sample point within the unit square centered at the origin. We then transform the random sample from this ideal square back to the particular pixel we're currently sampling.

diff --git a/src/camera.rs b/src/camera.rs
index 73dc5cc..f181b03 100644
--- a/src/camera.rs
+++ b/src/camera.rs
@@ -1,110 +1,151 @@
 use crate::{hittable::Hittable, prelude::*};
 
 pub struct Camera {
     /// Ratio of image width over height
     pub aspect_ratio: f64,
     /// Rendered image width in pixel count
     pub image_width: i32,
+    /// Count of random samples for each pixel
+    pub samples_per_pixel: i32,
 
     /// Rendered image height
     image_height: i32,
+    /// Color scale factor for a sum of pixel samples
+    pixel_samples_scale: f64,
     /// Camera center
     center: Point3,
     /// Location of pixel 0, 0
     pixel00_loc: Point3,
     /// Offset to pixel to the right
     pixel_delta_u: Vec3,
     /// Offset to pixel below
     pixel_delta_v: Vec3,
 }
 
 impl Default for Camera {
     fn default() -> Self {
         Self {
             aspect_ratio: 1.0,
             image_width: 100,
+            samples_per_pixel: 10,
             image_height: Default::default(),
+            pixel_samples_scale: Default::default(),
             center: Default::default(),
             pixel00_loc: Default::default(),
             pixel_delta_u: Default::default(),
             pixel_delta_v: Default::default(),
         }
     }
 }
 
 impl Camera {
     pub fn with_aspect_ratio(mut self, aspect_ratio: f64) -> Self {
         self.aspect_ratio = aspect_ratio;
 
         self
     }
 
     pub fn with_image_width(mut self, image_width: i32) -> Self {
         self.image_width = image_width;
 
         self
     }
 
+    pub fn with_samples_per_pixel(mut self, samples_per_pixel: i32) -> Self {
+        self.samples_per_pixel = samples_per_pixel;
+
+        self
+    }
+
     pub fn render(&mut self, world: &impl Hittable) -> std::io::Result<()> {
         self.initialize();
 
         println!("P3");
         println!("{} {}", self.image_width, self.image_height);
         println!("255");
 
         for j in 0..self.image_height {
             info!("Scanlines remaining: {}", self.image_height - j);
             for i in 0..self.image_width {
-                let pixel_center = self.pixel00_loc
-                    + (i as f64) * self.pixel_delta_u
-                    + (j as f64) * self.pixel_delta_v;
-                let ray_direction = pixel_center - self.center;
-                let r = Ray::new(self.center, ray_direction);
-
-                let pixel_color = Self::ray_color(r, world);
-                write_color(std::io::stdout(), pixel_color)?;
+                let mut pixel_color = Color::new(0.0, 0.0, 0.0);
+                for _sample in 0..self.samples_per_pixel {
+                    let r = self.get_ray(i, j);
+                    pixel_color += Self::ray_color(r, world);
+                }
+                write_color(std::io::stdout(), self.pixel_samples_scale * pixel_color)?;
             }
         }
         info!("Done.");
 
         Ok(())
     }
 
     fn initialize(&mut self) {
         self.image_height = {
             let image_height = (self.image_width as f64 / self.aspect_ratio) as i32;
             if image_height < 1 { 1 } else { image_height }
         };
 
+        self.pixel_samples_scale = 1.0 / self.samples_per_pixel as f64;
+
         self.center = Point3::new(0.0, 0.0, 0.0);
 
         // Determine viewport dimensions.
         let focal_length = 1.0;
         let viewport_height = 2.0;
         let viewport_width =
             viewport_height * (self.image_width as f64) / (self.image_height as f64);
 
         // Calculate the vectors across the horizontal and down the vertical viewport edges.
         let viewport_u = Vec3::new(viewport_width, 0.0, 0.0);
         let viewport_v = Vec3::new(0.0, -viewport_height, 0.0);
 
         // Calculate the horizontal and vertical delta vectors from pixel to pixel.
         self.pixel_delta_u = viewport_u / self.image_width as f64;
         self.pixel_delta_v = viewport_v / self.image_height as f64;
 
         // Calculate the location of the upper left pixel.
         let viewport_upper_left =
             self.center - Vec3::new(0.0, 0.0, focal_length) - viewport_u / 2.0 - viewport_v / 2.0;
         self.pixel00_loc = viewport_upper_left + 0.5 * (self.pixel_delta_u + self.pixel_delta_v);
     }
 
+    fn get_ray(&self, i: i32, j: i32) -> Ray {
+        // Construct a camera ray originating from the origin and directed at randomly sampled
+        // point around the pixel location i, j.
+
+        let offset = Self::sample_square();
+        let pixel_sample = self.pixel00_loc
+            + ((i as f64 + offset.x()) * self.pixel_delta_u)
+            + ((j as f64 + offset.y()) * self.pixel_delta_v);
+
+        let ray_origin = self.center;
+        let ray_direction = pixel_sample - ray_origin;
+
+        Ray::new(ray_origin, ray_direction)
+    }
+
+    fn sample_square() -> Vec3 {
+        // Returns the vector to a random point in the [-.5,-.5]-[+.5,+.5] unit square.
+        Vec3::new(
+            rand::random::<f64>() - 0.5,
+            rand::random::<f64>() - 0.5,
+            0.0,
+        )
+    }
+
+    fn _sample_disk(radius: f64) -> Vec3 {
+        // Returns a random point in the unit (radius 0.5) disk centered at the origin.
+        radius * random_in_unit_disk()
+    }
+
     fn ray_color(r: Ray, world: &impl Hittable) -> Color {
         if let Some(rec) = world.hit(r, Interval::new(0.0, INFINITY)) {
             return 0.5 * (rec.normal + Color::new(1.0, 1.0, 1.0));
         }
 
         let unit_direction = unit_vector(r.direction());
         let a = 0.5 * (unit_direction.y() + 1.0);
         (1.0 - a) * Color::new(1.0, 1.0, 1.0) + a * Color::new(0.5, 0.7, 1.0)
     }
 }

Listing 45: [camera.rs] Camera with samples-per-pixel parameter


(In addition to the new sample_square() function above, the listing also includes _sample_disk() (prefixed with an underscore because it goes unused). It is included in case you'd like to experiment with non-square pixels, but we won't be using it in this book. _sample_disk() depends on the function random_in_unit_disk(), which is defined later on.)

Main is updated to set the new camera parameter.

diff --git a/src/main.rs b/src/main.rs
index 27377f1..9f08807 100644
--- a/src/main.rs
+++ b/src/main.rs
@@ -1,15 +1,16 @@
 use code::{camera::Camera, hittable_list::HittableList, prelude::*, sphere::Sphere};
 
 fn main() -> std::io::Result<()> {
     let mut world = HittableList::new();
 
     world.add(Rc::new(Sphere::new(Point3::new(0.0, 0.0, -1.0), 0.5)));
     world.add(Rc::new(Sphere::new(Point3::new(0.0, -100.5, -1.0), 100.0)));
 
     env_logger::init();
 
     Camera::default()
         .with_aspect_ratio(16.0 / 9.0)
         .with_image_width(400)
+        .with_samples_per_pixel(100)
         .render(&world)
 }

Listing 46: [main.rs] Setting the new samples-per-pixel parameter


Zooming into the image that is produced, we can see the difference in edge pixels.

Before and after antialiasing

Image 6: Before and after antialiasing



  1. For this purpose, version 1.50 of Rust introduced the f64::clamp(self, min, max) function (the C++17 standard introduced a similar function called std::clamp(v, lo, hi)).

  2. The function is const, meaning it can be used to initialise const variables. This will be demonstrated in the next listing.

Diffuse Materials

Now that we have objects and multiple rays per pixel, we can make some realistic looking materials. We’ll start with diffuse materials (also called matte).

One question is whether we mix and match geometry and materials (so that we can assign a material to multiple spheres, or vice versa) or if geometry and materials are tightly bound (which could be useful for procedural objects where the geometry and material are linked). We’ll go with separate — which is usual in most renderers — but do be aware that there are alternative approaches.
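
To make the trade-off concrete, here is a rough sketch of what the separate approach could look like. This is purely hypothetical; the book introduces its real material abstraction later, and the Material trait shown here is an illustration, not the book's API:

use std::rc::Rc;

// Hypothetical sketch: geometry holds a shared handle to a material, so one
// material instance can be assigned to any number of spheres.
trait Material {
    // scattering behavior would live here
}

struct Sphere {
    center: Point3, // Point3 is the alias from our vec3 module
    radius: f64,
    material: Rc<dyn Material>,
}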

A Simple Diffuse Material

Diffuse objects that don’t emit their own light merely take on the color of their surroundings, but they do modulate that with their own intrinsic color. Light that reflects off a diffuse surface has its direction randomized, so, if we send three rays into a crack between two diffuse surfaces they will each have different random behavior:

Light ray bounces

Figure 9: Light ray bounces


They might also be absorbed rather than reflected. The darker the surface, the more likely the ray is absorbed (that’s why it's dark!). Really any algorithm that randomizes direction will produce surfaces that look matte. Let's start with the most intuitive: a surface that randomly bounces a ray equally in all directions. For this material, a ray that hits the surface has an equal probability of bouncing in any direction away from the surface.

Equal reflection above the horizon

Figure 10: Equal reflection above the horizon


This very intuitive material is the simplest kind of diffuse and — indeed — many of the first raytracing papers used this diffuse method (before adopting a more accurate method that we'll be implementing a little bit later). We don't currently have a way to randomly reflect a ray, so we'll need to add a few functions to our vector utility module. The first thing we need is the ability to generate arbitrary random vectors:

diff --git a/src/vec3.rs b/src/vec3.rs
index d4352e1..f9228ac 100644
--- a/src/vec3.rs
+++ b/src/vec3.rs
@@ -1,190 +1,202 @@
 use std::{
     fmt::Display,
     ops::{Add, AddAssign, Div, DivAssign, Index, IndexMut, Mul, MulAssign, Neg, Sub},
 };
 
 #[derive(Debug, Default, Clone, Copy)]
 pub struct Vec3 {
     pub e: [f64; 3],
 }
 
 pub type Point3 = Vec3;
 
 impl Vec3 {
     pub fn new(e0: f64, e1: f64, e2: f64) -> Self {
         Self { e: [e0, e1, e2] }
     }
 
     pub fn x(&self) -> f64 {
         self.e[0]
     }
 
     pub fn y(&self) -> f64 {
         self.e[1]
     }
 
     pub fn z(&self) -> f64 {
         self.e[2]
     }
 
     pub fn length(&self) -> f64 {
         f64::sqrt(self.length_squared())
     }
 
     pub fn length_squared(&self) -> f64 {
         self.e[0] * self.e[0] + self.e[1] * self.e[1] + self.e[2] * self.e[2]
     }
+
+    pub fn random() -> Self {
+        Vec3 { e: rand::random() }
+    }
+
+    pub fn random_range(min: f64, max: f64) -> Self {
+        Vec3::new(
+            rand::random_range(min..max),
+            rand::random_range(min..max),
+            rand::random_range(min..max),
+        )
+    }
 }
 
 impl Neg for Vec3 {
     type Output = Self;
 
     fn neg(self) -> Self::Output {
         Self::Output {
             e: self.e.map(|e| -e),
         }
     }
 }
 
 impl Index<usize> for Vec3 {
     type Output = f64;
 
     fn index(&self, index: usize) -> &Self::Output {
         &self.e[index]
     }
 }
 
 impl IndexMut<usize> for Vec3 {
     fn index_mut(&mut self, index: usize) -> &mut Self::Output {
         &mut self.e[index]
     }
 }
 
 impl AddAssign for Vec3 {
     fn add_assign(&mut self, rhs: Self) {
         self.e[0] += rhs.e[0];
         self.e[1] += rhs.e[1];
         self.e[2] += rhs.e[2];
     }
 }
 
 impl MulAssign<f64> for Vec3 {
     fn mul_assign(&mut self, rhs: f64) {
         self.e[0] *= rhs;
         self.e[1] *= rhs;
         self.e[2] *= rhs;
     }
 }
 
 impl DivAssign<f64> for Vec3 {
     fn div_assign(&mut self, rhs: f64) {
         self.mul_assign(1.0 / rhs);
     }
 }
 
 impl Display for Vec3 {
     fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
         write!(f, "{} {} {}", self.e[0], self.e[1], self.e[2])
     }
 }
 
 impl Add for Vec3 {
     type Output = Self;
 
     fn add(self, rhs: Self) -> Self::Output {
         Self::Output {
             e: [
                 self.e[0] + rhs.e[0],
                 self.e[1] + rhs.e[1],
                 self.e[2] + rhs.e[2],
             ],
         }
     }
 }
 
 impl Sub for Vec3 {
     type Output = Self;
 
     fn sub(self, rhs: Self) -> Self::Output {
         Self::Output {
             e: [
                 self.e[0] - rhs.e[0],
                 self.e[1] - rhs.e[1],
                 self.e[2] - rhs.e[2],
             ],
         }
     }
 }
 
 impl Mul for Vec3 {
     type Output = Self;
 
     fn mul(self, rhs: Self) -> Self::Output {
         Self::Output {
             e: [
                 self.e[0] * rhs.e[0],
                 self.e[1] * rhs.e[1],
                 self.e[2] * rhs.e[2],
             ],
         }
     }
 }
 
 impl Mul<f64> for Vec3 {
     type Output = Self;
 
     fn mul(self, rhs: f64) -> Self::Output {
         Self::Output {
             e: [self.e[0] * rhs, self.e[1] * rhs, self.e[2] * rhs],
         }
     }
 }
 
 impl Mul<Vec3> for f64 {
     type Output = Vec3;
 
     fn mul(self, rhs: Vec3) -> Self::Output {
         rhs.mul(self)
     }
 }
 
 impl Div<f64> for Vec3 {
     type Output = Self;
 
     fn div(self, rhs: f64) -> Self::Output {
         self * (1.0 / rhs)
     }
 }
 
 #[inline]
 pub fn dot(u: Vec3, v: Vec3) -> f64 {
     u.e[0] * v.e[0] + u.e[1] * v.e[1] + u.e[2] * v.e[2]
 }
 
 #[inline]
 pub fn cross(u: Vec3, v: Vec3) -> Vec3 {
     Vec3::new(
         u.e[1] * v.e[2] - u.e[2] * v.e[1],
         u.e[2] * v.e[0] - u.e[0] * v.e[2],
         u.e[0] * v.e[1] - u.e[1] * v.e[0],
     )
 }
 
 #[inline]
 pub fn unit_vector(v: Vec3) -> Vec3 {
     v / v.length()
 }
 
 #[inline]
 pub fn random_in_unit_disk() -> Vec3 {
     loop {
         let p = Vec3::new(
             rand::random_range(-1.0..1.0),
             rand::random_range(-1.0..1.0),
             0.0,
         );
         if p.length_squared() < 1.0 {
             return p;
         }
     }
 }

Listing 47: [vec3.rs] vec3 random utility functions

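(A small version note: the free function rand::random_range() used in this listing is assumed to come from rand 0.9 or newer, where it was promoted to the crate root; on earlier versions of the crate you would write rand::thread_rng().gen_range(min..max) instead.)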

Then we need to figure out how to manipulate a random vector so that we only get results that are on the surface of a hemisphere. There are analytical methods of doing this, but they are actually surprisingly complicated to understand, and quite complicated to implement. Instead, we'll use what is typically the easiest algorithm: a rejection method. A rejection method works by repeatedly generating random samples until we produce a sample that meets the desired criteria. In other words, keep rejecting bad samples until you find a good one.
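
As a minimal sketch of the pattern itself (a hypothetical helper, not used by the book's code), the whole idea fits in one generic loop:

// Keep drawing candidates until one passes the acceptance test.
fn sample_until(draw: impl Fn() -> f64, accept: impl Fn(f64) -> bool) -> f64 {
    loop {
        let candidate = draw();
        if accept(candidate) {
            return candidate;
        }
    }
}

For example, sample_until(|| rand::random_range(-1.0..1.0), |x| x * x < 0.25) draws uniformly from \( (-0.5, +0.5) \) by rejecting everything outside that interval.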

There are many equally valid ways of generating a random vector on a hemisphere using the rejection method, but for our purposes we will go with the simplest, which is:

  1. Generate a random vector inside the unit sphere
  2. Normalize this vector to extend it to the sphere surface
  3. Invert the normalized vector if it falls onto the wrong hemisphere

First, we will use a rejection method to generate the random vector inside the unit sphere (that is, a sphere of radius \( 1 \)). Pick a random point inside the cube enclosing the unit sphere (that is, where \( x \), \( y \), and \( z \) are all in the range \( [-1,+1] \)). If this point lies outside the unit sphere, then generate a new one until we find one that lies inside or on the unit sphere.

Two vectors were rejected before finding a good one (pre-normalization)

Figure 11: Two vectors were rejected before finding a good one (pre-normalization)


The accepted random vector is normalized to produce a unit vector

Figure 12: The accepted random vector is normalized to produce a unit vector

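How wasteful is the rejection loop? The unit sphere fills \( \frac{4\pi/3}{8} = \pi/6 \approx 52\% \) of the enclosing cube, so roughly every other candidate is accepted, and the loop runs about \( 1.9 \) times on average. Here is a quick, hypothetical empirical check (a throwaway program, not part of the renderer):

fn main() {
    let trials = 1_000_000;
    let mut accepted = 0;
    for _ in 0..trials {
        let (x, y, z): (f64, f64, f64) = (
            rand::random_range(-1.0..1.0),
            rand::random_range(-1.0..1.0),
            rand::random_range(-1.0..1.0),
        );
        if x * x + y * y + z * z <= 1.0 {
            accepted += 1;
        }
    }
    // Should print a value close to 0.5236 (pi / 6).
    println!("acceptance rate: {}", accepted as f64 / trials as f64);
}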

Here's our first draft of the function:

diff --git a/src/vec3.rs b/src/vec3.rs
index f9228ac..e3ee3ea 100644
--- a/src/vec3.rs
+++ b/src/vec3.rs
@@ -1,202 +1,213 @@
 use std::{
     fmt::Display,
     ops::{Add, AddAssign, Div, DivAssign, Index, IndexMut, Mul, MulAssign, Neg, Sub},
 };
 
 #[derive(Debug, Default, Clone, Copy)]
 pub struct Vec3 {
     pub e: [f64; 3],
 }
 
 pub type Point3 = Vec3;
 
 impl Vec3 {
     pub fn new(e0: f64, e1: f64, e2: f64) -> Self {
         Self { e: [e0, e1, e2] }
     }
 
     pub fn x(&self) -> f64 {
         self.e[0]
     }
 
     pub fn y(&self) -> f64 {
         self.e[1]
     }
 
     pub fn z(&self) -> f64 {
         self.e[2]
     }
 
     pub fn length(&self) -> f64 {
         f64::sqrt(self.length_squared())
     }
 
     pub fn length_squared(&self) -> f64 {
         self.e[0] * self.e[0] + self.e[1] * self.e[1] + self.e[2] * self.e[2]
     }
 
     pub fn random() -> Self {
         Vec3 { e: rand::random() }
     }
 
     pub fn random_range(min: f64, max: f64) -> Self {
         Vec3::new(
             rand::random_range(min..max),
             rand::random_range(min..max),
             rand::random_range(min..max),
         )
     }
 }
 
 impl Neg for Vec3 {
     type Output = Self;
 
     fn neg(self) -> Self::Output {
         Self::Output {
             e: self.e.map(|e| -e),
         }
     }
 }
 
 impl Index<usize> for Vec3 {
     type Output = f64;
 
     fn index(&self, index: usize) -> &Self::Output {
         &self.e[index]
     }
 }
 
 impl IndexMut<usize> for Vec3 {
     fn index_mut(&mut self, index: usize) -> &mut Self::Output {
         &mut self.e[index]
     }
 }
 
 impl AddAssign for Vec3 {
     fn add_assign(&mut self, rhs: Self) {
         self.e[0] += rhs.e[0];
         self.e[1] += rhs.e[1];
         self.e[2] += rhs.e[2];
     }
 }
 
 impl MulAssign<f64> for Vec3 {
     fn mul_assign(&mut self, rhs: f64) {
         self.e[0] *= rhs;
         self.e[1] *= rhs;
         self.e[2] *= rhs;
     }
 }
 
 impl DivAssign<f64> for Vec3 {
     fn div_assign(&mut self, rhs: f64) {
         self.mul_assign(1.0 / rhs);
     }
 }
 
 impl Display for Vec3 {
     fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
         write!(f, "{} {} {}", self.e[0], self.e[1], self.e[2])
     }
 }
 
 impl Add for Vec3 {
     type Output = Self;
 
     fn add(self, rhs: Self) -> Self::Output {
         Self::Output {
             e: [
                 self.e[0] + rhs.e[0],
                 self.e[1] + rhs.e[1],
                 self.e[2] + rhs.e[2],
             ],
         }
     }
 }
 
 impl Sub for Vec3 {
     type Output = Self;
 
     fn sub(self, rhs: Self) -> Self::Output {
         Self::Output {
             e: [
                 self.e[0] - rhs.e[0],
                 self.e[1] - rhs.e[1],
                 self.e[2] - rhs.e[2],
             ],
         }
     }
 }
 
 impl Mul for Vec3 {
     type Output = Self;
 
     fn mul(self, rhs: Self) -> Self::Output {
         Self::Output {
             e: [
                 self.e[0] * rhs.e[0],
                 self.e[1] * rhs.e[1],
                 self.e[2] * rhs.e[2],
             ],
         }
     }
 }
 
 impl Mul<f64> for Vec3 {
     type Output = Self;
 
     fn mul(self, rhs: f64) -> Self::Output {
         Self::Output {
             e: [self.e[0] * rhs, self.e[1] * rhs, self.e[2] * rhs],
         }
     }
 }
 
 impl Mul<Vec3> for f64 {
     type Output = Vec3;
 
     fn mul(self, rhs: Vec3) -> Self::Output {
         rhs.mul(self)
     }
 }
 
 impl Div<f64> for Vec3 {
     type Output = Self;
 
     fn div(self, rhs: f64) -> Self::Output {
         self * (1.0 / rhs)
     }
 }
 
 #[inline]
 pub fn dot(u: Vec3, v: Vec3) -> f64 {
     u.e[0] * v.e[0] + u.e[1] * v.e[1] + u.e[2] * v.e[2]
 }
 
 #[inline]
 pub fn cross(u: Vec3, v: Vec3) -> Vec3 {
     Vec3::new(
         u.e[1] * v.e[2] - u.e[2] * v.e[1],
         u.e[2] * v.e[0] - u.e[0] * v.e[2],
         u.e[0] * v.e[1] - u.e[1] * v.e[0],
     )
 }
 
 #[inline]
 pub fn unit_vector(v: Vec3) -> Vec3 {
     v / v.length()
 }
 
 #[inline]
+pub fn random_unit_vector() -> Vec3 {
+    loop {
+        let p = Vec3::random_range(-1.0, 1.0);
+        let lensq = p.length_squared();
+        if lensq <= 1.0 {
+            return p / f64::sqrt(lensq);
+        }
+    }
+}
+
+#[inline]
 pub fn random_in_unit_disk() -> Vec3 {
     loop {
         let p = Vec3::new(
             rand::random_range(-1.0..1.0),
             rand::random_range(-1.0..1.0),
             0.0,
         );
         if p.length_squared() < 1.0 {
             return p;
         }
     }
 }

Listing 48: [vec3.rs] The random_unit_vector() function, version one


Sadly, we have a small floating-point abstraction leak to deal with. Since floating-point numbers have finite precision, a very small value can underflow to zero when squared. So if all three coordinates are small enough (that is, very near the center of the sphere), the norm of the vector will be zero, and thus normalizing will yield the bogus vector \( [\pm \infty, \pm \infty, \pm \infty] \). To fix this, we'll also reject points that lie inside this “black hole” around the center. With double precision (64-bit floats), we can safely support values greater than \( 10^{-160} \).
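
To see the leak in isolation, here is a tiny, hypothetical demonstration (independent of the renderer) of why normalizing a near-zero candidate blows up:

fn main() {
    let tiny = 1e-170_f64;
    let lensq = tiny * tiny; // 1e-340 underflows to 0.0 (below the smallest f64)
    assert_eq!(lensq, 0.0);
    // Dividing by sqrt(0.0) yields infinity: the bogus vector described above.
    println!("{}", tiny / f64::sqrt(lensq)); // prints "inf"
}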

Here's our more robust function:

diff --git a/src/vec3.rs b/src/vec3.rs
index e3ee3ea..ba86acc 100644
--- a/src/vec3.rs
+++ b/src/vec3.rs
@@ -1,213 +1,213 @@
 use std::{
     fmt::Display,
     ops::{Add, AddAssign, Div, DivAssign, Index, IndexMut, Mul, MulAssign, Neg, Sub},
 };
 
 #[derive(Debug, Default, Clone, Copy)]
 pub struct Vec3 {
     pub e: [f64; 3],
 }
 
 pub type Point3 = Vec3;
 
 impl Vec3 {
     pub fn new(e0: f64, e1: f64, e2: f64) -> Self {
         Self { e: [e0, e1, e2] }
     }
 
     pub fn x(&self) -> f64 {
         self.e[0]
     }
 
     pub fn y(&self) -> f64 {
         self.e[1]
     }
 
     pub fn z(&self) -> f64 {
         self.e[2]
     }
 
     pub fn length(&self) -> f64 {
         f64::sqrt(self.length_squared())
     }
 
     pub fn length_squared(&self) -> f64 {
         self.e[0] * self.e[0] + self.e[1] * self.e[1] + self.e[2] * self.e[2]
     }
 
     pub fn random() -> Self {
         Vec3 { e: rand::random() }
     }
 
     pub fn random_range(min: f64, max: f64) -> Self {
         Vec3::new(
             rand::random_range(min..max),
             rand::random_range(min..max),
             rand::random_range(min..max),
         )
     }
 }
 
 impl Neg for Vec3 {
     type Output = Self;
 
     fn neg(self) -> Self::Output {
         Self::Output {
             e: self.e.map(|e| -e),
         }
     }
 }
 
 impl Index<usize> for Vec3 {
     type Output = f64;
 
     fn index(&self, index: usize) -> &Self::Output {
         &self.e[index]
     }
 }
 
 impl IndexMut<usize> for Vec3 {
     fn index_mut(&mut self, index: usize) -> &mut Self::Output {
         &mut self.e[index]
     }
 }
 
 impl AddAssign for Vec3 {
     fn add_assign(&mut self, rhs: Self) {
         self.e[0] += rhs.e[0];
         self.e[1] += rhs.e[1];
         self.e[2] += rhs.e[2];
     }
 }
 
 impl MulAssign<f64> for Vec3 {
     fn mul_assign(&mut self, rhs: f64) {
         self.e[0] *= rhs;
         self.e[1] *= rhs;
         self.e[2] *= rhs;
     }
 }
 
 impl DivAssign<f64> for Vec3 {
     fn div_assign(&mut self, rhs: f64) {
         self.mul_assign(1.0 / rhs);
     }
 }
 
 impl Display for Vec3 {
     fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
         write!(f, "{} {} {}", self.e[0], self.e[1], self.e[2])
     }
 }
 
 impl Add for Vec3 {
     type Output = Self;
 
     fn add(self, rhs: Self) -> Self::Output {
         Self::Output {
             e: [
                 self.e[0] + rhs.e[0],
                 self.e[1] + rhs.e[1],
                 self.e[2] + rhs.e[2],
             ],
         }
     }
 }
 
 impl Sub for Vec3 {
     type Output = Self;
 
     fn sub(self, rhs: Self) -> Self::Output {
         Self::Output {
             e: [
                 self.e[0] - rhs.e[0],
                 self.e[1] - rhs.e[1],
                 self.e[2] - rhs.e[2],
             ],
         }
     }
 }
 
 impl Mul for Vec3 {
     type Output = Self;
 
     fn mul(self, rhs: Self) -> Self::Output {
         Self::Output {
             e: [
                 self.e[0] * rhs.e[0],
                 self.e[1] * rhs.e[1],
                 self.e[2] * rhs.e[2],
             ],
         }
     }
 }
 
 impl Mul<f64> for Vec3 {
     type Output = Self;
 
     fn mul(self, rhs: f64) -> Self::Output {
         Self::Output {
             e: [self.e[0] * rhs, self.e[1] * rhs, self.e[2] * rhs],
         }
     }
 }
 
 impl Mul<Vec3> for f64 {
     type Output = Vec3;
 
     fn mul(self, rhs: Vec3) -> Self::Output {
         rhs.mul(self)
     }
 }
 
 impl Div<f64> for Vec3 {
     type Output = Self;
 
     fn div(self, rhs: f64) -> Self::Output {
         self * (1.0 / rhs)
     }
 }
 
 #[inline]
 pub fn dot(u: Vec3, v: Vec3) -> f64 {
     u.e[0] * v.e[0] + u.e[1] * v.e[1] + u.e[2] * v.e[2]
 }
 
 #[inline]
 pub fn cross(u: Vec3, v: Vec3) -> Vec3 {
     Vec3::new(
         u.e[1] * v.e[2] - u.e[2] * v.e[1],
         u.e[2] * v.e[0] - u.e[0] * v.e[2],
         u.e[0] * v.e[1] - u.e[1] * v.e[0],
     )
 }
 
 #[inline]
 pub fn unit_vector(v: Vec3) -> Vec3 {
     v / v.length()
 }
 
 #[inline]
 pub fn random_unit_vector() -> Vec3 {
     loop {
         let p = Vec3::random_range(-1.0, 1.0);
         let lensq = p.length_squared();
-        if lensq <= 1.0 {
+        if 1e-160 < lensq && lensq <= 1.0 {
             return p / f64::sqrt(lensq);
         }
     }
 }
 
 #[inline]
 pub fn random_in_unit_disk() -> Vec3 {
     loop {
         let p = Vec3::new(
             rand::random_range(-1.0..1.0),
             rand::random_range(-1.0..1.0),
             0.0,
         );
         if p.length_squared() < 1.0 {
             return p;
         }
     }
 }

Listing 49: [vec3.rs] The random_unit_vector() function, version two


Now that we have a random unit vector, we can determine if it is on the correct hemisphere by comparing against the surface normal:

The normal vector tells us which hemisphere we need

Figure 13: The normal vector tells us which hemisphere we need


We can take the dot product of the surface normal and our random vector to determine if it's in the correct hemisphere. If the dot product is positive, then the vector is in the correct hemisphere. If the dot product is negative, then we need to invert the vector.

diff --git a/src/vec3.rs b/src/vec3.rs
index ba86acc..4cb969b 100644
--- a/src/vec3.rs
+++ b/src/vec3.rs
@@ -1,213 +1,223 @@
 use std::{
     fmt::Display,
     ops::{Add, AddAssign, Div, DivAssign, Index, IndexMut, Mul, MulAssign, Neg, Sub},
 };
 
 #[derive(Debug, Default, Clone, Copy)]
 pub struct Vec3 {
     pub e: [f64; 3],
 }
 
 pub type Point3 = Vec3;
 
 impl Vec3 {
     pub fn new(e0: f64, e1: f64, e2: f64) -> Self {
         Self { e: [e0, e1, e2] }
     }
 
     pub fn x(&self) -> f64 {
         self.e[0]
     }
 
     pub fn y(&self) -> f64 {
         self.e[1]
     }
 
     pub fn z(&self) -> f64 {
         self.e[2]
     }
 
     pub fn length(&self) -> f64 {
         f64::sqrt(self.length_squared())
     }
 
     pub fn length_squared(&self) -> f64 {
         self.e[0] * self.e[0] + self.e[1] * self.e[1] + self.e[2] * self.e[2]
     }
 
     pub fn random() -> Self {
         Vec3 { e: rand::random() }
     }
 
     pub fn random_range(min: f64, max: f64) -> Self {
         Vec3::new(
             rand::random_range(min..max),
             rand::random_range(min..max),
             rand::random_range(min..max),
         )
     }
 }
 
 impl Neg for Vec3 {
     type Output = Self;
 
     fn neg(self) -> Self::Output {
         Self::Output {
             e: self.e.map(|e| -e),
         }
     }
 }
 
 impl Index<usize> for Vec3 {
     type Output = f64;
 
     fn index(&self, index: usize) -> &Self::Output {
         &self.e[index]
     }
 }
 
 impl IndexMut<usize> for Vec3 {
     fn index_mut(&mut self, index: usize) -> &mut Self::Output {
         &mut self.e[index]
     }
 }
 
 impl AddAssign for Vec3 {
     fn add_assign(&mut self, rhs: Self) {
         self.e[0] += rhs.e[0];
         self.e[1] += rhs.e[1];
         self.e[2] += rhs.e[2];
     }
 }
 
 impl MulAssign<f64> for Vec3 {
     fn mul_assign(&mut self, rhs: f64) {
         self.e[0] *= rhs;
         self.e[1] *= rhs;
         self.e[2] *= rhs;
     }
 }
 
 impl DivAssign<f64> for Vec3 {
     fn div_assign(&mut self, rhs: f64) {
         self.mul_assign(1.0 / rhs);
     }
 }
 
 impl Display for Vec3 {
     fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
         write!(f, "{} {} {}", self.e[0], self.e[1], self.e[2])
     }
 }
 
 impl Add for Vec3 {
     type Output = Self;
 
     fn add(self, rhs: Self) -> Self::Output {
         Self::Output {
             e: [
                 self.e[0] + rhs.e[0],
                 self.e[1] + rhs.e[1],
                 self.e[2] + rhs.e[2],
             ],
         }
     }
 }
 
 impl Sub for Vec3 {
     type Output = Self;
 
     fn sub(self, rhs: Self) -> Self::Output {
         Self::Output {
             e: [
                 self.e[0] - rhs.e[0],
                 self.e[1] - rhs.e[1],
                 self.e[2] - rhs.e[2],
             ],
         }
     }
 }
 
 impl Mul for Vec3 {
     type Output = Self;
 
     fn mul(self, rhs: Self) -> Self::Output {
         Self::Output {
             e: [
                 self.e[0] * rhs.e[0],
                 self.e[1] * rhs.e[1],
                 self.e[2] * rhs.e[2],
             ],
         }
     }
 }
 
 impl Mul<f64> for Vec3 {
     type Output = Self;
 
     fn mul(self, rhs: f64) -> Self::Output {
         Self::Output {
             e: [self.e[0] * rhs, self.e[1] * rhs, self.e[2] * rhs],
         }
     }
 }
 
 impl Mul<Vec3> for f64 {
     type Output = Vec3;
 
     fn mul(self, rhs: Vec3) -> Self::Output {
         rhs.mul(self)
     }
 }
 
 impl Div<f64> for Vec3 {
     type Output = Self;
 
     fn div(self, rhs: f64) -> Self::Output {
         self * (1.0 / rhs)
     }
 }
 
 #[inline]
 pub fn dot(u: Vec3, v: Vec3) -> f64 {
     u.e[0] * v.e[0] + u.e[1] * v.e[1] + u.e[2] * v.e[2]
 }
 
 #[inline]
 pub fn cross(u: Vec3, v: Vec3) -> Vec3 {
     Vec3::new(
         u.e[1] * v.e[2] - u.e[2] * v.e[1],
         u.e[2] * v.e[0] - u.e[0] * v.e[2],
         u.e[0] * v.e[1] - u.e[1] * v.e[0],
     )
 }
 
 #[inline]
 pub fn unit_vector(v: Vec3) -> Vec3 {
     v / v.length()
 }
 
 #[inline]
 pub fn random_unit_vector() -> Vec3 {
     loop {
         let p = Vec3::random_range(-1.0, 1.0);
         let lensq = p.length_squared();
         if 1e-160 < lensq && lensq <= 1.0 {
             return p / f64::sqrt(lensq);
         }
     }
 }
 
 #[inline]
+pub fn random_on_hemisphere(normal: Vec3) -> Vec3 {
+    let on_unit_sphere = random_unit_vector();
+    if dot(on_unit_sphere, normal) > 0.0 {
+        on_unit_sphere
+    } else {
+        -on_unit_sphere
+    }
+}
+
+#[inline]
 pub fn random_in_unit_disk() -> Vec3 {
     loop {
         let p = Vec3::new(
             rand::random_range(-1.0..1.0),
             rand::random_range(-1.0..1.0),
             0.0,
         );
         if p.length_squared() < 1.0 {
             return p;
         }
     }
 }

Listing 50: [vec3.rs] The random_on_hemisphere() function


If a ray bounces off of a material and keeps 100% of its color, then we say that the material is white. If a ray bounces off of a material and keeps 0% of its color, then we say that the material is black. As a first demonstration of our new diffuse material we'll set the ray_color function to return 50% of the color from a bounce. We should expect to get a nice gray color.
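
To make the arithmetic concrete: with 50% reflectance, a camera ray that bounces \( n \) times before escaping to the sky contributes \( 0.5^n \) of the sky color, so half remains after one bounce, a quarter after two, and an eighth after three.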

diff --git a/src/camera.rs b/src/camera.rs
index f181b03..181eb64 100644
--- a/src/camera.rs
+++ b/src/camera.rs
@@ -1,151 +1,152 @@
 use crate::{hittable::Hittable, prelude::*};
 
 pub struct Camera {
     /// Ratio of image width over height
     pub aspect_ratio: f64,
     /// Rendered image width in pixel count
     pub image_width: i32,
    /// Count of random samples for each pixel
    pub samples_per_pixel: i32,

    /// Rendered image height
    image_height: i32,
    /// Color scale factor for a sum of pixel samples
     pixel_samples_scale: f64,
     /// Camera center
     center: Point3,
     /// Location of pixel 0, 0
     pixel00_loc: Point3,
     /// Offset to pixel to the right
     pixel_delta_u: Vec3,
     /// Offset to pixel below
     pixel_delta_v: Vec3,
 }
 
 impl Default for Camera {
     fn default() -> Self {
         Self {
             aspect_ratio: 1.0,
             image_width: 100,
             samples_per_pixel: 10,
             image_height: Default::default(),
             pixel_samples_scale: Default::default(),
             center: Default::default(),
             pixel00_loc: Default::default(),
             pixel_delta_u: Default::default(),
             pixel_delta_v: Default::default(),
         }
     }
 }
 
 impl Camera {
     pub fn with_aspect_ratio(mut self, aspect_ratio: f64) -> Self {
         self.aspect_ratio = aspect_ratio;
 
         self
     }
 
     pub fn with_image_width(mut self, image_width: i32) -> Self {
         self.image_width = image_width;
 
         self
     }
 
     pub fn with_samples_per_pixel(mut self, samples_per_pixel: i32) -> Self {
         self.samples_per_pixel = samples_per_pixel;
 
         self
     }
 
     pub fn render(&mut self, world: &impl Hittable) -> std::io::Result<()> {
         self.initialize();
 
         println!("P3");
         println!("{} {}", self.image_width, self.image_height);
         println!("255");
 
         for j in 0..self.image_height {
             info!("Scanlines remaining: {}", self.image_height - j);
             for i in 0..self.image_width {
                 let mut pixel_color = Color::new(0.0, 0.0, 0.0);
                 for _sample in 0..self.samples_per_pixel {
                     let r = self.get_ray(i, j);
                     pixel_color += Self::ray_color(r, world);
                 }
                 write_color(std::io::stdout(), self.pixel_samples_scale * pixel_color)?;
             }
         }
         info!("Done.");
 
         Ok(())
     }
 
     fn initialize(&mut self) {
         self.image_height = {
             let image_height = (self.image_width as f64 / self.aspect_ratio) as i32;
             if image_height < 1 { 1 } else { image_height }
         };
 
         self.pixel_samples_scale = 1.0 / self.samples_per_pixel as f64;
 
         self.center = Point3::new(0.0, 0.0, 0.0);
 
         // Determine viewport dimensions.
         let focal_length = 1.0;
         let viewport_height = 2.0;
         let viewport_width =
             viewport_height * (self.image_width as f64) / (self.image_height as f64);
 
         // Calculate the vectors across the horizontal and down the vertical viewport edges.
         let viewport_u = Vec3::new(viewport_width, 0.0, 0.0);
         let viewport_v = Vec3::new(0.0, -viewport_height, 0.0);
 
         // Calculate the horizontal and vertical delta vectors from pixel to pixel.
         self.pixel_delta_u = viewport_u / self.image_width as f64;
         self.pixel_delta_v = viewport_v / self.image_height as f64;
 
         // Calculate the location of the upper left pixel.
         let viewport_upper_left =
             self.center - Vec3::new(0.0, 0.0, focal_length) - viewport_u / 2.0 - viewport_v / 2.0;
         self.pixel00_loc = viewport_upper_left + 0.5 * (self.pixel_delta_u + self.pixel_delta_v);
     }
 
     fn get_ray(&self, i: i32, j: i32) -> Ray {
        // Construct a camera ray originating from the origin and directed at a randomly
        // sampled point around the pixel location i, j.
 
         let offset = Self::sample_square();
         let pixel_sample = self.pixel00_loc
             + ((i as f64 + offset.x()) * self.pixel_delta_u)
             + ((j as f64 + offset.y()) * self.pixel_delta_v);
 
         let ray_origin = self.center;
         let ray_direction = pixel_sample - ray_origin;
 
         Ray::new(ray_origin, ray_direction)
     }
 
     fn sample_square() -> Vec3 {
         // Returns the vector to a random point in the [-.5,-.5]-[+.5,+.5] unit square.
         Vec3::new(
             rand::random::<f64>() - 0.5,
             rand::random::<f64>() - 0.5,
             0.0,
         )
     }
 
     fn _sample_disk(radius: f64) -> Vec3 {
         // Returns a random point in the unit (radius 0.5) disk centered at the origin.
         radius * random_in_unit_disk()
     }
 
     fn ray_color(r: Ray, world: &impl Hittable) -> Color {
         if let Some(rec) = world.hit(r, Interval::new(0.0, INFINITY)) {
-            return 0.5 * (rec.normal + Color::new(1.0, 1.0, 1.0));
+            let direction = random_on_hemisphere(rec.normal);
+            return 0.5 * Self::ray_color(Ray::new(rec.p, direction), world);
         }
 
         let unit_direction = unit_vector(r.direction());
         let a = 0.5 * (unit_direction.y() + 1.0);
         (1.0 - a) * Color::new(1.0, 1.0, 1.0) + a * Color::new(0.5, 0.7, 1.0)
     }
 }

Listing 51: [camera.rs] ray_color() using a random ray direction


... Indeed we do get rather nice gray spheres:

First render of a diffuse sphere

Image 7: First render of a diffuse sphere


Limiting the Number of Child Rays

There's one potential problem lurking here. Notice that the ray_color function is recursive. When will it stop recursing? When it fails to hit anything. In some cases, however, that may be a long time — long enough to blow the stack. To guard against that, let's limit the maximum recursion depth, returning no light contribution at the maximum depth:

diff --git a/src/camera.rs b/src/camera.rs
index 181eb64..2abb78e 100644
--- a/src/camera.rs
+++ b/src/camera.rs
@@ -1,152 +1,166 @@
 use crate::{hittable::Hittable, prelude::*};
 
 pub struct Camera {
     /// Ratio of image width over height
     pub aspect_ratio: f64,
     /// Rendered image width in pixel count
     pub image_width: i32,
    /// Count of random samples for each pixel
    pub samples_per_pixel: i32,
+    /// Maximum number of ray bounces into scene
+    pub max_depth: i32,

    /// Rendered image height
    image_height: i32,
    /// Color scale factor for a sum of pixel samples
     pixel_samples_scale: f64,
     /// Camera center
     center: Point3,
     /// Location of pixel 0, 0
     pixel00_loc: Point3,
     /// Offset to pixel to the right
     pixel_delta_u: Vec3,
     /// Offset to pixel below
     pixel_delta_v: Vec3,
 }
 
 impl Default for Camera {
     fn default() -> Self {
         Self {
             aspect_ratio: 1.0,
             image_width: 100,
             samples_per_pixel: 10,
+            max_depth: 10,
             image_height: Default::default(),
             pixel_samples_scale: Default::default(),
             center: Default::default(),
             pixel00_loc: Default::default(),
             pixel_delta_u: Default::default(),
             pixel_delta_v: Default::default(),
         }
     }
 }
 
 impl Camera {
     pub fn with_aspect_ratio(mut self, aspect_ratio: f64) -> Self {
         self.aspect_ratio = aspect_ratio;
 
         self
     }
 
     pub fn with_image_width(mut self, image_width: i32) -> Self {
         self.image_width = image_width;
 
         self
     }
 
     pub fn with_samples_per_pixel(mut self, samples_per_pixel: i32) -> Self {
         self.samples_per_pixel = samples_per_pixel;
 
         self
     }
 
+    pub fn with_max_depth(mut self, max_depth: i32) -> Self {
+        self.max_depth = max_depth;
+
+        self
+    }
+
     pub fn render(&mut self, world: &impl Hittable) -> std::io::Result<()> {
         self.initialize();
 
         println!("P3");
         println!("{} {}", self.image_width, self.image_height);
         println!("255");
 
         for j in 0..self.image_height {
             info!("Scanlines remaining: {}", self.image_height - j);
             for i in 0..self.image_width {
                 let mut pixel_color = Color::new(0.0, 0.0, 0.0);
                 for _sample in 0..self.samples_per_pixel {
                     let r = self.get_ray(i, j);
-                    pixel_color += Self::ray_color(r, world);
+                    pixel_color += Self::ray_color(r, self.max_depth, world);
                 }
                 write_color(std::io::stdout(), self.pixel_samples_scale * pixel_color)?;
             }
         }
         info!("Done.");
 
         Ok(())
     }
 
     fn initialize(&mut self) {
         self.image_height = {
             let image_height = (self.image_width as f64 / self.aspect_ratio) as i32;
             if image_height < 1 { 1 } else { image_height }
         };
 
         self.pixel_samples_scale = 1.0 / self.samples_per_pixel as f64;
 
         self.center = Point3::new(0.0, 0.0, 0.0);
 
         // Determine viewport dimensions.
         let focal_length = 1.0;
         let viewport_height = 2.0;
         let viewport_width =
             viewport_height * (self.image_width as f64) / (self.image_height as f64);
 
         // Calculate the vectors across the horizontal and down the vertical viewport edges.
         let viewport_u = Vec3::new(viewport_width, 0.0, 0.0);
         let viewport_v = Vec3::new(0.0, -viewport_height, 0.0);
 
         // Calculate the horizontal and vertical delta vectors from pixel to pixel.
         self.pixel_delta_u = viewport_u / self.image_width as f64;
         self.pixel_delta_v = viewport_v / self.image_height as f64;
 
         // Calculate the location of the upper left pixel.
         let viewport_upper_left =
             self.center - Vec3::new(0.0, 0.0, focal_length) - viewport_u / 2.0 - viewport_v / 2.0;
         self.pixel00_loc = viewport_upper_left + 0.5 * (self.pixel_delta_u + self.pixel_delta_v);
     }
 
     fn get_ray(&self, i: i32, j: i32) -> Ray {
        // Construct a camera ray originating from the origin and directed at a randomly
        // sampled point around the pixel location i, j.
 
         let offset = Self::sample_square();
         let pixel_sample = self.pixel00_loc
             + ((i as f64 + offset.x()) * self.pixel_delta_u)
             + ((j as f64 + offset.y()) * self.pixel_delta_v);
 
         let ray_origin = self.center;
         let ray_direction = pixel_sample - ray_origin;
 
         Ray::new(ray_origin, ray_direction)
     }
 
     fn sample_square() -> Vec3 {
         // Returns the vector to a random point in the [-.5,-.5]-[+.5,+.5] unit square.
         Vec3::new(
             rand::random::<f64>() - 0.5,
             rand::random::<f64>() - 0.5,
             0.0,
         )
     }
 
     fn _sample_disk(radius: f64) -> Vec3 {
         // Returns a random point in the unit (radius 0.5) disk centered at the origin.
         radius * random_in_unit_disk()
     }
 
-    fn ray_color(r: Ray, world: &impl Hittable) -> Color {
+    fn ray_color(r: Ray, depth: i32, world: &impl Hittable) -> Color {
+        // If we've exceeded the ray bounce limit, no more light is gathered.
+        if depth <= 0 {
+            return Color::new(0.0, 0.0, 0.0);
+        }
+
         if let Some(rec) = world.hit(r, Interval::new(0.0, INFINITY)) {
             let direction = random_on_hemisphere(rec.normal);
-            return 0.5 * Self::ray_color(Ray::new(rec.p, direction), world);
+            return 0.5 * Self::ray_color(Ray::new(rec.p, direction), depth - 1, world);
         }
 
         let unit_direction = unit_vector(r.direction());
         let a = 0.5 * (unit_direction.y() + 1.0);
         (1.0 - a) * Color::new(1.0, 1.0, 1.0) + a * Color::new(0.5, 0.7, 1.0)
     }
 }

Listing 52: [camera.rs] camera::ray_color() with depth limiting


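As an aside, each ray here spawns exactly one child ray, so the recursion is linear and the same computation can be written as a loop. Below is a minimal, hypothetical sketch of that alternative (assuming the same imports as camera.rs; it is not used elsewhere in the book) which accumulates the attenuation iteratively and therefore cannot blow the stack at any depth:

fn ray_color_iterative(mut r: Ray, max_depth: i32, world: &impl Hittable) -> Color {
    let mut attenuation = 1.0;
    for _ in 0..max_depth {
        match world.hit(r, Interval::new(0.0, INFINITY)) {
            Some(rec) => {
                // Same 50% absorption per bounce as the recursive version.
                attenuation *= 0.5;
                r = Ray::new(rec.p, random_on_hemisphere(rec.normal));
            }
            None => {
                let unit_direction = unit_vector(r.direction());
                let a = 0.5 * (unit_direction.y() + 1.0);
                let sky = (1.0 - a) * Color::new(1.0, 1.0, 1.0)
                    + a * Color::new(0.5, 0.7, 1.0);
                return attenuation * sky;
            }
        }
    }
    Color::new(0.0, 0.0, 0.0) // bounce limit reached: no light gathered
}
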
Update the main() function to use this new depth limit:

diff --git a/src/main.rs b/src/main.rs
index 9f08807..a016213 100644
--- a/src/main.rs
+++ b/src/main.rs
@@ -1,16 +1,17 @@
 use code::{camera::Camera, hittable_list::HittableList, prelude::*, sphere::Sphere};
 
 fn main() -> std::io::Result<()> {
     let mut world = HittableList::new();
 
     world.add(Rc::new(Sphere::new(Point3::new(0.0, 0.0, -1.0), 0.5)));
     world.add(Rc::new(Sphere::new(Point3::new(0.0, -100.5, -1.0), 100.0)));
 
     env_logger::init();
 
     Camera::default()
         .with_aspect_ratio(16.0 / 9.0)
         .with_image_width(400)
         .with_samples_per_pixel(100)
+        .with_max_depth(50)
         .render(&world)
 }

Listing 53: [main.rs] Using the new ray depth limiting


For this very simple scene we should get basically the same result:

Second render of a diffuse sphere with limited bounces

Image 8: Second render of a diffuse sphere with limited bounces


Fixing Shadow Acne

There’s also a subtle bug that we need to address. A ray will attempt to accurately calculate the intersection point when it intersects with a surface. Unfortunately for us, this calculation is susceptible to floating point rounding errors which can cause the intersection point to be ever so slightly off. This means that the origin of the next ray, the ray that is randomly scattered off of the surface, is unlikely to be perfectly flush with the surface. It might be just above the surface. It might be just below the surface. If the ray's origin is just below the surface then it could intersect with that surface again, which means that it will find the nearest surface at \( t = 0.00000001 \) or whatever floating point approximation the hit function gives us. The simplest hack to address this is just to ignore hits that are very close to the calculated intersection point:

diff --git a/src/camera.rs b/src/camera.rs
index 2abb78e..eea2b1f 100644
--- a/src/camera.rs
+++ b/src/camera.rs
@@ -1,166 +1,166 @@
 use crate::{hittable::Hittable, prelude::*};
 
 pub struct Camera {
     /// Ratio of image width over height
     pub aspect_ratio: f64,
     /// Rendered image width in pixel count
     pub image_width: i32,
    /// Count of random samples for each pixel
    pub samples_per_pixel: i32,
    /// Maximum number of ray bounces into scene
    pub max_depth: i32,

    /// Rendered image height
    image_height: i32,
    /// Color scale factor for a sum of pixel samples
     pixel_samples_scale: f64,
     /// Camera center
     center: Point3,
     /// Location of pixel 0, 0
     pixel00_loc: Point3,
     /// Offset to pixel to the right
     pixel_delta_u: Vec3,
     /// Offset to pixel below
     pixel_delta_v: Vec3,
 }
 
 impl Default for Camera {
     fn default() -> Self {
         Self {
             aspect_ratio: 1.0,
             image_width: 100,
             samples_per_pixel: 10,
             max_depth: 10,
             image_height: Default::default(),
             pixel_samples_scale: Default::default(),
             center: Default::default(),
             pixel00_loc: Default::default(),
             pixel_delta_u: Default::default(),
             pixel_delta_v: Default::default(),
         }
     }
 }
 
 impl Camera {
     pub fn with_aspect_ratio(mut self, aspect_ratio: f64) -> Self {
         self.aspect_ratio = aspect_ratio;
 
         self
     }
 
     pub fn with_image_width(mut self, image_width: i32) -> Self {
         self.image_width = image_width;
 
         self
     }
 
     pub fn with_samples_per_pixel(mut self, samples_per_pixel: i32) -> Self {
         self.samples_per_pixel = samples_per_pixel;
 
         self
     }
 
     pub fn with_max_depth(mut self, max_depth: i32) -> Self {
         self.max_depth = max_depth;
 
         self
     }
 
     pub fn render(&mut self, world: &impl Hittable) -> std::io::Result<()> {
         self.initialize();
 
         println!("P3");
         println!("{} {}", self.image_width, self.image_height);
         println!("255");
 
         for j in 0..self.image_height {
             info!("Scanlines remaining: {}", self.image_height - j);
             for i in 0..self.image_width {
                 let mut pixel_color = Color::new(0.0, 0.0, 0.0);
                 for _sample in 0..self.samples_per_pixel {
                     let r = self.get_ray(i, j);
                     pixel_color += Self::ray_color(r, self.max_depth, world);
                 }
                 write_color(std::io::stdout(), self.pixel_samples_scale * pixel_color)?;
             }
         }
         info!("Done.");
 
         Ok(())
     }
 
     fn initialize(&mut self) {
         self.image_height = {
             let image_height = (self.image_width as f64 / self.aspect_ratio) as i32;
             if image_height < 1 { 1 } else { image_height }
         };
 
         self.pixel_samples_scale = 1.0 / self.samples_per_pixel as f64;
 
         self.center = Point3::new(0.0, 0.0, 0.0);
 
         // Determine viewport dimensions.
         let focal_length = 1.0;
         let viewport_height = 2.0;
         let viewport_width =
             viewport_height * (self.image_width as f64) / (self.image_height as f64);
 
         // Calculate the vectors across the horizontal and down the vertical viewport edges.
         let viewport_u = Vec3::new(viewport_width, 0.0, 0.0);
         let viewport_v = Vec3::new(0.0, -viewport_height, 0.0);
 
         // Calculate the horizontal and vertical delta vectors from pixel to pixel.
         self.pixel_delta_u = viewport_u / self.image_width as f64;
         self.pixel_delta_v = viewport_v / self.image_height as f64;
 
         // Calculate the location of the upper left pixel.
         let viewport_upper_left =
             self.center - Vec3::new(0.0, 0.0, focal_length) - viewport_u / 2.0 - viewport_v / 2.0;
         self.pixel00_loc = viewport_upper_left + 0.5 * (self.pixel_delta_u + self.pixel_delta_v);
     }
 
     fn get_ray(&self, i: i32, j: i32) -> Ray {
        // Construct a camera ray originating from the origin and directed at a randomly
        // sampled point around the pixel location i, j.
 
         let offset = Self::sample_square();
         let pixel_sample = self.pixel00_loc
             + ((i as f64 + offset.x()) * self.pixel_delta_u)
             + ((j as f64 + offset.y()) * self.pixel_delta_v);
 
         let ray_origin = self.center;
         let ray_direction = pixel_sample - ray_origin;
 
         Ray::new(ray_origin, ray_direction)
     }
 
     fn sample_square() -> Vec3 {
         // Returns the vector to a random point in the [-.5,-.5]-[+.5,+.5] unit square.
         Vec3::new(
             rand::random::<f64>() - 0.5,
             rand::random::<f64>() - 0.5,
             0.0,
         )
     }
 
     fn _sample_disk(radius: f64) -> Vec3 {
         // Returns a random point in the unit (radius 0.5) disk centered at the origin.
         radius * random_in_unit_disk()
     }
 
     fn ray_color(r: Ray, depth: i32, world: &impl Hittable) -> Color {
         // If we've exceeded the ray bounce limit, no more light is gathered.
         if depth <= 0 {
             return Color::new(0.0, 0.0, 0.0);
         }
 
-        if let Some(rec) = world.hit(r, Interval::new(0.0, INFINITY)) {
+        if let Some(rec) = world.hit(r, Interval::new(0.001, INFINITY)) {
             let direction = random_on_hemisphere(rec.normal);
             return 0.5 * Self::ray_color(Ray::new(rec.p, direction), depth - 1, world);
         }
 
         let unit_direction = unit_vector(r.direction());
         let a = 0.5 * (unit_direction.y() + 1.0);
         (1.0 - a) * Color::new(1.0, 1.0, 1.0) + a * Color::new(0.5, 0.7, 1.0)
     }
 }

Listing 54: [camera.rs] Calculating reflected ray origins with tolerance


This gets rid of the shadow acne problem. Yes it is really called that. Here's the result:

Diffuse sphere with no shadow acne

Image 9: Diffuse sphere with no shadow acne


True Lambertian Reflection

Scattering reflected rays evenly about the hemisphere produces a nice soft diffuse model, but we can definitely do better. A more accurate representation of real diffuse objects is the Lambertian distribution. This distribution scatters reflected rays in a manner that is proportional to \( \cos(\phi) \), where \( \phi \) is the angle between the reflected ray and the surface normal. This means that a reflected ray is most likely to scatter in a direction near the surface normal, and less likely to scatter in directions away from the normal. This non-uniform Lambertian distribution does a better job of modeling material reflection in the real world than our previous uniform scattering.

We can create this distribution by adding a random unit vector to the normal vector. At the point of intersection on a surface there is the hit point, \( \mathbf{P} \), and there is the normal of the surface, \( \mathbf{n} \). At the point of intersection, this surface has exactly two sides, so there can only be two unique unit spheres tangent to any intersection point (one unique sphere for each side of the surface). These two unit spheres will be displaced from the surface by the length of their radius, which is exactly one for a unit sphere.

One sphere will be displaced in the direction of the surface's normal (\( \mathbf{n} \)) and one sphere will be displaced in the opposite direction (\( -\mathbf{n} \)). This leaves us with two spheres of unit size that will only be just touching the surface at the intersection point. From this, one of the spheres will have its center at \( (\mathbf{P} + \mathbf{n}) \) and the other sphere will have its center at \( (\mathbf{P} - \mathbf{n}) \). The sphere with a center at \( (\mathbf{P} - \mathbf{n}) \) is considered inside the surface, whereas the sphere with center \( (\mathbf{P} + \mathbf{n}) \) is considered outside the surface.

We want to select the tangent unit sphere that is on the same side of the surface as the ray origin. Pick a random point \( \mathbf{S} \) on this unit radius sphere and send a ray from the hit point \( \mathbf{P} \) to the random point \( \mathbf{S} \) (this is the vector \( \mathbf{S} - \mathbf{P} \)):

Randomly generating a vector according to Lambertian distribution

Figure 14: Randomly generating a vector according to Lambertian distribution


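If you want to convince yourself of the \( \cos(\phi) \) claim, here is a quick, hypothetical Monte Carlo check (a throwaway program assuming the vec3 utilities are in scope; it is not part of the renderer). For a \( \cos(\phi) \)-weighted direction, the density of \( c = \cos(\phi) \) is \( 2c \) on \( [0, 1] \), so bucket counts should rise roughly linearly:

fn main() {
    let n = Vec3::new(0.0, 0.0, 1.0);
    let mut buckets = [0u32; 10];
    for _ in 0..1_000_000 {
        let d = unit_vector(n + random_unit_vector());
        let c = dot(d, n).clamp(0.0, 0.999_999);
        buckets[(c * 10.0) as usize] += 1;
    }
    // Counts should climb roughly linearly from the first bucket to the last.
    println!("{:?}", buckets);
}
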
The change is actually fairly minimal:

diff --git a/src/camera.rs b/src/camera.rs
index eea2b1f..3a2e772 100644
--- a/src/camera.rs
+++ b/src/camera.rs
@@ -1,166 +1,166 @@
 use crate::{hittable::Hittable, prelude::*};
 
 pub struct Camera {
     /// Ratio of image width over height
     pub aspect_ratio: f64,
     /// Rendered image width in pixel count
     pub image_width: i32,
    /// Count of random samples for each pixel
    pub samples_per_pixel: i32,
    /// Maximum number of ray bounces into scene
    pub max_depth: i32,

    /// Rendered image height
    image_height: i32,
    /// Color scale factor for a sum of pixel samples
     pixel_samples_scale: f64,
     /// Camera center
     center: Point3,
     /// Location of pixel 0, 0
     pixel00_loc: Point3,
     /// Offset to pixel to the right
     pixel_delta_u: Vec3,
     /// Offset to pixel below
     pixel_delta_v: Vec3,
 }
 
 impl Default for Camera {
     fn default() -> Self {
         Self {
             aspect_ratio: 1.0,
             image_width: 100,
             samples_per_pixel: 10,
             max_depth: 10,
             image_height: Default::default(),
             pixel_samples_scale: Default::default(),
             center: Default::default(),
             pixel00_loc: Default::default(),
             pixel_delta_u: Default::default(),
             pixel_delta_v: Default::default(),
         }
     }
 }
 
 impl Camera {
     pub fn with_aspect_ratio(mut self, aspect_ratio: f64) -> Self {
         self.aspect_ratio = aspect_ratio;
 
         self
     }
 
     pub fn with_image_width(mut self, image_width: i32) -> Self {
         self.image_width = image_width;
 
         self
     }
 
     pub fn with_samples_per_pixel(mut self, samples_per_pixel: i32) -> Self {
         self.samples_per_pixel = samples_per_pixel;
 
         self
     }
 
     pub fn with_max_depth(mut self, max_depth: i32) -> Self {
         self.max_depth = max_depth;
 
         self
     }
 
     pub fn render(&mut self, world: &impl Hittable) -> std::io::Result<()> {
         self.initialize();
 
         println!("P3");
         println!("{} {}", self.image_width, self.image_height);
         println!("255");
 
         for j in 0..self.image_height {
             info!("Scanlines remaining: {}", self.image_height - j);
             for i in 0..self.image_width {
                 let mut pixel_color = Color::new(0.0, 0.0, 0.0);
                 for _sample in 0..self.samples_per_pixel {
                     let r = self.get_ray(i, j);
                     pixel_color += Self::ray_color(r, self.max_depth, world);
                 }
                 write_color(std::io::stdout(), self.pixel_samples_scale * pixel_color)?;
             }
         }
         info!("Done.");
 
         Ok(())
     }
 
     fn initialize(&mut self) {
         self.image_height = {
             let image_height = (self.image_width as f64 / self.aspect_ratio) as i32;
             if image_height < 1 { 1 } else { image_height }
         };
 
         self.pixel_samples_scale = 1.0 / self.samples_per_pixel as f64;
 
         self.center = Point3::new(0.0, 0.0, 0.0);
 
         // Determine viewport dimensions.
         let focal_length = 1.0;
         let viewport_height = 2.0;
         let viewport_width =
             viewport_height * (self.image_width as f64) / (self.image_height as f64);
 
         // Calculate the vectors across the horizontal and down the vertical viewport edges.
         let viewport_u = Vec3::new(viewport_width, 0.0, 0.0);
         let viewport_v = Vec3::new(0.0, -viewport_height, 0.0);
 
         // Calculate the horizontal and vertical delta vectors from pixel to pixel.
         self.pixel_delta_u = viewport_u / self.image_width as f64;
         self.pixel_delta_v = viewport_v / self.image_height as f64;
 
         // Calculate the location of the upper left pixel.
         let viewport_upper_left =
             self.center - Vec3::new(0.0, 0.0, focal_length) - viewport_u / 2.0 - viewport_v / 2.0;
         self.pixel00_loc = viewport_upper_left + 0.5 * (self.pixel_delta_u + self.pixel_delta_v);
     }
 
     fn get_ray(&self, i: i32, j: i32) -> Ray {
        // Construct a camera ray originating from the origin and directed at a randomly
        // sampled point around the pixel location i, j.
 
         let offset = Self::sample_square();
         let pixel_sample = self.pixel00_loc
             + ((i as f64 + offset.x()) * self.pixel_delta_u)
             + ((j as f64 + offset.y()) * self.pixel_delta_v);
 
         let ray_origin = self.center;
         let ray_direction = pixel_sample - ray_origin;
 
         Ray::new(ray_origin, ray_direction)
     }
 
     fn sample_square() -> Vec3 {
         // Returns the vector to a random point in the [-.5,-.5]-[+.5,+.5] unit square.
         Vec3::new(
             rand::random::<f64>() - 0.5,
             rand::random::<f64>() - 0.5,
             0.0,
         )
     }
 
     fn _sample_disk(radius: f64) -> Vec3 {
         // Returns a random point in the unit (radius 0.5) disk centered at the origin.
         radius * random_in_unit_disk()
     }
 
     fn ray_color(r: Ray, depth: i32, world: &impl Hittable) -> Color {
         // If we've exceeded the ray bounce limit, no more light is gathered.
         if depth <= 0 {
             return Color::new(0.0, 0.0, 0.0);
         }
 
         if let Some(rec) = world.hit(r, Interval::new(0.001, INFINITY)) {
-            let direction = random_on_hemisphere(rec.normal);
+            let direction = rec.normal + random_unit_vector();
             return 0.5 * Self::ray_color(Ray::new(rec.p, direction), depth - 1, world);
         }
 
         let unit_direction = unit_vector(r.direction());
         let a = 0.5 * (unit_direction.y() + 1.0);
         (1.0 - a) * Color::new(1.0, 1.0, 1.0) + a * Color::new(0.5, 0.7, 1.0)
     }
 }

Listing 55: [camera.rs] ray_color() with replacement diffuse


After rendering we get a similar image:

Correct rendering of Lambertian spheres

Image 10: Correct rendering of Lambertian spheres


It's hard to tell the difference between these two diffuse methods, given that our scene of two spheres is so simple, but you should be able to notice two important visual differences:

  1. The shadows are more pronounced after the change
  2. Both spheres are tinted blue from the sky after the change

Both of these changes are due to the less uniform scattering of the light rays: more rays scatter toward the normal. This means that diffuse objects will appear darker, because less light bounces toward the camera. As for the shadows, more light bounces straight up, so the area underneath the sphere is darker.

Not a lot of common, everyday objects are perfectly diffuse, so our visual intuition of how these objects behave under light can be poorly formed. As scenes become more complicated over the course of the book, you are encouraged to switch between the different diffuse renderers presented here. Most scenes of interest will contain a large amount of diffuse materials. You can gain valuable insight by understanding the effect of different diffuse methods on the lighting of a scene.
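
One convenient way to do that switching (entirely hypothetical, not part of the book's code) is to select the scatter direction through a small enum, assuming the vec3 utilities are in scope:

// Which diffuse model to use when scattering off a surface.
#[derive(Clone, Copy)]
enum DiffuseMethod {
    UniformHemisphere,
    Lambertian,
}

fn scatter_direction(method: DiffuseMethod, normal: Vec3) -> Vec3 {
    match method {
        DiffuseMethod::UniformHemisphere => random_on_hemisphere(normal),
        DiffuseMethod::Lambertian => normal + random_unit_vector(),
    }
}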

Using Gamma Correction for Accurate Color Intensity

Note the shadowing under the sphere. The picture is very dark, but our spheres only absorb half the energy of each bounce, so they are 50% reflectors. The spheres should look pretty bright (in real life, a light grey) but they appear to be rather dark. We can see this more clearly if we walk through the full brightness gamut for our diffuse material. We start by setting the reflectance of the ray_color function from 0.5 (50%) to 0.1 (10%):

diff --git a/src/camera.rs b/src/camera.rs
index 3a2e772..e6c60c3 100644
--- a/src/camera.rs
+++ b/src/camera.rs
@@ -1,166 +1,166 @@
 use crate::{hittable::Hittable, prelude::*};
 
 pub struct Camera {
     /// Ratio of image width over height
     pub aspect_ratio: f64,
     /// Rendered image width in pixel count
     pub image_width: i32,
     // Count of random samples for each pixel
     pub samples_per_pixel: i32,
     // Maximum number of ray bounces into scene
     pub max_depth: i32,
 
     /// Rendered image height
     image_height: i32,
     // Color scale factor for a sum of pixel samples
     pixel_samples_scale: f64,
     /// Camera center
     center: Point3,
     /// Location of pixel 0, 0
     pixel00_loc: Point3,
     /// Offset to pixel to the right
     pixel_delta_u: Vec3,
     /// Offset to pixel below
     pixel_delta_v: Vec3,
 }
 
 impl Default for Camera {
     fn default() -> Self {
         Self {
             aspect_ratio: 1.0,
             image_width: 100,
             samples_per_pixel: 10,
             max_depth: 10,
             image_height: Default::default(),
             pixel_samples_scale: Default::default(),
             center: Default::default(),
             pixel00_loc: Default::default(),
             pixel_delta_u: Default::default(),
             pixel_delta_v: Default::default(),
         }
     }
 }
 
 impl Camera {
     pub fn with_aspect_ratio(mut self, aspect_ratio: f64) -> Self {
         self.aspect_ratio = aspect_ratio;
 
         self
     }
 
     pub fn with_image_width(mut self, image_width: i32) -> Self {
         self.image_width = image_width;
 
         self
     }
 
     pub fn with_samples_per_pixel(mut self, samples_per_pixel: i32) -> Self {
         self.samples_per_pixel = samples_per_pixel;
 
         self
     }
 
     pub fn with_max_depth(mut self, max_depth: i32) -> Self {
         self.max_depth = max_depth;
 
         self
     }
 
     pub fn render(&mut self, world: &impl Hittable) -> std::io::Result<()> {
         self.initialize();
 
         println!("P3");
         println!("{} {}", self.image_width, self.image_height);
         println!("255");
 
         for j in 0..self.image_height {
             info!("Scanlines remaining: {}", self.image_height - j);
             for i in 0..self.image_width {
                 let mut pixel_color = Color::new(0.0, 0.0, 0.0);
                 for _sample in 0..self.samples_per_pixel {
                     let r = self.get_ray(i, j);
                     pixel_color += Self::ray_color(r, self.max_depth, world);
                 }
                 write_color(std::io::stdout(), self.pixel_samples_scale * pixel_color)?;
             }
         }
         info!("Done.");
 
         Ok(())
     }
 
     fn initialize(&mut self) {
         self.image_height = {
             let image_height = (self.image_width as f64 / self.aspect_ratio) as i32;
             if image_height < 1 { 1 } else { image_height }
         };
 
         self.pixel_samples_scale = 1.0 / self.samples_per_pixel as f64;
 
         self.center = Point3::new(0.0, 0.0, 0.0);
 
         // Determine viewport dimensions.
         let focal_length = 1.0;
         let viewport_height = 2.0;
         let viewport_width =
             viewport_height * (self.image_width as f64) / (self.image_height as f64);
 
         // Calculate the vectors across the horizontal and down the vertical viewport edges.
         let viewport_u = Vec3::new(viewport_width, 0.0, 0.0);
         let viewport_v = Vec3::new(0.0, -viewport_height, 0.0);
 
         // Calculate the horizontal and vertical delta vectors from pixel to pixel.
         self.pixel_delta_u = viewport_u / self.image_width as f64;
         self.pixel_delta_v = viewport_v / self.image_height as f64;
 
         // Calculate the location of the upper left pixel.
         let viewport_upper_left =
             self.center - Vec3::new(0.0, 0.0, focal_length) - viewport_u / 2.0 - viewport_v / 2.0;
         self.pixel00_loc = viewport_upper_left + 0.5 * (self.pixel_delta_u + self.pixel_delta_v);
     }
 
     fn get_ray(&self, i: i32, j: i32) -> Ray {
         // Construct a camera ray originating from the origin and directed at randomly sampled
         // point around the pixel location i, j.
 
         let offset = Self::sample_square();
         let pixel_sample = self.pixel00_loc
             + ((i as f64 + offset.x()) * self.pixel_delta_u)
             + ((j as f64 + offset.y()) * self.pixel_delta_v);
 
         let ray_origin = self.center;
         let ray_direction = pixel_sample - ray_origin;
 
         Ray::new(ray_origin, ray_direction)
     }
 
     fn sample_square() -> Vec3 {
         // Returns the vector to a random point in the [-.5,-.5]-[+.5,+.5] unit square.
         Vec3::new(
             rand::random::<f64>() - 0.5,
             rand::random::<f64>() - 0.5,
             0.0,
         )
     }
 
     fn _sample_disk(radius: f64) -> Vec3 {
         // Returns a random point in the unit (radius 0.5) disk centered at the origin.
         radius * random_in_unit_disk()
     }
 
     fn ray_color(r: Ray, depth: i32, world: &impl Hittable) -> Color {
         // If we've exceeded the ray bounce limit, no more light is gathered.
         if depth <= 0 {
             return Color::new(0.0, 0.0, 0.0);
         }
 
         if let Some(rec) = world.hit(r, Interval::new(0.001, INFINITY)) {
             let direction = rec.normal + random_unit_vector();
-            return 0.5 * Self::ray_color(Ray::new(rec.p, direction), depth - 1, world);
+            return 0.1 * Self::ray_color(Ray::new(rec.p, direction), depth - 1, world);
         }
 
         let unit_direction = unit_vector(r.direction());
         let a = 0.5 * (unit_direction.y() + 1.0);
         (1.0 - a) * Color::new(1.0, 1.0, 1.0) + a * Color::new(0.5, 0.7, 1.0)
     }
 }

Listing 56: [camera.rs] ray_color() with 10% reflectance


We render out at this new 10% reflectance. We then set reflectance to 30% and render again. We repeat for 50%, 70%, and finally 90%. You can overlay these images from left to right in the photo editor of your choice and you should get a very nice visual representation of the increasing brightness of your chosen gamut. This is the one that we've been working with so far:

The gamut of our renderer so far

Image 11: The gamut of our renderer so far


If you look closely, or if you use a color picker, you should notice that the 50% reflectance render (the one in the middle) is far too dark to be halfway between white and black (middle-gray). Indeed, the 70% reflector is closer to middle-gray. The reason for this is that almost all computer programs assume that an image is “gamma corrected” before being written into an image file. This means that the 0 to 1 values have some transform applied before being stored as a byte. Images with data that are written without being transformed are said to be in linear space, whereas images that are transformed are said to be in gamma space. It is likely that the image viewer you are using expects an image in gamma space, but we are giving it an image in linear space. This is why our image appears inaccurately dark.

There are many good reasons why images should be stored in gamma space, but for our purposes we just need to be aware of it. We are going to transform our data into gamma space so that our image viewer can more accurately display our image. As a simple approximation, we can use “gamma 2” as our transform, which is the power that you use when going from gamma space to linear space. We need to go the other way, from linear space to gamma space, which means taking the inverse of “gamma 2”: an exponent of \( 1/\gamma \), which is just the square root. We'll also want to ensure that we robustly handle negative inputs.
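
As a quick sanity check (a back-of-the-envelope calculation, not from the original text): a viewer that assumes gamma 2 effectively squares our raw values before display, so our linear 0.5 shows up at roughly \( 0.5^2 = 0.25 \) of full brightness, while linear 0.7 shows up at \( 0.7^2 = 0.49 \), close to middle-gray, which matches what we observed above. After the square-root transform, 0.5 is stored as \( \sqrt{0.5} \approx 0.71 \) and displays at the intended brightness.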

diff --git a/src/color.rs b/src/color.rs
index 1615d55..d69fd1d 100644
--- a/src/color.rs
+++ b/src/color.rs
@@ -1,18 +1,32 @@
 use crate::prelude::*;
 
 pub type Color = Vec3;
 
+#[inline]
+pub fn linear_to_gamma(linear_component: f64) -> f64 {
+    if linear_component > 0.0 {
+        f64::sqrt(linear_component)
+    } else {
+        0.0
+    }
+}
+
 pub fn write_color(mut out: impl std::io::Write, pixel_color: Color) -> std::io::Result<()> {
     let r = pixel_color.x();
     let g = pixel_color.y();
     let b = pixel_color.z();
 
+    // Apply a linear to gamma transform for gamma 2
+    let r = linear_to_gamma(r);
+    let g = linear_to_gamma(g);
+    let b = linear_to_gamma(b);
+
     // Translate the [0,1] component values to the byte range [0,255].
     const INTENSITY: Interval = Interval::new(0.000, 0.999);
     let rbyte = (256.0 * INTENSITY.clamp(r)) as i32;
     let gbyte = (256.0 * INTENSITY.clamp(g)) as i32;
     let bbyte = (256.0 * INTENSITY.clamp(b)) as i32;
 
     // Write out the pixel color components.
     writeln!(out, "{rbyte} {gbyte} {bbyte}")
 }

Listing 57: [color.rs] write_color(), with gamma correction


Using this gamma correction, we now get a much more consistent ramp from darkness to lightness:

The gamut of our renderer, gamma-corrected

Image 12: The gamut of our renderer, gamma-corrected


An Abstract Class for Materials

If we want different objects to have different materials, we have a design decision. We could have a universal material type with lots of parameters so any individual material type could just ignore the parameters that don't affect it. This is not a bad approach. Or we could have an abstract material class that encapsulates unique behavior. I am a fan of the latter approach. For our program the material needs to do two things:

  1. Produce a scattered ray (or say it absorbed the incident ray).
  2. If scattered, say how much the ray should be attenuated.

This suggests the abstract class:

use crate::{hittable::HitRecord, prelude::*};

pub trait Material {
    fn scatter(&self, _r_in: Ray, _rec: HitRecord) -> Option<(Ray, Color)> {
        None
    }
}

Listing 58: [material.rs] The material class


A Data Structure to Describe Ray-Object Intersections

The hit_record exists to avoid a bunch of arguments, so we can stuff whatever info we want in there. You can use arguments instead of an encapsulated type; it's just a matter of taste. Hittables and materials need to be able to reference each other's type in code, so there is some circularity in the references. In C++ we add the line class material; to tell the compiler that material is a class that will be defined later. Since we're just specifying a pointer to the class, the compiler doesn't need to know its details, which solves the circular reference issue. 1

diff --git a/src/hittable.rs b/src/hittable.rs
index 1b65b92..000dc1d 100644
--- a/src/hittable.rs
+++ b/src/hittable.rs
@@ -1,27 +1,43 @@
-use crate::prelude::*;
+use crate::{
+    material::{Lambertian, Material},
+    prelude::*,
+};
 
-#[derive(Debug, Default, Clone, Copy)]
+#[derive(Clone)]
 pub struct HitRecord {
     pub p: Point3,
     pub normal: Vec3,
+    pub mat: Rc<dyn Material>,
     pub t: f64,
     pub front_face: bool,
 }
 
+impl Default for HitRecord {
+    fn default() -> Self {
+        Self {
+            p: Default::default(),
+            normal: Default::default(),
+            mat: Rc::new(Lambertian::default()),
+            t: Default::default(),
+            front_face: Default::default(),
+        }
+    }
+}
+
 impl HitRecord {
     pub fn set_face_normal(&mut self, r: Ray, outward_normal: Vec3) {
         // Sets the hit record normal vector.
         // NOTE: the parameter `outward_normal` is assumed to have unit length.
 
         self.front_face = dot(r.direction(), outward_normal) < 0.0;
         self.normal = if self.front_face {
             outward_normal
         } else {
             -outward_normal
         };
     }
 }
 
 pub trait Hittable {
     fn hit(&self, r: Ray, ray_t: Interval) -> Option<HitRecord>;
 }

Listing 59: [hittable.rs] Hit record with added material pointer 2


hit_record is just a way to stuff a bunch of arguments into a class so we can send them as a group. When a ray hits a surface (a particular sphere, for example), the material pointer in the hit_record will be set to point at the material the sphere was given when it was set up in main(). When the ray_color() routine gets the hit_record, it can call member functions of the material pointer to find out what ray, if any, is scattered.
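
In Rust terms, the plan for ray_color() looks roughly like this (a preview sketch; the full change lands in Listing 66 below):

if let Some(rec) = world.hit(r, Interval::new(0.001, INFINITY)) {
    if let Some((scattered, attenuation)) = rec.mat.scatter(r, rec.clone()) {
        return attenuation * Self::ray_color(scattered, depth - 1, world);
    }
    // Absorbed: the material produced no scattered ray.
    return Color::new(0.0, 0.0, 0.0);
}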

To achieve this, hit_record needs to be told the material that is assigned to the sphere.

diff --git a/src/sphere.rs b/src/sphere.rs
index a2710b4..2b0026a 100644
--- a/src/sphere.rs
+++ b/src/sphere.rs
@@ -1,56 +1,61 @@
 use crate::{
     hittable::{HitRecord, Hittable},
+    material::{Lambertian, Material},
     prelude::*,
 };
 
-#[derive(Debug, Clone, Copy)]
+#[derive(Clone)]
 pub struct Sphere {
     center: Point3,
     radius: f64,
+    mat: Rc<dyn Material>,
 }
 
 impl Sphere {
     pub fn new(center: Point3, radius: f64) -> Self {
         Self {
             center,
             radius: f64::max(0.0, radius),
+            // TODO: Initialize the material pointer `mat`.
+            mat: Rc::new(Lambertian::default()),
         }
     }
 }
 
 impl Hittable for Sphere {
     fn hit(&self, r: Ray, ray_t: Interval) -> Option<HitRecord> {
         let oc = self.center - r.origin();
         let a = r.direction().length_squared();
         let h = dot(r.direction(), oc);
         let c = oc.length_squared() - self.radius * self.radius;
 
         let discriminant = h * h - a * c;
         if discriminant < 0.0 {
             return None;
         }
 
         let sqrtd = f64::sqrt(discriminant);
 
         // Find the nearest root that lies in the acceptable range.
         let mut root = (h - sqrtd) / a;
         if !ray_t.surrounds(root) {
             root = (h + sqrtd) / a;
             if !ray_t.surrounds(root) {
                 return None;
             }
         }
 
         let t = root;
         let p = r.at(t);
         let mut rec = HitRecord {
             t,
             p,
+            mat: self.mat.clone(),
             ..Default::default()
         };
         let outward_normal = (p - self.center) / self.radius;
         rec.set_face_normal(r, outward_normal);
 
         Some(rec)
     }
 }

Listing 60: [sphere.rs] Ray-sphere intersection with added material information



  1. In Rust, traits need to be imported with the use keyword. Circular references are not a problem in Rust, since use does not simply copy in a file's contents the way the C++ #include macro does.

  2. HitRecord can no longer derive Default, since Rc<dyn Material> does not implement Default. Instead, the default material is set to a Lambertian.

Modeling Light Scatter and Reflectance

Here and throughout these books we will use the term albedo (Latin for “whiteness”). Albedo is a precise technical term in some disciplines, but in all cases it is used to define some form of fractional reflectance. Albedo will vary with material color and (as we will later implement for glass materials) can also vary with incident viewing direction (the direction of the incoming ray).

Lambertian (diffuse) reflectance can either always scatter and attenuate light according to its reflectance \( \mathbf{R} \), or it can sometimes scatter (with probability \( (1 - \mathbf{R}) \)) with no attenuation (where a ray that isn't scattered is just absorbed into the material). It could also be a mixture of both those strategies. We will choose to always scatter, so implementing Lambertian materials becomes a simple task:

diff --git a/src/material.rs b/src/material.rs
index 13b34c3..5702b9c 100644
--- a/src/material.rs
+++ b/src/material.rs
@@ -1,7 +1,28 @@
 use crate::{hittable::HitRecord, prelude::*};
 
 pub trait Material {
     fn scatter(&self, _r_in: Ray, _rec: HitRecord) -> Option<(Ray, Color)> {
         None
     }
 }
+
+#[derive(Debug, Default, Clone, Copy)]
+pub struct Lambertian {
+    albedo: Color,
+}
+
+impl Lambertian {
+    pub fn new(albedo: Color) -> Self {
+        Self { albedo }
+    }
+}
+
+impl Material for Lambertian {
+    fn scatter(&self, _r_in: Ray, rec: HitRecord) -> Option<(Ray, Color)> {
+        let scatter_direction = rec.normal + random_unit_vector();
+        let scattered = Ray::new(rec.p, scatter_direction);
+        let attenuation = self.albedo;
+
+        Some((scattered, attenuation))
+    }
+}

Listing 61: [material.rs] The new lambertian material class


Note the third option: we could scatter with some fixed probability \( p \) and have attenuation be \( \text{albedo}/p \). Your choice.
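
For the curious, a minimal sketch of that probabilistic variant (hypothetical code, not one of the book's material classes) could look like:

#[derive(Debug, Clone, Copy)]
pub struct ProbabilisticLambertian {
    albedo: Color,
    p: f64, // fixed scatter probability
}

impl Material for ProbabilisticLambertian {
    fn scatter(&self, _r_in: Ray, rec: HitRecord) -> Option<(Ray, Color)> {
        // With probability (1 - p), the ray is absorbed outright.
        if rand::random::<f64>() >= self.p {
            return None;
        }

        let scattered = Ray::new(rec.p, rec.normal + random_unit_vector());

        // Dividing by p keeps the expected attenuation equal to the albedo.
        Some((scattered, self.albedo / self.p))
    }
}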

If you read the code above carefully, you'll notice a small chance of mischief. If the random unit vector we generate is exactly opposite the normal vector, the two will sum to zero, which will result in a zero scatter direction vector. This leads to bad scenarios later on (infinities and NaNs), so we need to intercept the condition before we pass it on.

In service of this, we'll create a new vector method, vec3::near_zero(), that returns true if the vector is very close to zero in all dimensions.

The following changes will use the C++ standard library function std::fabs, which returns the absolute value of its input. 1

diff --git a/src/vec3.rs b/src/vec3.rs
index 4cb969b..3348fef 100644
--- a/src/vec3.rs
+++ b/src/vec3.rs
@@ -1,223 +1,229 @@
 use std::{
     fmt::Display,
     ops::{Add, AddAssign, Div, DivAssign, Index, IndexMut, Mul, MulAssign, Neg, Sub},
 };
 
 #[derive(Debug, Default, Clone, Copy)]
 pub struct Vec3 {
     pub e: [f64; 3],
 }
 
 pub type Point3 = Vec3;
 
 impl Vec3 {
     pub fn new(e0: f64, e1: f64, e2: f64) -> Self {
         Self { e: [e0, e1, e2] }
     }
 
     pub fn x(&self) -> f64 {
         self.e[0]
     }
 
     pub fn y(&self) -> f64 {
         self.e[1]
     }
 
     pub fn z(&self) -> f64 {
         self.e[2]
     }
 
     pub fn length(&self) -> f64 {
         f64::sqrt(self.length_squared())
     }
 
     pub fn length_squared(&self) -> f64 {
         self.e[0] * self.e[0] + self.e[1] * self.e[1] + self.e[2] * self.e[2]
     }
 
+    pub fn near_zero(&self) -> bool {
+        // Return true if the vector is close to zero in all dimensions.
+        const S: f64 = 1e-8;
+        self.e[0].abs() < S && self.e[1].abs() < S && self.e[2].abs() < S
+    }
+
     pub fn random() -> Self {
         Vec3 { e: rand::random() }
     }
 
     pub fn random_range(min: f64, max: f64) -> Self {
         Vec3::new(
             rand::random_range(min..max),
             rand::random_range(min..max),
             rand::random_range(min..max),
         )
     }
 }
 
 impl Neg for Vec3 {
     type Output = Self;
 
     fn neg(self) -> Self::Output {
         Self::Output {
             e: self.e.map(|e| -e),
         }
     }
 }
 
 impl Index<usize> for Vec3 {
     type Output = f64;
 
     fn index(&self, index: usize) -> &Self::Output {
         &self.e[index]
     }
 }
 
 impl IndexMut<usize> for Vec3 {
     fn index_mut(&mut self, index: usize) -> &mut Self::Output {
         &mut self.e[index]
     }
 }
 
 impl AddAssign for Vec3 {
     fn add_assign(&mut self, rhs: Self) {
         self.e[0] += rhs.e[0];
         self.e[1] += rhs.e[1];
         self.e[2] += rhs.e[2];
     }
 }
 
 impl MulAssign<f64> for Vec3 {
     fn mul_assign(&mut self, rhs: f64) {
         self.e[0] *= rhs;
         self.e[1] *= rhs;
         self.e[2] *= rhs;
     }
 }
 
 impl DivAssign<f64> for Vec3 {
     fn div_assign(&mut self, rhs: f64) {
         self.mul_assign(1.0 / rhs);
     }
 }
 
 impl Display for Vec3 {
     fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
         write!(f, "{} {} {}", self.e[0], self.e[1], self.e[2])
     }
 }
 
 impl Add for Vec3 {
     type Output = Self;
 
     fn add(self, rhs: Self) -> Self::Output {
         Self::Output {
             e: [
                 self.e[0] + rhs.e[0],
                 self.e[1] + rhs.e[1],
                 self.e[2] + rhs.e[2],
             ],
         }
     }
 }
 
 impl Sub for Vec3 {
     type Output = Self;
 
     fn sub(self, rhs: Self) -> Self::Output {
         Self::Output {
             e: [
                 self.e[0] - rhs.e[0],
                 self.e[1] - rhs.e[1],
                 self.e[2] - rhs.e[2],
             ],
         }
     }
 }
 
 impl Mul for Vec3 {
     type Output = Self;
 
     fn mul(self, rhs: Self) -> Self::Output {
         Self::Output {
             e: [
                 self.e[0] * rhs.e[0],
                 self.e[1] * rhs.e[1],
                 self.e[2] * rhs.e[2],
             ],
         }
     }
 }
 
 impl Mul<f64> for Vec3 {
     type Output = Self;
 
     fn mul(self, rhs: f64) -> Self::Output {
         Self::Output {
             e: [self.e[0] * rhs, self.e[1] * rhs, self.e[2] * rhs],
         }
     }
 }
 
 impl Mul<Vec3> for f64 {
     type Output = Vec3;
 
     fn mul(self, rhs: Vec3) -> Self::Output {
         rhs.mul(self)
     }
 }
 
 impl Div<f64> for Vec3 {
     type Output = Self;
 
     fn div(self, rhs: f64) -> Self::Output {
         self * (1.0 / rhs)
     }
 }
 
 #[inline]
 pub fn dot(u: Vec3, v: Vec3) -> f64 {
     u.e[0] * v.e[0] + u.e[1] * v.e[1] + u.e[2] * v.e[2]
 }
 
 #[inline]
 pub fn cross(u: Vec3, v: Vec3) -> Vec3 {
     Vec3::new(
         u.e[1] * v.e[2] - u.e[2] * v.e[1],
         u.e[2] * v.e[0] - u.e[0] * v.e[2],
         u.e[0] * v.e[1] - u.e[1] * v.e[0],
     )
 }
 
 #[inline]
 pub fn unit_vector(v: Vec3) -> Vec3 {
     v / v.length()
 }
 
 #[inline]
 pub fn random_unit_vector() -> Vec3 {
     loop {
         let p = Vec3::random_range(-1.0, 1.0);
         let lensq = p.length_squared();
         if 1e-160 < lensq && lensq <= 1.0 {
             return p / f64::sqrt(lensq);
         }
     }
 }
 
 #[inline]
 pub fn random_on_hemisphere(normal: Vec3) -> Vec3 {
     let on_unit_sphere = random_unit_vector();
     if dot(on_unit_sphere, normal) > 0.0 {
         on_unit_sphere
     } else {
         -on_unit_sphere
     }
 }
 
 #[inline]
 pub fn random_in_unit_disk() -> Vec3 {
     loop {
         let p = Vec3::new(
             rand::random_range(-1.0..1.0),
             rand::random_range(-1.0..1.0),
             0.0,
         );
         if p.length_squared() < 1.0 {
             return p;
         }
     }
 }

Listing 62: [vec3.rs] The vec3::near_zero() method


diff --git a/src/material.rs b/src/material.rs
index 5702b9c..1b49e15 100644
--- a/src/material.rs
+++ b/src/material.rs
@@ -1,28 +1,34 @@
 use crate::{hittable::HitRecord, prelude::*};
 
 pub trait Material {
     fn scatter(&self, _r_in: Ray, _rec: HitRecord) -> Option<(Ray, Color)> {
         None
     }
 }
 
 #[derive(Debug, Default, Clone, Copy)]
 pub struct Lambertian {
     albedo: Color,
 }
 
 impl Lambertian {
     pub fn new(albedo: Color) -> Self {
         Self { albedo }
     }
 }
 
 impl Material for Lambertian {
     fn scatter(&self, _r_in: Ray, rec: HitRecord) -> Option<(Ray, Color)> {
-        let scatter_direction = rec.normal + random_unit_vector();
+        let mut scatter_direction = rec.normal + random_unit_vector();
+
+        // Catch degenerate scatter direction
+        if scatter_direction.near_zero() {
+            scatter_direction = rec.normal;
+        }
+
         let scattered = Ray::new(rec.p, scatter_direction);
         let attenuation = self.albedo;
 
         Some((scattered, attenuation))
     }
 }

Listing 63: [material.rs] Lambertian scatter, bullet-proof



  1. The Rust equivalent is f64::abs.

Mirrored Light Reflection

For polished metals the ray won’t be randomly scattered. The key question is: How does a ray get reflected from a metal mirror? Vector math is our friend here:

Ray reflection

Figure 15: Ray reflection


The reflected ray direction in red is just \( \mathbf{v} + 2 \mathbf{b} \). In our design, \( \mathbf{n} \) is a unit vector (length one), but \( \mathbf{v} \) may not be. To get the vector \( \mathbf{b} \), we scale the normal vector by the length of the projection of \( \mathbf{v} \) onto \( \mathbf{n} \), which is given by the dot product \( \mathbf{v} \cdot \mathbf{n} \). (If \( \mathbf{n} \) were not a unit vector, we would also need to divide this dot product by the length of \( \mathbf{n} \).) Finally, because \( \mathbf{b} \) points into the surface, and we want \( \mathbf{b} \) to point out of the surface, we need to negate this projection length.
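
Since \( \mathbf{b} = -(\mathbf{v} \cdot \mathbf{n}) \, \mathbf{n} \) (recall that \( \mathbf{v} \cdot \mathbf{n} \) is negative here), the reflected direction works out to \( \mathbf{v} - 2 (\mathbf{v} \cdot \mathbf{n}) \, \mathbf{n} \).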

Putting everything together, we get the following computation of the reflected vector:

diff --git a/src/vec3.rs b/src/vec3.rs
index 3348fef..4cb0b6f 100644
--- a/src/vec3.rs
+++ b/src/vec3.rs
@@ -1,229 +1,234 @@
 use std::{
     fmt::Display,
     ops::{Add, AddAssign, Div, DivAssign, Index, IndexMut, Mul, MulAssign, Neg, Sub},
 };
 
 #[derive(Debug, Default, Clone, Copy)]
 pub struct Vec3 {
     pub e: [f64; 3],
 }
 
 pub type Point3 = Vec3;
 
 impl Vec3 {
     pub fn new(e0: f64, e1: f64, e2: f64) -> Self {
         Self { e: [e0, e1, e2] }
     }
 
     pub fn x(&self) -> f64 {
         self.e[0]
     }
 
     pub fn y(&self) -> f64 {
         self.e[1]
     }
 
     pub fn z(&self) -> f64 {
         self.e[2]
     }
 
     pub fn length(&self) -> f64 {
         f64::sqrt(self.length_squared())
     }
 
     pub fn length_squared(&self) -> f64 {
         self.e[0] * self.e[0] + self.e[1] * self.e[1] + self.e[2] * self.e[2]
     }
 
     pub fn near_zero(&self) -> bool {
         // Return true if the vector is close to zero in all dimensions.
         const S: f64 = 1e-8;
         self.e[0].abs() < S && self.e[1].abs() < S && self.e[2].abs() < S
     }
 
     pub fn random() -> Self {
         Vec3 { e: rand::random() }
     }
 
     pub fn random_range(min: f64, max: f64) -> Self {
         Vec3::new(
             rand::random_range(min..max),
             rand::random_range(min..max),
             rand::random_range(min..max),
         )
     }
 }
 
 impl Neg for Vec3 {
     type Output = Self;
 
     fn neg(self) -> Self::Output {
         Self::Output {
             e: self.e.map(|e| -e),
         }
     }
 }
 
 impl Index<usize> for Vec3 {
     type Output = f64;
 
     fn index(&self, index: usize) -> &Self::Output {
         &self.e[index]
     }
 }
 
 impl IndexMut<usize> for Vec3 {
     fn index_mut(&mut self, index: usize) -> &mut Self::Output {
         &mut self.e[index]
     }
 }
 
 impl AddAssign for Vec3 {
     fn add_assign(&mut self, rhs: Self) {
         self.e[0] += rhs.e[0];
         self.e[1] += rhs.e[1];
         self.e[2] += rhs.e[2];
     }
 }
 
 impl MulAssign<f64> for Vec3 {
     fn mul_assign(&mut self, rhs: f64) {
         self.e[0] *= rhs;
         self.e[1] *= rhs;
         self.e[2] *= rhs;
     }
 }
 
 impl DivAssign<f64> for Vec3 {
     fn div_assign(&mut self, rhs: f64) {
         self.mul_assign(1.0 / rhs);
     }
 }
 
 impl Display for Vec3 {
     fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
         write!(f, "{} {} {}", self.e[0], self.e[1], self.e[2])
     }
 }
 
 impl Add for Vec3 {
     type Output = Self;
 
     fn add(self, rhs: Self) -> Self::Output {
         Self::Output {
             e: [
                 self.e[0] + rhs.e[0],
                 self.e[1] + rhs.e[1],
                 self.e[2] + rhs.e[2],
             ],
         }
     }
 }
 
 impl Sub for Vec3 {
     type Output = Self;
 
     fn sub(self, rhs: Self) -> Self::Output {
         Self::Output {
             e: [
                 self.e[0] - rhs.e[0],
                 self.e[1] - rhs.e[1],
                 self.e[2] - rhs.e[2],
             ],
         }
     }
 }
 
 impl Mul for Vec3 {
     type Output = Self;
 
     fn mul(self, rhs: Self) -> Self::Output {
         Self::Output {
             e: [
                 self.e[0] * rhs.e[0],
                 self.e[1] * rhs.e[1],
                 self.e[2] * rhs.e[2],
             ],
         }
     }
 }
 
 impl Mul<f64> for Vec3 {
     type Output = Self;
 
     fn mul(self, rhs: f64) -> Self::Output {
         Self::Output {
             e: [self.e[0] * rhs, self.e[1] * rhs, self.e[2] * rhs],
         }
     }
 }
 
 impl Mul<Vec3> for f64 {
     type Output = Vec3;
 
     fn mul(self, rhs: Vec3) -> Self::Output {
         rhs.mul(self)
     }
 }
 
 impl Div<f64> for Vec3 {
     type Output = Self;
 
     fn div(self, rhs: f64) -> Self::Output {
         self * (1.0 / rhs)
     }
 }
 
 #[inline]
 pub fn dot(u: Vec3, v: Vec3) -> f64 {
     u.e[0] * v.e[0] + u.e[1] * v.e[1] + u.e[2] * v.e[2]
 }
 
 #[inline]
 pub fn cross(u: Vec3, v: Vec3) -> Vec3 {
     Vec3::new(
         u.e[1] * v.e[2] - u.e[2] * v.e[1],
         u.e[2] * v.e[0] - u.e[0] * v.e[2],
         u.e[0] * v.e[1] - u.e[1] * v.e[0],
     )
 }
 
 #[inline]
 pub fn unit_vector(v: Vec3) -> Vec3 {
     v / v.length()
 }
 
 #[inline]
 pub fn random_unit_vector() -> Vec3 {
     loop {
         let p = Vec3::random_range(-1.0, 1.0);
         let lensq = p.length_squared();
         if 1e-160 < lensq && lensq <= 1.0 {
             return p / f64::sqrt(lensq);
         }
     }
 }
 
 #[inline]
 pub fn random_on_hemisphere(normal: Vec3) -> Vec3 {
     let on_unit_sphere = random_unit_vector();
     if dot(on_unit_sphere, normal) > 0.0 {
         on_unit_sphere
     } else {
         -on_unit_sphere
     }
 }
 
 #[inline]
+pub fn reflect(v: Vec3, n: Vec3) -> Vec3 {
+    v - 2.0 * dot(v, n) * n
+}
+
+#[inline]
 pub fn random_in_unit_disk() -> Vec3 {
     loop {
         let p = Vec3::new(
             rand::random_range(-1.0..1.0),
             rand::random_range(-1.0..1.0),
             0.0,
         );
         if p.length_squared() < 1.0 {
             return p;
         }
     }
 }

Listing 64: [vec3.rs] vec3 reflection function


The metal material just reflects rays using that formula:

diff --git a/src/material.rs b/src/material.rs
index 1b49e15..8475d17 100644
--- a/src/material.rs
+++ b/src/material.rs
@@ -1,34 +1,55 @@
 use crate::{hittable::HitRecord, prelude::*};
 
 pub trait Material {
     fn scatter(&self, _r_in: Ray, _rec: HitRecord) -> Option<(Ray, Color)> {
         None
     }
 }
 
 #[derive(Debug, Default, Clone, Copy)]
 pub struct Lambertian {
     albedo: Color,
 }
 
 impl Lambertian {
     pub fn new(albedo: Color) -> Self {
         Self { albedo }
     }
 }
 
 impl Material for Lambertian {
     fn scatter(&self, _r_in: Ray, rec: HitRecord) -> Option<(Ray, Color)> {
         let mut scatter_direction = rec.normal + random_unit_vector();
 
         // Catch degenerate scatter direction
         if scatter_direction.near_zero() {
             scatter_direction = rec.normal;
         }
 
         let scattered = Ray::new(rec.p, scatter_direction);
         let attenuation = self.albedo;
 
         Some((scattered, attenuation))
     }
 }
+
+#[derive(Debug, Default, Clone, Copy)]
+pub struct Metal {
+    albedo: Color,
+}
+
+impl Metal {
+    pub fn new(albedo: Color) -> Self {
+        Self { albedo }
+    }
+}
+
+impl Material for Metal {
+    fn scatter(&self, r_in: Ray, rec: HitRecord) -> Option<(Ray, Color)> {
+        let reflected = reflect(r_in.direction(), rec.normal);
+        let scattered = Ray::new(rec.p, reflected);
+        let attenuation = self.albedo;
+
+        Some((scattered, attenuation))
+    }
+}

Listing 65: [material.rs] Metal material with reflectance function


We need to modify the ray_color() function for all of our changes:

diff --git a/src/camera.rs b/src/camera.rs
index e6c60c3..1927898 100644
--- a/src/camera.rs
+++ b/src/camera.rs
@@ -1,166 +1,168 @@
 use crate::{hittable::Hittable, prelude::*};
 
 pub struct Camera {
     /// Ratio of image width over height
     pub aspect_ratio: f64,
     /// Rendered image width in pixel count
     pub image_width: i32,
     // Count of random samples for each pixel
     pub samples_per_pixel: i32,
     // Maximum number of ray bounces into scene
     pub max_depth: i32,
 
     /// Rendered image height
     image_height: i32,
     // Color scale factor for a sum of pixel samples
     pixel_samples_scale: f64,
     /// Camera center
     center: Point3,
     /// Location of pixel 0, 0
     pixel00_loc: Point3,
     /// Offset to pixel to the right
     pixel_delta_u: Vec3,
     /// Offset to pixel below
     pixel_delta_v: Vec3,
 }
 
 impl Default for Camera {
     fn default() -> Self {
         Self {
             aspect_ratio: 1.0,
             image_width: 100,
             samples_per_pixel: 10,
             max_depth: 10,
             image_height: Default::default(),
             pixel_samples_scale: Default::default(),
             center: Default::default(),
             pixel00_loc: Default::default(),
             pixel_delta_u: Default::default(),
             pixel_delta_v: Default::default(),
         }
     }
 }
 
 impl Camera {
     pub fn with_aspect_ratio(mut self, aspect_ratio: f64) -> Self {
         self.aspect_ratio = aspect_ratio;
 
         self
     }
 
     pub fn with_image_width(mut self, image_width: i32) -> Self {
         self.image_width = image_width;
 
         self
     }
 
     pub fn with_samples_per_pixel(mut self, samples_per_pixel: i32) -> Self {
         self.samples_per_pixel = samples_per_pixel;
 
         self
     }
 
     pub fn with_max_depth(mut self, max_depth: i32) -> Self {
         self.max_depth = max_depth;
 
         self
     }
 
     pub fn render(&mut self, world: &impl Hittable) -> std::io::Result<()> {
         self.initialize();
 
         println!("P3");
         println!("{} {}", self.image_width, self.image_height);
         println!("255");
 
         for j in 0..self.image_height {
             info!("Scanlines remaining: {}", self.image_height - j);
             for i in 0..self.image_width {
                 let mut pixel_color = Color::new(0.0, 0.0, 0.0);
                 for _sample in 0..self.samples_per_pixel {
                     let r = self.get_ray(i, j);
                     pixel_color += Self::ray_color(r, self.max_depth, world);
                 }
                 write_color(std::io::stdout(), self.pixel_samples_scale * pixel_color)?;
             }
         }
         info!("Done.");
 
         Ok(())
     }
 
     fn initialize(&mut self) {
         self.image_height = {
             let image_height = (self.image_width as f64 / self.aspect_ratio) as i32;
             if image_height < 1 { 1 } else { image_height }
         };
 
         self.pixel_samples_scale = 1.0 / self.samples_per_pixel as f64;
 
         self.center = Point3::new(0.0, 0.0, 0.0);
 
         // Determine viewport dimensions.
         let focal_length = 1.0;
         let viewport_height = 2.0;
         let viewport_width =
             viewport_height * (self.image_width as f64) / (self.image_height as f64);
 
         // Calculate the vectors across the horizontal and down the vertical viewport edges.
         let viewport_u = Vec3::new(viewport_width, 0.0, 0.0);
         let viewport_v = Vec3::new(0.0, -viewport_height, 0.0);
 
         // Calculate the horizontal and vertical delta vectors from pixel to pixel.
         self.pixel_delta_u = viewport_u / self.image_width as f64;
         self.pixel_delta_v = viewport_v / self.image_height as f64;
 
         // Calculate the location of the upper left pixel.
         let viewport_upper_left =
             self.center - Vec3::new(0.0, 0.0, focal_length) - viewport_u / 2.0 - viewport_v / 2.0;
         self.pixel00_loc = viewport_upper_left + 0.5 * (self.pixel_delta_u + self.pixel_delta_v);
     }
 
     fn get_ray(&self, i: i32, j: i32) -> Ray {
         // Construct a camera ray originating from the origin and directed at randomly sampled
         // point around the pixel location i, j.
 
         let offset = Self::sample_square();
         let pixel_sample = self.pixel00_loc
             + ((i as f64 + offset.x()) * self.pixel_delta_u)
             + ((j as f64 + offset.y()) * self.pixel_delta_v);
 
         let ray_origin = self.center;
         let ray_direction = pixel_sample - ray_origin;
 
         Ray::new(ray_origin, ray_direction)
     }
 
     fn sample_square() -> Vec3 {
         // Returns the vector to a random point in the [-.5,-.5]-[+.5,+.5] unit square.
         Vec3::new(
             rand::random::<f64>() - 0.5,
             rand::random::<f64>() - 0.5,
             0.0,
         )
     }
 
     fn _sample_disk(radius: f64) -> Vec3 {
         // Returns a random point in the unit (radius 0.5) disk centered at the origin.
         radius * random_in_unit_disk()
     }
 
     fn ray_color(r: Ray, depth: i32, world: &impl Hittable) -> Color {
         // If we've exceeded the ray bounce limit, no more light is gathered.
         if depth <= 0 {
             return Color::new(0.0, 0.0, 0.0);
         }
 
         if let Some(rec) = world.hit(r, Interval::new(0.001, INFINITY)) {
-            let direction = rec.normal + random_unit_vector();
-            return 0.1 * Self::ray_color(Ray::new(rec.p, direction), depth - 1, world);
+            if let Some((scattered, attenuation)) = rec.mat.scatter(r, rec.clone()) {
+                return attenuation * Self::ray_color(scattered, depth - 1, world);
+            }
+            return Color::new(0.0, 0.0, 0.0);
         }
 
         let unit_direction = unit_vector(r.direction());
         let a = 0.5 * (unit_direction.y() + 1.0);
         (1.0 - a) * Color::new(1.0, 1.0, 1.0) + a * Color::new(0.5, 0.7, 1.0)
     }
 }

Listing 66: [camera.rs] Ray color with scattered reflectance


Now we'll update the sphere constructor to initialize the material pointer mat:

diff --git a/src/sphere.rs b/src/sphere.rs
index 2b0026a..2a6b2ce 100644
--- a/src/sphere.rs
+++ b/src/sphere.rs
@@ -1,61 +1,60 @@
 use crate::{
     hittable::{HitRecord, Hittable},
-    material::{Lambertian, Material},
+    material::Material,
     prelude::*,
 };
 
 #[derive(Clone)]
 pub struct Sphere {
     center: Point3,
     radius: f64,
     mat: Rc<dyn Material>,
 }
 
 impl Sphere {
-    pub fn new(center: Point3, radius: f64) -> Self {
+    pub fn new(center: Point3, radius: f64, mat: Rc<dyn Material>) -> Self {
         Self {
             center,
             radius: f64::max(0.0, radius),
-            // TODO: Initialize the material pointer `mat`.
-            mat: Rc::new(Lambertian::default()),
+            mat,
         }
     }
 }
 
 impl Hittable for Sphere {
     fn hit(&self, r: Ray, ray_t: Interval) -> Option<HitRecord> {
         let oc = self.center - r.origin();
         let a = r.direction().length_squared();
         let h = dot(r.direction(), oc);
         let c = oc.length_squared() - self.radius * self.radius;
 
         let discriminant = h * h - a * c;
         if discriminant < 0.0 {
             return None;
         }
 
         let sqrtd = f64::sqrt(discriminant);
 
         // Find the nearest root that lies in the acceptable range.
         let mut root = (h - sqrtd) / a;
         if !ray_t.surrounds(root) {
             root = (h + sqrtd) / a;
             if !ray_t.surrounds(root) {
                 return None;
             }
         }
 
         let t = root;
         let p = r.at(t);
         let mut rec = HitRecord {
             t,
             p,
             mat: self.mat.clone(),
             ..Default::default()
         };
         let outward_normal = (p - self.center) / self.radius;
         rec.set_face_normal(r, outward_normal);
 
         Some(rec)
     }
 }

Listing 67: [sphere.rs] Initializing sphere with a material


A Scene with Metal Spheres

Now let’s add some metal spheres to our scene:

diff --git a/src/main.rs b/src/main.rs
index a016213..4f0fceb 100644
--- a/src/main.rs
+++ b/src/main.rs
@@ -1,17 +1,46 @@
-use code::{camera::Camera, hittable_list::HittableList, prelude::*, sphere::Sphere};
+use code::{
+    camera::Camera,
+    hittable_list::HittableList,
+    material::{Lambertian, Metal},
+    prelude::*,
+    sphere::Sphere,
+};
 
 fn main() -> std::io::Result<()> {
     let mut world = HittableList::new();
 
-    world.add(Rc::new(Sphere::new(Point3::new(0.0, 0.0, -1.0), 0.5)));
-    world.add(Rc::new(Sphere::new(Point3::new(0.0, -100.5, -1.0), 100.0)));
+    let material_ground = Rc::new(Lambertian::new(Color::new(0.8, 0.8, 0.0)));
+    let material_center = Rc::new(Lambertian::new(Color::new(0.1, 0.2, 0.5)));
+    let material_left = Rc::new(Metal::new(Color::new(0.8, 0.8, 0.8)));
+    let material_right = Rc::new(Metal::new(Color::new(0.8, 0.6, 0.2)));
+
+    world.add(Rc::new(Sphere::new(
+        Point3::new(0.0, -100.5, -1.0),
+        100.0,
+        material_ground,
+    )));
+    world.add(Rc::new(Sphere::new(
+        Point3::new(0.0, 0.0, -1.2),
+        0.5,
+        material_center,
+    )));
+    world.add(Rc::new(Sphere::new(
+        Point3::new(-1.0, 0.0, -1.0),
+        0.5,
+        material_left,
+    )));
+    world.add(Rc::new(Sphere::new(
+        Point3::new(1.0, 0.0, -1.0),
+        0.5,
+        material_right,
+    )));
 
     env_logger::init();
 
     Camera::default()
         .with_aspect_ratio(16.0 / 9.0)
         .with_image_width(400)
         .with_samples_per_pixel(100)
         .with_max_depth(50)
         .render(&world)
 }

Listing 68: [main.rs] Scene with metal spheres


Which gives:

Shiny metal

Image 13: Shiny metal


Fuzzy Reflection

We can also randomize the reflected direction by using a small sphere and choosing a new endpoint for the ray. We'll use a random point from the surface of a sphere centered on the original endpoint, scaled by the fuzz factor.

Generating fuzzed reflection rays

Figure 16: Generating fuzzed reflection rays


The bigger the fuzz sphere, the fuzzier the reflections will be. This suggests adding a fuzziness parameter that is just the radius of the sphere (so zero is no perturbation). The catch is that for big spheres or grazing rays, we may scatter below the surface. We can just have the surface absorb those.

Also note that in order for the fuzz sphere to make sense, it needs to be consistently scaled compared to the reflection vector, which can vary in length arbitrarily. To address this, we need to normalize the reflected ray.
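
In symbols, the scattered direction matching the code below is

\[ \mathbf{d} = \hat{\mathbf{r}} + f \, \mathbf{u}, \]

where \( \hat{\mathbf{r}} \) is the normalized reflected vector, \( f \in [0,1] \) is the fuzz radius, and \( \mathbf{u} \) is a random unit vector.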

diff --git a/src/material.rs b/src/material.rs
index 8475d17..e52f6f3 100644
--- a/src/material.rs
+++ b/src/material.rs
@@ -1,55 +1,58 @@
 use crate::{hittable::HitRecord, prelude::*};
 
 pub trait Material {
     fn scatter(&self, _r_in: Ray, _rec: HitRecord) -> Option<(Ray, Color)> {
         None
     }
 }
 
 #[derive(Debug, Default, Clone, Copy)]
 pub struct Lambertian {
     albedo: Color,
 }
 
 impl Lambertian {
     pub fn new(albedo: Color) -> Self {
         Self { albedo }
     }
 }
 
 impl Material for Lambertian {
     fn scatter(&self, _r_in: Ray, rec: HitRecord) -> Option<(Ray, Color)> {
         let mut scatter_direction = rec.normal + random_unit_vector();
 
         // Catch degenerate scatter direction
         if scatter_direction.near_zero() {
             scatter_direction = rec.normal;
         }
 
         let scattered = Ray::new(rec.p, scatter_direction);
         let attenuation = self.albedo;
 
         Some((scattered, attenuation))
     }
 }
 
 #[derive(Debug, Default, Clone, Copy)]
 pub struct Metal {
     albedo: Color,
+    fuzz: f64,
 }
 
 impl Metal {
-    pub fn new(albedo: Color) -> Self {
-        Self { albedo }
+    pub fn new(albedo: Color, fuzz: f64) -> Self {
+        let fuzz = if fuzz < 1.0 { fuzz } else { 1.0 };
+        Self { albedo, fuzz }
     }
 }
 
 impl Material for Metal {
     fn scatter(&self, r_in: Ray, rec: HitRecord) -> Option<(Ray, Color)> {
-        let reflected = reflect(r_in.direction(), rec.normal);
+        let mut reflected = reflect(r_in.direction(), rec.normal);
+        reflected = unit_vector(reflected) + (self.fuzz * random_unit_vector());
         let scattered = Ray::new(rec.p, reflected);
         let attenuation = self.albedo;
 
-        Some((scattered, attenuation))
+        (dot(scattered.direction(), rec.normal) > 0.0).then(|| (scattered, attenuation))
     }
 }

Listing 69: [material.rs] Metal material fuzziness


We can try that out by adding fuzziness 0.3 and 1.0 to the metals:

diff --git a/src/main.rs b/src/main.rs
index 4f0fceb..a705a06 100644
--- a/src/main.rs
+++ b/src/main.rs
@@ -1,46 +1,46 @@
 use code::{
     camera::Camera,
     hittable_list::HittableList,
     material::{Lambertian, Metal},
     prelude::*,
     sphere::Sphere,
 };
 
 fn main() -> std::io::Result<()> {
     let mut world = HittableList::new();
 
     let material_ground = Rc::new(Lambertian::new(Color::new(0.8, 0.8, 0.0)));
     let material_center = Rc::new(Lambertian::new(Color::new(0.1, 0.2, 0.5)));
-    let material_left = Rc::new(Metal::new(Color::new(0.8, 0.8, 0.8)));
-    let material_right = Rc::new(Metal::new(Color::new(0.8, 0.6, 0.2)));
+    let material_left = Rc::new(Metal::new(Color::new(0.8, 0.8, 0.8), 0.3));
+    let material_right = Rc::new(Metal::new(Color::new(0.8, 0.6, 0.2), 1.0));
 
     world.add(Rc::new(Sphere::new(
         Point3::new(0.0, -100.5, -1.0),
         100.0,
         material_ground,
     )));
     world.add(Rc::new(Sphere::new(
         Point3::new(0.0, 0.0, -1.2),
         0.5,
         material_center,
     )));
     world.add(Rc::new(Sphere::new(
         Point3::new(-1.0, 0.0, -1.0),
         0.5,
         material_left,
     )));
     world.add(Rc::new(Sphere::new(
         Point3::new(1.0, 0.0, -1.0),
         0.5,
         material_right,
     )));
 
     env_logger::init();
 
     Camera::default()
         .with_aspect_ratio(16.0 / 9.0)
         .with_image_width(400)
         .with_samples_per_pixel(100)
         .with_max_depth(50)
         .render(&world)
 }

Listing 70: [main.rs] Metal spheres with fuzziness


Fuzzed metal

Image 14: Fuzzed metal


Dielectrics

Clear materials such as water, glass, and diamond are dielectrics. When a light ray hits them, it splits into a reflected ray and a refracted (transmitted) ray. We’ll handle that by randomly choosing between reflection and refraction, only generating one scattered ray per interaction.

As a quick review of terms, a reflected ray hits a surface and then “bounces” off in a new direction.

A refracted ray bends as it transitions from a material's surroundings into the material itself (as with glass or water). This is why a pencil looks bent when partially inserted in water.

The amount that a refracted ray bends is determined by the material's refractive index. Generally, this is a single value that describes how much light bends when entering a material from a vacuum. Glass has a refractive index of something like 1.5–1.7, diamond is around 2.4, and air has a small refractive index of 1.000293.

When a transparent material is embedded in a different transparent material, you can describe the refraction with a relative refraction index: the refractive index of the object's material divided by the refractive index of the surrounding material. For example, if you want to render a glass ball under water, then the glass ball would have an effective refractive index of 1.125. This is given by the refractive index of glass (1.5) divided by the refractive index of water (1.333).

You can find the refractive index of most common materials with a quick internet search.

Refraction

The hardest part to debug is the refracted ray. I usually first just have all the light refract if there is a refraction ray at all. For this project, I tried to put two glass balls in our scene, and I got this (I have not told you how to do this right or wrong yet, but soon!):

Glass first

Image 15: Glass first


Is that right? Glass balls look odd in real life. But no, it isn't right: the world should be flipped upside down, and there shouldn't be any weird black stuff. I just printed out the ray straight through the middle of the image, and it was clearly wrong. That often does the job.
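
A rough sketch of that trick (hypothetical code; it assumes the camera has been initialized and that get_ray() and ray_color() are exposed for testing, which they are not in the listings here):

// Trace the single ray through the center of the image and inspect it.
let r = camera.get_ray(image_width / 2, image_height / 2);
let c = Camera::ray_color(r, 50, &world);
eprintln!("center ray dir = {}, color = {}", r.direction(), c);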

Snell's Law

The refraction is described by Snell’s law:

\[ \eta \cdot \sin \theta = \eta' \cdot \sin \theta' \]

Where \( \theta \) and \( \theta' \) are the angles from the normal, and \( \eta \) and \( \eta' \) (pronounced “eta” and “eta prime”) are the refractive indices. The geometry is:

Ray refraction

Figure 17: Ray refraction


In order to determine the direction of the refracted ray, we have to solve for \( \sin \theta' \):

\[ \sin \theta' = \frac{\eta}{\eta'} \cdot \sin \theta \]

On the refracted side of the surface there is a refracted ray \( \mathbf{R}' \) and a normal \( \mathbf{n}' \), and there exists an angle, \( \theta' \), between them. We can split \( \mathbf{R}' \) into the parts of the ray that are perpendicular to \( \mathbf{n}' \) and parallel to \( \mathbf{n}' \):

\[ \mathbf{R}' = \mathbf{R}'_{\bot} + \mathbf{R}'_{\|} \]

If we solve for \( \mathbf{R}'_{\bot} \) and \( \mathbf{R}'_{\|} \) we get:

\[ \mathbf{R}'_{\bot} = \frac{\eta}{\eta'} (\mathbf{R} + |\mathbf{R}| \cos\theta \, \mathbf{n}) \]

\[ \mathbf{R}'_{\|} = -\sqrt{1 - |\mathbf{R}'_{\bot}|^2} \, \mathbf{n} \]

You can go ahead and prove this for yourself if you want, but we will treat it as fact and move on. The rest of the book will not require you to understand the proof.

We know the value of every term on the right-hand side except for \( \cos \theta \). It is well known that the dot product of two vectors can be expressed in terms of the cosine of the angle between them:

\[ \mathbf{a} \cdot \mathbf{b} = |\mathbf{a}| |\mathbf{b}| \cos \theta \]

If we restrict \( \mathbf{a} \) and \( \mathbf{b} \) to be unit vectors:

\[ \mathbf{a} \cdot \mathbf{b} = \cos \theta \]

We can now rewrite \( \mathbf{R}'_{\bot} \) in terms of known quantities:

\[ \mathbf{R}'_{\bot} = \frac{\eta}{\eta'} (\mathbf{R} + (-\mathbf{R} \cdot \mathbf{n}) \mathbf{n}) \]

When we combine them back together, we can write a function to calculate \( \mathbf{R}' \):

diff --git a/src/vec3.rs b/src/vec3.rs
index 4cb0b6f..62afd80 100644
--- a/src/vec3.rs
+++ b/src/vec3.rs
@@ -1,234 +1,243 @@
 use std::{
     fmt::Display,
     ops::{Add, AddAssign, Div, DivAssign, Index, IndexMut, Mul, MulAssign, Neg, Sub},
 };
 
 #[derive(Debug, Default, Clone, Copy)]
 pub struct Vec3 {
     pub e: [f64; 3],
 }
 
 pub type Point3 = Vec3;
 
 impl Vec3 {
     pub fn new(e0: f64, e1: f64, e2: f64) -> Self {
         Self { e: [e0, e1, e2] }
     }
 
     pub fn x(&self) -> f64 {
         self.e[0]
     }
 
     pub fn y(&self) -> f64 {
         self.e[1]
     }
 
     pub fn z(&self) -> f64 {
         self.e[2]
     }
 
     pub fn length(&self) -> f64 {
         f64::sqrt(self.length_squared())
     }
 
     pub fn length_squared(&self) -> f64 {
         self.e[0] * self.e[0] + self.e[1] * self.e[1] + self.e[2] * self.e[2]
     }
 
     pub fn near_zero(&self) -> bool {
         // Return true if the vector is close to zero in all dimensions.
         const S: f64 = 1e-8;
         self.e[0].abs() < S && self.e[1].abs() < S && self.e[2].abs() < S
     }
 
     pub fn random() -> Self {
         Vec3 { e: rand::random() }
     }
 
     pub fn random_range(min: f64, max: f64) -> Self {
         Vec3::new(
             rand::random_range(min..max),
             rand::random_range(min..max),
             rand::random_range(min..max),
         )
     }
 }
 
 impl Neg for Vec3 {
     type Output = Self;
 
     fn neg(self) -> Self::Output {
         Self::Output {
             e: self.e.map(|e| -e),
         }
     }
 }
 
 impl Index<usize> for Vec3 {
     type Output = f64;
 
     fn index(&self, index: usize) -> &Self::Output {
         &self.e[index]
     }
 }
 
 impl IndexMut<usize> for Vec3 {
     fn index_mut(&mut self, index: usize) -> &mut Self::Output {
         &mut self.e[index]
     }
 }
 
 impl AddAssign for Vec3 {
     fn add_assign(&mut self, rhs: Self) {
         self.e[0] += rhs.e[0];
         self.e[1] += rhs.e[1];
         self.e[2] += rhs.e[2];
     }
 }
 
 impl MulAssign<f64> for Vec3 {
     fn mul_assign(&mut self, rhs: f64) {
         self.e[0] *= rhs;
         self.e[1] *= rhs;
         self.e[2] *= rhs;
     }
 }
 
 impl DivAssign<f64> for Vec3 {
     fn div_assign(&mut self, rhs: f64) {
         self.mul_assign(1.0 / rhs);
     }
 }
 
 impl Display for Vec3 {
     fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
         write!(f, "{} {} {}", self.e[0], self.e[1], self.e[2])
     }
 }
 
 impl Add for Vec3 {
     type Output = Self;
 
     fn add(self, rhs: Self) -> Self::Output {
         Self::Output {
             e: [
                 self.e[0] + rhs.e[0],
                 self.e[1] + rhs.e[1],
                 self.e[2] + rhs.e[2],
             ],
         }
     }
 }
 
 impl Sub for Vec3 {
     type Output = Self;
 
     fn sub(self, rhs: Self) -> Self::Output {
         Self::Output {
             e: [
                 self.e[0] - rhs.e[0],
                 self.e[1] - rhs.e[1],
                 self.e[2] - rhs.e[2],
             ],
         }
     }
 }
 
 impl Mul for Vec3 {
     type Output = Self;
 
     fn mul(self, rhs: Self) -> Self::Output {
         Self::Output {
             e: [
                 self.e[0] * rhs.e[0],
                 self.e[1] * rhs.e[1],
                 self.e[2] * rhs.e[2],
             ],
         }
     }
 }
 
 impl Mul<f64> for Vec3 {
     type Output = Self;
 
     fn mul(self, rhs: f64) -> Self::Output {
         Self::Output {
             e: [self.e[0] * rhs, self.e[1] * rhs, self.e[2] * rhs],
         }
     }
 }
 
 impl Mul<Vec3> for f64 {
     type Output = Vec3;
 
     fn mul(self, rhs: Vec3) -> Self::Output {
         rhs.mul(self)
     }
 }
 
 impl Div<f64> for Vec3 {
     type Output = Self;
 
     fn div(self, rhs: f64) -> Self::Output {
         self * (1.0 / rhs)
     }
 }
 
 #[inline]
 pub fn dot(u: Vec3, v: Vec3) -> f64 {
     u.e[0] * v.e[0] + u.e[1] * v.e[1] + u.e[2] * v.e[2]
 }
 
 #[inline]
 pub fn cross(u: Vec3, v: Vec3) -> Vec3 {
     Vec3::new(
         u.e[1] * v.e[2] - u.e[2] * v.e[1],
         u.e[2] * v.e[0] - u.e[0] * v.e[2],
         u.e[0] * v.e[1] - u.e[1] * v.e[0],
     )
 }
 
 #[inline]
 pub fn unit_vector(v: Vec3) -> Vec3 {
     v / v.length()
 }
 
 #[inline]
 pub fn random_unit_vector() -> Vec3 {
     loop {
         let p = Vec3::random_range(-1.0, 1.0);
         let lensq = p.length_squared();
         if 1e-160 < lensq && lensq <= 1.0 {
             return p / f64::sqrt(lensq);
         }
     }
 }
 
 #[inline]
 pub fn random_on_hemisphere(normal: Vec3) -> Vec3 {
     let on_unit_sphere = random_unit_vector();
     if dot(on_unit_sphere, normal) > 0.0 {
         on_unit_sphere
     } else {
         -on_unit_sphere
     }
 }
 
 #[inline]
 pub fn reflect(v: Vec3, n: Vec3) -> Vec3 {
     v - 2.0 * dot(v, n) * n
 }
 
 #[inline]
+pub fn refract(uv: Vec3, n: Vec3, etai_over_etat: f64) -> Vec3 {
+    let cos_theta = f64::min(dot(-uv, n), 1.0);
+    let r_out_perp = etai_over_etat * (uv + cos_theta * n);
+    let r_out_parallel = -f64::sqrt(f64::abs(1.0 - r_out_perp.length_squared())) * n;
+
+    r_out_perp + r_out_parallel
+}
+
+#[inline]
 pub fn random_in_unit_disk() -> Vec3 {
     loop {
         let p = Vec3::new(
             rand::random_range(-1.0..1.0),
             rand::random_range(-1.0..1.0),
             0.0,
         );
         if p.length_squared() < 1.0 {
             return p;
         }
     }
 }

Listing 71: [vec3.rs] Refraction function


And the dielectric material that always refracts is:

diff --git a/src/material.rs b/src/material.rs
index e52f6f3..090a4f6 100644
--- a/src/material.rs
+++ b/src/material.rs
@@ -1,58 +1,89 @@
 use crate::{hittable::HitRecord, prelude::*};
 
 pub trait Material {
     fn scatter(&self, _r_in: Ray, _rec: HitRecord) -> Option<(Ray, Color)> {
         None
     }
 }
 
 #[derive(Debug, Default, Clone, Copy)]
 pub struct Lambertian {
     albedo: Color,
 }
 
 impl Lambertian {
     pub fn new(albedo: Color) -> Self {
         Self { albedo }
     }
 }
 
 impl Material for Lambertian {
     fn scatter(&self, _r_in: Ray, rec: HitRecord) -> Option<(Ray, Color)> {
         let mut scatter_direction = rec.normal + random_unit_vector();
 
         // Catch degenerate scatter direction
         if scatter_direction.near_zero() {
             scatter_direction = rec.normal;
         }
 
         let scattered = Ray::new(rec.p, scatter_direction);
         let attenuation = self.albedo;
 
         Some((scattered, attenuation))
     }
 }
 
 #[derive(Debug, Default, Clone, Copy)]
 pub struct Metal {
     albedo: Color,
     fuzz: f64,
 }
 
 impl Metal {
     pub fn new(albedo: Color, fuzz: f64) -> Self {
         let fuzz = if fuzz < 1.0 { fuzz } else { 1.0 };
         Self { albedo, fuzz }
     }
 }
 
 impl Material for Metal {
     fn scatter(&self, r_in: Ray, rec: HitRecord) -> Option<(Ray, Color)> {
         let mut reflected = reflect(r_in.direction(), rec.normal);
         reflected = unit_vector(reflected) + (self.fuzz * random_unit_vector());
         let scattered = Ray::new(rec.p, reflected);
         let attenuation = self.albedo;
 
         (dot(scattered.direction(), rec.normal) > 0.0).then(|| (scattered, attenuation))
     }
 }
+
+#[derive(Debug, Default, Clone, Copy)]
+pub struct Dielectric {
+    /// Refractive index in vacuum or air, or the ratio of the material's refractive index over
+    /// the refractive index of the enclosing media
+    refraction_index: f64,
+}
+
+impl Dielectric {
+    pub fn new(refraction_index: f64) -> Self {
+        Self { refraction_index }
+    }
+}
+
+impl Material for Dielectric {
+    fn scatter(&self, r_in: Ray, rec: HitRecord) -> Option<(Ray, Color)> {
+        let attenuation = Color::new(1.0, 1.0, 1.0);
+        let ri = if rec.front_face {
+            1.0 / self.refraction_index
+        } else {
+            self.refraction_index
+        };
+
+        let unit_direction = unit_vector(r_in.direction());
+        let refracted = refract(unit_direction, rec.normal, ri);
+
+        let scattered = Ray::new(rec.p, refracted);
+
+        Some((scattered, attenuation))
+    }
+}

Listing 72: [material.rs] Dielectric material class that always refracts


Now we'll update the scene to illustrate refraction by changing the left sphere to glass, which has an index of refraction of approximately 1.5.

diff --git a/src/main.rs b/src/main.rs
index a705a06..3894014 100644
--- a/src/main.rs
+++ b/src/main.rs
@@ -1,46 +1,46 @@
 use code::{
     camera::Camera,
     hittable_list::HittableList,
-    material::{Lambertian, Metal},
+    material::{Dielectric, Lambertian, Metal},
     prelude::*,
     sphere::Sphere,
 };
 
 fn main() -> std::io::Result<()> {
     let mut world = HittableList::new();
 
     let material_ground = Rc::new(Lambertian::new(Color::new(0.8, 0.8, 0.0)));
     let material_center = Rc::new(Lambertian::new(Color::new(0.1, 0.2, 0.5)));
-    let material_left = Rc::new(Metal::new(Color::new(0.8, 0.8, 0.8), 0.3));
+    let material_left = Rc::new(Dielectric::new(1.5));
     let material_right = Rc::new(Metal::new(Color::new(0.8, 0.6, 0.2), 1.0));
 
     world.add(Rc::new(Sphere::new(
         Point3::new(0.0, -100.5, -1.0),
         100.0,
         material_ground,
     )));
     world.add(Rc::new(Sphere::new(
         Point3::new(0.0, 0.0, -1.2),
         0.5,
         material_center,
     )));
     world.add(Rc::new(Sphere::new(
         Point3::new(-1.0, 0.0, -1.0),
         0.5,
         material_left,
     )));
     world.add(Rc::new(Sphere::new(
         Point3::new(1.0, 0.0, -1.0),
         0.5,
         material_right,
     )));
 
     env_logger::init();
 
     Camera::default()
         .with_aspect_ratio(16.0 / 9.0)
         .with_image_width(400)
         .with_samples_per_pixel(100)
         .with_max_depth(50)
         .render(&world)
 }

Listing 73: [main.rs] Changing the left sphere to glass


This gives us the following result:

Image 16: Glass sphere that always refracts


Total Internal Reflection

One troublesome practical issue with refraction is that there are ray angles for which no solution is possible using Snell's law. When a ray enters a medium of lower index of refraction at a sufficiently glancing angle, it can refract with an angle greater than 90°. If we refer back to Snell's law and the derivation of \( \sin \theta' \):

\[ \sin \theta' = \frac{\eta}{\eta'} \cdot \sin \theta \]

If the ray is inside glass and outside is air (\( \eta = 1.5 \) and \( \eta' = 1.0 \)):

\[ \sin \theta' = \frac{1.5}{1.0} \cdot \sin \theta \]

The value of \( \sin \theta' \) cannot be greater than 1. So, if

\[ \frac{1.5}{1.0} \cdot \sin \theta > 1.0 \]

the equality between the two sides of the equation is broken, and a solution cannot exist. For instance, at \( \theta = 45° \) inside the glass, \( 1.5 \cdot \sin 45° \approx 1.06 > 1 \), so there is no refracted angle. If a solution does not exist, the glass cannot refract, and therefore must reflect the ray:

use crate::{hittable::HitRecord, prelude::*};

pub trait Material {
    fn scatter(&self, _r_in: Ray, _rec: HitRecord) -> Option<(Ray, Color)> {
        None
    }
}

#[derive(Debug, Default, Clone, Copy)]
pub struct Lambertian {
    albedo: Color,
}

impl Lambertian {
    pub fn new(albedo: Color) -> Self {
        Self { albedo }
    }
}

impl Material for Lambertian {
    fn scatter(&self, _r_in: Ray, rec: HitRecord) -> Option<(Ray, Color)> {
        let mut scatter_direction = rec.normal + random_unit_vector();

        // Catch degenerate scatter direction
        if scatter_direction.near_zero() {
            scatter_direction = rec.normal;
        }

        let scattered = Ray::new(rec.p, scatter_direction);
        let attenuation = self.albedo;

        Some((scattered, attenuation))
    }
}

#[derive(Debug, Default, Clone, Copy)]
pub struct Metal {
    albedo: Color,
    fuzz: f64,
}

impl Metal {
    pub fn new(albedo: Color, fuzz: f64) -> Self {
        let fuzz = if fuzz < 1.0 { fuzz } else { 1.0 };
        Self { albedo, fuzz }
    }
}

impl Material for Metal {
    fn scatter(&self, r_in: Ray, rec: HitRecord) -> Option<(Ray, Color)> {
        let mut reflected = reflect(r_in.direction(), rec.normal);
        reflected = unit_vector(reflected) + (self.fuzz * random_unit_vector());
        let scattered = Ray::new(rec.p, reflected);
        let attenuation = self.albedo;

        (dot(scattered.direction(), rec.normal) > 0.0).then(|| (scattered, attenuation))
    }
}

#[derive(Debug, Default, Clone, Copy)]
pub struct Dielectric {
    /// Refractive index in vacuum or air, or the ratio of the material's refractive index over
    /// the refractive index of the enclosing media
    refraction_index: f64,
}

impl Dielectric {
    pub fn new(refraction_index: f64) -> Self {
        Self { refraction_index }
    }
}

impl Material for Dielectric {
    fn scatter(&self, r_in: Ray, rec: HitRecord) -> Option<(Ray, Color)> {
        let attenuation = Color::new(1.0, 1.0, 1.0);
        let ri = if rec.front_face {
            1.0 / self.refraction_index
        } else {
            self.refraction_index
        };

        let unit_direction = unit_vector(r_in.direction());
        let direction;

        let cos_theta = f64::min(dot(-unit_direction, rec.normal), 1.0);
        let sin_theta = f64::sqrt(1.0 - cos_theta * cos_theta);

        if ri * sin_theta > 1.0 {
            // Must Reflect
            direction = reflect(unit_direction, rec.normal);
        } else {
            // Can Refract
            direction = refract(unit_direction, rec.normal, ri);
        }

        let scattered = Ray::new(rec.p, direction);

        Some((scattered, attenuation))
    }
}

Listing 74: [material.rs] Determining if the ray can refract


Here all the light is reflected, and because in practice that is usually inside solid objects, it is called total internal reflection. This is why sometimes the water-to-air boundary acts as a perfect mirror when you are submerged — if you're under water looking up, you can see things above the water, but when you are close to the surface and looking sideways, the water surface looks like a mirror.
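
To put a number on it (a quick worked example using only Snell's law): the boundary case is the critical angle \( \theta_c \), where the refracted ray would graze the surface:

\[ \sin \theta_c = \frac{1.0}{1.5} \quad \Rightarrow \quad \theta_c \approx 41.8° \]

Inside glass, any ray more than about \( 41.8° \) from the normal is totally reflected; for water (\( \eta \approx 1.33 \)) the critical angle is about \( 48.8° \), which is why the mirror effect only shows up at sufficiently glancing views.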

We can solve for sin_theta using the trigonometric identities:

\[ \sin \theta = \sqrt{ 1 - \cos^2 \theta } \]

and

\[ \cos \theta = -\mathbf{R} \cdot \mathbf{n} \]

where \( \mathbf{R} \) is the unit incoming ray direction, which points into the surface (hence the dot(-unit_direction, rec.normal) in the code).

impl Material for Dielectric {
    fn scatter(&self, r_in: Ray, rec: HitRecord) -> Option<(Ray, Color)> {
        let attenuation = Color::new(1.0, 1.0, 1.0);
        let ri = if rec.front_face {
            1.0 / self.refraction_index
        } else {
            self.refraction_index
        };

        let unit_direction = unit_vector(r_in.direction());
        let direction;

        let cos_theta = f64::min(dot(-unit_direction, rec.normal), 1.0);
        let sin_theta = f64::sqrt(1.0 - cos_theta * cos_theta);

        if ri * sin_theta > 1.0 {
            // Must Reflect
            direction = reflect(unit_direction, rec.normal);
        } else {
            // Can Refract
            direction = refract(unit_direction, rec.normal, ri);
        }

        let scattered = Ray::new(rec.p, direction);

        Some((scattered, attenuation))
    }
}

Listing 75: [material.rs] Determining if the ray can refract


And the dielectric material that always refracts (when possible) is:

diff --git a/src/material.rs b/src/material.rs
index 090a4f6..951f550 100644
--- a/src/material.rs
+++ b/src/material.rs
@@ -1,89 +1,97 @@
 use crate::{hittable::HitRecord, prelude::*};
 
 pub trait Material {
     fn scatter(&self, _r_in: Ray, _rec: HitRecord) -> Option<(Ray, Color)> {
         None
     }
 }
 
 #[derive(Debug, Default, Clone, Copy)]
 pub struct Lambertian {
     albedo: Color,
 }
 
 impl Lambertian {
     pub fn new(albedo: Color) -> Self {
         Self { albedo }
     }
 }
 
 impl Material for Lambertian {
     fn scatter(&self, _r_in: Ray, rec: HitRecord) -> Option<(Ray, Color)> {
         let mut scatter_direction = rec.normal + random_unit_vector();
 
         // Catch degenerate scatter direction
         if scatter_direction.near_zero() {
             scatter_direction = rec.normal;
         }
 
         let scattered = Ray::new(rec.p, scatter_direction);
         let attenuation = self.albedo;
 
         Some((scattered, attenuation))
     }
 }
 
 #[derive(Debug, Default, Clone, Copy)]
 pub struct Metal {
     albedo: Color,
     fuzz: f64,
 }
 
 impl Metal {
     pub fn new(albedo: Color, fuzz: f64) -> Self {
         let fuzz = if fuzz < 1.0 { fuzz } else { 1.0 };
         Self { albedo, fuzz }
     }
 }
 
 impl Material for Metal {
     fn scatter(&self, r_in: Ray, rec: HitRecord) -> Option<(Ray, Color)> {
         let mut reflected = reflect(r_in.direction(), rec.normal);
         reflected = unit_vector(reflected) + (self.fuzz * random_unit_vector());
         let scattered = Ray::new(rec.p, reflected);
         let attenuation = self.albedo;
 
         (dot(scattered.direction(), rec.normal) > 0.0).then(|| (scattered, attenuation))
     }
 }
 
 #[derive(Debug, Default, Clone, Copy)]
 pub struct Dielectric {
     /// Refractive index in vacuum or air, or the ratio of the material's refractive index over
     /// the refractive index of the enclosing media
     refraction_index: f64,
 }
 
 impl Dielectric {
     pub fn new(refraction_index: f64) -> Self {
         Self { refraction_index }
     }
 }
 
 impl Material for Dielectric {
     fn scatter(&self, r_in: Ray, rec: HitRecord) -> Option<(Ray, Color)> {
         let attenuation = Color::new(1.0, 1.0, 1.0);
         let ri = if rec.front_face {
             1.0 / self.refraction_index
         } else {
             self.refraction_index
         };
 
         let unit_direction = unit_vector(r_in.direction());
-        let refracted = refract(unit_direction, rec.normal, ri);
+        let cos_theta = f64::min(dot(-unit_direction, rec.normal), 1.0);
+        let sin_theta = f64::sqrt(1.0 - cos_theta * cos_theta);
 
-        let scattered = Ray::new(rec.p, refracted);
+        let cannot_refract = ri * sin_theta > 1.0;
+        let direction = if cannot_refract {
+            reflect(unit_direction, rec.normal)
+        } else {
+            refract(unit_direction, rec.normal, ri)
+        };
+
+        let scattered = Ray::new(rec.p, direction);
 
         Some((scattered, attenuation))
     }
 }

Listing 76: [material.rs] Dielectric material class with reflection


Attenuation is always 1 — the glass surface absorbs nothing.

If we render the prior scene with the new Dielectric::scatter() method, we see … no change. Huh?

Well, it turns out that given a sphere of material with an index of refraction greater than air, there's no incident angle that will yield total internal reflection — neither at the ray-sphere entrance point nor at the ray exit. This is due to the geometry of spheres, as a grazing incoming ray will always be bent to a smaller angle, and then bent back to the original angle on exit: entering the sphere, \( \sin \theta' = \sin \theta / 1.5 \), and by the symmetry of the chord through the sphere the ray meets the exit surface at that same interior angle \( \theta' \), so the exit test \( 1.5 \cdot \sin \theta' = \sin \theta \le 1 \) always has a solution and the reflect branch never triggers.

So how can we illustrate total internal reflection? Well, if the sphere has an index of refraction less than the medium it's in, then we can hit it with shallow grazing angles, getting total external reflection. That should be good enough to observe the effect.

We'll model a world filled with water (index of refraction approximately 1.33), and change the sphere material to air (index of refraction 1.00) — an air bubble! To do this, change the left sphere material's index of refraction to

\[ \frac{\text{index of refraction of air}}{\text{index of refraction of water}} \]

With this ratio, a camera ray hitting the bubble from the water side uses \( ri = 1.33 \) in scatter(), so the cannot-refract branch triggers once \( 1.33 \cdot \sin \theta > 1 \), i.e. for incidence angles beyond roughly \( 48.8° \).

diff --git a/src/main.rs b/src/main.rs
index 3894014..6e42461 100644
--- a/src/main.rs
+++ b/src/main.rs
@@ -1,46 +1,46 @@
 use code::{
     camera::Camera,
     hittable_list::HittableList,
     material::{Dielectric, Lambertian, Metal},
     prelude::*,
     sphere::Sphere,
 };
 
 fn main() -> std::io::Result<()> {
     let mut world = HittableList::new();
 
     let material_ground = Rc::new(Lambertian::new(Color::new(0.8, 0.8, 0.0)));
     let material_center = Rc::new(Lambertian::new(Color::new(0.1, 0.2, 0.5)));
-    let material_left = Rc::new(Dielectric::new(1.5));
+    let material_left = Rc::new(Dielectric::new(1.0 / 1.33));
     let material_right = Rc::new(Metal::new(Color::new(0.8, 0.6, 0.2), 1.0));
 
     world.add(Rc::new(Sphere::new(
         Point3::new(0.0, -100.5, -1.0),
         100.0,
         material_ground,
     )));
     world.add(Rc::new(Sphere::new(
         Point3::new(0.0, 0.0, -1.2),
         0.5,
         material_center,
     )));
     world.add(Rc::new(Sphere::new(
         Point3::new(-1.0, 0.0, -1.0),
         0.5,
         material_left,
     )));
     world.add(Rc::new(Sphere::new(
         Point3::new(1.0, 0.0, -1.0),
         0.5,
         material_right,
     )));
 
     env_logger::init();
 
     Camera::default()
         .with_aspect_ratio(16.0 / 9.0)
         .with_image_width(400)
         .with_samples_per_pixel(100)
         .with_max_depth(50)
         .render(&world)
 }

Listing 77: [main.rs] Left sphere is an air bubble in water


This change yields the following render:

Image 17: Air bubble sometimes refracts, sometimes reflects


Here you can see that more-or-less direct rays refract, while glancing rays reflect.

Schlick Approximation

Now real glass has reflectivity that varies with angle — look at a window at a steep angle and it becomes a mirror. There is a big ugly equation for that, but almost everybody uses a cheap and surprisingly accurate polynomial approximation by Christophe Schlick.
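
For reference, Schlick's approximation gives the reflectance at incidence angle \( \theta \) as

\[ R(\theta) \approx R_0 + (1 - R_0)(1 - \cos \theta)^5, \qquad R_0 = \left( \frac{1 - n}{1 + n} \right)^2 \]

where \( n \) is the relative refraction index (the ri in our scatter code). At normal incidence this gives \( R_0 = 0.04 \) for glass, so about 4% of head-on light reflects. This yields our full glass material: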

diff --git a/src/material.rs b/src/material.rs
index 951f550..20477f9 100644
--- a/src/material.rs
+++ b/src/material.rs
@@ -1,97 +1,104 @@
 use crate::{hittable::HitRecord, prelude::*};
 
 pub trait Material {
     fn scatter(&self, _r_in: Ray, _rec: HitRecord) -> Option<(Ray, Color)> {
         None
     }
 }
 
 #[derive(Debug, Default, Clone, Copy)]
 pub struct Lambertian {
     albedo: Color,
 }
 
 impl Lambertian {
     pub fn new(albedo: Color) -> Self {
         Self { albedo }
     }
 }
 
 impl Material for Lambertian {
     fn scatter(&self, _r_in: Ray, rec: HitRecord) -> Option<(Ray, Color)> {
         let mut scatter_direction = rec.normal + random_unit_vector();
 
         // Catch degenerate scatter direction
         if scatter_direction.near_zero() {
             scatter_direction = rec.normal;
         }
 
         let scattered = Ray::new(rec.p, scatter_direction);
         let attenuation = self.albedo;
 
         Some((scattered, attenuation))
     }
 }
 
 #[derive(Debug, Default, Clone, Copy)]
 pub struct Metal {
     albedo: Color,
     fuzz: f64,
 }
 
 impl Metal {
     pub fn new(albedo: Color, fuzz: f64) -> Self {
         let fuzz = if fuzz < 1.0 { fuzz } else { 1.0 };
         Self { albedo, fuzz }
     }
 }
 
 impl Material for Metal {
     fn scatter(&self, r_in: Ray, rec: HitRecord) -> Option<(Ray, Color)> {
         let mut reflected = reflect(r_in.direction(), rec.normal);
         reflected = unit_vector(reflected) + (self.fuzz * random_unit_vector());
         let scattered = Ray::new(rec.p, reflected);
         let attenuation = self.albedo;
 
         (dot(scattered.direction(), rec.normal) > 0.0).then(|| (scattered, attenuation))
     }
 }
 
 #[derive(Debug, Default, Clone, Copy)]
 pub struct Dielectric {
     /// Refractive index in vacuum or air, or the ratio of the material's refractive index over
     /// the refractive index of the enclosing media
     refraction_index: f64,
 }
 
 impl Dielectric {
     pub fn new(refraction_index: f64) -> Self {
         Self { refraction_index }
     }
+
+    fn reflectance(cosine: f64, refraction_index: f64) -> f64 {
+        let r0 = (1.0 - refraction_index) / (1.0 + refraction_index);
+        let r0 = r0 * r0;
+        r0 + (1.0 - r0) * f64::powi(1.0 - cosine, 5)
+    }
 }
 
 impl Material for Dielectric {
     fn scatter(&self, r_in: Ray, rec: HitRecord) -> Option<(Ray, Color)> {
         let attenuation = Color::new(1.0, 1.0, 1.0);
         let ri = if rec.front_face {
             1.0 / self.refraction_index
         } else {
             self.refraction_index
         };
 
         let unit_direction = unit_vector(r_in.direction());
         let cos_theta = f64::min(dot(-unit_direction, rec.normal), 1.0);
         let sin_theta = f64::sqrt(1.0 - cos_theta * cos_theta);
 
         let cannot_refract = ri * sin_theta > 1.0;
-        let direction = if cannot_refract {
+        let direction = if cannot_refract || Dielectric::reflectance(cos_theta, ri) > rand::random()
+        {
             reflect(unit_direction, rec.normal)
         } else {
             refract(unit_direction, rec.normal, ri)
         };
 
         let scattered = Ray::new(rec.p, direction);
 
         Some((scattered, attenuation))
     }
 }

Listing 78: [material.rs] Full glass material


Modeling a Hollow Glass Sphere

Let's model a hollow glass sphere. This is a sphere of some thickness with another sphere of air inside it. If you think about the path of a ray going through such an object, it will hit the outer sphere, refract, hit the inner sphere (assuming we do hit it), refract a second time, and travel through the air inside. Then it will continue on, hit the inside surface of the inner sphere, refract back, then hit the inside surface of the outer sphere, and finally refract and exit back into the scene atmosphere.

The outer sphere is just modeled with a standard glass sphere, with a refractive index of around 1.50 (modeling a refraction from the outside air into glass). The inner sphere is a bit different because its refractive index should be relative to the material of the surrounding outer sphere, thus modeling a transition from glass into the inner air.

This is actually simple to specify, as the refraction_index parameter to the dielectric material can be interpreted as the ratio of the refractive index of the object divided by the refractive index of the enclosing medium. In this case, the inner sphere would have a refractive index of air (the inner sphere material) over the index of refraction of glass (the enclosing medium), or \( 1.00/1.50 = 0.67 \).

Here's the code:

diff --git a/src/main.rs b/src/main.rs
index 6e42461..c0d6703 100644
--- a/src/main.rs
+++ b/src/main.rs
@@ -1,46 +1,52 @@
 use code::{
     camera::Camera,
     hittable_list::HittableList,
     material::{Dielectric, Lambertian, Metal},
     prelude::*,
     sphere::Sphere,
 };
 
 fn main() -> std::io::Result<()> {
     let mut world = HittableList::new();
 
     let material_ground = Rc::new(Lambertian::new(Color::new(0.8, 0.8, 0.0)));
     let material_center = Rc::new(Lambertian::new(Color::new(0.1, 0.2, 0.5)));
-    let material_left = Rc::new(Dielectric::new(1.0 / 1.33));
+    let material_left = Rc::new(Dielectric::new(1.5));
+    let material_bubble = Rc::new(Dielectric::new(1.0 / 1.5));
     let material_right = Rc::new(Metal::new(Color::new(0.8, 0.6, 0.2), 1.0));
 
     world.add(Rc::new(Sphere::new(
         Point3::new(0.0, -100.5, -1.0),
         100.0,
         material_ground,
     )));
     world.add(Rc::new(Sphere::new(
         Point3::new(0.0, 0.0, -1.2),
         0.5,
         material_center,
     )));
     world.add(Rc::new(Sphere::new(
         Point3::new(-1.0, 0.0, -1.0),
         0.5,
         material_left,
     )));
     world.add(Rc::new(Sphere::new(
+        Point3::new(-1.0, 0.0, -1.0),
+        0.4,
+        material_bubble,
+    )));
+    world.add(Rc::new(Sphere::new(
         Point3::new(1.0, 0.0, -1.0),
         0.5,
         material_right,
     )));
 
     env_logger::init();
 
     Camera::default()
         .with_aspect_ratio(16.0 / 9.0)
         .with_image_width(400)
         .with_samples_per_pixel(100)
         .with_max_depth(50)
         .render(&world)
 }

Listing 79: [main.rs] Scene with hollow glass sphere


And here's the result:

Image 18: A hollow glass sphere


Positionable Camera

Cameras, like dielectrics, are a pain to debug, so I always develop mine incrementally. First, let’s allow for an adjustable field of view (fov). This is the visual angle from edge to edge of the rendered image. Since our image is not square, the fov is different horizontally and vertically. I always use vertical fov. I also usually specify it in degrees and change to radians inside a constructor — a matter of personal taste.

Camera Viewing Geometry

First, we'll keep the rays coming from the origin and heading to the \( z = -1 \) plane. We could make it the \( z = -2 \) plane, or whatever, as long as we made \( h \) a ratio to that distance. Here is our setup:

Figure 18: Camera viewing geometry (from the side)


This implies \( h = \tan (\frac{\theta}{2}) \). As a sanity check, a vertical fov of \( 90° \) gives \( h = \tan(45°) = 1 \) and a viewport height of 2.0, exactly the value we had hard-coded before. Our camera now becomes:

diff --git a/src/camera.rs b/src/camera.rs
index 1927898..8e256aa 100644
--- a/src/camera.rs
+++ b/src/camera.rs
@@ -1,168 +1,179 @@
 use crate::{hittable::Hittable, prelude::*};
 
 pub struct Camera {
     /// Ratio of image width over height
     pub aspect_ratio: f64,
     /// Rendered image width in pixel count
     pub image_width: i32,
     // Count of random samples for each pixel
     pub samples_per_pixel: i32,
     // Maximum number of ray bounces into scene
     pub max_depth: i32,
+    // Vertical view angle (field of view)
+    pub vfov: f64,
 
     /// Rendered image height
     image_height: i32,
     // Color scale factor for a sum of pixel samples
     pixel_samples_scale: f64,
     /// Camera center
     center: Point3,
     /// Location of pixel 0, 0
     pixel00_loc: Point3,
     /// Offset to pixel to the right
     pixel_delta_u: Vec3,
     /// Offset to pixel below
     pixel_delta_v: Vec3,
 }
 
 impl Default for Camera {
     fn default() -> Self {
         Self {
             aspect_ratio: 1.0,
             image_width: 100,
             samples_per_pixel: 10,
             max_depth: 10,
+            vfov: 90.0,
             image_height: Default::default(),
             pixel_samples_scale: Default::default(),
             center: Default::default(),
             pixel00_loc: Default::default(),
             pixel_delta_u: Default::default(),
             pixel_delta_v: Default::default(),
         }
     }
 }
 
 impl Camera {
     pub fn with_aspect_ratio(mut self, aspect_ratio: f64) -> Self {
         self.aspect_ratio = aspect_ratio;
 
         self
     }
 
     pub fn with_image_width(mut self, image_width: i32) -> Self {
         self.image_width = image_width;
 
         self
     }
 
     pub fn with_samples_per_pixel(mut self, samples_per_pixel: i32) -> Self {
         self.samples_per_pixel = samples_per_pixel;
 
         self
     }
 
     pub fn with_max_depth(mut self, max_depth: i32) -> Self {
         self.max_depth = max_depth;
 
         self
     }
 
+    pub fn with_vfov(mut self, vfov: f64) -> Self {
+        self.vfov = vfov;
+
+        self
+    }
+
     pub fn render(&mut self, world: &impl Hittable) -> std::io::Result<()> {
         self.initialize();
 
         println!("P3");
         println!("{} {}", self.image_width, self.image_height);
         println!("255");
 
         for j in 0..self.image_height {
             info!("Scanlines remaining: {}", self.image_height - j);
             for i in 0..self.image_width {
                 let mut pixel_color = Color::new(0.0, 0.0, 0.0);
                 for _sample in 0..self.samples_per_pixel {
                     let r = self.get_ray(i, j);
                     pixel_color += Self::ray_color(r, self.max_depth, world);
                 }
                 write_color(std::io::stdout(), self.pixel_samples_scale * pixel_color)?;
             }
         }
         info!("Done.");
 
         Ok(())
     }
 
     fn initialize(&mut self) {
         self.image_height = {
             let image_height = (self.image_width as f64 / self.aspect_ratio) as i32;
             if image_height < 1 { 1 } else { image_height }
         };
 
         self.pixel_samples_scale = 1.0 / self.samples_per_pixel as f64;
 
         self.center = Point3::new(0.0, 0.0, 0.0);
 
         // Determine viewport dimensions.
         let focal_length = 1.0;
-        let viewport_height = 2.0;
+        let theta = self.vfov.to_radians();
+        let h = f64::tan(theta / 2.0);
+        let viewport_height = 2.0 * h * focal_length;
         let viewport_width =
             viewport_height * (self.image_width as f64) / (self.image_height as f64);
 
         // Calculate the vectors across the horizontal and down the vertical viewport edges.
         let viewport_u = Vec3::new(viewport_width, 0.0, 0.0);
         let viewport_v = Vec3::new(0.0, -viewport_height, 0.0);
 
         // Calculate the horizontal and vertical delta vectors from pixel to pixel.
         self.pixel_delta_u = viewport_u / self.image_width as f64;
         self.pixel_delta_v = viewport_v / self.image_height as f64;
 
         // Calculate the location of the upper left pixel.
         let viewport_upper_left =
             self.center - Vec3::new(0.0, 0.0, focal_length) - viewport_u / 2.0 - viewport_v / 2.0;
         self.pixel00_loc = viewport_upper_left + 0.5 * (self.pixel_delta_u + self.pixel_delta_v);
     }
 
     fn get_ray(&self, i: i32, j: i32) -> Ray {
         // Construct a camera ray originating from the origin and directed at randomly sampled
         // point around the pixel location i, j.
 
         let offset = Self::sample_square();
         let pixel_sample = self.pixel00_loc
             + ((i as f64 + offset.x()) * self.pixel_delta_u)
             + ((j as f64 + offset.y()) * self.pixel_delta_v);
 
         let ray_origin = self.center;
         let ray_direction = pixel_sample - ray_origin;
 
         Ray::new(ray_origin, ray_direction)
     }
 
     fn sample_square() -> Vec3 {
         // Returns the vector to a random point in the [-.5,-.5]-[+.5,+.5] unit square.
         Vec3::new(
             rand::random::<f64>() - 0.5,
             rand::random::<f64>() - 0.5,
             0.0,
         )
     }
 
     fn _sample_disk(radius: f64) -> Vec3 {
         // Returns a random point in the unit (radius 0.5) disk centered at the origin.
         radius * random_in_unit_disk()
     }
 
     fn ray_color(r: Ray, depth: i32, world: &impl Hittable) -> Color {
         // If we've exceeded the ray bounce limit, no more light is gathered.
         if depth <= 0 {
             return Color::new(0.0, 0.0, 0.0);
         }
 
         if let Some(rec) = world.hit(r, Interval::new(0.001, INFINITY)) {
             if let Some((scattered, attenuation)) = rec.mat.scatter(r, rec.clone()) {
                 return attenuation * Self::ray_color(scattered, depth - 1, world);
             }
             return Color::new(0.0, 0.0, 0.0);
         }
 
         let unit_direction = unit_vector(r.direction());
         let a = 0.5 * (unit_direction.y() + 1.0);
         (1.0 - a) * Color::new(1.0, 1.0, 1.0) + a * Color::new(0.5, 0.7, 1.0)
     }
 }

Listing 80: [camera.rs] Camera with adjustable field-of-view (fov)


We'll test out these changes with a simple scene of two touching spheres, using a 90° field of view.

diff --git a/src/main.rs b/src/main.rs
index c0d6703..e906c8c 100644
--- a/src/main.rs
+++ b/src/main.rs
@@ -1,52 +1,37 @@
 use code::{
     camera::Camera,
     hittable_list::HittableList,
     material::{Dielectric, Lambertian, Metal},
     prelude::*,
     sphere::Sphere,
 };
 
 fn main() -> std::io::Result<()> {
     let mut world = HittableList::new();
 
-    let material_ground = Rc::new(Lambertian::new(Color::new(0.8, 0.8, 0.0)));
-    let material_center = Rc::new(Lambertian::new(Color::new(0.1, 0.2, 0.5)));
-    let material_left = Rc::new(Dielectric::new(1.5));
-    let material_bubble = Rc::new(Dielectric::new(1.0 / 1.5));
-    let material_right = Rc::new(Metal::new(Color::new(0.8, 0.6, 0.2), 1.0));
+    let r = f64::cos(PI / 4.0);
+
+    let material_left = Rc::new(Lambertian::new(Color::new(0.0, 0.0, 1.0)));
+    let material_right = Rc::new(Lambertian::new(Color::new(1.0, 0.0, 0.0)));
 
     world.add(Rc::new(Sphere::new(
-        Point3::new(0.0, -100.5, -1.0),
-        100.0,
-        material_ground,
-    )));
-    world.add(Rc::new(Sphere::new(
-        Point3::new(0.0, 0.0, -1.2),
-        0.5,
-        material_center,
-    )));
-    world.add(Rc::new(Sphere::new(
-        Point3::new(-1.0, 0.0, -1.0),
-        0.5,
+        Point3::new(-r, 0.0, -1.0),
+        r,
         material_left,
     )));
     world.add(Rc::new(Sphere::new(
-        Point3::new(-1.0, 0.0, -1.0),
-        0.4,
-        material_bubble,
-    )));
-    world.add(Rc::new(Sphere::new(
-        Point3::new(1.0, 0.0, -1.0),
-        0.5,
+        Point3::new(r, 0.0, -1.0),
+        r,
         material_right,
     )));
 
     env_logger::init();
 
     Camera::default()
         .with_aspect_ratio(16.0 / 9.0)
         .with_image_width(400)
         .with_samples_per_pixel(100)
         .with_max_depth(50)
+        .with_vfov(90.0)
         .render(&world)
 }

Listing 81: [main.rs] Scene with wide-angle camera


This gives us the rendering:

Image 19: A wide-angle view


Positioning and Orienting the Camera

To get an arbitrary viewpoint, let’s first name the points we care about. We’ll call the position where we place the camera lookfrom, and the point we look at lookat. (Later, if you want, you could define a direction to look in instead of a point to look at.)

We also need a way to specify the roll, or sideways tilt, of the camera: the rotation around the lookat-lookfrom axis. Another way to think about it is that even if you keep lookfrom and lookat constant, you can still rotate your head around your nose. What we need is a way to specify an “up” vector for the camera.

Figure 19: Camera view direction


We can specify any up vector we want, as long as it's not parallel to the view direction. Project this up vector onto the plane orthogonal to the view direction to get a camera-relative up vector. I use the common convention of naming this the “view up” (vup) vector. After a few cross products and vector normalizations, we now have a complete orthonormal basis \( (u, v, w) \) to describe our camera’s orientation. \( u \) will be the unit vector pointing to camera right, \( v \) is the unit vector pointing to camera up, \( w \) is the unit vector pointing opposite the view direction (since we use right-hand coordinates), and the camera center is at the origin.

Figure 20: Camera view up direction


Like before, when our fixed camera faced \( -Z \), our arbitrary view camera faces \( -w \). Keep in mind that we can — but we don’t have to — use world up \( (0, 1, 0) \) to specify vup. This is convenient and will naturally keep your camera horizontally level until you decide to experiment with crazy camera angles.
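
Distilled to a sketch (a hypothetical free function for illustration only; the real assignments live in Camera::initialize() in the diff below, and this assumes the cross() and unit_vector() helpers from vec3.rs):

fn camera_basis(lookfrom: Point3, lookat: Point3, vup: Vec3) -> (Vec3, Vec3, Vec3) {
    let w = unit_vector(lookfrom - lookat); // points opposite the view direction
    let u = unit_vector(cross(vup, w)); // camera right
    let v = cross(w, u); // camera up; already unit length since w and u are orthonormal
    (u, v, w)
}

With that in hand, here is the full camera update: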

diff --git a/src/camera.rs b/src/camera.rs
index 8e256aa..44da965 100644
--- a/src/camera.rs
+++ b/src/camera.rs
@@ -1,179 +1,220 @@
 use crate::{hittable::Hittable, prelude::*};
 
 pub struct Camera {
     /// Ratio of image width over height
     pub aspect_ratio: f64,
     /// Rendered image width in pixel count
     pub image_width: i32,
     // Count of random samples for each pixel
     pub samples_per_pixel: i32,
     // Maximum number of ray bounces into scene
     pub max_depth: i32,
     // Vertical view angle (field of view)
     pub vfov: f64,
+    /// Point camera is looking from
+    pub lookfrom: Point3,
+    /// Point camera is looking at
+    pub lookat: Point3,
+    /// Camera-relative "up" direction
+    pub vup: Vec3,
 
     /// Rendered image height
     image_height: i32,
     // Color scale factor for a sum of pixel samples
     pixel_samples_scale: f64,
     /// Camera center
     center: Point3,
     /// Location of pixel 0, 0
     pixel00_loc: Point3,
     /// Offset to pixel to the right
     pixel_delta_u: Vec3,
     /// Offset to pixel below
     pixel_delta_v: Vec3,
+    /// Camera frame basis vector - right
+    u: Vec3,
+    /// Camera frame basis vector - up
+    v: Vec3,
+    /// Camera frame basis vector - opposite view direction
+    w: Vec3,
 }
 
 impl Default for Camera {
     fn default() -> Self {
         Self {
             aspect_ratio: 1.0,
             image_width: 100,
             samples_per_pixel: 10,
             max_depth: 10,
             vfov: 90.0,
+            lookfrom: Point3::new(0.0, 0.0, 0.0),
+            lookat: Point3::new(0.0, 0.0, -1.0),
+            vup: Point3::new(0.0, 1.0, 0.0),
             image_height: Default::default(),
             pixel_samples_scale: Default::default(),
             center: Default::default(),
             pixel00_loc: Default::default(),
             pixel_delta_u: Default::default(),
             pixel_delta_v: Default::default(),
+            u: Default::default(),
+            v: Default::default(),
+            w: Default::default(),
         }
     }
 }
 
 impl Camera {
     pub fn with_aspect_ratio(mut self, aspect_ratio: f64) -> Self {
         self.aspect_ratio = aspect_ratio;
 
         self
     }
 
     pub fn with_image_width(mut self, image_width: i32) -> Self {
         self.image_width = image_width;
 
         self
     }
 
     pub fn with_samples_per_pixel(mut self, samples_per_pixel: i32) -> Self {
         self.samples_per_pixel = samples_per_pixel;
 
         self
     }
 
     pub fn with_max_depth(mut self, max_depth: i32) -> Self {
         self.max_depth = max_depth;
 
         self
     }
 
     pub fn with_vfov(mut self, vfov: f64) -> Self {
         self.vfov = vfov;
 
         self
     }
 
+    pub fn with_lookfrom(mut self, lookfrom: Point3) -> Self {
+        self.lookfrom = lookfrom;
+
+        self
+    }
+
+    pub fn with_lookat(mut self, lookat: Point3) -> Self {
+        self.lookat = lookat;
+
+        self
+    }
+
+    pub fn with_vup(mut self, vup: Vec3) -> Self {
+        self.vup = vup;
+
+        self
+    }
+
     pub fn render(&mut self, world: &impl Hittable) -> std::io::Result<()> {
         self.initialize();
 
         println!("P3");
         println!("{} {}", self.image_width, self.image_height);
         println!("255");
 
         for j in 0..self.image_height {
             info!("Scanlines remaining: {}", self.image_height - j);
             for i in 0..self.image_width {
                 let mut pixel_color = Color::new(0.0, 0.0, 0.0);
                 for _sample in 0..self.samples_per_pixel {
                     let r = self.get_ray(i, j);
                     pixel_color += Self::ray_color(r, self.max_depth, world);
                 }
                 write_color(std::io::stdout(), self.pixel_samples_scale * pixel_color)?;
             }
         }
         info!("Done.");
 
         Ok(())
     }
 
     fn initialize(&mut self) {
         self.image_height = {
             let image_height = (self.image_width as f64 / self.aspect_ratio) as i32;
             if image_height < 1 { 1 } else { image_height }
         };
 
         self.pixel_samples_scale = 1.0 / self.samples_per_pixel as f64;
 
-        self.center = Point3::new(0.0, 0.0, 0.0);
+        self.center = self.lookfrom;
 
         // Determine viewport dimensions.
-        let focal_length = 1.0;
+        let focal_length = (self.lookfrom - self.lookat).length();
         let theta = self.vfov.to_radians();
         let h = f64::tan(theta / 2.0);
         let viewport_height = 2.0 * h * focal_length;
         let viewport_width =
             viewport_height * (self.image_width as f64) / (self.image_height as f64);
 
+        // Calculate the u,v,w unit basis vectors for the camera coordinate frame.
+        self.w = unit_vector(self.lookfrom - self.lookat);
+        self.u = unit_vector(cross(self.vup, self.w));
+        self.v = cross(self.w, self.u);
+
         // Calculate the vectors across the horizontal and down the vertical viewport edges.
-        let viewport_u = Vec3::new(viewport_width, 0.0, 0.0);
-        let viewport_v = Vec3::new(0.0, -viewport_height, 0.0);
+        let viewport_u = viewport_width * self.u; // Vector across viewport horizontal edge
+        let viewport_v = viewport_height * -self.v; // Vector down viewport vertical edge
 
         // Calculate the horizontal and vertical delta vectors from pixel to pixel.
         self.pixel_delta_u = viewport_u / self.image_width as f64;
         self.pixel_delta_v = viewport_v / self.image_height as f64;
 
         // Calculate the location of the upper left pixel.
         let viewport_upper_left =
-            self.center - Vec3::new(0.0, 0.0, focal_length) - viewport_u / 2.0 - viewport_v / 2.0;
+            self.center - (focal_length * self.w) - viewport_u / 2.0 - viewport_v / 2.0;
         self.pixel00_loc = viewport_upper_left + 0.5 * (self.pixel_delta_u + self.pixel_delta_v);
     }
 
     fn get_ray(&self, i: i32, j: i32) -> Ray {
         // Construct a camera ray originating from the origin and directed at randomly sampled
         // point around the pixel location i, j.
 
         let offset = Self::sample_square();
         let pixel_sample = self.pixel00_loc
             + ((i as f64 + offset.x()) * self.pixel_delta_u)
             + ((j as f64 + offset.y()) * self.pixel_delta_v);
 
         let ray_origin = self.center;
         let ray_direction = pixel_sample - ray_origin;
 
         Ray::new(ray_origin, ray_direction)
     }
 
     fn sample_square() -> Vec3 {
         // Returns the vector to a random point in the [-.5,-.5]-[+.5,+.5] unit square.
         Vec3::new(
             rand::random::<f64>() - 0.5,
             rand::random::<f64>() - 0.5,
             0.0,
         )
     }
 
     fn _sample_disk(radius: f64) -> Vec3 {
         // Returns a random point in the unit (radius 0.5) disk centered at the origin.
         radius * random_in_unit_disk()
     }
 
     fn ray_color(r: Ray, depth: i32, world: &impl Hittable) -> Color {
         // If we've exceeded the ray bounce limit, no more light is gathered.
         if depth <= 0 {
             return Color::new(0.0, 0.0, 0.0);
         }
 
         if let Some(rec) = world.hit(r, Interval::new(0.001, INFINITY)) {
             if let Some((scattered, attenuation)) = rec.mat.scatter(r, rec.clone()) {
                 return attenuation * Self::ray_color(scattered, depth - 1, world);
             }
             return Color::new(0.0, 0.0, 0.0);
         }
 
         let unit_direction = unit_vector(r.direction());
         let a = 0.5 * (unit_direction.y() + 1.0);
         (1.0 - a) * Color::new(1.0, 1.0, 1.0) + a * Color::new(0.5, 0.7, 1.0)
     }
 }

Listing 82: [camera.rs] Positionable and orientable camera


We'll change back to the prior scene, and use the new viewpoint:

diff --git a/src/main.rs b/src/main.rs
index e906c8c..2e30baf 100644
--- a/src/main.rs
+++ b/src/main.rs
@@ -1,37 +1,56 @@
 use code::{
     camera::Camera,
     hittable_list::HittableList,
     material::{Dielectric, Lambertian, Metal},
     prelude::*,
     sphere::Sphere,
 };
 
 fn main() -> std::io::Result<()> {
     let mut world = HittableList::new();
 
-    let r = f64::cos(PI / 4.0);
-
-    let material_left = Rc::new(Lambertian::new(Color::new(0.0, 0.0, 1.0)));
-    let material_right = Rc::new(Lambertian::new(Color::new(1.0, 0.0, 0.0)));
+    let material_ground = Rc::new(Lambertian::new(Color::new(0.8, 0.8, 0.0)));
+    let material_center = Rc::new(Lambertian::new(Color::new(0.1, 0.2, 0.5)));
+    let material_left = Rc::new(Dielectric::new(1.5));
+    let material_bubble = Rc::new(Dielectric::new(1.0 / 1.5));
+    let material_right = Rc::new(Metal::new(Color::new(0.8, 0.6, 0.2), 1.0));
 
     world.add(Rc::new(Sphere::new(
-        Point3::new(-r, 0.0, -1.0),
-        r,
+        Point3::new(0.0, -100.5, -1.0),
+        100.0,
+        material_ground,
+    )));
+    world.add(Rc::new(Sphere::new(
+        Point3::new(0.0, 0.0, -1.2),
+        0.5,
+        material_center,
+    )));
+    world.add(Rc::new(Sphere::new(
+        Point3::new(-1.0, 0.0, -1.0),
+        0.5,
         material_left,
     )));
     world.add(Rc::new(Sphere::new(
-        Point3::new(r, 0.0, -1.0),
-        r,
+        Point3::new(-1.0, 0.0, -1.0),
+        0.4,
+        material_bubble,
+    )));
+    world.add(Rc::new(Sphere::new(
+        Point3::new(1.0, 0.0, -1.0),
+        0.5,
         material_right,
     )));
 
     env_logger::init();
 
     Camera::default()
         .with_aspect_ratio(16.0 / 9.0)
         .with_image_width(400)
         .with_samples_per_pixel(100)
         .with_max_depth(50)
         .with_vfov(90.0)
+        .with_lookfrom(Point3::new(-2.0, 2.0, 1.0))
+        .with_lookat(Point3::new(0.0, 0.0, -1.0))
+        .with_vup(Point3::new(0.0, 1.0, 0.0))
         .render(&world)
 }

Listing 83: [main.rs] Scene with alternate viewpoint


to get:

Image 20: A distant view


And we can change field of view:

diff --git a/src/main.rs b/src/main.rs
index 2e30baf..f7deb5e 100644
--- a/src/main.rs
+++ b/src/main.rs
@@ -1,56 +1,56 @@
 use code::{
     camera::Camera,
     hittable_list::HittableList,
     material::{Dielectric, Lambertian, Metal},
     prelude::*,
     sphere::Sphere,
 };
 
 fn main() -> std::io::Result<()> {
     let mut world = HittableList::new();
 
     let material_ground = Rc::new(Lambertian::new(Color::new(0.8, 0.8, 0.0)));
     let material_center = Rc::new(Lambertian::new(Color::new(0.1, 0.2, 0.5)));
     let material_left = Rc::new(Dielectric::new(1.5));
     let material_bubble = Rc::new(Dielectric::new(1.0 / 1.5));
     let material_right = Rc::new(Metal::new(Color::new(0.8, 0.6, 0.2), 1.0));
 
     world.add(Rc::new(Sphere::new(
         Point3::new(0.0, -100.5, -1.0),
         100.0,
         material_ground,
     )));
     world.add(Rc::new(Sphere::new(
         Point3::new(0.0, 0.0, -1.2),
         0.5,
         material_center,
     )));
     world.add(Rc::new(Sphere::new(
         Point3::new(-1.0, 0.0, -1.0),
         0.5,
         material_left,
     )));
     world.add(Rc::new(Sphere::new(
         Point3::new(-1.0, 0.0, -1.0),
         0.4,
         material_bubble,
     )));
     world.add(Rc::new(Sphere::new(
         Point3::new(1.0, 0.0, -1.0),
         0.5,
         material_right,
     )));
 
     env_logger::init();
 
     Camera::default()
         .with_aspect_ratio(16.0 / 9.0)
         .with_image_width(400)
         .with_samples_per_pixel(100)
         .with_max_depth(50)
-        .with_vfov(90.0)
+        .with_vfov(20.0)
         .with_lookfrom(Point3::new(-2.0, 2.0, 1.0))
         .with_lookat(Point3::new(0.0, 0.0, -1.0))
         .with_vup(Point3::new(0.0, 1.0, 0.0))
         .render(&world)
 }

Listing 84: [main.rs] Change field of view


to get:

Image 21: Zooming in


Defocus Blur

Now our final feature: defocus blur. Note, photographers call this depth of field, so be sure to only use the term defocus blur among your raytracing friends.

The reason we have defocus blur in real cameras is because they need a big hole (rather than just a pinhole) through which to gather light. A large hole would defocus everything, but if we stick a lens in front of the film/sensor, there will be a certain distance at which everything is in focus. Objects placed at that distance will appear in focus and will linearly appear blurrier the further they are from that distance. You can think of a lens this way: all light rays coming from a specific point at the focus distance — and that hit the lens — will be bent back to a single point on the image sensor.

We call the distance between the camera center and the plane where everything is in perfect focus the focus distance. Be aware that the focus distance is not usually the same as the focal length — the focal length is the distance between the camera center and the image plane. For our model, however, these two will have the same value, as we will put our pixel grid right on the focus plane, which is focus distance away from the camera center.

In a physical camera, the focus distance is controlled by the distance between the lens and the film/sensor. That is why you see the lens move relative to the camera when you change what is in focus (that may happen in your phone camera too, but the sensor moves). The “aperture” is a hole to control how big the lens is effectively. For a real camera, if you need more light you make the aperture bigger, and will get more blur for objects away from the focus distance. For our virtual camera, we can have a perfect sensor and never need more light, so we only use an aperture when we want defocus blur.

A Thin Lens Approximation

A real camera has a complicated compound lens. For our code, we could simulate the order: sensor, then lens, then aperture. Then we could figure out where to send the rays, and flip the image after it's computed (the image is projected upside down on the film). Graphics people, however, usually use a thin lens approximation:

Figure 21: Camera lens model


We don’t need to simulate any of the inside of the camera — for the purposes of rendering an image outside the camera, that would be unnecessary complexity. Instead, I usually start rays from an infinitely thin circular “lens”, and send them toward the pixel of interest on the focus plane (focus_dist away from the lens), where everything on that plane in the 3D world is in perfect focus.

In practice, we accomplish this by placing the viewport in this plane. Putting everything together:

  1. The focus plane is orthogonal to the camera view direction.
  2. The focus distance is the distance between the camera center and the focus plane.
  3. The viewport lies on the focus plane, centered on the camera view direction vector.
  4. The grid of pixel locations lies inside the viewport (located in the 3D world).
  5. Random image sample locations are chosen from the region around the current pixel location.
  6. The camera fires rays from random points on the lens through the current image sample location.

Figure 22: Camera focus plane


Generating Sample Rays

Without defocus blur, all scene rays originate from the camera center (or lookfrom). In order to accomplish defocus blur, we construct a disk centered at the camera center. The larger the radius, the greater the defocus blur. You can think of our original camera as having a defocus disk of radius zero (no blur at all), so all rays originated at the disk center (lookfrom).

So, how large should the defocus disk be? Since the size of this disk controls how much defocus blur we get, that should be a parameter of the camera class. We could just take the radius of the disk as a camera parameter, but the blur would vary depending on the projection distance. A slightly easier parameter is to specify the angle of the cone with apex at viewport center and base (defocus disk) at the camera center. This should give you more consistent results as you vary the focus distance for a given shot.
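
Concretely, the cone geometry pins down the disk radius (this matches the defocus-disk setup the camera code computes below): with apex angle defocus_angle and the apex sitting focus_dist away,

\[ \text{radius} = \text{focus\_dist} \cdot \tan\!\left( \frac{\text{defocus\_angle}}{2} \right) \]

so for a fixed angle, moving the focus plane farther away scales the disk up proportionally, keeping the apparent blur consistent.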

Since we'll be choosing random points from the defocus disk, we'll need a function to do that: random_in_unit_disk(). This function works using the same kind of method we use in random_unit_vector(), just for two dimensions: sample a random point in the enclosing square and reject it unless it falls inside the disk, which happens with probability \( \pi/4 \approx 78.5\% \), so the loop terminates quickly.

diff --git a/src/vec3.rs b/src/vec3.rs
index df0939e..d4352e1 100644
--- a/src/vec3.rs
+++ b/src/vec3.rs
@@ -1,176 +1,190 @@
 use std::{
     fmt::Display,
     ops::{Add, AddAssign, Div, DivAssign, Index, IndexMut, Mul, MulAssign, Neg, Sub},
 };
 
 #[derive(Debug, Default, Clone, Copy)]
 pub struct Vec3 {
     pub e: [f64; 3],
 }
 
 pub type Point3 = Vec3;
 
 impl Vec3 {
     pub fn new(e0: f64, e1: f64, e2: f64) -> Self {
         Self { e: [e0, e1, e2] }
     }
 
     pub fn x(&self) -> f64 {
         self.e[0]
     }
 
     pub fn y(&self) -> f64 {
         self.e[1]
     }
 
     pub fn z(&self) -> f64 {
         self.e[2]
     }
 
     pub fn length(&self) -> f64 {
         f64::sqrt(self.length_squared())
     }
 
     pub fn length_squared(&self) -> f64 {
         self.e[0] * self.e[0] + self.e[1] * self.e[1] + self.e[2] * self.e[2]
     }
 }
 
 impl Neg for Vec3 {
     type Output = Self;
 
     fn neg(self) -> Self::Output {
         Self::Output {
             e: self.e.map(|e| -e),
         }
     }
 }
 
 impl Index<usize> for Vec3 {
     type Output = f64;
 
     fn index(&self, index: usize) -> &Self::Output {
         &self.e[index]
     }
 }
 
 impl IndexMut<usize> for Vec3 {
     fn index_mut(&mut self, index: usize) -> &mut Self::Output {
         &mut self.e[index]
     }
 }
 
 impl AddAssign for Vec3 {
     fn add_assign(&mut self, rhs: Self) {
         self.e[0] += rhs.e[0];
         self.e[1] += rhs.e[1];
         self.e[2] += rhs.e[2];
     }
 }
 
 impl MulAssign<f64> for Vec3 {
     fn mul_assign(&mut self, rhs: f64) {
         self.e[0] *= rhs;
         self.e[1] *= rhs;
         self.e[2] *= rhs;
     }
 }
 
 impl DivAssign<f64> for Vec3 {
     fn div_assign(&mut self, rhs: f64) {
         self.mul_assign(1.0 / rhs);
     }
 }
 
 impl Display for Vec3 {
     fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
         write!(f, "{} {} {}", self.e[0], self.e[1], self.e[2])
     }
 }
 
 impl Add for Vec3 {
     type Output = Self;
 
     fn add(self, rhs: Self) -> Self::Output {
         Self::Output {
             e: [
                 self.e[0] + rhs.e[0],
                 self.e[1] + rhs.e[1],
                 self.e[2] + rhs.e[2],
             ],
         }
     }
 }
 
 impl Sub for Vec3 {
     type Output = Self;
 
     fn sub(self, rhs: Self) -> Self::Output {
         Self::Output {
             e: [
                 self.e[0] - rhs.e[0],
                 self.e[1] - rhs.e[1],
                 self.e[2] - rhs.e[2],
             ],
         }
     }
 }
 
 impl Mul for Vec3 {
     type Output = Self;
 
     fn mul(self, rhs: Self) -> Self::Output {
         Self::Output {
             e: [
                 self.e[0] * rhs.e[0],
                 self.e[1] * rhs.e[1],
                 self.e[2] * rhs.e[2],
             ],
         }
     }
 }
 
 impl Mul<f64> for Vec3 {
     type Output = Self;
 
     fn mul(self, rhs: f64) -> Self::Output {
         Self::Output {
             e: [self.e[0] * rhs, self.e[1] * rhs, self.e[2] * rhs],
         }
     }
 }
 
 impl Mul<Vec3> for f64 {
     type Output = Vec3;
 
     fn mul(self, rhs: Vec3) -> Self::Output {
         rhs.mul(self)
     }
 }
 
 impl Div<f64> for Vec3 {
     type Output = Self;
 
     fn div(self, rhs: f64) -> Self::Output {
         self * (1.0 / rhs)
     }
 }
 
 #[inline]
 pub fn dot(u: Vec3, v: Vec3) -> f64 {
     u.e[0] * v.e[0] + u.e[1] * v.e[1] + u.e[2] * v.e[2]
 }
 
 #[inline]
 pub fn cross(u: Vec3, v: Vec3) -> Vec3 {
     Vec3::new(
         u.e[1] * v.e[2] - u.e[2] * v.e[1],
         u.e[2] * v.e[0] - u.e[0] * v.e[2],
         u.e[0] * v.e[1] - u.e[1] * v.e[0],
     )
 }
 
 #[inline]
 pub fn unit_vector(v: Vec3) -> Vec3 {
     v / v.length()
 }
+
+#[inline]
+pub fn random_in_unit_disk() -> Vec3 {
+    loop {
+        let p = Vec3::new(
+            rand::random_range(-1.0..1.0),
+            rand::random_range(-1.0..1.0),
+            0.0,
+        );
+        if p.length_squared() < 1.0 {
+            return p;
+        }
+    }
+}

Listing 85: [vec3.rs] Generate random point inside unit disk


Now let's update the camera to originate rays from the defocus disk:

diff --git a/src/camera.rs b/src/camera.rs
index 44da965..6b51a5b 100644
--- a/src/camera.rs
+++ b/src/camera.rs
@@ -1,220 +1,259 @@
 use crate::{hittable::Hittable, prelude::*};
 
 pub struct Camera {
     /// Ratio of image width over height
     pub aspect_ratio: f64,
     /// Rendered image width in pixel count
     pub image_width: i32,
     // Count of random samples for each pixel
     pub samples_per_pixel: i32,
     // Maximum number of ray bounces into scene
     pub max_depth: i32,
     // Vertical view angle (field of view)
     pub vfov: f64,
     /// Point camera is looking from
     pub lookfrom: Point3,
     /// Point camera is looking at
     pub lookat: Point3,
     /// Camera-relative "up" direction
     pub vup: Vec3,
+    /// Variation angle of rays through each pixel
+    pub defocus_angle: f64,
+    /// Distance from camera lookfrom point to plane of perfect focus
+    pub focus_dist: f64,
 
     /// Rendered image height
     image_height: i32,
     // Color scale factor for a sum of pixel samples
     pixel_samples_scale: f64,
     /// Camera center
     center: Point3,
     /// Location of pixel 0, 0
     pixel00_loc: Point3,
     /// Offset to pixel to the right
     pixel_delta_u: Vec3,
     /// Offset to pixel below
     pixel_delta_v: Vec3,
     /// Camera frame basis vector - right
     u: Vec3,
     /// Camera frame basis vector - up
     v: Vec3,
     /// Camera frame basis vector - opposite view direction
     w: Vec3,
+    /// Defocus disk horizontal radius
+    defocus_disk_u: Vec3,
+    /// Defocus disk vertical radius
+    defocus_disk_v: Vec3,
 }
 
 impl Default for Camera {
     fn default() -> Self {
         Self {
             aspect_ratio: 1.0,
             image_width: 100,
             samples_per_pixel: 10,
             max_depth: 10,
             vfov: 90.0,
             lookfrom: Point3::new(0.0, 0.0, 0.0),
             lookat: Point3::new(0.0, 0.0, -1.0),
             vup: Point3::new(0.0, 1.0, 0.0),
+            defocus_angle: 0.0,
+            focus_dist: 10.0,
             image_height: Default::default(),
             pixel_samples_scale: Default::default(),
             center: Default::default(),
             pixel00_loc: Default::default(),
             pixel_delta_u: Default::default(),
             pixel_delta_v: Default::default(),
             u: Default::default(),
             v: Default::default(),
             w: Default::default(),
+            defocus_disk_u: Default::default(),
+            defocus_disk_v: Default::default(),
         }
     }
 }
 
 impl Camera {
     pub fn with_aspect_ratio(mut self, aspect_ratio: f64) -> Self {
         self.aspect_ratio = aspect_ratio;
 
         self
     }
 
     pub fn with_image_width(mut self, image_width: i32) -> Self {
         self.image_width = image_width;
 
         self
     }
 
     pub fn with_samples_per_pixel(mut self, samples_per_pixel: i32) -> Self {
         self.samples_per_pixel = samples_per_pixel;
 
         self
     }
 
     pub fn with_max_depth(mut self, max_depth: i32) -> Self {
         self.max_depth = max_depth;
 
         self
     }
 
     pub fn with_vfov(mut self, vfov: f64) -> Self {
         self.vfov = vfov;
 
         self
     }
 
     pub fn with_lookfrom(mut self, lookfrom: Point3) -> Self {
         self.lookfrom = lookfrom;
 
         self
     }
 
     pub fn with_lookat(mut self, lookat: Point3) -> Self {
         self.lookat = lookat;
 
         self
     }
 
     pub fn with_vup(mut self, vup: Vec3) -> Self {
         self.vup = vup;
 
         self
     }
 
+    pub fn with_defocus_angle(mut self, defocus_angle: f64) -> Self {
+        self.defocus_angle = defocus_angle;
+
+        self
+    }
+
+    pub fn with_focus_dist(mut self, focus_dist: f64) -> Self {
+        self.focus_dist = focus_dist;
+
+        self
+    }
+
     pub fn render(&mut self, world: &impl Hittable) -> std::io::Result<()> {
         self.initialize();
 
         println!("P3");
         println!("{} {}", self.image_width, self.image_height);
         println!("255");
 
         for j in 0..self.image_height {
             info!("Scanlines remaining: {}", self.image_height - j);
             for i in 0..self.image_width {
                 let mut pixel_color = Color::new(0.0, 0.0, 0.0);
                 for _sample in 0..self.samples_per_pixel {
                     let r = self.get_ray(i, j);
                     pixel_color += Self::ray_color(r, self.max_depth, world);
                 }
                 write_color(std::io::stdout(), self.pixel_samples_scale * pixel_color)?;
             }
         }
         info!("Done.");
 
         Ok(())
     }
 
     fn initialize(&mut self) {
         self.image_height = {
             let image_height = (self.image_width as f64 / self.aspect_ratio) as i32;
             if image_height < 1 { 1 } else { image_height }
         };
 
         self.pixel_samples_scale = 1.0 / self.samples_per_pixel as f64;
 
         self.center = self.lookfrom;
 
         // Determine viewport dimensions.
-        let focal_length = (self.lookfrom - self.lookat).length();
         let theta = self.vfov.to_radians();
         let h = f64::tan(theta / 2.0);
-        let viewport_height = 2.0 * h * focal_length;
+        let viewport_height = 2.0 * h * self.focus_dist;
         let viewport_width =
             viewport_height * (self.image_width as f64) / (self.image_height as f64);
 
         // Calculate the u,v,w unit basis vectors for the camera coordinate frame.
         self.w = unit_vector(self.lookfrom - self.lookat);
         self.u = unit_vector(cross(self.vup, self.w));
         self.v = cross(self.w, self.u);
 
         // Calculate the vectors across the horizontal and down the vertical viewport edges.
         let viewport_u = viewport_width * self.u; // Vector across viewport horizontal edge
         let viewport_v = viewport_height * -self.v; // Vector down viewport vertical edge
 
         // Calculate the horizontal and vertical delta vectors from pixel to pixel.
         self.pixel_delta_u = viewport_u / self.image_width as f64;
         self.pixel_delta_v = viewport_v / self.image_height as f64;
 
         // Calculate the location of the upper left pixel.
         let viewport_upper_left =
-            self.center - (focal_length * self.w) - viewport_u / 2.0 - viewport_v / 2.0;
+            self.center - (self.focus_dist * self.w) - viewport_u / 2.0 - viewport_v / 2.0;
         self.pixel00_loc = viewport_upper_left + 0.5 * (self.pixel_delta_u + self.pixel_delta_v);
+
+        // Calculate the camera defocus disk basis vectors.
+        let defocus_radius = self.focus_dist * f64::tan((self.defocus_angle / 2.0).to_radians());
+        self.defocus_disk_u = self.u * defocus_radius;
+        self.defocus_disk_v = self.v * defocus_radius;
     }
 
     fn get_ray(&self, i: i32, j: i32) -> Ray {
-        // Construct a camera ray originating from the origin and directed at randomly sampled
-        // point around the pixel location i, j.
+        // Construct a camera ray originating from the defocus disk and directed at a randomly
+        // sampled point around the pixel location i, j.
 
         let offset = Self::sample_square();
         let pixel_sample = self.pixel00_loc
             + ((i as f64 + offset.x()) * self.pixel_delta_u)
             + ((j as f64 + offset.y()) * self.pixel_delta_v);
 
-        let ray_origin = self.center;
+        let ray_origin = if self.defocus_angle <= 0.0 {
+            self.center
+        } else {
+            self.defocus_disk_sample()
+        };
         let ray_direction = pixel_sample - ray_origin;
 
         Ray::new(ray_origin, ray_direction)
     }
 
     fn sample_square() -> Vec3 {
         // Returns the vector to a random point in the [-.5,-.5]-[+.5,+.5] unit square.
         Vec3::new(
             rand::random::<f64>() - 0.5,
             rand::random::<f64>() - 0.5,
             0.0,
         )
     }
 
     fn _sample_disk(radius: f64) -> Vec3 {
         // Returns a random point in the unit (radius 0.5) disk centered at the origin.
         radius * random_in_unit_disk()
     }
 
+    fn defocus_disk_sample(&self) -> Point3 {
+        // Returns a random point in the camera defocus disk.
+        let p = random_in_unit_disk();
+
+        self.center + (p[0] * self.defocus_disk_u) + (p[1] * self.defocus_disk_v)
+    }
+
     fn ray_color(r: Ray, depth: i32, world: &impl Hittable) -> Color {
         // If we've exceeded the ray bounce limit, no more light is gathered.
         if depth <= 0 {
             return Color::new(0.0, 0.0, 0.0);
         }
 
         if let Some(rec) = world.hit(r, Interval::new(0.001, INFINITY)) {
             if let Some((scattered, attenuation)) = rec.mat.scatter(r, rec.clone()) {
                 return attenuation * Self::ray_color(scattered, depth - 1, world);
             }
             return Color::new(0.0, 0.0, 0.0);
         }
 
         let unit_direction = unit_vector(r.direction());
         let a = 0.5 * (unit_direction.y() + 1.0);
         (1.0 - a) * Color::new(1.0, 1.0, 1.0) + a * Color::new(0.5, 0.7, 1.0)
     }
 }

Listing 86: [camera.rs] Camera with adjustable depth-of-field
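
To sanity-check the new disk math, here is a quick standalone computation (a sketch, not part of the listing) using the values we are about to set in the next listing: a 10° defocus angle focused at distance 3.4 yields a lens disk radius of roughly 0.30 world units.

fn main() {
    // Mirrors the defocus_radius line in initialize(), with Listing 87's values.
    let defocus_angle: f64 = 10.0;
    let focus_dist: f64 = 3.4;
    let defocus_radius = focus_dist * f64::tan((defocus_angle / 2.0).to_radians());
    println!("defocus radius = {defocus_radius:.3}"); // prints ~0.297
}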


Using a large aperture:

diff --git a/src/main.rs b/src/main.rs
index f7deb5e..51d420b 100644
--- a/src/main.rs
+++ b/src/main.rs
@@ -1,56 +1,58 @@
 use code::{
     camera::Camera,
     hittable_list::HittableList,
     material::{Dielectric, Lambertian, Metal},
     prelude::*,
     sphere::Sphere,
 };
 
 fn main() -> std::io::Result<()> {
     let mut world = HittableList::new();
 
     let material_ground = Rc::new(Lambertian::new(Color::new(0.8, 0.8, 0.0)));
     let material_center = Rc::new(Lambertian::new(Color::new(0.1, 0.2, 0.5)));
     let material_left = Rc::new(Dielectric::new(1.5));
     let material_bubble = Rc::new(Dielectric::new(1.0 / 1.5));
     let material_right = Rc::new(Metal::new(Color::new(0.8, 0.6, 0.2), 1.0));
 
     world.add(Rc::new(Sphere::new(
         Point3::new(0.0, -100.5, -1.0),
         100.0,
         material_ground,
     )));
     world.add(Rc::new(Sphere::new(
         Point3::new(0.0, 0.0, -1.2),
         0.5,
         material_center,
     )));
     world.add(Rc::new(Sphere::new(
         Point3::new(-1.0, 0.0, -1.0),
         0.5,
         material_left,
     )));
     world.add(Rc::new(Sphere::new(
         Point3::new(-1.0, 0.0, -1.0),
         0.4,
         material_bubble,
     )));
     world.add(Rc::new(Sphere::new(
         Point3::new(1.0, 0.0, -1.0),
         0.5,
         material_right,
     )));
 
     env_logger::init();
 
     Camera::default()
         .with_aspect_ratio(16.0 / 9.0)
         .with_image_width(400)
         .with_samples_per_pixel(100)
         .with_max_depth(50)
         .with_vfov(20.0)
         .with_lookfrom(Point3::new(-2.0, 2.0, 1.0))
         .with_lookat(Point3::new(0.0, 0.0, -1.0))
         .with_vup(Point3::new(0.0, 1.0, 0.0))
+        .with_defocus_angle(10.0)
+        .with_focus_dist(3.4)
         .render(&world)
 }

Listing 87: [main.rs] Scene camera with depth-of-field


We get:


Image 22: Spheres with depth-of-field


A Final Render

Let’s make the image on the cover of this book — lots of random spheres.

diff --git a/src/main.rs b/src/main.rs
index 51d420b..54ff4ef 100644
--- a/src/main.rs
+++ b/src/main.rs
@@ -1,58 +1,86 @@
 use code::{
     camera::Camera,
     hittable_list::HittableList,
-    material::{Dielectric, Lambertian, Metal},
+    material::{Dielectric, Lambertian, Material, Metal},
     prelude::*,
     sphere::Sphere,
 };
 
 fn main() -> std::io::Result<()> {
     let mut world = HittableList::new();
 
-    let material_ground = Rc::new(Lambertian::new(Color::new(0.8, 0.8, 0.0)));
-    let material_center = Rc::new(Lambertian::new(Color::new(0.1, 0.2, 0.5)));
-    let material_left = Rc::new(Dielectric::new(1.5));
-    let material_bubble = Rc::new(Dielectric::new(1.0 / 1.5));
-    let material_right = Rc::new(Metal::new(Color::new(0.8, 0.6, 0.2), 1.0));
-
-    world.add(Rc::new(Sphere::new(
-        Point3::new(0.0, -100.5, -1.0),
-        100.0,
-        material_ground,
-    )));
+    let ground_material = Rc::new(Lambertian::new(Color::new(0.5, 0.5, 0.5)));
     world.add(Rc::new(Sphere::new(
-        Point3::new(0.0, 0.0, -1.2),
-        0.5,
-        material_center,
+        Point3::new(0.0, -1000.0, 0.0),
+        1000.0,
+        ground_material,
     )));
+
+    for a in -11..11 {
+        for b in -11..11 {
+            let choose_mat: f64 = rand::random();
+            let center = Point3::new(
+                a as f64 + 0.9 * rand::random::<f64>(),
+                0.2,
+                b as f64 + 0.9 * rand::random::<f64>(),
+            );
+
+            if (center - Point3::new(4.0, 0.2, 0.0)).length() > 0.9 {
+                let sphere_material: Rc<dyn Material> = if choose_mat < 0.8 {
+                    // diffuse
+                    let albedo = Color::random() * Color::random();
+
+                    Rc::new(Lambertian::new(albedo))
+                } else if choose_mat < 0.95 {
+                    // metal
+                    let albedo = Color::random_range(0.5, 1.0);
+                    let fuzz = rand::random_range(0.0..0.5);
+
+                    Rc::new(Metal::new(albedo, fuzz))
+                } else {
+                    // glass
+
+                    Rc::new(Dielectric::new(1.5))
+                };
+
+                world.add(Rc::new(Sphere::new(center, 0.2, sphere_material)));
+            }
+        }
+    }
+
+    let material1 = Rc::new(Dielectric::new(1.5));
     world.add(Rc::new(Sphere::new(
-        Point3::new(-1.0, 0.0, -1.0),
-        0.5,
-        material_left,
+        Point3::new(0.0, 1.0, 0.0),
+        1.0,
+        material1,
     )));
+
+    let material2 = Rc::new(Lambertian::new(Color::new(0.4, 0.2, 0.1)));
     world.add(Rc::new(Sphere::new(
-        Point3::new(-1.0, 0.0, -1.0),
-        0.4,
-        material_bubble,
+        Point3::new(-4.0, 1.0, 0.0),
+        1.0,
+        material2,
     )));
+
+    let material3 = Rc::new(Metal::new(Color::new(0.7, 0.6, 0.5), 0.0));
     world.add(Rc::new(Sphere::new(
-        Point3::new(1.0, 0.0, -1.0),
-        0.5,
-        material_right,
+        Point3::new(4.0, 1.0, 0.0),
+        1.0,
+        material3,
     )));
 
     env_logger::init();
 
     Camera::default()
         .with_aspect_ratio(16.0 / 9.0)
-        .with_image_width(400)
-        .with_samples_per_pixel(100)
+        .with_image_width(1200)
+        .with_samples_per_pixel(500)
         .with_max_depth(50)
         .with_vfov(20.0)
-        .with_lookfrom(Point3::new(-2.0, 2.0, 1.0))
-        .with_lookat(Point3::new(0.0, 0.0, -1.0))
+        .with_lookfrom(Point3::new(13.0, 2.0, 3.0))
+        .with_lookat(Point3::new(0.0, 0.0, 0.0))
         .with_vup(Point3::new(0.0, 1.0, 0.0))
-        .with_defocus_angle(10.0)
-        .with_focus_dist(3.4)
+        .with_defocus_angle(0.6)
+        .with_focus_dist(10.0)
         .render(&world)
 }

Listing 88: [main.rs] Final scene


(Note that the code above differs slightly from the project sample code: the samples_per_pixel is set to 500 above for a high-quality image that will take quite a while to render. The project source code uses a value of 10 in the interest of reasonable run times while developing and validating.)
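
One way to keep both workflows handy (a convenience sketch, not part of the book's code, with a hypothetical SAMPLES variable name) is to read the sample count from an environment variable and default to the fast setting:

fn main() {
    // Hypothetical convenience: run with SAMPLES=500 for final renders; the
    // default of 10 keeps development iterations fast.
    let samples_per_pixel: i32 = std::env::var("SAMPLES")
        .ok()
        .and_then(|s| s.parse().ok())
        .unwrap_or(10);
    println!("using {samples_per_pixel} samples per pixel");
    // ...then pass it along via .with_samples_per_pixel(samples_per_pixel)
}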

This gives:


Image 23: Final scene


An interesting thing you might note is that the glass balls don’t really have shadows, which makes them look like they are floating. This is not a bug — you don’t see glass balls much in real life, where they also look a bit strange, and indeed seem to float on cloudy days. A point on the big sphere under a glass ball still has lots of light hitting it, because the sky is re-ordered rather than blocked.

Next Steps

You now have a cool ray tracer! What next?

Book 2: Ray Tracing: The Next Week

The second book in this series builds on the ray tracer you've developed here, adding new features such as:

  • Motion blur — Realistically render moving objects.
  • Bounding volume hierarchies — Speed up the rendering of complex scenes.
  • Texture maps — Place images on objects.
  • Perlin noise — A random noise generator very useful for many techniques.
  • Quadrilaterals — Something to render besides spheres! Also, the foundation to implement disks, triangles, rings, or just about any other 2D primitive.
  • Lights — Add sources of light to your scene.
  • Transforms — Useful for placing and rotating objects.
  • Volumetric rendering — Render smoke, clouds, and other gaseous volumes.

Book 3: Ray Tracing: The Rest of Your Life

This book expands again on the content from the second book. A lot of this book is about improving both the rendered image quality and the renderer performance, and focuses on generating the right rays and accumulating them appropriately.

This book is for the reader seriously interested in writing professional-level ray tracers, and/or interested in the foundation to implement advanced effects like subsurface scattering or nested dielectrics.

Other Directions

There are many additional directions you can take from here, including techniques we haven't (yet?) covered in this series:

Triangles — Most cool models are in triangle form. The model I/O is the worst and almost everybody tries to get somebody else’s code to do this. This also includes efficiently handling large meshes of triangles, which present their own challenges.
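
If you head down the triangle route, the core intersection test is mercifully small. Here is a hedged sketch of the standard Möller–Trumbore ray–triangle test; it assumes the book's Vec3/Point3 types plus the cross helper, and a dot helper assumed to exist alongside it in vec3.rs. It is not part of the book's code.

fn hit_triangle(origin: Point3, dir: Vec3, v0: Point3, v1: Point3, v2: Point3) -> Option<f64> {
    // Möller–Trumbore: solve origin + t*dir = v0 + u*e1 + v*e2 for (t, u, v).
    let e1 = v1 - v0;
    let e2 = v2 - v0;
    let p = cross(dir, e2);
    let det = dot(e1, p);
    if det.abs() < 1e-8 {
        return None; // Ray is parallel to the triangle's plane.
    }
    let inv_det = 1.0 / det;
    let s = origin - v0;
    let u = dot(s, p) * inv_det;
    if !(0.0..=1.0).contains(&u) {
        return None;
    }
    let q = cross(s, e1);
    let v = dot(dir, q) * inv_det;
    if v < 0.0 || u + v > 1.0 {
        return None;
    }
    // (u, v) lies inside the triangle; report the hit distance along the ray.
    let t = dot(e2, q) * inv_det;
    (t > 1e-8).then_some(t)
}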

Parallelism — Run \( N \) copies of your code on \( N \) cores with different random seeds, then average the \( N \) runs. This averaging can also be done hierarchically: average pairs of images to get \( N/2 \) images, average pairs of those to get \( N/4 \), and so on. That method of parallelism should extend well into the thousands of cores with very little coding.
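
To make the averaging idea concrete, here is a minimal threaded sketch (not from the book; render_image is a hypothetical stand-in for a full seeded render pass returning a flat RGB buffer):

use std::thread;

fn render_image(seed: u64, pixel_count: usize) -> Vec<f64> {
    // Hypothetical: run the whole ray tracer with this seed. Placeholder body.
    vec![seed as f64; pixel_count * 3]
}

fn main() {
    let n = 4; // one independent run per core
    let pixel_count = 400 * 225;

    // Launch N renders, each with its own random seed.
    let handles: Vec<_> = (0..n)
        .map(|seed| thread::spawn(move || render_image(seed as u64, pixel_count)))
        .collect();

    // Average the N buffers component-wise.
    let mut average = vec![0.0f64; pixel_count * 3];
    for handle in handles {
        for (avg, px) in average.iter_mut().zip(handle.join().unwrap()) {
            *avg += px / n as f64;
        }
    }
    // `average` now holds the combined image, ready for write_color-style output.
}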

Shadow Rays — By firing rays at light sources, you can determine exactly how a particular point is shadowed. With this, you can render crisp or soft shadows, adding another degree of realism to your scenes.
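
As a rough illustration (not from the book, which has no explicit lights yet), a shadow ray is just a visibility test between a surface point and a light position. This sketch assumes the book's Ray, Point3, Interval, and Hittable types, plus a hypothetical light_pos:

fn in_light(point: Point3, light_pos: Point3, world: &impl Hittable) -> bool {
    // With an unnormalized direction, t = 1.0 lands exactly on the light, so any
    // hit in (0.001, 1.0) means something sits between the point and the light.
    let shadow_ray = Ray::new(point, light_pos - point);
    world.hit(shadow_ray, Interval::new(0.001, 1.0)).is_none()
}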

Have fun, and please send me your cool images!

Acknowledgments

Original Manuscript Help

  • Dave Hart
  • Jean Buckley

Web Release

Corrections and Improvements

Special Thanks

Thanks to the team at Limnu for help on the figures.

These books are entirely written in Morgan McGuire's fantastic and free Markdeep library. To see what this looks like, view the page source from your browser.1

Thanks to Helen Hu for graciously donating her https://github.com/RayTracing/ GitHub organization to this project.


  1. We used mdBook instead. Thank you for this great tool for documenting and writing books!

Citing This Book

Consistent citations make it easier to identify the source, location and versions of this work. If you are citing this book, we ask that you try to use one of the following forms if possible.

Basic Data

  • Title (series): “Ray Tracing in One Weekend Series”
  • Title (book): “Ray Tracing in One Weekend”
  • Author: Peter Shirley, Trevor David Black, Steve Hollasch
  • Version/Edition: v4.0.2
  • Date: 2025-04-25
  • URL (series): https://raytracing.github.io
  • URL (book): https://raytracing.github.io/books/raytracinginoneweekend.html

Snippets

Markdown

[_Ray Tracing in One Weekend_](https://raytracing.github.io/books/RayTracingInOneWeekend.html)

HTML

<a href="https://raytracing.github.io/books/RayTracingInOneWeekend.html">
    <cite>Ray Tracing in One Weekend</cite>
</a>

LaTeX and BibTeX

~\cite{Shirley2025RTW1}

@misc{Shirley2025RTW1,
   title = {Ray Tracing in One Weekend},
   author = {Peter Shirley and Trevor David Black and Steve Hollasch},
   year = {2025},
   month = {April},
   note = {\small \texttt{https://raytracing.github.io/books/RayTracingInOneWeekend.html}},
   url = {https://raytracing.github.io/books/RayTracingInOneWeekend.html}
}

BibLaTeX

\usepackage{biblatex}

~\cite{Shirley2025RTW1}

@online{Shirley2025RTW1,
   title = {Ray Tracing in One Weekend},
   author = {Peter Shirley and Trevor David Black and Steve Hollasch},
   year = {2025},
   month = {April},
   url = {https://raytracing.github.io/books/RayTracingInOneWeekend.html}
}

IEEE

“Ray Tracing in One Weekend.” raytracing.github.io/books/RayTracingInOneWeekend.html
(accessed MMM. DD, YYYY)

MLA

Ray Tracing in One Weekend. raytracing.github.io/books/RayTracingInOneWeekend.html
Accessed DD MMM. YYYY.