Fall 2022
Quick facts:
In this project, you’ll implement a pretty fully-featured raytracer. Raytracing is a really elegant algorithm that allows for a lot of physically-realistic effects with a relatively small amount of code (Paul Heckbert fit a raytracer on the back of his business card!).
Complete this project in groups of 2. Instructions for group setup are below.
The assignment is not small. There are 10 separate TODOs, some of which have multiple parts. Most of the TODOs involve fewer than 10 lines of code, though some will be more. I ran sloc on my solution and the skeleton and the difference in lines of actual code (“source”) was less than 200. However, many of those lines are backed by math that you need to understand to be able to write them.
Before accepting the Github Classroom assignment, make sure you know
who you’re working with on this project. Then, click the Github
Classroom link in the Canvas assignment. The first group member to
accept the invite should create a new group. Name the group with
the two WWU usernames of the group members in alphabetical order,
followed by “a2”, all separated by an underscore. For example,
if I were working with Brian Hutchinson on this project, one of us would
create a group named hutchib2_wehrwes_a2
and we’d both join
that group.
The setup is similar to A1, except the package is named WWURay:
$ cd path/to/repo/WWURay
$ julia --project
julia> ]
(WWURay) pkg> instantiate # (this is a one-time setup step)
pkg> <backspace>
julia> using Revise
julia> using WWURay
julia> WWURay.main(1, 1, 300, 300, "results/out_0.png") # should already "work"
Verify that out_0.png has been created in the results directory; it should be a 300x300 image containing all black pixels. This is your starting point - your job in this project is to make the picture more colorful!
A2 involves significantly more moving parts than A1 did, and thus has more “architecture” to it and uses more Julia features. This section gives you an overview of some of the concepts that are new in this assignment.
Types in Julia are optional: you can tell Julia what type something is, but you don’t have to. The general reasons to specify types are as assertions (to make catching bugs easier) and performance (to give the compiler more information). The performance benefits of specifying types are often not what you think, since everything is JIT compiled with all applicable types at runtime anyway. The consensus at this point, though it’s still early, seems to be that you should only assert types if you have a good reason to. I’m not sure all the skeleton code follows this advice very well, but I tried.
Type annotations look like this: 3::Int or 3.0::Float64.
The most important time to pay attention to types in this project is in function signatures where the arguments of a function are typed. For example, in Cameras.jl we have the function header:
function pixel_to_ray(camera::CanonicalCamera, i, j)
The first argument must be of type CanonicalCamera, whereas the second and third can be anything (formally, they are of type Any, which is the supertype of all types). From context, they should probably be integers, but I didn’t bother to tell the compiler that. These type annotations play a major role in the next section.
See https://docs.julialang.org/en/v1/manual/types/ if you wish to geek out over the full details of Julia’s type system.
The biggest departure from most languages you’ve probably encountered is that Julia is opinionatedly not object-oriented. Instead of object-oriented programming, class hierarchies, polymorphism, etc. etc. etc., Julia uses a paradigm called multiple dispatch to enable modular design. All the software architecture decisions in the raytracer have more or less been made for you, but nonetheless it’s important for you to understand how the raytracer is designed.
The basic idea of multiple dispatch is that each function (i.e., a subroutine with a particular name) can have multiple methods, each of which handles different numbers of arguments and types. You can think of this as method overloading on steroids. This ends up being a powerful and flexible replacement for many object-oriented patterns and constructs.
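Here is a tiny self-contained illustration of the idea (the types and function below are invented for this example and are not part of the skeleton):

abstract type Shape end

struct Circle <: Shape
    radius::Float64
end

struct Square <: Shape
    side::Float64
end

# Two methods of one function; Julia picks a method based on the argument's runtime type.
area(c::Circle) = pi * c.radius^2
area(s::Square) = s.side^2

area(Circle(1.0))   # dispatches to the Circle method -> 3.14159...
area(Square(2.0))   # dispatches to the Square method -> 4.0

Supporting a new shape is just a matter of defining another method of area, which is exactly the pattern the raytracer uses for shaders, lights, and objects.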
Let’s look at an example of this in the skeleton code. In WWURay.jl, there is a function called determine_color. Its purpose is to determine the color seen along a given ray - basically, this is most of the inner loop of the raytracer. The function takes several arguments that are needed to do the shading calculation, but in particular notice its first parameter, shader. There are two separate definitions of the function, one with shader::Flat, and another with shader::Normal. At runtime, when we call determine_color, the type of the shader argument will determine which of these methods is called. One of your tasks will be to implement a third method of this function.
In an object-oriented language we might have a Shader superclass with an abstract determine_color method implemented by each of its subclasses. Multiple dispatch allows you to decide which implementation of a function is called based on any combination of its arguments’ types. In this project, we keep it mostly pretty simple - most functions dispatch on only one of their arguments’ types.
import, using
As in A1, all the modules in separate files are submodules of the WWURay module. They are included using include statements, which work like you’d expect (as if the file’s contents were included inline). Even after including the files, the code in WWURay still doesn’t have automatic access to their functionality because it is wrapped in submodules. There are two ways to get access to names in other modules:
- import MyModule provides access to qualified names in MyModule, as in MyModule.some_function()
- using MyModule exposes all names in MyModule that are explicitly exported. It also has the same effect as import, so all names, regardless of whether they are exported, are accessible when qualified with the module name.

In this assignment, you will complete the following tasks:
Basic ray generation for a canonical perspective camera has been implemented for you. See the CanonicalCamera type and the pixel_to_ray function in Cameras.jl. This camera is at the origin looking down the -z axis with a viewport width of 1 and a focal length of 1, for a horizontal field of view of about 53 degrees (2 atan(1/2)).
The next thing we need is to tell whether a ray intersects an object. We start with spheres. Your task is to implement ray_intersect(ray::Ray, object::Sphere) in Scenes.jl. If the ray does not intersect the sphere, this method should return nothing; otherwise, it should return a HitRecord struct containing information about the intersection. The fields of the HitRecord should be filled in as follows:
- t is the value of the ray’s parameter t at which the ray intersects the sphere. If there are two intersections, use the one with smaller t.
- intersection is a Vec3 giving the position of the intersection point
- normal is a Vec3 giving the normal vector of the sphere’s surface at the intersection point
- uv is the texture coordinate or nothing; we’ll come back to this later, so for now set this to nothing
- object is the Sphere object that was hit by the ray
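If you want a refresher on the underlying math before working with the skeleton’s types, here is a standalone sketch using plain arrays (the function name and representation are just for illustration, not the skeleton’s Ray, Sphere, or HitRecord): substitute the ray p(t) = origin + t*direction into the implicit sphere equation and solve the resulting quadratic for t.

using LinearAlgebra

# Standalone sketch: solve ||origin + t*direction - center||^2 = radius^2 for t.
function sphere_hit_ts(origin, direction, center, radius)
    oc = origin - center
    a = dot(direction, direction)
    b = 2 * dot(oc, direction)
    c = dot(oc, oc) - radius^2
    disc = b^2 - 4a * c
    disc < 0 && return nothing                # ray misses the sphere entirely
    return ((-b - sqrt(disc)) / (2a),         # smaller root: nearer intersection
            (-b + sqrt(disc)) / (2a))
end

sphere_hit_ts([0.0, 0.0, 0.0], [0.0, 0.0, -1.0], [0.0, 0.0, -5.0], 1.0)  # -> (4.0, 6.0)
# From the chosen t: intersection = origin + t*direction, and
# normal = normalize(intersection - center)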
With multiple objects in the scene, we need to make sure that we base the pixel’s color on the closest object. Fill in closest_intersect in WWURay.jl. This function takes a Scene, a Ray, and bounding values tmin and tmax. We’ll use tmin and tmax for a few different purposes, but most obviously we need to make sure we don’t get ray intersections behind the camera (i.e., where t is less than 1, or whatever the focal distance of the camera is).
The closest_intersect function should call ray_intersect on each object in the scene and return the resulting HitRecord object for the closest intersection whose t value lies between tmin and tmax. If no objects are intersected, it should return nothing.
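The structure is a simple linear scan over the scene’s objects. A sketch of the loop, assuming only what is stated above (ray_intersect returns nothing or a record with a t field); the actual signature and the field that holds the scene’s object list may differ in the skeleton:

# Sketch: test every object and keep the hit with the smallest t inside [tmin, tmax].
function closest_hit(objects, ray, tmin, tmax)
    closest = nothing
    for obj in objects
        hit = ray_intersect(ray, obj)
        if hit !== nothing && tmin <= hit.t <= tmax
            if closest === nothing || hit.t < closest.t
                closest = hit
            end
        end
    end
    return closest
end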
Take a look at the traceray function. This currently performs two tasks:
- It calls closest_intersect to get a HitRecord for the closest object along the ray; if there is no object, it returns the background color associated with the scene.
- It calls determine_color to choose the color for the pixel based on the information in the HitRecord. This will already work correctly for objects with the Flat shading model, because determine_color simply returns the diffuse color for such objects.

You’re finally about to have a working raytracer! Fill in the missing part of the main function in WWURay.jl. For each pixel in the image, call Cameras.pixel_to_ray to generate a viewing ray for that pixel. Then, call the traceray function and set the resulting color to that pixel of the canvas. Because we only want to consider objects in front of the camera, set tmin to 1 and tmax to Inf. For now, traceray ignores the rec_depth parameter; you can leave it out and it will default to 1.
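The loop itself is only a few lines. A sketch, assuming the canvas is indexed as canvas[i, j] and that traceray takes the scene, the ray, and the t bounds in that order (check the skeleton for the exact argument order):

# Sketch: one viewing ray per pixel, traced with tmin = 1 and tmax = Inf.
function render_sketch(scene, camera, canvas)
    height, width = size(canvas)
    for i in 1:height, j in 1:width
        viewing_ray = Cameras.pixel_to_ray(camera, i, j)
        canvas[i, j] = traceray(scene, viewing_ray, 1, Inf)
    end
    return canvas
end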
At this point, you should be able to run
julia> WWURay.main(1, 1, 200, 300, "results/out_1.png")
and see an image like this:
Whoa! We just rendered something like the Japanese flag!
That’s pretty cool, but doesn’t look like a physically realistic
scene - everything is flat-colored. To get towards more
realistic-looking scenes, we need lights and
shading. The file Lights.jl
contains types
for lights that can be included in a scene. For this assignment, we’ll
support directional lights and point
lights.
The key thing we’ll need to know when deciding a pixel’s color due to a given light is the direction of the light from the pixel. In Lights.jl, implement the two methods of the light_direction function:
light_direction(light::DirectionalLight, point::Vec3)
light_direction(light::PointLight, point::Vec3)
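Both methods are essentially one line of vector math each. A sketch on plain vectors (function names here are just for illustration; note that \(\ell\) should point from the shaded point toward the light, so whether you need to negate a stored direction depends on the convention used in Lights.jl):

using LinearAlgebra

# Directional light: the direction toward the light is the same at every point
# (negate the stored direction if it points from the light into the scene).
directional_light_l(stored_direction) = normalize(-stored_direction)

# Point light: the direction depends on where the shaded point is.
point_light_l(light_position, point) = normalize(light_position - point)

point_light_l([0.0, 5.0, 0.0], [0.0, 0.0, 0.0])   # -> [0.0, 1.0, 0.0]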
Since our scenes can have multiple lights, we need to calculate the color contribution from each light source. In TODO 4c, we’ll handle lights one at a time by calling shade_light in the determine_color method that handles PhysicalShadingModel shaders. To support this, implement the first method of the shade_light function, which calculates the (Lambertian) shading component for a given point and light source, then multiplies that by the point’s diffuse color. Recall that the diffuse color is calculated as: \[
L = k_d I \textrm{max}(0, n \cdot \ell)
\] where \(k_d\) is the diffuse albedo, \(I\) is the light intensity, \(n\) is the surface normal, and \(\ell\) is the light direction. You can use the get_diffuse function from Materials.jl to get the diffuse color of an object; its second argument is a texture coordinate which we aren’t handling yet, but it will behave just fine if you pass nothing, so I suggest passing in hitrec.uv, which will be nothing for now but will eventually contain texture coordinates. You already wrote the light_direction function to find \(\ell\), and the Light object can tell you its intensity.
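In code, the formula is a single broadcasted expression. A sketch using plain RGB triples (the skeleton’s color and vector types will differ, but the arithmetic is the same; the function name is just for illustration):

using LinearAlgebra

# Sketch of L = k_d * I * max(0, n . l), with k_d and I as RGB triples.
function lambertian_sketch(diffuse, intensity, normal, light_dir)
    return diffuse .* intensity .* max(0.0, dot(normal, light_dir))
end

lambertian_sketch([0.8, 0.2, 0.2], [1.0, 1.0, 1.0], [0.0, 1.0, 0.0],
                  normalize([1.0, 1.0, 0.0]))
# -> approximately [0.566, 0.141, 0.141]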
Now we can implement determine_color, calling shade_light for each light source. At this point, your raytracer should be able to render objects with Lambertian shading models. Try it out with
julia> WWURay.main(2, 1, 300, 300, "results/out_2.png")
Test Scene 2 uses a directional light source, so if your implementations of the directional light_direction method, shade_light, and determine_color are correct, you should get the following image:
Test Scene 3 uses a point light source, so if the PointLight variant of the light_direction method is implemented correctly, you should be able to run:
julia> WWURay.main(3, 1, 300, 300, "results/out_3.png")
and get a sinister-looking result that looks like this:
Next, let’s enable shinier spheres by implementing the Blinn-Phong shading model, which uses a diffuse component just like the Lambertian model, but adds a specular component. Recall that the Blinn-Phong shading equation is: \[
L = k_d I \textrm{max}(0, n \cdot \ell) + k_s I \textrm{max}(0, n \cdot h)^p
\] where \(k_d\) and \(k_s\) are the diffuse and specular colors, \(n\) is the surface normal, \(\ell\) is the light direction, \(I\) is the light intensity, \(p\) is the specular exponent, and \(h\) is the half-vector between the viewing and lighting directions, defined as \[
h = \frac{v + \ell}{||v + \ell||}
\] where \(v\) is the view direction. The specular color and exponent are stored as part of the BlinnPhong shading model struct; the view direction can be determined using the incoming viewing ray.
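The only new pieces relative to the Lambertian term are the half-vector and the exponent. A sketch on plain vectors, where view_dir is the negated, normalized direction of the incoming viewing ray (the function name is just for illustration):

using LinearAlgebra

# Sketch of the specular term k_s * I * max(0, n . h)^p, where h is the half-vector.
function blinn_phong_specular_sketch(specular, intensity, normal, light_dir, view_dir, p)
    h = normalize(view_dir + light_dir)
    return specular .* intensity .* max(0.0, dot(normal, h))^p
end

# Add this to the diffuse (Lambertian) term from the previous TODO to get the full
# Blinn-Phong contribution from one light.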
If the BlinnPhong method of the shade_light function is working correctly, you should be able to render Test Scene 4
julia> WWURay.main(4, 1, 300, 300, "results/out_4.png")
and get the following result:
At this point, if you render Test Scene 5:
julia> WWURay.main(5, 1, 300, 300, "results/out_5.png")
you’ll get an image that looks like this:
There’s “realistic” lighting here, but one important thing is missing: shadows. One of the most awesome things about raytracing is that many physical phenomena of light transport, from shadows to reflections to refraction, are actually pretty simple to do. When we hit an intersection, before we go ahead and compute the contribution from a given light source, all we have to do is first ask: is the light source actually visible from this point? Or in other terms, if I shoot a ray from this point towards the light source, does it hit any objects before it arrives at the light?
Implement both methods of the is_shadowed function. Because DirectionalLights have constant direction, you can think of them as being infinitely far away, so the method for DirectionalLights is the simplest. If a ray from the point towards the light direction hits an object, then the object is in the way of the light source and the point won’t receive light from that source. The only thing to be careful of is the tmin and tmax values: in a world of perfect precision, we’d use 0 and Inf. However, you have to be careful to avoid hitting the object that you’re starting from - so instead of 0, set tmin to some small constant, such as 1e-8.
The PointLight method of is_shadowed is no more complicated, but the choice of tmin and tmax is different. Because the light source exists at a point in space, we only want a point on a surface to be shadowed if there’s an object between the point and the light source. This requires us to know how far away the light source is. The easiest way to account for this is to set up your shadow ray to have its origin at the surface and its direction vector point directly to the light source (i.e., its length is the distance from the point to the light). This way, the values of t we’re interested in finding intersections at are simply from the surface to t=1, past which we’d be hitting an object beyond the light source, which therefore wouldn’t be casting a shadow.
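Both cases boil down to one call to closest_intersect with a carefully chosen direction vector and t range. A sketch, assuming a Ray can be constructed from an origin and a direction and that closest_intersect takes (scene, ray, tmin, tmax); check the skeleton for the actual constructor and argument order:

# Directional light: any hit along the direction toward the light blocks it.
function shadowed_by_directional(scene, point, dir_toward_light)
    shadow_ray = Ray(point, dir_toward_light)
    return closest_intersect(scene, shadow_ray, 1e-8, Inf) !== nothing
end

# Point light: the direction runs exactly from the surface to the light, so t = 1
# is the light itself and only hits with t between 1e-8 and 1 cast a shadow.
function shadowed_by_point(scene, point, light_position)
    shadow_ray = Ray(point, light_position - point)
    return closest_intersect(scene, shadow_ray, 1e-8, 1.0) !== nothing
end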
determine_color
Now that we’ve implemented is_shadowed, all we need to do is modify determine_color to call it. For each light, we now want to check if the point is shadowed with respect to that light, only adding in the contribution from the light source if it’s not obscured by another object.
At this point, if we try again on Test Scene 5, we’ll see shadows cast on the ground by each of the floating spheres:
julia> WWURay.main(5, 1, 300, 300, "results/out_5.png")
As with shadows, adding mirror-like surfaces is not that complicated. If the surface were 100% reflective (i.e., a perfect mirror), we’d determine what direction the ray would bounce off in, then shoot a new ray in that direction and determine the color of that surface. If that surface happens to have mirror-like properties, well… this sounds like a use case for recursion!
Our renderer will support surfaces that do some of each - they have a surface color shaded according to some already-supported shading model, but that color is mixed with the color of whatever is reflected in the mirror-like surface. Notice that the Material struct in Materials.jl has a thus-far-ignored field called mirror_coeff. This is a value between 0 and 1 that determines what fraction of the color is determined by the local color (this surface’s shading model and diffuse color) vs. mirror reflection.
Modify the traceray function as follows:
- If the object’s material has a nonzero mirror_coeff value, determine the direction the light will reflect, then recursively call traceray to find the reflected_color seen in that direction.
- Set the final color to a mix of mirror_coeff times the reflected color and 1-mirror_coeff times the local color.
- Limit the recursion depth of traceray to avoid infinitely-reflecting mirrors (and also to keep runtime under control). In my implementation I call traceray with a rec_depth of 8.
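The two bits of math you need are the mirror reflection of the incoming direction about the surface normal and the blend between the two colors. A sketch on plain vectors (the recursive traceray call is described only in a comment, since its exact signature belongs to the skeleton):

using LinearAlgebra

# Reflect an incoming direction d about a unit surface normal n.
reflect(d, n) = d - 2 * dot(d, n) * n

# Blend the locally shaded color with the color seen in the mirror direction.
mix_mirror(local_color, reflected_color, mirror_coeff) =
    (1 - mirror_coeff) .* local_color .+ mirror_coeff .* reflected_color

reflect([1.0, -1.0, 0.0], [0.0, 1.0, 0.0])   # -> [1.0, 1.0, 0.0]

# Inside traceray, the reflected color comes from a recursive call along a ray that
# starts at the hit point (offset with a small tmin, as with shadow rays) and heads
# in the reflected direction, with rec_depth reduced by one.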
At this point, Test Scene 6 should look like this:
julia> WWURay.main(6, 1, 300, 300, "results/out_6.png")
Spheres are great and all, but it would be nice to be able to render
more general surfaces, too. In this step, we’ll add support for triangle
meshes like the ones you generated in A1. The good news is that from the
perspective of the renderer, a triangle mesh is simply a big pile of
triangles, each of which can be treated as a separate object that needs
to be rendered. I’ve set up the necessary types for you, so all you have
to do for this step is implement another method of the
ray_intersect
function to support ray-triangle
intersection; the existing rendering pipeline takes care of the
rest!
The OBJMeshes.jl module from A1 is included, so we can load meshes from files, and you can drop in your A1 code to generate spheres and cylinders as well. In the second part of Scenes.jl, you’ll find the types set up for dealing with triangle meshes. Since the OBJ format specifies geometry but not material properties, I’ve defined a new type called Triangle that associates an OBJTriangle, the OBJMesh it belongs to, and a material with which to render it. The create_triangles function creates an array of Triangles given an OBJMesh and a material.
The job of ray_intersect, if the ray does indeed hit the object, is to fill in the HitRecord. Many of these fields get filled in similarly to how they were for the sphere: t, intersection, and object are all easy to determine. Normals and texture coordinates are less obvious - for spheres, we filled these in using very sphere-specific reasoning. Textures will be handled later in TODO 9, so you can simply set uv to nothing for the moment. That leaves surface normals.
Since our triangle comes from an OBJ mesh, we know that it can have vertex normals stored for each corner, but we also know it doesn’t have to. If it does have vertex normals, we’d like to interpolate smoothly between them so our smooth objects don’t look faceted; if it doesn’t, then the best we can do is to use the normal of the triangle face. So:
- If the triangle has vertex normals, interpolate them using the barycentric coordinates of the intersection point and use the (normalized) result as the HitRecord’s normal.
- If it doesn’t, set the HitRecord’s normal to the plane normal of the triangle.
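For reference, here is a standalone sketch of the barycentric approach (in the spirit of Marschner 4.4.2): solve a small linear system for (beta, gamma, t), accept the hit only when the barycentric coordinates lie inside the triangle, and reuse the same weights for interpolation. It uses plain arrays and an illustrative function name rather than the skeleton’s types.

using LinearAlgebra

# Solve origin + t*direction = a + beta*(b - a) + gamma*(c - a) for (beta, gamma, t).
function triangle_hit_sketch(origin, direction, a, b, c)
    M = hcat(a - b, a - c, direction)
    abs(det(M)) < 1e-12 && return nothing          # ray is parallel to the triangle's plane
    beta, gamma, t = M \ (a - origin)
    (beta < 0 || gamma < 0 || beta + gamma > 1 || t <= 0) && return nothing
    alpha = 1 - beta - gamma                       # weight for vertex a
    return t, alpha, beta, gamma
end

# With vertex normals na, nb, nc, the interpolated shading normal is
# normalize(alpha*na + beta*nb + gamma*nc); without them, use the face normal
# normalize(cross(b - a, c - a))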
After this is completed correctly, you should be able to render Test Scene 7 and see our favorite bunny staring contemplatively at itself in a mirror:
julia> WWURay.main(7, 1, 300, 300, "results/out_7.png")
[HW2 Problem 2 has much in common with this TODO] So far we’ve been using Test Camera 1, which is a CanonicalCamera, defined in Cameras.jl. The same file also has a PerspectiveCamera type that supports perspective cameras with arbitrary positions, orientations, and focal lengths. It is not fully general in that it still assumes a viewport width of 1 centered on the optical axis (i.e., centered at \((u,v) = (0,0)\)) and parallel to the \(uv\) plane, so it can’t handle shifted or oblique perspective views. To support this camera type, we need two things: a constructor and a pixel_to_ray method to generate rays from a PerspectiveCamera.
PerspectiveCamera
To build a PerspectiveCamera, we could simply specify the origin and \(\vec{u}, \vec{v}, \vec{w}\) axes, but this is cumbersome; usually we want something a little more intuitive. The constructor for PerspectiveCamera in Cameras.jl takes:
- eye - a 3-vector specifying the eye position (center of projection)
- view - a 3-vector giving the direction in which the camera is looking
- up - a 3-vector giving the direction that is considered “up” from the viewer’s perspective. This is not necessarily orthogonal to view. Often in scenes defined the way we’d normally think of them, this would be \((0, 1, 0)\).
- focal - a floating-point scalar that specifies how far the image plane is from the eye

as well as image dimensions. To get \(\vec{u}, \vec{v}, \vec{w}\), we need to “square up” a basis from these vectors. See Marschner 4.3 and 2.4.7 for details on this.
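The “squaring up” is the usual cross-product construction. A sketch on plain vectors (the real constructor stores the resulting axes in the PerspectiveCamera struct):

using LinearAlgebra

# w points opposite the view direction, u is perpendicular to up and w, and
# v completes the right-handed orthonormal frame.
function camera_basis(view, up)
    w = -normalize(view)
    u = normalize(cross(up, w))
    v = cross(w, u)
    return u, v, w
end

camera_basis([0.0, 0.0, -1.0], [0.0, 1.0, 0.0])
# -> ([1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0])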
Implement the pixel_to_ray method for PerspectiveCameras. This step is analogous to the pixel_to_ray method for CanonicalCameras; you’ll still need to convert pixel coordinates into \((u,v)\) coordinates, but the origin of the rays is now the camera’s eye point, and the \((u,v)\) are coordinates in terms of the \(\vec{u}, \vec{v}, \vec{w}\) basis specifying its orientation.
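Once a pixel has been mapped to \((u,v)\) on the viewport (the same mapping as in the canonical case), the ray itself is easy to write down. A sketch on plain vectors; the skeleton’s Ray type and camera fields will look different:

# The ray starts at the eye and passes through the viewport point that sits u along
# the camera's u axis, v along its v axis, and focal out along -w.
function perspective_ray_sketch(eye, u_axis, v_axis, w_axis, focal, u, v)
    direction = u .* u_axis .+ v .* v_axis .- focal .* w_axis
    return eye, direction     # (origin, direction) for building the skeleton's Ray
end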
Test Scene 8 is actually the same as Test Scene 7. If we use Test Camera 2, we’ll see it from a different perspective:
julia> WWURay.main(8, 2, 300, 300, "results/out_8.png")
Camera 3 has a very wide field-of-view (i.e., short focal length), which creates some weird-looking perspective distortion when rendering Scene 6:
julia> WWURay.main(6, 3, 300, 300, "results/out_8b.png")
So far we’ve ignored the texture field of the Material struct. Materials.jl includes a type Texture that represents some image data to be applied as a texture. Supporting textures requires changing get_diffuse to look up a texture value instead of the object’s diffuse color, and modifying the ray_intersect methods to store accurate uv coordinates in the HitRecords that they return. Here’s a suggested approach:
Materials.jl contains a function get_texture_value that takes a Texture and a Vec2 of \((u, v)\) coordinates and returns an RGB value at those coordinates. This requires converting the \((u,v)\) coordinates, which are in the range [0,1], into pixel coordinates, which are in the range determined by the image size. This mapping will probably not result in an integer pixel value; there are several ways to deal with this. The simplest is to round to the nearest integer pixel coordinates (nearest-neighbor); the next simplest is to do bilinear interpolation; you can get even fancier with schemes like bicubic and so on. My implementation uses nearest-neighbor.
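A nearest-neighbor sketch of that \((u,v)\)-to-pixel mapping (whether v = 0 is the top or bottom row depends on how the texture image is stored, so you may need to flip the vertical coordinate):

# Map (u, v) in [0, 1] x [0, 1] to the nearest pixel of a 2D image array.
function nearest_texel_sketch(img, u, v)
    h, w = size(img)
    i = clamp(round(Int, v * (h - 1)) + 1, 1, h)   # row
    j = clamp(round(Int, u * (w - 1)) + 1, 1, w)   # column
    return img[i, j]
end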
get_diffuse
This one’s pretty simple - modify get_diffuse to check whether a texture is present and call get_texture_value if so; otherwise, return the object’s diffuse color as before.
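The modified lookup amounts to a two-branch function. A sketch (the texture field is described above, but the name of the plain diffuse-color field is a guess; match whatever Materials.jl actually calls it):

# Sketch: prefer the texture when one exists and valid (u, v) coordinates are available.
function diffuse_sketch(material, uv)
    if material.texture !== nothing && uv !== nothing
        return get_texture_value(material.texture, uv)
    end
    return material.diffuse    # assumed field name for the untextured diffuse color
end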
ray_intersect (sphere)
[Your solution to Problem 1 on HW 2 will be useful for this TODO] The final step is to make sure that correct texture coordinates are supplied to the above, by filling in the correct value for uv in the HitRecord objects produced when intersecting rays. Modify the ray_intersect method for Spheres to add correct texture coordinates. Use the exact texture mapping function we used in A1.
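As a reminder of what a latitude-longitude style mapping looks like, here is one common convention on the unit normal at the hit point; your A1 mapping may differ in axis choice, sign, or offset, and you should match A1 exactly:

# One common spherical mapping from the unit normal at the hit point to (u, v) in [0, 1].
function sphere_uv_sketch(n)
    u = 0.5 + atan(n[1], n[3]) / (2pi)              # longitude around the y axis
    v = 0.5 + asin(clamp(n[2], -1.0, 1.0)) / pi     # latitude from pole to pole
    return u, v
end

sphere_uv_sketch([0.0, 1.0, 0.0])   # top of the sphere -> (0.5, 1.0)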
ray_intersect (triangle)
Modify the ray_intersect method for Triangles to add correct texture coordinates to the HitRecord. Use barycentric interpolation among the texture coordinates at the triangle’s three vertices.
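If you kept the barycentric weights (alpha, beta, gamma) from the intersection test, the interpolation is a single line, with uva, uvb, uvc standing in for the Vec2 texture coordinates at the triangle’s corners:

# Sketch: the same barycentric weights from the hit interpolate the texture coordinates.
interp_uv_sketch(alpha, beta, gamma, uva, uvb, uvc) =
    alpha .* uva .+ beta .* uvb .+ gamma .* uvc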
If texture coordinates are working for both triangles and spheres, Test Scene 9 should look like this:
julia> WWURay.main(9, 1, 300, 300, "results/out_9.png")
Each group member should complete this step individually. The deadline for this step is one day after the code deadline. Please make sure each partner has pushed their artifact code to github and submitted the artifact on Canvas before the artifact deadline.
If you haven’t already, take a look at TestScenes.jl
to
see how the scenes you’ve been rendering so far are set up. In this
task, you’ll create and render an interesting scene of your own
design.
Add the code for your scene to TestScenes.jl in a new function called artifact_<username>(img_height, img_width), where username is replaced by your WWU username. This function should return a 2-tuple of (Scene, Camera) that can be passed into the main function to render your scene. Render a 1000-pixel-wide image of your scene, save it in png format, and submit it to Canvas.
If the code to produce your artifact relies on extensions, put the
artifact code in the extensions
branch and note this in
your readme file.
If you’ve implemented the animation extension, your artifact may be a
video instead of a still image. In this case, save the video as
artifact_<username>.mp4
.
Best Artifact Prize: Artifacts will be showcased to the class and the winner(s) of a vote for best artifact will receive a few extra credit points on the assignment.
- your rendered artifact image, saved in the results directory
- your artifact scene code, added to TestScenes.jl and pushed to github

Points are earned for correctness and deducted for deficiencies in clarity or efficiency.
580 only:
Deductions:
This section may be expanded if more extension ideas arise - stay tuned, and feel free to propose your own.
There are so many cool places to go from here! 580 students should complete at least 2 category-points worth of extra credit. Some of the category 2 ones (combinations or further elaborations on them) point in the direction of possible final projects. If you’re unsure about what you’re getting into with any of these, please come talk to me. You can also devise your own extensions that are not on this list, but please run them by me first to be sure you’re not wasting time.
Before starting on your extensions, please create a
separate extensions
branch in your repository and complete
your extensions there. Include your readme.txt
in both
branches. I will use my own modified TestScenes.jl
for
grading purposes, so your master
branch should contain a
working version of the base assignment without any architectural
changes.
- Implement an OrthographicCamera to generate parallel rays from an Orthographic camera and a GeneralPerspectiveCamera that supports a viewport that is not centered at (u,v) = (0, 0). In both cases, the camera should also support oblique views by allowing the user to specify the image plane normal separately from the view direction. See 4.3.1-4.3.2 for details. Play around with these cameras and create images that show the effects of using different camera models. See Tilt-Shift photography for examples of real-life uses for such cameras. This would be a good one to combine with depth-of-field, to create the “miniature world” aesthetic that often characterizes tilt-shift photography.
- TestScenes.jl has a helper method that implements a hacky approach to scaling and translating the position coordinates. Modify Scenes.jl so that the position, scale, and orientation of objects can be specified generally by a transformation matrix. Include helper functions to generate standard transformations to make it easy to model more complex and varied scenes.
- Add animation support: somewhere in your scene code (e.g., in TestScenes.jl), specify a parametric equation that calculates the position of an object, light, or camera as a function of a time variable, then generate and render a sequence of scenes over increasing values of time. This would be a good one to combine with implementing transformations, since that allows for more flexibility in how objects can move. Create one or more videos showcasing your system’s animation capabilities.
- Implement a SpotLight, which is like a point light source but only emits light in a cone of directions. The modeler specifies the light’s primary direction and the angle of the cone within which light is emitted. The spotlight should support “soft” edges: make the intensity of the light fall off gradually as a function of angle to the spotlight’s direction, and include a softness parameter that lets the modeler control how suddenly it falls off.
- Extend Textures to specify more properties than just color. Examples include shading parameters (such as the specular coefficient in Blinn-Phong), normal/bump maps, displacement maps, and environment maps. See Chapter 11 for more on this.
- Implement one or more of the techniques from Chapter 13.
Implement an acceleration structure, such as a Bounding Volume Hierarchy or a Binary Space Partitioning Tree, to make the ray-scene intersection loop run in sub-linear time.
From the perspective of physical accuracy, the two shading models we’ve implemented (Lambertian and Blinn-Phong) are rubbish. They look somewhat plausible, but don’t come close to matching up with what surfaces look like in reality. One family of physically-based shading models, known as microfacet models, is based on a theory of approximating surfaces by random distributions of microscopic flat surfaces (facets), then analyzing the physics of how light interacts with that geometry. You can read up on microfacet models here, and here.
For this extension, refactor the shading code to apply a generic shading model that multiplies the cosine term and light intensity by a generic function that dispatches on BRDF type; then implement at least one microfacet BRDF (Beckmann and GGX are good choices) in addition to a Lambertian and Blinn-Phong BRDF and produce a comparison of the results. You’ll definitely have to go looking for details on this - come talk to me if you want to do this and need pointers to the right resources.
If one group member wants to do extra credit but the other does not,
that’s fine - just include a note on this in your readme
and the group member who completed the extensions will get the
credit.
If you implement any of these, you must include:
- test scenes that demonstrate your extensions, added to TestScenes.jl
- rendered results in the results directory
- a readme file in txt, md, html, or pdf format in the base WWURay directory with: