Fall 2022
In this assignment, you will put the mathematics behind matrix transformations into practice, using transformations to render 3D geometric primitives. You will implement four simple programs to render and transform 3D geometric primitives using JS and HTML5 canvas. You will also learn the basics of OpenGL and write your own shaders that run in your browser.
As usual, the skeleton code is available via a Github Classroom invitation link in the Canvas assignment. You’ll complete the assignment individually, so there’s no step required to create a team - just click the link and accept the assignment to create your repository.
As you work through the project, feel free to (optionally) add notes in the HTML file under each task where it says “report problems, comments…”. I’m particularly interested in anything where I didn’t cover what you needed to know, or if you found something especially confusing.
This project is done in Javascript, everybody’s other favorite language whose name begins with ‘J’. If you’re not familiar with Javascript, don’t panic: neither am I. There aren’t too many things you need to know about the language to get by in this assignment. Here, I’ll provide a tour of the skeleton code and try to point out and explain some oddities you might encounter, particularly in the matrix library.
The code is fairly cleanly segmented into separate files. Your main
entrypoint from a running-and-testing perspective is
a3.html
. When you open this in a browser, you’ll see a
single webpage with a separate canvas for each task in the assignment.
The HTML file has only a bit of javascript code, the
setupAssignment
function, which it calls on page load. It
also includes quite a pile of other javascript code via some
<script>
tags. You will not need to
write any code in the following files:
- glUtil.js is where all the messy practicalities of creating an OpenGL context and setting it up to draw into an HTML canvas element are hidden away. This is encapsulated in the setupTask function, which is called once for each task. Aside from the GL setup code, this file also sets up event handlers to allow the canvas to respond to click-and-drag mouse interaction.
- uiUtil.js contains code to implement the sliders used to move the joints around in Task 4.
- cube.js and sphere.js contain geometry information for the cubes and spheres used in the assignment. For the wireframe renderer, these are stored as indexed line segment sets, and for the GL portion they’re stored as indexed triangle meshes.

Not all languages can be as awesome as Julia. As a case in point,
Javascript doesn’t have multi-dimensional arrays built in.
matrix.js
contains a small library that implements a
SimpleMatrix
object to patch this shortcoming. If you’re
not familiar with Javascript, the syntax may appear a bit weird, but
it’s really just OOP hacked onto a language that doesn’t have it. The
matrix library is set up as follows:
In Javascript, functions are themselves objects, and objects are
really just key-value stores (kind of like dicts in Python or Julia).
Classes aren’t a real thing (the ES6 class keyword is just syntactic
sugar over the mechanism described below).
A “class”-like thing can be implemented by writing a constructor
function that assigns properties to itself (this
). For
example, the SimpleMatrix
constructor assigns an array to
its m
field to store its numeric data.
Instance-method-like things are added to a “class” by creating or
modifying its prototype
object. A function’s
prototype
is an object that is shared among all instances;
this way, we have only one copy of each instance method, rather than one
per instance. In matrix.js
, the instance methods are added
all in one go, and they mainly just call class methods (static methods,
in Java lingo):
SimpleMatrix.prototype = {
    inverse: function() {
        return SimpleMatrix.inverse(this);
    },
    transpose: function() {
        return SimpleMatrix.transpose(this);
    },
    // and so forth
};
Class-method-like things are added by modifying the
constructor/class function directly, like
SimpleMatrix.multiplyVector = function(matrix, vector) {...}
In this library, matrices are their own objects, but vectors are simply stored as arrays of length 3 or 4.
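Put together, the pattern looks like this toy example (a made-up MiniMatrix, not the actual matrix.js; only the flat-array, row-major storage convention is taken from the description):

```javascript
// A tiny SimpleMatrix-like sketch illustrating the constructor/prototype
// pattern described above. This is NOT the assignment's matrix.js.
function MiniMatrix() {
  // The constructor assigns the data array to `this`: a 4x4 identity,
  // stored row-major in a flat length-16 array.
  this.m = [1, 0, 0, 0,
            0, 1, 0, 0,
            0, 0, 1, 0,
            0, 0, 0, 1];
}

// "Class methods" (static methods) hang off the constructor function itself.
MiniMatrix.transpose = function (matrix) {
  const out = new MiniMatrix();
  for (let row = 0; row < 4; row++)
    for (let col = 0; col < 4; col++)
      out.m[col * 4 + row] = matrix.m[row * 4 + col];
  return out;
};

// "Instance methods" live on the shared prototype and delegate to the
// class methods.
MiniMatrix.prototype = {
  transpose: function () {
    return MiniMatrix.transpose(this);
  },
};
```

With this row-major layout, entry (row, col) lives at m[row * 4 + col], so m[0] is the top-left entry and m[4] starts the second row.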
After all that weirdness, instances work mostly like you’d expect: they
are created with the new keyword and the constructor.

- To create a matrix, you can say my_mat = new SimpleMatrix(). Pro tip: don’t forget the new - leaving it out fails silently and causes headaches.
- my_mat.inverse() returns a new SimpleMatrix that is the inverse of my_mat.
- my_mat.m is the array containing the matrix data; it can be accessed with zero-based indices: my_mat.m[0] is the top left entry. This matrix library is row-major by convention, so my_mat.m[4] is the first element in the second row.

taskN.js
Each task in this assignment lives in its own javascript file, and the skeleton code takes care of running each task in its own canvas in the HTML file. Each file is largely based around implementing “methods” of another “class”, much like was done in the matrix library.
For the WebGL tasks, notice that the GLSL code for the shaders lives inside backtick-quoted strings, each stored in its own variable.
The developer console is going to be your friend for this assignment: it will report compile and runtime errors, and it also serves as your debugger.
How you open the developer console depends on your platform and browser.
Check the developer console of your browser for any error messages when you first open the website. You should have the console open while you work on this assignment: It will tell you of any problems that are occurring in your program. You can also make use of console.log to help you debug your program.
Note: The assignment should Just Work on the lab environment. If you’re having any compatibility or platform issues, let me know as early as possible.
In the first three tasks, you will complete the implementation of a barebones Javascript renderer that implements the transformation pipeline we discussed in class and supports a scene graph-like transformation hierarchy.
In Tasks 5-6 (and 7 for 580 students), you’ll work with WebGL code, which implements an object-order rendering pipeline supported by GPU hardware.
In this task, you will finish the implementation of a basic wireframe rasterizer to render the edges of a sphere and a cube. The renderer is given to you with an orthographic projection already implemented (you will start out seeing the sphere up-close-and-personal), and you will only need to change the orthographic projection to a simple perspective projection.
Fill in the missing part of
WireframeMesh.prototype.render
in task1.js
.
This method already contains an implementation of a wireframe renderer;
all you have to do is implement the perspective projection. You can do
this by simply dividing the \(x\) and
\(y\) components by the \(z\) coordinate before rendering the edges.
Don’t overthink this: there’s no matrix pipeline happening. What you’re
doing is equivalent to the following perspective projection matrix,
which is for a camera in canonical position with a viewport distance of
1: \[
\begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 1 & 0 \\
\end{bmatrix}
\] Although you have very little code to write for this task, you
may find it helpful in later parts if you spend some time now getting to
know how the basic renderer works. If you’ve implemented this task
correctly, you will get an image like the one shown below.
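The divide described above can be sketched for a single point (a standalone illustration of the math, not the skeleton’s render code; the names are invented):

```javascript
// Perspective divide for one point: equivalent to multiplying by the
// matrix above (which copies z into w) and then performing the
// homogeneous divide by w = z.
function perspectiveProject(p) {
  const [x, y, z] = p;
  return [x / z, y / z]; // projected screen-plane coordinates
}
```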
In this task you will implement basic matrix operations and simple transforms. This task builds on Task 1; make sure you have implemented the first task. The end result of this task should be three animating shapes. There should be a cube translating in a circle on the left, a cube shrinking and growing in the center, and a sphere rotating on the right side of the screen.
In matrix.js
:
Fill in SimpleMatrix.translate
. This method should
return a translation matrix.
Fill in SimpleMatrix.scale
. This method should
return a scale matrix.
Fill in SimpleMatrix.rotate
. This method should
return a rotation matrix that rotates by an angle \(a\) (given to you in degrees) around axis
\((x, y, z)\). You can do this using a
similarity transform that moves into a basis constructed from the single
axis vector (Section 2.4.6), rotating in that basis, then rotating back.
Alternatively, you can copy in the analytical result of that procedure,
which you can find on Wikipedia.
Trig, \(\pi\), etc. are available in
Javascript via the Math
object (e.g.,
Math.cos(t)
, Math.PI
).
Fill in SimpleMatrix.multiplyVector
. This method
should return the result of multiplying a 4x4 matrix by a 4-vector. The
vector passed in is a 3-vector; you can extend it to a 4-vector by
adding a 1 as the fourth component. Your method can return either a
4-vector or a 3-vector, because in this assignment you will be ignoring
the 4th component of the resulting vector.
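Two of these can be sketched standalone with plain flat arrays in the row-major convention (an illustration only, not the skeleton’s SimpleMatrix API; the names are made up):

```javascript
// Row-major 4x4 translation matrix as a flat length-16 array.
function translate(tx, ty, tz) {
  return [1, 0, 0, tx,
          0, 1, 0, ty,
          0, 0, 1, tz,
          0, 0, 0, 1];
}

// Multiply a 4x4 (flat, row-major) by a 3-vector extended with w = 1;
// returns a 3-vector, ignoring the resulting w as the task allows.
function multiplyVector(m, v) {
  const [x, y, z] = v;
  const out = [];
  for (let row = 0; row < 3; row++) {
    out.push(m[row * 4 + 0] * x +
             m[row * 4 + 1] * y +
             m[row * 4 + 2] * z +
             m[row * 4 + 3] * 1); // the appended homogeneous 1
  }
  return out;
}
```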
In task2.js
:
Fill in WireframeMesh_Two.prototype.render
. This method
will be very similar to the render function you completed for Task 1,
with two small changes: You are now passed three matrices
(model
, view
, projection
), which
you need to assemble into a model-view-projection matrix. In this
assignment, the camera (view) matrix given to you is the
frame-to-canonical matrix.
You’ll find SimpleMatrix.multiply
,
SimpleMatrix.inverse
, and
SimpleMatrix.multiplyVector
helpful for this task.
If you’ve implemented this task correctly, you will get a result like the one shown below. Once you implement rotations you will be able to rotate your scene by clicking and dragging the mouse up and down.
In task3.js
, implement the function
rotateAroundAxisAtPoint
. This function takes an axis (array
of 3 floats), an angle, and a point (array of 3 floats). The function
returns a transformation matrix that rotates around the given axis at
the given point by the given angle. Rotating at the given point means
that the point itself will stay unchanged by the transformation matrix,
and that all other points will rotate around the point if transformed by
the matrix.
Hint: You will need a combination of two translation matrices and a rotation matrix to solve this task. Check the book or the lecture notes on similarity transformations if you aren’t sure how to do this.
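The hint amounts to composing T(p) * R * T(-p): translate the point to the origin, rotate, and translate back. A self-contained toy version, using flat row-major arrays and a z-axis rotation for concreteness (not the assignment’s SimpleMatrix API; all names here are invented):

```javascript
// Toy 4x4 row-major helpers, not matrix.js.
function translate(tx, ty, tz) {
  return [1,0,0,tx, 0,1,0,ty, 0,0,1,tz, 0,0,0,1];
}
function rotateZ(deg) {
  const r = deg * Math.PI / 180, c = Math.cos(r), s = Math.sin(r);
  return [c,-s,0,0, s,c,0,0, 0,0,1,0, 0,0,0,1];
}
function multiply(a, b) { // c = a * b
  const c = new Array(16).fill(0);
  for (let i = 0; i < 4; i++)
    for (let j = 0; j < 4; j++)
      for (let k = 0; k < 4; k++)
        c[i*4+j] += a[i*4+k] * b[k*4+j];
  return c;
}
function apply(m, [x, y, z]) { // w stays 1 for these matrices, so ignore it
  return [0, 1, 2].map(r => m[r*4]*x + m[r*4+1]*y + m[r*4+2]*z + m[r*4+3]);
}

// Rotate around the z axis at point p: move p to the origin, rotate there,
// then move back. p itself is a fixed point of the result.
function rotateAtPoint(deg, p) {
  const [px, py, pz] = p;
  return multiply(translate(px, py, pz),
                  multiply(rotateZ(deg), translate(-px, -py, -pz)));
}
```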
If you’ve implemented this task correctly, you will get a result like this. One cube will rotate around the y axis (above) and one cube will rotate around the -x axis (left) about the sphere on the right. One cube will rotate around the -y axis (below) and one cube will rotate around the x axis (right) about the sphere on the left. If you click and drag the mouse up or down, the camera should rotate around the animating shapes.
In this task you will implement a hierarchical transform to make a skeleton move. The skeleton is given by a series of bones that are connected with each other. A bone counts as connected to another bone when it has a non-null parent.
All bones in the skeleton begin as cubes. You can think of the scaling component of their model transformations as the part that gives them their non-cube shapes. This step happens before any other transformations are applied, so it’s separated out from the rest of the model matrix. Each bone’s transformation matrix \(\mathbf{M}\) consists of three subparts, applied in this order: first the bone’s scaling, then a rotation around its joint axis at its joint location, and finally a translation to its position (with the parent’s pose matrix applied on top, if the bone has a parent).
In task4.js:

- Fill in computePoseMatrix. This function should compute the pose matrix of the bone - remember, this is just the rotation and translation of the bone, not the scaling. First rotate the bone at this.jointLocation around this.jointAxis by this.jointAngle, then translate it by this.position. Re-use the function you implemented in the last task. Finally, if the bone has a parent, retrieve its pose matrix with a call to the parent’s computePoseMatrix function and apply it to the child bone.
- Fill in computeModelMatrix. This function should return the model matrix of the bone. It should apply the bone’s scaling first (to determine its dimensions; stored as this.scale, a 3-vector), then apply the pose matrix.

If you’ve implemented the task correctly, you will get a result like this. You will be able to manipulate the skeleton with the sliders below it. If you modify the hip rotation, the whole skeleton should turn; if you modify the hip angle, the whole leg should move; if you modify the knee angle, only the lower part of the leg should move; and if you modify the ankle angle, only the foot should move. If you click and drag the mouse up or down, the camera should rotate around the skeleton.
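The parent-chaining in computePoseMatrix can be sketched in miniature. The toy below collapses each bone’s pose to a single translation along one axis so the recursion is easy to see; in the real task each pose is a 4x4 matrix and the parent’s pose multiplies on the left (all names here are invented):

```javascript
// Toy bone hierarchy: a bone's world pose is its local pose with the
// parent's pose applied on top. Here "pose" is just an x offset.
function Bone(offset, parent) {
  this.offset = offset; // local translation along x
  this.parent = parent; // another Bone, or null for the root
}
Bone.prototype.computePoseOffset = function () {
  // Recurse up the chain: parent's pose composed with this bone's local pose.
  return this.parent
    ? this.parent.computePoseOffset() + this.offset
    : this.offset;
};
```

Moving a bone near the root (the hip) changes the accumulated pose of everything below it, which is exactly the slider behavior described above.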
For the second part of this assignment, you will be writing your own OpenGL shaders that will run in your browser. The underlying rasterization pipeline works exactly the same as in the first part of the assignment. The main differences are:
For this assignment, we’ve abstracted away many of the OpenGL
commands. Setup is taken care of for you, mostly in
glUtil.js. You are encouraged to read the source code to
familiarize yourself with OpenGL, and to play around with it to get a
feeling for how OpenGL operates.
To complete the following tasks, you will be writing a series of
OpenGL shaders to transform vertices and assign colors to pixels. At the
most basic level, a shader is a program written in a C-style language
that either runs once for each vertex (a vertex shader) to perform
vertex transformations or once for each pixel (a fragment shader) to
compute the color of the pixel. To complete this assignment, you will
need basic shader knowledge and will need to know what
uniform
s and varying
s do.
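As a minimal generic illustration of how uniforms and varyings fit together (this is not one of the assignment’s shaders, and all names here are invented), stored in backtick strings the way the skeleton stores its shader sources:

```javascript
// `uniform`s are set once per draw call from JavaScript and are the same
// for every vertex/pixel; `varying`s are written per vertex and arrive in
// the fragment shader interpolated across each triangle.
const ExampleVertexSource = `
  uniform mat4 ModelViewProjection; // same value for the whole draw call
  attribute vec3 Position;          // per-vertex input
  varying vec3 Color;               // per-vertex output, interpolated

  void main() {
    gl_Position = ModelViewProjection * vec4(Position, 1.0);
    Color = Position * 0.5 + 0.5;   // something per-vertex to pass along
  }
`;

const ExampleFragmentSource = `
  precision highp float;
  varying vec3 Color;               // interpolated value from the vertices

  void main() {
    gl_FragColor = vec4(Color, 1.0);
  }
`;
```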
There is much to know about WebGL and GLSL, the language in which shaders are written. Although I have made an effort to cover everything you need to know in class, some amount of self-directed learning via searching documentation will inevitably be required. Here are some resources to help you out:
and here are some more comprehensive documentation links that you shouldn’t need in order to complete this assignment:
Complete the following tasks in task5.js:

- In TriangleMesh.prototype.render, compute a model-view-projection matrix (you can re-use code from earlier tasks).
- Fill in BlackVertexSource. This is a string containing the source code for a vertex shader. I have already filled in part of it for you. Your shader takes a ModelViewProjection matrix and a vertex position, and should transform the position by the matrix and store the result in gl_Position.
- Fill in BlackFragmentSource. This is a string containing the fragment shader: a program that determines the color of each pixel of a triangle. The fragment shader should assign a black color to the gl_FragColor output. gl_FragColor is a 4-vector (red, green, blue, alpha) where each component is in the 0-1 range. Write a black color with full alpha.

If your shader runs correctly, you should see the exact same scene as in Task 2, only now rendered with solid triangles instead of wireframe:
In this task you will extend your WebGL renderer to do diffuse (or “Lambertian”) shading. The end result of this task should be a spinning red cube and a spinning red sphere.
In task6.js, start by going to
ShadedTriangleMesh.prototype.render
and computing a
model-view-projection matrix (again, feel free to re-use code from prior
tasks).
Next, fill in the vertex shader in LambertVertexSource
.
Similar to before, this vertex shader should transform the vertex
position by the model-view-projection matrix and store the result in
gl_Position
. You are also given data necessary to light the
model: The surface normal at the vertex, the position of the light, the
intensity of the light, the surface color and the ambient color. You
should use this information to compute the diffuse shaded color at the
vertex.
Notes: This diffuse lighting model has two differences from the one we used in our ray tracer. First, there’s an “ambient” term that is simply an amount of light that is assumed to hit all objects regardless of position or orientation. This simply gets added onto the Lambertian shading term. Secondly, the lights in this assignment are physically accurate point lights whose intensities fall off as the square of the distance from the light.
Hint: The vertex position and vertex normal are in model space, while the light position is in world space. You will need a matrix transformation to get everything into the same space. The vertex shader does not currently have enough information to compute this transformation; you will need to think about what data you need, and add additional uniforms to get the data into the shader.
After the vertex shader has computed the color, you should put it in
a varying
(I’ve added one for you and named it
Color
) so you can read it in the fragment shader.
Fill in LambertFragmentSource
. This fragment shader
should get the color computed by the vertex shader (using a
varying
), and assign this color to
gl_FragColor
.
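The lighting model described above can be sketched in plain JavaScript (a standalone illustration of the math only; in the assignment this happens in GLSL with vec3s, and the names and the scalar light intensity here are assumptions):

```javascript
// Small vector helpers over length-3 arrays.
function dot(a, b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }
function sub(a, b) { return [a[0]-b[0], a[1]-b[1], a[2]-b[2]]; }
function normalize(a) {
  const len = Math.sqrt(dot(a, a));
  return [a[0]/len, a[1]/len, a[2]/len];
}

// Diffuse shading with an ambient term and inverse-square falloff:
//   color = ambient + surfaceColor * intensity * max(n.l, 0) / d^2
// (all positions assumed to already be in the same space, per the hint).
function lambert(position, normal, lightPos, intensity, surfaceColor, ambient) {
  const toLight = sub(lightPos, position);
  const d2 = dot(toLight, toLight); // squared distance to the light
  const ndotl = Math.max(dot(normalize(normal), normalize(toLight)), 0.0);
  const diffuse = intensity * ndotl / d2;
  return [0, 1, 2].map(i => ambient[i] + surfaceColor[i] * diffuse);
}
```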
If you’ve implemented this task correctly, you will get a result like the one shown below. You will be able to pan around the cube by clicking in the image and dragging the mouse up and down.
This task is required for 580 and extra credit for 480 students. Feel free to propose your own extensions for extra credit, but please run them by me first.
This task will require you to make two changes: First, change your shader so the shading computation doesn’t happen only for every vertex, but for every pixel. This requires you to move the shading code from the vertex to the fragment shader. This is necessary because the specular highlight might be small compared to the size of a triangle, so interpolating from vertices will simply skip over the sharp highlight.
For the second part of this task, you will need to implement
Blinn-Phong reflection - the same reflection model you implemented in
A2. I am giving you the parameters you need: n
(the
exponent of the Phong model - we’ve called this s
in the
past) and ks
(specular coefficient, or the intensity of the
Phong lobe). You will need to think about what additional data and
matrices you need to pass as uniforms to your shader to compute all the
necessary data.
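The specular term can be sketched the same way (a scalar JavaScript illustration of the math, with invented names; in the assignment this lives in the fragment shader in GLSL):

```javascript
function dot(a, b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }
function normalize(a) {
  const len = Math.sqrt(dot(a, a));
  return [a[0]/len, a[1]/len, a[2]/len];
}

// Blinn-Phong specular: ks * max(n . h, 0)^n, where h is the half vector
// between the unit directions to the light and to the viewer.
function blinnPhongSpecular(normal, toLight, toView, ks, nExp) {
  const l = normalize(toLight), v = normalize(toView);
  const h = normalize([l[0]+v[0], l[1]+v[1], l[2]+v[2]]);
  return ks * Math.pow(Math.max(dot(normalize(normal), h), 0.0), nExp);
}
```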
Add the result of the Blinn-Phong formula to the diffuse shading. You should get a result like the one below, with diffuse shading and specular highlights on both the cube and the sphere:
Points are earned for correctness and deducted for deficiencies in clarity or efficiency.
580 only:
Deductions:
Many thanks are due to Wojciech Jarosz and his TAs for developing and sharing the assignment on which this assignment is based.