CSCI 480 / 580 - Assignment 3: Object-Order Rendering and WebGL

Scott Wehrwein

Fall 2024

Quick facts:

Introduction

In this assignment, you will put the mathematics behind matrix transformations into practice, using transformations to render 3D geometric primitives. You will implement four simple programs that render and transform these primitives using Javascript and the HTML5 canvas. You will also learn the basics of OpenGL and write your own shaders that run in your browser.

Getting Started

As usual, the skeleton code is available via a Github Classroom invitation link in the Canvas assignment. You’ll complete the assignment individually, so there’s no step required to create a team - just click the link and accept the assignment to create your repository.

As you work through the project, feel free to (optionally) add notes in the HTML file under each task where it says “report problems, comments…”. I’m particularly interested in anything where I didn’t cover what you needed to know, or if you found something especially confusing.

Let’s talk about Javascript¹

This project is done in Javascript, everybody’s other favorite language whose name begins with ‘J’. If you’re not familiar with Javascript, don’t panic: neither am I. There aren’t too many things you need to know about the language to get by in this assignment. Here, I’ll provide a tour of the skeleton code and try to point out and explain some oddities you might encounter, particularly in the matrix library.

The code is fairly cleanly segmented into separate files. Your main entrypoint from a running-and-testing perspective is a3.html. When you open this in a browser, you’ll see a single webpage with a separate canvas for each task in the assignment. The HTML file has only a bit of javascript code, the setupAssignment function, which it calls on page load. It also includes quite a pile of other javascript code via some <script> tags. You will not need to write any code in the following files:

The Matrix Library

Not all languages can be as awesome as Julia. As a case in point, Javascript doesn’t have multi-dimensional arrays built in. matrix.js contains a small library that implements a SimpleMatrix object to patch this shortcoming. If you’re not familiar with Javascript, the syntax may appear a bit weird, but it’s really just OOP hacked onto a language that doesn’t have it. The matrix library is set up as follows:

taskN.js

Each task in this assignment lives in its own javascript file, and the skeleton code takes care of running each task in its own canvas in the HTML file. Each file is largely built around implementing “methods” of another “class”, much as was done in the matrix library (a minimal sketch of this pattern follows below).
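
For example, here is a minimal sketch of that pattern (WireframeMesh is a real name from task1.js, but the constructor and fields shown here are purely illustrative):

    // A "class" is just a constructor function that sets up fields on `this`.
    function WireframeMesh(vertices, edges) {
        this.vertices = vertices;   // illustrative fields, not necessarily the skeleton's
        this.edges = edges;
    }

    // "Methods" are added by assigning functions onto the prototype object;
    // inside them, `this` refers to the instance the method was called on.
    WireframeMesh.prototype.render = function (context) {
        // ... draw this.edges onto the canvas context ...
    };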

For the WebGL tasks, notice that the GLSL code for the shaders lives inside backtick-quoted strings, each stored in its own variable.
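
For instance, a shader variable might look roughly like this (the name and body here are illustrative, not the skeleton’s):

    // Backtick-quoted strings (template literals) can span multiple lines,
    // which makes them convenient for embedding GLSL source in Javascript.
    var ExampleFragmentSource = `
        void main() {
            gl_FragColor = vec4(1.0);  // GLSL code living inside an ordinary JS string
        }
    `;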

Debugging with The Developer Console

The developer console is going to be your friend for this assignment - it will tell you about compiler errors and runtime errors, and it also serves as your debugger.

Opening the developer console depends on your platform and browser:

Check the developer console of your browser for any error messages when you first open the website. You should keep the console open while you work on this assignment: it will tell you about any problems occurring in your program. You can also make use of console.log to help you debug your program.

Note: The assignment should Just Work on the lab environment. If you’re having any compatibility or platform issues, let me know as early as possible.

Online Resources

Tasks - Overview

In the first three tasks, you will complete the implementation of a barebones Javascript renderer that implements the transformation pipeline we discussed in class and supports a scene graph-like transformation hierarchy.

  1. Perspective division - implement a quick shortcut version of the perspective projection we covered in class.
  2. Matrix operations - build a library of transformation matrix routines; these are used to build model, view, and projection matrices, which you’ll assemble into a full end-to-end transformation matrix.
  3. Similarity transforms - use composition to implement the ability to rotate an object around an axis.
  4. Hierarchical transforms - add scene graph functionality to a multi-part model so child objects move along with their parents.

In Tasks 5-6 (and 7 for 580 students), you’ll work with WebGL code, which implements an object-order rendering pipeline supported by GPU hardware.

  5. OpenGL basics - implement a fragment shader that colors a surface black.
  6. OpenGL shaders - implement a vertex and fragment shader that support diffuse (Lambertian) shading.
  7. (580 only) Blinn-Phong shading - implement shaders to support Blinn-Phong shading.

Javascript Wireframe Rendering

1. Simple Projection

In this task, you will finish the implementation of a basic wireframe rasterizer to render the edges of a sphere and a cube. The renderer is given to you with an orthographic projection already implemented (you will start out seeing the sphere up-close-and-personal), and you will only need to change the orthographic projection to a simple perspective projection.

Fill in the missing part of WireframeMesh.prototype.render in task1.js. This method already contains an implementation of a wireframe renderer; all you have to do is implement the perspective projection. You can do this by simply dividing the \(x\) and \(y\) components by the \(z\) coordinate before rendering the edges. Don’t overthink this: there’s no matrix pipeline happening. What you’re doing is equivalent to the following perspective projection matrix, which is for a camera in canonical position with a viewport distance of 1: \[ \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 \\ \end{bmatrix} \] Although you have very little code to write for this task, you may find it helpful in later parts to spend some time now getting to know how the basic renderer works. If you’ve implemented this task correctly, you will get an image like the one shown below.
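
In code, the whole “projection” is just a divide inside the existing per-vertex loop; something like the following sketch, where the variable names are placeholders and the vertex is assumed to already be in camera coordinates with a viewport distance of 1:

    // Perspective projection with viewport distance 1: divide x and y by z.
    var projected = {
        x: v.x / v.z,
        y: v.y / v.z,
        z: v.z
    };
    // ...then connect the projected endpoints of each edge exactly as before.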

2. Matrix Operations

In this task you will implement basic matrix operations and simple transforms. This task builds on Task 1; make sure you have implemented the first task. The end result of this task should be three animating shapes. There should be a cube translating in a circle on the left, a cube shrinking and growing in the center, and a sphere rotating on the right side of the screen.

In matrix.js:

In task2.js:

Fill in WireframeMesh_Two.prototype.render. This method will be very similar to the render function you completed for Task 1, with two small changes: you are now passed three matrices (model, view, projection), which you need to assemble into a single model-view-projection matrix, and the camera (view) matrix given to you is the frame-to-canonical matrix.

You’ll find SimpleMatrix.multiply, SimpleMatrix.inverse, and SimpleMatrix.multiplyVector helpful for this task.
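
For example, the assembly might look like the following sketch. It assumes SimpleMatrix.multiply(A, B) computes the product \(AB\); since the view matrix you’re given is frame-to-canonical, its inverse is what maps world coordinates into camera space:

    // model: object -> world;  inverse(view): world -> camera;  projection: camera -> clip
    var mvp = SimpleMatrix.multiply(
        projection,
        SimpleMatrix.multiply(SimpleMatrix.inverse(view), model)
    );
    // Each vertex can then be transformed with SimpleMatrix.multiplyVector
    // (check the argument order your library expects).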

If you’ve implemented this task correctly, you will get a result like the one shown below. Once you implement rotations you will be able to rotate your scene by clicking and dragging the mouse up and down.

Note: Due to a minor bug in the skeleton, the previews for Tasks 2, 3, and 4 are scaled differently from what you would expect; the camera will be zoomed in further than shown below. To adjust your output so that it matches the following, change the 30 to a 45 in the projection matrix constructor (this number is the field of view).

3. Similarity Transforms

In task3.js, implement the function rotateAroundAxisAtPoint. This function takes an axis (array of 3 floats), an angle, and a point (array of 3 floats). The function returns a transformation matrix that rotates around the given axis at the given point by the given angle. Rotating at the given point means that the point itself will stay unchanged by the transformation matrix, and that all other points will rotate around the point if transformed by the matrix.

Hint: You will need a combination of two translation matrices and a rotation matrix to solve this task. Check the book or the lecture notes on similarity transformations if you aren’t sure how to do this.
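
A sketch of that composition is below. The helper names SimpleMatrix.translate and SimpleMatrix.rotate are placeholders for whatever routines your matrix library from Task 2 actually provides:

    function rotateAroundAxisAtPoint(axis, angle, point) {
        // Move the point to the origin, rotate about the axis, then move back.
        var toOrigin   = SimpleMatrix.translate(-point[0], -point[1], -point[2]);
        var rotation   = SimpleMatrix.rotate(angle, axis[0], axis[1], axis[2]);
        var fromOrigin = SimpleMatrix.translate(point[0], point[1], point[2]);
        // Applied right-to-left: T(p) * R * T(-p)
        return SimpleMatrix.multiply(fromOrigin,
               SimpleMatrix.multiply(rotation, toOrigin));
    }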

If you’ve implemented this task correctly, you will get a result like this. One cube will rotate around the y axis (above) and one cube will rotate around the -x axis (left) about the sphere on the right. One cube will rotate around the -y axis (below) and one cube will rotate around the x axis (right) about the sphere on the left. If you click and drag the mouse up or down, the camera should rotate around the animating shapes.

4. Hierarchical Transforms

In this task you will implement a hierarchical transform to make a skeleton move. The skeleton is given by a series of bones that are connected with each other. A bone counts as connected to another bone when it has a non-null parent.

All bones in the skeleton begin as cubes. You can think of the scaling component of their model transformations as the part that gives them their non-cube shapes. This step happens before any other transformations are applied, so it’s separated out from the rest of the model matrix. Each bone’s transformation matrix \(\mathbf{M}\) consists of three subparts, applied in this order:

  1. The bone’s scale matrix \(\mathbf{M_{scale}}\), which determines its shape.
  2. The bone’s pose matrix, \(\mathbf{M_{pose}}\), which translates and rotates it to position it with respect to its own model coordinates system.
  3. If the bone is connected to another bone, its parent’s pose matrix \(\mathbf{M_{parent}}\) is applied so the bone is transformed along with the bone it’s connected to (see the composition sketched just after this list).
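
Written as a matrix product (a sketch based on the order above; for a bone with no parent, simply drop the parent term, and note that a parent’s pose may itself include its own parent’s pose, applied recursively up the chain):

\[ \mathbf{M} = \mathbf{M_{parent}} \, \mathbf{M_{pose}} \, \mathbf{M_{scale}} \]

The scale appears rightmost because it is applied to the vertex coordinates first.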

In task4.js:

If you’ve implemented the task correctly, you will get a result like this. You will be able to manipulate the skeleton with the sliders below it. If you modify the hip rotation, the whole skeleton should turn; if you modify the hip angle, the whole leg should move; if you modify the knee angle, only the lower part of the leg should move; if you modify the ankle angle, only the foot should move. If you click and drag the mouse up or down, the camera should rotate around the skeleton.

OpenGL Rendering

For the second part of this assignment, you will be writing your own OpenGL shaders that will run in your browser. The underlying rasterization pipeline works exactly the same as in the first part of the assignment. The main differences are:

For this assignment, we’ve abstracted away many of the OpenGL commands. Setup is taken care of for you, mostly in glUtil.js. You are encouraged to read the source code to familiarize yourself with OpenGL and to play around with it to get a feel for how OpenGL operates.

To complete the following tasks, you will be writing a series of OpenGL shaders to transform vertices and assign colors to pixels. At the most basic level, a shader is a program written in a C-style language that either runs once for each vertex (a vertex shader) to perform vertex transformations or once for each pixel (a fragment shader) to compute the color of the pixel. To complete this assignment, you will need basic shader knowledge and will need to know what uniforms and varyings do.

There is much to know about WebGL and GLSL, the language in which shaders are written. Although I have made an effort to cover everything you need to know in class, some amount of self-directed learning via searching documentation will inevitably be required. Here are some resources to help you out:

and here are some more comprehensive documentation links that you shouldn’t need in order to complete this assignment:

5. OpenGL Basics

Complete the following tasks in task5.js:

If your shader runs correctly, you should see the exact same scene as in Task 2, only now rendered with solid triangles instead of wireframe:
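
For reference, a rough sketch of what the finished pair of shaders might look like is below; the uniform and attribute names (ModelViewProjection, Position) are placeholders, so match them to the names declared in the skeleton:

    var SolidVertexSourceSketch = `
        uniform mat4 ModelViewProjection;
        attribute vec3 Position;

        void main() {
            // Transform the model-space position all the way to clip space.
            gl_Position = ModelViewProjection * vec4(Position, 1.0);
        }
    `;

    var SolidFragmentSourceSketch = `
        precision highp float;

        void main() {
            gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);  // solid black, full alpha
        }
    `;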

6. OpenGL Shaders

In this task you will extend your WebGL renderer to do diffuse (or “Lambertian”) shading. The end result of this task should be a spinning red cube and a spinning red sphere.

In task6.js, start by going to ShadedTriangleMesh.prototype.render and computing a model-view-projection matrix (again, feel free to re-use code from prior tasks).

Next, fill in the vertex shader in LambertVertexSource. Similar to before, this vertex shader should transform the vertex position by the model-view-projection matrix and store the result in gl_Position. You are also given data necessary to light the model: The surface normal at the vertex, the position of the light, the intensity of the light, the surface color and the ambient color. You should use this information to compute the diffuse shaded color at the vertex.

Notes: This diffuse lighting model has two differences from the one we used in our ray tracer. First, there’s an “ambient” term: an amount of light assumed to hit all objects regardless of position or orientation, which simply gets added onto the Lambertian shading term. Second, the lights in this assignment are physically accurate point lights whose intensities fall off with the square of the distance from the light.

Hint: The vertex position and vertex normal are in model space, while the light position is in world space. You will need a matrix transformation to get everything into the same space. The vertex shader does not currently have enough information to compute this transformation; you will need to think about what data you need, and add additional uniforms to get the data into the shader.

After the vertex shader has computed the color, you should put it in a varying (I’ve added one for you and named it Color) so you can read it in the fragment shader.

Fill in LambertFragmentSource. This fragment shader should get the color computed by the vertex shader (using a varying), and assign this color to gl_FragColor.
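
Putting the pieces above together, here is a rough sketch of the shape the two shaders might take. All uniform and attribute names other than the Color varying are placeholders, and the Model and NormalMatrix uniforms are exactly the kind of extra data the hint above asks you to add yourself:

    var LambertVertexSourceSketch = `
        uniform mat4 ModelViewProjection;
        uniform mat4 Model;           // placeholder: model-to-world transform
        uniform mat4 NormalMatrix;    // placeholder: transform for normals (inverse transpose of Model)
        uniform vec3 LightPosition;   // world space
        uniform float LightIntensity;
        uniform vec3 SurfaceColor;
        uniform vec3 AmbientColor;

        attribute vec3 Position;      // model space
        attribute vec3 Normal;        // model space

        varying vec3 Color;

        void main() {
            gl_Position = ModelViewProjection * vec4(Position, 1.0);

            // Bring the position and normal into world space, where the light lives.
            vec3 worldPos = (Model * vec4(Position, 1.0)).xyz;
            vec3 n = normalize((NormalMatrix * vec4(Normal, 0.0)).xyz);

            vec3 toLight = LightPosition - worldPos;
            float r2 = dot(toLight, toLight);          // squared distance to the light
            vec3 l = normalize(toLight);

            // Lambertian term with 1/r^2 falloff, plus the ambient term added on.
            float diffuse = LightIntensity * max(dot(n, l), 0.0) / r2;
            Color = SurfaceColor * diffuse + AmbientColor;
        }
    `;

    var LambertFragmentSourceSketch = `
        precision highp float;

        varying vec3 Color;

        void main() {
            gl_FragColor = vec4(Color, 1.0);
        }
    `;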

If you’ve implemented this task correctly, you will get a result like the one shown below. You will be able to pan around the cube by clicking in the image and dragging the mouse up and down.

7. (580 Only) Blinn-Phong Shading

This task is required for 580 and extra credit for 480 students. Feel free to propose your own extensions for extra credit, but please run them by me first.

This task will require you to make two changes. First, change your shader so the shading computation happens for every pixel rather than only at every vertex; this means moving the shading code from the vertex shader to the fragment shader. This is necessary because the specular highlight might be small compared to the size of a triangle, so interpolating colors from the vertices would simply skip over the sharp highlight.

For the second part of this task, you will need to implement Blinn-Phong reflection - the same reflection model you implemented in A2. I am giving you the parameters you need: n (the exponent of the Phong model - we’ve called this s in the past) and ks (specular coefficient, or the intensity of the Phong lobe). You will need to think about what additional data and matrices you need to pass as uniforms to your shader to compute all the necessary data.

Add the result of the Phong formula to the diffuse shading. You should get a result like the one below, with diffuse shading and specular highlights on both the cube and the sphere:
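
A rough sketch of the per-pixel computation follows. The uniform and varying names are placeholders (in particular, the camera position is one of the extra uniforms you will likely need to pass in yourself), and the matching vertex shader is assumed to forward the world-space position and normal as varyings:

    var BlinnPhongFragmentSourceSketch = `
        precision highp float;

        uniform vec3 LightPosition;    // world space
        uniform float LightIntensity;
        uniform vec3 CameraPosition;   // world space; an assumed extra uniform
        uniform vec3 SurfaceColor;
        uniform vec3 AmbientColor;
        uniform float Shininess;       // the exponent n (what we called s before)
        uniform float Ks;              // specular coefficient

        varying vec3 WorldPosition;    // interpolated from the vertex shader
        varying vec3 WorldNormal;

        void main() {
            vec3 n = normalize(WorldNormal);   // re-normalize: interpolation shortens it
            vec3 toLight = LightPosition - WorldPosition;
            float r2 = dot(toLight, toLight);
            vec3 l = normalize(toLight);
            vec3 v = normalize(CameraPosition - WorldPosition);
            vec3 h = normalize(l + v);         // Blinn-Phong half vector

            float falloff = LightIntensity / r2;
            vec3 diffuse = SurfaceColor * max(dot(n, l), 0.0) * falloff;
            float specular = Ks * pow(max(dot(n, h), 0.0), Shininess) * falloff;

            gl_FragColor = vec4(AmbientColor + diffuse + vec3(specular), 1.0);
        }
    `;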

How and What to Submit

  1. Commit all of your changes and push to Github before the deadline.
  2. Fill out the A3 Survey quiz on Canvas.

Rubric

Points are earned for correctness and deducted for deficiencies in clarity or efficiency.

Correctness

580 only:

Deductions:

Acknowledgements

Many thanks are due to Wojciech Jarosz and his TAs for developing and sharing the assignment on which this assignment is based.


  1. I’ll just leave this here in case you haven’t seen it.↩︎