CSCI 480 / 580 - Written Homework 2

Complete the following problems and submit your solutions to the HW2 assignment on Canvas. For all questions, justify your answers either by showing your work or by giving a brief explanation. Please typeset your solutions using LaTeX or similar. You may work with your classmates on these problems, but you must write up your own solutions individually without using notes or photos made during your collaborative discussions. As usual, the markdown source for this document is available here.

  1. [This problem will help you with A2 TODO 9] In A1, we used a particular texture mapping function for the sphere. In my implementation, I mapped the parameter variables \((\phi, \theta)\) to texture coordinates \((u, v)\). In contrast, when raytracing an ideal sphere, we end up with the \(x,y,z\) coordinates of the intersection point on the surface of the sphere, but we don’t have easy access to the \(\phi\) and \(\theta\) coordinates of that point. To apply a texture to our raytraced sphere, we need to be able to take the \((x,y,z)\) coordinates of a surface point and find its \((u,v)\) coordinates. Given a sphere centered at \((c_x, c_y, c_z)\) with radius \(r\), and a point \((x,y,z)\) on its surface, derive a formula for the \((u,v)\) coordinates of the point using the same texture mapping function as we used in A1. You may find Julia’s atan function helpful; note that when called with two arguments, atan(y, x) is not the plain arctangent but what is often known in other languages as atan2.
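     A minimal Julia sketch of the idea, under assumptions that may not match your A1 code exactly: the polar angle \(\theta\) is measured from the \(+y\) axis, the azimuth \(\phi\) is measured in the \(xz\)-plane, and the mapping is \(u = (\phi + \pi)/2\pi\), \(v = \theta/\pi\). The function name sphere_uv is made up for illustration; adjust the axes and offsets to whatever convention your A1 mapping actually used.

     ```julia
     # Sketch only: assumes theta is measured from +y, phi is the azimuth in the
     # xz-plane, and the A1-style mapping is u = (phi + pi)/2pi, v = theta/pi.
     function sphere_uv(center, r, p)
         d = (p .- center) ./ r            # unit vector from the center to the surface point
         θ = acos(clamp(d[2], -1, 1))      # polar angle from +y; clamp guards against roundoff
         ϕ = atan(d[3], d[1])              # two-argument atan (i.e. atan2), result in (-π, π]
         u = (ϕ + π) / (2π)                # wrap the azimuth into [0, 1]
         v = θ / π                         # scale the polar angle into [0, 1]
         return (u, v)
     end
     ```

     Under this assumed convention, sphere_uv([0.0, 0.0, 0.0], 1.0, [1.0, 0.0, 0.0]) returns (0.5, 0.5), the center of the texture.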

  2. [This problem may help you with A2 TODO 8. The camera in A2 is perspective, not orthographic, but several steps are the same.] In class we derived viewing ray generation for general perspective cameras. We also talked about cameras that perform a parallel projection, but didn’t derive viewing rays. Remember that in a parallel (sometimes called orthographic) projection, the “viewport” lies in the \(uv\) plane of the camera’s local coordinate system, and each pixel’s ray begins at the center of the pixel and points in the view direction. Assume a square viewport with \(vh = vw = 1\). Write a view ray generation procedure (in pseudocode or similar - mathy pseudocode is fine) that takes the following inputs and returns the viewing ray for pixel \((i, j)\) of an orthographic camera (a sketch of one possible approach appears after the input list):

    eye - the 3D position of the center of the viewport
    view - the view direction of the camera
    up - a vector pointing "up" in the scene
    img_h, img_w - the integer pixel dimensions of the image
    i, j - the pixel coordinates (Julia-style, 1-indexed from the top left)
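     The following is one possible approach, sketched in Julia rather than pseudocode; it is not the reference solution. It assumes i indexes rows from the top and j indexes columns from the left, and uses normalize and cross from LinearAlgebra to build the camera basis.

     ```julia
     using LinearAlgebra

     # Sketch only, not the reference solution. Assumes i is the row (from the top),
     # j is the column (from the left), both 1-indexed, and vh = vw = 1.
     function ortho_viewing_ray(eye, view, up, img_h, img_w, i, j)
         # Camera basis: w opposes the view direction, u points right, v points up.
         w = -normalize(view)
         u = normalize(cross(up, w))
         v = cross(w, u)
         # Offset of the pixel center from the viewport center, each in [-1/2, 1/2].
         u_off = (j - 0.5) / img_w - 0.5
         v_off = 0.5 - (i - 0.5) / img_h
         origin = eye .+ u_off .* u .+ v_off .* v   # each ray starts at its pixel center
         direction = -w                             # and points along the view direction
         return (origin, direction)
     end
     ```

     Note the key difference from the perspective case: here the ray origin varies per pixel while the direction is constant, rather than the other way around.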
  3. Consider a triangle in 3D, specified by the 3 points \(\mathbf{a}\), \(\mathbf{b}\), \(\mathbf{c}\). In each of the following, \(\mathbf{x}\) is a point in the plane occupied by the triangle. For each of the following scenarios, what can be determined about the barycentric coordinates (\(\alpha, \beta, \gamma\)) of the point \(\mathbf{x}\) with respect to the triangle? If the coordinates can be determined exactly, give their values. If not, give the strictest inequality possible for each. If the situation does not constrain a given coordinate at all, you can simply say (for example) \(\alpha \in \mathbb{R}\). Hint: Draw a picture and/or refer to Figure 2.36 in the book. (The defining relation for barycentric coordinates is restated after the list of cases below.)

    1. \(\mathbf{x} = \mathbf{c}\)
    2. \(\mathbf{x} = \mathbf{b}\)
    3. \(\mathbf{x}\) lies on the edge between \(\mathbf{b}\) and \(\mathbf{c}\).
    4. \(\mathbf{x}\) is outside the triangle.
    5. \((\mathbf{c} - \mathbf{a}) \cdot (\mathbf{b} - \mathbf{a}) = 0\) and \((\mathbf{x} - \mathbf{a}) \cdot (\mathbf{c} - \mathbf{a}) > 0\) and \((\mathbf{x} - \mathbf{a}) \cdot (\mathbf{b} - \mathbf{a}) > 0\)
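     As a reminder of the standard definition (the same one Figure 2.36 illustrates), the barycentric coordinates of \(\mathbf{x}\) satisfy

     \[
     \mathbf{x} = \alpha\,\mathbf{a} + \beta\,\mathbf{b} + \gamma\,\mathbf{c}, \qquad \alpha + \beta + \gamma = 1,
     \]

     and each coordinate is zero along the edge opposite its corresponding vertex and negative on the far side of that edge.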
  4. [This problem is an optional but recommended exercise] Repeat the derivation for ray-sphere intersection, but for a general sphere with radius \(R\) centered at \(\mathbf{c}\). You can check your work by seeing if your result matches the one in Section 4.4.1, but I recommend doing the derivation on your own first.
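     As a sketch of the starting point (using the notation \(\mathbf{p}(t) = \mathbf{e} + t\,\mathbf{d}\) for the ray, a choice made here rather than given in the problem): substitute \(\mathbf{p}(t)\) into the sphere’s implicit equation

     \[
     \bigl(\mathbf{p}(t) - \mathbf{c}\bigr) \cdot \bigl(\mathbf{p}(t) - \mathbf{c}\bigr) = R^2,
     \]

     and expand into a quadratic \(At^2 + Bt + C = 0\) in \(t\); its real roots, if any, are the parameter values of the intersection points.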