Lecture 5 Problems - CSCI 476/576

  1. In terms of uniqueness and invariance, discuss why single pixels, described using their RGB values, do not make good features for image matching.
  2. For each of the following (linearized) error function shapes, describe the image patch that gave rise to it:
    1. Flat in all directions
    2. Steep in one direction, flat in the orthogonal direction
    3. Steep in all directions
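As a reminder of where these shapes come from (assuming the standard derivation from lecture, where the sum runs over a window W and I_x, I_y are the image gradients), the linearized SSD error of a small shift (u, v) is a quadratic form in the structure tensor:

\[ E(u, v) = \sum_{(x,y) \in W} \left[ I(x+u, y+v) - I(x, y) \right]^2 \approx \begin{bmatrix} u & v \end{bmatrix} \left( \sum_{(x,y) \in W} \begin{bmatrix} I_x^2 & I_x I_y\\ I_x I_y & I_y^2 \end{bmatrix} \right) \begin{bmatrix} u\\ v \end{bmatrix} \]

The steepness of E in any direction is governed by the eigenvalues of that 2x2 matrix, which is what connects the shapes above to image content.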

Let’s investigate the behavior of the Harris corner detector on the three image patches shown below.

\[ \begin{bmatrix} 2 & 2 & 2\\ 2 & 2 & 2\\ 0 & 0 & 0 \end{bmatrix} \qquad \begin{bmatrix} 0 & 2 & 2\\ 0 & 2 & 2\\ 0 & 0 & 0 \end{bmatrix} \qquad \begin{bmatrix} 2 & 2 & 2\\ 0 & 2 & 2\\ 0 & 0 & 2 \end{bmatrix} \]

  4. Compute the structure tensor for each of the above patches. I have it on good authority that these images are noise-free, so we can safely skip the Sobel filter and compute gradients using 3x1 and 1x3 centered finite difference filters with repeat padding.
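If you want to check a hand computation, here is one possible realization in NumPy (a sketch, not required code: the helper name `structure_tensor` and the unscaled [-1, 0, 1] difference filter are my assumptions; if your convention divides the difference by 2, every tensor entry shrinks by a factor of 4):

```python
import numpy as np

def structure_tensor(patch):
    """Structure tensor of an entire patch: the sum of gradient outer
    products over all pixels. Sketch assumption: unscaled [-1, 0, 1]
    centered differences with repeat padding at the border."""
    p = np.pad(patch.astype(float), 1, mode="edge")  # repeat padding
    Ix = p[1:-1, 2:] - p[1:-1, :-2]   # 1x3 centered difference (horizontal)
    Iy = p[2:, 1:-1] - p[:-2, 1:-1]   # 3x1 centered difference (vertical)
    return np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                     [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
```

A uniform patch gives the zero tensor, and a one-sided edge gives a rank-1 tensor, which is a quick sanity check on your own arithmetic.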

  5. Using software of your choice (e.g., np.linalg.eigvals, or use the formula described here), compute the smallest eigenvalue of each of the structure tensors you computed in the prior problem.
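The formula route is worth knowing even if you reach for a library: a symmetric 2x2 matrix [[a, b], [b, c]] has a closed-form smaller eigenvalue. A sketch (the example matrix is made up for illustration, not one of the answers):

```python
import numpy as np

def smallest_eig_2x2(a, b, c):
    """Smaller eigenvalue of the symmetric 2x2 matrix [[a, b], [b, c]]:
    (a + c)/2 - sqrt(((a - c)/2)^2 + b^2)."""
    return (a + c) / 2 - np.sqrt(((a - c) / 2) ** 2 + b ** 2)

# A made-up symmetric matrix, just to show the two routes agree:
M = np.array([[5.0, 2.0], [2.0, 1.0]])
lam_min = smallest_eig_2x2(M[0, 0], M[0, 1], M[1, 1])
```

np.linalg.eigvalsh (the symmetric-matrix variant, which returns eigenvalues in ascending order) should give the same smallest value.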

  6. Write pseudocode (or working Python code if you like, based on our lecture repository codebase) for Harris scores (i.e., the smallest eigenvalue of the structure tensor for each pixel). You should make (exact or pseudocode-style) use of filtering and other routines that already exist in the lecture codebase. Here's a starter signature:

```python
def harris_score(img):
    """ Returns the smaller eigenvalue of the structure tensor for each pixel in img.
    Pre: img is grayscale, float, [0,1]. """
```
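For a sense of the overall shape of an answer, here is one possible NumPy-only sketch. It is not the lecture codebase: `box_sum_3x3` and np.gradient stand in for whatever windowing and gradient routines the repo actually provides.

```python
import numpy as np

def box_sum_3x3(a):
    """Sum of each pixel's 3x3 neighborhood, with repeat padding at the border.
    A stand-in for the lecture codebase's windowing filter."""
    h, w = a.shape
    p = np.pad(a, 1, mode="edge")
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

def harris_score(img):
    """ Returns the smaller eigenvalue of the structure tensor for each pixel in img.
    Pre: img is grayscale, float, [0,1]. """
    Iy, Ix = np.gradient(img)       # centered differences (one-sided at borders)
    Ixx = box_sum_3x3(Ix * Ix)      # windowed gradient products
    Iyy = box_sum_3x3(Iy * Iy)
    Ixy = box_sum_3x3(Ix * Iy)
    # Smaller eigenvalue of [[Ixx, Ixy], [Ixy, Iyy]] via the closed form.
    return (Ixx + Iyy) / 2 - np.sqrt(((Ixx - Iyy) / 2) ** 2 + Ixy ** 2)
```

On a synthetic image with a single step corner, this responds at the corner and stays near zero along the straight edges, which is a useful self-test for your own version.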
  7. Consider the following Harris corner detection result, computed using the code we saw in class:

Some of these points would be better characterized as edge patches, rather than corner patches. Why did our code pick them up, and what would we need to change in order to get only things that really do look like corners in the image?
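As a nudge in one possible direction: a raw score map can respond at many adjacent pixels along a strong structure, so detectors typically keep only strong local maxima. A sketch of that post-processing step (`pick_corners` and the threshold are illustrative assumptions, not the lecture repo's API):

```python
import numpy as np

def pick_corners(scores, thresh):
    """Boolean mask of pixels that exceed a score threshold AND are the
    maximum of their own 3x3 neighborhood (non-maximum suppression).
    The threshold value is image-dependent."""
    h, w = scores.shape
    p = np.pad(scores, 1, mode="constant", constant_values=-np.inf)
    neighborhood_max = np.max(
        [p[i:i + h, j:j + w] for i in range(3) for j in range(3)], axis=0)
    return (scores >= neighborhood_max) & (scores > thresh)
```

Whether this alone fixes the edge responses, or whether the scoring function itself needs adjusting, is exactly what the question is asking you to reason about.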