Let’s investigate the behavior of the Harris corner detector on the
three image patches shown below.
\[
\begin{bmatrix}
2 & 2 & 2\\
2 & 2 & 2\\
0 & 0 & 0
\end{bmatrix}
\qquad
\begin{bmatrix}
0 & 2 & 2\\
0 & 2 & 2\\
0 & 0 & 0
\end{bmatrix}
\qquad
\begin{bmatrix}
2 & 2 & 2\\
0 & 2 & 2\\
0 & 0 & 2
\end{bmatrix}
\]
\] 4. Compute the structure tensor for each of the above patches.
I have it on good authority that these images are noise-free, so we can
safely skip the Sobel filter and compute gradients using 3x1 and 1x3
centered finite difference filters and repeat
padding.
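As a sanity check, here is a minimal NumPy sketch of that computation. It assumes the structure tensor of a patch is the sum of the gradient outer products over all nine pixels (a uniform window over the whole patch), and it uses scipy.ndimage.convolve with mode='nearest' as one way to get repeat padding:

```python
import numpy as np
from scipy.ndimage import convolve

def structure_tensor(patch):
    """Sum of gradient outer products over the whole patch (a 2x2 matrix)."""
    # Centered finite differences; convolve flips the kernel, so this
    # computes (I(x+1) - I(x-1)) / 2.  mode='nearest' is repeat padding.
    dx = np.array([[0.5, 0.0, -0.5]])   # 1x3 horizontal derivative
    dy = dx.T                            # 3x1 vertical derivative
    Ix = convolve(patch.astype(float), dx, mode='nearest')
    Iy = convolve(patch.astype(float), dy, mode='nearest')
    return np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                     [np.sum(Ix * Iy), np.sum(Iy * Iy)]])

patches = [
    np.array([[2, 2, 2], [2, 2, 2], [0, 0, 0]]),
    np.array([[0, 2, 2], [0, 2, 2], [0, 0, 0]]),
    np.array([[2, 2, 2], [0, 2, 2], [0, 0, 2]]),
]
for M in map(structure_tensor, patches):
    print(M)
```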
Using software of your choice (e.g., np.linalg.eigvals, or the formula described here),
compute the smallest eigenvalue of each of the structure tensors you
computed in the prior problem.
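For a 2x2 symmetric matrix this can also be done by hand. Assuming the linked formula is the usual closed form for the eigenvalues of a symmetric 2x2 matrix, the smaller one is
\[
M = \begin{bmatrix} s_{xx} & s_{xy}\\ s_{xy} & s_{yy} \end{bmatrix},
\qquad
\lambda_{\min} = \frac{s_{xx} + s_{yy}}{2} - \sqrt{\left(\frac{s_{xx} - s_{yy}}{2}\right)^2 + s_{xy}^2}.
\]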
Write pseudocode (or working Python code if you like, based on our lecture repository codebase) for Harris scores (i.e., the smallest eigenvalue of the structure tensor at each pixel). You should make (exact or pseudocode-style) use of filtering and other routines that already exist in the lecture codebase.
def harris_score(img):
    """ Returns the smaller eigenvalue of the structure tensor for each pixel in img.
    Pre: img is grayscale, float, [0,1]. """
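A minimal sketch of one possible implementation follows. It uses scipy.ndimage as a stand-in for the lecture codebase's filtering routines (whose exact names are not assumed here), a box filter for the local window sums, and the closed-form expression above for the smaller eigenvalue:

```python
import numpy as np
from scipy.ndimage import convolve

def harris_score(img, window=5):
    """ Returns the smaller eigenvalue of the structure tensor for each pixel in img.
    Pre: img is grayscale, float, [0,1]. """
    # Gradients via centered differences with repeat padding (mode='nearest').
    dx = np.array([[0.5, 0.0, -0.5]])   # convolve flips the kernel
    dy = dx.T
    Ix = convolve(img, dx, mode='nearest')
    Iy = convolve(img, dy, mode='nearest')

    # Per-pixel structure tensor entries, summed over a local window
    # (a box filter here; a Gaussian window is another common choice).
    box = np.ones((window, window)) / (window * window)
    sxx = convolve(Ix * Ix, box, mode='nearest')
    sxy = convolve(Ix * Iy, box, mode='nearest')
    syy = convolve(Iy * Iy, box, mode='nearest')

    # Smaller eigenvalue of [[sxx, sxy], [sxy, syy]] at every pixel, in closed form.
    return 0.5 * (sxx + syy) - np.sqrt(0.25 * (sxx - syy) ** 2 + sxy ** 2)
```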
Some of these points would be better characterized as edge patches, rather than corner patches. Why did our code pick them up, and what would we need to change in order to get only things that really do look like corners in the image?