Corner detection

A corner, in computer vision, is a point where there is a sudden change in intensity in two directions. For an image, if we compute its gradients $I_x$ and $I_y$ and then plot $I_x$ against $I_y$ for the pixels in a small window, we can observe the following (see the sketch after this list):

  1. For flat areas, the gradients are pretty close to zero, as there is no sudden change in intensity values, and so the plotted points will all be clustered uniformly near the origin.
  2. For areas with an edge, $I_x$ and $I_y$ will be correlated (the correlation might be positive or negative): the gradient at every point along the edge points in roughly the same direction, so any sudden change in one component is accompanied by a proportional change in the other, and the plotted points will show a linear trend.
  3. For areas with a corner, there'll be at least two edges, resulting in two linear clusters.
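
To make this concrete, here is a minimal sketch of the gradient scatter plot, assuming only NumPy, SciPy, and Matplotlib; the synthetic image, patch locations, and patch size are illustrative choices, not part of the method.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.ndimage import sobel, gaussian_filter

# Synthetic test image: a bright square on a dark background, slightly
# blurred, so we get flat regions, edges, and corners with smooth gradients.
image = np.zeros((100, 100))
image[30:70, 30:70] = 1.0
image = gaussian_filter(image, sigma=1.0)

# Image gradients I_x and I_y (Sobel filters are one common choice).
Ix = sobel(image, axis=1)
Iy = sobel(image, axis=0)

# Scatter-plot (I_x, I_y) over three 11x11 patches: flat, edge, corner.
patches = {"flat": (10, 10), "edge": (50, 30), "corner": (30, 30)}
fig, axes = plt.subplots(1, 3, figsize=(9, 3))
for ax, (name, (r, c)) in zip(axes, patches.items()):
    ax.scatter(Ix[r - 5:r + 6, c - 5:c + 6].ravel(),
               Iy[r - 5:r + 6, c - 5:c + 6].ravel(), s=5)
    ax.set_title(name)
    ax.set_xlabel("$I_x$")
    ax.set_ylabel("$I_y$")
plt.tight_layout()
plt.show()
```

The flat patch produces a tight blob at the origin, the edge patch a line through the origin, and the corner patch two such lines.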

To characterize these clusters, we form the second-moment matrix

$$M = \sum_{x,y} w(x,y) \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix}$$

where $w(x,y)$ is a window function, and fit an ellipse to the cluster of points, with the larger eigenvalue $\lambda_1$ of $M$ as semi-major axis and the smaller eigenvalue $\lambda_2$ as semi-minor axis. For the above cases (a code sketch of this classification follows the list),

  1. Case 1 (flat) will be true when $\lambda_1$ and $\lambda_2$ are both small
  2. Case 2 (edge) will be true when $\lambda_1$ is much greater than $\lambda_2$
  3. Case 3 (corner) will be true when $\lambda_1$ and $\lambda_2$ are both large
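
As a sketch of this classification, reusing the `Ix` and `Iy` arrays from the snippet above; the `small` and `ratio` thresholds are illustrative assumptions, not standard values.

```python
import numpy as np

def classify_patch(Ix_patch, Iy_patch, small=1e-2, ratio=5.0):
    # Second-moment matrix M summed over the patch (a uniform window here;
    # a Gaussian window w(x, y) is the more common choice in practice).
    M = np.array([
        [np.sum(Ix_patch ** 2), np.sum(Ix_patch * Iy_patch)],
        [np.sum(Ix_patch * Iy_patch), np.sum(Iy_patch ** 2)],
    ])
    lam2, lam1 = np.linalg.eigvalsh(M)  # ascending order, so lam1 >= lam2
    if lam1 < small:
        return "flat"      # both eigenvalues small
    if lam1 / max(lam2, 1e-12) > ratio:
        return "edge"      # lam1 much greater than lam2
    return "corner"        # both eigenvalues large

# E.g., the corner patch from the previous snippet:
print(classify_patch(Ix[25:36, 25:36], Iy[25:36, 25:36]))  # "corner"
```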

Harris method

We define a Harris corner response function $R$, such that

$$R = \det(M) - k\,(\operatorname{trace}(M))^2 = \lambda_1 \lambda_2 - k\,(\lambda_1 + \lambda_2)^2$$

where $k$ is an empirically determined constant, typically between $0.04$ and $0.06$. If $R$ is greater than a certain threshold, then we consider the point a corner.
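
Here is a minimal end-to-end sketch of the Harris response in NumPy, reusing the `image` array from the first snippet; `k = 0.04` and the `0.01 * R.max()` threshold are illustrative choices.

```python
import numpy as np
from scipy.ndimage import sobel, gaussian_filter

def harris_response(image, sigma=1.0, k=0.04):
    Ix = sobel(image, axis=1)
    Iy = sobel(image, axis=0)
    # Entries of M, with a Gaussian window w(x, y) applied around each pixel.
    Sxx = gaussian_filter(Ix * Ix, sigma)
    Syy = gaussian_filter(Iy * Iy, sigma)
    Sxy = gaussian_filter(Ix * Iy, sigma)
    det_M = Sxx * Syy - Sxy ** 2
    trace_M = Sxx + Syy
    return det_M - k * trace_M ** 2  # R = det(M) - k * trace(M)^2

R = harris_response(image)
corners = np.argwhere(R > 0.01 * R.max())  # threshold on R is a tunable choice
```

OpenCV ships the same computation as `cv2.cornerHarris`, which is usually preferable to a hand-rolled version.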