Simultaneous Localization And Mapping (SLAM) is a problem in robotics where the robot has to figure out where it is while simultaneously mapping its environment. The SLAM problem is today considered to be "solved", or at least to have solutions that are not far off the expected results.
A SLAM system can be split into two major parts: the frontend and the backend. The frontend is also called visual odometry (VO). VO's task is to estimate the trajectory of a camera from adjacent frames. These frames are not necessarily adjacent in the video stream; they are picked as keyframes based on certain criteria.
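A minimal sketch of what the frontend does between two keyframes, using OpenCV; the file names and camera intrinsics below are placeholders:

```python
import cv2
import numpy as np

# Two (hypothetical) keyframes, loaded as grayscale images.
img1 = cv2.imread("keyframe1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("keyframe2.png", cv2.IMREAD_GRAYSCALE)

# Pick corner points in the first keyframe and track them into the second.
pts1 = cv2.goodFeaturesToTrack(img1, maxCorners=500, qualityLevel=0.01, minDistance=7)
pts2, status, _ = cv2.calcOpticalFlowPyrLK(img1, img2, pts1, None)
good = status.ravel() == 1
pts1, pts2 = pts1[good], pts2[good]

# Placeholder intrinsics; a real system uses the camera's calibration.
K = np.array([[718.9, 0.0, 607.2],
              [0.0, 718.9, 185.2],
              [0.0, 0.0, 1.0]])

# Relative pose between the keyframes: rotation R and translation t
# (t is only known up to scale for a monocular camera).
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
```

Chaining these relative poses across successive keyframes gives the camera trajectory.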
A set of features is chosen, and the points on those features are monitored. The assumption is that the points will not move much relative to each other between adjacent keyframes, and can therefore provide information about the camera's pose and the points' locations. These points are often called landmarks in vSLAM.
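Continuing the sketch above, the tracked points can be triangulated into 3-D landmarks once the relative pose is known (K, R, t, pts1, pts2 carried over from the previous snippet):

```python
# Projection matrices for the two keyframes (first camera at the origin).
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])

# Triangulate the matched points into homogeneous 3-D coordinates.
pts4d = cv2.triangulatePoints(P1, P2, pts1.reshape(-1, 2).T, pts2.reshape(-1, 2).T)
landmarks = (pts4d[:3] / pts4d[3]).T  # N x 3 landmark positions
```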
Corners are easier to detect and less uniform than edges, so corners are often used as features. The detection method has to be stable enough to keep finding the same corners under changes such as camera rotation. Common techniques proposed for this problem include SIFT and ORB.
SIFT is expensive to compute and is therefore not really suitable for real-time SLAM applications.
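A quick way to see the cost difference is to time both detectors on the same image (a sketch; cv2.SIFT_create needs OpenCV 4.4 or newer, and the file name is a placeholder):

```python
import time
import cv2

img = cv2.imread("keyframe1.png", cv2.IMREAD_GRAYSCALE)

for name, det in [("SIFT", cv2.SIFT_create()), ("ORB", cv2.ORB_create())]:
    t0 = time.perf_counter()
    kp, des = det.detectAndCompute(img, None)
    print(f"{name}: {len(kp)} keypoints in {time.perf_counter() - t0:.3f} s")
```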
A feature point is made of two main parts: the key point, which is the feature's position in the image, and the descriptor, which encodes the appearance of the pixels around it.
Oriented FAST and Rotated BRIEF (ORB) uses the FAST detector for its key points and BRIEF for its descriptors.
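A minimal ORB example with OpenCV, showing both parts of a feature point and how the binary descriptors are matched (file names are placeholders):

```python
import cv2

img1 = cv2.imread("keyframe1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("keyframe2.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# A key point carries its location, orientation, and pyramid level;
# each descriptor is a 32-byte (256-bit) binary string.
print(kp1[0].pt, kp1[0].angle, kp1[0].octave)
print(des1.shape)  # (number of keypoints, 32)

# Binary descriptors are compared with the Hamming distance.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
```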
Oriented FAST adds scale and orientation information to FAST key points. The orientation part is straightforward: it is the angle of the vector from the patch's geometric center to its intensity centroid. For scale, image pyramids are used: the image is repeatedly downsampled, and FAST corners are detected at every level, so the same feature can be found at different scales.
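The orientation computation is simple enough to sketch directly in NumPy, assuming a square grayscale patch cut out around the key point:

```python
import numpy as np

def patch_orientation(patch):
    """Orientation of a patch via the intensity centroid, as in Oriented FAST.

    The image moments are m_pq = sum over (x, y) of x**p * y**q * I(x, y),
    with the origin at the patch center; theta = atan2(m01, m10) is the
    angle of the vector from the geometric center to the intensity centroid.
    """
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    xs = xs - (w - 1) / 2.0  # put the origin at the patch center
    ys = ys - (h - 1) / 2.0
    m10 = np.sum(xs * patch)
    m01 = np.sum(ys * patch)
    return np.arctan2(m01, m10)
```

Called on, say, a 31x31 patch around a FAST corner, this returns the angle that Rotated BRIEF uses below.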
Rotated BRIEF uses the orientation information from Oriented FAST to steer the BRIEF sampling pattern, producing a Steered BRIEF descriptor with much better rotation invariance.
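Steered BRIEF can be sketched as rotating the test pattern by the key point's orientation before sampling. The random pattern below is a stand-in for the learned point pairs that ORB actually uses, and there are no border checks:

```python
import numpy as np

rng = np.random.default_rng(0)
# 256 random point pairs inside a 31x31 patch (ORB uses a learned pattern).
pattern = rng.integers(-15, 16, size=(256, 2, 2)).astype(float)

def steered_brief(img, x, y, theta):
    """256-bit BRIEF descriptor with the test pattern rotated by theta.

    (x, y) must be far enough from the image border for the rotated
    offsets (up to ~22 px) to stay inside the image.
    """
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    bits = []
    for p, q in pattern:
        px, py = R @ p  # rotate both test points by the key point's angle
        qx, qy = R @ q
        bits.append(img[int(y + py), int(x + px)] < img[int(y + qy), int(x + qx)])
    return np.packbits(np.array(bits))  # 32 bytes = 256 bits
```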