
ORB-SLAM2 code

structure

  • Tracking.cpp

  • LocalMapping.cpp

  • LoopClosing.cpp

  • Viewer.cpp

stereo_EuRoC

  • LoadImages()
  • create SLAM system: ORB_SLAM2::System SLAM
  • vector for tracking time statistics: vTimesTrack
  • rectify images (precomputed rectification maps)
  • pass images to SLAM: SLAM.TrackStereo
  • wait to load next frame
  • SLAM.Shutdown()
  • time statistics
  • save trajectory

code

system

System::System()

- load ORB vocabulary (ORBVocabulary class, ORBvoc.txt)   
- create keyframe database (KeyFrameDatabase class, initialized with *mpVocabulary*) 
- create map  
- create drawers (used by map)   
- initialize tracking thread  
- initialize local mapping thread & launch 
- initialize loop closing thread & launch
- initialize Viewer thread & launch
- set pointers between threads
  • some important names:

    //---ORB vocabulary
    mpVocabulary;
    //---keyframe database
    mpKeyFrameDatabase;
    //---map
    mpMap;
    //---drawers
    mpFrameDrawer;
    mpMapDrawer;
    //---tracking
    mpTracker;
    //---local mapping
    mpLocalMapper;
    mptLocalMapping;
    //---loop closing
    mpLoopCloser;
    mptLoopClosing;
    //---viewer
    mpViewer;

System::TrackStereo

- check GUI options   
- mpTracker->GrabImageStereo

System::SaveTrajectoryTUM

- mpMap->GetAllKeyFrames()

- transformation (the first keyframe is the origin): *GetPoseInverse()*

- each frame pose is stored relative to its reference keyframe (*lRit*), with timestamp (*lT*) and tracking state (*lbL*)

- if reference keyframe was culled, traverse the spanning tree to get a suitable keyframe

Tracking

Tracking::GrabImageStereo

- RGB to Gray

- mCurrentFrame 
Frame(mImGray, imGrayRight, timestamp, mpORBextractorLeft, mpORBextractorRight, mpORBVocabulary, mK, mDistCoef, mbf, mThDepth);

- Track()

- return mTcw (camera pose W2C)

Frame

Frame::Frame

// stereo initialization    
- Frame ID 
- get scale level info (ORBextractor class)   
- ORB Extraction   
- threadLeft (Frame::ExtractORB > .join())   
- threadRight (Frame::ExtractORB > .join())   
- UndistortKeyPoints()   
- ComputeStereoMatches(): compute depths if matches   
- depth info: mvuRight & mvDepth  
- mvpMapPoints, mvOutlier

Frame::UndistortKeyPoints

- *N* feature points  
- cv::undistortPoints()   
- mvKeysUn: corrected keypoints // cf. mvKeys, mvKeysRight   
- **redundant** in the stereo case (input images are already rectified)

Frame::ComputeStereoMatches

- assign keypoints to row table // vRowIndices
- compute range of rows
- set limits for search //minD, maxD, minZ
- for each left keypoint search a match in the right
- SAD sliding-window match, then subpixel refinement by correlation (IF |deltaR| > 1, discard the match)
- matched points culling // mvuRight, mvDepth

Frame::UnprojectStereo

// backproject a keypoint (if stereo/depth info available) into 3D world coordinates.

// Rotation, translation & camera center
mRcw;   // rotation from world to camera
mtcw;   // translation from world to camera
mRwc;   // rotation from camera to world
mOw;    // camera center (translation from camera to world)

Tracking

//      |fx 0   cx|
//  K = |0  fy  cy| 
//      |0  0   1 |
//  distortion coefficients: [k1 k2 p1 p2 k3]
//  mThDepth: threshold to classify close/far points

Tracking::Track()

IF NOT_INITIALIZED
- StereoInitialization()
ELSE
- CheckReplacedInLastFrame()
- IF mVelocity.empty()
    TrackReferenceKeyFrame()
  ELSE
    TrackWithMotionModel()

Tracking::StereoInitialization

IF N > 500
- set pose to the origin
- create keyframe
- insert keyframe in the map
- create MapPoints and associate them to the keyframe 
// Frame::UnprojectStereo()
// MapPoint::ComputeDistinctiveDescriptors() > find the best descriptor for the MapPoint, using the median of descriptor distances
// MapPoint::UpdateNormalAndDepth() > update observations: mNormalVector & mfMaxDistance, mfMinDistance

Tracking::TrackReferenceKeyFrame()

- mCurrentFrame.ComputeBoW();
- ORBmatcher.SearchByBoW()
- initialize pose by *mLastFrame*
- Optimizer::PoseOptimization(&mCurrentFrame)
- discard outliers

Tracking::TrackWithMotionModel

- Tracking::UpdateLastFrame
    - Const Velocity Model, estimate current pose
    - project points seen in previous frame
- using the constant velocity model, track the MapPoints seen in the last frame
    - IF nmatches < 20, use a wider search window (2*th) 
    //---ORBmatcher.SearchByProjection
- optimize frame pose with all matches 
// Optimizer::PoseOptimization(&mCurrentFrame)
- discard outliers in mvpMapPoints (feature >> MapPoint associations)

Tracking::UpdateLastFrame()

- update pose according to reference keyframe
// mlRelativeFramePoses: store the reference keyframe for each frame and its relative transformation
- IF stereo OR RGBD
    - sort points by measured depth in ascending order
    - insert temporary MapPoints for the closest points
    - IF nPoints > 100, break

Tracking::Relocalization()

Relocalization is performed when tracking is lost
- compute the BoW vector
- mpKeyFrameDB->DetectRelocalizationCandidates(&mCurrentFrame)
- ORB matching with each candidate 
    - IF enough matches, set up PnP solver
- perform P4P RANSAC iterations until a camera pose is supported by enough inliers
- Optimizer::PoseOptimization(&mCurrentFrame)
- IF too few inliers, search by projection & optimize again

ORBmatcher

ORBmatcher::SearchByProjection

SearchByProjection(currentFrame, lastFrame, th, bMono)

1. project MapPoints in the last frame
2. match & culling

KeyFrameDatabase

KeyFrameDatabase::DetectRelocalizationCandidates

find similar keyframes in relocalization
- search all keyframes that share a word with current frame
- find keyframes that share enough words 
    Th: minCommonWords = maxCommonWords*0.8f
- compute similarity score
- accumulate score by covisibility
    One group: keyframe + GetBestCovisibilityKeyFrames(10)
    >> bestAccScore & minScoreToRetain = 0.75f*bestAccScore
    from each group with accScore > minScoreToRetain, return the member with the highest score