Recent advances in methods for localising a device (smartphone, robot) with respect
to its environment make it possible to consider deploying augmented reality solutions and autonomous robots. RGB-D cameras are of particular interest in this context, since they directly acquire a depth map of the perceived scene.
The objective of this postdoctoral position is to develop a new SLAM (Simultaneous Localisation and Mapping) method relying on a depth sensor.
To obtain a solution that is robust and accurate while keeping CPU and memory consumption low, the depth image will be exploited through a direct and sparse approach. The resulting solution will then be combined with the "RGB SLAM Constrained to a CAD model" solution developed in our laboratory, finally yielding an "RGB-D SLAM Constrained to a CAD model".
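To give a rough idea of what "direct and sparse" means here, the sketch below (an illustration under our own assumptions, not the project's actual method) shows the core residual of a direct, sparse depth-based alignment: a small set of 3D points back-projected from one depth image is transformed by a candidate pose and compared against the depth observed at their reprojection in a second depth image. The intrinsics and pixel choices are hypothetical.

```python
import numpy as np

# Hypothetical pinhole intrinsics (fx, fy, cx, cy) for illustration only.
K = np.array([[525.0, 0.0, 319.5],
              [0.0, 525.0, 239.5],
              [0.0, 0.0, 1.0]])

def backproject(u, v, z, K):
    """Lift pixel (u, v) with depth z to a 3D point in the camera frame."""
    return z * np.linalg.inv(K) @ np.array([u, v, 1.0])

def depth_residuals(points, R, t, depth_b, K):
    """For each sparse 3D point from frame A, apply the pose guess (R, t),
    project into frame B, and return predicted-vs-observed depth errors.
    A direct method minimises such residuals over the pose, without
    feature matching."""
    res = []
    for p in points:
        q = R @ p + t                          # point expressed in frame B
        uv = K @ q
        u, v = uv[0] / uv[2], uv[1] / uv[2]    # projected pixel coordinates
        iu, iv = int(round(u)), int(round(v))
        if 0 <= iv < depth_b.shape[0] and 0 <= iu < depth_b.shape[1]:
            res.append(q[2] - depth_b[iv, iu])  # depth error at that pixel
    return np.array(res)

# Sanity check: identical frames with an identity pose give zero residuals.
depth = np.full((480, 640), 2.0)               # synthetic flat depth map (2 m)
pts = [backproject(u, v, depth[v, u], K) for u, v in [(100, 120), (320, 240), (500, 400)]]
r = depth_residuals(pts, np.eye(3), np.zeros(3), depth, K)
print(bool(np.allclose(r, 0.0)))  # True
```

In a full SLAM pipeline these residuals would be minimised over the pose (e.g. by Gauss-Newton on a Lie-algebra parameterisation); the sketch only shows the error term that a direct and sparse depth formulation evaluates.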