The Main Elements Of The Navigation Stack

By Leonid Bulyga, Robot Navigation System Developer at NTRLab

To function successfully, a robot needs to know its place in space; in other words, its exact location on the map.

The Navigation Stack, which includes SLAM, allows the robot to build a map, determine its position on it, and move around relying on its “senses.”

Let us consider in more detail what each element of the navigation stack is responsible for.



This scheme shows the main elements of the navigation stack.


move_base

This component carries out global and local route planning, obstacle avoidance, and the construction of local and global obstacle maps (costmaps). It also contains recovery behaviors for getting past obstacles. As can be seen from the figure, the main and only message expected from it is a velocity command: the linear and angular velocity, i.e., how quickly and in what direction the robot should move. To produce this command, you need:

  1. map_server
  2. sensor sources
  3. odometry source
  4. tf transforms
  5. AMCL
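What "linear and angular velocity" means in practice can be pictured with a minimal unicycle-model sketch in plain Python (no ROS dependencies; the time step and velocities are made-up values): integrating a velocity command over time yields the robot's pose.

```python
import math

def step(pose, v, w, dt):
    """Advance an (x, y, theta) pose by linear velocity v [m/s]
    and angular velocity w [rad/s] over dt seconds (unicycle model)."""
    x, y, theta = pose
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta = (theta + w * dt) % (2 * math.pi)
    return (x, y, theta)

# Drive straight for 2 s at 1 m/s, then turn in place by 90 degrees.
pose = (0.0, 0.0, 0.0)
for _ in range(20):
    pose = step(pose, v=1.0, w=0.0, dt=0.1)          # forward
for _ in range(10):
    pose = step(pose, v=0.0, w=math.pi / 2, dt=0.1)  # rotate
print(pose)
```

The navigation stack's job is to keep emitting such (v, w) pairs so that the integrated pose follows the planned route.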


map_server

The robot receives map readings from map_server, but the map itself still needs to be loaded. The most direct way is to draw it by hand, specifying the original dimensions and saving it in the correct format. The second way is to build the map using the navigation stack's SLAM package, gmapping.

Building a map is like walking in an unknown direction until the loop closes, while carefully looking around.
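A map on disk is typically a grayscale occupancy image plus a small YAML descriptor. A sketch of what map_server expects (file names and values here are illustrative):

```yaml
# office_map.yaml, an illustrative map descriptor for map_server
image: office_map.pgm        # occupancy image (white = free, black = occupied)
resolution: 0.05             # meters per pixel
origin: [-10.0, -10.0, 0.0]  # (x, y, yaw) of the lower-left pixel in the map frame
occupied_thresh: 0.65        # cells above this probability are treated as occupied
free_thresh: 0.196           # cells below this probability are treated as free
negate: 0                    # 1 would invert the white/black convention
```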

Sensor sources

Visual sensors can be simple or 360-degree lidars, a Kinect depth sensor, or special camera platforms like Google's Tango or ARCore, which allow you to get a point cloud.
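Whatever the sensor, the navigation stack ultimately consumes points in the robot's frame. A sketch of how a planar lidar scan (ranges at known angles) becomes 2D points, the raw material for costmaps and localization:

```python
import math

def scan_to_points(ranges, angle_min, angle_increment):
    """Convert a list of range readings into (x, y) points in the sensor frame."""
    points = []
    for i, r in enumerate(ranges):
        if math.isinf(r) or math.isnan(r):
            continue  # no return for this beam
        a = angle_min + i * angle_increment
        points.append((r * math.cos(a), r * math.sin(a)))
    return points

# Three beams, 1 m each, 90 degrees apart.
pts = scan_to_points([1.0, 1.0, 1.0], angle_min=0.0,
                     angle_increment=math.pi / 2)
print(pts)
```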

Odometry source

Not all robots can fly, but even those that can use an odometry source. The simplest odometer is mounted on a wheel, giving us the (relatively) exact distance traveled. If the robot has no odometer, visual odometry can be used instead.

tf transforms



tf transforms help the robot relate coordinate frames in space. There is always a base frame from which all the others are calculated, usually at the base of the robot. For example, knowing the global frame (our map) and the base frame, the robot understands at what height its visual sensor is located.

This is of major importance for the static and moving parts of the robot. It is usually configured once and changes only when the robot's configuration changes.
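The underlying machinery is just composition of rigid transforms. A hand-rolled 2D sketch (ROS's tf library does this in 3D with quaternions, but the idea is the same; frame names and offsets are illustrative):

```python
import math

def make_tf(x, y, theta):
    """Homogeneous transform of a child frame expressed in its parent frame."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, x],
            [s,  c, y],
            [0,  0, 1]]

def compose(a, b):
    """Matrix product a @ b: chains parent -> child transforms."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# map -> base_link: robot at (2, 1), facing along +x
map_to_base = make_tf(2.0, 1.0, 0.0)
# base_link -> laser: sensor mounted 0.2 m ahead of the robot center
base_to_laser = make_tf(0.2, 0.0, 0.0)

map_to_laser = compose(map_to_base, base_to_laser)
print(map_to_laser[0][2], map_to_laser[1][2])  # laser position in the map frame
```

Chaining transforms this way is how a sensor reading taken in the laser frame ends up as an obstacle at the right spot on the map.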


AMCL

“AMCL is a probabilistic localization system for a robot moving in 2D.” This localization system is based on statistical distributions and helps the robot cope with errors that accumulate from the odometer.

Using data from the visual sensor, AMCL compares the robot's position on the map with its real-world position and makes adjustments as necessary. For example, if the odometer reports that the robot drove (or flew) 2 meters when it actually traveled 2.2 meters, AMCL will correct the position on the map.
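The correction can be illustrated with a toy one-dimensional Monte Carlo localization step, the idea behind AMCL, heavily simplified (all numbers here are made up): particles predicted from noisy odometry are reweighted by a range measurement and resampled, pulling the estimate toward reality.

```python
import math
import random

random.seed(0)

TRUE_POS = 2.2           # the robot really moved 2.2 m
ODOM_REPORT = 2.0        # the odometer claims 2.0 m
WALL = 5.0               # position of a wall the range sensor sees
measured_range = WALL - TRUE_POS   # ideal sensor reading: 2.8 m

# Predict: spread particles around the odometry estimate.
particles = [ODOM_REPORT + random.gauss(0, 0.3) for _ in range(5000)]

# Update: weight each particle by how well it explains the measurement.
def weight(p, sigma=0.1):
    err = (WALL - p) - measured_range
    return math.exp(-err * err / (2 * sigma * sigma))

weights = [weight(p) for p in particles]

# Resample proportionally to weight; the mean is the corrected estimate.
particles = random.choices(particles, weights=weights, k=len(particles))
estimate = sum(particles) / len(particles)
print(round(estimate, 2))  # close to 2.2 rather than the odometer's 2.0
```

Real AMCL does the same thing with thousands of 3-DOF (x, y, theta) particles and a full laser-scan likelihood model.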

If there is no odometer in the robot's configuration, a SLAM approach can be applied: hector_mapping. Its main feature is the ability to estimate the robot's position without odometry data. It is an alternative to gmapping.

This presentation of the navigation stack can be called classic.
