# Tutorial 5

# Simultaneous Localization And Mapping

# Concept

This tutorial will focus less on the concept of SLAM and more on its application. SLAM stands for Simultaneous Localization And Mapping. In real-world robotics, localization is a pervasive problem and a prerequisite for proper navigation.

Noise is an unavoidable error that arises when accelerometer measurements are integrated to estimate position: small errors accumulate into unbounded drift, which is why inertial sensors alone make poor localization estimators. This phenomenon can be observed through an online demonstration provided by the University of Texas at Austin: https://amrl.cs.utexas.edu/interactive-particle-filters/

The essence of SLAM is to solve the localization problem by building a map out of lidar scans and then localizing by matching future scans against that map. Scan matching and the other internals of SLAM are beyond the scope of these tutorials, but they are explained wonderfully in UPenn's F1Tenth SLAM online lecture: https://www.youtube.com/watch?v=4QT_UDyw584&list=PL7rtKJAz_mPdQ6fdpDkis9WrARUINneLp&index=8

# SLAM Toolbox

For our uses, we'll be focusing on the application of SLAM using SLAM Toolbox, a ROS2 library that provides many solutions for robot localization and mapping. A demo on their GitHub shows the authors operating SLAM on a Roomba to build a map of a room while providing a near-exact position of the robot.

Our utility will look similar by the end of this tutorial. To begin, clone https://github.com/SteveMacenski/slam_toolbox into your src directory and then build your workspace. This library is quite extensive and will take a while to build; this is normal.
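A minimal sketch of the clone-and-build step, assuming a typical colcon workspace at ~/ros2_ws (adjust the path to match your own setup):

    # Clone SLAM Toolbox into the workspace's src directory
    cd ~/ros2_ws/src
    git clone https://github.com/SteveMacenski/slam_toolbox.git

    # Install its dependencies, then build from the workspace root
    cd ~/ros2_ws
    rosdep install --from-paths src --ignore-src -y
    colcon build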

In the meantime, navigate to and edit slam_toolbox/config/mapper_params_online_sync.yaml (or a similarly named file). "Online" refers to SLAM processing scans live as the robot runs, as it would in a competitive environment; "offline" SLAM instead processes recorded data after the fact, where it can use as much information as possible.

You can tune any of these parameters in the future, but the essentials for proper operation are the following:

    # ROS Parameters
    odom_frame: ego_racecar/odom
    map_frame: map
    base_frame: ego_racecar/base_link
    scan_topic: /scan
    ...
    mode: mapping  # localization or mapping
    ...
    max_laser_range: 50.0  # for rasterizing scans into the map
    ...

Once the edits are complete, build again, source your workspace, then run ros2 launch slam_toolbox online_sync_launch.py, as shown below. The SLAM library should launch, and if you head to your simulation in Foxglove Studio, several new 3D topics should appear.
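A minimal sketch of that sequence, assuming the ~/ros2_ws workspace from earlier (rebuilding installs your edited config alongside the launch files):

    # Rebuild so the edited config is installed, then source the workspace
    cd ~/ros2_ws
    colcon build
    source install/setup.bash

    # Launch synchronous online SLAM with the edited parameters
    ros2 launch slam_toolbox online_sync_launch.py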

After a few laps around the track, the robot should have built a decent map of the environment. To save the map or serialize it, check out the services listed at https://github.com/SteveMacenski/slam_toolbox/tree/ros2/srvs. In Foxglove Studio you can simply add a Call Service panel and execute any of these actions; the commands below show the command-line equivalent. This will be essential for race line generation in the future.
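For reference, a sketch of the equivalent calls from the command line, assuming the node runs under its default name so the services live beneath /slam_toolbox; the my_map filename is only an example:

    # Save the current occupancy grid as my_map.pgm / my_map.yaml
    ros2 service call /slam_toolbox/save_map slam_toolbox/srv/SaveMap \
        "{name: {data: 'my_map'}}"

    # Serialize the pose graph so it can be reloaded for localization later
    ros2 service call /slam_toolbox/serialize_map slam_toolbox/srv/SerializePoseGraph \
        "{filename: 'my_map'}"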

Over time, you may notice that the library struggles to build a perfect map of our circuit. This can be attributed to the nature of SLAM and our track's lack of distinctive features, which leaves scan matching little geometry to lock onto. One solution would be to edit the track and add features (small extrusions, dents, etc.) throughout it for the vehicle to recognize. However, this can typically be avoided, because SLAM Toolbox can be run in localization mode to correct for the imperfections that arise from the map's inaccuracy. In a real-world environment, where we are not provided perfect odometry, this would be crucial.

Lastly, another notable term in the field of autonomous navigation is particle filtering: the technique of localizing a vehicle within a known, serialized map. SLAM Toolbox provides its own localization mode for the same purpose. To use it, change the configuration to mode: localization, run the library, deserialize the saved map using the service shown below, and exploit the beauty that is localization.
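A sketch of that deserialization call, under the same /slam_toolbox naming assumption; match_type: 3 corresponds to the LOCALIZE_AT_POSE constant in DeserializePoseGraph.srv, and initial_pose is the vehicle's approximate starting position on the map:

    # Load the serialized pose graph and begin localizing at the given pose
    ros2 service call /slam_toolbox/deserialize_map slam_toolbox/srv/DeserializePoseGraph \
        "{filename: 'my_map', match_type: 3, initial_pose: {x: 0.0, y: 0.0, theta: 0.0}}"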

# Closing

Although SLAM doesn't necessarily help us in the development of virtual autonomous vehicles, it is a fundamental aspect of autonomous robotics in the real world and an unavoidable subject in racing competitions. It is also critical for generating racing lines, which will be the subject of the next tutorial.