Today we rely on GPS and Google Maps to tell us how to get from point A to point B. Similarly, Simultaneous Localization and Mapping (SLAM) is how autonomous vehicles find their way in the world. In this article, we will discuss how autonomous vehicles use SLAM to navigate the environment around them.
What is SLAM?
Simultaneous Localization and Mapping involves two things: mapping and localization. The purpose of SLAM is to build a globally consistent representation of the environment. Let’s briefly cover the two main aspects.
Why Mapping is Important
Currently available standard-definition (SD) or ADAS maps are used in infotainment systems to assist human drivers with turn-by-turn navigation. However, the accuracy and data quality of these maps do not meet the requirements of autonomous vehicles (AVs).
Maps built for AV purposes are called high-definition (HD) maps. These maps are centimeter-level accurate, which is orders of magnitude higher than what SD maps provide. HD maps are highly accurate representations of the road, with features including lane geometry, traffic signs, intersections, live map updates, and more.
They also provide visibility beyond the range of on-board sensors like LiDAR and cameras. Companies like HERE, TomTom, and Ushr provide HD Maps as a service today.
Why Localization is Important
Localization is the answer to ‘Where am I?’ on the map. It allows AVs to determine their position and orientation. Once a vehicle is localized, it can work out its relationship to other elements on the map: for example, the distance to the next traffic light, or which exit is coming up next. Most importantly, it can answer whether the car is within its desired lane.
Localization feeds into the rest of the AV technology stack like decision making and motion-planning.
SLAM is considered a chicken-and-egg problem: localization and mapping need to be done together to get an updated estimate of the map along with an updated location of the vehicle. A more accurate map makes it easier for ADAS developers and engineers to localize the vehicle, and a better position estimate in turn improves the map.
Autonomous vehicles can find themselves driving in places where maps do not yet exist, or through areas they have never visited before. Building and maintaining an HD map of the entire world is also a daunting task. Using SLAM, a vehicle can autonomously navigate an unknown environment by building a map and simultaneously localizing itself on that map.
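As a toy illustration of this joint estimation, the sketch below sets up a one-dimensional graph-SLAM problem in Python with NumPy: a vehicle drives past a single landmark, and solving all the measurement constraints together recovers both the trajectory and the map. The measurement values and "true" positions are invented for illustration; this is a sketch of the idea, not a production system.

```python
import numpy as np

# Toy 1D graph-SLAM: a vehicle moves along a line past one landmark.
# Unknowns: poses x1, x2 and landmark position l (pose x0 is fixed at 0
# to anchor the map). True values used to fake the data: x1=1, x2=2, l=3.
# Each row of A expresses one relative measurement between unknowns.

# Columns correspond to [x1, x2, l].
A = np.array([
    [ 1.0,  0.0, 0.0],   # odometry x1 - x0      (measured 1.10)
    [-1.0,  1.0, 0.0],   # odometry x2 - x1      (measured 0.90)
    [ 0.0,  0.0, 1.0],   # landmark seen from x0 (measured 3.05)
    [-1.0,  0.0, 1.0],   # landmark seen from x1 (measured 2.00)
    [ 0.0, -1.0, 1.0],   # landmark seen from x2 (measured 1.05)
])
b = np.array([1.10, 0.90, 3.05, 2.00, 1.05])  # noisy measurements

# Solving all constraints jointly recovers map and trajectory together:
# that is the "simultaneous" in SLAM.
estimate, *_ = np.linalg.lstsq(A, b, rcond=None)
x1, x2, l = estimate
print(f"x1={x1:.2f}, x2={x2:.2f}, landmark={l:.2f}")
```

Note that no single measurement pins down any unknown exactly; only the joint least-squares solve over all constraints yields consistent estimates of both poses and the landmark.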
Why is SLAM Needed?
GPS can tell us roughly where we are in the world, but by itself it is unreliable, especially in urban areas with tall buildings. GPS accuracy depends on many factors, such as a clear view of the sky and the number of satellites in line of sight. Even when GPS can provide a coarse localization, SLAM can refine it into a more accurate estimate of the vehicle’s location.
Autonomous vehicles have a host of sensors to solve the ‘Where am I’ or the localization problem, which we discuss in the following section.
Methods of Implementation
The SLAM community has grown rapidly in the last couple of decades, enabling large-scale applications ranging from robotics to the self-driving car industry.
SLAM is implemented using multi-sensor fusion techniques. Sensors include (but are not limited to) light detection and ranging (LiDAR), an inertial navigation system (INS), the Global Positioning System (GPS), and high-definition (HD) maps.
The need for a variety of sensors stems from the challenges of SLAM. Sensor data can be very noisy, and each sensor has its own biases and error sources that degrade the combined estimate. Other challenges come from autonomous vehicles moving at high speeds, which limits the usable sensor types and adds algorithmic complexity. Most importantly, multiple sensors provide redundancy, which is required to meet AV safety standards.
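To see why fusing redundant sensors pays off, the sketch below combines two hypothetical range estimates by inverse-variance weighting, a standard fusion rule not tied to any particular AV stack. The sensor readings and variances are made-up numbers; the point is that the fused variance is lower than either sensor's alone.

```python
# Inverse-variance fusion of two noisy estimates of the same distance
# (hypothetical numbers): a coarse GPS fix and a tighter LiDAR-based
# estimate. Weighting each by 1/variance yields a fused estimate whose
# variance is lower than either input — the payoff of redundancy.

def fuse(z1, var1, z2, var2):
    """Combine two measurements of one quantity by inverse-variance weighting."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    fused = (w1 * z1 + w2 * z2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)
    return fused, fused_var

gps_pos, gps_var = 10.40, 4.00      # metres; GPS is coarse
lidar_pos, lidar_var = 10.05, 0.25  # LiDAR-based estimate is tighter

pos, var = fuse(gps_pos, gps_var, lidar_pos, lidar_var)
print(f"fused position = {pos:.2f} m, variance = {var:.3f}")
```

The fused result naturally sits closer to the more trustworthy sensor, and the same rule extends to any number of redundant measurements.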
Some well-known implementation techniques are the Kalman filter, the particle filter, graph-SLAM, and many more. These methods are based on principles of probability, which means SLAM only ever provides likelihoods: how likely it is that the car is within a five-centimeter radius of the traffic pole, or what the chances are that a pedestrian is in the way.
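As a minimal sketch of one such probabilistic method, the snippet below runs a one-dimensional Kalman filter over a few invented noisy position measurements. The motion model, noise values, and readings are all assumptions chosen for illustration, not a production localization filter.

```python
# Minimal 1D Kalman filter for vehicle position along a lane (a sketch,
# not a real AV filter). The vehicle is assumed to move at ~1 m/s; the
# "sensor" readings below are made-up noisy position measurements.

def kalman_step(x, P, z, velocity=1.0, dt=1.0, q=0.01, r=0.25):
    """One predict/update cycle: motion model, then measurement correction."""
    # Predict: propagate the state with the motion model; uncertainty grows.
    x = x + velocity * dt
    P = P + q
    # Update: blend in measurement z according to the Kalman gain K.
    K = P / (P + r)
    x = x + K * (z - x)
    P = (1.0 - K) * P
    return x, P

x, P = 0.0, 1.0                   # initial position estimate and variance
for z in [1.10, 1.90, 3.05]:      # hypothetical noisy measurements (metres)
    x, P = kalman_step(x, P, z)
print(f"position estimate = {x:.2f} m, variance = {P:.3f}")
```

Note that the filter never outputs a single certain position: it maintains an estimate and a variance, which is exactly the probabilistic answer described above.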
Many open-source projects, such as ROS, Apollo, and Autoware, provide partial or full implementations of the algorithms needed for SLAM across a variety of robotics applications.
What Does the Future Look Like?
Is SLAM a solved problem? That is still debated. The answer depends on the application, the performance requirements, and the challenges present in the environment. For example, slow-moving robots in confined environments are significantly easier to handle than fast-moving automobiles in highly dynamic environments. Across the industry, engineers are devising new frameworks for the development and validation of safe automated driving systems.
SLAM for autonomous vehicles still faces numerous challenges and continues to push the frontiers of research, not least because driving autonomously on public roads also raises ethical questions. The SLAM community is hard at work addressing these and other challenges in the interest of safer roads for us all.