Blog Post #2 (SLAM)
Coming out of last week with a strong understanding and implementation of GMapping, this week we began our second iteration, in which we are attempting to recreate and implement parts of GMapping ourselves. We started by looking at the two main components of SLAM, localization and mapping, to determine which part we should dive deeper into for this iteration. We decided to dive into mapping for a few reasons, which we'll get into below.
Localization is a hard problem to begin with; we've spent weeks of this class attempting to implement it on a known map. The problem becomes substantially more challenging when combined with an unknown, probabilistic map, as we're attempting to do. Our research in the previous week showed that GMapping uses a Rao-Blackwellized particle filter, in which each particle stores its own probabilistic map that is updated every time new sensor readings arrive, weighted according to the likelihood of those readings.
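To make that idea concrete, here's a minimal sketch of what one Rao-Blackwellized particle carries and how it updates on each scan. This is our own illustration, not GMapping's actual code: the class name, noise magnitudes, and log-odds increments are all made up, and the scan likelihood and hit/miss cell lists are assumed to be computed by the caller.

```python
import numpy as np

class RBParticle:
    """One Rao-Blackwellized particle: a pose hypothesis that carries
    its own occupancy-grid map. Names and numbers here are illustrative,
    not GMapping's actual implementation."""

    def __init__(self, grid_shape=(200, 200)):
        self.pose = np.zeros(3)               # (x, y, theta)
        self.weight = 1.0                     # importance weight
        self.log_odds = np.zeros(grid_shape)  # this particle's private map

    def step(self, odom_delta, hit_cells, miss_cells, scan_likelihood):
        """Propagate the pose, reweight, and fold one scan into the map."""
        # Motion update: apply odometry plus sampled noise, so each
        # particle ends up with a slightly different pose hypothesis.
        self.pose += odom_delta + np.random.normal(scale=[0.02, 0.02, 0.01])
        # Measurement update: scale the weight by how likely the scan is
        # under this particle's current map (computed by the caller).
        self.weight *= scan_likelihood
        # Map update: raise log-odds where beams ended, lower where they passed.
        for r, c in hit_cells:
            self.log_odds[r, c] += 0.9
        for r, c in miss_cells:
            self.log_odds[r, c] -= 0.4
```

The key point is that the map lives inside the particle: every pose hypothesis maintains its own map, which is what makes the filter expensive but lets it recover from localization errors.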
Since we already have experience working with a particle filter, and this algorithm requires a working mapping algorithm anyway, we've decided to focus on mapping for now and make the Rao-Blackwellized particle filter implementation a stretch goal for once mapping is finished.
This is where the majority of our effort has been focused. We've decided to implement a mapping algorithm under the assumption that our sensor readings and odometry are perfectly accurate, meaning we can disregard the particle filter for now, since we will "know" exactly where we are at all times. We have developed this algorithm with probabilistic extensions in mind, so that if time allows, adding simultaneous localization will not be too difficult.
The algorithm we are using for mapping stems from GMapping's algorithm, but assumes that the probabilities of the robot's location and its sensor readings are always 1. Every time we receive sensor readings, we create a local map at a fixed resolution with its origin at the robot's current pose. We then stitch this map together with the previously stored map and resolve any "merge conflicts", cells where the two maps disagree. Such conflicts come up fairly often and are usually handled probabilistically, but since our implementation assumes the sensors are absolutely correct, we handle each case with a priority system instead.
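As a sketch of the first step, here is roughly how a single scan could be rasterized into such a local grid. The function name, default resolution, and grid size are our own illustrative choices, and it assumes the scan arrives as parallel arrays of ranges and bearing angles in the robot's frame:

```python
import numpy as np

WALL, EMPTY, UNKNOWN = 1, 0, -1

def scan_to_local_map(ranges, angles, resolution=0.05, size=201):
    """Rasterize one laser scan into a grid centered on the robot."""
    grid = np.full((size, size), UNKNOWN, dtype=np.int8)
    center = size // 2
    for rng, ang in zip(ranges, angles):
        if not np.isfinite(rng):
            continue  # this beam returned nothing
        n_cells = int(rng / resolution)
        # Every cell the beam passes through is observed as empty...
        for step in range(n_cells):
            col = center + int(step * np.cos(ang))
            row = center + int(step * np.sin(ang))
            if 0 <= row < size and 0 <= col < size:
                grid[row, col] = EMPTY
        # ...and the cell where the beam ended is observed as a wall.
        col = center + int(n_cells * np.cos(ang))
        row = center + int(n_cells * np.sin(ang))
        if 0 <= row < size and 0 <= col < size:
            grid[row, col] = WALL
    return grid
```

Stitching then requires transforming this local grid by the robot's pose into the stored map's frame before the merge described below.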
When stitching two maps together, we often run into merge conflicts, where the same cell in both maps holds conflicting data. Currently we handle this with a very conservative, safe metric: priority! In order of priority, each cell of our map holds one of three values: Wall (1), Empty (0), or Unknown (-1). This means that if two conflicting sensor readings for the same cell say "Wall" and "Empty", the map will assume a wall lives there. This algorithm seems to work reasonably well, and generates a decently accurate map, with perhaps a few too many wall readings.
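One convenient consequence of this encoding is that the priority rule collapses into a single elementwise maximum. A minimal sketch, assuming both grids are already expressed in the same frame:

```python
import numpy as np

def merge_maps(stored, incoming):
    """Priority merge: Wall (1) > Empty (0) > Unknown (-1).
    Because the priority order matches the numeric order, the whole
    rule reduces to an elementwise maximum."""
    return np.maximum(stored, incoming)
```

Encoding the cell states as increasing integers in priority order is what makes this a single vectorized operation rather than a cell-by-cell case analysis.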
Our current implementation of mapping seems to work well. The sensor readings and wheel encoders are accurate enough that the resulting map looks reasonable. As we drive the robot around, however, we have noticed some drift in the odometry readings that a particle filter could correct. Additionally, we do end up with a few too many walls, and walls tend to come out thicker than they are in reality, probably because of our conservative 'walls take priority' rule. As we move toward a probabilistic approach, these are the issues we will be looking to solve.