
Blog Post #3 (SLAM)


SLAM

Functionality

The main goal of our part of the project was to do a deep dive into SLAM and then attempt to implement parts of it. We spent time figuring out how the Rao-Blackwellized particle filter, the algorithm behind the most popular SLAM implementation, GMapping, works. We then began dissecting GMapping and implementing parts of it in stages.

We based our final mapper on GMapping's own mapper by reading its source code and working out which data structures it uses and how to use them. Our implementation is based on an occupancy grid, where each cell has a value representing whether the space is empty, a wall, or unknown. Using trigonometry, each angle of laser scan data maps to a line in the occupancy grid. We fill the cells along this line with 'empty' until we reach a wall or the maximum distance at which we trust the laser scan data. The scan registers that endpoint as a wall, all previous cells along the line as empty, and all cells beyond it along that line as unknown.
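As a rough illustration of this step, the Python sketch below walks each beam outward from the robot and fills cells along it. The names (`scan_to_grid`, the cell constants) are illustrative rather than our exact code, and `resolution` here means the cell edge length in meters rather than the cell area used elsewhere in this post.

```python
import math
import numpy as np

# Illustrative cell values; the constants in our actual code may differ.
UNKNOWN, EMPTY, WALL = -1, 0, 100

def scan_to_grid(ranges, angle_min, angle_increment, heading, grid, resolution, max_range):
    """Trace each laser beam into a robot-centered occupancy grid.

    `grid` is a 2D numpy array pre-filled with UNKNOWN; the robot sits at the
    center cell, `heading` is its yaw in radians, and `resolution` is the cell
    edge length in meters.
    """
    rows, cols = grid.shape
    r0, c0 = rows // 2, cols // 2              # robot at the center of the new map

    for i, dist in enumerate(ranges):
        hit = 0.0 < dist < max_range           # a return inside the trusted range
        dist = min(dist, max_range)
        angle = heading + angle_min + i * angle_increment

        # Walk outward along the beam, one cell width at a time.
        steps = int(dist / resolution)
        for s in range(steps + 1):
            d = s * resolution
            row = r0 + int(round(d * math.sin(angle) / resolution))
            col = c0 + int(round(d * math.cos(angle) / resolution))
            if not (0 <= row < rows and 0 <= col < cols):
                break                          # past the edge of this map
            if s == steps and hit:
                grid[row, col] = WALL          # beam endpoint registers as a wall
            elif grid[row, col] == UNKNOWN:
                grid[row, col] = EMPTY         # cells before the endpoint are free

# Example: a 10 m x 10 m robot-centered map at 5 cm resolution.
grid = np.full((200, 200), UNKNOWN)
```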

The next major piece we implemented after building the initial map is stitching the existing map together with maps formed from new laser scan data. To do this, the robot's position is compared to the map origin to determine how far the new scan is translated relative to the original map. If needed, the existing map is then expanded to accommodate the new scan. The new scan is then transcribed onto the original map, with a priority check resolving any overlapping cells. Rather than a probabilistic algorithm, which would have come with our next iteration, we used a conservative priority algorithm so that our map would be as safe as possible for our planning team. Within this priority algorithm, walls have the highest priority: when stitching two maps together, walls overwrite any conflicting data. Unknown cells have the lowest priority, so they are overwritten by everything in the event of a conflict. We added only one other cell classification: empty (safe to traverse). In theory we could have added an arbitrary number of other classifications, but we did not find any we deemed especially valuable.
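The priority rule itself reduces to a small comparison per cell. A minimal sketch, assuming the same illustrative cell constants as above:

```python
UNKNOWN, EMPTY, WALL = -1, 0, 100   # same illustrative constants as above

# Walls outrank empty cells, and empty cells outrank unknown cells.
PRIORITY = {WALL: 2, EMPTY: 1, UNKNOWN: 0}

def merge_cells(a, b):
    """Conservative stitch rule: keep whichever of the two values has higher priority."""
    return a if PRIORITY[a] >= PRIORITY[b] else b
```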

At this point, we added headers to the occupancy grid and were able to pass it to the planning team as a ROS-compatible occupancy grid served from a map server. We also found it valuable to save the maps as png files (as opposed to the pgm files GMapping produces) for easier real-time viewing on our end.
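For reference, wrapping the grid in a standard nav_msgs/OccupancyGrid message and dumping it as a png might look roughly like the sketch below (using rospy and OpenCV; the function names and the exact cell-value conventions are assumptions, not our exact code):

```python
import numpy as np
import rospy
import cv2                                   # used only to dump the map as a png
from nav_msgs.msg import OccupancyGrid

def publish_grid(grid, resolution, origin_xy, publisher):
    """Wrap a 2D numpy occupancy array in a nav_msgs/OccupancyGrid and publish it."""
    msg = OccupancyGrid()
    msg.header.frame_id = "map"
    msg.header.stamp = rospy.Time.now()
    msg.info.resolution = resolution                       # meters per cell edge
    msg.info.height, msg.info.width = grid.shape
    msg.info.origin.position.x, msg.info.origin.position.y = origin_xy
    msg.data = grid.astype(np.int8).flatten().tolist()     # row-major, -1/0/100
    publisher.publish(msg)

def save_png(grid, path):
    """Save the grid as a grayscale png: unknown=gray, empty=white, walls=black."""
    img = np.full(grid.shape, 127, dtype=np.uint8)
    img[grid == 0] = 255
    img[grid == 100] = 0
    cv2.imwrite(path, img)

# Typical publisher setup inside a node:
# pub = rospy.Publisher("map", OccupancyGrid, queue_size=1, latch=True)
```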

Architecture

Our second and final iteration of SLAM consists of two major classes, Slammer and Map, as well as numerous helper functions.

The Slammer class acts as the run-time manager: it contains the ROS node and is responsible for all subscribers and publishers. It also stores the current, most up-to-date map and, whenever a flag requesting new laser scan data is set and that data arrives, stitches this map together with a newly created map of the current position.

Meanwhile, the Map class not only stores the occupancy grid data but also contains several functions to modify itself, such as bound_check(), which increases the size of the map when stitching it together with a new map, and stitch() itself, which outputs a new map combining the two input maps using our conservative priority algorithm. It also contains show_map(), which opens a window displaying the current map and saves an image of that map for easy storage and later use.
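In outline, the Map class's interface looks something like the sketch below; the signatures and attribute names are illustrative, and the real class carries more state and detail:

```python
class Map(object):
    """Sketch of the Map class's responsibilities (method bodies elided)."""

    def __init__(self, grid, resolution, origin):
        self.grid = grid              # 2D occupancy array (unknown/empty/wall)
        self.resolution = resolution  # square meters represented by each cell
        self.origin = origin          # robot pose the map is referenced to

    def bound_check(self, other):
        """Return the size a stitched map needs in order to hold both maps."""

    def stitch(self, other, size):
        """Combine the two maps into a new Map of `size` using the priority rule."""

    def show_map(self, path):
        """Display the current map in a window and save it as a png."""
```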

The typical process of creating a map and integrating that new map data into our current overall map involves the following steps (a code sketch of the full cycle follows the list):

  1. map_from_scan(): Called by the Slammer when it receives new laser scan data and wants to create a new map to integrate into its current map. It takes four arguments: the new laser scan data, the current estimated pose of the robot (used as the origin of the new map), the resolution of the new map (the number of square meters each cell within the occupancy grid will represent), and the maximum distance at which we trust the laser scan data. Using trigonometry, we generate the new occupancy grid as described earlier and assign it to a new Map object. Before we can populate the occupancy grid, however, we must create a new Map object and an array to store this data, which is where create_empty_map() comes in.
  2. create_empty_map(): Called from within map_from_scan(), this function creates an M x N array using the size and resolution of the map that map_from_scan() wants to create, and fills it entirely with unknown cells.
  3. bound_check(): After the empty Map object is created inside create_empty_map() and populated by map_from_scan(), the Slammer object has its new map. The next step is stitching its old map with this new one to create a fully up-to-date map. Before we can stitch the two maps together, however, we must resize one of them so it is large enough to hold both the old and new data. bound_check() is called for this; it uses the sizes of both maps and their relative origins to determine the exact size the stitched map should be.
  4. stitch(): Finally, we can stitch the two maps together. We first instantiate a new empty 2D array of the size determined by bound_check(), using create_empty_map(). We then populate this array in a similar manner to map_from_scan(), using our priority algorithm. Every cell in the new 2D array is checked for conflicting information from the two maps being stitched. If both maps agree on the cell, or neither knows about it, we assign the appropriate value. Where they conflict, we use our priority algorithm to determine the best conservative estimate for the contents of that cell. After iterating through every cell to create the new, fully updated 2D array, we assign the Map object a new origin point matching the original robot pose so that we have a good reference point for stitching future maps. We then assign this new 2D array as the Slammer object's official map.
  5. show_map(): Lastly, show_map() is called by the Slammer object to both display the current map and save it as a png image file.
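Putting the five steps together, the Slammer's update cycle looks roughly like the skeleton below. This is a sketch only: map_from_scan() is the function from step 1, and the attribute and method names are illustrative rather than our exact code.

```python
class Slammer(object):
    """Illustrative skeleton of the Slammer's update cycle (details elided)."""

    def scan_callback(self, scan_msg, pose):
        if not self.need_new_scan:      # flag raised whenever a fresh map is wanted
            return

        # Steps 1-2: build a robot-centered map from the fresh laser scan.
        new_map = map_from_scan(scan_msg, pose, self.resolution, self.max_range)

        # Step 3: work out how large the stitched map must be to hold both maps.
        new_size = self.map.bound_check(new_map)

        # Step 4: merge the old and new maps with the conservative priority rule.
        self.map = self.map.stitch(new_map, new_size)

        # Step 5: display the result and save it as a png.
        self.map.show_map("latest_map.png")
        self.need_new_scan = False
```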

Iteration and Results

In testing and iterating on our algorithm, one of the most troubling issues was the uncertainty of the robot's laser scan data and how little we could actually trust it. To best show this issue, we placed the robot in a box and took a single laser scan at a high resolution.

Single laser scan in a box

As this image makes apparent, the laser scan misses a significant portion of the walls around it. What makes this issue worse is how consistent it is: leaving the robot in the same place and continuing to take data does not change the position of, nor fix, the missed wall scans. The only way to correct for this is to take new data at a new pose, by translating, rotating, or both, and stitch together the maps created at the two locations.

Keeping this issue in mind, we had another major decision to make: the resolution at which we would take and store data. The higher the resolution of the map, the better, higher-fidelity data we would have and the fewer errors we would have to correct for later. However, high-resolution data comes with its own issues, chiefly computational complexity. We often ran into situations where we could not process the laser scan data and create a map fast enough, so we were asking for new data before we had finished working through the previous data. Through trial and error, we found what we believe to be a good middle ground, where each cell in the occupancy grid represents 1/400th of a square meter. The map of the robot in the box above was taken with each cell representing 1/10000th of a square meter. The map below is a good example of a very low-resolution map.
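To put those resolutions in perspective, the snippet below converts each cell area into its edge length and, for an assumed 10 m x 10 m workspace (a hypothetical size, chosen only for illustration), the number of cells the grid would need, which is what drives the computational cost:

```python
# Cell edge length implied by each resolution we tried, and how many cells
# a hypothetical 10 m x 10 m area would need at that resolution.
for cell_area in (1 / 10000.0, 1 / 400.0, 1 / 100.0):  # square meters per cell
    edge_cm = 100 * cell_area ** 0.5                   # 1 cm, 5 cm, 10 cm
    cells = round(10.0 * 10.0 / cell_area)             # 1,000,000 / 40,000 / 10,000
    print("{:.0f} cm cells -> {:.0f} total cells".format(edge_cm, cells))
```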

Each cell represents 1/100th of a square meter

In this example of low-resolution data, we were pleasantly surprised by the quality of the data. The environment we placed the robot in was relatively complex, yet the walls are very apparent and reasonably accurate. The problem with low-resolution data like this is that a lot of information is lost in its creation. Here, for example, hundreds of laser scan data points overlapped because of the low resolution. Each cell is either safe to traverse or it is not; a cell cannot be partially safe, because without finer resolution we cannot determine which parts of the cell we can and cannot travel into and must assume the entire cell is unsafe. Any information about which parts of these cells are safe and which are not is completely lost.

Below are some examples of maps we created at our final resolution. They give a significant amount of information and are overall very accurate.

Exploring the AC109 Entrance

Exploring the Machine Shop Entrance