Typo Fixes and figure for application to new robots
DanielChaseButterfield committed Jan 4, 2025
1 parent 23e878d commit 288b40e
Showing 4 changed files with 9 additions and 6 deletions.
5 changes: 4 additions & 1 deletion README.md
Original file line number Diff line number Diff line change
@@ -1,5 +1,5 @@
# MI-HGNN for contact estimation/classification
This repository implements a Morphology-Inspired Heterogeneous Graph Neural Network (MI-HGNN) for estimating contact information on the feet of a quadruped robot. For more details, see our publication "[MI-HGNN: Morphology-Informed Heterogeneous Graph Neural Network for Legged Robot Contact Perception](https://arxiv.org/abs/2409.11146)".
This repository implements a Morphology-Inspired Heterogeneous Graph Neural Network (MI-HGNN) for estimating contact information on the feet of a quadruped robot. For more details, see our publication "[MI-HGNN: Morphology-Informed Heterogeneous Graph Neural Network for Legged Robot Contact Perception](https://arxiv.org/abs/2409.11146)". Additionally, it can be applied to a variety of robot structures and datasets!

![Figure 2](paper/website_images/banner_image.png)

@@ -40,6 +40,8 @@ We provide code for replicating the exact experiments in our paper and provide f

Although our paper's scope was limited to applying the MI-HGNN to quadruped robots for contact perception, it can easily be applied to other multi-body dynamical systems and to other tasks/datasets by following the steps below:

<img src="paper/website_images/MI-HGNN Potential Applications.png" alt="MI-HGNN Potential Applications" width="800">

1. Add new URDF files for your robots by following the instructions in `urdf_files/README.md`.
2. Incorporate your custom dataset using our `FlexibleDataset` class and starter `CustomDatasetTemplate.py` file by following the instructions at `src/mi_hgnn/datasets_py/README.md`.
3. After making your changes, rebuild the library following the instructions in [#Installation](#installation). To make sure that your changes haven't broken critical functionality, run the test cases with the command `python -m unittest`.

We've designed the library to be easily applicable to a variety of datasets and robots, and have provided a variety of customization options in training, dataset creation, and logging. We're excited to see everything you can do with the MI-HGNN!


### Simulated A1 Dataset

To evaluate the performance of our model on GRF estimation, we generated our own simulated GRF dataset, which we now contribute to the community as well. We recorded proprioceptive sensor data and the corresponding ground-truth GRFs by operating an A1 robot in the [Quad-SDK](https://github.com/lunarlab-gatech/quad_sdk_fork) simulator. In total, our dataset comprises 530,779 synchronized data samples with a variety of friction conditions, terrains, and speeds. All of the different sequences are outlined in the table below:
Binary file not shown.
2 changes: 1 addition & 1 deletion src/mi_hgnn/datasets_py/CustomDatasetTemplate.py
Original file line number Diff line number Diff line change
@@ -101,7 +101,7 @@ class CustomDataset_sequence1(CustomDataset):
'/file/d/' and '/view?usp=sharing'. Take this string, and paste it as the first return argument below.
"""
def get_file_id_and_loc(self):
return "17h4kMUKMymG_GzTZTMHPgj-ImDKZMg3R", "Google"
return "<Your_String_Here>", "Google"
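The docstring above describes manually copying the id out of a Google Drive share link. As a convenience, a small helper (not part of the template, purely illustrative) can automate that slicing, assuming the standard `/file/d/<id>/view?usp=sharing` link format:

```python
def extract_drive_file_id(share_link: str) -> str:
    """Extract the Google Drive file id from a share link of the form
    https://drive.google.com/file/d/<id>/view?usp=sharing."""
    start = share_link.index("/file/d/") + len("/file/d/")
    end = share_link.index("/view", start)
    return share_link[start:end]
```

For example, the sequence-1 link used above would yield `"17h4kMUKMymG_GzTZTMHPgj-ImDKZMg3R"`, the string returned by `get_file_id_and_loc()`.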

class CustomDataset_sequence2(CustomDataset):
"""
8 changes: 4 additions & 4 deletions src/mi_hgnn/datasets_py/README.md
Original file line number Diff line number Diff line change
@@ -36,14 +36,14 @@ Next, scroll down to the bottom of the file where it says `DATASET SEQUENCES`. A

This is a clean way to handle data loading, as it allows the user to later combine different sequences as they'd like with the `torch.utils.data.ConcatDataset` function (see `research/train_classification_sample_eff.py` for an example). Defining these classes also means that training an MI-HGNN model on a different computer doesn't require the user to manually download any datasets, as `FlexibleDataset` will do it for you.
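The concatenation pattern can be sketched as follows. Since the real sequence classes trigger downloads, this sketch uses a hypothetical stand-in `Dataset` subclass in place of classes like `CustomDataset_sequence1`:

```python
from torch.utils.data import Dataset, ConcatDataset

class ToySequence(Dataset):
    """Stand-in for a per-sequence dataset class (hypothetical)."""
    def __init__(self, samples):
        self.samples = samples

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        return self.samples[idx]

# Combine two sequences into one dataset; indices run through seq1, then seq2.
seq1 = ToySequence([0, 1, 2])
seq2 = ToySequence([3, 4])
combined = ConcatDataset([seq1, seq2])
# combined behaves like a single dataset of length 5
```

The same `ConcatDataset([...])` call works on any mix of sequence classes you define, which is what makes the one-class-per-sequence layout convenient.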

Also, when the files are downloaded, they will be renamed to the value provided by `get_downloaded_dataset_file_name()`. Overwrite this function so that the file extension is correct `.mat` for a Matlab file, `.bag` for a ROSbag file, etc.
Also, when the files are downloaded, they will be renamed to the value provided by `get_downloaded_dataset_file_name()`. Overwrite this function so that the file extension is correct (`.mat` for a Matlab file, `.bag` for a ROSbag file, etc).
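A minimal override might look like this (the class and file names are illustrative; the real class would extend `FlexibleDataset`):

```python
class CustomRosbagDataset:
    """Stand-in sketch; the real class would extend FlexibleDataset."""

    def get_downloaded_dataset_file_name(self):
        # The raw download is a ROSbag in this hypothetical example, so the
        # extension must be .bag; use .mat for a Matlab file, etc.
        return "data.bag"
```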

### Implementing Data Processing
Now that you can load your dataset files, you need to implement processing. This step should be implemented in `process()`, and should convert the file from its current format into a `.mat` file for fast training speeds. You'll also need to provide code for extracting the number of dataset entries in this sequence, which will be saved into a `.txt` file for future use.

Implement this function. You can see `quadSDKDataset.py` for an example of converting a ROSbag file into a .mat file.
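The shape of this step can be sketched as below. Parsing the raw file (e.g. reading a ROSbag) is omitted; the sketch assumes the samples are already in memory as a list of dicts, and the function name, arguments, and keys are all hypothetical, not the real `process()` signature:

```python
import numpy as np
from scipy.io import savemat

def process(samples, mat_path, txt_path):
    """Hypothetical sketch of process(): write parsed samples to a .mat file
    and record the number of dataset entries in a .txt file."""
    # Stack each per-timestep measurement into one array per key.
    mat_data = {key: np.asarray([s[key] for s in samples]) for key in samples[0]}
    savemat(mat_path, mat_data)
    # Save the entry count for later reuse without reprocessing.
    with open(txt_path, "w") as f:
        f.write(str(len(samples)))
```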

### Loading data for use with FlexibleDataset
### Implementing Data Loading
Now that data is loaded and processed, you can implement the function for opening the .mat file and extracting the relevant dataset sequence.
This should be done in `load_data_at_dataset_seq()`. The .mat file you saved in the last step will now be available at `self.mat_data` for easy access.
Note that this function will also need to use the `self.history_length` parameter to support training with a history of measurements. See `CustomDatasetTemplate.py` for details, and see `LinTzuYaunDataset.py` for a proper implementation of this function.
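The history handling can be sketched as a simple slicing helper. This is an assumption about the windowing convention (a window of `history_length` consecutive measurements ending at the requested index), not the actual `load_data_at_dataset_seq()` implementation; see `LinTzuYaunDataset.py` for the real one:

```python
import numpy as np

def load_joint_history(mat_array, idx, history_length):
    """Hypothetical sketch: return the window of `history_length` consecutive
    measurements ending at index `idx` (inclusive), assuming
    idx >= history_length - 1 so the full window exists."""
    return mat_array[idx - history_length + 1 : idx + 1]
```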
@@ -58,6 +58,6 @@ Since it's easy for the user to provide the wrong URDF file for a dataset sequence
This name should be pasted into `get_expected_urdf_name()`.

### Facilitating Data Sorting
Finally, the last step is to tell `FlexibleDataset` what order your dataset data is in. For example, which index in the joint position array corresponds to a specific joint in the URDF file? To do this, you'll implement `get_urdf_name_to_dataset_array_index()`.
Finally, the last step is to tell `FlexibleDataset` what order your dataset data is in. For example, which index in the joint position array corresponds to a specific joint in the URDF file? To do this, you'll implement `get_urdf_name_to_dataset_array_index()`. See `CustomDatasetTemplate.py` for more details.
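The return value is essentially a name-to-column mapping. A sketch for a 12-joint quadruped is below; the joint names are illustrative (in the style of a Unitree A1 URDF) and the ordering is hypothetical, since the correct indices depend entirely on how your dataset arrays were recorded:

```python
def get_urdf_name_to_dataset_array_index():
    """Hypothetical sketch: map each URDF joint name to its column index
    in the dataset's joint-state arrays."""
    return {
        "FL_hip_joint": 0, "FL_thigh_joint": 1, "FL_calf_joint": 2,
        "FR_hip_joint": 3, "FR_thigh_joint": 4, "FR_calf_joint": 5,
        "RL_hip_joint": 6, "RL_thigh_joint": 7, "RL_calf_joint": 8,
        "RR_hip_joint": 9, "RR_thigh_joint": 10, "RR_calf_joint": 11,
    }
```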

After doing this, your dataset will work with our current codebase for training MLP and MI-HGNN models! You can now instantiate your dataset and use it in a similar manner to the datasets in the `research` directory. Happy Training!
After doing this, your dataset will work with our current codebase for training MLP and MI-HGNN models! You can now instantiate your dataset and use it like in the examples in the `research` directory. Happy Training!

0 comments on commit 288b40e
