PointCloud Sensor Elevation Uncertainty Model #93
Comments
Hi jwag, thanks a lot for going through our work and checking the details. Implementing a better sensor noise model is one of the TODOs that I have not been able to finish yet due to time constraints. Thanks.
Hey mktk1117, I am interested in making a PR if you think it would actually improve the performance of the system. It seems like this approach has worked rather well for you in DARPA SubT and other real-world scenarios. (I apologize for the long response here; I am very interested in this work and want to make sure I understand the problems before I spend time trying to make any improvements.)

My understanding is that one of the main motivations for creating elevation_mapping_cupy (EM_cupy) was to speed up the elevation_mapping approach by Fankhauser [3] (which I will refer to as EM), handle larger point clouds via GPU processing, and enable tighter integration with learning systems for traversability and semantics. The EM_cupy approach to propagating uncertainty and accounting for pose drift, given a moving map frame, is different from the EM approach. I have a few questions regarding your design decisions that I was hoping you could answer before I make any changes.
Hello! I have started work on the PR, but I would really appreciate some feedback if you are available. I am also still interested in your response to the questions above. I have currently implemented a SensorProcessor class that takes in a point cloud, some transforms, and some uncertainty estimates, and then produces a propagated uncertainty for each point. I have done this using built-in CuPy functions, without custom kernels. I have benchmarked against NumPy and they both run at about the same speed, with CuPy a bit slower; I am not sure whether that means I have done something wrong (I have not worked with CuPy before). My plan is to integrate this into the elevation_mapping class next. I also have LaTeX documentation of the derivation that I could provide after some cleanup.
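For reference, here is a rough sketch of the interface I have in mind. The names and the noise model below are placeholders of my own (a simple quadratic-in-range axial variance), not the final PR code:

```python
import cupy as cp


class SensorProcessor:
    """Interface sketch only; names and details may change in the actual PR."""

    def __init__(self, noise_coeff):
        # Placeholder: coefficient of a quadratic-in-range axial variance model.
        self.noise_coeff = noise_coeff

    def get_z_variance(self, points_sensor, R_map_sensor):
        """points_sensor: (N, 3) points in the sensor frame (cupy or numpy array).
        R_map_sensor: (3, 3) rotation from the sensor frame to the map frame.
        Returns an (N,) array of propagated height variances in the map frame."""
        xp = cp.get_array_module(points_sensor)         # same code path for numpy and cupy
        d = xp.linalg.norm(points_sensor, axis=1)        # range from the sensor to each point
        axial_variance = self.noise_coeff * d ** 2       # placeholder quadratic-in-range variance
        u_map = (points_sensor / d[:, None]) @ R_map_sensor.T  # unit ray directions in the map frame
        # The axial variance contributes to the map height variance in proportion
        # to the squared z component of the ray direction.
        # TODO: also propagate the pose / transform covariance terms.
        return u_map[:, 2] ** 2 * axial_variance
```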
First, thanks for making this library. I am planning to use it to map soil properties during bulldozer earthmoving operations.
I have been working my way through the code for Elevation Mapping CUPY, referring back to the paper [1] along the way, and I am struggling to understand how the measurement uncertainty is computed for a point cloud. The paper cites a noise model developed for a Kinect sensor [2] as justification for replacing the elevation noise model of Fankhauser [3] (equations 1-3) with a noise model quadratic in the distance of the measured point from the sensor. The noise model proposed in [2] is composed of a radial component and an axial component, which are found to be linear and quadratic in the depth to the point, respectively. If we assume the radial component is small with respect to the axial component (as is justifiable for larger distances, as shown in [2]), then the quadratic-in-depth axial component is a decent approximation of the uncertainty distribution. However, to properly account for the sensor uncertainty in the z direction of the map, this axial uncertainty must then be projected onto the z axis: the axial uncertainty lies along the unit vector from the sensor to the point. This projection is not discussed in the paper [1].
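In symbols (my own notation, not taken from [1] or [2]): let p be the point in the sensor frame, d = |p| its range, sigma_a(d) the axial standard deviation, and let R_MS be the rotation from the sensor frame to the map frame, so that u = R_MS p / d is the unit ray direction expressed in the map frame. The variance induced on the map z coordinate is then

$$
\sigma_z^2 \;=\; \hat{u}_z^{\,2}\,\sigma_a^2(d) \;=\; \left(\frac{(R_{MS}\,p)_z}{\lVert p \rVert}\right)^2 \sigma_a^2(\lVert p \rVert),
$$

i.e. the quadratic-in-distance axial term contributes to the map height variance in proportion to the squared z component of the ray direction.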
In the function add_points_kernel() in the custom_kernels.py script, a point measurement is provided (presumably in the sensor frame) and the function z_noise() is used to compute the height variance as v = sensor_noise_factor * rz * rz, where rz is the point's z coordinate in the sensor frame. This is not the same as the distance from the sensor to the point, d = |p|, as described in [1] and [2]. The value of rz depends on the choice of sensor coordinate frame, and there is no coordinate frame for a LiDAR or camera sensor in which |p|^2 = rz^2 for every point p.
Based on my understanding, the noise should instead be computed from d^2 using the function point_sensor_distance() and then projected along the z axis.
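As a rough sketch of what I mean, written in plain NumPy rather than as a CUDA kernel (the function and variable names here are mine, not from the codebase, and I keep the existing convention that sensor_noise_factor times a squared distance is already a variance):

```python
import numpy as np

def proposed_z_variance(points_sensor, R_map_sensor, sensor_noise_factor):
    """Quadratic-in-range variance, projected onto the map z axis."""
    d2 = np.einsum("ij,ij->i", points_sensor, points_sensor)  # |p|^2; what I understand point_sensor_distance() to compute
    axial_variance = sensor_noise_factor * d2                  # variance quadratic in range, not in rz
    u_map = (points_sensor / np.sqrt(d2)[:, None]) @ R_map_sensor.T  # unit ray directions in the map frame
    return u_map[:, 2] ** 2 * axial_variance                   # projection onto the map z axis
```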
I may be misunderstanding something here, so please correct me if I’m wrong.
References:
[1] "Elevation Mapping for Locomotion and Navigation using GPU" Takahiro Miki et. al. 2022
[2] "Modeling Kinect sensor noise for improved 3D reconstruction and tracking" Chuong V. Nguyen, et. al. 2012
[3] "Robot-centric elevation mapping with uncertainty estimates" Peter Fankhauser, et. al. 2014