Strange artifacts when running on KITTI-DC #3

Open
haiphamcse opened this issue Dec 27, 2024 · 3 comments

@haiphamcse

Hi there, loved your work! I recently reproduced your method on the KITTI-DC dataset. While the quantitative results nearly match the paper, the qualitative results have a lot of strange artifacts (as shown below). Has your team encountered this before?
I have also attached my quantitative results so you can check whether there is any problem there. Looking forward to your reply!

eval_metrics.txt
image

@toshas
Collaborator

toshas commented Jan 9, 2025

Could you please clarify what exactly you are referring to as artifacts? Circling them in the image would help. Does using the Gradio demo help with troubleshooting? It shows both the guides and the result. Perhaps the guides contain noise; we did encounter that in LiDAR settings.

@haiphamcse
Author

Hi there, sorry for the late reply. I have highlighted the artifacts in purple; they mostly fall into two types:

  • Wrong predictions: e.g., the sky being predicted at close depths
  • Loss of detail: e.g., the bike rack and traffic signs; these details are visible in the original Marigold + LS prediction

I have also run some tests on other LiDAR datasets (Waymo & nuScenes) and noticed the same behavior. Why do you think LiDAR does not work as well for guidance as indoor sensors do?
image

@toshas
Collaborator

toshas commented Jan 16, 2025

  • Please check out the Gradio demo -- it will give many insights into what is going on. For example, it is easy to see how the LiDAR guidance points are positioned relative to the artifacts. You will also see that some places have conflicting LiDAR measurements (especially when you use all LiDAR points instead of subsets, as we propose in the paper), which inevitably leads to artifacts. A minimal sketch of such subsampling is given after this list.
Screenshot 2025-01-16 at 10 33 20
  • The sky issue could have several causes, too. The base Marigold model has been shown to predict sky values arbitrarily, because the model is trained with uncertain regions (such as the sky) masked out to enable better generalization. Second, you can see in the demo screenshot that the sky never has guidance points, because the laser beams never bounce off anything and travel off to infinity. You may have better success fixing the sky by using a base Marigold model trained without masking the sky region (we do not provide such a model, but some follow-up works have trained one), or by relying on something like SAM to segment out the sky region. Note that the sky never affects the depth completion metrics, as they are always computed on the held-out LiDAR measurements/samples; see the sketch at the end of this comment.
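
For reference, a minimal sketch of what splitting the raw LiDAR into a guidance subset and a held-out subset could look like (the function name, the zero-means-missing convention, and the keep fraction are assumptions for illustration, not the repository's actual API):

```python
import numpy as np

def subsample_guidance(sparse_depth: np.ndarray,
                       keep_fraction: float = 0.1,
                       seed: int = 0):
    """Hypothetical helper: split a sparse LiDAR depth map (zeros = no
    measurement) into a guidance subset and a held-out subset."""
    rng = np.random.default_rng(seed)
    rows, cols = np.nonzero(sparse_depth > 0)
    n_valid = rows.size
    n_keep = max(1, int(round(keep_fraction * n_valid)))
    keep = rng.choice(n_valid, size=n_keep, replace=False)

    # Keep only the sampled points as guidance.
    guidance = np.zeros_like(sparse_depth)
    guidance[rows[keep], cols[keep]] = sparse_depth[rows[keep], cols[keep]]

    # Everything not used as guidance stays available for evaluation.
    held_out = sparse_depth.copy()
    held_out[rows[keep], cols[keep]] = 0.0
    return guidance, held_out
```

Guiding with such a subset reduces the chance of mutually conflicting measurements landing in the same region, and the remaining points stay available for evaluation.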

HTH
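
As a small illustration of the metrics point above (assumed names and conventions, not the repository's evaluation code): the reported numbers depend only on pixels with held-out LiDAR returns, so masking the sky, e.g. with SAM, cannot change them.

```python
import numpy as np

def evaluate_held_out(pred_depth: np.ndarray,
                      held_out_depth: np.ndarray,
                      sky_mask=None):
    """Hypothetical helper: compute RMSE/MAE on held-out LiDAR points only.
    `sky_mask` (True = sky, e.g. from SAM) is optional; excluding it is
    normally a no-op because the sky has no LiDAR returns."""
    valid = held_out_depth > 0
    if sky_mask is not None:
        valid &= ~sky_mask
    diff = pred_depth[valid] - held_out_depth[valid]
    return {"rmse": float(np.sqrt(np.mean(diff ** 2))),
            "mae": float(np.mean(np.abs(diff)))}
```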
