Hi! Thanks for the C++ port of tf-pose-estimation.
I manage to get an average of 6-6.5 FPS on a video stream from a ZED camera, running under ROS on a TX2 with TF1..0. That compares to 5-5.5 FPS with the original Python implementation (I'm unsure how the original author managed to get ~10 FPS with the same setup).
I was wondering how the intuition behind the workaround came about. I'm new to deep learning and was hoping to squeeze out more performance. Do you know whether the workaround only slows down startup, or whether it affects runtime performance as well? And how different would it be to use TensorRT with this port compared to the Python implementation?
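For context on the TensorRT question: on TF 1.x the usual low-effort way to try TensorRT is TF-TRT, which replaces supported subgraphs of the frozen graph with TensorRT engines before inference. Below is a minimal sketch, not this port's method; the graph path, output node name, and workspace size are assumptions and would need to match the actual tf-pose-estimation graph.

```python
# Hedged sketch: converting a frozen tf-pose-estimation graph with TF-TRT (TF 1.x contrib API).
# The file names, output node name, and sizes are assumptions, not values from this repo.
import tensorflow as tf
from tensorflow.contrib import tensorrt as trt  # TF-TRT, available in TF >= 1.7 Jetson builds

with tf.gfile.GFile('graph_opt.pb', 'rb') as f:  # hypothetical path to the frozen graph
    frozen_graph = tf.GraphDef()
    frozen_graph.ParseFromString(f.read())

trt_graph = trt.create_inference_graph(
    input_graph_def=frozen_graph,
    outputs=['Openpose/concat_stage7'],       # assumed output node name
    max_batch_size=1,
    max_workspace_size_bytes=1 << 26,         # ~64 MB workspace; tune for the TX2
    precision_mode='FP16')                    # FP16 usually helps on the TX2

with tf.gfile.GFile('graph_trt.pb', 'wb') as f:
    f.write(trt_graph.SerializeToString())
```

The converted GraphDef is then loaded and run like any other frozen graph, so the conversion cost is paid once offline rather than at every startup.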