
deepstream-lpr-app: /dvs/p4/build/sw/rel/gpgpu/MachineLearning/myelin_trt8/src/compiler/optimizer/cublas_impl.cpp:477: void add_heuristic_results_to_tactics(std::vector<cublasLtMatmulHeuristicResult_t>&, std::vector<myelin::ir::tactic_attribute_t>&, myelin::ir::tactic_attribute_t&, bool): Assertion `false && "Invalid size written"' failed. Aborted (core dumped) #19

Open
imSrbh opened this issue Feb 25, 2022 · 1 comment

imSrbh commented Feb 25, 2022

This app runs fine in the DeepStream devel container [nvcr.io/nvidia/deepstream:6.0-devel].

When I run the same app in the DeepStream base container, I hit the issue below.

root@373e8a951251:/app/deepstream_lpr_app/deepstream-lpr-app# ./deepstream-lpr-app 1 2 0 /app/metro_Trim.mp4 out.h264
Request sink_0 pad from streammux
Warning: 'input-dims' parameter has been deprecated. Use 'infer-dims' instead.
Warning: 'input-dims' parameter has been deprecated. Use 'infer-dims' instead.
Now playing: 1
ERROR: [TRT]: 1: [graphContext.h::MyelinGraphContext::24] Error Code 1: Myelin (cuBLASLt error 1 querying major version.)
ERROR: [TRT]: 1: [graphContext.h::MyelinGraphContext::24] Error Code 1: Myelin (cuBLASLt error 1 querying major version.)
ERROR: nvdsinfer_backend.cpp:394 Failed to setOptimizationProfile with idx:0 
ERROR: nvdsinfer_backend.cpp:228 Failed to initialize TRT backend, nvinfer error:NVDSINFER_INVALID_PARAMS
0:00:02.993528390 15515 0x558d7dc0fe30 WARN                 nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<secondary-infer-engine2> NvDsInferContext[UID 3]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1896> [UID = 3]: create backend context from engine from file :/app/deepstream_lpr_app/models/LP/LPR/us_lprnet_baseline18_deployable.etlt_b16_gpu0_fp16.engine failed
0:00:02.994778147 15515 0x558d7dc0fe30 WARN                 nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<secondary-infer-engine2> NvDsInferContext[UID 3]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1996> [UID = 3]: deserialize backend context from engine from file :/app/deepstream_lpr_app/models/LP/LPR/us_lprnet_baseline18_deployable.etlt_b16_gpu0_fp16.engine failed, try rebuild
0:00:02.994800887 15515 0x558d7dc0fe30 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<secondary-infer-engine2> NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 3]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:364: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: ShapedWeights.cpp:173: Weights td_dense/kernel:0 has been transposed with permutation of (1, 0)! If you plan on overwriting the weights with the Refitter API, the new weights must be pre-transposed.
WARNING: [TRT]: Tensor DataType is determined at build time for tensors not marked as input or output.
WARNING: [TRT]: Detected invalid timing cache, setup a local cache instead
deepstream-lpr-app: /dvs/p4/build/sw/rel/gpgpu/MachineLearning/myelin_trt8/src/compiler/optimizer/cublas_impl.cpp:477: void add_heuristic_results_to_tactics(std::vector<cublasLtMatmulHeuristicResult_t>&, std::vector<myelin::ir::tactic_attribute_t>&, myelin::ir::tactic_attribute_t&, bool): Assertion `false && "Invalid size written"' failed.
Aborted (core dumped)
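
For context, the repeated `cuBLASLt error 1 querying major version` lines suggest that TensorRT's Myelin backend cannot find or query cuBLASLt inside the base image. A quick check (my own suggestion, not something from the original report; the base image tag is assumed) from inside the container:

# Sketch, assuming the DeepStream base container (e.g. nvcr.io/nvidia/deepstream:6.0-base).
# List the cuBLAS / cuBLASLt libraries visible to the dynamic loader; if nothing
# is printed, the engine rebuild will abort exactly as in the log above.
ldconfig -p | grep -E 'libcublas(Lt)?\.so' || echo "no cuBLAS/cuBLASLt libraries found"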
@SuperElectron commented

me too
