[New Port Request] Nvidia Triton Inference Server client #43186
Labels
category:new-port
The issue is requesting a new library to be added; consider making a PR!
info:good-first-issue
This issue would be a good one for getting one's feet wet.
Library name
Nvidia Triton Inference Server client
Library description
NVIDIA Triton Inference Server provides a cloud and edge inferencing solution optimized for both CPUs and GPUs. NVIDIA provides a C++ client library.
Source repository URL
https://github.com/triton-inference-server/client
Project homepage (if different from the source repository)
https://docs.nvidia.com/deeplearning/triton-inference-server/user-guide/docs/index.html
Anything else that is useful to know when adding (such as optional features the library may have that should be included)
No response
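For context, if this port were added, downstream projects could consume it through vcpkg's manifest mode. A minimal sketch, assuming the port were published under the hypothetical name `triton-client` (the actual port name would be decided in the PR):

```json
{
  "name": "my-inference-app",
  "version": "0.1.0",
  "dependencies": [
    "triton-client"
  ]
}
```

Note that upstream builds separate HTTP and gRPC client libraries (pulling in libcurl and gRPC respectively), so the port would likely want to expose these as optional vcpkg features rather than always building both.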