annInferenceServer

This Sample Inference Server supports:

  • converts and maintains a database of pre-trained Caffe models using caffe2openvx
  • accepts multiple TCP/IP client connections for inference work submissions (see the connectivity check below)
  • schedules multi-GPU, high-throughput live-streaming batches
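
For example, once the server is running, you can verify from a client workstation that it is accepting TCP connections. The hostname and port below are placeholders; use the port you passed with -p:

% nc -z -v inference-server-host 28282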

Command-line usage:

% annInferenceServer [-p port]
                     [-b default-batch-size]
                     [-gpu <comma-separated-list-of-GPUs>]
                     [-q <max-pending-batches>]
                     [-w <server-work-folder>]
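
An illustrative launch (the port, batch size, GPU list, queue depth, and work folder below are example values, not defaults):

% annInferenceServer -p 28282 -b 64 -gpu 0,1 -q 4 -w /tmp/annInferenceServer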

Make sure that all executables and libraries are reachable through the PATH and LD_LIBRARY_PATH environment variables:

% export PATH=$PATH:/opt/rocm/bin
% export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/rocm/lib

annInferenceServer works together with the annInferenceApp client:

  • Run annInferenceServer on the server machine equipped with Radeon Instinct GPUs
  • Run annInferenceApp on one or more client workstations to connect to the server and classify images using any pre-trained neural network (see the example below)
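
A minimal sketch of this workflow; the GPU list and work folder are illustrative, and annInferenceApp's server hostname and port are configured from within the application:

On the server machine:

% annInferenceServer -gpu 0,1 -w /tmp/annInferenceServer

On each client workstation:

% annInferenceApp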