diff --git a/llama_stack/distribution/docker/README.md b/llama_stack/distribution/docker/README.md
new file mode 100644
index 0000000000..ea116e0024
--- /dev/null
+++ b/llama_stack/distribution/docker/README.md
@@ -0,0 +1,23 @@
+# Docker Compose Scripts
+
+This folder contains scripts for starting a distribution with `docker compose`.
+
+
+#### TGI Inference Adapter
+```
+$ cd llama_stack/distribution/docker/tgi
+$ ls
+compose.yaml tgi-run.yaml
+$ docker compose up
+```
+
+The script first starts the TGI server, then starts the Llama Stack distribution server, which connects to the remote TGI provider for inference. You should see output like the following --
+```
+[text-generation-inference] | 2024-10-15T18:56:33.810397Z INFO text_generation_router::server: router/src/server.rs:1813: Using config Some(Llama)
+[text-generation-inference] | 2024-10-15T18:56:33.810448Z WARN text_generation_router::server: router/src/server.rs:1960: Invalid hostname, defaulting to 0.0.0.0
+[text-generation-inference] | 2024-10-15T18:56:33.864143Z INFO text_generation_router::server: router/src/server.rs:2353: Connected
+INFO: Started server process [1]
+INFO: Waiting for application startup.
+INFO: Application startup complete.
+INFO: Uvicorn running on http://[::]:5000 (Press CTRL+C to quit)
+```
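
For orientation, below is a minimal sketch of what a `compose.yaml` wiring the two services together could look like. The image name for the Llama Stack server, the model id, the host port mappings, and the volume paths are illustrative assumptions, not the actual contents of the file added in this PR; only the TGI image and its `--model-id`/`--port` flags are stock `text-generation-inference` usage.

```
# Hypothetical compose.yaml sketch: a TGI container plus a Llama Stack
# distribution server that depends on it. Values are assumptions for
# illustration, not the real file.
services:
  text-generation-inference:
    image: ghcr.io/huggingface/text-generation-inference:latest
    # --model-id and --port are standard TGI launcher flags; the model
    # id here is an assumed example.
    command: ["--model-id", "meta-llama/Llama-3.1-8B-Instruct", "--port", "80"]
    volumes:
      # Cache model weights on the host so restarts don't re-download.
      - ${HOME}/.cache/huggingface:/data
    ports:
      - "5009:80"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]

  llamastack:
    # Assumed image name for the distribution server.
    image: llamastack/distribution-tgi
    depends_on:
      - text-generation-inference
    volumes:
      # Mount the run config that points inference at the TGI endpoint;
      # the container-side path is an assumption.
      - ./tgi-run.yaml:/root/llamastack-run-tgi.yaml
    ports:
      - "5000:5000"
```

Once both containers are up, the distribution server should be reachable on port 5000, matching the Uvicorn line in the log output above.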