From 6963fc97d89580d0e5c82d9b6a75a86a751cd39f Mon Sep 17 00:00:00 2001
From: Shenghai Yuan <140951558+SHYuanBest@users.noreply.github.com>
Date: Thu, 26 Dec 2024 11:28:01 +0800
Subject: [PATCH] add teacache

---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index aede16d..58aa1f5 100644
--- a/README.md
+++ b/README.md
@@ -181,7 +181,7 @@ bash run.sh
 
 ## 🚀 Cache Inference by TeaCache
 
-[TeaCache](https://github.com/LiewFeng/TeaCache) is a training-free caching approach that estimates and leverages the fluctuating differences among model outputs across timesteps, thereby accelerate the inference. For example, you can use the following command:
+[TeaCache](https://github.com/LiewFeng/TeaCache) is a training-free caching approach that estimates and leverages the fluctuating differences among model outputs across timesteps, thereby accelerating the inference. For example, you can use the following command:
 
 ```
 cd tools/cache_inference
@@ -343,4 +343,4 @@ If you find our paper and code useful in your research, please consider giving a
 
 
 
-
\ No newline at end of file
+
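
Note (illustrative, not part of the patch): the README text touched above describes TeaCache's core idea, namely estimating how much the model's inputs change across timesteps and reusing cached outputs when that change is small. The Python sketch below is a hypothetical toy under that assumption; `expensive_denoiser`, the L2 drift estimate, and the `accum_threshold` value are made up for illustration, and this is neither the TeaCache implementation nor the repo's `tools/cache_inference` script.

```python
# Hypothetical toy, not the TeaCache implementation and not tools/cache_inference.
# It sketches the general idea: estimate how much the model's input has drifted
# since the last fully computed timestep, accumulate that estimate, and reuse the
# cached output until the accumulated drift crosses a threshold.

import numpy as np


def expensive_denoiser(x: np.ndarray, t: int) -> np.ndarray:
    """Stand-in for a heavy diffusion-transformer forward pass."""
    return 0.99 * x + 0.01 * np.cos(t / 10.0)


def cached_inference(x: np.ndarray, timesteps, accum_threshold: float = 0.05):
    """Denoise over `timesteps`, skipping steps whose accumulated input drift is small."""
    cached_input = None    # input seen at the last fully computed step
    cached_output = None   # output produced at that step
    accumulated_drift = 0.0
    num_skipped = 0

    for t in timesteps:
        if cached_output is not None:
            # Cheap relative-change estimate between the current input and the
            # input at the last computed step (TeaCache uses a more refined,
            # rescaled estimate; a plain L2 ratio is enough for this sketch).
            drift = np.linalg.norm(x - cached_input) / (np.linalg.norm(cached_input) + 1e-8)
            accumulated_drift += drift
            if accumulated_drift < accum_threshold:
                # Drift still small: reuse the cached output instead of recomputing.
                x = cached_output
                num_skipped += 1
                continue

        # First step, or drift too large: run the full model and refresh the cache.
        out = expensive_denoiser(x, t)
        cached_input = x.copy()
        cached_output = out
        accumulated_drift = 0.0
        x = out

    return x, num_skipped


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    latent = rng.standard_normal(16)
    _, skipped = cached_inference(latent, timesteps=range(50, 0, -1))
    print(f"skipped {skipped} of 50 denoiser calls")
```

The threshold is the knob that trades speed for fidelity: a larger value skips more forward passes at the cost of reusing staler outputs.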