
Commit

Added hot and cold labels to compiler cards and added cards for stable-fast and oneflow (#47)
JPGoodale authored Dec 21, 2023
1 parent 432fb39 commit abfb5be
Showing 34 changed files with 102 additions and 0 deletions.
1 change: 1 addition & 0 deletions compilers/apex.yaml
@@ -8,6 +8,7 @@ apex:
- nvidia
- bsd-3-clause

temperature: hot
url: https://github.com/NVIDIA/apex

description: |
1 change: 1 addition & 0 deletions compilers/bladedisc.yaml
@@ -8,6 +8,7 @@ bladedisc:
- mlir
- apache-2.0

temperature: hot
url: https://github.com/alibaba/BladeDISC

description: |
1 change: 1 addition & 0 deletions compilers/candle.yaml
@@ -10,6 +10,7 @@ candle:
- mit
- apache-2.0

temperature: hot
url: https://github.com/huggingface/candle

description: |
1 change: 1 addition & 0 deletions compilers/executorch.yaml
@@ -9,6 +9,7 @@ executorch:
- edge
- bsd-3-clause

temperature: hot
url: https://pytorch.org/executorch-overview

description: |
1 change: 1 addition & 0 deletions compilers/flexgen.yaml
@@ -10,6 +10,7 @@ flexgen:
- llm
- apache-2.0

temperature: hot
url: https://github.com/FMInference/FlexGen

description: |
1 change: 1 addition & 0 deletions compilers/ggml.yaml
@@ -12,6 +12,7 @@ ggml:
- compression
- mit

temperature: hot
url: https://ggml.ai

description: |
1 change: 1 addition & 0 deletions compilers/glow.yaml
@@ -7,6 +7,7 @@ glow:
- compilers
- apache-2.0

temperature: cold
url: https://ai.meta.com/tools/glow/

description: |
1 change: 1 addition & 0 deletions compilers/hidet.yaml
@@ -8,6 +8,7 @@ hidet:
- pytorch
- apache-2.0

temperature: hot
url: https://github.com/hidet-org/hidet

description: |
1 change: 1 addition & 0 deletions compilers/ipex.yaml
@@ -8,6 +8,7 @@ ipex:
- intel
- apache-2.0

temperature: hot
url: https://github.com/intel/intel-extension-for-pytorch

description: |
1 change: 1 addition & 0 deletions compilers/iree.yaml
@@ -9,6 +9,7 @@ iree:
- mlir
- apache-2.0

temperature: hot
url: https://iree.dev

description: |
1 change: 1 addition & 0 deletions compilers/keops.yaml
@@ -7,6 +7,7 @@ keops:
- compilers
- mit

temperature: neutral
url: https://www.kernel-operations.io/keops/index.html

description: |
1 change: 1 addition & 0 deletions compilers/kernl.yaml
@@ -8,6 +8,7 @@ kernl:
- pytorch
- apache-2.0

temperature: neutral
url: https://www.kernl.ai

description: |
1 change: 1 addition & 0 deletions compilers/mlc-llm.yaml
@@ -8,6 +8,7 @@ mlc-llm:
- llm
- apache-2.0

temperature: hot
url: https://llm.mlc.ai

description: |
1 change: 1 addition & 0 deletions compilers/mlgo.yaml
@@ -7,6 +7,7 @@ mlgo:
- compilers
- apache-2.0

temperature: neutral
url: https://github.com/google/ml-compiler-opt

description: |
1 change: 1 addition & 0 deletions compilers/mlir.yaml
@@ -9,6 +9,7 @@ mlir:
- mlir
- apache-2.0

temperature: hot
url: https://mlir.llvm.org

description: |
1 change: 1 addition & 0 deletions compilers/mojo.yaml
@@ -8,6 +8,7 @@ mojo:
- mlir
- proprietary

temperature: hot
url: https://www.modular.com/mojo

description: |
1 change: 1 addition & 0 deletions compilers/oneapi.yaml
@@ -9,6 +9,7 @@ oneapi:
- other-license
- mit

temperature: neutral
url: https://www.oneapi.io

description: |
27 changes: 27 additions & 0 deletions compilers/oneflow.yaml
@@ -0,0 +1,27 @@
oneflow:
name: "Oneflow"

image_url: https://docs.oneflow.org/en/master/assets/product-layer.png

tags:
- compilers
- framework
- mlir
- apache-2.0

temperature: hot
url: https://docs.oneflow.org/en/master/index.html

description: |
OneFlow is a deep learning framework that offers a unified solution for both deep learning
and traditional machine learning tasks. It stands out for its efficient approach to distributed
training, leveraging advanced parallelism and resource management techniques to optimize hardware
usage in large-scale environments. The framework supports both dynamic and static computation graphs,
providing users with the flexibility to choose the most suitable approach for their specific project.
Additionally, OneFlow uses MLIR for its code generation, with all modules being compiled to a OneFlow dialect
before lowering to device code.
features:
- "Advanced Distributed Training Efficiency"
- "Support for Both Dynamic and Static Graphs"
- "Intuitive and User-Friendly API"
1 change: 1 addition & 0 deletions compilers/pi.yaml
@@ -9,6 +9,7 @@ pi:
- pytorch
- apache-2.0

temperature: neutral
url: https://github.com/nod-ai/PI#installing

description: |
1 change: 1 addition & 0 deletions compilers/plaidml.yaml
@@ -9,6 +9,7 @@ plaidml:
- mlir
- apache-2.0

temperature: cold
url: https://plaidml.github.io/plaidml/

description: |
1 change: 1 addition & 0 deletions compilers/polyblocks.yaml
@@ -8,6 +8,7 @@ polyblocks:
- mlir
- proprietary

temperature: hot
url: https://www.polymagelabs.com/technology/

description: |
1 change: 1 addition & 0 deletions compilers/shark.yaml
@@ -9,6 +9,7 @@ shark:
- amd
- apache-2.0

temperature: hot
url: https://github.com/nod-ai/SHARK/tree/main

description: |
43 changes: 43 additions & 0 deletions compilers/stable-fast.yaml
@@ -0,0 +1,43 @@
stable-fast:
name: "stable-fast"

image_url: https://docs.oneflow.org/en/master/assets/product-layer.png

tags:
- compilers
- framework
- apache-2.0

temperature: hot
url: https://github.com/chengzeyi/stable-fast

description: |
StableFast is a cutting-edge, ultra-lightweight inference optimization framework designed
specifically for HuggingFace Diffusers on NVIDIA GPUs. It stands out for its exceptional
state-of-the-art (SOTA) inference performance on a wide range of diffuser models, including
the latest StableVideoDiffusionPipeline. One of its most notable features is its rapid model
compilation capability, which significantly outpaces other frameworks like TensorRT or AITemplate
by reducing compilation time from minutes to mere seconds. StableFast supports dynamic shapes, LoRA
(Low-Rank Adaptation), and ControlNet, offering a broad range of functionalities. It incorporates
advanced techniques such as CUDNN Convolution Fusion, low precision and fused GEMM, fused Linear GEGLU,
optimized NHWC & fused GroupNorm, and CUDA Graph and Fused Multihead Attention optimizations. This makes
it a highly versatile and efficient tool for developers. The framework is compatible with various versions
of HuggingFace Diffusers and PyTorch, ensuring broad applicability. Currently tested on Linux and WSL2 on
Windows, StableFast requires PyTorch with CUDA support and specific versions of related tools like xformers
and triton. The ongoing development of StableFast focuses on maintaining its position as a leading inference
optimization framework, with an emphasis on enhancing speed and reducing VRAM usage for transformers. This
commitment to continuous improvement underlines its utility in the rapidly evolving field of deep learning
optimization.
features:
- "Rapid Model Compilation"
- "Supports Dynamic Shape"
- "Compatible with LoRA and ControlNet"
- "CUDNN Convolution Fusion"
- "Low Precision & Fused GEMM Operations"
- "Fused Linear GEGLU"
- "Optimized NHWC & Fused GroupNorm"
- "Enhanced TorchScript Tracing"
- "CUDA Graph Support"
- "Fused Multihead Attention"
- "Broad Compatibility with PyTorch and HuggingFace Diffusers"
1 change: 1 addition & 0 deletions compilers/taco.yaml
@@ -7,6 +7,7 @@ taco:
- compilers
- mit

temperature: cold
url: http://tensor-compiler.org

description: |
1 change: 1 addition & 0 deletions compilers/tensor-comprehensions.yaml
@@ -7,6 +7,7 @@ tiramisu:
- compilers
- apache-2.0

temperature: cold
url: https://github.com/facebookresearch/TensorComprehensions

description: |
1 change: 1 addition & 0 deletions compilers/tensorrt-llm.yaml
@@ -10,6 +10,7 @@ tensorrt-llm:
- inference-optimizer
- apache-2.0

temperature: hot
url: https://github.com/NVIDIA/TensorRT-LLM

description: |
1 change: 1 addition & 0 deletions compilers/tensorrt.yaml
@@ -9,6 +9,7 @@ tensorrt:
- inference-optimizer
- apache-2.0

temperature: hot
url: https://developer.nvidia.com/tensorrt

description: |
1 change: 1 addition & 0 deletions compilers/tinygrad.yaml
@@ -8,6 +8,7 @@ tinygrad:
- framework
- mit

temperature: hot
url: https://tinygrad.org

description: |
1 change: 1 addition & 0 deletions compilers/tiramisu.yaml
@@ -7,6 +7,7 @@ tiramisu:
- compilers
- mit

temperature: cold
url: https://tiramisu-compiler.org

description: |
1 change: 1 addition & 0 deletions compilers/torch-mlir.yaml
@@ -9,6 +9,7 @@ torch-mlir:
- other-license
- bsd-3-clause

temperature: hot
url: https://github.com/llvm/torch-mlir

description: |
1 change: 1 addition & 0 deletions compilers/triton.yaml
@@ -9,6 +9,7 @@ triton:
- nvidia
- mit

temperature: hot
url: https://openai.com/research/triton

description: |
1 change: 1 addition & 0 deletions compilers/tvm.yaml
@@ -7,6 +7,7 @@ tvm:
- compilers
- apache-2.0

temperature: hot
url: https://tvm.apache.org

description: |
1 change: 1 addition & 0 deletions compilers/vllm.yaml
@@ -10,6 +10,7 @@ vllm:
- high-throughput
- apache-2.0

temperature: hot
url: https://vllm.ai

description: |
1 change: 1 addition & 0 deletions compilers/xla.yaml
@@ -8,6 +8,7 @@ xla:
- mlir
- apache-2.0

temperature: hot
url: https://www.tensorflow.org/xla

description: |
