Add documentation
Signed-off-by: Ashwin Vaidya <[email protected]>
ashwinvaidya17 committed Jan 21, 2025
1 parent b780754 commit 33a0e0e
Showing 3 changed files with 88 additions and 0 deletions.
28 changes: 28 additions & 0 deletions README.md
@@ -180,6 +180,34 @@ anomalib predict --model anomalib.models.Patchcore \

> 📘 **Note:** For advanced inference options including Gradio and OpenVINO, check our [Inference Documentation](https://anomalib.readthedocs.io).
# Training on Intel GPUs

> [!Note]
> Currently, only single GPU training is supported on Intel GPUs.
> These commands were tested on Arc 750 and Arc 770.

Ensure that you have PyTorch with XPU support installed. For more information, please refer to the [PyTorch XPU documentation](https://pytorch.org/docs/stable/notes/get_start_xpu.html).
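
To quickly confirm the installation, you can run the availability check from the Intel GPU guide (a minimal sketch, assuming a PyTorch build with XPU support):

```python
import torch

# Prints True when the XPU (Intel GPU) backend is available.
print(torch.xpu.is_available())
```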

## 🔌 API

```python
from anomalib.data import MVTec
from anomalib.engine import Engine, SingleXPUStrategy, XPUAccelerator
from anomalib.models import Stfpm

# The single-XPU strategy and accelerator route training to an Intel GPU.
engine = Engine(
    strategy=SingleXPUStrategy(),
    accelerator=XPUAccelerator(),
)
engine.train(Stfpm(), datamodule=MVTec())
```

## ⌨️ CLI

```bash
anomalib train --model Padim --data MVTec --trainer.accelerator xpu --trainer.strategy xpu_single
```

# ⚙️ Hyperparameter Optimization

Anomalib supports hyperparameter optimization (HPO) using [Weights & Biases](https://wandb.ai/) and [Comet.ml](https://www.comet.com/).
8 changes: 8 additions & 0 deletions docs/source/markdown/guides/how_to/index.md
@@ -72,6 +72,13 @@ Learn more about anomalib's deployment capabilities
Learn more about anomalib hpo, sweep and benchmarking pipelines
:::

:::{grid-item-card} {octicon}`cpu` Training on Intel GPUs
:link: ./training_on_intel_gpus/index
:link-type: doc

Learn more about training on Intel GPUs
:::

::::

```{toctree}
@@ -83,4 +90,5 @@
./models/index
./pipelines/index
./visualization/index
./training_on_intel_gpus/index
```
52 changes: 52 additions & 0 deletions docs/source/markdown/guides/how_to/training_on_intel_gpus/index.md
@@ -0,0 +1,52 @@
# Training on Intel GPUs

This tutorial demonstrates how to train a model on Intel GPUs using anomalib.
Anomalib ships an XPU accelerator and strategy for PyTorch Lightning, which allow you to run training on Intel GPUs.

> [!Note]
> Currently, only single GPU training is supported on Intel GPUs.
> These commands were tested on Arc 750 and Arc 770.

## Installing Drivers

First, check if you have the correct drivers installed. If you are on Ubuntu, you can refer to the [following guide](https://dgpu-docs.intel.com/driver/client/overview.html).

Another recommended tool is `xpu-smi`, which can be installed from the [releases](https://github.com/intel/xpumanager) page.

If everything is installed correctly, you should be able to see your card using the following command:

```bash
xpu-smi discovery
```

## Installing PyTorch

Then, ensure that you have PyTorch with XPU support installed. For more information, please refer to the [PyTorch XPU documentation](https://pytorch.org/docs/stable/notes/get_start_xpu.html).

To verify that your PyTorch installation supports XPU, run the following command:

```bash
python -c "import torch; print(torch.xpu.is_available())"
```

If the command returns `True`, then your PyTorch installation supports XPU.
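
To go one step further, you can run a small tensor operation on the device to confirm that the backend actually executes work; a minimal sketch, assuming at least one Intel GPU is visible to PyTorch:

```python
import torch

# Allocate a tensor directly on the first XPU device and run a simple op.
x = torch.ones(2, 3, device="xpu")
y = (x * 2).sum()
print(y.item(), y.device)  # expect 12.0 on an xpu device
```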

## 🔌 API

```python
from anomalib.data import MVTec
from anomalib.engine import Engine, SingleXPUStrategy, XPUAccelerator
from anomalib.models import Stfpm

# Configure the engine with the single-XPU strategy and accelerator
# so that training runs on an Intel GPU.
engine = Engine(
    strategy=SingleXPUStrategy(),
    accelerator=XPUAccelerator(),
)

# Train STFPM on the MVTec datamodule.
model = Stfpm()
datamodule = MVTec()
engine.train(model, datamodule=datamodule)
```
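
If you want to run evaluation separately after training, the engine's `test` method can presumably be called with the same objects; a minimal sketch, reusing `engine`, `model`, and `datamodule` from the snippet above (the exact return format depends on the anomalib version):

```python
# Evaluate the trained model on the MVTec test split with the same
# XPU-backed engine.
test_results = engine.test(model=model, datamodule=datamodule)
print(test_results)
```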

## ⌨️ CLI

```bash
anomalib train --model Padim --data MVTec --trainer.accelerator xpu --trainer.strategy xpu_single
```
