forked from intel/ipex-llm
Commit
Add minicpm3 gpu example (intel#12114)
* add minicpm3 gpu example
* update GPU example
* update

Co-authored-by: Huang, Xinshengzi <[email protected]>
Showing 4 changed files with 230 additions and 0 deletions.
146 changes: 146 additions & 0 deletions
python/llm/example/GPU/HuggingFace/LLM/minicpm3/README.md

# MiniCPM3
In this directory, you will find examples on how you could apply IPEX-LLM INT4 optimizations on MiniCPM3 models on [Intel GPUs](../../../README.md). For illustration purposes, we utilize the [openbmb/MiniCPM3-4B](https://huggingface.co/openbmb/MiniCPM3-4B) as a reference MiniCPM3 model.

## 0. Requirements
To run these examples with IPEX-LLM on Intel GPUs, we have some recommended requirements for your machine. Please refer to [here](../../../README.md#requirements) for more information.

## Example: Predict Tokens using `generate()` API
In the example [generate.py](./generate.py), we show a basic use case for a MiniCPM3 model to predict the next N tokens using the `generate()` API, with IPEX-LLM INT4 optimizations on Intel GPUs.
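At its core, the example loads the model with INT4 optimization and moves it to the Intel GPU before calling `generate()`. The snippet below is a condensed sketch of those calls, mirroring [generate.py](./generate.py); see the full script for argument parsing and timing.

```python
from ipex_llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

# Load MiniCPM3 with the relevant layers converted to INT4,
# then move the model to the Intel GPU ('xpu')
model = AutoModelForCausalLM.from_pretrained("openbmb/MiniCPM3-4B",
                                             load_in_4bit=True,
                                             trust_remote_code=True,
                                             optimize_model=True,
                                             use_cache=True)
model = model.half().to('xpu')

tokenizer = AutoTokenizer.from_pretrained("openbmb/MiniCPM3-4B", trust_remote_code=True)
```
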
### 1. Install
#### 1.1 Installation on Linux
We suggest using conda to manage the environment:
```bash
conda create -n llm python=3.11
conda activate llm
# below command will install intel_extension_for_pytorch==2.1.10+xpu as default
pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/

pip install jsonschema datamodel_code_generator
```

#### 1.2 Installation on Windows
We suggest using conda to manage the environment:
```bash
conda create -n llm python=3.11 libuv
conda activate llm

# below command will install intel_extension_for_pytorch==2.1.10+xpu as default
pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/

pip install jsonschema datamodel_code_generator
```

### 2. Configure OneAPI environment variables for Linux

> [!NOTE]
> Skip this step if you are running on Windows.

This is a required step on Linux for APT- or offline-installed oneAPI. Skip this step for pip-installed oneAPI.

```bash
source /opt/intel/oneapi/setvars.sh
```

### 3. Runtime Configurations
For optimal performance, it is recommended to set several environment variables. Please check out the suggestions based on your device.
#### 3.1 Configurations for Linux
<details>

<summary>For Intel Arc™ A-Series Graphics and Intel Data Center GPU Flex Series</summary>

```bash
export USE_XETLA=OFF
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
export SYCL_CACHE_PERSISTENT=1
```

</details>

<details>

<summary>For Intel Data Center GPU Max Series</summary>

```bash
export LD_PRELOAD=${LD_PRELOAD}:${CONDA_PREFIX}/lib/libtcmalloc.so
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
export SYCL_CACHE_PERSISTENT=1
export ENABLE_SDP_FUSION=1
```
> Note: `libtcmalloc.so` can be installed by `conda install -c conda-forge -y gperftools=2.10`.
</details>

<details>

<summary>For Intel iGPU</summary>

```bash
export SYCL_CACHE_PERSISTENT=1
export BIGDL_LLM_XMX_DISABLED=1
```

</details>

#### 3.2 Configurations for Windows
<details>

<summary>For Intel iGPU</summary>

```cmd
set SYCL_CACHE_PERSISTENT=1
set BIGDL_LLM_XMX_DISABLED=1
```

</details>

<details>

<summary>For Intel Arc™ A-Series Graphics</summary>

```cmd
set SYCL_CACHE_PERSISTENT=1
```

</details>

> [!NOTE]
> The first time each model runs on Intel iGPU/Intel Arc™ A300-Series or Pro A60, it may take several minutes to compile.

### 4. Running examples

```
python ./generate.py --prompt 'What is AI?'
```

Arguments info (a full invocation setting all three arguments is shown after this list):
- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the Hugging Face repo id for the MiniCPM3 model (e.g. `openbmb/MiniCPM3-4B`) to be downloaded, or the path to the Hugging Face checkpoint folder. The default is `'openbmb/MiniCPM3-4B'`.
- `--prompt PROMPT`: argument defining the prompt to be inferred (with integrated prompt format for chat). The default is `'What is AI?'`.
- `--n-predict N_PREDICT`: argument defining the max number of tokens to predict. The default is `32`.
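
For instance, a run that sets all three arguments explicitly might look like the following (the values shown are simply the documented defaults, not a required configuration):

```bash
python ./generate.py --repo-id-or-model-path openbmb/MiniCPM3-4B --prompt 'What is AI?' --n-predict 32
```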

#### Sample Output
#### [openbmb/MiniCPM3-4B](https://huggingface.co/openbmb/MiniCPM3-4B)
```log
Inference time: xxxx s
-------------------- Prompt --------------------
<|im_start|>user
AI是什么?<|im_end|>
<|im_start|>assistant
-------------------- Output --------------------
<s><|im_start|> user
AI是什么?<|im_end|>
<|im_start|> assistant
AI,即人工智能(Artificial Intelligence),是指由人类创造的、能够模拟人类智能的相关理论和实践的一门新兴技术。它使计算机 或其他
```

```log
Inference time: xxxx s
-------------------- Prompt --------------------
<|im_start|>user
What is AI?<|im_end|>
<|im_start|>assistant
-------------------- Output --------------------
<s><|im_start|> user
What is AI?<|im_end|>
<|im_start|> assistant
AI, or Artificial Intelligence, is a field of computer science that emphasizes the creation of intelligent machines capable of performing tasks that typically require human intelligence. These tasks include
```
82 changes: 82 additions & 0 deletions
python/llm/example/GPU/HuggingFace/LLM/minicpm3/generate.py

#
# Copyright 2016 The BigDL Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import torch
import time
import argparse

from ipex_llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer


if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Predict Tokens using `generate()` API for MiniCPM3 model')
    parser.add_argument('--repo-id-or-model-path', type=str, default="openbmb/MiniCPM3-4B",
                        help='The huggingface repo id for the MiniCPM3 model to be downloaded'
                             ', or the path to the huggingface checkpoint folder')
    parser.add_argument('--prompt', type=str, default="What is AI?",
                        help='Prompt to infer')
    parser.add_argument('--n-predict', type=int, default=32,
                        help='Max tokens to predict')

    args = parser.parse_args()
    model_path = args.repo_id_or_model_path

    # Load model in 4 bit,
    # which converts the relevant layers in the model into INT4 format.
    # When running LLMs on Intel iGPUs for Windows users, we recommend setting `cpu_embedding=True` in the from_pretrained function.
    # This will allow the memory-intensive embedding layer to utilize the CPU instead of the iGPU.
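    # For example, a hypothetical iGPU variant of the call below (not used in this script) would add that flag:
    #   model = AutoModelForCausalLM.from_pretrained(model_path,
    #                                                load_in_4bit=True,
    #                                                cpu_embedding=True,
    #                                                trust_remote_code=True,
    #                                                optimize_model=True,
    #                                                use_cache=True)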
    model = AutoModelForCausalLM.from_pretrained(model_path,
                                                 load_in_4bit=True,
                                                 trust_remote_code=True,
                                                 optimize_model=True,
                                                 use_cache=True)

    model = model.half().to('xpu')

    # Load tokenizer
    tokenizer = AutoTokenizer.from_pretrained(model_path,
                                              trust_remote_code=True)

    # Generate predicted tokens
    with torch.inference_mode():
        # here the prompt formatting refers to: https://huggingface.co/openbmb/MiniCPM3-4B#inference-with-transformers
        chat = [
            { "role": "user", "content": args.prompt },
        ]

        prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
        input_ids = tokenizer.encode(prompt, return_tensors="pt").to('xpu')

        # ipex_llm model needs a warmup run; only the timed run below gives an accurate inference time
        output = model.generate(input_ids,
                                do_sample=False,
                                max_new_tokens=args.n_predict)
        # start inference
        st = time.time()

        output = model.generate(input_ids,
                                do_sample=False,
                                max_new_tokens=args.n_predict)
        torch.xpu.synchronize()
        end = time.time()
        output_str = tokenizer.decode(output[0], skip_special_tokens=False)

        print(f'Inference time: {end-st} s')
        print('-'*20, 'Prompt', '-'*20)
        print(prompt)
        print('-'*20, 'Output', '-'*20)
        print(output_str)