In this directory, you will find examples of how to apply IPEX-LLM INT4 optimizations to ChatGLM3 models on Intel GPUs. For illustration purposes, we use THUDM/chatglm3-6b (or ZhipuAI/chatglm3-6b for ModelScope) as a reference ChatGLM3 model.
To run these examples with IPEX-LLM on Intel GPUs, we have some recommended requirements for your machine; please refer to the requirements documentation for more information.
We suggest using conda to manage the environment.

On Linux:
conda create -n llm python=3.11
conda activate llm
# below command will install intel_extension_for_pytorch==2.1.10+xpu as default
pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
# [optional] only needed if you would like to use ModelScope as model hub
pip install modelscope==1.11.0
On Windows:
conda create -n llm python=3.11 libuv
conda activate llm
# below command will install intel_extension_for_pytorch==2.1.10+xpu as default
pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
# [optional] only needed if you would like to use ModelScope as model hub
pip install modelscope==1.11.0
Configure the oneAPI environment variables by sourcing setvars.sh:

Note: Skip this step if you are running on Windows. On Linux, this step is required only for oneAPI installed via APT or the offline installer; skip it for pip-installed oneAPI.
source /opt/intel/oneapi/setvars.sh
For optimal performance, it is recommended to set several environment variables. Please check out the suggestions based on your device.
On Linux:

For Intel Arc™ A-Series Graphics and Intel Data Center GPU Flex Series
export USE_XETLA=OFF
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
export SYCL_CACHE_PERSISTENT=1
For Intel Data Center GPU Max Series
export LD_PRELOAD=${LD_PRELOAD}:${CONDA_PREFIX}/lib/libtcmalloc.so
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
export SYCL_CACHE_PERSISTENT=1
export ENABLE_SDP_FUSION=1
Note: libtcmalloc.so can be installed with conda install -c conda-forge -y gperftools=2.10.
For Intel iGPU
export SYCL_CACHE_PERSISTENT=1
On Windows:

For Intel iGPU and Intel Arc™ A-Series Graphics
set SYCL_CACHE_PERSISTENT=1
Note: The first time each model runs on Intel iGPU / Intel Arc™ A300-Series or Pro A60, it may take several minutes to compile.
In the example generate.py, we show a basic use case for a ChatGLM3 model to predict the next N tokens using the generate() API, with IPEX-LLM INT4 optimizations on Intel GPUs.
# for Hugging Face model hub
python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROMPT --n-predict N_PREDICT
# for ModelScope model hub
python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROMPT --n-predict N_PREDICT --modelscope
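For reference, the core of such a generate() flow with IPEX-LLM looks roughly like the sketch below. This is a simplified illustration rather than the exact contents of generate.py; the model path, prompt, and max_new_tokens value are placeholders.

import torch
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModel

model_path = "THUDM/chatglm3-6b"  # or the path to a local checkpoint folder

# load ChatGLM3 with IPEX-LLM INT4 optimizations
model = AutoModel.from_pretrained(model_path,
                                  load_in_4bit=True,
                                  trust_remote_code=True)
model = model.half().to("xpu")
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# ChatGLM3 chat prompt format: <|user|> question, then <|assistant|>
prompt = "<|user|>\nWhat is AI?\n<|assistant|>\n"
input_ids = tokenizer.encode(prompt, return_tensors="pt").to("xpu")

with torch.inference_mode():
    output = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))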
Arguments info:
- --repo-id-or-model-path REPO_ID_OR_MODEL_PATH: argument defining the Hugging Face (e.g. THUDM/chatglm3-6b) or ModelScope (e.g. ZhipuAI/chatglm3-6b) repo id for the ChatGLM3 model to be downloaded, or the path to the checkpoint folder. It defaults to 'THUDM/chatglm3-6b' for Hugging Face or 'ZhipuAI/chatglm3-6b' for ModelScope.
- --prompt PROMPT: argument defining the prompt to be inferred (with the integrated prompt format for chat). It defaults to 'AI是什么?' (i.e. 'What is AI?').
- --n-predict N_PREDICT: argument defining the max number of tokens to predict. It defaults to 32.
- --modelscope: use ModelScope instead of Hugging Face as the model hub.
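For example, to run the Hugging Face model with an English prompt and 64 predicted tokens (the argument values here are only illustrative):

python ./generate.py --repo-id-or-model-path THUDM/chatglm3-6b --prompt "What is AI?" --n-predict 64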
Sample output with THUDM/chatglm3-6b:

Inference time: xxxx s
-------------------- Prompt --------------------
<|user|>
AI是什么?
<|assistant|>
-------------------- Output --------------------
[gMASK]sop <|user|>
AI是什么?
<|assistant|> AI是人工智能(Artificial Intelligence)的缩写,指通过计算机程序或机器学习算法来模拟、延伸或扩展人类智能的技术。AI旨在
Inference time: xxxx s
-------------------- Prompt --------------------
<|user|>
What is AI?
<|assistant|>
-------------------- Output --------------------
[gMASK]sop <|user|>
What is AI?
<|assistant|>
AI stands for Artificial Intelligence. It refers to the development of computer systems or machines that can perform tasks that would normally require human intelligence, such as recognizing patterns
In the example streamchat.py, we show a basic use case for a ChatGLM3 model to stream chat, with IPEX-LLM INT4 optimizations.
Stream chat using the stream_chat() API:
# for Hugging Face model hub
python ./streamchat.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --question QUESTION
# for ModelScope model hub
python ./streamchat.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --question QUESTION --modelscope
Chat using the chat() API:
# for Hugging Face model hub
python ./streamchat.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --question QUESTION --disable-stream
# for ModelScope model hub
python ./streamchat.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --question QUESTION --disable-stream --modelscope
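For reference, ChatGLM3 exposes its own stream_chat() and chat() methods on the model object, and streamchat.py builds on them roughly as sketched below. This is a simplified illustration rather than the exact script; the question string is a placeholder, and the model is loaded with IPEX-LLM INT4 optimizations as in the earlier sketch.

from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModel

model_path = "THUDM/chatglm3-6b"
model = AutoModel.from_pretrained(model_path,
                                  load_in_4bit=True,
                                  trust_remote_code=True).half().to("xpu")
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

question = "What is AI?"

# streaming: stream_chat() yields (response_so_far, history) as tokens are generated,
# so only the newly generated part is printed on each iteration
printed = ""
for response, history in model.stream_chat(tokenizer, question, history=[]):
    print(response[len(printed):], end="", flush=True)
    printed = response
print()

# non-streaming (what --disable-stream falls back to): chat() returns the full answer at once
response, history = model.chat(tokenizer, question, history=[])
print(response)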
Arguments info:
- --repo-id-or-model-path REPO_ID_OR_MODEL_PATH: argument defining the Hugging Face (e.g. THUDM/chatglm3-6b) or ModelScope (e.g. ZhipuAI/chatglm3-6b) repo id for the ChatGLM3 model to be downloaded, or the path to the checkpoint folder. It defaults to 'THUDM/chatglm3-6b' for Hugging Face or 'ZhipuAI/chatglm3-6b' for ModelScope.
- --question QUESTION: argument defining the question to ask. It defaults to "晚上睡不着应该怎么办" (i.e. "What should I do if I can't sleep at night?").
- --disable-stream: argument defining whether to stream chat. If --disable-stream is included when running the script, stream chat is disabled and the chat() API is used.
- --modelscope: use ModelScope instead of Hugging Face as the model hub.
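For example, to ask the default Hugging Face model a question with streaming disabled (the argument values here are only illustrative):

python ./streamchat.py --repo-id-or-model-path THUDM/chatglm3-6b --question "What is AI?" --disable-stream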