diff --git a/README.md b/README.md
index a8fcaf105f..26763b3c53 100644
--- a/README.md
+++ b/README.md
@@ -149,7 +149,7 @@ For detailed inference benchmarks in more devices and more settings, please refe
 • InternLM-XComposer2 (7B, 4khd-7B)
 • InternLM-XComposer2.5 (7B)
 • Qwen-VL (7B)
-• Qwen2-VL (2B, 7B)
+• Qwen2-VL (2B, 7B, 72B)
 • DeepSeek-VL (7B)
 • InternVL-Chat (v1.1-v1.5)
 • InternVL2 (1B-76B)
diff --git a/README_zh-CN.md b/README_zh-CN.md
index 871eba01b5..4b9f85c735 100644
--- a/README_zh-CN.md
+++ b/README_zh-CN.md
@@ -150,7 +150,7 @@ LMDeploy TurboMind 引擎拥有卓越的推理能力,在各种规模的模型
 • InternLM-XComposer2 (7B, 4khd-7B)
 • InternLM-XComposer2.5 (7B)
 • Qwen-VL (7B)
-• Qwen2-VL (2B, 7B)
+• Qwen2-VL (2B, 7B, 72B)
 • DeepSeek-VL (7B)
 • InternVL-Chat (v1.1-v1.5)
 • InternVL2 (1B-76B)
diff --git a/docs/en/get_started/installation.md b/docs/en/get_started/installation.md
index 7116ab2832..ab7ee0b30e 100644
--- a/docs/en/get_started/installation.md
+++ b/docs/en/get_started/installation.md
@@ -23,7 +23,7 @@ pip install lmdeploy
 The default prebuilt package is compiled on **CUDA 12**. If CUDA 11+ (>=11.3) is required, you can install lmdeploy by:
 
 ```shell
-export LMDEPLOY_VERSION=0.6.0
+export LMDEPLOY_VERSION=0.6.1
 export PYTHON_VERSION=38
 pip install https://github.com/InternLM/lmdeploy/releases/download/v${LMDEPLOY_VERSION}/lmdeploy-${LMDEPLOY_VERSION}+cu118-cp${PYTHON_VERSION}-cp${PYTHON_VERSION}-manylinux2014_x86_64.whl --extra-index-url https://download.pytorch.org/whl/cu118
 ```
diff --git a/docs/en/supported_models/supported_models.md b/docs/en/supported_models/supported_models.md
index 52367e4471..25010f63bd 100644
--- a/docs/en/supported_models/supported_models.md
+++ b/docs/en/supported_models/supported_models.md
@@ -20,6 +20,7 @@ The following tables detail the models supported by LMDeploy's TurboMind engine
 | Qwen2 | 1.5B - 72B | LLM | Yes | Yes | Yes | Yes |
 | Mistral | 7B | LLM | Yes | Yes | Yes | - |
 | Qwen-VL | 7B | MLLM | Yes | Yes | Yes | Yes |
+| Qwen2-VL | 2B, 7B, 72B | MLLM | Yes | Yes | Yes | - |
 | DeepSeek-VL | 7B | MLLM | Yes | Yes | Yes | Yes |
 | Baichuan | 7B | LLM | Yes | Yes | Yes | Yes |
 | Baichuan2 | 7B | LLM | Yes | Yes | Yes | Yes |
diff --git a/docs/zh_cn/get_started/installation.md b/docs/zh_cn/get_started/installation.md
index 30d08cd9ef..f7eedecfaa 100644
--- a/docs/zh_cn/get_started/installation.md
+++ b/docs/zh_cn/get_started/installation.md
@@ -23,7 +23,7 @@ pip install lmdeploy
 默认的预构建包是在 **CUDA 12** 上编译的。如果需要 CUDA 11+ (>=11.3),你可以使用以下命令安装 lmdeploy:
 
 ```shell
-export LMDEPLOY_VERSION=0.6.0
+export LMDEPLOY_VERSION=0.6.1
 export PYTHON_VERSION=38
 pip install https://github.com/InternLM/lmdeploy/releases/download/v${LMDEPLOY_VERSION}/lmdeploy-${LMDEPLOY_VERSION}+cu118-cp${PYTHON_VERSION}-cp${PYTHON_VERSION}-manylinux2014_x86_64.whl --extra-index-url https://download.pytorch.org/whl/cu118
 ```
diff --git a/docs/zh_cn/supported_models/supported_models.md b/docs/zh_cn/supported_models/supported_models.md
index 779fc6cd51..92fa39669e 100644
--- a/docs/zh_cn/supported_models/supported_models.md
+++ b/docs/zh_cn/supported_models/supported_models.md
@@ -20,6 +20,7 @@
 | Qwen2 | 1.5B - 72B | LLM | Yes | Yes | Yes | Yes |
 | Mistral | 7B | LLM | Yes | Yes | Yes | - |
 | Qwen-VL | 7B | MLLM | Yes | Yes | Yes | Yes |
+| Qwen2-VL | 2B, 7B, 72B | MLLM | Yes | Yes | Yes | - |
 | DeepSeek-VL | 7B | MLLM | Yes | Yes | Yes | Yes |
 | Baichuan | 7B | LLM | Yes | Yes | Yes | Yes |
 | Baichuan2 | 7B | LLM | Yes | Yes | Yes | Yes |
diff --git a/lmdeploy/version.py b/lmdeploy/version.py
index 199e3ce8e0..5237d5f859 100644
--- a/lmdeploy/version.py
+++ b/lmdeploy/version.py
@@ -1,7 +1,7 @@
 # Copyright (c) OpenMMLab. All rights reserved.
 from typing import Tuple
 
-__version__ = '0.6.0'
+__version__ = '0.6.1'
 short_version = __version__
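The `lmdeploy/version.py` hunk only bumps the version string from `0.6.0` to `0.6.1`. For context, here is a minimal sketch of how such a string can be turned into a tuple for ordered comparison; the helper name `parse_version_info` and its exact behavior are illustrative assumptions, not part of this diff:

```python
from typing import Tuple, Union


def parse_version_info(version_str: str) -> Tuple[Union[int, str], ...]:
    """Split a version string like '0.6.1' into a comparable tuple.

    Numeric components become ints so that versions compare numerically
    ('0.10.0' > '0.6.1'); non-numeric components (e.g. 'rc1') stay strings.
    """
    return tuple(
        int(part) if part.isdigit() else part
        for part in version_str.split('.'))


# Tuple comparison orders releases correctly: (0, 6, 1) > (0, 6, 0)
assert parse_version_info('0.6.1') > parse_version_info('0.6.0')
```

Keeping the version in a single module-level string, as `version.py` does, lets release PRs like this one touch exactly one line of code.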