fix mllama inference without image (#2947)
* fix

* add hf token in ut

* debug

* fix
RunningLeon authored Dec 25, 2024
1 parent dfeee42 commit 35a5591
Showing 2 changed files with 6 additions and 1 deletion.
2 changes: 2 additions & 0 deletions .github/workflows/unit-test.yml
@@ -39,6 +39,7 @@ jobs:
       options: "--gpus=all --ipc=host --user root -e PIP_CACHE_DIR=/root/.cache/pip -e CUDA_VISIBLE_DEVICES=2,3 --pull never"
       volumes:
         - /nvme/share_data/github-actions/pip-cache:/root/.cache/pip
+        - /nvme/share_data/github-actions/hf_home:/root/.cache/huggingface
         - /nvme/share_data/github-actions/packages:/root/packages
         - /usr/share/zoneinfo/Asia/Shanghai:/etc/localtime:ro
     steps:
@@ -90,6 +91,7 @@ jobs:
           echo "TODO"
       - name: Test lmdeploy python UT
         run: |
+          huggingface-cli login --token ${{ secrets.HF_TOKEN }}
           coverage run --branch --source lmdeploy -m pytest -rsE tests
           coverage xml
           coverage report -m
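These two workflow changes let the unit tests fetch Hugging Face checkpoints: the extra volume persists the default Hugging Face cache directory (/root/.cache/huggingface) across CI runs, and the login step authenticates with the repository secret HF_TOKEN so gated model weights can be downloaded. A minimal sketch of the equivalent programmatic login, assuming the token is exported as an HF_TOKEN environment variable (this is an illustration, not part of the patch):

# Sketch: programmatic equivalent of `huggingface-cli login --token ...`.
# Assumes the token is provided via the HF_TOKEN environment variable.
import os

from huggingface_hub import login

token = os.environ.get("HF_TOKEN")
if token:
    # Registers the token so subsequent Hub downloads authenticate.
    login(token=token)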
5 changes: 4 additions & 1 deletion lmdeploy/pytorch/models/mllama.py
@@ -1289,8 +1289,11 @@ def prepare_inputs_for_generation(
         attn_metadata = context.attn_metadata
         cross_attn_metadata = context.cross_attn_metadata
 
-        if int(cross_attn_metadata.kv_seqlens.sum()) == 0:
+        # cross_attn_metadata is None when inputs without image
+        if cross_attn_metadata is not None and int(
+                cross_attn_metadata.kv_seqlens.sum()) == 0:
             cross_attn_metadata.kv_seqlens = None
 
         device = input_ids.device
 
         # process image input
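The previous code dereferenced cross_attn_metadata unconditionally, which fails for text-only requests where no cross-attention metadata is built at all. A minimal sketch of the guard pattern the patch applies (the helper name below is illustrative, not from the patch):

# Sketch of the guarded reset, assuming kv_seqlens is a tensor of
# per-sequence cross-attention KV lengths (illustrative helper, not the full method).
def reset_empty_cross_attn(cross_attn_metadata):
    # Text-only inputs: no cross-attention metadata exists, nothing to reset.
    if cross_attn_metadata is None:
        return cross_attn_metadata
    # All cross-attention KV lengths are zero: clear kv_seqlens, which the
    # patch uses to signal that there is no image context to attend to.
    if int(cross_attn_metadata.kv_seqlens.sum()) == 0:
        cross_attn_metadata.kv_seqlens = None
    return cross_attn_metadata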
