Running the provided example raises an error #33
Comments
It runs on the provided Colab, but not in my local environment.

It may be a version mismatch; try a lower transformers version, or check the environment configuration used by the Colab.

It works after upgrading the torch version. But the project homepage says torch==1.7, transformers==4.26.1.

We will update that shortly.

Hello, which version did you upgrade to before it started working?

I did not pin a version; it upgraded to the latest by default.
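A minimal sketch of the fix reported above, assuming pip manages the environment. The reporter did not state the exact torch version that worked, only that upgrading past the README's torch==1.7 pin resolved the error, so treat these commands as illustrative rather than authoritative:

```shell
# Upgrade torch beyond the README's torch==1.7 pin (the reporter upgraded
# to the latest available version without pinning).
pip install --upgrade torch

# Keep transformers at the version the README specifies.
pip install "transformers==4.26.1"
```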
```python
from transformers import AutoTokenizer, AutoModel
import os

model_dir = 'ClueAI/ChatYuan-large-v2'
tokenizer = AutoTokenizer.from_pretrained(model_dir)
# Download speed depends on the network; if the network is poor, use the
# advanced parameter configuration described below.
model = AutoModel.from_pretrained(model_dir, trust_remote_code=True)
history = []
print("starting")
while True:
    query = input("\n用户:")
    if query == "stop":
        break
    if query == "clear":
        history = []
        os.system('clear')
        continue
    response, history = model.chat(tokenizer, query, history=history)
    print(f"小元:{response}")
```
Error message:
```
用户:你好
Traceback (most recent call last):
  File "/root/work2/work2/chenzhihao/llm_chatbot/examples/chatyuan_interact.py", line 79, in <module>
    main()
  File "/root/work2/work2/chenzhihao/llm_chatbot/examples/chatyuan_interact.py", line 71, in main
    response = answer(query, context)
  File "/root/work2/work2/chenzhihao/llm_chatbot/examples/chatyuan_interact.py", line 52, in answer
    out = model.generate(**encoding, return_dict_in_generate=True, output_scores=False, max_new_tokens=1024,
  File "/root/anaconda3/envs/llm_chatbot-py39/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/root/anaconda3/envs/llm_chatbot-py39/lib/python3.9/site-packages/transformers/generation/utils.py", line 1437, in generate
    return self.sample(
  File "/root/anaconda3/envs/llm_chatbot-py39/lib/python3.9/site-packages/transformers/generation/utils.py", line 2479, in sample
    next_tokens = torch.multinomial(probs, num_samples=1).squeeze(1)
RuntimeError: probability tensor contains either `inf`, `nan` or element < 0
```
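The RuntimeError is raised by torch.multinomial when a row of next-token probabilities is not a valid distribution, which can happen when mismatched torch/transformers versions produce inf or nan logits during sampling. A pure-Python sketch of the invariant being enforced (the helper name `valid_probs` is hypothetical, for illustration only):

```python
import math

def valid_probs(probs):
    """Check the invariant torch.multinomial enforces on each row:
    every entry must be finite and non-negative, and the row must
    carry positive total mass."""
    return all(math.isfinite(p) and p >= 0 for p in probs) and sum(probs) > 0

print(valid_probs([0.2, 0.8]))           # True: a valid distribution
print(valid_probs([0.5, float("nan")]))  # False: nan entry
print(valid_probs([-0.1, 1.1]))          # False: negative entry
```

If a row fails this check, sampling cannot proceed; in this issue the fix was aligning library versions rather than patching the probabilities.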