[17:28:39] [Model]: Loading declare-lab/flan-alpaca-large... #62
Comments
Hi, did you try running a unit test to see whether the pre-trained model can be loaded from Hugging Face at all? My guess is that there is not enough memory to load the model.

```python
from transformers import T5ForConditionalGeneration
# You may also try changing "declare-lab/flan-alpaca-large" to
# "declare-lab/flan-alpaca-base" to see if it goes well.
```
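A minimal version of that load test could look like the sketch below (the model names come from this thread; loading the large checkpoint by itself needs several GB of free RAM, so run this in a fresh session):

```python
# Sketch of the suggested unit test: load the checkpoint on its own,
# so an out-of-memory failure is not hidden inside the training script.
from transformers import T5ForConditionalGeneration

# Swap in "declare-lab/flan-alpaca-base" if the large variant exhausts memory.
model_name = "declare-lab/flan-alpaca-large"
model = T5ForConditionalGeneration.from_pretrained(model_name)

num_params = sum(p.numel() for p in model.parameters())
print(f"Loaded {model_name} with {num_params:,} parameters")
```

If this snippet alone crashes or the runtime is killed, the problem is memory during loading rather than the training code itself.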
This is what I get after a unit test that loads the model from Hugging Face.
I have changed the model from large to base, but I encountered the same issue:
I am using Google Colab with a T4 GPU and high RAM.
The hanging may also be expected: after loading the model, the main process could still be processing the data (there is no signal indicating that model loading has completed).
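One way to tell whether the process is hanging or actually running out of memory (a suggestion, not part of the original code) is to check available system RAM before and after the load; on Linux, which includes Colab runtimes, `/proc/meminfo` reports it:

```python
# Hedged helper: report available system RAM by reading /proc/meminfo (Linux).
# Calling it before and after model loading shows whether RAM, not the GPU,
# is what runs out.
def available_ram_gb():
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemAvailable:"):
                # The value in /proc/meminfo is given in kB; convert to GB.
                return int(line.split()[1]) / 1024 ** 2
    return None  # field missing on very old kernels

print(f"Available RAM: {available_ram_gb():.1f} GB")
```

If the available RAM drops to near zero while loading, the kernel's OOM killer terminating the process would also explain an empty experiment folder with no error message.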
Thanks for the great work.
I am trying to run the code from this paper on Google Colab with a 166 GB disk and a T4 GPU, but at the training stage, for both rationale generation and answer inference, I get the output:
and then the cell stops and the experiment folder is empty. Can anyone explain what the problem is? (I am still a new learner in the field.)