
I installed the fast-transformers module with "python setup.py install", then went back to run train.py, and it fails with the error below #29

Open
aefew45yhgwshbwe5r67iuj opened this issue Dec 17, 2024 · 1 comment


@aefew45yhgwshbwe5r67iuj

(pytorch2) E:\大创\video-bgm-generation-main>D:/Anaconda/envs/pytorch2/python.exe e:/大创/video-bgm-generation-main/src/train.py
name: debug
args Namespace(name='debug', lr=0.0001, batch_size=6, path=None, epochs=200, train_data='E:/大创/video-bgm-generation-main/dataset/lpd_5_prcem_mix_v8_10000.npz', gpus=None)
DEBUG MODE checkpoints will not be saved
num of encoder classes: [ 18 3 18 129 18 6 20 102 4865] [7, 1, 6]
D_MODEL 512 N_LAYER 12 N_HEAD 8 DECODER ATTN causal-linear

: [ 18 3 18 129 18 6 20 102 4865]
DEVICE COUNT: 1
VISIBLE: 0
n_parameters: 39,006,324
train_data: dataset
batch_size: 6
num_batch: 506
train_x: (3039, 9999, 9)
train_y: (3039, 9999, 9)
train_mask: (3039, 9999)
lr_init: 0.0001
DECAY_EPOCH: []
DECAY_RATIO: 0.1
Traceback (most recent call last):
File "e:\大创\video-bgm-generation-main\src\train.py", line 226, in <module>
train_dp()
File "e:\大创\video-bgm-generation-main\src\train.py", line 169, in train_dp
losses = net(is_train=True, x=batch_x, target=batch_y, loss_mask=batch_mask, init_token=batch_init)
File "D:\Anaconda\envs\pytorch2\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\Anaconda\envs\pytorch2\lib\site-packages\torch\nn\parallel\data_parallel.py", line 169, in forward
return self.module(*inputs[0], **kwargs[0])
File "D:\Anaconda\envs\pytorch2\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "e:\大创\video-bgm-generation-main\src\model.py", line 482, in forward
return self.train_forward(**kwargs)
File "e:\大创\video-bgm-generation-main\src\model.py", line 450, in train_forward
h, y_type = self.forward_hidden(x, memory=None, is_training=True, init_token=init_token)
File "e:\大创\video-bgm-generation-main\src\model.py", line 221, in forward_hidden
encoder_hidden = self.transformer_encoder(encoder_pos_emb, attn_mask)
File "D:\Anaconda\envs\pytorch2\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\Anaconda\envs\pytorch2\lib\site-packages\pytorch_fast_transformers-0.4.0-py3.9-win-amd64.egg\fast_transformers\transformers.py", line 138, in forward
x = layer(x, attn_mask=attn_mask, length_mask=length_mask)
File "D:\Anaconda\envs\pytorch2\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\Anaconda\envs\pytorch2\lib\site-packages\pytorch_fast_transformers-0.4.0-py3.9-win-amd64.egg\fast_transformers\transformers.py", line 77, in forward
x = x + self.dropout(self.attention(
File "D:\Anaconda\envs\pytorch2\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\Anaconda\envs\pytorch2\lib\site-packages\pytorch_fast_transformers-0.4.0-py3.9-win-amd64.egg\fast_transformers\attention\attention_layer.py", line 103, in forward
new_values = self.inner_attention(
File "D:\Anaconda\envs\pytorch2\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\Anaconda\envs\pytorch2\lib\site-packages\pytorch_fast_transformers-0.4.0-py3.9-win-amd64.egg\fast_transformers\attention\causal_linear_attention.py", line 98, in forward
V = causal_linear(
File "D:\Anaconda\envs\pytorch2\lib\site-packages\pytorch_fast_transformers-0.4.0-py3.9-win-amd64.egg\fast_transformers\attention\causal_linear_attention.py", line 23, in causal_linear
V_new = causal_dot_product(Q, K, V)
File "D:\Anaconda\envs\pytorch2\lib\site-packages\torch\autograd\function.py", line 506, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
File "D:\Anaconda\envs\pytorch2\lib\site-packages\pytorch_fast_transformers-0.4.0-py3.9-win-amd64.egg\fast_transformers\causal_product\__init__.py", line 44, in forward
CausalDotProduct.dot[device.type](
TypeError: 'NoneType' object is not callable
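For context on the error: the traceback ends at `CausalDotProduct.dot[device.type](...)`, and `'NoneType' object is not callable` suggests the compiled C++/CUDA kernel for that device type never built or imported, leaving that dict entry `None`. As an illustration only (not the library's kernel), here is a minimal pure-NumPy sketch of what `causal_dot_product(Q, K, V)` computes for a single head, under the assumption of no feature map and no normalization:

```python
import numpy as np

def causal_dot_product(Q, K, V):
    """Reference sketch of the causal linear-attention product:
    out[i] = Q[i] @ sum_{j <= i} outer(K[j], V[j]).
    Shapes (one head): Q, K -> (L, E); V -> (L, D); returns (L, D)."""
    L, E = Q.shape
    D = V.shape[1]
    S = np.zeros((E, D))           # running prefix sum of K[j]^T V[j]
    out = np.empty((L, D))
    for i in range(L):
        S += np.outer(K[i], V[i])  # fold in position i
        out[i] = Q[i] @ S          # each query attends only to positions <= i
    return out
```

Because the prefix sum `S` is carried forward instead of re-materializing an L×L attention matrix, this runs in O(L·E·D) — the same linear-in-sequence-length trick the compiled kernel implements on GPU.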

@wzk1015 (Owner) commented Dec 17, 2024

Please refer to #3.
