
Training fails with torch.jit.script

I pulled the latest code and now training fails with this error:

raise NotSupportedError(base.range(), "slicing multiple dimensions at the same time isn't supported yet")
torch.jit.frontend.NotSupportedError: slicing multiple dimensions at the same time isn't supported yet

@torch.jit.script
def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
    n_channels_int = n_channels[0]
    in_act = input_a + input_b
    t_act = torch.nn.functional.tanh(in_act[:, :n_channels_int, :])
                                     ~~~~~~ <--- HERE
    s_act = torch.nn.functional.sigmoid(in_act[:, n_channels_int:, :])
    acts = t_act * s_act
    return acts
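For reference, here is a sketch of the same gating function written without slicing two dimensions in a single subscript, which is the construct the TorchScript frontend rejects above. The narrow-based variant and the plain-int n_channels argument are a workaround of my own, not the repository's code, so the call site would have to pass an int instead of an IntTensor:

import torch

@torch.jit.script
def fused_add_tanh_sigmoid_multiply_narrow(input_a, input_b, n_channels: int):
    # Workaround sketch: narrow() slices one dimension at a time, so the
    # scripted function never uses multi-dimensional slice syntax.
    # Assumes in_act has exactly 2 * n_channels channels, as in WaveGlow's WN.
    in_act = input_a + input_b
    t_act = torch.tanh(in_act.narrow(1, 0, n_channels))
    s_act = torch.sigmoid(in_act.narrow(1, n_channels, n_channels))
    return t_act * s_act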

NVIDIA/waveglow

Answer from Cpruce

operation failed in interpreter:

@torch.jit.script
def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
    n_channels_int = n_channels[0]
    in_act = input_a + input_b
             ~~~~~~~~~~~~~~~ <--- HERE
    t_act = torch.tanh(in_act[:, :n_channels_int, :])
    s_act = torch.sigmoid(in_act[:, n_channels_int:, :])
    acts = t_act * s_act
    return acts

This was with PyTorch 1.1.


Epoch: 0
Traceback (most recent call last):
  File "train.py", line 182, in <module>
    train(num_gpus, args.rank, args.group_name, **train_config)
  File "train.py", line 121, in train
    outputs = model((mel, audio))
  File "/home/cory/anaconda3/envs/nvc/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/media/cory/ssd_dsk/waveglow/glow.py", line 240, in forward
    output = self.WN[k]((audio_0, spect))
  File "/home/cory/anaconda3/envs/nvc/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/media/cory/ssd_dsk/waveglow/glow.py", line 162, in forward
    torch.IntTensor([self.n_channels]))
  File "/home/cory/anaconda3/envs/nvc/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
RuntimeError: /pytorch/torch/csrc/jit/fuser/cuda/fused_kernel.cpp:137: a PTX JIT compilation failed

This was with PyTorch 1.0.1.
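To separate the JIT/fuser machinery from the math itself, one quick check is to run the gate as a plain eager function with no @torch.jit.script decorator and confirm it works on small tensors. This is only a debugging sketch; the tensor shapes below are made up:

import torch

def fused_add_tanh_sigmoid_multiply_eager(input_a, input_b, n_channels):
    # Same computation as the scripted function, but in plain eager mode,
    # so any failure here is unrelated to the TorchScript fuser.
    n_channels_int = n_channels[0]
    in_act = input_a + input_b
    t_act = torch.tanh(in_act[:, :n_channels_int, :])
    s_act = torch.sigmoid(in_act[:, n_channels_int:, :])
    return t_act * s_act

if torch.cuda.is_available():
    a = torch.randn(1, 512, 1000, device="cuda")  # invented shapes for the smoke test
    b = torch.randn(1, 512, 1000, device="cuda")
    out = fused_add_tanh_sigmoid_multiply_eager(a, b, torch.IntTensor([256]))
    print(out.shape)  # expected: torch.Size([1, 256, 1000])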

Update

It seems the root cause for me was a CUDA out-of-memory error that surfaced as the JIT error message for some reason. Training now works on PyTorch 1.1 with the jit decorator enabled.
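If anyone else hits this, one way to confirm it really is memory pressure rather than a JIT bug is to print the GPU's allocation right before the failing forward pass; lowering batch_size (or segment_length) in config.json should then make the error go away. The snippet below is a debugging sketch, not part of the repo:

import torch

device = torch.device("cuda:0")
total = torch.cuda.get_device_properties(device).total_memory
allocated = torch.cuda.memory_allocated(device)
# If 'allocated' is already close to 'total', the PTX/JIT error is likely a
# masked out-of-memory condition rather than a real compiler failure.
print("GPU memory: {:.2f} GB allocated of {:.2f} GB total".format(allocated / 1e9, total / 1e9))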
