Hi,
Thanks for this good work!
I am using the Skip-LSTM in my experiments now, and it seems to work well.
However, I am wondering how this code can handle variable-length sequence inputs.
When using an RNN/LSTM in PyTorch, we can use torch.nn.utils.rnn.pack_padded_sequence and torch.nn.utils.rnn.pad_packed_sequence to keep the model from treating the padding vectors as input.
Is there an alternative way to do the same thing here? Thanks a lot!
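For reference, this is the standard packing pattern I mean with a plain nn.LSTM (the shapes and lengths below are just illustrative):

```python
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

batch_size, max_len, input_size, hidden_size = 3, 5, 4, 8
lengths = torch.tensor([5, 3, 2])  # true sequence lengths, sorted descending

# Build a padded batch: rows beyond each sequence's length stay zero
x = torch.zeros(batch_size, max_len, input_size)
for i, length in enumerate(lengths):
    x[i, :length] = torch.randn(length, input_size)

lstm = nn.LSTM(input_size, hidden_size, batch_first=True)

# Pack so the LSTM never processes the padded timesteps
packed = pack_padded_sequence(x, lengths, batch_first=True, enforce_sorted=True)
packed_out, (h_n, c_n) = lstm(packed)

# Unpack back to a padded tensor; padded positions are filled with zeros
out, out_lengths = pad_packed_sequence(packed_out, batch_first=True)

print(out.shape)        # (batch_size, max_len, hidden_size)
print(out_lengths)      # the original lengths are recovered
```

Since the Skip-LSTM cells here are stepped manually rather than through nn.LSTM, I assume packed sequences cannot be passed in directly, which is why I am asking whether masking by length is supported some other way.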