top-k predictions not generated #477
Description
I trained the model with:
python -m nmt.nmt \
--attention=scaled_luong \
--src=vi --tgt=en \
--vocab_prefix=/tmp/data/token \
--train_prefix=/tmp/data/train \
--dev_prefix=/tmp/data/valid \
--test_prefix=/tmp/data/test \
--out_dir=/tmp/model-top-k \
--num_train_steps=10000 \
--steps_per_stats=100 \
--num_layers=2 \
--num_units=1024 \
--dropout=0.2 \
--metrics=bleu \
--optimizer=sgd \
--learning_rate=1.0 \
--start_decay_step=5000 \
--decay_steps=10 \
--encoder_type=bi \
--beam_width=10
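After training I also checked the hyperparameters that were saved into out_dir to confirm beam_width was stored. Note this is a quick check I put together; the exact filename and keys ("hparams", "infer_mode", "beam_width") are my assumption about how the repo saves them and may differ between versions:
python -c "import json; h = json.load(open('/tmp/model-top-k/hparams')); print(h.get('infer_mode'), h.get('beam_width'))"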
Following that, I ran inference with:
python -m nmt.nmt \
--out_dir=./model \
--inference_input_file=./data/test.buggy.beam.search \
--inference_output_file=./data/testing-beam-search/model.output \
--beam_width=10 \
--num_translations_per_input=1
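For comparison, this is the invocation I would have expected to produce the top-10 hypotheses per input. The --infer_mode=beam_search flag and --num_translations_per_input=10 value are my guesses based on the flags listed in nmt/nmt.py, so this may not be the intended usage:
python -m nmt.nmt \
--out_dir=./model \
--inference_input_file=./data/test.buggy.beam.search \
--inference_output_file=./data/testing-beam-search/model.output \
--infer_mode=beam_search \
--beam_width=10 \
--num_translations_per_input=10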
I see in the log:
decoder: infer_mode=greedybeam_width=10, length_penalty=0.000000, coverage_penalty=0.000000
.....
.....
.....
dynamic_seq2seq/decoder/attention/multi_rnn_cell/cell_3/basic_lstm_cell/bias:0, (512,), /device:GPU:0
dynamic_seq2seq/decoder/attention/luong_attention/attention_g:0, (), /device:GPU:0
dynamic_seq2seq/decoder/attention/attention_layer/kernel:0, (256, 128), /device:GPU:0
dynamic_seq2seq/decoder/output_projection/kernel:0, (128, 10003),
loaded infer model parameters from ./code_model/translate.ckpt-20000, time 0.29s
# Start decoding
decoding to output ./data/testing-beam-search/model.output
done, num sentences 1, num translations per input 1, time 0s, Thu Jun 11 21:20:29 2020.
However, only one prediction is generated per input. Any ideas on how to get the top-k beam-search hypotheses?