
Discrepancy in training results on WebQSP with BERT #10

@AiRyunn

Description


Hi! I was trying to train Pangu with BERT on WebQSP, but I can't match the results from the paper or from the repo's trained model.
I have experimented with different numbers of epochs, but the results are inconsistent. The closest to the reported F1 is the 6-epoch run, yet a gap remains, and F1 drops noticeably at 7 epochs.

| Training Method | F1 |
| --- | --- |
| Retrain (1 epoch) | 68.8 |
| Retrain (2 epochs) | 72.2 |
| Retrain (3 epochs) | 71.4 |
| Retrain (4 epochs) | 71.7 |
| Retrain (5 epochs) | 69.9 |
| Retrain (6 epochs) | 75.8 |
| Retrain (7 epochs) | 72.9 |
| Retrain (10 epochs) | 73.0 |
| Trained Model from Repo | 77.3 |
| Reported in Paper | 77.9 |

Can you help me understand why I'm seeing these differences? Thank you in advance!
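For what it's worth, run-to-run variance like this is often dominated by unseeded RNGs rather than the epoch count alone. Below is a minimal, hedged sketch (not taken from the Pangu codebase; `set_seed` and its call sites are hypothetical) of seeding every random source before a run, which is a common first step when isolating this kind of discrepancy:

```python
# Hypothetical reproducibility check: fix all RNG seeds before each
# training run so that differences between runs reflect the config,
# not random initialization or data shuffling.
import os
import random

import numpy as np


def set_seed(seed: int) -> None:
    """Seed every RNG a typical training loop might touch."""
    random.seed(seed)
    np.random.seed(seed)
    os.environ["PYTHONHASHSEED"] = str(seed)
    try:
        import torch
        torch.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)
        # Deterministic cuDNN kernels trade speed for repeatability.
        torch.backends.cudnn.deterministic = True
        torch.backends.cudnn.benchmark = False
    except ImportError:
        pass  # torch not installed; seed what we can


# Two runs with the same seed should produce identical draws.
set_seed(42)
first = [random.random() for _ in range(3)]
set_seed(42)
second = [random.random() for _ in range(3)]
assert first == second
```

If repeated runs with the same seed and epoch count still diverge, the remaining variance likely comes from nondeterministic GPU kernels or data-loading order.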
