Training the VisionTapas model #9

@ashim95

Description

Hi,

Since the checkpoint for VisionTapas is not available, I am trying to train the model from scratch. The code directory does not include any instructions for running the code. It would be great if you could add a small README with the steps to train this model. If you could also list the required library versions, that would be appreciated.

Currently, I am using torch==1.13.0 with CUDA==11.6.2, and transformers==4.24.0. When I run the training script, it complains that the fixed_vocab_file is missing. Could you please explain how to obtain that file?
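In case it helps the discussion, here is a minimal sketch of what I would try in the meantime, assuming fixed_vocab_file is simply a newline-separated token list derived from the QA annotations. This is purely a guess on my part: the repo does not document the file, and the field names ("query", "label") are taken from the ChartQA augmented json format, so everything here is an assumption.

```python
# Hypothetical sketch only: assumes fixed_vocab_file is a plain
# one-token-per-line wordlist built from the QA annotations. The repo does
# not document this; field names "query" and "label" are guesses based on
# the ChartQA augmented json format.
import re
from collections import Counter

def build_fixed_vocab(qa_records, min_count=1):
    """Collect lowercased word tokens from the question/answer fields,
    most frequent first, keeping tokens seen at least min_count times."""
    counts = Counter()
    for rec in qa_records:
        for field in ("query", "label"):
            counts.update(re.findall(r"\w+", str(rec.get(field, "")).lower()))
    return [tok for tok, n in counts.most_common() if n >= min_count]

# Example with a record shaped like a ChartQA augmented-json entry:
records = [{"query": "What is the total revenue?", "label": "42"}]
with open("fixed_vocab.txt", "w") as f:
    f.write("\n".join(build_fixed_vocab(records)))
```

If the file is instead just the underlying wordpiece vocab of the base tokenizer, a pointer to which checkpoint to export it from would be equally helpful.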

I understand you might be short on time, but I would appreciate it if you could at least share one of the training commands you used. Currently, my command looks like this:

python train_question_answering.py --train-folder ../data/chartqa/train/ --validation-folder ../data/chartqa/val/ --qa-train-file ../data/chartqa/train/train_augmented.json --qa-val-file ../data/chartqa/val/val_augmented.json --out-dir vision-tapas-model

Hoping for a quick reply.

Thanks,
-- ashim
