Slow Training with GPU #1

@jecummin

Description

I ran the usage example in the README after setting up the repo and environment, but training seems to be much slower than the README suggests.

> git clone https://github.com/iliao2345/CompressARC.git
> cd CompressARC
> mamba create -n compress
> mamba activate compress
> pip install -r requirements.txt
> python analyze_example.py
Enter which split you want to find the task in (training, evaluation, test): training
Enter which task you want to analyze (eg. 272f95fa): 272f95fa
...

The progress bar estimates that the task will take ~50 min to train, roughly 3-4x slower than the README suggests. I've tried this with both an NVIDIA T4 and an NVIDIA A100 GPU, and I confirmed that the GPU was in use during training. By comparison, if I restrict training to the CPU only, the estimated training time is ~1 hr, so the GPU doesn't seem to be accelerating things much.
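For reference, here is the kind of sanity check I ran (my own script, not part of the repo) to rule out a silent CPU fallback. It assumes CompressARC uses PyTorch with CUDA; the function name and matrix size are arbitrary choices for illustration.

```python
# Sketch: confirm PyTorch can see a CUDA device and compare a small
# matmul on CPU vs. GPU. Assumes a PyTorch + CUDA setup; if CUDA is
# not visible here, training will silently run on the CPU.
import time

try:
    import torch
except ImportError:
    torch = None

def matmul_time(device, n=1024, reps=5):
    """Average wall-clock time of `reps` n x n matmuls on `device`."""
    x = torch.randn(n, n, device=device)
    if device == "cuda":
        torch.cuda.synchronize()  # finish pending GPU work before timing
    start = time.perf_counter()
    for _ in range(reps):
        x @ x
    if device == "cuda":
        torch.cuda.synchronize()  # wait for the matmuls to actually finish
    return (time.perf_counter() - start) / reps

if torch is None:
    print("PyTorch is not installed in this environment")
elif not torch.cuda.is_available():
    print("CUDA is not visible to PyTorch -- training will run on CPU")
else:
    print(f"GPU: {torch.cuda.get_device_name(0)}")
    print(f"CPU matmul: {matmul_time('cpu'):.4f} s")
    print(f"GPU matmul: {matmul_time('cuda'):.4f} s")
```

In my case both checks passed (CUDA was visible and the GPU matmul was much faster than the CPU one), which is why I suspect the slowdown is elsewhere in the training loop.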

I made no edits to the code.
