feature: log number of seen sequences and frames during training for throughput #230
avocadoali wants to merge 1 commit into main
Conversation
Pull Request Overview
This PR adds throughput tracking metrics to the training logging system by calculating and logging the cumulative number of sequences and frames processed during training.
- Adds sequences_seen and frames_seen metrics to training logs (see the sketch below)
- Implements consistent tracking across the tokenizer, LAM, and dynamics training scripts
- Removes an unnecessary blank line in train_dynamics.py
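For context, a minimal sketch of the metrics being added. The `Args` dataclass is a hypothetical stand-in for the scripts' argument namespace, and the default values are illustrative rather than taken from the repo:

```python
from dataclasses import dataclass

@dataclass
class Args:
    # Hypothetical stand-in for the argparse namespace used in the training scripts.
    batch_size: int = 32
    seq_len: int = 16

def throughput_metrics(step: int, args: Args) -> dict:
    """Cumulative throughput counters as computed in this PR (per-process)."""
    return {
        "sequences_seen": step * args.batch_size,
        "frames_seen": step * args.seq_len * args.batch_size,
    }

# After 100 optimizer steps with the defaults above:
print(throughput_metrics(100, Args()))
# {'sequences_seen': 3200, 'frames_seen': 51200}
```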
Reviewed Changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| jasmine/train_tokenizer.py | Adds sequences_seen and frames_seen calculations to log_dict |
| jasmine/train_lam.py | Adds sequences_seen and frames_seen calculations to log_dict |
| jasmine/train_dynamics.py | Adds sequences_seen and frames_seen calculations to log_dict and removes extra blank line |
sequences_seen = step * args.batch_size
frames_seen = step * args.seq_len * args.batch_size
These calculations only count the sequences and frames processed by the local process. Under data parallelism across multiple hosts, each process draws its own batch and step increments identically on every process, so step * args.batch_size reflects per-process rather than global throughput. Consider multiplying by jax.process_count() to get accurate global counts, or keep the current expressions if the intent is to track per-process metrics.
Suggested change:
  - sequences_seen = step * args.batch_size
  - frames_seen = step * args.seq_len * args.batch_size
  + sequences_seen = step * args.batch_size * jax.process_count()
  + frames_seen = step * args.seq_len * args.batch_size * jax.process_count()
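To make the suggestion concrete, a minimal sketch of the global-count variant. It assumes batch_size is the per-process batch size and relies only on `jax.process_count()`, which returns the number of participating host processes:

```python
import jax

def global_throughput_metrics(step: int, batch_size: int, seq_len: int) -> dict:
    # batch_size is assumed to be per-process; every process runs the same
    # number of optimizer steps, so scaling by process_count() gives global totals.
    n_proc = jax.process_count()  # 1 on a single host
    return {
        "sequences_seen": step * batch_size * n_proc,
        "frames_seen": step * seq_len * batch_size * n_proc,
    }
```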
sequences_seen = step * args.batch_size
frames_seen = step * args.seq_len * args.batch_size
These calculations only count the sequences and frames processed by the local process. Under data parallelism across multiple hosts, each process draws its own batch and step increments identically on every process, so step * args.batch_size reflects per-process rather than global throughput. Consider multiplying by jax.process_count() to get accurate global counts, or keep the current expressions if the intent is to track per-process metrics.
Suggested change:
  - sequences_seen = step * args.batch_size
  - frames_seen = step * args.seq_len * args.batch_size
  + sequences_seen = step * args.batch_size * jax.process_count()
  + frames_seen = step * args.seq_len * args.batch_size * jax.process_count()
sequences_seen = step * args.batch_size
frames_seen = step * args.seq_len * args.batch_size
These calculations only count the sequences and frames processed by the local process. Under data parallelism across multiple hosts, each process draws its own batch and step increments identically on every process, so step * args.batch_size reflects per-process rather than global throughput. Consider multiplying by jax.process_count() to get accurate global counts, or keep the current expressions if the intent is to track per-process metrics.
Suggested change:
  - sequences_seen = step * args.batch_size
  - frames_seen = step * args.seq_len * args.batch_size
  + sequences_seen = step * args.batch_size * jax.process_count()
  + frames_seen = step * args.seq_len * args.batch_size * jax.process_count()
@avocadoali small ping
No description provided.