Seed test for reproducibility #42400
Conversation
[For maintainers] Suggested jobs to run (before merge): run-slow: ernie4_5_moe, jamba, minimax, mixtral, qwen2_moe, qwen3_moe
cc @ydshieh
run-slow: ernie4_5_moe, jamba, minimax, mixtral, qwen2_moe, qwen3_moe
This comment contains models: ["models/ernie4_5_moe", "models/jamba", "models/minimax", "models/mixtral", "models/qwen2_moe", "models/qwen3_moe"]
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
CI Results: ✅ No failing test specific to this PR 🎉!
Could you share the job run pages, or the full error log if you are running the tests locally? This test doesn't seem to require a seed (given what it actually tests), so I would like to understand what goes wrong without one. (The daily CI doesn't show this test failing, except once for mixtral.)
@ydshieh the test has a model initialized from a config, i.e. with freshly sampled random weights, so the measured loss varies from run to run.
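For context, here is a minimal sketch of that failure mode; Mixtral and the tiny config values are illustrative choices, not taken from the actual test:

```python
# Illustrative sketch, not the actual test: a model built directly from a
# config gets freshly sampled random weights, so the router logits, and
# therefore the load-balancing loss, differ from run to run.
import torch
from transformers import MixtralConfig, MixtralForCausalLM

config = MixtralConfig(
    hidden_size=32,             # tiny dimensions just to keep the demo fast
    intermediate_size=64,
    num_hidden_layers=2,
    num_attention_heads=4,
    num_key_value_heads=2,
    num_local_experts=4,
    output_router_logits=True,  # ask the model to compute the aux loss
)
input_ids = torch.randint(0, config.vocab_size, (1, 16))

# Two fresh initializations give two different auxiliary losses.
loss_a = MixtralForCausalLM(config)(input_ids, labels=input_ids).aux_loss
loss_b = MixtralForCausalLM(config)(input_ids, labels=input_ids).aux_loss
print(loss_a.item(), loss_b.item())  # almost surely unequal without a seed
```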
From my experience dealing with failing tests, using a seed in this scenario is a quick hotfix but not the best way to handle it.
The `test_load_balancing_loss` test is intermittently flaky for me. This PR sets a manual seed to make it more reproducible, which hopefully solves the issue!
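A minimal sketch of the kind of change described; the exact test body is assumed, not quoted from the diff. `set_seed` from `transformers` seeds Python, NumPy, and torch in one call:

```python
# Assumed shape of the fix, not the actual PR diff: seeding the RNG before
# the random initialization makes the measured loss repeat across runs.
import torch
from transformers import MixtralConfig, MixtralForCausalLM, set_seed


def load_balancing_loss(seed: int) -> float:
    set_seed(seed)  # pins Python, NumPy, and torch RNG state
    config = MixtralConfig(
        hidden_size=32, intermediate_size=64, num_hidden_layers=2,
        num_attention_heads=4, num_key_value_heads=2,
        num_local_experts=4, output_router_logits=True,
    )
    input_ids = torch.randint(0, config.vocab_size, (1, 16))
    out = MixtralForCausalLM(config)(input_ids, labels=input_ids)
    return out.aux_loss.item()


# Same seed, same weights, same loss: a tolerance-based assertion on this
# value can no longer flake between runs.
assert load_balancing_loss(0) == load_balancing_loss(0)
```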