
OpenCompass SciEval ToolKit

A unified evaluation toolkit and leaderboard for rigorously assessing the scientific intelligence of large language and vision–language models across the full research workflow.


Website · Leaderboard · Report · GitHub

Welcome to the official repository of SciEval!

Why SciEval?

SciEval is an open‑source evaluation framework and leaderboard aimed at measuring the scientific intelligence of large language and vision–language models.
Although modern frontier models often score around 90 on general‑purpose benchmarks, their performance drops sharply on rigorous, domain‑specific scientific tasks—revealing a persistent general‑versus‑scientific gap that motivates SciEval. Its design is shaped by the following core ideas:

  • Beyond general‑purpose benchmarks ▸ Traditional evaluations focus on surface‑level correctness or broad‑domain reasoning, hiding models’ weaknesses in realistic scientific problem solving. SciEval makes this general‑versus‑scientific gap explicit and supplies the evaluation infrastructure needed to guide the integration of broad instruction‑tuned abilities with specialised skills in coding, symbolic reasoning and diagram understanding.
  • End‑to‑end workflow coverage ▸ SciEval spans the full research pipeline, from image interpretation and symbolic reasoning to executable code generation and hypothesis generation, rather than isolated subtasks.
  • Capability‑oriented & reproducible ▸ A unified toolkit for dataset construction, prompt engineering, inference, and expert‑aligned scoring ensures transparent and repeatable comparisons.
  • Grounded in real scenarios ▸ Benchmarks use domain‑specific data and tasks so performance reflects actual scientific practice, not synthetic proxies.
Figure: SciEval capability radar

Progress in Scientific Intelligence

Realtime updates — scores are synchronized with the Intern‑Discovery‑Eval leaderboard.

Figure: SciEval capability radar
  • General benchmarks overestimate scientific competence. Even the strongest frontier models (e.g., Gemini 3 Pro) score below 60 on Scientific Text Capability, despite scoring near 90 on widely used general‑purpose benchmarks.
  • Multimodal capability is breaking the 60‑point barrier. Gemini 3 Pro leads Scientific Multimodal Capability with 62.88, reflecting strong performance in multimodal perception and reasoning.
  • Open‑source systems are rapidly closing the gap. Qwen3‑VL‑235B‑A22B and Qwen3‑Max now match or surpass several proprietary models in symbolic reasoning and code generation, signalling healthy community progress.
  • Symbolic reasoning and code generation remain bottlenecks. No model exceeds 50 in equation‑level manipulation or 30 in end‑to‑end executable code tasks, indicating that scientific workflows requiring programmatic pipelines still fail frequently.

Key Features

  • Seven Core Dimensions ▸ Scientific Knowledge Understanding, Scientific Code Generation, Scientific Symbolic Reasoning, Scientific Hypothesis Generation, Scientific Multimodal Perception, Scientific Multimodal Reasoning, and Scientific Multimodal Understanding
  • Discipline Coverage ▸ Life Science, Astronomy, Earth Science, Chemistry, Materials Science, and Physics
  • Multimodal & Executable Scoring ▸ Supports text, code, and image inputs; integrates executable code tasks and an LLM‑judge fallback for open‑ended answers
  • Reproducible & Extensible ▸ Clear dataset and model registries, minimal hard‑coding, and modular evaluators make new tasks or checkpoints easy to plug in (see the sketch below)
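
As a rough illustration of that plug‑in workflow, the sketch below shows how a new benchmark could be wrapped and registered. All names here (MyScienceBench, DATASET_REGISTRY, the method signatures) are assumptions made for illustration, not the actual SciEvalKit API; consult the repository's dataset registry for the real base classes and hooks.

# Illustrative only: not the actual SciEvalKit API.
import csv

class MyScienceBench:
    """Hypothetical dataset wrapper: loads TSV records, builds prompts, scores answers."""

    DATASET_NAME = "MyScienceBench"

    def load_data(self, path):
        # Each record is expected to carry at least 'question' and 'answer' fields.
        with open(path, newline="", encoding="utf-8") as f:
            return list(csv.DictReader(f, delimiter="\t"))

    def build_prompt(self, record):
        # Text-only prompt; image or code fields would be attached here for multimodal tasks.
        return f"Question: {record['question']}\nAnswer:"

    def evaluate(self, record, prediction):
        # Rule-based exact match; an LLM judge could serve as a fallback for open-ended answers.
        return float(prediction.strip().lower() == record["answer"].strip().lower())

# Hypothetical registration step; the real registry lives inside SciEvalKit.
DATASET_REGISTRY = {MyScienceBench.DATASET_NAME: MyScienceBench}
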
Figure: SciEval framework overview

An overview of the SciEval framework, illustrating how heterogeneous scientific datasets, unified prompt construction, model inference, and capability-oriented evaluators are integrated into a single reproducible evaluation pipeline.
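
For intuition, here is a toy end‑to‑end pass over two made‑up records, mirroring the loop in the figure above. EchoModel and the inline dataset are stand‑ins, not parts of SciEvalKit; the real pipeline in run.py additionally handles batching, multimodal inputs, caching, and judge backends.

# Toy pipeline: dataset records -> prompts -> inference -> scoring.
class EchoModel:
    """Stub 'model' that always answers '42'; replace with a real inference backend."""
    def generate(self, prompt: str) -> str:
        return "42"

def run_pipeline(model, records):
    scores = []
    for record in records:                                    # heterogeneous dataset records
        prompt = f"Question: {record['question']}\nAnswer:"   # unified prompt construction
        prediction = model.generate(prompt)                   # model inference
        scores.append(float(prediction.strip() == record["answer"]))  # scoring
    return sum(scores) / max(len(scores), 1)

demo = [{"question": "6 * 7 = ?", "answer": "42"},
        {"question": "Boiling point of water at 1 atm (in °C)?", "answer": "100"}]
print(run_pipeline(EchoModel(), demo))   # -> 0.5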

News

  • [2025‑12‑12] · 📰 Evaluation Published on OpenCompass

    • SciEval’s benchmark results are now live on the OpenCompass platform, providing broader community visibility and comparison.
  • [2025‑12‑05] · 🚀 SciEval v1 Launch

    • Initial public release of a science‑focused evaluation toolkit and leaderboard devoted to realistic research workflows.
    • Coverage: seven scientific capability dimensions × six major disciplines in the initial benchmark suite.
  • [2025‑12‑05] · 🌟 Community Submissions Open

    • Submit your benchmarks via pull request to appear on the official leaderboard.

Quick Start

Get from clone to first scores in minutes—see our local QuickStart / 快速开始 guides, or consult the VLMEvalKit tutorial for additional reference.

1 · Install

git clone https://github.com/InternScience/SciEvalKit.git
cd SciEvalKit
pip install -e ".[all]"    # brings in vllm, the OpenAI SDK, huggingface_hub, etc.

2 · (Optional) add API keys

Create a .env at the repo root only if you will call API models or use an LLM‑as‑judge backend:

OPENAI_API_KEY=...
GOOGLE_API_KEY=...
DASHSCOPE_API_KEY=...

If no keys are provided, SciEval falls back to rule‑based scoring whenever possible.
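
As a sketch of what that fallback might look like (illustrative only; score_answer and llm_judge_score are hypothetical names, not functions exported by SciEvalKit):

# Illustrative fallback logic, not the SciEvalKit implementation.
import os

def score_answer(prediction: str, reference: str) -> float:
    """Use an LLM judge when a key is configured; otherwise apply a rule-based match."""
    if os.getenv("OPENAI_API_KEY"):
        return llm_judge_score(prediction, reference)   # hypothetical judge call
    # Rule-based fallback: normalised exact match.
    return float(prediction.strip().lower() == reference.strip().lower())

def llm_judge_score(prediction: str, reference: str) -> float:
    # Placeholder: wire this up to an LLM-as-judge backend (e.g., via the OpenAI SDK).
    raise NotImplementedError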

3 · Run an API demo test

python run.py \
  --dataset SFE \
  --model gpt-4o \
  --mode all \
  --work-dir outputs/demo_api \
  --verbose

4 · Evaluate a local/GPU model

python run.py \
  --dataset MaScQA \
  --model qwen_chat \
  --mode infer \
  --work-dir outputs/demo_qwen \
  --verbose

# ➜ Re‑run with --mode all after adding an API key
#     if the benchmark requires an LLM judge.
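
To sweep several benchmarks with one model, the same CLI can be driven from a short script. The dataset names below are examples taken from this README; adjust them to whatever your local registry provides.

# Drive run.py over several benchmarks with the same model (illustrative).
import subprocess, sys

for ds in ["SFE", "MaScQA", "SciCode"]:
    subprocess.run(
        [sys.executable, "run.py",
         "--dataset", ds,
         "--model", "gpt-4o",
         "--mode", "all",
         "--work-dir", f"outputs/sweep/{ds}",
         "--verbose"],
        check=True,
    )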

Codebase Updates

  • Execution‑based Scoring
    • Code‑generation tasks (SciCode, AstroVisBench) are now graded via sandboxed unit tests.
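
As a rough illustration of this mechanism (not the actual grading harness), the sketch below runs a generated solution together with its unit tests in a separate process, treating non‑zero exits or timeouts as failures. A production harness would add stricter isolation (resource limits, restricted filesystem and network access) on top of this.

# Illustrative execution-based grading: run solution + tests in a subprocess.
import os, subprocess, sys, tempfile

def run_unit_tests(solution_code: str, test_code: str, timeout: int = 30) -> bool:
    """Return True if the generated code passes the supplied assert-style tests."""
    with tempfile.TemporaryDirectory() as tmp:
        path = os.path.join(tmp, "graded.py")
        with open(path, "w", encoding="utf-8") as f:
            f.write(solution_code + "\n\n" + test_code)
        try:
            proc = subprocess.run([sys.executable, path],
                                  capture_output=True, timeout=timeout, cwd=tmp)
        except subprocess.TimeoutExpired:
            return False   # runaway or hanging code counts as a failure
        return proc.returncode == 0

# Example: a trivial solution plus an assert-style test.
print(run_unit_tests("def add(a, b):\n    return a + b\n",
                     "assert add(2, 3) == 5\n"))   # -> True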

Acknowledgements

SciEval ToolKit is built on top of the excellent VLMEvalKit framework. We thank the OpenCompass team not only for open‑sourcing their engine but also for publishing thorough deployment and development guides (Quick Start, Development Notes) that streamlined our integration.

We also acknowledge the core SciEval contributors for their efforts on dataset curation, evaluation design, and engine implementation: Jun Yao, Han Deng, Yizhou Wang, Jiabei Xiao, Jiaqi Liu, Encheng Su, Yujie Liu, Weida Wang, Junchi Yao, Haoran Sun, Runmin Ma, Bo Zhang, Dongzhan Zhou, Shufei Zhang, Peng Ye, Xiaosong Wang, and Shixiang Tang, as well as all community testers who provided early feedback.

SciEvalKit contributors can join the author list of the report based on their contributions to the repository. Specifically, this requires three major contributions (for example, implementing a new benchmark, adding support for a foundation model, or contributing a major feature). We will update the report quarterly, and a section detailing each developer's contribution will be appended in the next update.
