
MagiCompiler v1.0.0: Break the Boundaries of Local Compilation

Latest

@jiahy0825 jiahy0825 released this 23 Mar 13:04

We are excited to announce the official release of MagiCompiler v1.0.0! 🎉

MagiCompiler is an advanced compiler and runtime augmentation framework built on top of torch.compile, designed specifically for large-scale Transformer-like models. Going beyond local operator optimization, it brings system-level optimization to both multi-modal inference and large-model training.

Release Highlights

  • Whole-graph compilation for multi-modal inference across Transformer boundaries.
  • FSDP-aware whole-layer compilation for large-model training with transparent parameter sharding.
  • Plug-and-play integration with minimal code intrusion.
  • Smart asynchronous offloading and heuristic activation recomputation to balance compute efficiency against memory footprint.
  • Better interpretability through human-readable dumps of graph and kernel artifacts.
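The tradeoff behind the last two highlights can be sketched in plain Python. This is a toy illustration of activation recomputation, not MagiCompiler's actual API: instead of keeping every intermediate activation alive, only every k-th one is checkpointed, and the rest are rebuilt from the nearest checkpoint when needed (all names below are illustrative):

```python
def run_layers(layers, x, checkpoint_every=2):
    """Forward pass that stores activations only at checkpoint
    boundaries; anything in between can be recomputed on demand."""
    saved = {0: x}  # layer index -> stored activation
    for i, layer in enumerate(layers):
        x = layer(x)
        if (i + 1) % checkpoint_every == 0:
            saved[i + 1] = x
    return x, saved

def recompute(layers, saved, target):
    """Rebuild the activation entering layer `target` from the
    nearest earlier checkpoint instead of keeping it in memory."""
    start = max(i for i in saved if i <= target)
    x = saved[start]
    for i in range(start, target):
        x = layers[i](x)
    return x

# Stand-in "layers": simple arithmetic instead of real tensor ops.
layers = [lambda v, k=k: v * 2 + k for k in range(4)]
out, saved = run_layers(layers, 1.0, checkpoint_every=2)

# Only checkpoints 0, 2, 4 are stored; the activation entering
# layer 3 is recomputed from checkpoint 2 when the backward pass
# (or an offload fetch) needs it.
assert set(saved) == {0, 2, 4}
assert recompute(layers, saved, 3) == 12.0
```

A real system applies the same idea to GPU tensors, where the heuristic part is deciding which activations are cheap to recompute versus expensive to hold resident, and overlapping any offloaded transfers with compute.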

About This Release

v1.0.0 marks our first official open-source release and the starting point of our Compiler as Manager vision: reimagining the compiler not just as a kernel generator, but as a system-level manager orchestrating execution, dataflow, and memory.

We welcome feedback, issues, and contributions from the community.