BouchamaDjad/MLIR-RL

Using Reinforcement Learning for Automatic Code Optimization in the MLIR Compiler

Table of Contents

  • Installation
  • Training
  • Demonstration
  • Contact

Installation

Step-by-step instructions to get the development environment running.

# Clone the repository
git clone https://github.com/TheSun00000/MLIR-RL

Training

To train using the Hierarchical Policy Network:

python hierarchical_train.py
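The "hierarchical" in the policy network refers to choosing an action in two stages: first a transformation, then that transformation's parameters. The sketch below illustrates this two-level action selection with a toy action space; the names (`TRANSFORMS`, `PARAMS`, `select_action`) are illustrative assumptions, not the repository's actual API.

```python
import random

# Illustrative two-level action space: a high-level choice of
# transformation, then a low-level choice of its parameters.
TRANSFORMS = ["tile", "interchange", "vectorize", "no_op"]
PARAMS = {
    "tile": [16, 32, 64],            # tile sizes
    "interchange": [(0, 1), (1, 0)], # loop permutations
    "vectorize": [None],
    "no_op": [None],
}

def select_action(rng):
    """Hierarchical selection: pick a transformation first,
    then pick parameters conditioned on that transformation."""
    transform = rng.choice(TRANSFORMS)
    param = rng.choice(PARAMS[transform])
    return transform, param

rng = random.Random(0)
transform, param = select_action(rng)
print(transform, param)
```

In the trained network, both choices would come from learned policy heads rather than uniform sampling, but the conditioning structure is the same.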

Demonstration

For an interactive demonstration, run the notebook demo/demo_env.ipynb.

To run a random episode of the RL environment, run the following:

python demo/demo_env.py
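A random episode simply resets the environment and applies uniformly sampled actions until the episode ends. The sketch below shows that loop against a toy gym-style environment; `MockScheduleEnv` is a hypothetical stand-in, not the repository's actual environment class.

```python
import random

class MockScheduleEnv:
    """Toy stand-in for the RL environment (illustrative only)."""
    def __init__(self, horizon=5):
        self.horizon = horizon  # number of transformations per episode
        self.t = 0

    def reset(self):
        self.t = 0
        return {"step": self.t}  # observation

    def step(self, action):
        self.t += 1
        reward = random.random()  # stand-in for a measured speedup
        done = self.t >= self.horizon
        return {"step": self.t}, reward, done

env = MockScheduleEnv()
obs = env.reset()
total, done = 0.0, False
while not done:
    action = random.randrange(4)  # uniform random action
    obs, reward, done = env.step(action)
    total += reward
print(f"episode return: {total:.3f}")
```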

For a short tutorial on applying MLIR transformations, run the following:

python demo/demo_inference.py

Contact

Nazim Bendib - jn_bendib@esi.dz
