ParEval for asynchronous many task (AMT) languages e.g. HPX

ParEval


This repo contains the Parallel Code Evaluation (ParEval) benchmark for evaluating the ability of Large Language Models to write parallel code. See the ParEval Leaderboard for up-to-date results on different LLMs. This fork extends ParEval with generation tests for the asynchronous many-task (AMT) runtimes HPX and Legion, as well as HPX -> Legion translation.

Overview

The organization of the repo is as follows.

  • prompts/ -- the ParEval prompt JSON files used as inputs for the generation phase, alongside some utility scripts
  • generate/ -- scripts for generating LLM outputs
  • drivers/ -- scripts to evaluate LLM outputs
  • analysis/ -- scripts to analyze driver results and compute metrics
    • @k/ -- summary CSVs of HPX generation performance
    • visuals/ -- per-prompt-category net runtime graphs
    • visuals_specific/ -- individual prompt runtime graphs
  • tpl/ -- git submodule dependencies
  • run_driver/ -- miscellaneous driver-running scripts
  • srun_generate/ -- miscellaneous code-generation scripts

Each subdirectory has further documentation on its contents. The general workflow is to use generate/generate.py to generate LLM outputs, drivers/run-all.py to evaluate those outputs, and analysis/metrics.py to post-process the result summaries.
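Benchmarks in the ParEval family report pass@k. As an illustration of the kind of metric computed at the analysis stage (the exact implementation in analysis/metrics.py may differ), here is the standard unbiased pass@k estimator:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: the probability that at least one of k samples,
    drawn without replacement from n generations of which c passed the
    driver's tests, is correct."""
    if n - c < k:
        return 1.0  # every size-k sample must contain a correct generation
    return 1.0 - comb(n - c, k) / comb(n, k)
```

For example, with 20 generations of which 4 pass, pass@1 is 4/20 = 0.2, and pass@k rises toward 1.0 as k grows.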

Setup and Installation

A few pieces of core system software are assumed to be installed: Python >= 3.7, a C++ compiler that supports C++17 and OpenMP, Make, CMake, and an MPI implementation. If you are testing the CUDA and HIP prompts, you will also need access to NVIDIA and AMD GPUs alongside their respective software stacks.
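The prerequisites above can be sanity-checked with a short script. A minimal sketch (the tool names are common defaults, not something the repo itself mandates):

```python
import shutil
import sys

def check_prereqs() -> dict:
    """Report which of the assumed build tools are on PATH.
    Illustrative helper only; not part of the ParEval repo."""
    if sys.version_info < (3, 7):
        raise RuntimeError("ParEval assumes Python >= 3.7")
    # nvcc/hipcc are only needed for the CUDA and HIP prompts
    tools = ["g++", "make", "cmake", "mpicc", "nvcc", "hipcc"]
    return {tool: shutil.which(tool) is not None for tool in tools}

print(check_prereqs())
```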

First, clone the repo.

git clone https://github.com/SanjanaYasna/ParEval_amt.git 

Next, build Kokkos (if you want to include it in testing). The Kokkos source is included under ParEval_amt/tpl/kokkos/kokkos, pinned to version 4.5.01.

cd ParEval_amt/tpl/kokkos/kokkos
git checkout {stable version of choice}
module load gcc/9.4.0
module load mpich/4.2.1
#configure to your own preferences, with the build directory set to builddir as below
#example config:
cmake -B builddir \
    -DCMAKE_CXX_COMPILER=g++ \
    -DCMAKE_BUILD_TYPE=Release \
    -DKokkos_ENABLE_OPENMP=ON \
    -DKokkos_ENABLE_THREADS=ON \
    -DKokkos_ARCH_NATIVE=ON \
    -DKokkos_ENABLE_DEPRECATED_CODE_4=OFF
cmake --build builddir
#install Kokkos under tpl/kokkos, as that's where the ParEval makefiles check
cmake --install builddir --prefix /work/pi_mrobson_smith_edu/ParEval_amt/tpl/kokkos/build

You will need a working HPX installation for this project. Two versions of HPX are tested: 1.5.1 and 1.10.0. There are setup scripts on the Unity cluster to activate the respective version of HPX:

#HPX 1.5.1 
source /work/pi_mrobson_smith_edu/.hpx_tcmalloc_1.5.1
#OR 
#HPX 1.10.0 
source /work/pi_mrobson_smith_edu/.hpx_1_10_0

Finally, install the Python dependencies listed in requirements_AMT.txt. Using uv is the easiest way to install them.

#get uv in whatever environment you have
pip install uv
#if you're on the Unity cluster, there is a uv environment you can activate
source /work/pi_mrobson_smith_edu/pareval/.venv/bin/activate

#otherwise, create a uv environment from the requirements file
uv add -r requirements_AMT.txt

Citing the original ParEval work

@inproceedings{nichols2024large,
      title = {Can Large Language Models Write Parallel Code?},
      author = {Daniel Nichols and Joshua H. Davis and Zhaojun Xie and
                Arjun Rajaram and Abhinav Bhatele},
      year = {2024},
      publisher = {Association for Computing Machinery},
      address = {New York, NY, USA},
      booktitle = {Proceedings of the 33rd International Symposium on High-Performance Parallel and Distributed Computing},
      series = {HPDC '24}
}

License

ParEval is distributed under the terms of the MIT license.
