kharoufabdallah/pps_mpact
# Predator-Prey Simulation (PPS)

An actor-based parallel predator-prey ecosystem simulation using MPI. Grid cells are modelled as actors communicating through asynchronous message passing, scheduled by MPI software entities in a 1:N MPI-to-actor relationship.

## Design

- **Cell-as-actor**: Each grid cell is an actor with encapsulated state (predator/prey counts, spatial coordinates)
- **1:N MPI-to-actor**: Each MPI rank manages multiple cell actors via `CellMPI`
- **Asynchronous messaging**: Cross-rank animal movement uses `MPI_Bsend` (fire-and-forget)
- **Clock actor**: Coarse-grained day synchronisation via SUNRISE/SUNSET messages on rank 0
- **Out-of-lockstep**: Ranks advance sub-steps independently within each day
- **Bounce-back**: Animals rejected by full cells are returned to their origin via feedback messages
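The arrival/bounce-back exchange can be sketched in plain C++ (hypothetical names throughout: the real message struct and routing live in `cell/cell_msg.hpp` and `cell_mpi/cell_mpi.cpp`, and the actual cross-rank transport is `MPI_Bsend`):

```cpp
#include <cassert>
#include <optional>

// Hypothetical message shape; the real one is CellMsg_s in cell/cell_msg.hpp.
enum class MsgType { ARRIVE, BOUNCE };

struct Msg {
    MsgType type;
    int origin_cell;   // where the animal came from (needed for bounce-back)
    bool is_predator;
};

struct Cell {
    int id = 0;
    int prey = 0, predators = 0;
    static constexpr int kMaxPerCell = 100;  // MAX_PER_CELL

    // Handle an incoming ARRIVE. A full cell rejects the animal and
    // answers with a BOUNCE addressed back to the origin cell; otherwise
    // the animal is absorbed into the cell's own population.
    std::optional<Msg> on_arrive(const Msg& m) {
        if (prey + predators >= kMaxPerCell)
            return Msg{MsgType::BOUNCE, m.origin_cell, m.is_predator};
        (m.is_predator ? predators : prey) += 1;
        return std::nullopt;
    }
};
```

In the MPI setting the returned `BOUNCE` would be serialised and sent (asynchronously) to whichever rank owns `origin_cell`, so the mover never blocks waiting for acceptance.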

## Parameters

Defined in `cell/lotka_volterra.h`:

```c
#define MAX_PER_CELL 100
#define INITIAL_PREY 10000
#define INITIAL_PREDATORS 5000
#define STEPS_PER_DAY 50
#define DAYS 20

#define ALPHA 0.06     // prey birth rate
#define BETA  0.01     // predation rate
#define DELTA 0.1      // predator reproduction per prey eaten
#define GAMMA 0.04     // predator death rate
```
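Under these parameters, one sub-step plausibly resembles a forward-Euler update of the Lotka-Volterra equations with a day split into `STEPS_PER_DAY` pieces (a sketch only; the actual rule in `cell/cell.cpp` works on integer animal counts under the `MAX_PER_CELL` cap):

```cpp
#include <cassert>
#include <cmath>

// Rates as defined in cell/lotka_volterra.h.
constexpr double ALPHA = 0.06;  // prey birth rate
constexpr double BETA  = 0.01;  // predation rate
constexpr double DELTA = 0.1;   // predator reproduction per prey eaten
constexpr double GAMMA = 0.04;  // predator death rate
constexpr int STEPS_PER_DAY = 50;

struct Counts { double prey, predators; };

// One forward-Euler sub-step of dP/dt = ALPHA*P - BETA*P*Q and
// dQ/dt = DELTA*BETA*P*Q - GAMMA*Q, with dt = 1 / STEPS_PER_DAY days.
Counts lv_substep(Counts c) {
    const double dt = 1.0 / STEPS_PER_DAY;
    const double dprey = (ALPHA * c.prey - BETA * c.prey * c.predators) * dt;
    const double dpred = (DELTA * BETA * c.prey * c.predators - GAMMA * c.predators) * dt;
    return {c.prey + dprey, c.predators + dpred};
}
```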

## Compilation

```shell
make            # standard build
make CXXFLAGS="-std=c++17 -O2 -DVERIFY"   # with verification counters
make CXXFLAGS="-std=c++17 -O2 -DDEBUG"    # with debug prints
make clean      # remove build artifacts
```

## Running

```shell
# Single rank
srun --ntasks=1 ./main <grid_x> <grid_y>

# Multiple ranks (e.g. 16 ranks on Cirrus)
srun --time=00:10:00 \
     --partition=standard \
     --qos=short \
     --distribution=block:block \
     --hint=nomultithread \
     --account=<budget_code> \
     --nodes=1 \
     --ntasks=16 \
     ./main 200 200

# 512 ranks across 2 nodes
srun --time=00:10:00 \
     --partition=standard \
     --qos=short \
     --exclusive \
     --distribution=block:block \
     --hint=nomultithread \
     --account=<budget_code> \
     --nodes=2 \
     --ntasks=512 \
     ./main 200 200
```

## Scaling Results

Benchmarked on Cirrus (dual AMD EPYC 9825, 288 cores/node).

### Strong Scaling (200x200 grid, fixed problem size)

| Ranks | Time (s) | Speedup |
|------:|---------:|--------:|
| 1     | 619.51   | 1.00x   |
| 2     | 532.00   | 1.16x   |
| 4     | 255.38   | 2.43x   |
| 8     | 201.41   | 3.08x   |
| 16    | 118.66   | 5.22x   |
| 32    | 89.73    | 6.90x   |
| 64    | 68.33    | 9.07x   |
| 128   | 51.37    | 12.06x  |

### Weak Scaling (~2500 cells/rank, proportional animals)

| Ranks | Grid    | Time (s) |
|------:|---------|---------:|
| 1     | 50x50   | 37.55    |
| 2     | 70x70   | 66.09    |
| 4     | 100x100 | 110.09   |
| 8     | 140x140 | 89.07    |
| 16    | 200x200 | 109.26   |
| 32    | 280x280 | 145.42   |
| 64    | 400x400 | 175.75   |
| 128   | 566x566 | 240.89   |

## Conditional Compilation Flags

| Flag       | Purpose                                                   |
|------------|-----------------------------------------------------------|
| `-DDEBUG`  | Enable detailed print statements via the `DBG_PRINT` macro |
| `-DVERIFY` | Enable the `Test` class for send/receive count verification |
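The `DBG_PRINT` macro is presumably guarded along these lines (a sketch of the common pattern; the actual definition lives in the project's headers):

```cpp
#include <cassert>
#include <cstdio>

// Compile-time switch: with -DDEBUG the macro expands to a tagged
// fprintf to stderr; without it, every call site compiles to nothing,
// so the production build pays no runtime cost.
#ifdef DEBUG
#define DBG_PRINT(fmt, ...) \
    std::fprintf(stderr, "[debug] " fmt "\n", ##__VA_ARGS__)
#else
#define DBG_PRINT(fmt, ...) ((void)0)
#endif
```

Note `##__VA_ARGS__` is a GNU extension (accepted by GCC and Clang) that swallows the trailing comma when the macro is called with no variadic arguments.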

## Architecture

```text
predator-prey/
├── main.cpp
├── Makefile
├── README.md
├── logs/
├── framework/
│   ├── actor/
│   │   ├── actor.hpp          # Template-based Actor<DataT, MsgT, RspT> base class
│   │   └── actor_err.hpp      # Actor error codes
│   └── mpi/
│       └── mpi_sw.hpp         # MPI software entity base class
├── cell/
│   ├── cell.hpp               # GridCellActor — cell actor with simulation logic
│   ├── cell.cpp
│   ├── cell_msg.hpp           # CellMsg_s struct and message type enums
│   └── lotka_volterra.h       # Simulation and Lotka-Volterra parameters
├── cell_mpi/
│   ├── cell_mpi.hpp           # CellMPI — MPI rank manager, router, scheduler
│   └── cell_mpi.cpp
├── clock/
│   ├── clock.hpp              # ClockActor — temporal synchronisation
│   └── clock.cpp
└── test/
    └── test.hpp               # Test class for communication verification
```
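The `Actor<DataT, MsgT, RspT>` base class in `framework/actor/actor.hpp` presumably follows a pattern like the following (a hypothetical sketch of the interface, not the project's actual code):

```cpp
#include <cassert>
#include <optional>
#include <utility>

// Hypothetical shape of the template-based actor interface: an actor owns
// private state of type DataT and reacts to messages of type MsgT,
// optionally producing a response of type RspT.
template <typename DataT, typename MsgT, typename RspT>
class Actor {
public:
    explicit Actor(DataT initial) : state_(std::move(initial)) {}
    virtual ~Actor() = default;

    // Deliver one message; the scheduling entity (e.g. CellMPI) calls this.
    virtual std::optional<RspT> receive(const MsgT& msg) = 0;

protected:
    DataT state_;  // encapsulated: only the actor itself mutates it
};

// Toy concrete actor: a counter that acknowledges each increment with
// its new total, illustrating the state/message/response split.
struct CounterActor : Actor<int, int, int> {
    using Actor::Actor;
    std::optional<int> receive(const int& delta) override {
        state_ += delta;
        return state_;
    }
};
```

In this scheme `GridCellActor` would instantiate the template with its cell state, `CellMsg_s`, and a response type, and `CellMPI` would loop over its local actors delivering incoming messages.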
