echen5503/IIMOC-Backend


IIMOC-Backend

Backend for IIMOC 2026. Built on top of go-judge, it provides a REST API for team management, code submission, sandboxed judging, leaderboards, and scoring — purpose-built for heuristic/optimization contests where partial scoring and iterative improvement matter.

What Makes It Good for Heuristic Contests

Most online-judge (OJ) backends are built for exact-answer problems. This one is designed for the kind of contest where:

  • Contestants submit repeatedly, trying to improve their score
  • Each problem has a score out of 100 based on solution quality, not just pass/fail
  • Teams are ranked on total score across all problems, updated live
  • Contestants can view their own submission history and see where they stand on each problem

The scoring system tracks each team's best submission per problem, maintains a live leaderboard, and supports baseline normalization (e.g., scoring relative to a reference solution). Daily bonuses reward early high-quality submissions.
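
The actual formulas live in `src/services/scoring/`; the sketch below is illustrative only. The function names and the linear normalization rule are assumptions, not the repository's code:

```python
def normalized_score(raw: float, baseline: float, cap: float = 100.0) -> float:
    """Scale a raw checker score against a baseline (reference) solution.
    Matching the baseline earns the full cap; weaker scores scale linearly."""
    if baseline <= 0:
        return 0.0
    return min(cap, cap * raw / baseline)

def update_best(bests: dict, team: str, pid: str, score: float) -> bool:
    """Keep only each team's best score per problem; return True on improvement."""
    current = bests.setdefault(team, {}).get(pid, 0.0)
    if score > current:
        bests[team][pid] = score
        return True
    return False

def team_total(bests: dict, team: str, bonuses: float = 0.0) -> float:
    """Leaderboard total: sum of best scores across problems plus daily bonuses."""
    return sum(bests.get(team, {}).values()) + bonuses
```

The point is the shape of the pipeline: each accepted result updates at most one best-score entry, and leaderboard totals are derived from those entries rather than recomputed from raw submissions.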

The judging pipeline is fully extensible — problems define their own testlib checker, so scoring logic lives entirely in `chk.cc` and can implement any heuristic metric you want.

Key Features

  • Sandboxed execution via go-judge (C++17, Python3, PyPy3, Java by default)
  • Classic and interactive problem support with precompiled testlib checker/interactor caching
  • Team registration, credential verification, and member tracking
  • Per-problem and overall leaderboards with pagination
  • Scoring system with daily bonuses, baseline normalization, and best-score caching
  • Submission archiving to `submissions/<bucket>/<sid>/` with `meta.json` and `result.json`
  • Admin endpoints for baseline setting, bonus awarding, and cache rebuilding
  • Docker-first deployment via `Dockerfile` and `docker-compose.yml`
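
The archiving layout above can be sketched as follows. This is a guess extrapolated from the `SUB_BUCKET` default of 100, not a reading of `submissionManager.js`; the bucketing rule and directory names are assumptions:

```python
from pathlib import Path

def submission_dir(root: str, sid: int, bucket_size: int = 100) -> Path:
    """Hypothetical layout: submissions/<bucket>/<sid>/, where the bucket
    index is assumed to be the sid integer-divided by the bucket size."""
    return Path(root) / str(sid // bucket_size) / str(sid)
```

Fixed-size buckets keep any single directory from accumulating thousands of entries as submissions pile up.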

Architecture

The codebase is organized into distinct layers:

```
IIMOC-Backend/
├── server.js                  # Entry point: wires all layers and starts Express
├── src/
│   ├── config/                # Configuration constants (single source of truth)
│   ├── types/                 # HTTP status codes and type definitions
│   ├── middleware/            # Auth, validation, error-handling middleware
│   ├── data/
│   │   ├── db.js              # MongoDB connection and collection initialization
│   │   └── repository/        # Data access objects (teams, submissions, scores)
│   ├── services/
│   │   ├── judging/           # Test case execution and result persistence
│   │   ├── scoring/           # Leaderboard and score calculation
│   │   └── problems/          # Problem loading and management
│   ├── routes/                # Express route handlers (auth, submissions, leaderboard, problems, admin)
│   ├── utils/
│   │   ├── gojudge.js         # go-judge HTTP API client
│   │   ├── upload.js          # Multer file upload config
│   │   └── submissionManager.js
│   ├── lib/
│   │   └── dailybonus.js      # Daily bonus award logic
│   ├── router.js              # Top-level API router
│   ├── judge_engine.js        # Submission queue and worker orchestration
│   └── problem_manager.js     # Problem loading, validation, import/export
├── problems/                  # Problem packages (one directory per problem)
│   └── problem_creation.md    # Agentic problem creation guide (see below)
├── submissions/               # Submission storage (auto-created)
├── data/                      # Result cache (auto-created)
├── config/langs.yaml          # Language toolchain configuration
├── include/testlib.h          # testlib header
├── scripts/                   # Batch utilities (setup, submit, fetch)
├── Dockerfile
└── docker-compose.yml
```

Creating Problems with AI

`problems/problem_creation.md` is a prompt file designed to be fed directly to Claude Code or any other agentic coding tool. Drop a rough `statement_draft.md` into the problem directory, then point an agent at `problem_creation.md`, and it will generate a complete, tested problem package:

| File | What the agent generates |
| --- | --- |
| `statement.txt` | Polished markdown problem statement |
| `chk.cc` | testlib checker implementing your scoring metric |
| `gen.py` | Test case generator (produces `testdata/` and `smalltest/`) |
| `vis.py` | Terminal visualization of a test case + output |
| `sample.py` | Minimal solution that achieves a non-zero score |
| `bad.py` | A subtly incorrect solution (useful for verifying the checker) |
| `testing.md` | Confirmation that the checker, generator, and visualizer all pass |

The agent is instructed to walk through small cases step-by-step to verify correctness, prompt you when the problem statement is ambiguous (and suggest a reasonable default), then clean up any scratch files. The result is a drop-in problem package ready for POST /problem/setup.

The full example package is in `problems/PPK/`.

Problem Format

Problem directories live under `problems/<pid>/`.

| File | Purpose |
| --- | --- |
| `config.yaml` | Metadata, time/memory limits, subtask definitions |
| `statement.txt` | Plain-text or markdown problem statement |
| `testdata/` | Input/output pairs (`1.in`, `1.ans`, ...) |
| `smalltest/` | Smaller cases for visualization and quick testing |
| `chk.cc` | testlib checker — defines the scoring logic |
| `interactor.cpp` | Optional interactor for interactive problems |

Example `config.yaml`:

```yaml
type: default
checker: chk.cc
time: 5s
memory: 256m
subtasks:
  - score: 100
    n_cases: 30
    n_visualize: 10
    n_systest: 300
```

The checker (chk.cc) receives the input, expected output, and contestant output, and writes a score between 0 and 100. All heuristic scoring logic lives here — the rest of the backend is metric-agnostic.
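
For a concrete sense of what such a metric looks like, here is a minimization-style score in Python. The real checker is C++ against testlib; this formula is only an example of the kind of logic `chk.cc` might encode, not a prescribed one:

```python
def minimization_score(contestant_cost: float, reference_cost: float) -> int:
    """Example heuristic metric: 100 points for matching (or beating) the
    reference cost, scaled down proportionally as the contestant's cost grows."""
    if contestant_cost <= 0 or reference_cost <= 0:
        return 0
    return round(min(100.0, 100.0 * reference_cost / contestant_cost))
```

A ratio-based metric like this is what makes repeated submission worthwhile: shaving cost always moves the score, rather than flipping a single pass/fail bit.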

Requirements

  • Node.js ≥ 18 (ES modules)
  • MongoDB (any recent version; connection via `MONGO_URI`)
  • go-judge binary with HTTP control port exposed
  • Language toolchains: `g++`, `openjdk-17-jdk`, `python3`, `pypy3`

Debian/Ubuntu example:

```sh
sudo apt update
sudo apt install -y g++ openjdk-17-jdk-headless python3 pypy3
```

Getting Started

Local Run

1. Install dependencies:

   ```sh
   npm install
   ```

2. Start go-judge:

   ```sh
   go-judge --parallelism=4
   ```

3. Start the server:

   ```sh
   MONGO_URI=mongodb://localhost:27017/IIMOC \
   PORT=8081 \
   GJ_ADDR=http://127.0.0.1:5050 \
   JUDGE_WORKERS=4 \
   node server.js
   ```

4. Verify health:

   ```sh
   curl http://localhost:8081/health
   ```

If you use testlib inside the sandbox, ensure the header is mounted and set `TESTLIB_INSIDE` (default: `/lib/testlib`).

Docker / Compose

```sh
./restart.sh
```

The compose file binds `./problems`, `./submissions`, and `./data` into `/app`. Set `MONGO_URI` in your environment or a `.env` file before running.

Environment Variables

| Variable | Default | Purpose |
| --- | --- | --- |
| `MONGO_URI` | (required) | MongoDB connection string |
| `PORT` | `8081` | Express listen port |
| `GJ_ADDR` | `http://127.0.0.1:5050` | go-judge HTTP address |
| `JUDGE_WORKERS` | `4` | Node-side worker count |
| `SUB_BUCKET` | `100` | Submission bucket size |
| `SUBMISSIONS_DIR` | `./submissions` | Submission storage path |
| `TESTLIB_INSIDE` | `/lib/testlib` | testlib header path inside sandbox |
| `GJ_PARALLELISM` | unset | go-judge parallelism (used by Docker entrypoint) |
| `NODE_ENV` | `production` | Environment mode |
| `DEBUG` | `false` | Verbose logging |

REST API

All endpoints return JSON unless noted. Team credentials (`team` + `password`) are required for submissions. Admin endpoints require a separate admin password.

Health

| Method | Path | Description |
| --- | --- | --- |
| GET | `/health` | Returns `{ ok: true }` |

Teams & Authentication

| Method | Path | Description |
| --- | --- | --- |
| POST | `/signup` | Register or update a team |
| GET | `/team/:team/members` | Get team metadata and member list |
| GET | `/teamHasMember?team=&member=` | Check if a member exists on a team |

`POST /signup` — body fields: `team`, `password`, `members[]` (required); `school`, `email`, `country` (optional). Responses: `201` (created) or `200` (updated), both with `{ ok, message, team }`.

Submissions

| Method | Path | Description |
| --- | --- | --- |
| POST | `/submit` | Submit code for judging |
| GET | `/result/:sid` | Get full result; `?short=1` for `{ status, passed }` |
| GET | `/getSubmissionIds/:team` | All submission IDs for a team |
| GET | `/getTeamSubmissions/:team/:page` | Paginated team submissions |
| GET | `/getSubmissionCode/:sid?password=` | Fetch submission source (admin) |

`POST /submit` — accepts `multipart/form-data` or JSON. Required fields: `pid`, `lang`, `code`, `team`, `password`. Response: `{ ok, sid }`.
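
A minimal client sketch using only the Python standard library. The base URL and the `lang` value in the usage comment are placeholders; the five field names come from this section:

```python
import json
import urllib.request

def build_submit_request(base_url: str, pid: str, lang: str, code: str,
                         team: str, password: str) -> urllib.request.Request:
    """Build a JSON POST to /submit carrying the five required fields."""
    body = json.dumps({"pid": pid, "lang": lang, "code": code,
                       "team": team, "password": password}).encode()
    return urllib.request.Request(
        base_url.rstrip("/") + "/submit",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Usage against a running server (URL and lang value are placeholders):
# req = build_submit_request("http://localhost:8081", "PPK", "cpp",
#                            open("sol.cc").read(), "team1", "secret")
# with urllib.request.urlopen(req) as resp:
#     sid = json.load(resp)["sid"]
```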

Problems

| Method | Path | Description |
| --- | --- | --- |
| GET | `/problems` | List all problems; `?statement=true` to include statements |
| GET | `/problem/statement/:pid` | Get problem statement (plain text) |
| POST | `/problem/setup` | Build/update a problem (multipart: `pid`, optional `zipfile`) |
| POST | `/problem/add-problem` | Create a new problem directory |
| GET | `/getProblemSubmissions/:page/:pid` | Paginated submissions for a problem |
| GET | `/problem/placement/:team/:pid` | Team's rank on a problem |
| GET | `/package/:pid` | Download problem archive (`<pid>.tar.gz`) |

Leaderboards & Scoring

| Method | Path | Description |
| --- | --- | --- |
| GET | `/leaderboard` | Overall leaderboard; `?page=1&pageSize=20` |
| GET | `/leaderboard/:pid` | Per-problem leaderboard |
| GET | `/scoring/team/:team` | Team's score breakdown across all problems |

Admin

All admin endpoints require a `password` field (body or query) matching the configured admin password.

| Method | Path | Description |
| --- | --- | --- |
| POST | `/admin/setBaseline` | Set baseline score for a problem |
| POST | `/admin/dailyCutoff` | Award daily bonuses for early submissions |
| POST | `/admin/rebuildBests` | Rebuild the team best-scores cache |
| GET | `/admin/lastNonCEBeforeCutoff?password=` | Per-team best submissions before cutoff |
| POST | `/admin/systest` | Run system tests for a problem (see below) |
| DELETE | `/admin/problem/:pid?password=` | Delete a problem and purge all its submissions |

`POST /admin/systest` — copies `problems/<pid>/` to `problems/<pid>_sys/`, regenerates test data using the problem's `gen.py` with `n_systest` cases from `config.yaml`, then submits each team's last pre-cutoff code to `<pid>_sys` for judging.

Body fields:

  • `password` (required) — admin password
  • `pid` (required) — source problem ID
  • `cutoff` (optional) — Unix timestamp in ms; only submissions at or before this time are included (defaults to now)

Response: `{ ok, pid, sysPid, cutoff, totalCases, teamsJudged, teams[] }`, where each team entry is `{ team, originalSid, sysSid }`, or `{ team, skipped, reason }` if no eligible submission was found. Results for each `sysSid` can be polled via `GET /result/:sid`.
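
That polling can be written against any HTTP helper. In this sketch, `fetch(path)` is a caller-supplied function returning the endpoint's decoded JSON, and the in-flight status values are assumptions to adapt to your deployment's actual `/result` payload:

```python
import time

def poll_result(fetch, sid: str, interval: float = 2.0, timeout: float = 300.0) -> dict:
    """Poll GET /result/<sid>?short=1 until judging finishes or timeout expires.
    `fetch(path)` must return the decoded JSON response for that path."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = fetch(f"/result/{sid}?short=1")
        # Assumed in-flight markers; a completed result carries a final status.
        if result.get("status") not in (None, "pending", "judging"):
            return result
        time.sleep(interval)
    raise TimeoutError(f"result for {sid} not ready after {timeout}s")
```

Injecting `fetch` keeps the loop transport-agnostic, so the same helper works for both contest results and post-cutoff system-test results.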

The `n_systest` field in `config.yaml` controls how many cases are generated per subtask:

```yaml
subtasks:
  - score: 100
    n_cases: 30       # used during contest
    n_systest: 300    # used for system testing
```

Scripts

| Script | Purpose |
| --- | --- |
| `scripts/setup.py` | Batch problem packaging via API |
| `scripts/submit.py` | Batch submission upload |
| `scripts/fetch.py` | Batch result polling |

Operational Notes

  • Results are cached in memory and persisted to `submissions/<bucket>/<sid>/result.json`.
  • Interactive problems rely on go-judge `pipeMapping`; ensure interactor binaries run in the sandbox.
  • Customize language commands in `config/langs.yaml`.
  • MongoDB collections created automatically on startup: `teams`, `submissions`, `problemMeta`, `teamBests`, `dailyBonuses`.

License

AGPL-3.0
