AI Research Interview Lab is a local practice workspace for interview preparation for research scientists and engineers. The problems were curated by me (Andrew Zhao), and all GUI and backend code was generated by OpenAI's Codex. I know that https://www.deep-ml.com/problems exists, but I personally curated these problems to fit my own needs.
The current curation focuses on LLM + RL. Canonical answers are either (1) generated by ChatGPT, (2) copied from deep-ml, (3) taken from Karpathy's micrograd, (4) adapted from existing source code (e.g., PyTorch), or (5) adapted from VeRL. Over time, the scope may broaden to cover more areas.
Contributions are welcome — feel free to open a PR. For questions or feature ideas, email Andrew Zhao at andrewzhao112@gmail.com or visit https://andrewzh112.github.io/. This is a side project, so responses may be delayed.
🚀 Quick launch (GUI):

```
python -m tools
```

Tip: to launch without auto-starting the local LLM server, use `python -m tools --no-llm-serve`.
The correctness of these questions is not guaranteed; use them at your own risk. If you find any mistakes or inaccuracies, feel free to open a pull request (PR).
This project serves both as a curated collection and as an interface for practicing ML-related questions. I do not take any responsibility or liability for any issues arising from its use.
- Contact
- Quick Start
- Using The GUI
- Calendar & Daily Checklist
- Lab Copilot (Local LLM)
- CLI (Optional)
- Add Your Own Problems
- Citation
- 📧 Email: andrewzhao112@gmail.com
- 𝕏: @_AndrewZhao
- 🌐 Website: https://andrewzh112.github.io/
- Requirements: Python 3.10+. Tkinter is included with most Python distributions; on Linux, you may need `python3-tk` from your package manager.
- Use uv for environment and installs:
  - Install uv (one option): `brew install uv` or `curl -LsSf https://astral.sh/uv/install.sh | sh`
  - Create/activate env: `uv venv && source .venv/bin/activate`
  - Core packages: `uv pip install numpy torch` (see the PyTorch site for CUDA-specific wheels)
Optional topic-specific packages (install as needed):
- Transformers (text generation and data prep): `uv pip install transformers`
- TensorDict (some ops helpers): `uv pip install tensordict`
- FlashAttention (optional fast paths used in some helpers; skip if unsupported): `uv pip install flash-attn` (GPU/CUDA only)
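Putting the install steps together, a fresh setup might look like this (a sketch; the final line assumes you want the optional extras for all topic areas):

```shell
# Install uv (one option)
curl -LsSf https://astral.sh/uv/install.sh | sh

# Create and activate a virtual environment
uv venv
source .venv/bin/activate

# Core packages (see the PyTorch site for CUDA-specific wheels)
uv pip install numpy torch

# Optional extras, only if the topics you practice need them
uv pip install transformers tensordict
```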
Run the GUI:
- `python -m tools` (preferred)
- `python -m tools.gui` (equivalent)
- `make` or `make gui` (if you have `make`)
- Topic browser: Left side shows a foldable tree of categories and topics. Use the search box to filter.
- Start: Select a topic and click `Start` to scaffold a stub (and tests, when available) under `workspace/`.
- Edit: Use the `Details` tab to edit code. Editor features:
  - Auto-indent, soft tabs (4 spaces), block unindent on Shift+Tab
  - Smart Backspace deletes a full indent block in leading spaces
  - Auto-close pairs for () [] {} ' " and pair deletion
  - Python syntax highlighting (VS Code Dark-like)
  - Save with `Save` or Cmd/Ctrl+S; a red dot on the Save button means unsaved changes
- Run tests: Click `Run Tests`.
  - If there are unsaved changes, you’ll be prompted to save first.
  - Tests run as a module (`python -m workspace.<name>`). Output appears in the `Output` tab.
  - On success, you’ll see a brief celebratory overlay.
- Tabs: Cmd/Ctrl+1 → Details, Cmd/Ctrl+2 → Output, Cmd/Ctrl+3 → Stats.
- View Canonical: Opens the reference implementation for comparison (read-only in the editor view).
  - After viewing, click the `Details` tab or use Cmd/Ctrl+1 to return to your code.
- Smart Random: Picks topics weighted toward weaker or less-practiced ones.
- Stats: The `Stats` tab shows totals and a weakest-topics table; stats persist to `.practice_stats.json`.
  - Buttons: Refresh, Recommend (Smart), Reset Stats, Delete Stats File
- Daily goal: Set target problems/day (Stats tab). Progress auto-increments on successful runs.
- Timed mode: Optional countdown shown in the header; start by checking `Timed` before clicking `Start`.
- Right panel (Calendar & Copilot): Use the persistent icons on the far right edge:
- Calendar icon opens/closes the Calendar tab.
- Copilot icon opens/closes the Copilot tab.
- The pane opens to the selected tab; click the same icon again to hide.
- Hide left panel: Use the persistent file‑icon sidebar button on the far left edge to collapse/expand the Topics pane. Shortcut: Cmd/Ctrl+B.
Plan your practice by scheduling questions on specific days and checking them off as you complete them.
- Open the Calendar: Click the Calendar icon on the far right toolbar to open the right tools pane (it opens to the `Calendar` tab). Click again to hide.
- Select a day: Navigate months with ◀/▶ and click a date to view/edit that day’s checklist.
- Add current question: With a topic selected on the left, use either:
  - Action bar: `Add to Calendar` (uses the currently selected calendar date), or
  - Right pane: `Add Current Topic` in the Calendar header.
- Remove from calendar: Use `Remove from Calendar` in the action bar to unschedule the current topic from the selected date.
- Complete items:
- Click the checkbox next to a scheduled question, or
- Pass the tests for that topic — it auto-checks the item for today.
- Jump to a question: Click any scheduled item in the list to open that question in the main editor immediately.
Notes
- The calendar shows only questions from the built-in bank. If a scheduled item no longer exists, it is hidden.
- Scheduled items are per-day. Recurring support exists internally but the UI focuses on per-day scheduling.
- Start: Create the stub file for the selected topic in `workspace/`. Starts the timer if `Timed` is enabled.
- Run Tests: Run the topic’s tests. Disabled for topics without tests. Prompts to save unsaved changes, auto‑creates the stub if missing, then runs `python -m workspace.<name>`. Switches to the Output tab and shows a brief celebration on success.
- Save: Write the editor contents to the generated file. The red dot on Save indicates unsaved changes.
- View Canonical: Open the reference implementation in a read‑only tab (searchable with the Find bar). If no mapping exists for the topic, a message is shown instead.
- Copy Review Prompt: Copy a structured, side‑by‑side review prompt (your code + canonical, when available) to the clipboard, ready to be pasted into GPT for review.
- Right toolbar icons: Use the Calendar/Copilot icons on the far right to open/close the right tools pane (or use the header icon for the pane).
- Back to Code: Switch back to the Details (editor) tab from Output/Stats/Canonical.
- Timed (checkbox): Enable countdown mode; use the minutes spinner to set duration.
- Stop Timer: Stop the active countdown.
- Random: Pick a uniformly random topic from the currently visible (filtered) list and select it.
- Smart: Pick a topic weighted by “weakness” using your stats (prioritizes unseen topics, low success rate, and fewer runs) and select it.
- Clear (search): Clear the topic filter text.
- Reset Counts: Reset per‑topic “done” counts (used for daily/weakest‑topic signals).
Copilot panel
- Include context: Add topic, your code, and canonical content to each prompt.
- Send: Send your prompt to the local LLM backend (Ollama).
- Stop: Interrupt a streaming response.
- Clear: Clear the current chat transcript.
Stats tab
- Set: Update your daily goal.
- Refresh: Recompute and refresh stats in the view.
- Recommend (Smart): Shortcut to Smart random selection from the stats view.
- Reset Stats: Reset in-memory counters (keeps the file unless deleted).
- Delete Stats File: Remove `.practice_stats.json` from disk.
- Tip: Double‑click a row in the weakest‑topics table to jump directly to that topic.
- Cmd/Ctrl+S: Save current file
- Cmd/Ctrl+Z (Shift+Z / Y): Undo / Redo
- Cmd/Ctrl+F: Show Find bar (also in Canonical tab)
- Cmd/Ctrl+1/2/3: Switch to Details / Output / Stats
- Cmd/Ctrl+= / − / 0: Zoom in / out / reset editor + UI fonts
- Cmd/Ctrl+Shift+C: Toggle the right tools pane (Calendar/Copilot)
- Cmd/Ctrl+B: Toggle the left Topics sidebar
The GUI includes a collapsible right tools pane (Calendar & Copilot). The Copilot tab provides multi‑turn chat about your code, with context from the current topic, your candidate solution, and the canonical implementation.
- Toggle: Click the Copilot icon on the far right toolbar (or the header icon). Press Cmd+Shift+C (Ctrl+Shift+C on Windows/Linux) to toggle.
- Streaming chat: Responses stream token‑by‑token; rendered with Markdown (code blocks, lists, tables, bold/italic, quotes, headings).
- Context: Keep “Include context” checked to inject your code + canonical + topic spec on every turn.
- Default model: `gpt-oss:20b` (editable in the Copilot panel).
Backend: Ollama
- Install: `brew install ollama`, then `ollama serve`
- Pull model: `ollama pull gpt-oss:20b` (pick a smaller model if RAM is limited, e.g., `qwen2.5-coder:7b`)
- Endpoint: `http://localhost:11434`
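To sanity-check that the endpoint is reachable before launching, something like this works against Ollama's `/api/tags` route, which lists locally pulled models (a sketch using only the standard library):

```python
import json
import urllib.error
import urllib.request


def ollama_is_up(base_url: str = "http://localhost:11434", timeout: float = 2.0) -> bool:
    """Return True if an Ollama server answers at base_url."""
    try:
        # GET /api/tags lists locally pulled models; any valid JSON reply means the server is up.
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=timeout) as resp:
            json.load(resp)
        return True
    except (urllib.error.URLError, OSError, ValueError):
        return False


if __name__ == "__main__":
    print("Ollama reachable:", ollama_is_up())
```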
Performance helpers
- keep_alive: The app keeps the model warm (`keep_alive: "30m"`) to reduce first‑token latency across turns.
- KV reuse: The app captures Ollama’s `context` and reuses it for the next turn.
- Prewarm: On launch, it primes the KV cache with your current topic + context using `num_predict: 0` (no visible output), so the first turn is faster.
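As a sketch of how those request fields fit together: the field names (`model`, `prompt`, `stream`, `keep_alive`, `context`, `options.num_predict`) follow Ollama's `/api/generate` API, but the helper below is illustrative, not the app's actual code:

```python
from typing import Optional


def build_generate_payload(
    prompt: str,
    model: str = "gpt-oss:20b",
    context: Optional[list] = None,  # context tokens returned by the previous turn
    prewarm: bool = False,
) -> dict:
    """Build an illustrative Ollama /api/generate request body.

    - keep_alive keeps the model loaded between turns (lower first-token latency)
    - context reuses the previous turn's state instead of re-reading the whole prompt
    - num_predict: 0 primes the KV cache without producing visible output
    """
    payload = {
        "model": model,
        "prompt": prompt,
        "stream": True,
        "keep_alive": "30m",
    }
    if context is not None:
        payload["context"] = context
    if prewarm:
        payload["options"] = {"num_predict": 0}
    return payload
```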
Config
- Env vars:
  - `PRACTICE_COPILOT_MODEL`: default model id (default: `gpt-oss:20b`)
  - `OLLAMA_BASE_URL`: base URL for Ollama (default: `http://localhost:11434`)
  - `PRACTICE_AUTOSTART_OLLAMA`: set to `0`/`false`/`no` to disable auto‑start (default: on)
- CLI: `python -m tools --no-llm-serve` to skip auto‑starting Ollama on launch
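For example, to point the app at a remote Ollama host and a smaller model without auto-starting a local server (the host address and model id here are illustrative):

```shell
export PRACTICE_COPILOT_MODEL="qwen2.5-coder:7b"
export OLLAMA_BASE_URL="http://192.168.1.50:11434"
export PRACTICE_AUTOSTART_OLLAMA=0
python -m tools
```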
- List topics: `python tools/generate_entry.py --list`
- Generate an entry file: `python tools/generate_entry.py --topic graph/topological_sort`
- Run as a module after editing: `python -m workspace.graph_topological_sort`
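A generated `workspace/` module is a plain Python file runnable via `python -m`: a stub plus a `__main__` block of assertions. A hypothetical (NumPy-based) sketch of the shape such a file takes; the names and layout are illustrative, not the actual generated output:

```python
# workspace/ops_masked_mean.py (hypothetical layout)
import numpy as np


def masked_mean(x: np.ndarray, mask: np.ndarray) -> float:
    """Mean of x over positions where mask is 1, avoiding divide-by-zero."""
    mask = mask.astype(x.dtype)
    count = max(mask.sum(), 1.0)  # clamp so an all-zero mask returns 0.0
    return float((x * mask).sum() / count)


if __name__ == "__main__":
    x = np.array([1.0, 2.0, 3.0, 4.0])
    m = np.array([1.0, 1.0, 0.0, 0.0])
    assert abs(masked_mean(x, m) - 1.5) < 1e-9
    print("ok")
```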
You can extend the practice set by adding new problems to the generator. If you’d like to share your additions with others, please submit a pull request.
Steps
- Pick an id: Choose a path-like id such as `attention/my_attention` or `loss/my_loss`.
- Add a stub and tests in the generator:
  - Open `tools/generate_entry.py` and add:
    - A function that returns the stub string (e.g., `_stub_my_attention()`)
    - A function that returns the tests string (e.g., `_tests_my_attention()`), or reuse `_tests_none("attention/my_attention")` if you don’t have tests yet.
  - Register it in `_make_registry()` by adding a `ProblemSpec` with your `id`, `title`, `stub`, and `tests`.
  - Optional: set `bare=True` in `ProblemSpec` to emit only the stub with no header/tests, like the minimal MoE example.
- Map the canonical file (optional but recommended):
  - If you have a reference implementation in the repo, update `canonical_path_for_topic()` in `tools/gui.py` to return the path for your topic id so “View Canonical” works.
- Try it out:
  - Launch the GUI (`python -m tools`), search for your topic, click `Start`, edit, and `Run Tests`.
  - Or generate directly via the CLI: `python tools/generate_entry.py --topic attention/my_attention`.
Tips
- Keep ids grouped by area (e.g., `attention/`, `nlp/`, `ops/`, `loss/`, `training/`, `graph/`).
- Test files should be self-contained and run without external services. If you rely on libraries (e.g., NumPy/PyTorch), import-guard your tests so the user gets a clear message if the dependency isn’t installed.
- For reference-backed tests, structure them like existing problems: import the canonical function from `ml/...` or `coding/...` to compare results.
- CLI UI (optional): `python tools/ui.py`
  - Commands: `list`, `gen <topic>`, `random`, `generated`, `delete <index|path>`, `help`, `exit`. On exit, it prompts to delete session files.
  - One-shot: `python tools/ui.py --random` or `python tools/ui.py --topic ops/masked_mean --out workspace/my_masked_mean.py`
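The import-guard tip above can be sketched as a small helper (illustrative; the repo's actual tests may guard differently):

```python
import importlib.util
import sys


def require(module_name: str, install_hint: str) -> bool:
    """Import-guard helper: True if the module is importable, else print a clear hint."""
    if importlib.util.find_spec(module_name) is None:
        print(
            f"Missing dependency '{module_name}'. Install it with: {install_hint}",
            file=sys.stderr,
        )
        return False
    return True


if __name__ == "__main__":
    # Only run the heavy tests when the dependency is actually present.
    if require("torch", "uv pip install torch"):
        import torch  # noqa: F401
        print("dependencies ok; run the real assertions here")
```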
Open a PR with your new problems registered in `tools/generate_entry.py` (and any canonical references). Include a short description, example tests, and, where appropriate, a mapping in `tools/gui.py:canonical_path_for_topic()`.
If you are crazy and want to cite:
```bibtex
@misc{zhao2025ai_research_interview_lab,
  title  = {AI Research Interview Lab},
  author = {Zhao, Andrew},
  year   = {2025},
  note   = {Local interview practice workspace},
  url    = {https://andrewzh112.github.io/}
}
```