This project benchmarks and accelerates a 2-step AI pipeline:
- Image → description (Gemini)
- Description → 3 quiz questions (Gemini)
Goal: reduce end-to-end latency by parallelizing I/O-bound API calls with Python threads.
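The parallelization approach can be sketched as follows. This is a minimal illustration, not the notebook's actual code: `describe_image` and `make_quiz` are hypothetical stand-ins for the two Gemini API calls, and the thread count mirrors the 5-worker configuration mentioned below.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for the two Gemini calls (image -> description,
# description -> 3 quiz questions); the real notebook calls the Gemini API.
def describe_image(image_id: str) -> str:
    return f"description of {image_id}"

def make_quiz(description: str) -> list:
    return [f"Q{i} about {description}" for i in range(1, 4)]

def process_image(image_id: str) -> list:
    # The two steps are sequential per image, but independent across images,
    # so the outer loop parallelizes cleanly with threads (I/O-bound work).
    return make_quiz(describe_image(image_id))

def run_parallel(image_ids, workers: int = 5) -> list:
    # Threads suffice here: the work is dominated by network wait, not CPU,
    # so the GIL is not a bottleneck.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(process_image, image_ids))

if __name__ == "__main__":
    results = run_parallel([f"img_{i}" for i in range(10)])
    print(len(results))
```

Because each image is independent, `pool.map` preserves input order while overlapping the network waits of up to 5 requests at a time.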
- `notebooks/parallel_gemini_api_benchmark.ipynb` — all experiments, plots, and code
- `report/report.pdf` — full technical report with results + analysis
- Serial baseline: ~3.06s per image (100 images)
- Parallel (5 threads): ~4× speedup under free-tier rate limits
- Set your API key as an environment variable:
  `export GEMINI_API_KEY="YOUR_KEY"`
- Open the notebook and run all cells.
- This repo contains no production NeuroTrace code—only the benchmarking pipeline.
- API calls are rate-limited; the notebook includes retry + exponential backoff.
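A retry-with-exponential-backoff wrapper of the kind the notebook uses can be sketched like this; `with_backoff` and its parameters are illustrative names, not the notebook's exact API.

```python
import random
import time

def with_backoff(call, max_retries=5, base_delay=1.0, retryable=(Exception,)):
    """Retry `call`, sleeping base_delay * 2**attempt (+ jitter) between tries.

    Illustrative sketch: in the benchmark, `call` would be a Gemini API
    request and `retryable` the rate-limit error class.
    """
    for attempt in range(max_retries):
        try:
            return call()
        except retryable:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the error to the caller
            # Exponential backoff (1s, 2s, 4s, ...) with random jitter so
            # parallel threads don't all retry at the same instant.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.5))
```

The jitter matters under the free-tier limits: with 5 threads hitting the same quota, synchronized retries would keep colliding with the rate limiter.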
