A living ecosystem of digital wildlife, brought to life through the ancient art of origami and modern agentic AI.
Paper Zoo reimagines the origami learning curve. We've replaced static PDF diagrams with a Socratic Agentic Loop. By combining multimodal AI vision with a tactile Neubrutalist design system, we turn every physical fold into a persistent digital achievement.
> **Tip:** This isn't just a tutorial—it's a digital sanctuary where your real-world craft populates a living habitat.
Powered by Gemini 3.1 Flash-Lite, our tutoring agent doesn't just show you what to do—it understands why you're stuck.
- Dynamic Diagrams: High-fidelity SVG generation for every step.
- Contextual Guidance: Ask "Why is my corner overlapping?" and get a spatial explanation based on the current model state.
Stop guessing if your "reverse mountain fold" was correct.
- Snapshot Verification: Capture your progress via webcam.
- Multimodal Evaluation: Our vision pipeline analyzes the geometric accuracy of your fold against the gold-standard rubric.
- Constructive Loops: Receive precision feedback ("Sharpen the beak crease for better stability") instead of a simple pass/fail.
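The constructive loop hinges on the agent returning structured feedback rather than free text. A minimal sketch of validating that payload before it drives the UI (the `FoldFeedback` shape and `parseFoldFeedback` helper are illustrative names, not the project's actual code):

```typescript
// Shape of the grading payload the verification agent is expected to return.
// (Illustrative type; the real project may name these fields differently.)
interface FoldFeedback {
  grade: number; // 0-10 geometric-accuracy score
  tip: string;   // constructive guidance, e.g. "Sharpen the beak crease"
}

// Parse and validate the model's raw text output. Models occasionally wrap
// JSON in markdown fences, so strip those defensively before parsing.
function parseFoldFeedback(raw: string): FoldFeedback {
  const cleaned = raw.replace(/```(?:json)?/g, "").trim();
  const data = JSON.parse(cleaned);
  if (typeof data.grade !== "number" || data.grade < 0 || data.grade > 10) {
    throw new Error(`Invalid grade: ${data.grade}`);
  }
  if (typeof data.tip !== "string" || data.tip.length === 0) {
    throw new Error("Missing feedback tip");
  }
  return { grade: data.grade, tip: data.tip };
}
```

Validating up front means a malformed model reply surfaces as a retry rather than a broken habitat unlock.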
Every verified fold earns you a resident in your personal zoo.
- Persistent Progression: Your scores and history are stored in Supabase, driving a personalized growth graph.
- Interactive Community: Visit other habitats, share tips, and showcase your "Pippin the Penguin" collection.
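Persisting a verified fold is a single Supabase write. A sketch using `@supabase/supabase-js`, assuming a hypothetical `fold_results` table and an unlock threshold of 7 (the table name, columns, and threshold are illustrative, not the project's actual schema):

```typescript
import { createClient } from "@supabase/supabase-js";

// Client configured from the same env vars listed in the setup section below.
const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
);

// Record a graded fold; rows like these drive the personalized growth graph.
// (Assumed table/column names; the unlock threshold is a guess for illustration.)
export async function recordFold(
  userId: string,
  modelId: string,
  grade: number,
  tip: string
): Promise<void> {
  const { error } = await supabase.from("fold_results").insert({
    user_id: userId,
    model_id: modelId,
    grade,
    tip,
    unlocked: grade >= 7, // assumed threshold for earning a zoo resident
  });
  if (error) throw error;
}
```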
Our architecture is built on the Agentic Verification Loop. The frontend acts as a thin shell for high-interaction AI components.
```mermaid
sequenceDiagram
    participant U as User (Physical)
    participant F as Next.js 15
    participant G as Gemini Agent
    participant S as Supabase (Memory)
    U->>F: Upload Fold Photo
    F->>G: POST /api/verify (Base64 + Rubric)
    Note over G: Multimodal Reasoning
    G-->>F: JSON { grade: 8, tip: "Crisper edges" }
    F->>S: Store Result & Unlock Sprite
    S-->>F: Success
    F->>U: Reveal Pixel Character in Zoo
```
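The `/api/verify` step in the sequence above can be sketched as a Next.js route handler calling the Gemini `generateContent` REST endpoint. This is a sketch under assumptions: the model id, rubric wording, and response handling are placeholders, not the project's actual implementation.

```typescript
// Sketch of POST /api/verify: forward the snapshot plus rubric to Gemini
// and relay the structured grade back to the client.
export async function POST(req: Request): Promise<Response> {
  const { imageBase64, rubric } = await req.json();

  const res = await fetch(
    "https://generativelanguage.googleapis.com/v1beta/models/" +
      // Model id below is a placeholder, not necessarily what the project uses.
      `gemini-flash-latest:generateContent?key=${process.env.GEMINI_API_KEY}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        contents: [
          {
            parts: [
              { inline_data: { mime_type: "image/jpeg", data: imageBase64 } },
              {
                text:
                  `Grade this origami fold against the rubric: ${rubric}. ` +
                  'Reply with JSON only: { "grade": <0-10>, "tip": "<advice>" }',
              },
            ],
          },
        ],
      }),
    }
  );

  const data = await res.json();
  // Gemini returns candidates[].content.parts[].text; extract the JSON reply.
  const text = data.candidates?.[0]?.content?.parts?.[0]?.text ?? "{}";
  return Response.json(JSON.parse(text));
}
```

In production this handler would also validate the model's reply before trusting the grade, rather than parsing it blindly.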
At the heart of the habitat is a custom, high-performance rendering system designed for crisp "handmade" aesthetics.
Located in frontend/components/animation/PixelCharacter.tsx, this engine renders character sprites as a precise grid of `div` elements. This ensures:
- Zero Blurring: Perfect scaling for 2026 high-DPI displays.
- Dynamic Theming: CSS-driven color swaps for specialized "shiny" or "patterned" variants.
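The div-grid idea can be sketched as a pure mapping from a sprite matrix to positioned cells; each non-zero cell becomes one fixed-size `div`, so scaling is an integer multiple and never blurred by image resampling. The function and type names here are illustrative, not the actual `PixelCharacter.tsx` internals:

```typescript
// Palette maps sprite values to CSS colors; swapping palettes is what
// enables "shiny" or "patterned" variants without touching the grid.
type Palette = Record<number, string>;

interface PixelCell {
  x: number;     // grid column
  y: number;     // grid row
  color: string; // resolved CSS color
}

// Flatten a sprite matrix into positioned cells; 0 means transparent.
function spriteToCells(grid: number[][], palette: Palette): PixelCell[] {
  const cells: PixelCell[] = [];
  grid.forEach((row, y) =>
    row.forEach((value, x) => {
      if (value !== 0) cells.push({ x, y, color: palette[value] });
    })
  );
  return cells;
}

// Rendering then reduces to one absolutely positioned div per cell:
//   <div style={{ position: "absolute", left: x * size, top: y * size,
//                 width: size, height: size, background: color }} />
```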
The habitat utilizes a custom-built interaction layer within ZooCanvas.tsx that handles:
- Character AI: Unique movement patterns (Waddling, Hopping, Flapping) triggered by successful folds.
- Collision Logic: A lightweight spatial grid for maintaining a clean, non-overlapping habitat.
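The spatial-grid idea above can be sketched as a small hash of coarse cells: a placement check only inspects the 3x3 neighborhood of buckets around a point instead of every resident. The class name, cell size, and minimum-distance values are illustrative, not the actual `ZooCanvas.tsx` code:

```typescript
// Lightweight spatial hash: characters are bucketed by coarse grid cell so
// collision checks stay O(neighbors) rather than O(all residents).
class SpatialGrid {
  private cells = new Map<string, { x: number; y: number }[]>();

  // minDistance must not exceed cellSize, or the 3x3 scan would miss hits.
  constructor(private cellSize = 64, private minDistance = 48) {}

  private key(cx: number, cy: number): string {
    return `${cx},${cy}`;
  }

  // True if a character at (x, y) would sit too close to an existing one.
  collides(x: number, y: number): boolean {
    const cx = Math.floor(x / this.cellSize);
    const cy = Math.floor(y / this.cellSize);
    for (let dx = -1; dx <= 1; dx++) {
      for (let dy = -1; dy <= 1; dy++) {
        for (const p of this.cells.get(this.key(cx + dx, cy + dy)) ?? []) {
          if (Math.hypot(p.x - x, p.y - y) < this.minDistance) return true;
        }
      }
    }
    return false;
  }

  // Register a placed character in its bucket.
  insert(x: number, y: number): void {
    const k = this.key(Math.floor(x / this.cellSize), Math.floor(y / this.cellSize));
    if (!this.cells.has(k)) this.cells.set(k, []);
    this.cells.get(k)!.push({ x, y });
  }
}
```

A new resident is placed by retrying random positions until `collides` returns false, which keeps the habitat clean without a physics engine.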
- Node.js 20.x
- Tailwind CSS v4 (Engineered for CSS-first performance)
- Supabase CLI (For local migrations)
- Clone & Install:

  ```bash
  git clone https://github.com/pmadaan1/Wildhacks-2026.git
  cd frontend && npm install
  ```

- Environment: Fill your `.env.local` with keys from Google AI Studio and Supabase:

  ```bash
  NEXT_PUBLIC_SUPABASE_URL=...
  NEXT_PUBLIC_SUPABASE_ANON_KEY=...
  GEMINI_API_KEY=...
  ```

- Launch:

  ```bash
  npm run dev
  ```
MIT License. Developed with a focus on Accessible AI—making complex skills attainable through playful interaction.
Constructed for Wildhacks 2026.