IIIT Naya Raipur | 2× LION 2026 | DRDO Award | Amazon ML Summer School
```python
parth = {
    "role": "AI Researcher & Builder",
    "education": "B.Tech CS @ IIIT Naya Raipur (2023–2027)",
    "research": {
        "accepted": "AdaptSRNet – LION 2026 (adaptive attention, image steganalysis)",
        "under_review": "Cold-Start Learning Under Domain Uncertainty – LION 2026",
    },
    "expertise": {
        "llm_systems": ["RAG", "Fine-Tuning (LoRA/QLoRA/DoRA/PEFT)", "LLM Agents", "Multi-Agent Orchestration"],
        "voice_ai": ["STT/TTS Pipelines", "Wake-word Detection", "Multimodal LLMs"],
        "infra": ["FastAPI", "Async Architectures", "Go", "Docker", "ChromaDB"],
        "frontend": ["Next.js", "React", "TypeScript", "D3.js", "Tailwind CSS"],
    },
    "achievements": {
        "awards": "🏆 DRDO Best Exhibitor – ICORT '25",
        "selection": "🎓 Amazon ML Summer School (one of a few hundred selected from thousands of applicants)",
        "competitions": "🥇 1st – E-Cell Ideathon | Top 10/176 – IIT Delhi Vision Marathon | Top 5 – IIT Bhilai",
        "cp": "💻 LeetCode 300+ | Codeforces Pupil (peak 1150)",
    },
    "current_focus": "Building production-grade AI systems that survive contact with reality",
}
```
🧠 PaperLens – AI Research Assistant
FastAPI · React · TypeScript · ChromaDB · Groq · NVIDIA NIM · Qwen (fine-tuned)

🤖 SAGE – Autonomous Voice Desktop Agent
Python · LLMs · Multi-Agent · STT/TTS · Runtime Tool Synthesis

💰 LUMEN – Multimodal AI Financial Platform
FastAPI · Next.js · Go · MCP Server · OAuth · LLMs

⚡ autoresearch-lite – Autonomous ML Research Agent
Python · PyTorch SDPA · LLMs · Muon Optimizer
🔬 PEFT Showdown – Fine-Tuning Methods Benchmark
PyTorch · HuggingFace PEFT · TRL · Kaggle T4
Benchmarked 6 PEFT methods (LoRA, QLoRA, DoRA, VeRA, AdaLoRA, IA3) on SmolLM2-1.7B – same model, same data, same hardware. Built to answer one question: which method should you actually use?
Key findings: DoRA – best final loss · IA3 – LoRA-level quality at 10× fewer params · QLoRA – the honest default for free-GPU users · Phase 2: safety-alignment degradation testing in progress.
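A minimal sketch (not the repo's actual code) of how two of the benchmarked methods are set up with HuggingFace PEFT: DoRA rides on a standard LoRA config via a flag, while IA3 trains tiny per-activation scaling vectors. The target modules and hyperparameters here are illustrative assumptions.

```python
# Hypothetical sketch: configuring two of the benchmarked PEFT methods with
# HuggingFace `peft`. Hyperparameters are illustrative, not the repo's settings.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, IA3Config, get_peft_model

base = AutoModelForCausalLM.from_pretrained("HuggingFaceTB/SmolLM2-1.7B")

# DoRA is enabled as a flag on top of a standard LoRA configuration.
dora_cfg = LoraConfig(
    r=16, lora_alpha=32, use_dora=True,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM",
)

# IA3 learns per-activation scaling vectors: far fewer trainable parameters.
ia3_cfg = IA3Config(
    target_modules=["q_proj", "v_proj", "down_proj"],
    feedforward_modules=["down_proj"], task_type="CAUSAL_LM",
)

model = get_peft_model(base, dora_cfg)
model.print_trainable_parameters()  # compare against the IA3 variant
```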
AdaptSRNet: Enhancing Image Steganalysis via Adaptive Filter-Attention Fusion
Compact deep architecture: learnable SRM filter banks + SE/CBAM multi-scale attention fusion for high-precision binary steganalysis. Evaluated on WOW stego images at 0.2 bpp with full ablation studies.
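For flavour, a minimal sketch of an SE-style channel-attention block, the kind of attention AdaptSRNet fuses with learnable SRM filter banks; the channel count and reduction ratio are illustrative assumptions, not the paper's exact design.

```python
# Hypothetical sketch of an SE (squeeze-and-excitation) channel-attention
# block. Shapes and reduction ratio are assumptions, not the paper's design.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: global spatial context
        self.fc = nn.Sequential(             # excitation: per-channel weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                          # reweight the feature maps

# e.g. attention over 30 SRM-filtered residual maps
feats = torch.randn(4, 30, 256, 256)
print(SEBlock(30)(feats).shape)  # torch.Size([4, 30, 256, 256])
```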
Cold-Start Learning Under Domain Uncertainty: A Strategy Selection Perspective
Reframed cold-start forecasting as a strategy-selection problem. Used normalised Wasserstein distance as the domain-divergence criterion – no target labels needed. Showed that scratch training outperforms transfer learning under measurable domain mismatch.
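A minimal sketch of the strategy-selection idea under assumed inputs: compare source and target feature distributions with a normalised 1-D Wasserstein distance and pick the training strategy by threshold. The normalisation scheme and threshold value below are assumptions, not the paper's.

```python
# Hypothetical sketch of a label-free strategy-selection rule in the spirit
# of the paper: normalised Wasserstein distance between source and target
# feature distributions. Normalisation and threshold are assumptions.
import numpy as np
from scipy.stats import wasserstein_distance

def normalised_w1(source: np.ndarray, target: np.ndarray) -> float:
    scale = np.std(source) + 1e-8        # scale-normalise the divergence
    return wasserstein_distance(source, target) / scale

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, 5000)          # source-domain feature values
tgt = rng.normal(2.5, 1.0, 200)           # small, shifted target domain

div = normalised_w1(src, tgt)
strategy = "train from scratch" if div > 1.0 else "transfer learning"
print(f"divergence={div:.2f} -> {strategy}")
```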
Status: 🟢 Open to AI/ML Internships | Location: Naya Raipur, India | Email: parth23100@iiitnr.edu.in
✍️ From Parth Patel | Grad: June 2027 🎓