By Ayush
This is my personal lab notebook — a messy, evolving record of my machine learning journey.
I started this repo to:
- Learn by building, not just reading.
- Document experiments that often fail, sometimes work, and occasionally surprise me.
- Create a reference I can revisit when I forget why `learning_rate` matters or how to debug overfitting.
No corporate buzzwords. Just code, math, and curiosity.

Inside this repo:
- From Scratch: Implementations of algorithms (because understanding the basics feels like magic).
- Real-World Projects: Messy pipelines on public datasets (Kaggle, research papers, etc.).
- Experiments: Results of "What happens if I try...?" moments (spoiler: usually overfitting).
- Notes: Quick references for concepts I’m actively learning (e.g., PCA, Bayesian optimization).
Think of this as my digital brain dump for ML.
- No strict rules: I add notebooks when I learn something new and delete them if they turn out to be nonsense.
- No perfection: Code is messy where it doesn’t matter. Clean where it does.
- No deadlines: This grows at my pace.
Dear Future Ayush,
When you look back here:
- Celebrate progress, not perfection.
- Revisit the notebooks that made you go "Oh right, that’s how that works!"
- Keep adding the stuff you’re scared to admit you don’t know yet.
— Past Ayush
Feel free to borrow ideas, tweak code, or laugh at my early attempts.
If you find this useful, let me know — it’ll make my day.
"Learn by doing. Fail by trying. Grow by iterating."