Explore the cosmos with Llama 4-powered narration and an intelligent chat assistant.
## Prerequisites

- Node.js 18+
- Python 3.9+
- Llama 4 API key
## Setup

- Clone the repository:

  ```bash
  git clone [your-repo-url]
  cd astronoma
  ```

- Set up the backend:

  ```bash
  cd backend
  python -m venv venv
  source venv/bin/activate  # Windows: venv\Scripts\activate
  pip install -r requirements.txt
  cp .env.example .env
  # Add your LLAMA_API_KEY to .env
  ```

- Set up the frontend:

  ```bash
  cd ../frontend
  npm install
  cp .env.example .env
  ```

- Start the backend (from `/backend`):

  ```bash
  uvicorn app.main:socket_app --host localhost --port 3000 --reload
  ```

- Start the frontend (from `/frontend`, in a new terminal):

  ```bash
  npm run dev
  ```

- Open http://localhost:5173 in your browser
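The two `.env` files need at least the variables mentioned in the troubleshooting checklist. A minimal sketch (the placeholder key and the URL value are assumptions based on the commands above, not values from the repo):

```env
# backend/.env
LLAMA_API_KEY=your-llama-api-key

# frontend/.env -- must point at the backend started on port 3000
VITE_API_URL=http://localhost:3000
```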
## Features

- 🌌 3D visualization of the solar system
- 🎙️ AI-generated narration in multiple languages
- 💬 Intelligent chat assistant for navigation
- 🔍 Search functionality for celestial objects
- 🌐 Multilingual support (English, Spanish, French, Hindi)
## Controls

- Click and drag: rotate the view
- Scroll: zoom in/out
- Click a planet: view its information
- Chat: "Take me to Mars", "What's the largest planet?"
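Chat commands like the ones above ultimately resolve to a navigation target. The real assistant sends the message to the Llama 4 backend; purely as an illustration of the kind of intent it extracts, a naive keyword parser (the function name and logic here are hypothetical, not the project's code) might look like:

```python
# Hypothetical sketch: map a chat message to a navigation target.
# The real app delegates this to the Llama 4 API; this only
# illustrates the intent being extracted.
from typing import Optional

PLANETS = {"mercury", "venus", "earth", "mars",
           "jupiter", "saturn", "uranus", "neptune"}

def parse_navigation_command(message: str) -> Optional[str]:
    """Return the planet named in a 'take me to ...' style command, if any."""
    words = message.lower().replace("?", " ").replace(".", " ").split()
    for word in words:
        if word in PLANETS:
            return word
    return None
```

For example, `parse_navigation_command("Take me to Mars")` returns `"mars"`, while a question with no planet name, such as "What's the largest planet?", returns `None` and would be handled as a general chat query.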
## Tech Stack

- Frontend: React, Three.js, TypeScript, Tailwind CSS
- Backend: FastAPI, Python, Socket.io
- AI: Llama 4 API
## Project Structure

```
astronoma/
├── frontend/   # React frontend
├── backend/    # Python backend
└── README.md   # This file
```
## Troubleshooting

Backend:

- Ensure Python 3.9+ is installed
- Check that all dependencies are installed
- Verify LLAMA_API_KEY is set in .env

Frontend connection:

- Ensure the backend is running on port 3000
- Check VITE_API_URL in the frontend .env

Narration audio:

- Check browser audio permissions
- Ensure the browser supports the Web Speech API
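The environment-variable checks above are easy to automate. A small helper like this (hypothetical, not part of the repo) reports which required variables are unset or empty:

```python
# Hypothetical helper for the troubleshooting checklist: report which
# required environment variables are absent or empty.
import os
from typing import List, Mapping

def missing_env_vars(required: List[str],
                     env: Mapping[str, str] = os.environ) -> List[str]:
    """Return the names in `required` that are unset or empty in `env`."""
    return [name for name in required if not env.get(name)]
```

For example, `missing_env_vars(["LLAMA_API_KEY"], {})` returns `["LLAMA_API_KEY"]`, signalling the backend key still needs to be added to `.env`.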
## License

MIT





