AI-assisted home fridge inventory with use-by suggestions and hazard highlighting (see Description.md and docs/inventory-and-spoilage-alerts.md).
Stack: Next.js 15, React 19, Prisma + SQLite, and the OpenAI API for structured hazard + shelf-life inference (default model gpt-5.4-mini), with local rules if the API is not configured.
- Install Node.js (LTS, includes npm).
- In a terminal:

  ```
  cd web
  cp .env.example .env
  ```

- Edit `web/.env` and set your `OPENAI_API_KEY` (from OpenAI API keys).
- Then:

  ```
  npm install
  npx prisma db push
  npm run dev
  ```

- Open http://localhost:3000.
Without `OPENAI_API_KEY`, the app still runs; use-by and hazard hints then come from local rules and tables only.
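As a rough illustration, that local fallback can be pictured as a shelf-life lookup table with a cautious default. The item names and day counts below are invented for this sketch and are not the app's actual rules (those live under `web/src/lib`):

```typescript
// Hypothetical shelf-life table (days from being added). Values are
// illustrative only, not the app's curated rules.
const SHELF_LIFE_DAYS: Record<string, number> = {
  milk: 7,
  "cooked rice": 2,
  "raw chicken": 2,
  "hard cheese": 30,
};

// Cautious default when an item is not in the table.
const DEFAULT_DAYS = 3;

// Suggest a use-by date by adding the item's shelf life to its added-at date.
function suggestUseBy(name: string, addedAt: Date): Date {
  const days = SHELF_LIFE_DAYS[name.toLowerCase()] ?? DEFAULT_DAYS;
  const d = new Date(addedAt);
  d.setUTCDate(d.getUTCDate() + days); // UTC arithmetic keeps the result timezone-stable
  return d;
}
```

Biasing the unknown-item default low errs toward discarding food early rather than keeping it too long, matching the caution mentioned below.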
| Variable | Required | Notes |
|---|---|---|
| `OPENAI_API_KEY` | Yes (for AI) | Your `sk-…` key. |
| `OPENAI_MODEL` | No | Defaults to `gpt-5.4-mini`. |
| `DATABASE_URL` | Yes | Default in `.env.example`: `file:./dev.db` (SQLite under `web/prisma/`). |
If you set `OPENAI_BASE_URL` (e.g. `http://127.0.0.1:11434/v1` for Ollama), the app talks to that server instead. Set `OPENAI_MODEL` to match the local model name (e.g. `llama3.2`). If you also keep an `sk-…` key, a failed local call can retry on OpenAI using `OPENAI_FALLBACK_MODEL` (default `gpt-5.4-mini`). See the `web/.env.example` history or ask in the repo if you need this.
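Put together, a `web/.env` pointing at a local Ollama server might look like this (values are illustrative, not a tested configuration):

```
# Local inference server (OpenAI-compatible endpoint)
OPENAI_BASE_URL=http://127.0.0.1:11434/v1
OPENAI_MODEL=llama3.2

# Optional: keep an OpenAI key so failed local calls can retry upstream
OPENAI_API_KEY=sk-...
OPENAI_FALLBACK_MODEL=gpt-5.4-mini

DATABASE_URL=file:./dev.db
```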
```
Fridge/
├── Description.md        # Master feature list
├── docs/                 # Feature specs
├── README.md
└── web/                  # Next.js application
    ├── prisma/
    │   └── schema.prisma
    ├── src/
    │   ├── app/          # Routes & API
    │   ├── components/
    │   └── lib/          # Inference, rules, Prisma client
    ├── package.json
    └── .env.example
```
- `GET /api/health` — Liveness + LLM config (`llmConfigured`, `provider`, `model`).
- `GET /api/items?sort=useBy|addedAt` — Active inventory.
- `POST /api/items` — Create item (runs inference; optional `useBy` override).
- `POST /api/infer` — Preview suggestion without saving.
- `PATCH /api/items/[id]` — Update item.
- `DELETE /api/items/[id]` — Remove item.
- `POST /api/items/[id]/consume` — Mark consumed.
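A minimal TypeScript client sketch against these routes; the `/api/infer` request body shape is an assumption for illustration, not taken from the actual API:

```typescript
// Hypothetical request body for POST /api/infer (field names assumed).
type InferRequest = { name: string; category?: string };

// Build the documented items URL for a given sort key.
function itemsUrl(base: string, sort: "useBy" | "addedAt"): string {
  return `${base}/api/items?sort=${sort}`;
}

// Preview a use-by suggestion without saving the item.
async function previewUseBy(base: string, body: InferRequest): Promise<unknown> {
  const res = await fetch(`${base}/api/infer`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
  if (!res.ok) throw new Error(`infer failed: ${res.status}`);
  return res.json();
}

// Usage (requires the dev server running):
// const suggestion = await previewUseBy("http://localhost:3000", { name: "milk" });
```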
- OpenAI usage is billed; calls need internet.
- Accuracy: Models can be wrong about food safety. The UI includes disclaimers; curated rules bias toward caution.
If anything fails (e.g. Prisma errors, or port 3000 already in use), check the terminal output from `npm run dev` or `npx prisma db push`.