# Megamind Content Studio

Local-first AI studio for turning rough ideas into polished posts and story-driven content.

Desktop UX when you want to steer. Folder-based service mode when you want the machine to keep moving.
Most AI writing tools stop at "generate text."
Megamind Content Studio is built around a more practical question: what if your local AI stack could behave like a small production studio? One that can:
- Research the idea
- Shape the draft
- Reformat it for the target channel
- Run headlessly from a watched folder when you do not want to babysit the UI
- Keep the whole workflow close to your machine and your models
That is the point of this repo.
> Replace `docs/assets/demo-placeholder.svg` with your recorded product GIF when ready.
## Highlights

- **Local-first by default** using Ollama instead of outsourcing the core writing loop.
- **Two operating modes** with a guided Electron flow and a folder-driven service runner.
- **Prompt pipeline, not prompt chaos** with a backend that separates raw ideation from platform formatting.
- **Story-friendly direction** through the `storyteller` default model and service job examples for comic-style narrative work.
- **Windows-native practicality** with batch scripts, folder watchers, and service-oriented operation.
## Quick start

Install dependencies:

```
npm install
python -m venv .venv
.venv\Scripts\activate
pip install -r requirements.txt
```

Configure `.env`:

```
OLLAMA_URL=http://localhost:11434
OLLAMA_MODEL=storyteller
OLLAMA_TIMEOUT=2000
STORAGE_DIR=backend/storage
SERVICE_INPUT_DIR=SERVICE_INPUT
SERVICE_OUTPUT_DIR=SERVICE_OUTPUT
```

Start the backend and the desktop app:

```
uvicorn backend.main:app --reload
npm start
```

Or use the Windows launcher:

```
run.bat
```

`run.bat` lets you choose between UI mode and Service mode.
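Once the backend is running, you can sanity-check it against the `GET /health` endpoint (listed under Backend endpoints below). A minimal sketch using only the standard library, assuming the uvicorn default port 8000:

```python
# Quick sanity check that the FastAPI backend is up.
# Assumption: the default uvicorn address; adjust if you use another port.
import json
import urllib.request

BACKEND_URL = "http://localhost:8000"

def check_health() -> None:
    with urllib.request.urlopen(f"{BACKEND_URL}/health", timeout=5) as resp:
        # Prints the HTTP status and whatever JSON body the endpoint returns.
        print(resp.status, json.loads(resp.read() or b"{}"))

if __name__ == "__main__":
    check_health()
```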
### UI mode

- Enter a topic, context, platform, mood, and model in the Electron app.
- Generate a raw research-style passage through FastAPI.
- Review and edit the draft.
- Format the result for a target platform.
- Export the final output.
### Service mode

- Drop a text job into the watched input folder.
- The Python worker claims the file, parses it, and runs the same backend logic without the visible UI (see the sketch after this list).
- Outputs, manifests, and logs are written to the configured service folders.
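The shipped worker lives in `backend/worker/`; purely to illustrate the claim-then-process pattern described above, here is a minimal sketch. The rename-to-claim convention, file suffixes, and `process_job` helper are assumptions for the example, not the repo's actual API:

```python
# Illustrative sketch only -- the real runner is backend/worker/service_runner.
# Assumption: a job is "claimed" by renaming it, so it cannot be picked up twice.
import time
from pathlib import Path

INPUT_DIR = Path("SERVICE_INPUT")    # matches SERVICE_INPUT_DIR in .env
OUTPUT_DIR = Path("SERVICE_OUTPUT")  # matches SERVICE_OUTPUT_DIR in .env

def process_job(job_file: Path) -> str:
    """Placeholder for the backend generation + formatting pipeline."""
    return f"processed: {job_file.read_text(encoding='utf-8')}"

def watch_loop() -> None:
    INPUT_DIR.mkdir(exist_ok=True)
    OUTPUT_DIR.mkdir(exist_ok=True)
    while True:
        for job in sorted(INPUT_DIR.glob("*.txt")):
            claimed = job.with_suffix(".claimed")
            try:
                job.rename(claimed)  # atomic on the same filesystem
            except OSError:
                continue  # another worker claimed it first
            result = process_job(claimed)
            (OUTPUT_DIR / f"{job.stem}.out.txt").write_text(result, encoding="utf-8")
            claimed.unlink()
        time.sleep(2)

if __name__ == "__main__":
    watch_loop()
```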
## Architecture

```mermaid
flowchart LR
    A["Electron UI"] --> B["IPC Layer"]
    B --> C["FastAPI Backend"]
    C --> D["Prompt Builder"]
    D --> E["Ollama"]
    C --> F["Storage + Exports"]
    G["Folder Service Runner"] --> C
    G --> F
```
### Stack breakdown

- `electron/` hosts the desktop shell and IPC bridge.
- `frontend/` contains the screens, styling, and UI controller logic.
- `backend/` contains FastAPI routes, schemas, prompt builders, Ollama integration, and service worker modules.
- `docs/` captures architecture notes and batch-mode guidance.
- **Electron** handles the visible workflow.
- **FastAPI** owns the request/response contract.
- **Ollama** generates raw and formatted writing locally.
- **Python worker** powers background folder processing.
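To make "FastAPI owns the request/response contract" concrete, here is a hedged sketch of what a generation route could look like. The field names mirror the UI inputs (topic, context, platform, mood, model), but the schema and handler are illustrative, not the actual code in `backend/routes/`:

```python
# Illustrative only -- the real routes live in backend/routes/,
# with request models in backend/schemas/.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class GenerateRawRequest(BaseModel):
    # Field names mirror the UI inputs; the shipped schema may differ.
    topic: str
    context: str = ""
    platform: str = "linkedin"
    mood: str = "neutral"
    model: str = "storyteller"

@app.post("/generate-raw")
def generate_raw(req: GenerateRawRequest) -> dict:
    # In the real backend this builds a prompt and calls Ollama;
    # here we return a stub so the contract shape is visible.
    return {"draft": f"[raw draft about {req.topic} for {req.platform}]"}
```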
### Service mode details

- Input folder: `SERVICE_INPUT`
- Output folder: `SERVICE_OUTPUT`
- Worker base: `backend/storage/service`
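These locations come from `.env` (see Quick start). As a hedged sketch of how such settings can be loaded, with defaults matching the values shown earlier; the actual constants live in `backend/core/`, and this loader is hypothetical:

```python
# Hypothetical settings loader; the real settings live in backend/core/.
import os
from pathlib import Path

# Defaults match the .env values from Quick start.
SERVICE_INPUT_DIR = Path(os.environ.get("SERVICE_INPUT_DIR", "SERVICE_INPUT"))
SERVICE_OUTPUT_DIR = Path(os.environ.get("SERVICE_OUTPUT_DIR", "SERVICE_OUTPUT"))
STORAGE_DIR = Path(os.environ.get("STORAGE_DIR", "backend/storage"))
OLLAMA_URL = os.environ.get("OLLAMA_URL", "http://localhost:11434")
OLLAMA_MODEL = os.environ.get("OLLAMA_MODEL", "storyteller")
```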
Example job file:

```text
JOB_TYPE: story
TOPIC: The Little Bridge of Courage
MODEL: storyteller
PLATFORM: linkedin
MOOD: Electric
JOB_ID: little-bridge-of-courage
CONTEXT:
Create an 8-panel comic-style story with a gentle, uplifting emotional arc.
```
Run the worker directly with:

```
python -m backend.worker.service_runner
```

Useful docs live in `docs/`.
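The job format above is simple enough to parse by hand. A minimal sketch, assuming `KEY: value` headers followed by a free-form `CONTEXT:` block; the function is hypothetical, and the repo's own parsing lives under `backend/worker/` and `backend/services/`:

```python
# Hypothetical parser for the job format shown above.
def parse_job(text: str) -> dict:
    job: dict = {}
    lines = text.splitlines()
    for i, line in enumerate(lines):
        if line.strip() == "CONTEXT:":
            # Everything after CONTEXT: is free-form body text.
            job["CONTEXT"] = "\n".join(lines[i + 1:]).strip()
            break
        if ":" in line:
            key, _, value = line.partition(":")
            job[key.strip()] = value.strip()
    return job

sample = """JOB_TYPE: story
TOPIC: The Little Bridge of Courage
CONTEXT:
Create an 8-panel comic-style story."""
print(parse_job(sample))  # {'JOB_TYPE': 'story', 'TOPIC': '...', 'CONTEXT': '...'}
```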
### Backend endpoints

- `GET /health`
- `POST /generate-raw`
- `POST /format-post`
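A hedged end-to-end client sketch chaining the two generation endpoints. The JSON field names are assumptions borrowed from the UI inputs, so check the request models in `backend/schemas/` before relying on them:

```python
# Illustrative client; verify payload fields against backend/schemas/.
import json
import urllib.request

BACKEND_URL = "http://localhost:8000"  # assumption: uvicorn default port

def post(path: str, payload: dict) -> dict:
    req = urllib.request.Request(
        f"{BACKEND_URL}{path}",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        return json.loads(resp.read())

raw = post("/generate-raw", {
    "topic": "The Little Bridge of Courage",
    "context": "8-panel comic-style story",
    "platform": "linkedin",
    "mood": "Electric",
    "model": "storyteller",
})
# Assumption: the raw response can be fed into the formatting stage.
final = post("/format-post", {**raw, "platform": "linkedin"})
print(final)
```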
### Repo map

```text
.
|-- electron/           Desktop shell and IPC
|-- frontend/           Screens, styles, and controller logic
|-- backend/
|   |-- core/           Settings and constants
|   |-- routes/         FastAPI endpoints
|   |-- schemas/        Request models
|   |-- services/       Prompting, Ollama calls, exports, worker helpers
|   |-- worker/         Folder-based background runner
|   `-- storage/        Artifacts, examples, logs
|-- docs/               Architecture and service docs
|-- run.bat             Windows launcher
|-- backend.bat         Backend starter
|-- frontend.bat        Frontend starter
`-- folder_service.bat  Folder worker starter
```
## What works well

- Clean split between generation and formatting stages.
- Easy local experimentation with Ollama models through `.env`.
- Practical batch workflow for Windows users who prefer folders over dashboards.
- Repo already contains story-oriented sample jobs and service examples.
## Current limitations

- The project is still experimental and evolving quickly.
- Some operational behavior is documented more deeply in `docs/` than in code.
- A polished product demo GIF still needs to be recorded and dropped into the README.
## Key facts

- Default model in `.env`: `storyteller`
- Backend framework: FastAPI
- Desktop shell: Electron
- Local model runtime: Ollama
## License

This repository is currently marked UNLICENSED.
