Roadmap
JoyBoy's roadmap is centered on one idea: make a powerful local AI workstation feel stable, predictable, and pleasant on real consumer machines.
The project is not trying to become a cloud platform. It is trying to become a strong local harness that coordinates local models, runtime state, image tools, addons, and agent workflows without forcing users to juggle scripts and fragile UI layers.
JoyBoy should become a local-first AI workstation where users can:
- Chat with local models.
- Generate and edit images locally.
- Import models from supported providers.
- Use optional local packs without polluting the public core.
- Run long local jobs with clear progress, cancellation, and recovery.
- Open a project workspace and use a Codex-style local assistant to inspect, edit, and reason about a repository.
The long-term goal is a stable local harness that feels closer to a polished workstation than a pile of scripts.
The biggest priority is stability.
Before expanding too aggressively, JoyBoy needs stronger guarantees around:
- Runtime state cleanup after cancelled jobs.
- Better model unload and reload behavior.
- Clearer errors when CUDA, PyTorch, providers, or model files are misconfigured.
- Safer transitions between chat, image, video, and project workflows.
- Better first-run setup feedback while assets and model helpers download.
- More tests around routing, model imports, local packs, and UI state.
Success means the user can stop a job, switch workflows, reload the app, or change models without the app getting stuck in a weird half-running state.
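The cancellation-and-cleanup guarantee above can be sketched as a job wrapper that always releases runtime state, even when a job is cancelled mid-run or raises. The names here (`LocalJob`, `_release_runtime`) are illustrative assumptions, not JoyBoy's actual API:

```python
import threading

class LocalJob:
    """Minimal sketch of a cancellable job with guaranteed state cleanup.

    Hypothetical example: class and method names are not JoyBoy's real API.
    """

    def __init__(self, steps):
        self._steps = steps                 # callables to run in order
        self._cancel = threading.Event()    # set from the UI thread to cancel
        self.state = "idle"

    def cancel(self):
        self._cancel.set()

    def run(self):
        self.state = "running"
        try:
            for step in self._steps:
                if self._cancel.is_set():
                    self.state = "cancelled"
                    return
                step()
            self.state = "done"
        finally:
            # Always release runtime state, even on cancel or error,
            # so the app never sits in a half-running state.
            if self.state == "running":
                self.state = "error"
            self._release_runtime()

    def _release_runtime(self):
        pass  # placeholder: unload models, free VRAM, close file handles
```

The key design point is the `finally` block: cleanup runs on every exit path, which is what lets the user stop a job and immediately switch workflows.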
The Codex-style project mode is one of the most important roadmap items.
The target behavior is:
- A user chooses a local workspace.
- JoyBoy opens a dedicated project conversation named after that workspace.
- The assistant can inspect files, summarize the repository, propose changes, and eventually make controlled edits.
- The UI stays close to a normal conversation, but clearly shows that this is a project-aware session.
- Tool calls should be bounded, readable, cancellable, and recoverable.
This mode should become useful for real development work, not just a terminal-looking experiment.
See Project Mode for more detail.
JoyBoy has mostly been tested on 8GB VRAM machines so far.
That matters because many users do not have workstation GPUs, and the app should remain useful on consumer hardware.
Current priorities:
- Keep improving model scheduling and unloading.
- Prefer quantized runtime paths where possible.
- Avoid keeping chat models loaded during heavy image/video jobs.
- Make first-run downloads and runtime placement visible in the UI.
- Detect CPU-only PyTorch installs early instead of failing deep inside generation.
- Keep performance acceptable on 8GB VRAM without making the app confusing.
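The early CPU-only detection mentioned above can be done cheaply at startup, before any generation job begins. A minimal sketch, assuming a helper name of our own invention (`detect_torch_runtime`) and standard PyTorch calls:

```python
import importlib.util

def detect_torch_runtime() -> str:
    """Return a short status string describing the local PyTorch install.

    Hypothetical helper: checks early whether PyTorch is present and
    CUDA-enabled, so generation can fail fast with a clear error instead
    of deep inside a job.
    """
    if importlib.util.find_spec("torch") is None:
        return "missing"
    import torch  # imported lazily so the check works without torch installed
    if not torch.cuda.is_available():
        return "cpu-only"
    return f"cuda ({torch.cuda.get_device_name(0)})"
```

Surfacing "cpu-only" or "missing" in the first-run UI is far clearer than a CUDA error thrown mid-generation.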
The project also needs proper validation on stronger machines.
Important configurations to test:
- 12GB VRAM.
- 16GB VRAM.
- 24GB VRAM.
- Multi-GPU setups, if contributors can test them.
- High-RAM systems with different CPU/GPU combinations.
The goal is not just "does it run", but whether JoyBoy chooses the right runtime strategy for the hardware.
Larger GPUs should benefit from more direct placement, fewer unloads, and smoother model switching when safe.
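The idea of picking a runtime strategy from available VRAM can be sketched as a simple tiered selector. The tiers and flags below are assumptions for illustration, not JoyBoy's actual scheduling logic:

```python
def pick_runtime_strategy(vram_gb: float) -> dict:
    """Map available VRAM to a placement strategy (illustrative only)."""
    if vram_gb >= 24:
        # Plenty of headroom: direct placement, keep models resident
        return {"quantize": False, "offload": False, "keep_chat_loaded": True}
    if vram_gb >= 12:
        # Mid-tier: quantize but avoid constant unload/reload churn
        return {"quantize": True, "offload": False, "keep_chat_loaded": True}
    # 8GB-class consumer GPUs: quantize and aggressively unload
    return {"quantize": True, "offload": True, "keep_chat_loaded": False}
```

The hardware-report asks above are what would turn these guessed tiers into validated ones.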
See Hardware and VRAM for the compatibility plan.
JoyBoy's public core should stay neutral and open-source friendly.
Optional local packs can extend routing, prompts, UI surfaces, model sources, or specialized workflows. Some optional packs may target mature workflows where they are legal, consensual, and compliant with platform policies. These packs are not part of the public core.
The roadmap here is:
- Keep pack boundaries explicit.
- Make install and activation flows easier to understand.
- Validate pack manifests before enabling them.
- Document what packs can and cannot override.
- Avoid hardcoding private assets or provider tokens into the public repo.
See Local Packs for the intended boundary.
In the near term, the roadmap focuses on:
- Stabilize cancellation and runtime cleanup across chat, image, video, and project mode.
- Improve project-mode workspace UX and tool-call behavior.
- Finish first-run setup and asset-download progress feedback.
- Expand test coverage for model imports and router behavior.
- Improve translations across English, French, Spanish, and Italian.
- Make model picker state more reliable across installed and imported models.
- Document supported hardware profiles and known limits.
Further out:
- Make project mode reliable enough for real repository analysis.
- Add stronger job history, runtime logs, and recovery information.
- Improve addon discovery and local pack management.
- Support more imported image model families safely.
- Improve video workflow reliability and setup messaging.
- Validate larger-GPU runtime strategies.
The most useful contributions right now are:
- Small bug fixes with reproducible steps.
- Translation fixes.
- Hardware test reports.
- Setup and dependency improvements.
- UI polish that makes runtime state easier to understand.
- Tests for routing, model imports, and local pack validation.
If you are unsure where to start, check the open issues marked for first contributions.