
Add grok-superimage support, server-side image config and resilient streaming/error handling#47

Merged
lijirou12 merged 1 commit into main from codex/migrate-imagine-to-grok-superimage-1.0-q2pii5
Feb 26, 2026

Conversation

@lijirou12
Owner

Motivation

  • Introduce a server-controlled image model (grok-superimage-1.0) with centralized n/size/response_format settings so clients cannot override those parameters.
  • Improve streaming reliability by converting transport 5xx and exception failures into SSE error events on streaming endpoints, and by returning one-shot SSE error payloads when a stream fails immediately.
  • Make token retry logic more robust by retrying on transient upstream errors when alternative tokens exist, and improve reverse proxy handling for SOCKS proxies.
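The "SSE error event" shape referenced above can be sketched as follows. This is a hypothetical helper (the PR does not show the exact event schema); the payload structure and `upstream_error` type string are assumptions for illustration.

```python
import json

def sse_error_event(message: str, status: int = 502) -> str:
    """Format an upstream failure as a one-shot SSE error payload.

    Hypothetical sketch: emits a single error event followed by the
    [DONE] sentinel so streaming clients terminate cleanly instead of
    seeing an aborted connection.
    """
    payload = {
        "error": {
            "message": message,
            "type": "upstream_error",  # assumed type label
            "code": status,
        }
    }
    return f"data: {json.dumps(payload)}\n\ndata: [DONE]\n\n"
```

A client parsing the stream sees one well-formed `data:` event with the error details and then the usual end-of-stream marker.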

Description

  • Added a new model entry, grok-superimage-1.0, to ModelService and a default [superimage] section (with n, size, and response_format keys) to config.defaults.toml; updated readme.md and the admin UI (app/static/admin/js/config.js) to expose these options.
  • Enforced server-side image config for grok-superimage-1.0 via _superimage_server_image_config(), which is used during request validation in place of the client-provided image_config whenever that model is selected.
  • Implemented _safe_sse_stream() and _streaming_error_response() in the chat route to catch exceptions raised during async streams and emit SSE error events (followed by [DONE]); wrapped the streaming returns from ImageGenerationService, ImageEditService, ChatService, and VideoService accordingly. Also added try/except around non-image ChatService/VideoService calls so an SSE error response is returned when stream is not explicitly false.
  • The image streaming processors (image.py and image_edit.py) now suppress intermediate partial previews for chat-format streams, track whether any chat chunk has been emitted, and always emit a final empty chat chunk plus data: [DONE] if no final chunk was produced.
  • Added transient_upstream() detection to app/services/grok/utils/retry.py (and exported the symbol from the module); ChatService uses it to attempt alternative tokens on transient upstream failures.
  • Improved reverse app chat proxy handling in app/services/reverse/app_chat.py: SOCKS schemes are detected and passed to curl_cffi via proxy, while HTTP(S) proxies go through proxies; timeout selection logic was also adjusted.
  • Small API-surface cleanup in the admin config route: renamed the local resolver import to resolve_storage and the route handler to get_storage_mode to avoid shadowing get_storage.

Testing

  • Ran the project test suite with pytest; all tests passed.
  • Ran static checks (flake8 and format checks); they succeeded.

Codex Task

@vercel

vercel bot commented Feb 26, 2026

The latest updates on your projects.

Project: grok2api · Deployment: Building · Actions: Preview, Comment · Updated (UTC): Feb 26, 2026 0:28am

@chatgpt-codex-connector

You have reached your Codex usage limits for code reviews. You can see your limits in the Codex usage dashboard.

@lijirou12 lijirou12 merged commit 6c25556 into main Feb 26, 2026
5 checks passed
