Releases: ResearAI/AutoFigure-Edit
AutoFigure-Edit v1.1
This release is published as tag v1.1.
AutoFigure-Edit v1.1 focuses on two practical workflows that were still awkward in earlier public builds: starting from a user-supplied stage-1 academic figure, and running the pipeline cleanly with official OpenAI models or OpenAI-compatible gateways.
Highlights
1. Stage-1 figure import workflow
- Added a dedicated import-mode path in the web UI for users who already have the stage-1 academic raster figure.
- Added and documented the CLI path for skipping step 1 image generation and continuing directly from SAM + SVG reconstruction.
- Imported figures are normalized into the regular pipeline output structure so later SAM, placeholder, and SVG steps behave the same as a full run.
Why this matters:
- You can now iterate on an existing figure instead of regenerating from method text every time.
- The product is more practical for users who already have a draft figure from another model, another provider, or a previous run.
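The normalization step described above can be sketched as follows. This is an illustrative assumption, not the project's actual code: the directory layout, file naming, and function name are hypothetical, standing in for however the pipeline actually arranges its step-1 output.

```python
import shutil
from pathlib import Path

def import_stage1_figure(figure_path: str, output_dir: str) -> Path:
    """Copy a user-supplied stage-1 figure into the same output layout a
    full run would produce, so the later SAM, placeholder, and SVG steps
    can run unchanged. Layout and names here are illustrative only."""
    dest_dir = Path(output_dir) / "step1"
    dest_dir.mkdir(parents=True, exist_ok=True)
    # Keep the original extension so raster formats round-trip unchanged.
    dest = dest_dir / ("figure" + Path(figure_path).suffix)
    shutil.copy2(figure_path, dest)
    return dest
```

The point is only that an imported figure lands where a generated one would, so every downstream stage is agnostic about where the figure came from.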
2. Official OpenAI model support
- Step 1 can now use the official OpenAI Images API with `gpt-image-2`.
- The OpenAI Responses route is documented and exposed for text plus multimodal SVG reconstruction.
- `openai_response` now defaults to `gpt-5.5` for the SVG / reasoning path.
Why this matters:
- The OpenAI path is now a first-class documented workflow instead of an implicit or partial configuration.
- Users can mix OpenAI image generation with OpenAI multimodal reconstruction more predictably.
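As a rough illustration of the official route, a step-1 image call through the OpenAI Python SDK might look like the sketch below. Only the default model names come from this release; the helper function and call shape are assumptions about usage, not the pipeline's actual implementation.

```python
# Defaults named in this release; your config may override them.
IMAGE_MODEL = "gpt-image-2"   # step-1 image generation
SVG_MODEL = "gpt-5.5"         # openai_response SVG / reasoning path

def generate_stage1_figure(prompt: str) -> bytes:
    """Illustrative step-1 call via the official OpenAI Images API."""
    import base64
    from openai import OpenAI  # official SDK: pip install openai
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    result = client.images.generate(
        model=IMAGE_MODEL,
        prompt=prompt,
        size="1024x1024",
    )
    # The Images API returns base64-encoded image data.
    return base64.b64decode(result.data[0].b64_json)
```

In a mixed run, the same client would then drive the Responses route with `SVG_MODEL` for reconstruction, which is what makes the pairing predictable.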
3. OpenAI-compatible routing and custom provider support
- Added `custom` as the primary OpenAI-compatible provider name in the CLI and web app.
- Kept `bianxie` as a backward-compatible alias so older commands do not break.
- Fixed the default `openai_response` main route so step 1 can inherit the same compatible `base_url` and `api_key` instead of incorrectly falling back to the official OpenAI host.
Why this matters:
- A single compatible gateway can now drive the whole workflow more reliably.
- Users no longer need to fight inconsistent routing between the image stage and the SVG reconstruction stage.
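The alias and routing behavior can be sketched like this. The `custom` / `bianxie` names and the shared `base_url` / `api_key` inheritance come from the notes above; the environment-variable names and helper functions are illustrative assumptions.

```python
import os

# "custom" is the primary name; "bianxie" is kept as a backward-compatible alias.
_PROVIDER_ALIASES = {"bianxie": "custom"}

def resolve_provider(name: str) -> str:
    """Map legacy provider names onto their current equivalents."""
    return _PROVIDER_ALIASES.get(name, name)

def resolve_route(provider: str) -> dict:
    """Illustrative routing: both the image stage and the openai_response
    SVG stage share one base_url / api_key for a compatible gateway,
    rather than the image stage falling back to the official host."""
    provider = resolve_provider(provider)
    if provider == "custom":
        # Env-var names here are hypothetical.
        base_url = os.environ.get("CUSTOM_BASE_URL", "")
        api_key = os.environ.get("CUSTOM_API_KEY", "")
    else:  # official OpenAI host
        base_url = "https://api.openai.com/v1"
        api_key = os.environ.get("OPENAI_API_KEY", "")
    return {"base_url": base_url, "api_key": api_key}
```

Resolving the alias before choosing the route is what lets older `bianxie` commands keep working while the whole pipeline reads one consistent endpoint.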
4. Web configuration, bilingual UI, and onboarding updates
- Added bilingual Chinese / English switching across the main page, import page, canvas page, and guide page.
- Added an in-product configuration guide that explains workflow choices, field meanings, recommended presets, and SAM backend setup.
- Kept the release aligned with the newer DeepScientist-branded web surface.
Why this matters:
- Setup is easier to understand for first-time users.
- The import flow and provider flow are now visible in the product instead of being hidden behind CLI-only assumptions.
Included Areas of Change
- Pipeline and provider routing: `autofigure2.py`
- Web request handling: `server.py`
- Web UI and bilingual configuration flow: `web/index.html`, `web/import.html`, `web/guide.html`, `web/app.js`
- Documentation updates: `README.md`, `README_ZH.md`
Release Notes
- Git tag: `v1.1`
- GitHub release: AutoFigure-Edit v1.1
- Default OpenAI image model: `gpt-image-2`
- Default OpenAI Responses SVG model: `gpt-5.5`
- No standalone binary assets are attached to this release; GitHub provides the source tarball and zipball for tag `v1.1`.