We introduce SliderEdit, a framework for continuous image editing with fine-grained, interpretable instruction control. Given a multi-part edit instruction, SliderEdit disentangles the individual instructions and exposes each as a globally trained slider, allowing smooth adjustment of its strength.
- Clone the repository
git clone git@github.com:ArmanZarei/SliderEdit.git
cd SliderEdit

- Create and activate the conda environment

conda env create -f environment.yml
conda activate slideredit
pip install -e .
First, load the SliderEdit pipeline:
import torch
from slideredit.pipelines import SliderEditFluxKontextPipeline

pipe = SliderEditFluxKontextPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev",
    torch_dtype=torch.bfloat16
).to("cuda")

GSTLoRA is designed for single-instruction editing scenarios, or for cases where a single slider is sufficient to control the overall edit intensity.
pipe.load_gstlora("PATH_TO_CKPT")

output_image = pipe(
    image=IMAGE_TO_BE_EDITED,
    prompt="EDIT PROMPT",
    generator=torch.Generator().manual_seed(SEED),
    slider_alpha=STRENGTH_VALUE,
).images[0]

The parameter `slider_alpha` controls the edit strength: negative values increase the intensity, while positive values suppress the effect. We recommend initially sweeping values in the range [-1, 1].
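For instance, a minimal sweep might look like the following. This is only a sketch: the input image path, edit prompt, and seed are illustrative placeholders, and `pipe` is the pipeline loaded above with a GSTLoRA checkpoint.

```python
import torch
from PIL import Image

image = Image.open("input.png")  # placeholder input image

# Sweep slider_alpha across the recommended initial range and save each result.
for alpha in [-1.0, -0.5, 0.0, 0.5, 1.0]:
    edited = pipe(
        image=image,
        prompt="make the sky stormy",  # placeholder edit prompt
        generator=torch.Generator().manual_seed(42),
        slider_alpha=alpha,
    ).images[0]
    edited.save(f"edit_alpha_{alpha:+.1f}.png")
```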
STLoRA is designed for multi-instruction editing prompts, providing independent control over the strength of each individual instruction.
pipe.load_stlora("PATH_TO_CKPT")

output_image = pipe(
    image=IMAGE_TO_BE_EDITED,
    prompt="Edit_Instruction_1 and Edit_Instruction_2 and ...",
    generator=torch.Generator().manual_seed(SEED),
    subprompts_list=["Edit_Instruction_1", "Edit_Instruction_2", ...],
    slider_alpha_list=[Strength_Value_1, Strength_Value_2, ...],
).images[0]

Parameters:
- subprompts_list: the individual instructions (sub-prompts) from the original edit prompt
- slider_alpha_list: the corresponding intensity values for each instruction (we recommend initially sweeping values in the range [-1, 1])
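For instance, a two-instruction edit could be driven as follows. This is a sketch: the prompts, image path, seed, and alpha values are illustrative placeholders, and `pipe` is assumed to have an STLoRA checkpoint loaded via `load_stlora`.

```python
import torch
from PIL import Image

image = Image.open("face.png")  # placeholder input image

# Sweep the strength of the first instruction while holding the second fixed.
for alpha in [-1.0, 0.0, 1.0]:
    edited = pipe(
        image=image,
        prompt="add glasses and make the hair gray",
        generator=torch.Generator().manual_seed(42),
        subprompts_list=["add glasses", "make the hair gray"],
        slider_alpha_list=[alpha, 0.5],
    ).images[0]
    edited.save(f"edit_glasses_{alpha:+.1f}.png")
```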
See sample_inference.ipynb for complete inference examples.
The training script for GSTLoRA is available at train_gstlora_flux_kontext.py. An example configuration using an open-source image editing dataset is provided in train_gstlora_flux_kontext.yaml.
To launch training:
python training/train_gstlora_flux_kontext.py --config="training/configs/train_gstlora_flux_kontext.yaml"

View training progress and slider visualizations in this W&B report.
Training STLoRA with our proposed PPS loss requires multi-instruction edit prompts. (If you instead wish to use the SPPS loss, the same dataset as in the GSTLoRA setting can be reused.)
Below, we provide a simple small-scale example illustrating how to construct such a dataset and perform training. In this example, we focus on human face editing by manually combining a set of predefined single-instruction edits to form multi-instruction prompts.
For more details, please refer to the training script train_stlora_flux_kontext.py and the example configuration file train_stlora_pps_flux_kontext.yaml.
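As a rough sketch of this construction, the snippet below combines pairs of predefined single-instruction edits into multi-instruction prompts. The edit pool and record schema are hypothetical illustrations, not the exact format consumed by the training script.

```python
import itertools
import json

# Hypothetical pool of predefined single-instruction face edits.
single_edits = ["add glasses", "make the hair gray", "add a smile", "make the skin pale"]

# Combine pairs of single-instruction edits into multi-instruction prompts,
# keeping the individual sub-prompts alongside each combined prompt.
records = [
    {"prompt": f"{e1} and {e2}", "subprompts": [e1, e2]}
    for e1, e2 in itertools.combinations(single_edits, 2)
]

with open("multi_instruction_prompts.json", "w") as f:
    json.dump(records, f, indent=2)
```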
First, download the sample human faces dataset:
mkdir -p datasets
cd datasets
gdown 183-Jubsu2rFiQmgpBYjFpAxGSwqn2XOM
unzip slideredit_faces_dataset.zip
cd ..

Then, launch training:
python training/train_stlora_flux_kontext.py --config="training/configs/train_stlora_pps_flux_kontext.yaml"

View training progress and slider visualizations in this W&B report.
Below are example checkpoints from models trained using the above configurations:
mkdir -p checkpoints
gdown 1YHrHhSeKovEPGpFgFbv0iPL67YRgg6rG -O checkpoints/ # GSTLoRA iter500
gdown 1PdORTgzFzfGGbNAoPQb0T5su3t82xErY -O checkpoints/ # STLoRA iter1200

Example results:
See sample_inference.ipynb for more details.
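Once downloaded, these checkpoints can be loaded as shown earlier; the filenames below are placeholders for whatever files gdown saves into checkpoints/.

```python
# Placeholder filenames; use the actual files downloaded into checkpoints/.
pipe.load_gstlora("checkpoints/GSTLORA_ITER500_CKPT")   # single-slider model
# pipe.load_stlora("checkpoints/STLORA_ITER1200_CKPT")  # multi-instruction model
```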
@article{zarei2025slideredit,
title={SliderEdit: Continuous Image Editing with Fine-Grained Instruction Control},
author={Zarei, Arman and Basu, Samyadeep and Pournemat, Mobina and Nag, Sayan and Rossi, Ryan and Feizi, Soheil},
journal={arXiv preprint arXiv:2511.09715},
year={2025}
}

