Summary
Adds an option to the SD1.5/SDXL Image-To-Latents node that applies color compensation to the image's float tensor before encoding. This counteracts the brightness drift and haze associated with inpainting, particularly on dark or solid backgrounds.
At the moment it only applies a dynamic range adjustment to all channels at once, as that proved to be the most reliable method in my testing. It can be revised later to adjust channels independently, or to apply a more elaborate correction if one is discovered.
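The PR's own constants aren't reproduced here, but a minimal sketch of a uniform dynamic range adjustment might look like the following (the `gain` and `offset` values are placeholders for illustration, not the ones used in the actual change; the pivot at mid-gray means 0.5 maps to itself while darks and brights are stretched apart):

```python
import numpy as np

def compensate_dynamic_range(image: np.ndarray,
                             gain: float = 1.02,
                             offset: float = -0.01) -> np.ndarray:
    """Stretch the image's dynamic range uniformly across all channels
    before VAE encoding. gain/offset are illustrative placeholders,
    not the constants from the PR."""
    # Applied to every channel at once, as described above.
    return np.clip(image * gain + offset, 0.0, 1.0)
```

With these placeholder values the transform is a mild contrast stretch pivoting at 0.5: mid-gray is unchanged, dark values get slightly darker, and bright values slightly brighter, which pushes back against the VAE's tendency to lift blacks on encode.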
For clarity: on the node, the option is a Modal with the values "None" and "SDXL"; in the Canvas it is a boolean switch with a tooltip explaining its function. The node defaults to "None", and only the SDXL graphs change it.
(Hopefully GitHub image compression doesn't make this impossible to see...)

The following images show the difference after two consecutive light inpaint passes on the same pair of seeds; the fixed version is on the right.
The change also prevents errors from building up over successive passes. The following images show the difference after six encodes without any denoising (the same mask is used on both images: dots along the right and bottom edges, and a heart and smiley in the middle).
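The build-up over successive encodes can be sketched with a toy round trip. The real measurement uses the VAE's encode→decode, but any callable with a small systematic bias shows the same accumulation (the bias constants below are illustrative, not measured values):

```python
import numpy as np

def mean_drift(roundtrip, image: np.ndarray, cycles: int = 6) -> np.ndarray:
    """Record the image's mean brightness after each encode/decode cycle.
    `roundtrip` stands in for a VAE encode -> decode pass (hypothetical)."""
    means = [image.mean()]
    for _ in range(cycles):
        image = roundtrip(image)
        means.append(image.mean())
    return np.array(means)

# Toy round trip with a slight upward bias, mimicking the observed drift.
toy_roundtrip = lambda im: np.clip(im * 0.99 + 0.01, 0.0, 1.0)
drift = mean_drift(toy_roundtrip, np.full((8, 8, 3), 0.2))
```

Each cycle nudges the mean upward, so after six passes the accumulated shift is clearly visible; the compensation aims to cancel this per-pass bias so the sequence stays flat.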

It does not, however, prevent the SD model from shifting an inpainted area away from its surroundings at high enough denoise strengths (~0.6 and up). The intention is only to prevent VAE drift during inpainting, not to touch the output side of image generation.
Related Issues / Discussions
Analysis and testing thread: https://discord.com/channels/1020123559063990373/1430012534923989053/1430012534923989053
Other mentions of the effect:
https://discord.com/channels/1020123559063990373/1193632033193336953/1427920397864534169
https://discord.com/channels/1020123559063990373/1149510134058471514/1424466794470445267
https://discord.com/channels/1020123559063990373/1149506274971631688/1344784624068464717
QA Instructions
Lots of pixel peeping, inpainting, and workflows with custom nodes that output graphs of value-histogram shifts and trend differences across encode cycles. I gave up on fixing this a year ago because I couldn't find an approach that worked in enough scenarios. I'm still not fully satisfied with the results here, but they are at least a quantifiable step in the right direction.
Merge Plan
Easy to merge: all settings and code are self-contained and don't interact with other parts of the UI beyond adding the new options.
Checklist
What's New copy (if doing a release after this PR)