Vello API: A more sane design for paints #1435
DJMcNab wants to merge 3 commits into linebender:main
Conversation
nicoburns
left a comment
I've left some comments.
My counter-proposal for the Paint type (also for discussion) would be:
```rust
pub enum Paint {
    Solid(AlphaColor<Srgb>),
    Gradient(Gradient),
    Image(ImageBrush<PaintId>),
    // Maybe
    Custom(u32),
    Encoded(u32),
}
```

Notes:
- Avoids a level of nesting for standard brushes
- Transform would still be passed separately (is there a disadvantage to this?), or it would be a field in a struct that wraps this enum (see the sketch after these notes).
- BlurredRoundedRect would remain separate from Paint in the API
- AdHoc images would be registered with the RenderContext the same as any other image (perhaps with a flag to hint that it is a short-lived image that need not be cached beyond a single render).
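As a rough illustration of the "wrapper struct" option mentioned in the transform note, here is a minimal sketch. The struct and field names are invented for discussion only (not part of this proposal); it assumes the `Paint` enum above and kurbo's `Affine`:

```rust
use kurbo::Affine;

/// Hypothetical wrapper pairing a paint with its paint transform,
/// keeping the `Paint` enum itself free of transform concerns.
pub struct PaintWithTransform {
    /// One of the variants from the proposed `Paint` enum above.
    pub paint: Paint,
    /// Transform applied to the paint, independent of the object transform.
    pub paint_transform: Affine,
}
```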
Questions:
- Might not the ImageSampler params apply to custom paints too? And if so, perhaps they should just be handled as images? Do we have any example custom paints in mind? (If not, we could just leave them out for now.)
```rust
/// [`set_blurred_rounded_rect_brush`](PaintScene::set_blurred_rounded_rect_brush).
/// [`BlurredRoundedRectBrush`] as a paint directly (see also [`Paint::BlurredRoundedRect`]).
```
What is the motivation for trying to make blurred_rounded_rect part of Paint in the API? I know it's implemented as a paint type, but my understanding was that the reason for a separate method in the API was because it only really works with a rectangular shape, and thus using it with the fill method, which allows you to specify other shapes, is a potential pitfall.
The blurred rounded rectangle brush describes the shape within which the blurred rounded rectangle is drawn, which needn't be a rectangle. The obvious (and documented) use case is drawing box shadows; you need to be able to draw only the "outside" of the box, without also drawing the blurred rounded rectangle within the box.
You are of course correct that the algorithm only works for blurred rounded rectangles, but that's just a docs question; I have tried to write the docs to explain this properly, and to point towards the helper method. I'm slightly surprised that the docs I've written in this PR are quite so easily misunderstood; the "original" version of this, which was in #700, has very similar documentation, but you understood it then.
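To make the box-shadow case concrete, here is a hedged sketch of the geometry involved, using only kurbo; how the blurred-rounded-rect brush and the fill call are actually expressed goes through the API discussed in this PR and is deliberately not shown:

```rust
use kurbo::{BezPath, Rect, RoundedRect, Shape};

/// Build the "outside of the box" region for a box shadow.
/// The blurred-rounded-rect *brush* is described by `box_rect` (plus a blur
/// standard deviation and colour, not shown); the *filled* shape is the area
/// around the box, so the shadow is not also painted underneath it.
fn box_shadow_fill_shape(box_rect: RoundedRect, blur_radius: f64) -> BezPath {
    // Pad the box bounds enough to contain the blur falloff.
    let shadow_bounds: Rect = box_rect.rect().inflate(3.0 * blur_radius, 3.0 * blur_radius);
    let mut outside_only = BezPath::new();
    outside_only.extend(shadow_bounds.path_elements(0.1));
    outside_only.extend(box_rect.path_elements(0.1));
    // Filled with the even-odd rule, this path covers `shadow_bounds` minus
    // the box itself.
    outside_only
}
```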
```rust
// object's transform. That's clearly pretty poor, and we can do better.
// However, this isn't exposed in the current Vellos, so we're choosing to defer this.
// transform: Affine,
Encoded(u32),
```
The idea of being able to back-reference paints in the same scene is cool. But probably the easier way to implement "scene-local" brushes today (absent first-class support) would be to paint the text into a layer and then use that layer as a mask? But perhaps that would have worse performance?
For that matter, would implementing support for this in the backends be any easier than adding support for "scene-local" paint transforms?
```rust
/// Draw an immutable image with a dynamic lifetime.
///
/// Using this paint type will likely increase memory usage further than would otherwise be necessary, as:
/// 1) A copy of the image's data will remain in the renderer even after the final render using the image, until it is garbage collected based on renderer-specific criteria.
/// 2) The [`ImageData`](peniko::ImageData) used in this brush will remain live until this scene is rendered.
///
/// As such, it's recommended to encode the image manually, and use [`Paint::StandardBrush`].
/// This is especially important for short-lived images (in addition to animated images).
/// (Note that Vello API itself does not currently support encoding images manually, but
/// individual renderers may provide APIs for this).
///
/// This variant is provided for developer ergonomics.
AdHocImage {
    image: ImageBrush,
    paint_transform: Affine,
},
```
Would the lifetime of this be any different to the lifetime of a Texture referenced by ID? Presumably the "deallocate" method for deallocating a Texture will need to keep track of in-flight renders anyway (so that we don't accidentally try to render a texture that doesn't exist), and thus the same GC semantics would apply?
The GC semantics of this are likely to be some combination of heuristics such as:
- this image hasn't been used in rendering in the prior 10 seconds and the prior 10 renders
- the underlying ImageData's blob has no more strong references, so it definitely won't be rendered again.
Once those happen, the renderer implementation would internally call deallocate, and that could have delayed action until all in-flight renders are complete (that is, all in-flight renders at the time of the call to deallocate; not until there are no in-flight renders), or something more complicated if we determine that's worth the complexity.
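A rough sketch of what that combination could look like on the renderer side (the names, thresholds, and bookkeeping fields here are illustrative, not a committed design; it assumes the image data is shared behind an `Arc` so that "no more strong references" can be observed):

```rust
use std::sync::Arc;
use std::time::{Duration, Instant};

/// Illustrative per-image bookkeeping a renderer might keep.
struct CachedImage {
    /// Stand-in for the shared `ImageData` blob.
    data: Arc<[u8]>,
    /// When this image was last used in a render.
    last_used: Instant,
    /// How many renders have completed since it was last used.
    renders_since_use: u32,
}

impl CachedImage {
    /// Hypothetical eviction check combining the heuristics above.
    fn should_deallocate(&self, now: Instant) -> bool {
        // The blob has no other strong references, so it can never be
        // submitted for rendering again.
        let unreachable = Arc::strong_count(&self.data) == 1;
        // The image hasn't been used for 10 seconds *and* 10 renders.
        let stale = now.duration_since(self.last_used) > Duration::from_secs(10)
            && self.renders_since_use >= 10;
        unreachable || stale
    }
}
```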
The important context for this design is that we don't have an abstraction for "render contexts" in Vello API as things stand (the one in #1299 is very likely unviable). That motivation is already clearly documented in the docs for this crate.
That is, this allows Vello API to display images from the CPU in its current form, whilst still documenting the performance pitfalls of that feature.
> AdHoc Images would be registered with the RenderContext the same as any other image (perhaps with a flag to hint that it is a short-lived image that need not be cached beyond a single render).
This question is confusing, given that these are already documented to be long-lived...
Wait, do we have some sort of home brew garbage collection implemented internally? 🤔
15 minutes later: Aha, now I get it 💡
This was mostly my trying to make use of the type from Peniko; I quite like the idea of flattening it out.
In my mind, there are two disadvantages to pulling it out:
This is a fair question. I can't think of any off the top of my head. I'm happy to leave it out of this PR.
Even though I grew accustomed to the current "sparse strip" APIs, I don't see any downsides to making this less stateful and more stateless. Also, the ad-hoc image texture GC thingamajig makes sense, I am pretty sure 👍 Edit: at the end of the day, this is important surface-level stuff that doesn't really affect the hot path. Good stuff @DJMcNab
Note that this proposed design is not yet implemented for any of the backends. It may be useful to spawn discussion.
This proposed design for the Paint enum maintains the prior separation between "brush" and "filled area" which motivated the earlier design, without the other strangeness forced by it (most notably the interaction with PaintScene::append). It also demonstrates a possibility for per-renderer extensions, and addresses the other motivation of non-object-transformed gradients.
Unfortunately, I doubt that I will be able to attend today's renderer OH to further present this.