
Conversation

@DJMcNab
Member

@DJMcNab DJMcNab commented Jan 14, 2026

This is a core part of #1299, split out to be re-viewable separately.

One observation which I hadn't properly engaged with until today is that the form proposed in this PR is independently useful. That is, useful apps could be made renderer-agnostic with just this API, with none of the "renderer abstraction" pain of #1299. I do think that the renderer API in #1299 does have valuable aspects (having textures be reference counted is still something I really want us to have), but it isn't needed to get something off-the-ground.

The core of this PR is two things:

  1. The PaintScene trait. This is your standard 2d imaging model trait (comparable e.g. with web canvas, vello::Scene, anyrender::PaintScene).
  2. A renderer-erased implementation of that trait, called vello_api::Scene. I call it renderer-erased, because it is (or... will be) invalid to combine renderers within a Scene graph. That's because of image resources and the like.

This PR does not have glyph support, and indeed I still believe that Vello API should continue to not have that.
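For illustration, a minimal sketch of the shape of those two pieces. The trait and type names below are simplified stand-ins, not the actual definitions in this PR, and kurbo/peniko types are assumed:

use kurbo::{Affine, BezPath};
use peniko::{Color, Fill};

/// Sketch of a standard 2D imaging-model sink, in the spirit of `PaintScene`.
pub trait PaintSceneSketch {
    fn set_solid_brush(&mut self, color: Color);
    fn fill_path(&mut self, transform: Affine, fill_rule: Fill, path: &BezPath);
}

/// Sketch of a renderer-erased recording in the spirit of `vello_api::Scene`:
/// commands are stored and replayed against a concrete renderer later, which
/// is why mixing renderers within one scene graph stops working once
/// renderer-specific resources (images, etc.) are involved.
#[derive(Default, Debug)]
pub struct SceneSketch {
    commands: Vec<Command>,
}

#[derive(Debug)]
enum Command {
    SetSolidBrush(Color),
    FillPath(Affine, Fill, BezPath),
}

impl PaintSceneSketch for SceneSketch {
    fn set_solid_brush(&mut self, color: Color) {
        self.commands.push(Command::SetSolidBrush(color));
    }

    fn fill_path(&mut self, transform: Affine, fill_rule: Fill, path: &BezPath) {
        self.commands.push(Command::FillPath(transform, fill_rule, path.clone()));
    }
}

App code would only depend on the trait; the recording type is just one implementation of it that a concrete renderer replays later.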

Comment on lines +49 to +52
// There are arguments for a "dynamic length" encoding here, as PathEl is sized for 6 f64s (plus a discriminant)
// It depends somewhat on what proportion of the elements are a CurveTo
/// The elements of the contained paths.
pub elements: Vec<PathEl>,
Contributor

We should definitely defer this until later, but it occurs to me that this kind of optimisation might also work well for gradient color stops


fn set_brush(
&mut self,
brush: impl Into<OurBrush>,
Contributor

FWIW, I'm not entirely convinced that having impl Into in the API makes the ergonomics better. It's nice if you're passing a literal (no need to call .into()). But it's annoying if you're converting from some other type, because it breaks type inference and means that an Into implementation from that type to e.g. OurBrush won't work (and you also can't turbofish Into::into, so you generally have to break out the conversion into its own let binding).

Member Author

@DJMcNab DJMcNab Jan 15, 2026

This is largely copying Vello Classic's existing API. I'm also not aware of having run into any cases where such a conversion has been necessary with Vello Classic's API. We can always remove it as a follow-up.

and you also can't turbofish Into::into

For the sake of pedantry: this isn't true - you can use Into::<Whatever>::into. But I agree that it isn't very ergonomic.
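To illustrate the trade-off being discussed, a small sketch where all types are hypothetical stand-ins for OurBrush and friends:

// Hypothetical stand-in types, purely to illustrate the ergonomics trade-off.
#[derive(Clone, Copy)]
struct Color([u8; 4]);

struct OurBrush(Color);

impl From<Color> for OurBrush {
    fn from(c: Color) -> Self {
        Self(c)
    }
}

// Some app-side type that only knows how to convert to `Color`, not to `OurBrush`.
struct ThemeToken(u8);

impl From<ThemeToken> for Color {
    fn from(t: ThemeToken) -> Self {
        Color([t.0, t.0, t.0, 255])
    }
}

fn set_brush(brush: impl Into<OurBrush>) {
    let _ = brush.into();
}

fn main() {
    // Nice for literals: no explicit `.into()` needed.
    set_brush(Color([0, 0, 0, 255]));

    // Annoying for chained conversions: a bare `.into()` can't infer its
    // target type here, so this fails to compile...
    // set_brush(ThemeToken(128).into());
    // ...and you end up qualifying the conversion or breaking out a binding:
    set_brush(Into::<Color>::into(ThemeToken(128)));
    let color: Color = ThemeToken(128).into();
    set_brush(color);
}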

#[derive(Debug)]
pub enum RenderCommand {
/// Draw a path with the current brush.
DrawPath(Affine, PathId),
Contributor

API Style: could we use structs rather than tuple/struct enum variants? The latter can be difficult to work with (can't take a reference or pass into a function, etc).

Member Author

We could, and there are reasons to think it could be nicer. Either way, I'm not going to change that in this PR.

(Fwiw, these structures are taken pretty much as-is from Vello Common's recording types)
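For reference, the two styles being compared look roughly like this; PathId's definition and the alternative names are illustrative:

use kurbo::Affine;

// Hypothetical `PathId`, mirroring the enum quoted above.
#[derive(Clone, Copy, Debug)]
pub struct PathId(pub u32);

// As written in the PR: the fields live directly in the enum variant.
#[derive(Debug)]
pub enum RenderCommand {
    DrawPath(Affine, PathId),
}

// The alternative being suggested: each variant wraps a standalone struct, so
// the payload has its own type that can be referenced and passed around.
#[derive(Debug)]
pub struct DrawPath {
    pub transform: Affine,
    pub path: PathId,
}

#[derive(Debug)]
pub enum RenderCommandAlt {
    DrawPath(DrawPath),
}

// Only possible with the standalone-struct form: there is no type naming
// "the payload of `RenderCommand::DrawPath`" in the variant-fields form.
fn handle_draw_path(cmd: &DrawPath) {
    println!("{:?} {:?}", cmd.transform, cmd.path);
}

fn main() {
    let cmd = RenderCommandAlt::DrawPath(DrawPath {
        transform: Affine::IDENTITY,
        path: PathId(0),
    });
    match &cmd {
        RenderCommandAlt::DrawPath(draw) => handle_draw_path(draw),
    }
    // With `RenderCommand::DrawPath(Affine, PathId)`, the equivalent helper
    // would have to take the fields individually.
    let _ = RenderCommand::DrawPath(Affine::IDENTITY, PathId(0));
}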

pub struct Scene {
pub paths: PathSet,
pub commands: Vec<RenderCommand>,
pub hinted: bool,
Contributor

Q: Is "hintedness" actually a global property of a Scene. Or might it apply to some items in the scene but not others?

Member Author

I believe this is a property of an entire Scene, because:

  1. A scene being hinted is an (enforced) guide that you can't rescale/rotate/etc. it.
  2. There's no way to subdivide a scene (e.g. to split out the parts which are hinted for separate processing)
  3. I believe that implicitly making a Scene hinted because hinted content was included in it would be a really error-prone design to use, as the failure wouldn't be localised to the code which was "wrong".

Contributor

Ah ok, so "hinted: true" really means "maybe hinted". And hinted: false means "definitely not hinted.

(including non-hinted content into a hinted scene must presumably be valid? Certainly I can imagine wanting to render an animated SVG (unhinted) along with non-animated text (hinted) into the same overall scene)

Collaborator

(including non-hinted content into a hinted scene must presumably be valid? Certainly I can imagine wanting to render an animated SVG (unhinted) along with non-animated text (hinted) into the same overall scene)

I agree with this, and I can also imagine such a case. Would it be simpler to omit this for now to simplify the unified API?

Member Author

As far as I'm aware, that case is already valid. It is correct that this flag marking a Scene as hinted was added without much discussion (although it's also not much code), and probably should have been a follow-up. I still think there is value in it, but tearing it out might be best.

// After you edit the crate's doc comment, run this command, then check README.md for any missing links
// cargo rdme --workspace-project=vello_api

// These docs are written for a more complete version of Vello API.
Contributor

A lot of this documentation feels like it might be better suited to being in either:

  • The PR description
  • A dedicated .md file design document

with rustdoc reserved for user-facing documentation.

(I certainly don't think we could ship it in this state)

Member Author

I don't have the mental bandwidth to rewrite these docs at the moment. That's especially true when I believe this is likely to further change again, around textures being added. I agree that it's important, but I really would like to avoid blocking this PR on it. See also below.

I certainly don't think we could ship it in this state

From my perspective, this PR is being submitted as a base for further collaboration and scoped review, rather than as something to be shipped as-is. It definitely needs a lot more follow-up before release.

Comment on lines +6 to +11
/// A renderer-specific ID for a texture.
///
/// Note that we don't currently expect these to be constructed,
/// because the.
#[derive(Copy, Clone, Debug, Hash, PartialEq, Eq, PartialOrd, Ord)]
pub struct TextureId(u64);
Contributor

FYI, I much prefer this being a u64 newtype rather than reference counted. When previously discussing reference counting I thought it would be an internal implementation detail of a rendering backend to prevent a resource that is part of an in-flight render from being deallocated (even if the user explicitly calls "free").

(some kind of generational scheme may also work for this)
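Purely as an illustration of the generational scheme mentioned above (none of this is in the PR):

// Illustrative sketch of a generational handle: the id encodes a slot index
// plus a generation, so a handle to a freed (and later reused) slot is
// detected as stale instead of silently aliasing a new texture.
#[derive(Copy, Clone, Debug, Hash, PartialEq, Eq)]
pub struct TextureId {
    index: u32,
    generation: u32,
}

struct Slot<T> {
    generation: u32,
    value: Option<T>,
}

pub struct TexturePool<T> {
    slots: Vec<Slot<T>>,
}

impl<T> TexturePool<T> {
    pub fn new() -> Self {
        Self { slots: Vec::new() }
    }

    pub fn insert(&mut self, value: T) -> TextureId {
        // A real pool would reuse free slots; this sketch only appends.
        let index = self.slots.len() as u32;
        self.slots.push(Slot { generation: 0, value: Some(value) });
        TextureId { index, generation: 0 }
    }

    pub fn free(&mut self, id: TextureId) {
        if let Some(slot) = self.slots.get_mut(id.index as usize) {
            if slot.generation == id.generation {
                slot.value = None;
                // Bump the generation so any outstanding `TextureId` goes stale.
                slot.generation += 1;
            }
        }
    }

    pub fn get(&self, id: TextureId) -> Option<&T> {
        let slot = self.slots.get(id.index as usize)?;
        if slot.generation == id.generation {
            slot.value.as_ref()
        } else {
            None
        }
    }
}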

Member Author

@DJMcNab DJMcNab Jan 15, 2026

To clarify, I still think that for applications (i.e. not web browsers and similar which are already managing image lifetimes), the place in the trade-off space which maximises usability is to use reference counted handles for images. And that doesn't mean that you can't have manually managed resources (indeed, #1299 did allow that); it just means that you have to be careful with their lifetimes if you do. I do find it hard to imagine wanting to make a second tier of reference counting for in-flight renders, if we'll already have one for "not in-flight" renders.

But that doesn't need to be tied into the initial PR for getting this landed.

Contributor

My concern here is that it's going to be hard to type-erase anything which can't be coerced into a primitive type. Which will make it difficult to wrap this API in a more general purpose one that covers renderers other than Vello (and just generally leaves you dealing with an associated type rather than a concrete one).

@grebmeg grebmeg self-requested a review January 18, 2026 22:52
Contributor

@nicoburns nicoburns left a comment

I have lots of nits, but the overall direction seems about right to me. So happy for this to be merged and then iterated on.

@nicoburns
Contributor

I also think it may be worth reconsidering whether to merge with AnyRender at some point, given the now very similar designs. I certainly plan to import the ideas that seem good to me (notably the Scene type) into AnyRender.

@DJMcNab DJMcNab added this pull request to the merge queue Jan 20, 2026
Merged via the queue into linebender:main with commit b18b4df Jan 20, 2026
17 checks passed
@DJMcNab DJMcNab deleted the api-scene-only branch January 20, 2026 12:10
Comment on lines +64 to +65
// Obviously, ideally we'd not be allocating here. This is forced by the current public API of Vello CPU.
let bezpath = BezPath::from_iter(segments.iter().cloned());
Contributor

Is it not possible to store a reusable BezPath on CPUScenePainter so we don't need to pay this allocation (ditto below)?

Member Author

So we certainly could do that. The reason I don't do that in this PR is that I believe it would be a trap, because there's no semantic difference between a reference into a BezPath owned by the CPUScenePainter and a reference into the PathSet.

That is to say, the correct way to fix this is to change the internal APIs to work with a slice of PathEl, rather than papering over it with amortised allocations. That's one of the things on the task list before any "serious" release of this code.
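A rough sketch of the difference; the slice-taking signature is hypothetical, not the current Vello CPU API:

use kurbo::{BezPath, PathEl};

// Sketch only: `fill_path_els` is a hypothetical signature. The point is that
// a borrowed `&[PathEl]` (for example a slice of the `PathSet`'s `elements`)
// could be consumed directly, with no `BezPath` allocated at all.
fn fill_path_els(elements: &[PathEl]) {
    for el in elements {
        // ...flatten/rasterise each element...
        let _ = el;
    }
}

// What the quoted line has to do today: collect into an owned `BezPath`
// purely to satisfy the existing `&BezPath`-shaped entry point.
fn current_shape_of_the_code(segments: &[PathEl]) {
    let bezpath = BezPath::from_iter(segments.iter().cloned());
    let _ = bezpath;
}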

Contributor

one of the things on the task list

Is this task list captured somewhere or captured in a TODO? I worry that without documentation we'll forget this very important optimisation.

Member Author

I've made these an explicit TODO in #1375

Collaborator

@grebmeg grebmeg left a comment

Thanks @DJMcNab! I took a look at this PR, and you did a great job refining what can be separated from the previous PR. This one is much more manageable. ❤️

I left a few questions, mostly to discuss and help me understand the purpose of certain pieces. I also noticed that some of the documentation feels a bit outdated and contains quite a few TODOs.

I think this could be a good starting point for evolving the API in an iterative way. Right now, we seem to be missing an example of a "real" application that demonstrates how the API is used in practice. Such an example would also help clarify how we envision apps being built and how vello_api is meant to be consumed.

Additionally, it might be worth adding exp or experimental prefixes to the api.rs files and possibly to the vello_api crate itself, just to make it completely clear that this is experimental.

use crate::texture::TextureId;

/// The brush type used for most painting operations in Vello API.
pub type StandardBrush = Brush<ImageBrush<TextureId>>;
Collaborator

[nit]: I’m not sure I understand why it’s called Standard. I actually liked the previous Paint and had gotten quite used to it, but I guess “brush” is the vocabulary we generally use?

Member Author

"Standard" in this case is referring to the fact that it's a non-specialised brush; that is, it's a brush from the standard imaging model (so solid colour, gradient or image). The alternative is the specialised brushes, which are currently only blurred rounded rectangles.

The name of Brush/Paint comes from my experience on the Vello Classic side. I'm happy for us to change it to paint, though perhaps I'm missing something: the "paint" is what is drawn, but the brush describes the paint, so the metaphor is coherent if you call set_paint with a brush.

// The naive solution would need to apply the gradient with a paint transform which undoes the
// object's transform. That's clearly pretty poor, and we can do better.
// However, this isn't exposed in the current Vellos, so we're choosing to defer this.
// transform: Affine,
Collaborator

Should we use transform instead of paint_transform? ditto below where paint_transform is used.

Member Author

Unfortunately, I'm not sure what alternative you're proposing here. Would you mind expanding this question?

Ultimately, this block is going to have to be part of my upcoming work to reconsider how brushes/paint styles work.


/// Set the current brush to represent a rounded rectangle blurred with an approximate gaussian filter.
///
/// **This method is currently unimplemented on all backends, so should not be used.**
Collaborator

Would it be better to exclude this method entirely, since it isn't implemented in either renderer?

Member Author

There are two arguments you could be making, one for which I'm loosely in favour, and one which I'm against.

  • I'm against removing just this method, because I don't think that fill_blurred_rounded_rect is the right API for blurred rounded rectangles. It is important to be able to separate what the paint type is from what area is painted; the currently supported fill_blurred_rounded_rect can only fill the full rectangle, with no cutouts.
  • However, the alternative reading is for removing the blurred rounded rectangle feature from Vello API on an interim basis, until we can support set_blurred_rounded_rect_brush (and also fill_blurred_rounded_rect in Vello Hybrid, see Vello Hybrid: Support blurred rounded rectangles #1373). I'm loosely in favour of this, with a few caveats. The main reason I'd like to keep this in the API, is to have an example of the "specialised paint" I mentioned in my other comment, which is useful for design work.

I also expect blurred rounded rectangles to be part of the final imaging model for this crate, which is why they're in here to start with.
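To make the "paint type vs. painted area" distinction concrete, a sketch with illustrative signatures; neither method exists in this exact form today:

use kurbo::{Affine, BezPath, Rect, RoundedRect, Shape};

// Illustrative signatures only, not the actual API.
trait BlurredRectPainting {
    /// Specialised *draw* operation: the painted area is always the whole rect.
    fn fill_blurred_rounded_rect(&mut self, transform: Affine, rect: Rect, radius: f64, std_dev: f64);
    /// Specialised *paint* operation: what is painted is the blurred rectangle,
    /// but *where* it is painted is whatever path gets filled afterwards.
    fn set_blurred_rounded_rect_brush(&mut self, rect: Rect, radius: f64, std_dev: f64);
    /// Fill an arbitrary path with the current brush (fill rule elided here).
    fn fill_path(&mut self, transform: Affine, path: &BezPath);
}

fn shadow_examples(scene: &mut impl BlurredRectPainting) {
    let rect = Rect::new(0.0, 0.0, 200.0, 100.0);

    // Whole-rectangle shadow: the only thing the draw form can express.
    scene.fill_blurred_rounded_rect(Affine::IDENTITY, rect, 8.0, 4.0);

    // Brush form: the same blurred rectangle, but painted only inside a
    // ring-shaped path (a rounded rect with a cutout), which the draw form
    // cannot do.
    let outer = RoundedRect::from_rect(rect, 8.0).to_path(0.1);
    let inner = RoundedRect::from_rect(rect.inflate(-10.0, -10.0), 8.0).to_path(0.1);
    let mut elements = outer.elements().to_vec();
    elements.extend_from_slice(inner.elements());
    let ring = BezPath::from_vec(elements);
    scene.set_blurred_rounded_rect_brush(rect, 8.0, 4.0);
    scene.fill_path(Affine::IDENTITY, &ring);
}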

pub struct Scene {
pub paths: PathSet,
pub commands: Vec<RenderCommand>,
pub hinted: bool,
Collaborator

(including non-hinted content into a hinted scene must presumably be valid? Certainly I can imagine wanting to render an animated SVG (unhinted) along with non-animated text (hinted) into the same overall scene)

I agree with this, and I can also imagine such a case. Would it be simpler to omit this for now to simplify the unified API?

Comment on lines +171 to +180
fn append(
&mut self,
mut scene_transform: Affine,
Self {
// Make sure we consider all the fields of Scene by destructuring
paths: other_paths,
commands: other_commands,
hinted: other_hinted,
}: &Scene,
) -> Result<(), ()> {
Collaborator

I might be mistaken, but I was a little surprised to see this, as I understood it to be a new feature related to merging scenes. Could you please clarify the use cases?

Member Author

Once you have the Scene type, this is a pretty natural extension, I think? Use cases would include having e.g. a widget in a GUI toolkit, which can contain a translatable/scalable child SVG. The widget would be provided with a Scene to render into which is owned by the toolkit, and it itself would own the Scene of the SVG, which is a more efficient representation for painting (especially with cached paths).
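A sketch of that use case, assuming append is callable roughly as in the quoted signature; the widget and its fields are hypothetical:

use kurbo::Affine;
use vello_api::Scene;

// Hypothetical toolkit-side widget; only `Scene` and `append` come from this PR.
struct SvgWidget {
    // Parsed once and cached: a more efficient representation for repainting.
    svg_scene: Scene,
    // Pan/zoom currently applied by the widget.
    view_transform: Affine,
}

impl SvgWidget {
    fn paint(&self, toolkit_scene: &mut Scene) {
        // Replay the cached SVG scene into the toolkit-owned scene, positioned
        // by the widget's current transform. The `Err` case covers scenes that
        // can't be combined (e.g. mismatched hinting).
        toolkit_scene
            .append(self.view_transform, &self.svg_scene)
            .expect("scenes should be combinable");
    }
}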

self.render_context.set_transform(transform);
self.render_context.set_fill_rule(fill_rule);
// TODO: Tweak inner `fill_path` API to either take a `Shape` or an &[PathEl]
self.render_context.fill_path(&path.to_path(0.1));
Collaborator

Would it be possible to apply the tolerance when preparing the shape, or is the tolerance supposed to change depending on the transform? Of course, this would require changing the API to use slice.

Member Author

This is nearly the same thing as #vello > Determining correct `Shape` tolerance, except that we do actually know the final transform here.

Ok(())
}

fn fill_path(&mut self, transform: Affine, fill_rule: Fill, path: impl Shape) {
Collaborator

impl Shape is used in multiple places, would this increase binary size due to monomorphized functions? Would it be better to stay with BezPath instead?

Contributor

This function is very short, so I think this should be very negligible.

Contributor

I'm not a huge fan of the impl Shape either. Having said that, there are potentially algorithmic-level improvements available from impl Shape (e.g. downcasting to Rect and sending Rect down a fast path).

Member Author

@DJMcNab DJMcNab Jan 23, 2026

impl Shape is used in multiple places, would this increase binary size due to monomorphized functions? Would it be better to stay with BezPath instead?

The problem with &BezPath is that the ergonomics are bad (and you also end up doing unwanted allocations). Taj has suggested taking in a path segment iterator, which is a much better fit. Follow the discussion in other threads for more.

clip_transform: Affine,
clip_path: Option<impl Shape>,
blend_mode: Option<BlendMode>,
opacity: Option<f32>,
Collaborator

I guess the filters are deliberately omitted here?

Member Author

Yeah, that's correct. I did originally have a TODO here, but Olivier asked for it to be moved. This is now captured as

//! - Filter effects

@DJMcNab
Member Author

DJMcNab commented Jan 22, 2026

Additionally, it might be worth adding exp or experimental prefixes to the api.rs files and possibly to the vello_api crate itself, just to make it completely clear that this is experimental.

I've added some docs for this in #1372. It's plausible that an experimental_api name for the modules would also make sense, but I'm not planning to make that PR. It's a question we need to ask when we're looking at doing another release, probably.

Right now, we seem to be missing an example of a "real" application that demonstrates how the API is used in practice.

On a small-scale, #1369 is trying to provide that.

Such an example would also help clarify how we envision apps being built and how vello_api is meant to be consumed.

I agree that we need this, although my thinking is that we need to get it to a more stable position first (e.g. changing the brush model).

pull bot pushed a commit to retro1996/vello that referenced this pull request Jan 22, 2026
Contributor

@taj-p taj-p left a comment

I agree with @grebmeg that this PR was way more manageable to review than the prior. Kudos! 🎉

I have concerns with taking impl Shape because that hides the tolerance calculation which can produce incorrect rasters of shapes for typical consumption.

We should take &BezPath (preferably, impl IntoIterator<Item = PathEl> since we don't necessarily need to collect) instead and push the tolerance complexity to the consumer until we're certain we can produce correct outputs for typical consumption.

Yes, that does call into question the value of Scene. Scene is resolution-dependent when it contains approximated primitives (Circle, Ellipse, etc.), so accepting impl Shape hides a footgun for consumers who scale significantly. I appreciate that a lot of consumers enjoy a scene API, so, instead of removing it, I would document that you need to think about to_path tolerance very carefully before using it.

We could take impl Shape for renderer-specific PaintScene impls and &BezPath for Scene, but I'm not sure that inconsistency is worth it.

To summarise: I believe we need to favour correctness over ergonomics until we're sure we can be correct.

Comment on lines +64 to +65
// Obviously, ideally we'd not be allocating here. This is forced by the current public API of Vello CPU.
let bezpath = BezPath::from_iter(segments.iter().cloned());
Contributor

one of the things on the task list

Is this task list captured somewhere or captured in a TODO? I worry that without documentation we'll forget this very important optimisation.

- Renderer specific painting commands (i.e. using downcasting).
This is intended to be an immediate follow-up to the MVP landing.
- Pushing/popping clip paths (i.e. non-isolated clipping).
This feature should be easy to restore, although it isn't clear how it will work with "Vello GPU", i.e. Hybrid with GPU rasterisation
Contributor

Suggested change
This feature should be easy to restore, although it isn't clear how it will work with "Vello GPU", i.e. Hybrid with GPU rasterisation
This feature should be easy to restore, although it isn't clear how it will work with "Vello GPU", i.e. Hybrid with GPU sparse strip rendering

Member Author

@DJMcNab DJMcNab Jan 23, 2026

Do we have a dictionary somewhere of what these terms mean in the pipeline? I was treating "rasterisation" as being the "convert flattened lines to alphas" step. However, I see that this terminology is more consistent with Raph's RustWeek slides, so I'll change it in my follow-up (edit: #1375)

fn fill_path(&mut self, transform: Affine, fill_rule: Fill, path: impl Shape) {
self.scene.set_transform(transform);
self.scene.set_fill_rule(fill_rule);
// TODO: Tweak inner `fill_path` API to either take a `Shape` or an &[PathEl]
Contributor

Suggested change
// TODO: Tweak inner `fill_path` API to either take a `Shape` or an &[PathEl]
// TODO: Tweak inner `fill_path` API to take a `PathEl` iterator

I think we can pass an iterator through instead. Single threaded and hybrid can operate over the iterator directly (flatten::fill takes an iterator) and multi threaded CPU could collect into its allocation group directly (with some tweaks to allocation group)
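A sketch of the suggested shape (hypothetical signature; the flattening body is elided):

use kurbo::{Affine, PathEl, Shape};

// Hypothetical signature for the suggestion above: accept any `PathEl` iterator,
// so callers can pass `shape.path_elements(tolerance)`, a `&BezPath`'s elements,
// or a slice out of a `PathSet` without first building an owned `BezPath`.
fn fill_path_iter(transform: Affine, path: impl IntoIterator<Item = PathEl>) {
    for el in path {
        // A single-threaded renderer could flatten/rasterise elements as they
        // stream through; a multi-threaded one could instead collect them
        // straight into its own allocation group here.
        let _ = (transform, el);
    }
}

fn main() {
    let rect = kurbo::Rect::new(0.0, 0.0, 10.0, 10.0);
    // `path_elements` yields `PathEl`s lazily, so nothing is collected up front.
    fill_path_iter(Affine::IDENTITY, rect.path_elements(0.1));
}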

Member Author

I agree, but I've been avoiding making breaking changes to the existing APIs in both Vellos CPU and Hybrid, especially in this PR.

///
/// This type has public fields on an interim basis.
/// To use this type, you should use the methods on its implementation of [`PaintScene`] instead.
#[derive(Debug)]
Contributor

Should Scene be Clone?

Member Author

So, yes and no. In the current state, I believe it could be; the only reason I didn't add it is that I didn't think about it. That being said, the reason I want to avoid it is for future "custom operations".

So for example, in Vello CPU, we can easily set a per-path blend mode, which isn't possible (in general at least) for Vello Hybrid. We want some way to represent that (and similar features), and that will end up looking like Box<dyn SomeTrait>. That type obviously isn't cloneable in itself.
That being said, it is possible to make it cloneable, and there are strong arguments for doing so.
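For reference, making a Box<dyn SomeTrait> cloneable usually goes through the clone-box pattern; a minimal sketch with hypothetical names:

use std::fmt::Debug;

// `CustomOp` and `PerPathBlend` are hypothetical stand-ins for the
// renderer-specific operations described above.
trait CustomOp: Debug {
    fn clone_box(&self) -> Box<dyn CustomOp>;
}

// This is what lets a struct holding `Box<dyn CustomOp>` derive `Clone`.
impl Clone for Box<dyn CustomOp> {
    fn clone(&self) -> Self {
        self.clone_box()
    }
}

#[derive(Clone, Debug)]
struct PerPathBlend {
    mode: u8, // placeholder for a real blend-mode type
}

impl CustomOp for PerPathBlend {
    fn clone_box(&self) -> Box<dyn CustomOp> {
        Box::new(self.clone())
    }
}

#[derive(Clone, Debug)]
struct SceneWithCustomOps {
    ops: Vec<Box<dyn CustomOp>>,
}

fn main() {
    let scene = SceneWithCustomOps { ops: vec![Box::new(PerPathBlend { mode: 0 })] };
    let _copy = scene.clone(); // works because `Box<dyn CustomOp>: Clone`
    println!("{scene:?}");
}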

Comment on lines +95 to +96
// If you need a different tolerance, you should pass in a `BezPath` (i.e. using Shape::to_path with the tolerance you require)
self.elements.extend(shape.path_elements(0.1));
Contributor

@taj-p taj-p Jan 23, 2026

But, what if you need to support much more than 10x zoom? For example, IIUC, won't the 23,000% zoom in graphite below produce a ~23 pixel error using this tolerance?

Consider:

  • Affinity's renderer; or
  • Zooming 23,000% in Graphite (below video); or
  • Building any form of whiteboard / design product; or
  • Building some form of 2D game with dynamically scaling scenes
graphite.zoom.mov

The solution in the comment to convert to BezPath first with a lower tolerance seems surprising and confusing for a consumer - it's a footgun. They could use a very low tolerance, but this creates many segments and suffers a performance penalty even if the user never scales the scene.

So, to summarise:

path_elements() is called at scene construction time, but the final transform isn't known until render time (or later, when the scene is appended). But, if you ever need to later apply a transform to the scene greater than 10x, you should know ahead of time what tolerance you require, and pay the performance cost of lower tolerance.

So, there is a tension here. Scene wants to be resolution-independent and reuseable at any scale for best ergonomics. But, path_elements() and the like bakes in resolution-dependent implementation. This seems contradictory.

For SVG, this means that <circle> and <ellipse> are subject to this tension (assuming the parser doesn't convert them to paths before reaching Vello).

I think we should accept BezPath (or better an iterator of PathEl) into the API until we have an answer to this problem because otherwise our internal implementation can produce incorrect renderings for expected and common use cases.

By passing in BezPath (or PathEl iterator) instead of impl Shape, we force the consumer to think about the issue and push the problem out until we are ready to properly tackle it. Yes, this is less ergonomic but is more correct.

This approach may also better reflect our current state where path caching isn't yet a known dimension.
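To make the resolution-dependence concrete, a small kurbo-only sketch echoing the zoom numbers above:

use kurbo::{Affine, Circle, Shape};

fn main() {
    // A circle flattened at scene-construction time with tolerance 0.1: every
    // flattened segment may deviate from the true curve by up to 0.1 units.
    let circle = Circle::new((0.0, 0.0), 100.0);
    let mut path = circle.to_path(0.1);

    // If the scene is later drawn at 230x (a 23,000% zoom), that baked-in
    // 0.1-unit error becomes roughly a 23 pixel error on screen.
    let zoom = 230.0;
    path.apply_affine(Affine::scale(zoom));
    println!("{} flattened elements", path.elements().len());
    println!("worst-case on-screen error ≈ {} px", 0.1 * zoom);
}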

Contributor

Maybe I'm missing something, but couldn't we change vello_cpu/vello_hybrid to take an impl Shape as well, and we pass it all the way down to flattening, where the tolerance is multiplied by the scale/skew factor in each direction, as it's currently done for the stroke tolerance?
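A rough sketch of that idea, assuming the final transform is known at flattening time; the scale extraction here is deliberately simplistic and not necessarily how the existing stroke-tolerance code does it:

use kurbo::{Affine, BezPath, Shape};

// Pick the flattening tolerance at render time, once the full transform is
// known, by dividing the target on-screen tolerance by the transform's scale.
fn flatten_for_transform(shape: &impl Shape, transform: Affine, screen_tolerance: f64) -> BezPath {
    let [a, b, c, d, _, _] = transform.as_coeffs();
    // Frobenius norm of the linear part: an upper bound on the scale factor
    // (the largest singular value would be tighter, but this is enough here).
    let max_scale = (a * a + b * b + c * c + d * d).sqrt().max(f64::EPSILON);
    let mut path = shape.to_path(screen_tolerance / max_scale);
    path.apply_affine(transform);
    path
}

fn main() {
    let circle = kurbo::Circle::new((0.0, 0.0), 100.0);
    // At 230x zoom the circle is flattened much more finely, so the on-screen
    // error stays at or below roughly 0.1 px instead of growing with the zoom.
    let path = flatten_for_transform(&circle, Affine::scale(230.0), 0.1);
    println!("{} flattened elements", path.elements().len());
}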

Contributor

Are there concerns with making Shape -> BezPath conversion single-threaded instead of multi-threaded here? I have similar (potential) concerns around some of the Parley Draw API design...

Member Author

@DJMcNab DJMcNab Jan 23, 2026

I did discuss this before making this decision, in #vello > Determining correct `Shape` tolerance, although clearly that thread didn't get enough attention. I've actioned the proposal we discussed in that thread in #1376. That doesn't allow arbitrary path element iterators (probably we should have a suitable wrapper type), but it does maintain ergonomics in the common case.

Maybe I'm missing something, but couldn't we change vello_cpu/vello_hybrid to take an impl Shape as well, and we pass it all the way down to flattening, where the tolerance is multiplied by the scale/skew factor in each direction, as it's currently done for the stroke tolerance?

I believe this would be possible, although it would unfortunately require.

Are there concerns with making Shape -> BezPath conversion single-threaded instead of multi-threaded here? I have similar (potential) concerns around some of the Parley Draw API design...

If you can make measurements showing it's an issue, we can explore different designs. But I personally find it hard to think it would be a problem.

Member Author

That affinity demo is really impressive!

Contributor

Maybe I'm missing something, but couldn't we change vello_cpu/vello_hybrid to take an impl Shape as well, and we pass it all the way down to flattening, where the tolerance is multiplied by the scale/skew factor in each direction, as it's currently done for the stroke tolerance?

Would that work for scenes though? At construction time, they do not know their ultimate transform; so we would need to store the traits on the scene until render time, which, I think, would be hard to do elegantly

Member Author

Yeah, that is probably what I was going to say in my sentence fragment (if I ever got around to finishing it!); that it would require some way to store Shapes generically, which doesn't currently exist, and would likely require boxing or unsafe shenanigans.

@DJMcNab
Member Author

DJMcNab commented Jan 23, 2026

I think that the follow-ups of #1375 and #1376 address all of the review concerns raised, and so conversation on their respective parts should continue in those PRs. Let me know if I've missed any comment; it's not intentional, it would just be because of GitHub's UI not making it easy to keep track of these things.

songhuaixu pushed a commit to songhuaixu/vello that referenced this pull request Jan 26, 2026
As suggested in PR review of
linebender#1360 from @taj-p and @grebmeg.
Thanks both, and apologies that this likely did land too hastily.

I've also added some basic tests for `extract_integer_translation`,
because it's currently otherwise untested.

---------

Co-authored-by: Alex Gemberg <[email protected]>
Co-authored-by: Taj Pereira <[email protected]>