Add index mesh export that doesn't go through filesystem #15
Conversation
Hey, thanks for your PR. I agree that the current approach of using the filesystem to triangulate each part is a bad solution that was just good enough at the time.
Yes :) I need the triangulation to render at all; GPUs only know how to quickly render triangles, not more complex CAD shapes. I need the normals and UVs (instead of just triangles) to render lighting and textures, respectively. The demo I pushed out earlier was getting the mesh by going through an STL, but that doesn't come with normals and UVs, so it was approximating the normals (lighting would be a bit off, especially on curved surfaces) and just ignoring UVs (it could not be textured). Looking forward, the current code renders objects, but textures are duplicated across faces. We'll probably want to add support for returning a mesh for only a single face at some point, so we can texture different faces differently and enable detecting the cursor on individual faces instead of just the object as a whole.
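To make the data being discussed concrete, here is a hypothetical sketch of an indexed mesh carrying normals and UVs, plus the flat-normal approximation one falls back on when reading an STL that carries neither (correct for planar faces, faceted-looking on curved surfaces). All names are illustrative, not anvil's actual API.

```rust
// Illustrative indexed-mesh layout: attribute buffers plus a triangle index list.
#[derive(Debug, Default)]
struct IndexedMesh {
    positions: Vec<[f64; 3]>, // one entry per vertex
    normals: Vec<[f64; 3]>,   // per-vertex normals, used for lighting
    uvs: Vec<[f64; 2]>,       // per-vertex texture coordinates
    indices: Vec<u32>,        // three indices per triangle
}

// Flat normal approximated from triangle winding alone, as when the source
// data (e.g. an STL) provides no normals: the normalized cross product of
// two triangle edges.
fn face_normal(a: [f64; 3], b: [f64; 3], c: [f64; 3]) -> [f64; 3] {
    let u = [b[0] - a[0], b[1] - a[1], b[2] - a[2]];
    let v = [c[0] - a[0], c[1] - a[1], c[2] - a[2]];
    let n = [
        u[1] * v[2] - u[2] * v[1],
        u[2] * v[0] - u[0] * v[2],
        u[0] * v[1] - u[1] * v[0],
    ];
    let len = (n[0] * n[0] + n[1] * n[1] + n[2] * n[2]).sqrt();
    [n[0] / len, n[1] / len, n[2] / len]
}
```

Because every triangle of a curved face gets a single flat normal this way, adjacent triangles shade with visible seams, which is exactly the "lighting would be a bit off" effect described above.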
Ok, I think I understand now. The face-based approach might be the one most natural to the rest of anvil. It might also be good to have an intermediate representation of the mesh (maybe based on the edges connecting the vertices) that could then be converted into the IndexedMesh that you need. What do you think?
Did you have a look at the
I think anvil will likely end up needing operations that work on both parts and faces - that's certainly the case with CAD programs I've worked with in the past. E.g. you might extrude a face, or you might rotate a part. I think we'll end up exporting APIs to get meshes of both parts and faces. Ultimately the full part-mesh API likely calls the face-based API, unless we end up wanting to do better vertex deduplication for performance. But as a user of the library I'm very often going to just want the full mesh, not just a piece of it.
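The "part-mesh API calls the face-based API" idea above could look roughly like the following sketch: a part mesh assembled by concatenating per-face meshes, offsetting indices as we go. The `Mesh` type and function names here are hypothetical, not anvil's actual API.

```rust
// Minimal illustrative mesh: positions plus a triangle index list.
#[derive(Default)]
struct Mesh {
    positions: Vec<[f64; 3]>,
    indices: Vec<u32>,
}

// Build a part-level mesh from per-face meshes. Each face's indices are
// offset by the number of vertices already accumulated, so they still
// point at that face's own vertices after concatenation. Note this keeps
// duplicate vertices on face boundaries (no cross-face deduplication).
fn merge_face_meshes(faces: &[Mesh]) -> Mesh {
    let mut part = Mesh::default();
    for face in faces {
        let offset = part.positions.len() as u32;
        part.positions.extend_from_slice(&face.positions);
        part.indices.extend(face.indices.iter().map(|&i| i + offset));
    }
    part
}
```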
That this is what OCCT does for us? The docs here go into a reasonable amount of detail on how they modelled this. Doing this ourselves entails a fairly deep understanding of the underlying geometric model - you need to be able to ask "how closely does this polyline approximate this curve?" and create a better one if it doesn't approximate it closely enough. Re-implementing this is non-trivial.
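As a toy illustration (not OCCT's actual algorithm) of the "how closely does this polyline approximate this curve" question: for a circle of radius `r` sampled with `n` equal chords, the maximum chordal deviation (the sagitta) is `r * (1 - cos(pi / n))`, and a mesher keeps subdividing until that drops below the requested linear deflection. Function names here are made up for the sketch.

```rust
// Maximum distance between a circle of the given radius and an inscribed
// regular n-gon approximating it (the sagitta of one chord).
fn circle_deflection(radius: f64, segments: u32) -> f64 {
    radius * (1.0 - (std::f64::consts::PI / segments as f64).cos())
}

// Smallest segment count whose polyline stays within the given tolerance
// of the true circle - the 1D analogue of meshing to a linear deflection.
fn segments_for(radius: f64, tolerance: f64) -> u32 {
    (3u32..)
        .find(|&n| circle_deflection(radius, n) <= tolerance)
        .unwrap()
}
```

Real surface meshing has to answer the same question for arbitrary curves and surfaces (and also bound angular deflection), which is why re-implementing it is a serious undertaking.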
(Response updated once from original): Turns out not... but
There's also just no good way to get this type without the sort of code I've written here. Apart from the UV thing, I'd have no real objection to rewriting my code to export this struct instead of defining its own struct, but I also wouldn't see any real advantage to doing so.
Yes, I totally agree :)
Yes it does, but I would like to hide all OCCT interfaces behind custom wrappers to keep open the option of ditching OCCT down the road. An intermediate abstraction layer could be used in other use cases than rendering. For example, I have struggled so far to get the bounding box coordinates of a part. This could also be done using a mesh (provided the performance penalty is not too big). But OCCT could suffice for now.
It's not a question of performance; the information doesn't exist to do this meaningfully - we don't know which triangles represent which face. That said, it's questionable how meaningful the current model is. It maps every face to the same unit square, so you end up with the same texture on every face, scaled to the size of the face... which is ok for very basic programmer art where you don't care about the details, but the usefulness is very limited. Simply dropping UV support for now would make some sense. If/when we are exporting face meshes instead of part meshes, the UV information becomes much more valuable, because we can then texture the different faces differently. I suppose at that point it's possible to approximate UVs manually, but it does not sound simple.
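The "every face mapped to the same unit square" behaviour described above amounts to normalizing each vertex's surface parameters by that face's parameter bounds. A sketch of that mapping (illustrative, not the PR's actual code):

```rust
// Normalize a surface parameter pair (u, v) by the face's parameter
// bounds (u_min, u_max, v_min, v_max). Every face then spans the same
// [0, 1] x [0, 1] square, so every face repeats the same texture,
// stretched to the face's size.
fn normalize_uv(u: f64, v: f64, bounds: (f64, f64, f64, f64)) -> [f64; 2] {
    let (u_min, u_max, v_min, v_max) = bounds;
    [
        (u - u_min) / (u_max - u_min),
        (v - v_min) / (v_max - v_min),
    ]
}
```

Texturing faces differently would require knowing which triangles belong to which face, which is precisely the information a whole-part mesh export discards.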
I disagree with the notion that what we are doing is exporting STL files in the first place... we aren't serializing anything. Using this doesn't scale to if/when we want to make textured faces with UVs, because
My gut feeling is that as long as you're using OCCT, you want to use OCCT's meshing algorithm, and have the intermediate abstraction layer be the mesh itself (with labels for what is a face, what is a user-facing "edge" of the geometry, and what is a user-facing vertex - i.e. one with geometric meaning, not one that the meshing algorithm just made up). Any replacement for OCCT (whether third-party or custom-made) will/should come with its own meshing layer. I could easily be wrong though; this is not an area where I have any expertise.
Sorry for not getting back to you for a few days; I had a few deadlines this week. I've now done a custom implementation that mostly copied your code in #16. The most important differences are:
Can you have a look at whether that also fits your use case? If so, I'll finish the PR and merge.
Closing since this is superseded by #16. No worries about taking your time. |
Currently, exporting meshes is only possible by writing an STL to the filesystem. This adds support for doing it directly.
I referenced StlAPI_Writer in OCCT and mesh in opencascade-rs significantly while writing this code - both are also under LGPL2 licenses. Roughly the first comment is what I had before @bschwind very kindly pointed me in the direction of opencascade's implementation (a huge thank you to @bschwind for the help) - at which point I was just guessing at normals downstream and had no uvs.
Unlike the referenced implementations, this exports an indexed mesh, allowing for faster rendering. It doesn't fully deduplicate vertices, though, rather just taking what OCCT gives us (deduplication per face, duplicated vertices on the boundaries between faces). It seems like OCCT ought to be able to tell us which vertices are which and let us quickly deduplicate fully, but I haven't yet found a way to extract that information. In the meantime, this is at least better than nothing.
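If OCCT can't be made to reveal which boundary vertices coincide, one fallback (a hypothetical post-pass, not what this PR does) is to merge vertices whose positions agree after quantizing to a tolerance grid. Real code would also have to decide how to merge normals and UVs, since boundary vertices often legitimately differ in those attributes.

```rust
use std::collections::HashMap;

// Merge positions that land in the same cell of a grid with spacing `tol`,
// returning the deduplicated positions and a rewritten index buffer.
fn dedup_positions(
    positions: &[[f64; 3]],
    indices: &[u32],
    tol: f64,
) -> (Vec<[f64; 3]>, Vec<u32>) {
    let mut seen: HashMap<[i64; 3], u32> = HashMap::new();
    let mut out_positions = Vec::new();
    let mut remap = Vec::with_capacity(positions.len());
    for p in positions {
        // Quantize each coordinate so nearly-equal positions share a key.
        let key = [
            (p[0] / tol).round() as i64,
            (p[1] / tol).round() as i64,
            (p[2] / tol).round() as i64,
        ];
        let idx = *seen.entry(key).or_insert_with(|| {
            out_positions.push(*p);
            (out_positions.len() - 1) as u32
        });
        remap.push(idx);
    }
    let new_indices = indices.iter().map(|&i| remap[i as usize]).collect();
    (out_positions, new_indices)
}
```

Quantized keys can still split two points straddling a cell boundary, which is one reason an exact answer from OCCT would be preferable to this kind of approximation.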
Neither reference fully handled normals; I think I got them correct, but it was basically trial and error.
`missing_faces` is a concept from `StlAPI_Writer`. I don't actually know how to generate them, but I generally trust that the upstream is right that it is somehow possible to create geometry that fails to triangulate.

I decided to use bare Rust arrays instead of points in the export format; since it is an interchange format, using our own types didn't seem particularly useful. I'm definitely willing to change that if you disagree. Either way, on the Bevy end I have to map everything down to f32s/u32s, so it makes very little difference in my actual use case.
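For context on the "map everything down to f32s" point: a renderer like Bevy wants f32 attribute buffers, so the f64 interchange data gets narrowed on the consumer side regardless of whether the exporter uses bare arrays or a point type. A sketch of that conversion (function name is illustrative):

```rust
// Narrow f64 interchange positions into the f32 buffer a GPU-facing
// engine expects. The same shape applies to normals and UVs.
fn to_f32_buffer(positions: &[[f64; 3]]) -> Vec<[f32; 3]> {
    positions
        .iter()
        .map(|p| [p[0] as f32, p[1] as f32, p[2] as f32])
        .collect()
}
```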
Disclaimer, since I know not everyone wants this in their project: I did use Claude while writing this. I'd estimate that 50% of the tests and 5% of the other code is actually Claude's output surviving unchanged into the finished product. Any bugs remaining in the output are of course entirely my fault, and almost certainly not Claude's, considering the lines that did survive are the least interesting ones. This rather long PR description, on the other hand, was entirely written by hand.
I'm leaving this as a draft right now because I'm submitting it simultaneously with an upstream pull request (bschwind/opencascade-rs#210) that we need in order to output correct normals (the Cargo.toml change points to a commit not actually in the repo it claims to be from... which GitHub and cargo both just accept).