feat: split skeletons at branch points before merging #215
wanqingy wants to merge 9 commits into seung-lab:master
Conversation
Oops, the disable post-processing change should not be included. Feel free to reject it.
william-silversmith left a comment
This is a valuable mode of skeletonization for binary images from light microscopy techniques, which optically merge networks of neurons that must then be separated.
I added some comments, but overall this makes sense. It would be a good idea to add one or more test cases in the testing suite.
I do wish the permissions of unchanged files weren't touched though. 😅
else:
    path = self.frag_path

# to do: needs to handle corrupted data gracefully
What kind of corruption?
Like missing image chunks. I tried to use the fill-missing flag, but it doesn't solve the problem somehow. I didn't dive deep into it.
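For context, a minimal sketch of how that flag is normally enabled in CloudVolume (the cloudpath below is a placeholder); this is the behavior it is supposed to provide:

```python
from cloudvolume import CloudVolume

# With fill_missing=True, absent image chunks are returned as zeros
# instead of raising an error when the cutout is read.
vol = CloudVolume("gs://bucket/dataset/image", fill_missing=True)
img = vol[0:512, 0:512, 0:64]  # missing chunks come back zero-filled
```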
timestamp:Optional[int] = None,
root_ids_cloudpath:Optional[str] = None,
# NEW: Add chunk indexing parameters
chunk_index:Optional[int] = None,
Can't the chunk index be calculated from the position in space?
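A minimal sketch of that calculation, with assumed parameter names: the chunk grid coordinates and a linear index are derived from the task's offset instead of being passed in.

```python
import numpy as np

def chunk_index_from_offset(task_offset, volume_offset, chunk_shape, chunks_per_dim):
  # Grid coordinates of the chunk containing this task's origin voxel.
  grid = (np.asarray(task_offset) - np.asarray(volume_offset)) // np.asarray(chunk_shape)
  cx, cy, cz = grid.astype(int)
  # One possible linear ordering (x varies fastest) over the chunk grid.
  nx, ny, nz = chunks_per_dim
  return cx + cy * nx + cz * nx * ny, (cx, cy, cz)
```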
chunk_index:Optional[int] = None,
chunk_coords:Optional[Tuple[int,int,int]] = None,
global_chunks_per_dim:Optional[List[int]] = None,
volume_bounds:Optional[Bbox] = None,
I think you can get this from CloudVolume. Or rather, I think this is for selecting a cropped volume, but I'm still not sure it's necessary.
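A sketch of pulling these values from the CloudVolume handle rather than passing them as task parameters (the path is a placeholder; `bounds` and `chunk_size` are existing CloudVolume attributes):

```python
import numpy as np
from cloudvolume import CloudVolume

vol = CloudVolume("gs://bucket/dataset/segmentation", mip=0)
volume_bounds = vol.bounds  # Bbox covering the full dataset at this mip
chunks_per_dim = np.ceil(
  np.asarray(vol.bounds.size3()) / np.asarray(vol.chunk_size)
).astype(int)
```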
)
# Store chunk information
self.chunk_index = chunk_index
self.chunk_coords = chunk_coords  # (cx, cy, cz)
Is this different from shape?
for sid, skel in skeletons.items():
    skel.id = sid

if self.split_at_branches:
Moving this to after the cross section computation might give slightly nicer cross section results near the branch points, but the effect is probably very minor.
result[1] = surface_fragments[0]
result[1].id = 1
else:
    from osteoid import Skeleton
It's OK to move this to a top-level import.
if isinstance(self.volume_bounds, str):
    import json
    volume_bounds_dict = json.loads(self.volume_bounds)
    from cloudvolume.lib import Bbox, Vec
OK to promote Bbox and Vec to top-level imports.
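Concretely, the function-local imports flagged in these two comments would move to module scope at the top of the task file:

```python
# Top of the task module, replacing the function-local imports:
import json

from cloudvolume.lib import Bbox, Vec
from osteoid import Skeleton
```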
chunk_bbox = self.bounds

# Parse volume_bounds if it's a string
if isinstance(self.volume_bounds, str):
This is a bit weird; bounds are usually specified via Bbox.from_list/to_list or as offset + shape.
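The conventional round trip the comment is pointing at looks like this; to_list/from_list avoid the hand-rolled JSON dict entirely:

```python
from cloudvolume.lib import Bbox

# Serialize with to_list() (JSON-friendly) and restore with Bbox.from_list().
bbox = Bbox((0, 0, 0), (512, 512, 64))
payload = bbox.to_list()            # [0, 0, 0, 512, 512, 64]
restored = Bbox.from_list(payload)  # reconstructs an equivalent Bbox
```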
return interior_faces

def get_interior_faces(self, vol):
I guess this is in part to support debugging if you want to consider a subvolume?
# print(f"DEBUG: chunk_index={self.chunk_index}, coords={self.chunk_coords}")

if self.chunk_index is not None:
    base_id = (self.chunk_index + 1) * 1000000 + 1000
A safer method is to add the number of voxels in prior tasks, which guarantees this id is unique. If using a uint64, this works up to a cubed volume length of 2.6M voxels.
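A sketch of that scheme under the assumption of uniformly sized chunks (names are illustrative): each task's id block starts after the total voxel count of all prior tasks, so blocks can never overlap.

```python
def base_id_for_task(chunk_index, chunk_shape):
  # Each chunk owns an id block as large as its voxel count, so ids from
  # different tasks cannot collide. With uint64 ids this holds up to roughly
  # a 2.6M-voxel cube, since (2**64) ** (1/3) ~= 2.6e6.
  voxels_per_chunk = chunk_shape[0] * chunk_shape[1] * chunk_shape[2]
  base_id = chunk_index * voxels_per_chunk  # total voxels in prior tasks
  assert base_id + voxels_per_chunk < 2**64, "uint64 id space exhausted"
  return base_id + 1  # reserve 0 for background
```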
|
I made some changes to the SkeletonTask to support high-speed cross section and hole filling calculations... sorry about that.
Added a split_at_branches option to the Stage 1 skeletonization task to reduce memory load during skeleton merge operations for binary label datasets.
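A hypothetical invocation, assuming the option is threaded through the task constructor as in this PR; the surrounding parameter values and the placeholder cloudpath are illustrative, not taken from the change itself:

```python
task = SkeletonTask(
  cloudpath="gs://bucket/binary-labels",  # placeholder path
  shape=(512, 512, 512),
  offset=(0, 0, 0),
  split_at_branches=True,  # split skeletons at branch points before merging
)
task.execute()
```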