Add ability to refine_template from a movie #104
jdickerson95 merged 29 commits into mdg_global_whitening from
Conversation
mgiammar
left a comment
Overall the logic of the PR looks fine, but I think there's a lot of complexity introduced in the particle image stack setup portion of Leopard-EM by all the new options for global/local filtering plus the movie extraction. I'd like to see the utility functions cleaned up a bit before merging; the many different if/else branches make it hard to see how a particular function will execute.
```python
print(
    f"frame {frame_index}: y_shifts: {y_shifts[0]}, x_shifts: {x_shifts[0]}"
)
```
Leftover debug print statement that should not be there.
```python
if movie is not None and self.apply_global_filtering:
    warnings.warn(
        "Global filtering cannot be applied with movie refinement. "
        "Disabling apply_global_filtering.",
        stacklevel=2,
    )
    self.apply_global_filtering = False
```
It would be preferable to raise a validation error through Pydantic when (movie is not None or self.movie_config.enabled is True) and self.apply_global_filtering is True. That way there are no implicit modifications to a configuration file; see the sketch below.
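A minimal sketch of that validation, assuming a Pydantic v2 model; the field names (`movie_path`, `apply_global_filtering`) are illustrative stand-ins rather than Leopard-EM's actual config fields:

```python
# Sketch only: reject the invalid combination at config-parse time instead of
# silently flipping apply_global_filtering off later.
from typing import Optional

from pydantic import BaseModel, model_validator


class RefineTemplateConfig(BaseModel):
    movie_path: Optional[str] = None  # hypothetical field standing in for the movie input
    apply_global_filtering: bool = False

    @model_validator(mode="after")
    def _check_filtering_vs_movie(self) -> "RefineTemplateConfig":
        # Fail loudly instead of implicitly modifying the user's configuration.
        if self.movie_path is not None and self.apply_global_filtering:
            raise ValueError(
                "apply_global_filtering cannot be used with movie refinement; "
                "disable one of the two options in the configuration."
            )
        return self
```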
```python
def dose_weight(
    movie_fft: torch.Tensor,
    pixel_size: float,
    pre_exposure: float,
    fluence_per_frame: float,
    voltage: float,
) -> torch.Tensor:
    """Dose weight a movie.
```
Function name should be more descriptive. It is dose weighting and summing a movie into a single micrograph. From the current name and docstring I would expect it to return a tensor of shape (t, H, W) rather than (H, W); see the sketch below.
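For example, a more descriptive signature might look like the following; the name and the uniform placeholder weighting are hypothetical, and only the shape contract (summing (t, H, W) frames into one (H, W) micrograph) reflects the comment above:

```python
import torch


def dose_weight_and_sum_movie(
    movie_fft: torch.Tensor,  # (t, H, W) FFTs of the aligned movie frames
    pixel_size: float,
    pre_exposure: float,
    fluence_per_frame: float,
    voltage: float,
) -> torch.Tensor:
    """Dose weight each frame and sum the movie into a single (H, W) micrograph FFT."""
    # Placeholder uniform weights; a real implementation would compute
    # frequency-dependent weights from the accumulated exposure per frame.
    weights = torch.ones(movie_fft.shape[0], 1, 1, dtype=movie_fft.dtype)
    return (movie_fft * weights).sum(dim=0)  # (t, H, W) -> (H, W)
```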
```python
if movie is not None and deformation_field is not None:
    particle_images = particle_stack.construct_image_stack_from_movie(
        movie=movie,
        deformation_field=deformation_field,
        pos_reference="top-left",
        handle_bounds="pad",
        padding_mode="reflect",
        padding_value=0.0,
        pre_exposure=pre_exposure,
        fluence_per_frame=fluence_per_frame,
    )
else:
    particle_images = particle_stack.construct_image_stack(
        images=micrograph_images,
        indices=micrograph_indexes,
        extraction_size=particle_stack.extracted_box_size,
        pos_reference="top-left",
        padding_value=0.0,
        handle_bounds="pad",
        padding_mode="reflect",  # avoid issues of zeros
    )
```
From a code organization standpoint, there are a lot of different ways this function could execute depending on the input parameters. Separating the image extraction step into two functions, one operating on micrographs and one on a movie object, seems like the logical choice here; see the sketch below.
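One possible split, sketched with hypothetical helper names; the keyword arguments mirror the two call sites in the diff above but are not a final API:

```python
def extract_particle_images_from_movie(
    particle_stack, movie, deformation_field, pre_exposure, fluence_per_frame
):
    """Extract per-particle image stacks directly from an aligned movie."""
    return particle_stack.construct_image_stack_from_movie(
        movie=movie,
        deformation_field=deformation_field,
        pos_reference="top-left",
        handle_bounds="pad",
        padding_mode="reflect",
        padding_value=0.0,
        pre_exposure=pre_exposure,
        fluence_per_frame=fluence_per_frame,
    )


def extract_particle_images_from_micrographs(
    particle_stack, micrograph_images, micrograph_indexes
):
    """Extract per-particle image stacks from pre-summed micrographs."""
    return particle_stack.construct_image_stack(
        images=micrograph_images,
        indices=micrograph_indexes,
        extraction_size=particle_stack.extracted_box_size,
        pos_reference="top-left",
        handle_bounds="pad",
        padding_mode="reflect",  # avoid issues of zeros
        padding_value=0.0,
    )
```

The caller then picks one helper up front, so the control flow is visible at the call site rather than buried in nested if/else branches.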
```python
frame_deformation_field = evaluate_deformation_field_at_t(
    deformation_field=deformation_field,
    t=normalized_t[frame_index],
    grid_shape=(10 * gh, 10 * gw),
```
Why this extra factor of 10? I know it's because of oversampling before interpolation, but it's not immediately obvious why the grid dimensions are being multiplied.
This is the same as in torch motion correction.
It is expensive to evaluate the cubic spline grid over all pixels. As an approximation, we evaluate it over a grid 10x larger than the control-points grid in each dimension, then use bicubic interpolation to get the full per-pixel shifts.
10x was chosen because it seemed to work; more work would be needed to properly characterize the speed/accuracy trade-off.
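A minimal sketch of that oversample-then-interpolate step, assuming the variable names from the diff above (`gh`, `gw`, `deformation_field`, `normalized_t`, `frame_index`) and hypothetical `image_height`/`image_width` for the full frame size:

```python
import torch.nn.functional as F

oversample = 10  # evaluate the spline on a coarse (10*gh, 10*gw) grid, not per pixel
coarse_shifts = evaluate_deformation_field_at_t(
    deformation_field=deformation_field,
    t=normalized_t[frame_index],
    grid_shape=(oversample * gh, oversample * gw),
)  # assumed shape (2, 10*gh, 10*gw): y/x shifts on the coarse grid
per_pixel_shifts = F.interpolate(
    coarse_shifts.unsqueeze(0),        # add batch dim -> (1, 2, 10*gh, 10*gw)
    size=(image_height, image_width),  # upsample to the full frame size
    mode="bicubic",
    align_corners=True,
).squeeze(0)                           # (2, H, W) per-pixel y/x shifts
```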
Add a differentiable refine template.
A movie and deformation field can now be supplied to refine template.
This PR goes into the global whitening branch since it required the global/local whitening separation.