Transolver volume #1242
Conversation
…taset. Surface dataloading is prototyped, not finished yet.
… data. Applied some cleanup to make the datapipe similar to domino, which is a step towards unification.
Greptile Summary

Overview: This PR introduces significant enhancements to the PhysicsNeMo transolver capabilities.

Critical Issue: Typhon Model Bug

Architecture Changes: The datapipe refactoring consolidates preprocessing logic and improves flexibility with configurable symmetries (translational invariance, scale invariance).

Checklist Status: Per the PR description, the following items remain incomplete.

Important Files Changed / File Analysis
22 files reviewed, 1 comment
```python
if len(global_context_input) > 0:
    embedding_states = torch.cat(global_context_input, dim=-1)

# Project the inputs to the hidden dimension:
x = self.preprocess(local_embedding)

for block in self.blocks:
    x = block(x, embedding_states)
```
logic: `embedding_states` is referenced before assignment when no geometry or global embeddings are provided
Original:

```python
if len(global_context_input) > 0:
    embedding_states = torch.cat(global_context_input, dim=-1)
# Project the inputs to the hidden dimension:
x = self.preprocess(local_embedding)
for block in self.blocks:
    x = block(x, embedding_states)
```

Suggested change:

```python
# Construct the embedding states:
if len(global_context_input) > 0:
    embedding_states = torch.cat(global_context_input, dim=-1)
else:
    embedding_states = None
# Project the inputs to the hidden dimension:
x = self.preprocess(local_embedding)
for block in self.blocks:
    x = block(x, embedding_states)
```
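The failure mode behind this suggestion is Python's ordinary `UnboundLocalError`: a name assigned only inside a conditional branch is undefined on the other path. A minimal, self-contained sketch of the guarded pattern follows; the module names (`forward_sketch`, the stand-in `preprocess` and `block`) are hypothetical and not the actual PhysicsNeMo code:

```python
import torch

def forward_sketch(local_embedding, global_context_input, preprocess, blocks):
    """Guarded embedding construction: embedding_states is always bound."""
    # Construct the embedding states; fall back to None when no
    # geometry or global embeddings are provided.
    if len(global_context_input) > 0:
        embedding_states = torch.cat(global_context_input, dim=-1)
    else:
        embedding_states = None

    # Project the inputs to the hidden dimension:
    x = preprocess(local_embedding)

    for block in blocks:
        x = block(x, embedding_states)
    return x

# Tiny stand-ins for the real modules (hypothetical):
preprocess = torch.nn.Linear(4, 8)
block = lambda x, emb: x if emb is None else x + emb.mean()

# With the guard in place, an empty global context no longer crashes:
out = forward_sketch(torch.randn(2, 4), [], preprocess, [block])
print(out.shape)  # torch.Size([2, 8])
```

Without the `else: embedding_states = None` branch, the same call with an empty `global_context_input` raises `UnboundLocalError` at the first block invocation.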
RishikeshRanade
left a comment
LGTM
inference workloads are different, so these aim to cover common scenarios as examples. -->

The validation dataset in Zarr format can be loaded, processed, and the L2
metrics summarized in `inference_on_zarr.py`. For surface data, this script will also
Can you add that `return_mesh_neighbors` should be set to `true` to run this?
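For reference, the relative L2 metric that scripts like this typically summarize can be sketched as below; this is an illustrative helper (the `relative_l2` name is hypothetical), not the exact reduction used in `inference_on_zarr.py`:

```python
import torch

def relative_l2(pred: torch.Tensor, true: torch.Tensor) -> torch.Tensor:
    """Relative L2 error: ||pred - true||_2 / ||true||_2."""
    return torch.linalg.norm(pred - true) / torch.linalg.norm(true)

true = torch.tensor([3.0, 4.0])
perfect = relative_l2(true, true)          # exact prediction -> 0.0
off_by_ten = relative_l2(true * 1.1, true) # uniform 10% error -> ~0.1
print(perfect.item(), off_by_ten.item())
```

In practice this would be computed per output channel (pressure, shear components) and averaged over the validation batches.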
```diff
  metrics = {
-     "l2_pressure": torch.mean(l2[:, 0]),
+     "l2_pressure_surf": torch.mean(l2[:, 0]),
```
In the README we need to mention that this part has to be changed when extending to a different use case. Alternatively, describe the variables in config.yaml like domino does and read them from there.
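The config-driven alternative mentioned above could look roughly like the sketch below. The config key `surface_variables` and the helper `build_metrics` are hypothetical illustrations of the domino-style approach, not the PR's actual implementation:

```python
import torch

# Hypothetical config entry, in the spirit of domino's config.yaml;
# the real key and variable names would come from the example's config.
config = {"surface_variables": ["pressure_surf", "shear_x", "shear_y", "shear_z"]}

def build_metrics(l2: torch.Tensor, variable_names) -> dict:
    """Name per-channel L2 metrics from the config instead of hardcoding them."""
    return {f"l2_{name}": torch.mean(l2[:, i]) for i, name in enumerate(variable_names)}

l2 = torch.ones(8, 4) * 0.05  # dummy per-sample, per-channel L2 values
metrics = build_metrics(l2, config["surface_variables"])
print(sorted(metrics))  # ['l2_pressure_surf', 'l2_shear_x', 'l2_shear_y', 'l2_shear_z']
```

With this shape, extending to a new use case means editing only config.yaml, not the inference script.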
```diff
- def pad_input_for_fp8(features: torch.Tensor, embeddings: torch.Tensor) -> torch.Tensor:
+ def pad_input_for_fp8(
```
Do these need to be part of train.py? Can these functions be moved to utils or made part of the datapipe?
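For context on what such a helper does: FP8 matrix-multiply kernels commonly require dimension sizes that are multiples of 16, so inputs get zero-padded up to the next multiple. A generic sketch of that idea (illustrative only; the name `pad_last_dim_to_multiple` is hypothetical and this is not the PR's `pad_input_for_fp8`):

```python
import torch
import torch.nn.functional as F

def pad_last_dim_to_multiple(t: torch.Tensor, multiple: int = 16) -> torch.Tensor:
    """Zero-pad the last dimension up to the next multiple (e.g. for FP8 GEMMs)."""
    remainder = t.shape[-1] % multiple
    if remainder == 0:
        return t  # already aligned, nothing to do
    # F.pad's last pair pads the final dimension: (left, right).
    return F.pad(t, (0, multiple - remainder))

x = torch.randn(2, 10)
print(pad_last_dim_to_multiple(x).shape)  # torch.Size([2, 16])
```

Since it has no training-loop state, a utility like this would indeed sit naturally in a shared utils module or the datapipe.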
```python
dataloader: Training data loader
sampler (torch.utils.data.Sampler): Sampler for distributed or sequential sampling.
model (torch.nn.Module): The neural network model to train.
epoch_len (int): Length of the epoch.
```
Is this number of epochs?
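One common reading, offered here only as an assumption pending the author's answer: `epoch_len` as the number of batches consumed per epoch, which is useful when the dataloader is effectively unbounded. A minimal sketch (the `run_epoch` name and dummy dataloader are hypothetical):

```python
from itertools import islice

def run_epoch(dataloader, epoch_len: int) -> int:
    """Consume at most epoch_len batches from a (possibly infinite) dataloader."""
    seen = 0
    for batch in islice(dataloader, epoch_len):
        seen += 1  # a real train step would go here
    return seen

infinite = iter(int, 1)  # endless dummy iterator standing in for a dataloader
print(run_epoch(infinite, 5))  # 5
```

Under this convention the total number of optimizer steps is `num_epochs * epoch_len`, which is why the distinction the reviewer raises matters.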
```
[2025-11-19 07:02:38,387][training][INFO] - Summary:
```

| Batch | Loss   | L2 Pressure | L2 Shear X | L2 Shear Y | L2 Shear Z | Predicted Drag Coefficient | Pred Lift Coefficient | True Drag Coefficient | True Lift Coefficient | Elapsed (s) |
|-------|--------|-------------|------------|------------|------------|----------------------------|-----------------------|-----------------------|-----------------------|-------------|
| Mean  | 0.0311 | 0.0614      | 0.0921     | 0.108      | 0.1214     | 5.2949                     | 1.9137                | 5.2962                | 1.9329                | 11.4647     |
Is this updated? These numbers look higher than what we discussed, right?
PhysicsNeMo Pull Request
This PR brings several updates:
Still minor details to wrap up:
Description
Checklist
Dependencies
Review Process
All PRs are reviewed by the PhysicsNeMo team before merging.
Depending on which files are changed, GitHub may automatically assign a maintainer for review.
We are also testing AI-based code review tools (e.g., Greptile), which may add automated comments with a confidence score.
This score reflects the AI's assessment of merge readiness and is not a qualitative judgment of your work, nor an indication that the PR will be accepted or rejected.
AI-generated feedback should be reviewed critically for usefulness.
You are not required to respond to every AI comment, but they are intended to help both authors and reviewers.
Please react to Greptile comments with 👍 or 👎 to provide feedback on their accuracy.