Is your feature request related to a problem? Please describe.
When visualizing deep neural networks with many repeated blocks (like ResNet or Transformer layers), the generated graphs become extremely large and difficult to navigate. This is particularly problematic for models with 12+ identical layers, where the visualization becomes cluttered with redundant information.
Describe the solution you'd like
- Add an option to collapse repeated sequential blocks in the visualization, with a notation indicating the number of repetitions. For example:

```python
# Example usage of the proposed API
model_graph = torchview.draw_graph(
    model,
    input_size,
    collapse_repeated=True,  # New parameter
    repeat_threshold=2,      # Minimum number of identical blocks to collapse
)
```

- The visualization would show repeated blocks as a single block with a multiplier (e.g., "Transformer Block × 12") instead of drawing each instance separately.
Describe alternatives you've considered
- Implementing a custom grouping mechanism that requires manual specification of which layers to collapse
- Adding a zoom/expand feature to hide/show repeated blocks on demand
- Creating a hierarchical view where repeated blocks are nested under a parent node
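The first alternative (manual grouping) could be sketched without any torchview internals: the user maps qualified module-name prefixes (as produced by `nn.Module.named_modules`) to display labels, and consecutive nodes sharing a label are merged. The function and parameter names below are illustrative, not an actual torchview API:

```python
# Sketch of the manual-grouping alternative: `groups` maps a qualified-name
# prefix to a display label; nodes under the same prefix are merged into one
# labeled node. All names here are hypothetical, for illustration only.
def group_nodes(node_names, groups):
    """Return display labels for nodes, merging consecutive grouped nodes."""
    out = []
    for name in node_names:
        # A node is grouped if its name equals a prefix or lies under it.
        label = next(
            (lbl for prefix, lbl in groups.items()
             if name == prefix or name.startswith(prefix + ".")),
            name,
        )
        if not out or out[-1] != label:
            out.append(label)
    return out

names = ["embed", "enc.0.attn", "enc.0.mlp", "enc.1.attn", "enc.1.mlp", "head"]
print(group_nodes(names, {"enc": "Encoder"}))
# → ["embed", "Encoder", "head"]
```

The downside noted above applies: the user has to know and spell out the module paths to collapse, rather than having repetition detected automatically.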
Screenshots / Text
None
Additional context
This feature would be particularly useful for:
- Transformer-based architectures (BERT, GPT, etc.)
- Deep ResNet variants
- Vision models with repeated stages
- Any architecture using `nn.ModuleList` with multiple identical blocks
The collapsing should be intelligent enough to detect structurally identical blocks while preserving any meaningful architectural variations between them.
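At its core, the detection step amounts to run-length encoding consecutive layers that share a structural signature. The sketch below assumes signatures are already computed (plain strings stand in for whatever structural hash torchview would derive from each node); `repeat_threshold` behaves as in the proposed API above:

```python
# Sketch: collapse runs of consecutive identical block signatures into
# (signature, count) pairs. Signatures are placeholder strings here; in a real
# implementation they would encode each module's structure (type, children,
# shapes), so that only truly identical blocks compare equal.
from itertools import groupby

def collapse_repeats(signatures, repeat_threshold=2):
    """Run-length encode consecutive identical signatures.

    Runs shorter than `repeat_threshold` are kept expanded so that
    one-off layers still appear as individual nodes.
    """
    collapsed = []
    for sig, run in groupby(signatures):
        count = len(list(run))
        if count >= repeat_threshold:
            collapsed.append((sig, count))       # rendered e.g. "sig × count"
        else:
            collapsed.extend((sig, 1) for _ in range(count))
    return collapsed

layers = ["Embedding"] + ["TransformerBlock"] * 12 + ["LayerNorm", "Linear"]
print(collapse_repeats(layers))
# → [("Embedding", 1), ("TransformerBlock", 12), ("LayerNorm", 1), ("Linear", 1)]
```

Because only *consecutive* identical signatures are merged, a variant block inserted mid-stack (say, a layer with a different hidden size) naturally splits the run, which preserves the meaningful variations mentioned above.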