This repository was archived by the owner on Dec 14, 2025. It is now read-only.

Conversation

@lamyiowce (Contributor) commented:

Implementing module replacements:

  • A torch.nn.Module becomes a placeholder torch function, then a placeholder op in the ONNX graph, which is finally expanded into a replacement implementation.
  • The main replacement mechanism lives in daceml/onnx/nodes/replacement.py and daceml/pytorch/module_replacement.py.
  • Implementations of the GAT and GCN layers are in daceml/onnx/nodes/replacement_entries.py, along with tests comparing them against the PyTorch Geometric implementations.
  • Simple benchmarking code is in examples.
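The replacement flow described above can be sketched roughly as follows. Note that `MODULE_REGISTRY`, `register_replacement`, and `placeholder_op_for` are illustrative names for this sketch, not the actual daceml API:

```python
# Hypothetical sketch of the module-replacement registry idea from the PR
# description. The names here are illustrative, not the real daceml API.

MODULE_REGISTRY = {}

def register_replacement(module_class_name, placeholder_op):
    """Map a torch.nn.Module class name to a placeholder ONNX op name."""
    MODULE_REGISTRY[module_class_name] = placeholder_op

def placeholder_op_for(module_class_name):
    """Look up the placeholder op; unregistered modules map to None."""
    return MODULE_REGISTRY.get(module_class_name)

# Register the GNN layers the way the PR describes.
register_replacement("GCNConv", "daceml.onnx.GCNConv")
register_replacement("GATConv", "daceml.onnx.GATConv")

print(placeholder_op_for("GCNConv"))  # daceml.onnx.GCNConv
print(placeholder_op_for("Linear"))   # None (not a replaced module)
```

During ONNX import, each placeholder op would then be expanded into its registered replacement implementation.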

…and an SDFG with some dummy logic is generated.
# Conflicts:
#	daceml/onnx/nodes/replacement.py
#	daceml/onnx/nodes/replacement_entries.py
#	daceml/onnx/onnx_importer.py
#	daceml/onnx/op_implementations/replacement_implementations.py
#	tests/module_replacements/test_gcnconv.py

codecov bot commented Aug 14, 2022

Codecov Report

Merging #117 (dcfdc08) into master (ecde8b3) will decrease coverage by 51.84%.
The diff coverage is 9.31%.

@@             Coverage Diff             @@
##           master     #117       +/-   ##
===========================================
- Coverage   62.69%   10.85%   -51.84%     
===========================================
  Files          65       68        +3     
  Lines        7239     7561      +322     
===========================================
- Hits         4538      820     -3718     
- Misses       2701     6741     +4040     
Impacted Files Coverage Δ
daceml/onnx/converters.py 24.77% <0.00%> (-71.42%) ⬇️
daceml/onnx/nodes/__init__.py 0.00% <0.00%> (-100.00%) ⬇️
daceml/onnx/nodes/replacement_entries.py 0.00% <0.00%> (ø)
daceml/onnx/op_implementations/__init__.py 0.00% <0.00%> (-100.00%) ⬇️
...ml/onnx/op_implementations/pure_implementations.py 47.53% <ø> (-26.43%) ⬇️
.../op_implementations/replacement_implementations.py 0.00% <0.00%> (ø)
daceml/torch/module.py 0.00% <0.00%> (-89.96%) ⬇️
daceml/onnx/nodes/replacement.py 0.82% <0.82%> (ø)
daceml/onnx/onnx_importer.py 54.05% <14.71%> (-40.72%) ⬇️
daceml/onnx/shape_inference/shape_inference.py 62.22% <59.52%> (-37.78%) ⬇️

... and 58 files with indirect coverage changes


@orausch (Collaborator) left a comment:

Thanks for the PR, looks very cool! I will need some time to understand this and try it out.

Two things I noticed at first glance:

  • You should add pytorch_geometric to the setup.py to allow CI to run.
  • I see that you've made some changes to symbolic_shape_infer.py. In general, it would be good to avoid touching this file, since it is vendored verbatim from https://github.com/microsoft/onnxruntime/blob/main/onnxruntime/python/tools/symbolic_shape_infer.py. At first glance, it looks like we could get your functionality by simply adding your replacements to self.dispatcher_ after instantiation. I also see that the MSFT folks have been doing something similar with their aten replacements for their ORTModule wrapper. Maybe the new version of symbolic_shape_infer has something similar to what you need?
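The dispatcher_ suggestion can be sketched like this. The class below is a minimal stand-in for illustration only, not the real onnxruntime SymbolicShapeInference implementation:

```python
# Toy stand-in illustrating the suggestion: extend the per-instance
# dispatch table after construction instead of editing the vendored
# symbolic_shape_infer.py. Not the real onnxruntime class.

class SymbolicShapeInference:
    def __init__(self):
        # The vendored class keeps an op-type -> handler mapping like this.
        self.dispatcher_ = {"MatMul": lambda node: "matmul shapes inferred"}

    def infer(self, op_type, node=None):
        return self.dispatcher_[op_type](node)

def infer_gcnconv_shapes(node):
    # Custom shape inference for the replacement GCNConv op.
    return "gcnconv shapes inferred"

inference = SymbolicShapeInference()
# Inject the replacement handler after instantiation, leaving the
# vendored source file untouched.
inference.dispatcher_["GCNConv"] = infer_gcnconv_shapes
print(inference.infer("GCNConv"))  # gcnconv shapes inferred
```

Since dispatcher_ is an instance attribute, patching it this way keeps the vendored file byte-identical to upstream, which makes future re-vendoring trivial.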

setup.py (outdated diff)
      'coverage', 'pytest', 'yapf==0.31', 'pytest-cov', 'transformers',
      'pytest-xdist', 'torchvision', 'tabulate', 'efficientnet_pytorch',
-     'pytest-timeout'
+     'pytest-timeout', 'pytorch_geometric'
@lamyiowce (Contributor, Author) replied:
Changed it to the correct dependency, but I'm not sure how to test it correctly.

@lamyiowce lamyiowce requested a review from tbennun January 24, 2023 15:38
@tbennun (Contributor) left a comment:

Looks much better! Some comments remain.

if node[0].label in [
        ...  # list of map labels (elided in the diff)
] and len(node[0].map.params):
    print("Changing schedule to TB dynamic: ", node[0].map)
    node[0].schedule = dace.dtypes.ScheduleType.GPU_ThreadBlock_Dynamic
A contributor commented:

Still not a big fan of this. It probably needs an explanatory comment.

@lamyiowce (Contributor, Author) replied:

I restructured it a bit: now you provide an external exclude list, which is specified in the benchmark.py file. All exclusions have comments describing why they're excluded. This way, if the loop naming changes due to implementation changes:

  1. the code will not work, because erroneous loops will be tb-scheduled;
  2. there will be a warning saying that some loops you wanted to exclude from tb scheduling do not exist in the graph.

I think that's much better than the previous behavior, which was silently not changing the schedule of the loops.
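The exclude-list behavior described above might look roughly like this; the function and names are a hypothetical sketch, not the code in the PR:

```python
import warnings

def tb_schedulable_labels(map_labels, exclude):
    """Return the map labels to switch to TB-dynamic scheduling,
    warning about exclusions that no longer match any loop in the graph.

    Hypothetical sketch of the exclude-list idea; not the PR's actual code.
    """
    missing = sorted(set(exclude) - set(map_labels))
    if missing:
        # Point 2 above: stale exclusions are reported loudly
        # instead of being silently ignored.
        warnings.warn(
            f"Loops excluded from TB scheduling not found in graph: {missing}")
    # Everything not excluded gets the TB-dynamic schedule (point 1:
    # an erroneously-named loop would now be scheduled, and so fail fast).
    return [label for label in map_labels if label not in exclude]

# Usage: loops in the exclude list are skipped; stale entries warn.
labels = tb_schedulable_labels(
    ["gcn_aggregate", "gcn_linear"], exclude=["gcn_linear"])
print(labels)  # ['gcn_aggregate']
```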

      'coverage', 'pytest', 'yapf==0.31', 'pytest-cov', 'transformers',
      'pytest-xdist', 'torchvision', 'tabulate', 'efficientnet_pytorch',
-     'pytest-timeout', 'pytorch_geometric'
+     'pytest-timeout', 'torch-geometric'
A contributor commented:

This is not the right way to install PyG: https://github.com/pyg-team/pytorch_geometric#pytorch-113

A contributor commented:

@lamyiowce If there is something you can't put in the setup.py, you can always add it to the Makefile, which is called by CI: https://github.com/spcl/daceml/blob/master/Makefile#L36


3 participants