Dear JAX developers,

I am trying to better understand the performance of JAX and its underlying just-in-time compilation architecture, but am puzzled about how to get access to this information. For example, it would be helpful to distinguish how much time is spent tracing in Python, doing HLO optimizations within XLA, and in the downstream LLVM->PTX and PTX->SASS compilation steps.

Surely these are useful metrics to JAX developers as well, but I could not find any information on how to access them.

Searching online brings me to a PyTorch/XLA troubleshooting guide with promising-looking interfaces like

```python
import torch_xla.debug.metrics as met
print(met.metrics_report())
```

This page also mentions an `XLA_METRICS_FILE` environment variable, among others, that can be used to extract metrics information; however, all of these seem to be 100% PyTorch-specific. Any suggestions would be greatly appreciated!
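For reference, the closest thing I have found so far is a rough sketch using JAX's ahead-of-time (AOT) API, which at least separates Python tracing/lowering from the XLA compilation step (this only gives wall-clock timings, not the fine-grained per-pass metrics I am after; the function `f` and array shape below are just placeholders):

```python
# Sketch: split Python tracing/lowering time from XLA compilation time
# via jax.jit(...).lower(...).compile(). Wall-clock only; does not break
# XLA compilation down further into HLO / LLVM->PTX / PTX->SASS phases.
import time
import jax
import jax.numpy as jnp

def f(x):
    # arbitrary example computation
    return (jnp.sin(x) ** 2 + jnp.cos(x) ** 2).sum()

x = jnp.ones((100, 100))

t0 = time.perf_counter()
lowered = jax.jit(f).lower(x)   # traces the Python function, emits HLO
t1 = time.perf_counter()
compiled = lowered.compile()    # runs XLA optimization + backend codegen
t2 = time.perf_counter()

print(f"trace + lower: {t1 - t0:.4f} s")
print(f"XLA compile:   {t2 - t1:.4f} s")
```

This at least confirms where the bulk of the one-time cost goes, but it says nothing about the individual compiler phases.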
Thanks,
Wenzel