Replies: 2 comments 1 reply
Hi @rgknox. Unfortunately, we don't currently have a way to do this. It seems like it'd be a useful thing for us to have, though! I'll open an issue from this discussion and we'll aim to get something implemented soon.
Hi @rgknox, I've implemented something for this in #488, adding a subroutine called [...]. With the proposed changes, running the [...] indicates that there is one parameter associated with the [...]. Is this in line with what you had in mind?
Hello. I've generated a trained neural network model in PyTorch and loaded it with FTorch with the intention of performing inference. I'd like to verify that I am writing and loading what I think I am by printing the weights on both sides of the process. In my case it is just a couple of small linear layers and a ReLU.
In PyTorch, I can loop through the items in the state_dict like so:
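(The original snippet isn't visible in this thread; the following is a minimal sketch of that kind of loop, where the `model` definition and layer sizes are placeholders rather than the actual network.)

```python
import torch
import torch.nn as nn

# Stand-in model: two small linear layers with a ReLU in between
# (placeholder sizes, not the actual network).
model = nn.Sequential(
    nn.Linear(4, 8),
    nn.ReLU(),
    nn.Linear(8, 1),
)

# Loop over the state_dict and print each parameter tensor by name.
for name, param in model.state_dict().items():
    print(name, param.shape)
    print(param)
```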
Are there wrapper functions on the FTorch side that can help with this? I see there is a wrapper for printing out the contents of a tensor, but I'm not sure how to loop through the contents of the model and access those tensors. Thank you for your help!