Passing metadata through prediction step #17628
Unanswered
machur
asked this question in
Lightning Trainer API: Trainer, LightningModule, LightningDataModule
Replies: 0 comments
What is the best way of passing sample-related metadata through the prediction step? Is there a mechanism in PL to propagate contextual information that is not used in the forward pass itself? Ideally, I would like to use BasePredictionWriter (added as a dynamic callback) to save the output at the end, but to do so I need to match the output predictions with the corresponding input data. Is there any way other than passing the dataset index through the pipeline as a tensor and matching it up afterwards?
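For concreteness, here is a minimal sketch of the kind of thing I have in mind. It assumes the default collate_fn (which collects strings into a per-batch list instead of stacking them into a tensor), and the loading call and model are just placeholders:

```python
import torch
import pytorch_lightning as pl
from torch.utils.data import Dataset


class FileDataset(Dataset):
    """Yields the input tensor together with its source path as metadata."""

    def __init__(self, paths):
        self.paths = paths

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        path = self.paths[idx]
        x = torch.load(path)  # placeholder for whatever actually loads the sample
        # With the default collate_fn, the string paths are gathered into a
        # per-batch list, so they ride along without entering the forward pass.
        return {"x": x, "path": path}


class MyModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Linear(16, 1)  # placeholder model

    def forward(self, x):
        return self.net(x)

    def predict_step(self, batch, batch_idx, dataloader_idx=0):
        # Only the tensor part goes through the model; the metadata stays in `batch`.
        return self(batch["x"])
```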
I have the following scenario: I'm generating predictions from different input files and would like to save each prediction using its input path plus a suffix (and some input IDs). I'm wondering how to achieve this while keeping the data and training layers properly separated.
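A BasePredictionWriter subclass could then recover the paths directly from the raw batch, since write_on_batch_end receives both the prediction and the batch; the suffix and save format below are just placeholders:

```python
import os
import torch
from pytorch_lightning.callbacks import BasePredictionWriter


class PathAwarePredictionWriter(BasePredictionWriter):
    """Saves each prediction next to its input file, using the path metadata."""

    def __init__(self, suffix=".pred.pt"):
        super().__init__(write_interval="batch")
        self.suffix = suffix

    def write_on_batch_end(
        self, trainer, pl_module, prediction, batch_indices, batch, batch_idx, dataloader_idx
    ):
        # `batch` is the same dict produced by the DataLoader, so the string
        # paths are still available here even though the model never used them.
        for path, pred in zip(batch["path"], prediction):
            torch.save(pred, os.path.splitext(path)[0] + self.suffix)
```

The writer can be attached dynamically, e.g. `Trainer(callbacks=[PathAwarePredictionWriter()])`, but this still couples the callback to the batch layout of one particular dataset, which is exactly the coupling I'd like to avoid if PL offers something cleaner.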