I am having trouble with a memory leak. I have a model involving layers subclassed from … The issue is, on … I don't see the pattern, any clue? I don't think I'm doing anything weird in the GNN layer, I just implemented:

```python
for gnn_layer in self.gnn_layers:
    h = gnn_layer(h, v, pos, r, graph_features, domain, edge_index, batch)
```

```python
import torch


class Simulator(torch.nn.Module):
    def __init__(self, model, graph_generator) -> None:
        super().__init__()
        self.model = model
        self.graph_generator = graph_generator

    def forward(self, graph_data: GraphData, step: int) -> Prediction:
        graph = self.graph_generator.build_graph(graph_data, step)
        return self.model(graph)

    def rollout(
        self,
        initial_data: GraphData,
        domain_sequence: torch.Tensor,
        time_sequence: torch.Tensor,
    ) -> Prediction:
        graph = self.graph_generator.build_graph(initial_data, 0)
        T = time_sequence.shape[0]
        predictions = []
        for t in range(T - 1):
            print(t)  # debug: log the current rollout step
            prediction = self.model(graph)
            predictions.append(prediction)
            graph = self.graph_generator.evolve(
                graph, prediction, time_sequence[t + 1], domain_sequence[t + 1]
            )
        return PredictionSequence(predictions)
```
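As an aside, a standalone toy sketch of the same store-every-step pattern (illustrative only, assuming a CUDA device; this is a stand-in, not the actual model) shows how the per-step allocation can be watched with `torch.cuda.memory_allocated()`:

```python
import torch

# Toy stand-in for the rollout: feed a model's output back in as the next
# input, store every prediction, and print the allocator state each step.
model = torch.nn.Linear(1024, 1024).cuda()
x = torch.randn(8, 1024, device="cuda")

predictions = []
for t in range(200):
    x = model(x)           # output of step t becomes the input of step t + 1
    predictions.append(x)  # stored together with its autograd history
    print(t, f"{torch.cuda.memory_allocated() / 2**20:.1f} MiB allocated")
```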
I think this is to be expected, right? Note that the memory is not freed here until you either detach computations from the computation graph (e.g., via `predictions.append(prediction.detach())`), or until you compute `loss.backward()`. Depending on `T`, this may result in OOMs.
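To make that concrete, here is a sketch of the posted `rollout` with the whole loop run under `torch.no_grad()`, assuming gradients through the stored predictions are not needed at all (an illustrative rewrite, not the author's actual implementation):

```python
def rollout(self, initial_data, domain_sequence, time_sequence):
    graph = self.graph_generator.build_graph(initial_data, 0)
    predictions = []
    with torch.no_grad():  # no autograd graph is recorded during the rollout
        for t in range(time_sequence.shape[0] - 1):
            prediction = self.model(graph)
            predictions.append(prediction)  # plain tensors, no graph retained
            graph = self.graph_generator.evolve(
                graph, prediction, time_sequence[t + 1], domain_sequence[t + 1]
            )
    return PredictionSequence(predictions)
```

If gradients are needed, detaching the stored predictions helps too, but note that if `graph_generator.evolve` keeps an autograd link from `prediction` into the next graph, that chain also has to be detached (or the loss computed and `backward()` called before too many steps accumulate) for memory to stay bounded.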