I am computing gradients of a scalar potential with respect to the input points, with graph creation enabled so the gradients can be used in the loss:
```python
gradients = torch.autograd.grad(
    outputs=potential.sum(),
    inputs=flat_points,
    create_graph=create_graph,
    retain_graph=True,
    only_inputs=True,
    allow_unused=True,
)[0]
```
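
A minimal self-contained version of what I am doing looks roughly like this (the dimensions, the scalar potential head, and the variable names are simplified, not my exact code):

```python
import torch
import tinycudann as tcnn

# Fused MLP mapping 3-D points to a scalar potential (dims simplified).
network = tcnn.Network(
    n_input_dims=3,
    n_output_dims=1,
    network_config={
        "otype": "FullyFusedMLP",
        "activation": "ReLU",
        "output_activation": "None",
        "n_neurons": 64,
        "n_hidden_layers": 2,
    },
)

flat_points = torch.rand(1024, 3, device="cuda", requires_grad=True)
potential = network(flat_points)

# This is where it fails: create_graph=True asks for a double-backward
# pass through the fused CUDA kernel.
gradients = torch.autograd.grad(
    outputs=potential.sum(),
    inputs=flat_points,
    create_graph=True,
    retain_graph=True,
    only_inputs=True,
    allow_unused=True,
)[0]
```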
Is there a way around this? I am forced to use a traditional PyTorch MLP:
```python
self.block1 = nn.Sequential(
    nn.Linear(input_dim, hidden_dim), nn.ReLU(),
)
```
instead of the tiny-cuda-nn network:
```python
network_config = {
    "otype": "FullyFusedMLP",
    "activation": "ReLU",
    "output_activation": "Sigmoid",  # Final activation for RGB
    "n_neurons": hidden_dim,
    "n_hidden_layers": 8,  # Combined layers from all blocks
}
self.network = tcnn.Network(
    n_input_dims=total_input_dim,
    n_output_dims=3,  # RGB output
    network_config=network_config,
)
```
With tiny-cuda-nn I get an error when torch.autograd.grad tries to create the graph, but there is no error with nn.Sequential. Staying with nn.Sequential is not an option for me because I need to speed up training. Any help would be appreciated.
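
For comparison, the same second-order pattern with a plain PyTorch MLP (again with simplified dimensions) runs without error, since native autograd ops support double backward:

```python
import torch
import torch.nn as nn

# Same pattern with a plain PyTorch MLP (dims simplified): this works,
# because the native ops implement a differentiable backward pass.
mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 1)).cuda()

flat_points = torch.rand(1024, 3, device="cuda", requires_grad=True)
potential = mlp(flat_points)

gradients = torch.autograd.grad(
    outputs=potential.sum(),
    inputs=flat_points,
    create_graph=True,
)[0]

# Using the gradients in the loss requires backprop through the grad graph.
loss = gradients.pow(2).mean()
loss.backward()
```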