SDFGrid for optimization #194

@jsong2333333

Description

Hi, first of all, thank you very much for developing such a comprehensive package for differentiable simulation.

Recently I have been trying to use the phiflow package to optimize an obstacle object through the simulation, adapting the notebook from pbdl. The adapted code successfully optimizes the obstacle as a 2D Geometry. However, my goal is to optimize a 3D SDFGrid, and I have run into different gradient-computation problems with different phiflow versions.

My most recent attempt follows another GitHub issue on mesh optimization. I updated to the latest developer version of the package and tried to follow the instructions you posted there. When verifying the gradient with the following code:

```python
sdf = np.load(sdf_dir)
obstacle = geom.SDFGrid(tensor(sdf, spatial('x,y,z')), bounds=Box(x=(-1, 1), y=(-1, 1), z=(-1, 1)))
obstacle.variable_attrs = ('values', 'geometry')

def loss_function(sdf: Geometry):
    return field.l2_loss(sdf)

d_sdf = math.gradient(loss_function, 'sdf')(obstacle)
```

I got the following error, which I believe comes from PyTorch:

```
File "/pscratch/sd/j/jsong/diffphys_sdf_3d.py", line 23, in <module>
    d_sdf = math.gradient(loss_function, 'sdf')(obstacle)
  File "/pscratch/sd/j/jsong/conda_env/phiflow/lib/python3.10/site-packages/phiml/math/_functional.py", line 639, in __call__
    native_result = self.traces[key](*natives)
  File "/pscratch/sd/j/jsong/conda_env/phiflow/lib/python3.10/site-packages/phiml/backend/torch/_torch_backend.py", line 957, in eval_grad
    grads = torch.autograd.grad(loss, wrt_args)  # grad() cannot be called during jit trace
  File "/pscratch/sd/j/jsong/conda_env/phiflow/lib/python3.10/site-packages/torch/autograd/__init__.py", line 276, in grad
    return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: One of the differentiated Tensors appears to not have been used in the graph. Set allow_unused=True if this is the desired behavior.
```
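As far as I can tell, this is PyTorch's generic error for a tensor that is passed to `torch.autograd.grad` but never actually enters the loss graph. A minimal plain-PyTorch reproduction (independent of phiflow; all names below are purely illustrative):

```python
import torch

a = torch.ones(3, requires_grad=True)
b = torch.ones(3, requires_grad=True)  # never used in the loss below

loss = (a ** 2).sum()
try:
    # b does not appear in the graph of `loss`, so this raises the same error.
    torch.autograd.grad(loss, (a, b))
    raised = False
except RuntimeError as e:
    raised = 'allow_unused' in str(e)

# Rebuild the graph (the failed call may have consumed it), then ask again
# with allow_unused=True: the gradient for the unused tensor is simply None.
loss = (a ** 2).sum()
grads = torch.autograd.grad(loss, (a, b), allow_unused=True)
```

So in my case it looks like the SDF values tensor is reaching the backend as a differentiated input but is not actually connected to the loss computation.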

Therefore, I'm wondering whether there are any examples of optimizing an SDFGrid, and if that is not possible, which class you would suggest converting the SDFGrid to so that it can be optimized within this differentiable framework.

Another question: when running the simulation with jit_compile, GPU memory keeps growing because a different obstacle object is used for every simulation. The newly assigned obstacles are likely causing the simulation function to recompile each time. Is there a recommended way to handle this, since the 3D simulation takes much longer without jit_compile? I tried clearing torch's caches and running gc.collect(), but neither reduced the GPU memory consumption.
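My understanding of the mechanism (a toy sketch in plain Python, not phiml's actual cache; all names are illustrative) is that a trace cache keyed on object identity grows with every new obstacle, while one keyed on shape/dtype can be reused across obstacles of the same shape:

```python
def make_jit(key_fn):
    """Toy 'jit' decorator: caches one trace per distinct key."""
    def jit(fn):
        cache = {}
        def wrapper(x):
            k = key_fn(x)
            if k not in cache:
                cache[k] = fn  # stand-in for an expensive trace/compile step
            return cache[k](x)
        wrapper.cache = cache
        return wrapper
    return jit

jit_by_identity = make_jit(id)   # a new entry for every new obstacle object
jit_by_shape = make_jit(len)     # one entry per shape, reused across objects

@jit_by_identity
def step_identity(obstacle):
    return sum(obstacle)

@jit_by_shape
def step_shape(obstacle):
    return sum(obstacle)

# Five distinct obstacle objects with identical shape:
obstacles = [[float(i)] * 8 for i in range(5)]
for o in obstacles:
    step_identity(o)
    step_shape(o)
```

If that intuition is right, the memory growth would stop if the obstacle could be passed in a way that is traced by value/shape rather than captured as a new constant each time.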

Sorry for posting such a long message. Thank you!
