NNPOps.neighbors.getNeighborPairs does not seem to allow double backward #149
Closed
Hi,
This is more of a discussion than an issue specifically related to smee, but it appears that when using NNPOps.neighbors.getNeighborPairs together with torch.autograd.grad(..., create_graph=True), the resulting gradient tensor has grad_fn=None, which prevents second-order gradients from being computed. This is a blocker for anyone aiming to fit forces in periodic systems. The following code should reproduce the issue I have encountered:
```python
import NNPOps.neighbors
import torch

epsilon = torch.tensor([0.3], requires_grad=True, device="cuda")
sigma = torch.tensor([3.0], requires_grad=True, device="cuda")

# Two particles
coords = torch.tensor(
    [[0.0, 0.0, 0.0], [3.1, 0.0, 0.0]], requires_grad=True, device="cuda"
)
box = torch.eye(3, device="cuda") * 10.0

# Get distance via NNPOps
_, _, distances, _ = NNPOps.neighbors.getNeighborPairs(coords, 5.0, -1, box)

# Compute LJ energy
sig_r = sigma / distances
energy = 4.0 * epsilon * (sig_r**12 - sig_r**6)
print(f"Distance: {distances[0].item():.4f}")
print(f"LJ Energy: {energy.item():.4f}")

# Compute forces: F = -dE/dcoords
forces = -torch.autograd.grad(energy, coords, create_graph=True, retain_graph=True)[0]
print(f"Forces: {forces}")
print(f"forces.grad_fn: {forces.grad_fn}")

# This requires d(forces)/d(epsilon) and d(forces)/d(sigma)
force_loss = (forces**2).sum()
grad_epsilon = torch.autograd.grad(force_loss, epsilon, create_graph=True)[0]
print(f"d(force_loss)/d(epsilon): {grad_epsilon}")
```

I think this has already been mentioned in relation to #143 and #141, but since getNeighborPairs does not seem to allow double backward, would it make sense to rewrite it using PyTorch operations to enable higher-order gradients?
Many thanks in advance for any ideas on this.
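
As a proof of concept, here is a minimal sketch of what such a pure-PyTorch rewrite could look like. It is not the NNPOps implementation: `neighbor_pairs_torch` is a hypothetical name, it is a brute-force O(N²) all-pairs search rather than a cell-list algorithm, and it assumes an orthorhombic (diagonal) box for the minimum-image convention. Because every operation is a differentiable PyTorch op, the double-backward pass in the reproduction above goes through:

```python
import torch


def neighbor_pairs_torch(coords, cutoff, box=None):
    """Brute-force neighbor search built entirely from differentiable ops.

    Hypothetical sketch: assumes an orthorhombic box (diagonal box matrix)
    for the minimum-image convention. Returns (pairs, deltas, distances)
    for all pairs closer than `cutoff`.
    """
    n = coords.shape[0]
    # Upper-triangular index pairs (i < j), i.e. each pair counted once.
    i, j = torch.triu_indices(n, n, offset=1, device=coords.device)
    deltas = coords[i] - coords[j]
    if box is not None:
        # Minimum-image wrap; torch.round has zero gradient, so the
        # gradient flows through the unwrapped deltas as expected.
        lengths = torch.diagonal(box)
        deltas = deltas - lengths * torch.round(deltas / lengths)
    distances = torch.linalg.norm(deltas, dim=-1)
    mask = distances < cutoff
    return torch.stack([i[mask], j[mask]]), deltas[mask], distances[mask]


# Same two-particle LJ setup as the reproduction above (CPU for brevity).
epsilon = torch.tensor([0.3], requires_grad=True)
sigma = torch.tensor([3.0], requires_grad=True)
coords = torch.tensor([[0.0, 0.0, 0.0], [3.1, 0.0, 0.0]], requires_grad=True)
box = torch.eye(3) * 10.0

_, _, distances = neighbor_pairs_torch(coords, 5.0, box)
sig_r = sigma / distances
energy = (4.0 * epsilon * (sig_r**12 - sig_r**6)).sum()

# First backward: forces now carry a grad_fn because the whole graph
# is made of differentiable PyTorch ops.
forces = -torch.autograd.grad(energy, coords, create_graph=True)[0]
print(forces.grad_fn)  # a NegBackward0 node, not None

# Second backward: d(force_loss)/d(epsilon) no longer fails.
force_loss = (forces**2).sum()
grad_epsilon = torch.autograd.grad(force_loss, epsilon)[0]
print(grad_epsilon)
```

A production version would of course need a cell list (or at least a cutoff-aware pruning step) to avoid the O(N²) memory cost, plus support for triclinic boxes, but it shows that the differentiability problem itself is solvable in pure PyTorch.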