Hi,
Thanks for your amazing work on implementing this algorithm! I am also working on it for a course project -- the implementation is indeed quite tricky given some typos in the paper's proposed Algorithm 1. I have a few questions about your RDMC implementation (the `#@title RDMC sampler` cell).
Line 28: `score = score_model(x, batch_time_step)`
From my understanding, this score is ∇ log p_t = −∇f_t, but the ULA in the proposed Algorithm 2 only requires ∇f_* of the target density, not the score ∇f_t of the intermediate diffused distribution; that is, ∇f_* should be independent of t and depend only on the desired density (the MNIST dataset). The ULA inner loop exists precisely to estimate the score of the intermediate diffused distribution, which is the convolution of the target density p(x) ∝ exp(−f_*(x)) with a Gaussian. If one can already estimate this intermediate score with a neural network, the ULA iteration is redundant.
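To illustrate the distinction, here is a minimal sketch of what I mean (all names are hypothetical, and a standard Gaussian stands in for the target potential f_*): the ULA drift for the tilted conditional p(x0 | x_t) ∝ exp(−f_*(x0) − ||x_t − α·x0||²/(2σ²)) uses only the fixed gradient ∇f_* plus a Gaussian term, with no time-dependent score network anywhere.

```python
import numpy as np

def grad_f_star(x):
    # Hypothetical stand-in target: f_*(x) = ||x||^2 / 2, so grad f_*(x) = x.
    # In the real setting this is the gradient of the *target* potential,
    # independent of the diffusion time t.
    return x

def ula_drift(x0, x_t, alpha, sigma2):
    # Drift of ULA targeting p(x0 | x_t) ∝ exp(-f_*(x0) - ||x_t - alpha*x0||^2 / (2*sigma2)).
    # Only grad f_* and the Gaussian tilt appear -- no learned score model.
    return -grad_f_star(x0) + alpha * (x_t - alpha * x0) / sigma2

rng = np.random.default_rng(0)
x_t = np.array([1.0, -0.5])        # current reverse-SDE state, held fixed here
alpha = 0.8                        # toy OU coefficient e^{-t}
sigma2 = 1 - alpha**2
x0 = x_t.copy()
h = 0.01                           # ULA step size
for _ in range(1000):
    x0 = x0 + h * ula_drift(x0, x_t, alpha, sigma2) \
            + np.sqrt(2 * h) * rng.standard_normal(2)
```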

Line 32: `x = x + step_size * (score + e_t) + noise`
I suppose this is a ULA iteration for a ULA particle used to form a Monte Carlo estimate of v_k. Could you check whether this x is conflated with the reverse-SDE particle, which is used to sample from the target density? My thought is that we need to sample n different particles via ULA MCMC so that they follow the appropriate conditional distribution, then compute their expectation to approximate v_k, and use this v_k to update x_t in the reverse-time SDE (the outer loop); i.e., x_t should not be updated in the inner loop.
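Concretely, the structure I have in mind looks like the following sketch (again with hypothetical names, a toy OU schedule, and a Gaussian stand-in for f_*; not a claim about the original code): the inner loop runs n independent ULA chains with x_t held fixed and averages them into v_k, and only the outer loop updates x_t.

```python
import numpy as np

def grad_f_star(x):
    # Hypothetical target potential gradient: f_* = ||x||^2 / 2, grad f_* = x.
    return x

def estimate_v_k(x_t, alpha, sigma2, n_particles=64, n_ula=200, h=0.01, rng=None):
    # Inner loop: n independent ULA chains targeting p(x0 | x_t),
    # with x_t held FIXED throughout; average the particles to approximate v_k.
    rng = rng or np.random.default_rng(0)
    x0 = np.tile(x_t, (n_particles, 1))
    for _ in range(n_ula):
        drift = -grad_f_star(x0) + alpha * (x_t - alpha * x0) / sigma2
        x0 = x0 + h * drift + np.sqrt(2 * h) * rng.standard_normal(x0.shape)
    return x0.mean(axis=0)

# Outer loop: reverse-time SDE update of x_t using v_k; x_t changes ONLY here.
rng = np.random.default_rng(1)
x_t = rng.standard_normal(2)
dt = 0.05
for k in range(10):
    alpha = np.exp(-(1.0 - k * dt))           # toy OU schedule e^{-t}
    sigma2 = 1 - alpha**2
    v_k = estimate_v_k(x_t, alpha, sigma2, rng=rng)
    # Tweedie-style identity: score of the diffused density from the posterior mean.
    score_est = (alpha * v_k - x_t) / sigma2
    x_t = x_t + dt * (x_t + 2 * score_est) + np.sqrt(2 * dt) * rng.standard_normal(2)
```

The key design point is that `estimate_v_k` never mutates the outer-loop state, so the reverse-SDE particle and the ULA particles cannot get mixed up.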
I understand this was originally a course project and you might have moved on to other work, but I just wanted to thank you for making the implementation public -- it’s been a great resource to learn from!
Tianfeng