Description
When calculating the Gaussian walk KL-divergence loss for alpha and eta, I think we could do it more efficiently.
Currently, we approximate the Gaussian walk loss like this:
- t : mu_t, logvar_t
- t-1 : mu_t-1, logvar_t-1
- delta: Gaussian walk variance parameter

kl_divergence(mu_t, logvar_t, sample from N(mu_t-1, logvar_t-1), log(delta))
The second Gaussian seems to be equivalent (after marginalizing over the sample) to:

N(sample from N(mu_t-1, var_t-1), delta) ---> N(mu_t-1, var_t-1) + N(0, delta) ---> N(mu_t-1, var_t-1 + delta)

So we don't really need to approximate the KL-divergence via sampling; in this case we should be able to compute it in closed form between the two Gaussians directly.
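A minimal sketch of the closed-form version (function names here are illustrative, not the repo's actual API):

```python
import torch

def kl_normal(mu_q, logvar_q, mu_p, logvar_p):
    # Closed-form KL( N(mu_q, var_q) || N(mu_p, var_p) ), elementwise:
    # 0.5 * ( log(var_p/var_q) + (var_q + (mu_q - mu_p)^2) / var_p - 1 )
    var_q = logvar_q.exp()
    var_p = logvar_p.exp()
    return 0.5 * (logvar_p - logvar_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

def gaussian_walk_kl(mu_t, logvar_t, mu_prev, logvar_prev, delta):
    # Prior at step t is N(mu_{t-1}, var_{t-1} + delta): no sampling needed.
    logvar_p = torch.log(logvar_prev.exp() + delta)
    return kl_normal(mu_t, logvar_t, mu_prev, logvar_p)
```

This is deterministic given the variational parameters, so it also removes the Monte Carlo noise from the loss.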
This is especially important for the continuous-time model, because there we scale delta by the time difference. When the time difference is close to zero, any deviation between mu_t and the random sample drawn from N(mu_t-1, logvar_t-1) leads to a very large loss (-inf if the time difference is exactly zero).
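To illustrate the blow-up with made-up numbers (delta and the time difference dt are placeholders, not the model's actual values):

```python
import torch

torch.manual_seed(0)
# Both q_t and q_{t-1} are standard normal in this toy example.
mu_prev, logvar_prev = torch.tensor(0.0), torch.tensor(0.0)
mu_t, logvar_t = torch.tensor(0.0), torch.tensor(0.0)

dt = 1e-6
delta = 0.005 * dt  # delta scaled by a near-zero time difference

def kl(mu_q, logvar_q, mu_p, logvar_p):
    return 0.5 * (logvar_p - logvar_q
                  + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp() - 1.0)

# Current approach: prior centered on a random sample, variance delta.
sample = mu_prev + torch.randn(()) * (0.5 * logvar_prev).exp()
kl_sampled = kl(mu_t, logvar_t, sample, torch.log(torch.tensor(delta)))

# Closed-form approach: prior N(mu_{t-1}, var_{t-1} + delta).
kl_closed = kl(mu_t, logvar_t, mu_prev, torch.log(logvar_prev.exp() + delta))

# kl_sampled explodes as delta -> 0, while kl_closed stays near zero.
```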
Relevant code:

```python
def get_alpha(self):  ## mean field
    alphas = torch.zeros(self.num_windows, self.num_topics, self.rho_size).to(self.device)
    kl_alpha = []

    alphas[0] = self.reparameterize(self.mu_q_alpha[:, 0, :], self.logsigma_q_alpha[:, 0, :])

    p_mu_0 = torch.zeros(self.num_topics, self.rho_size).to(self.device)
    logsigma_p_0 = torch.zeros(self.num_topics, self.rho_size).to(self.device)
    kl_0 = self.get_kl(self.mu_q_alpha[:, 0, :], self.logsigma_q_alpha[:, 0, :], p_mu_0, logsigma_p_0)
    kl_alpha.append(kl_0)

    for t in range(1, self.num_windows):
        alphas[t] = self.reparameterize(self.mu_q_alpha[:, t, :], self.logsigma_q_alpha[:, t, :])
        p_mu_t = alphas[t - 1]
        logsigma_p_t = torch.log(self.delta * torch.ones(self.num_topics, self.rho_size).to(self.device))
        kl_t = self.get_kl(self.mu_q_alpha[:, t, :], self.logsigma_q_alpha[:, t, :], p_mu_t, logsigma_p_t)
        kl_alpha.append(kl_t)

    kl_alpha = torch.stack(kl_alpha).sum()
    return alphas, kl_alpha.sum()
```
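For comparison, here is a sketch of how get_alpha could use the closed-form prior instead. This is a standalone function with hypothetical shapes and a local reparameterize helper, not the model's actual method:

```python
import torch

def kl_normal(mu_q, logvar_q, mu_p, logvar_p):
    # Closed-form KL( N(mu_q, var_q) || N(mu_p, var_p) ), elementwise.
    return 0.5 * (logvar_p - logvar_q
                  + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp() - 1.0)

def get_alpha_closed_form(mu_q_alpha, logsigma_q_alpha, delta):
    """mu_q_alpha, logsigma_q_alpha: (num_topics, num_windows, rho_size)."""
    num_topics, num_windows, rho_size = mu_q_alpha.shape
    alphas = torch.zeros(num_windows, num_topics, rho_size)

    def reparameterize(mu, logvar):
        return mu + torch.randn_like(mu) * (0.5 * logvar).exp()

    # t = 0: standard-normal prior, unchanged from the current code.
    alphas[0] = reparameterize(mu_q_alpha[:, 0, :], logsigma_q_alpha[:, 0, :])
    kl_alpha = kl_normal(mu_q_alpha[:, 0, :], logsigma_q_alpha[:, 0, :],
                         torch.zeros(num_topics, rho_size),
                         torch.zeros(num_topics, rho_size)).sum()

    for t in range(1, num_windows):
        alphas[t] = reparameterize(mu_q_alpha[:, t, :], logsigma_q_alpha[:, t, :])
        # Prior N(mu_{t-1}, var_{t-1} + delta): no sample of alpha_{t-1} needed.
        logvar_p = torch.log(logsigma_q_alpha[:, t - 1, :].exp() + delta)
        kl_alpha = kl_alpha + kl_normal(mu_q_alpha[:, t, :], logsigma_q_alpha[:, t, :],
                                        mu_q_alpha[:, t - 1, :], logvar_p).sum()

    return alphas, kl_alpha
```

The samples alphas[t] are still drawn for the decoder, but the KL term now depends only on the variational parameters, so it stays finite even when delta (scaled by the time difference) approaches zero.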