Thank you for open-sourcing your code. I have several questions about it:
-
I ran the pre-trained model you provided, “diffcast_phydnet_sevir128.pt”, on the SEVIR dataset and got an mCSI of 0.3066, which is much higher than the result reported in the paper (0.2757). Is there any difference between this pre-trained model and the one used in the paper? Also, what does the “128” in the model name mean? Does it indicate a larger batch size of 128 during training?
-
I trained SimVP under the default settings, and its mCSI is also much higher than the paper's result (0.3167 vs. 0.2662). SimVP even outperforms the provided model “diffcast_phydnet_sevir128.pt”. Is there a reason for this?
-
I tried to re-implement your training process for DiffCast + PhyDNet, but I only achieved 0.2660, which is lower than the paper's 0.2757. Is there anything I should pay attention to?
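For reference, here is how I compute mCSI when comparing against the reported numbers, in case a metric mismatch explains the gaps above. This is my own minimal sketch, not your repo's evaluation code: the threshold list (16, 74, 133, 160, 181, 219 on the 0–255 VIL scale) is the set commonly used for SEVIR, and I'm assuming your evaluation does the same thing.

```python
import numpy as np

# Assumed SEVIR CSI thresholds (0-255 VIL scale); not taken from the repo.
THRESHOLDS = [16, 74, 133, 160, 181, 219]

def csi(pred, target, thr):
    """Critical Success Index at one threshold:
    CSI = hits / (hits + misses + false_alarms)."""
    p = pred >= thr
    t = target >= thr
    hits = np.logical_and(p, t).sum()
    misses = np.logical_and(~p, t).sum()
    false_alarms = np.logical_and(p, ~t).sum()
    denom = hits + misses + false_alarms
    # Convention when no event is predicted or observed: score 1.0.
    return hits / denom if denom > 0 else 1.0

def mean_csi(pred, target, thresholds=THRESHOLDS):
    """mCSI = CSI averaged over the threshold list."""
    return float(np.mean([csi(pred, target, thr) for thr in thresholds]))
```

If your evaluation differs from this (other thresholds, pooling, or averaging over frames before thresholds), that alone could shift mCSI by a few points.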