Hi,
I am trying to reproduce Table 2 results for ImageWoof and notice large accuracy gaps compared to the paper. For example, with ConvNet and IPC 10, I get ~8.2% instead of the reported 40.6%.
I suspect a mismatch in the class index → WordNet ID mapping between my dataset preparation and the pretrained observer models. I am using imagewoof2.tgz from fastai and remapping to folders 00000–00009 using a list of ImageWoof WordNet IDs. Without the exact mapping you use, it is unclear how to align folder indices with the pretrained model’s class order.
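For reference, here is a minimal sketch of the remapping I am currently doing. It assumes the ten ImageWoof WordNet IDs should be assigned to folders 00000–00009 in lexicographic order, which is exactly the assumption I am asking you to confirm or correct; the IDs below are the ones shipped in fastai's imagewoof2.tgz:

```python
import shutil
from pathlib import Path

# ImageWoof WordNet IDs (folder names inside fastai's imagewoof2.tgz),
# sorted lexicographically -- my current guess at the intended class order.
IMAGEWOOF_WNIDS = sorted([
    "n02086240",  # Shih-Tzu
    "n02087394",  # Rhodesian ridgeback
    "n02088364",  # Beagle
    "n02089973",  # English foxhound
    "n02093754",  # Border terrier
    "n02096294",  # Australian terrier
    "n02099601",  # Golden retriever
    "n02105641",  # Old English sheepdog
    "n02111889",  # Samoyed
    "n02115641",  # Dingo
])

def remap(src_root: str, dst_root: str) -> dict:
    """Copy each wnid folder to a zero-padded index folder (00000-00009).

    Returns the index -> wnid mapping that was applied, so it can be
    checked against the order the pretrained observer models expect.
    """
    mapping = {f"{i:05d}": wnid for i, wnid in enumerate(IMAGEWOOF_WNIDS)}
    for idx, wnid in mapping.items():
        src, dst = Path(src_root) / wnid, Path(dst_root) / idx
        if src.exists():
            shutil.copytree(src, dst, dirs_exist_ok=True)
    return mapping
```

If your pretrained models were trained with a different ordering (e.g. the order used in your TinyImageNet preparation notes rather than sorted wnids), that alone would explain the accuracy gap I am seeing.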
For TinyImageNet, you provide an explicit index mapping in prepare/tinyimagenet.md, which avoids this ambiguity. Could you please share the corresponding mapping for ImageWoof (and ImageNette, if different from fastai’s default)?
Specifically, I need:
- For each folder index (00000–00009), which WordNet ID it should contain for ImageWoof.
- The same for ImageNette (00000–00009) if applicable.
Many thanks.