Hi, after reading your paper and studying the code, I don't understand why `VisionTransformerForMaskedImageModeling` has two instances of the encoder (the encoder and the teacher model, respectively). Why is it not possible to use just the encoder, since they both seem to have the same parameters?
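For context, here is what I would have guessed is going on, based on how momentum teachers work in methods like BYOL/DINO: the teacher starts as a copy of the student encoder but is updated by an exponential moving average (EMA) rather than by gradients, so the two sets of weights diverge during training even though they share the same architecture. This is only my assumption about your design, not something I found in the code; all names (`Encoder`, `ema_update`, `momentum`) in the sketch below are my own illustration.

```python
# Minimal sketch of an EMA ("momentum") teacher, assuming a BYOL/DINO-style setup.
# Hypothetical names for illustration only; not taken from this repo.
import copy
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, dim=768):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):
        return self.proj(x)

student = Encoder()
teacher = copy.deepcopy(student)      # identical parameters at initialization
for p in teacher.parameters():
    p.requires_grad = False           # teacher receives no gradient updates

@torch.no_grad()
def ema_update(student, teacher, momentum=0.996):
    # teacher <- momentum * teacher + (1 - momentum) * student
    for ps, pt in zip(student.parameters(), teacher.parameters()):
        pt.mul_(momentum).add_(ps, alpha=1.0 - momentum)
```

If that is what the teacher is doing here, then the two modules can't be merged into one: the student needs gradients while the teacher must be a slowly-moving, gradient-free copy. But if the teacher is updated some other way, I'd appreciate a clarification.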