In Osprey, the image feature tokens produced by ConvNeXt should be 1024 (a 1024 × 768 feature). Adding the mask feature tokens (128 + 64 + 32 + 16), the pos tokens, and the text tokens on top of that, wouldn't the total easily exceed 2048 by quite a bit? If my understanding of these numbers is wrong, please correct me. Many thanks!
Hi @DimplesL, the number of image tokens is 1024, and each region contributes only one mask token and one position token. See:
Osprey/osprey/model/osprey_arch.py
Lines 184 to 187 in ca9f26d
```python
## mask
cur_new_input_embeds.append(mask_feats[batch_idx][i:i+1].to(cur_raw_new_input_embeds.dtype))
## pos
cur_new_input_embeds.append(pos_feats[batch_idx][i:i+1].to(cur_raw_new_input_embeds.dtype))
```
During normal training and inference, the sequence generally does not exceed 2048.
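For intuition, here is a rough back-of-the-envelope sketch of the sequence-length budget. The region count and text length below are made-up illustrative values, not numbers from the repo:

```python
# Rough token-budget sketch for a single Osprey sample (illustrative numbers only).
IMAGE_TOKENS = 1024          # ConvNeXt feature map flattened into 1024 image tokens
TOKENS_PER_REGION = 2        # 1 mask token + 1 position token per referred region

num_regions = 8              # hypothetical number of regions referenced in the prompt
text_tokens = 300            # hypothetical length of the tokenized instruction + answer

total = IMAGE_TOKENS + num_regions * TOKENS_PER_REGION + text_tokens
print(total)                 # 1340, well below the 2048 context limit
```

Only when the text itself approaches roughly 1000 tokens, or the prompt refers to a very large number of regions, would the sequence get close to 2048.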
Thanks for the correction. I checked the feature transformations and that is indeed the case. One more question about this part: the mask extractor module has its own parameters (linear layers, etc.), independent of the projector parameters. How is the saving of these parameters configured during training?
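I'm not sure how the Osprey training scripts configure this, but a common pattern in LLaVA-style codebases is to filter the extra trainable modules out of the model's state dict by name and save them as a separate checkpoint. Below is a minimal, generic PyTorch sketch; the module prefix `mask_extractor` is a hypothetical name for illustration, not necessarily the attribute name used in Osprey:

```python
import torch

def save_extra_modules(model, path, prefixes=("mask_extractor",)):
    """Save only the parameters whose names start with the given prefixes.

    `prefixes` is a hypothetical example; substitute the actual attribute
    names of the mask extractor (and projector, if needed) in your model.
    """
    to_save = {
        name: tensor.detach().cpu()
        for name, tensor in model.state_dict().items()
        if name.startswith(prefixes)   # str.startswith accepts a tuple of prefixes
    }
    torch.save(to_save, path)

# Loading the subset back into a freshly built model (non-strict, since only
# part of the weights is present in the saved file):
# model.load_state_dict(torch.load(path), strict=False)
```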