Hello DataComp team!
I'm seeking some clarification on the problem setup. As I understand it, when specifying a subset, assigning a weight > 1 to a particular datapoint means it can appear multiple times in the rewritten dataset. This duplication may then result in the same datapoint appearing twice in the same batch during contrastive training, potentially degrading performance, since a datapoint would be contrasted against a copy of itself.
Do you have any mechanisms or suggestions within DataComp to help detect or handle these duplicate datapoints? If not, how would you recommend mitigating potential issues caused by having duplicates in the final dataset?
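For context, here is a minimal sketch of the kind of batch-level mitigation I have in mind: dropping repeated samples before computing the contrastive loss. This assumes each sample carries a per-datapoint identifier (referred to here as `uids`, by analogy with DataComp's per-sample uid field); the function name `dedup_batch` is just illustrative, not an existing DataComp API.

```python
import numpy as np

def dedup_batch(uids):
    """Return sorted indices of the first occurrence of each uid in a batch.

    uids: sequence of per-sample identifiers (e.g. uid strings).
    Duplicates beyond the first are dropped, so no sample ends up
    contrasted against a copy of itself within the same batch.
    """
    _, first_idx = np.unique(np.asarray(uids), return_index=True)
    return np.sort(first_idx)

# Example: a batch where uid "b" appears twice because its weight > 1.
batch_uids = ["a", "b", "b", "c"]
keep = dedup_batch(batch_uids)
print(keep.tolist())  # -> [0, 1, 3]
```

The surviving indices could then be used to slice the image/text embeddings before the loss; the trade-off is a variable effective batch size whenever duplicates collide.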
Thank you in advance for your guidance!