Dear authors,
Thank you for open-sourcing such amazing work. I wanted to use the collapsed model for a task but am encountering the following error:
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("cfilt/HiNER-collapsed-xlm-roberta-large")
model = AutoModelForTokenClassification.from_pretrained("cfilt/HiNER-collapsed-xlm-roberta-large")
```
Error:
```
OSError: Can't load tokenizer for 'cfilt/HiNER-collapsed-xlm-roberta-large'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'cfilt/HiNER-collapsed-xlm-roberta-large' is the correct path to a directory containing all relevant files for a XLMRobertaTokenizerFast tokenizer.
```
When I compared the files in the collapsed repository with those in the original, the `tokenizer.json` and `tokenizer_config.json` files are missing from the former.
Are we supposed to load the tokenizer from the original model when using the collapsed one? Please let me know.
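In the meantime, the workaround I am trying (an assumption on my part, since the collapsed checkpoint appears to be fine-tuned from `xlm-roberta-large` and the error mentions `XLMRobertaTokenizerFast`) is to load the tokenizer from the base checkpoint and only the model weights from the collapsed repo:

```python
# Assumed workaround: take the tokenizer from the base model the
# collapsed checkpoint was presumably fine-tuned from, and the
# token-classification head/weights from the collapsed repo.
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")
model = AutoModelForTokenClassification.from_pretrained(
    "cfilt/HiNER-collapsed-xlm-roberta-large"
)
```

Could you confirm whether this pairing is correct, i.e. that the collapsed model was trained with the unmodified `xlm-roberta-large` tokenizer?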
Thank you for your time!