⚡️ Speed up method ProphetNetTokenizer.get_vocab by 5%
#882
📄 **5% (0.05x) speedup** for `ProphetNetTokenizer.get_vocab` in `src/transformers/models/prophetnet/tokenization_prophetnet.py`

⏱️ **Runtime:** 532 microseconds → 506 microseconds (best of 185 runs)

📝 **Explanation and details**
The optimized code achieves a 5% speedup through three key micro-optimizations:
**What was optimized:**

- `get_vocab()`: Replaced `dict(self.vocab, **self.added_tokens_encoder)` with `{**self.vocab, **self.added_tokens_encoder}`.
- `__init__`: Changed the list comprehension `[(ids, tok) for tok, ids in self.vocab.items()]` to a direct for-loop when building `ids_to_tokens`.
- `load_vocab()`: Reworked the function to process file lines more efficiently (see the sketch after this list).
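For orientation, here is a minimal sketch of the shape of these three changes, assuming the standard layout of this tokenizer (`vocab`, `added_tokens_encoder`, `ids_to_tokens`); the `TokenizerSketch` class is illustrative, not the real implementation:

```python
import collections


def load_vocab(vocab_file):
    """Load a vocabulary file into an ordered dict, iterating the file
    object directly instead of materializing every line in a list first."""
    vocab = collections.OrderedDict()
    with open(vocab_file, "r", encoding="utf-8") as reader:
        for index, line in enumerate(reader):
            vocab[line.rstrip("\n")] = index
    return vocab


class TokenizerSketch:
    """Illustrative stand-in; only the touched pieces are shown."""

    def __init__(self, vocab_file):
        self.vocab = load_vocab(vocab_file)
        self.added_tokens_encoder = {}
        # Direct loop instead of [(ids, tok) for tok, ids in self.vocab.items()]:
        # no intermediate list of tuples is allocated.
        self.ids_to_tokens = collections.OrderedDict()
        for tok, ids in self.vocab.items():
            self.ids_to_tokens[ids] = tok

    def get_vocab(self):
        # {**a, **b} skips the dict() constructor's keyword-argument handling.
        return {**self.vocab, **self.added_tokens_encoder}
```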
**Why these optimizations work:**

- Dictionary unpacking (`{**dict1, **dict2}`) avoids the overhead of calling the `dict()` constructor, which has to process keyword arguments and merge dictionaries. Direct unpacking is a faster bytecode operation (a quick `timeit` comparison follows this list).
- Streaming lines in `load_vocab()` reduces memory allocations by avoiding intermediate list storage of all lines.
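The constructor overhead is easy to observe directly. The snippet below is an illustration only (absolute numbers depend on machine and Python version; it is not a reproduction of the profiler results reported here):

```python
import timeit

# Stand-ins for self.vocab and self.added_tokens_encoder (sizes are arbitrary).
base = {f"tok{i}": i for i in range(30000)}
added = {"<extra_token>": 30000}

t_ctor = timeit.timeit(lambda: dict(base, **added), number=1000)
t_unpack = timeit.timeit(lambda: {**base, **added}, number=1000)
print(f"dict(base, **added): {t_ctor:.3f}s  vs  {{**base, **added}}: {t_unpack:.3f}s")
```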
**Performance characteristics:**

The line profiler shows the `get_vocab()` method improved from 31,350 ns to 28,998 ns per hit (~7.5% faster per call). Test results demonstrate consistent 2-19% improvements across various scenarios, with the largest gains on edge cases such as duplicate tokens (15.2% faster) and Unicode tokens (19.2% faster). The optimization is particularly effective for small to medium vocabularies, where dictionary operations dominate runtime.

**Impact on workloads:**
Since tokenizers are frequently instantiated during model loading and `get_vocab()` may be called during tokenization workflows, this optimization provides cumulative benefits in ML pipelines where ProphetNet models are used repeatedly.
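For example, the optimized path is hit every time the merged vocabulary is requested (the checkpoint name below is shown for illustration; any ProphetNet checkpoint takes the same path):

```python
from transformers import ProphetNetTokenizer

tokenizer = ProphetNetTokenizer.from_pretrained("microsoft/prophetnet-large-uncased")
vocab = tokenizer.get_vocab()  # merged base vocab + added tokens
print(len(vocab), vocab["[PAD]"])
```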
✅ **Correctness verification report:**

🌀 Generated Regression Tests and Runtime
To edit these changes, run `git checkout codeflash/optimize-ProphetNetTokenizer.get_vocab-misjj66c` and push.