
Conversation

@g-linville
Member

@g-linville g-linville commented Jan 30, 2025

Tokens used for tool definitions count toward the context window, but we were failing to include them here when trying to stay beneath the limit. This PR fixes that by accounting for them.

Our token estimation still isn't great. For example, a message consisting of `1\n` repeated 30,000 times is estimated at 20,000 tokens by our method (we simply divide the character count by 3), but OpenAI's tokenizer counts it as 60,000 tokens, so we grossly underestimate. We should consider calling out to Python and using the tiktoken library to count tokens for us, as that would be far more reliable. In practice, though, this rarely seems to cause trouble, so maybe we are fine sticking with our current divide-by-3 approach.
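To illustrate the underestimate described above, here is a minimal Python sketch of the divide-by-3 heuristic. `estimate_tokens` is a hypothetical helper for illustration, not the project's actual code, and the true-token figure comes from the discussion above rather than from running a tokenizer here.

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: assume ~3 characters per token."""
    return len(text) // 3

# Pathological case from the discussion: "1\n" repeated 30,000 times.
message = "1\n" * 30000

print(len(message))              # 60000 characters
print(estimate_tokens(message))  # heuristic says 20000 tokens
# Per the discussion, OpenAI's tokenizer counts this message as
# ~60,000 tokens (roughly one token per character here), so the
# heuristic underestimates by about 3x on this input.
```

A BPE-based counter such as tiktoken would catch cases like this, at the cost of a Python dependency.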


Signed-off-by: Grant Linville <grant@acorn.io>
@g-linville g-linville changed the title fix: count tokens from tool definitions when adjusting for context wi… fix: count tokens from tool definitions when adjusting for context window Jan 30, 2025
@g-linville g-linville marked this pull request as draft January 30, 2025 16:16
@g-linville
Member Author

Moved to draft. I'm going to use a proper tokenizer library so that we can get accurate token counts.

@g-linville
Member Author

There is more work that would be needed here and it's just not a priority right now.

@g-linville g-linville closed this Mar 3, 2025