This repository was archived by the owner on Jul 23, 2024. It is now read-only.

Found the answer in Application Insights - Errors.

This model's maximum context length is 4096 tokens. However, your messages resulted in 14821 tokens. Please reduce the length of the messages.

I had increased the chunk size, and when you request multiple sources, their token counts are additive. Reducing to a single source resolved the issue.
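To illustrate the fix, here is a minimal sketch of how one might cap the number of retrieved source chunks so the prompt stays under the model's context window. This is not the repo's actual code; the function names and the ~4-characters-per-token heuristic are assumptions for illustration.

```python
# Sketch: keep the combined prompt under the model's context window
# by including only as many retrieved chunks as the token budget allows.

def rough_token_count(text: str) -> int:
    # Crude heuristic (assumption): roughly 4 characters per token
    # for English text; a real implementation would use a tokenizer.
    return max(1, len(text) // 4)

def select_sources(chunks, question, max_context_tokens=4096, reply_budget=1024):
    # Reserve room for the model's reply and the user's question,
    # then greedily add chunks (assumed ordered by relevance).
    budget = max_context_tokens - reply_budget - rough_token_count(question)
    selected, used = [], 0
    for chunk in chunks:
        cost = rough_token_count(chunk)
        if used + cost > budget:
            break  # adding this chunk would exceed the context window
        selected.append(chunk)
        used += cost
    return selected
```

With a 4096-token window this drops lower-ranked sources first, which matches the workaround above: fewer sources means fewer additive tokens in the prompt.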

Answer selected by IainD925