
Script allows sending 'infinite' amounts of text, but without context/memory #7

@CrazySwede78

Description


So looking through the script: basically, your code takes a text document, breaks it into pieces matching a token limit (and I am still trying to figure out how you get an accurate token count for the OpenAI API without using tiktoken or a similar library), and sends each piece separately to the API (with the ability to set a system message and/or prompt with each chunk). So yes, in a way you are able to send near-infinite texts to the API.
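For reference, the chunking mechanism described above can be sketched roughly as below. This is not the repository's actual code; since the script's token-counting method is unknown, this sketch uses a crude ~4-characters-per-token heuristic (an accurate count would use tiktoken's `encoding_for_model(...).encode(...)`), and the function name and limits are placeholders.

```python
def chunk_text(text: str, max_tokens: int = 3000) -> list[str]:
    """Split text into pieces whose *estimated* token count stays under max_tokens.

    Assumption: ~4 characters per token for English text. This is only a
    rough stand-in for a real tokenizer such as tiktoken.
    """
    max_chars = max_tokens * 4
    chunks: list[str] = []
    current: list[str] = []
    current_len = 0
    for word in text.split():
        # +1 accounts for the joining space between words
        if current and current_len + len(word) + 1 > max_chars:
            chunks.append(" ".join(current))
            current, current_len = [], 0
        current.append(word)
        current_len += len(word) + 1
    if current:
        chunks.append(" ".join(current))
    return chunks
```

Each chunk would then be sent as its own API request, which is exactly where the memory problem below comes in.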

But that does not change the underlying issue: the OpenAI API models lack the conversation buffer memory that ChatGPT has, so each of those chunks gets treated and responded to without any inherent context or connection to any of the other chunks you send. So even if you get replies, none of them would make any sense for the whole, because the model never sees the full text, only the individual chunk sent in that call.
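The statelessness point above can be made concrete: a chat-completion request only "knows" the `messages` list sent in that one call, so per-chunk requests are fully independent, and the only way to give the model context is to resend the prior turns yourself. A minimal sketch (the function names, `system_msg`, and `chunks` are illustrative placeholders, not the script's actual API):

```python
def build_stateless_requests(system_msg: str, chunks: list[str]) -> list[dict]:
    """One independent payload per chunk -- the model never sees the other chunks."""
    return [
        {"messages": [{"role": "system", "content": system_msg},
                      {"role": "user", "content": chunk}]}
        for chunk in chunks
    ]

def build_contextual_requests(system_msg: str, chunks: list[str]) -> list[dict]:
    """To give the model 'memory', every prior chunk must be resent on each call.

    Note: this makes each successive request larger, so it quickly runs back
    into the context-window limit that chunking was meant to work around.
    """
    requests: list[dict] = []
    history: list[dict] = [{"role": "system", "content": system_msg}]
    for chunk in chunks:
        history = history + [{"role": "user", "content": chunk}]
        requests.append({"messages": list(history)})
    return requests
```

The stateless variant is what sending chunks separately amounts to; the contextual variant shows why "just include the history" is not a free fix.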

I mean, I guess I see the benefit of this script if you want to hit the rate limits of your GPT-4 API access, but other than that? Could you provide a sensible use case for this script, please? Also, for the sake of transparency and openness, I think you should mention that the API can't connect the chunks and will only respond to each one individually, as there seems to be some confusion regarding the true capabilities of your script.
