Error: reading streaming LLM response: doRequest: error sending request: Post "https://generativelanguage.googleapis.com//v1beta/models/gemini-2.5-pro-preview-03-25:streamGenerateContent?alt=sse": dial tcp 142.251.215.234:443: i/o timeout
How should I handle the above issue?
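One way to narrow this down (a diagnostic sketch; the hostname and IP are taken from the error above) is to check whether the endpoint is reachable from the same machine without kubectl-ai in the loop:

```sh
# Does DNS resolve, and can we open a TCP connection to the address
# from the error message? (142.251.215.234:443 comes from the error.)
getent hosts generativelanguage.googleapis.com
nc -vz -w 5 142.251.215.234 443

# A plain HTTPS request to the same host; if this also times out,
# the problem is the network path, not kubectl-ai.
curl -sv -o /dev/null --max-time 10 https://generativelanguage.googleapis.com/
```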
Same here; the log seems normal (/tmp/kubectl-ai.log):
```
Log file created at: 2025/05/09 18:28:37
Running on machine: areszz
Binary: Built with gc go1.24.1 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0509 18:28:37.896898 3382599 main.go:327] "Application started" pid=3382599
I0509 18:28:37.903974 3382599 conversation.go:78] "Created temporary working directory" workDir="/tmp/agent-workdir-510886778"
I0509 18:29:11.741031 3382599 conversation.go:133] "Starting chat loop for query:" query="list /root"
I0509 18:29:11.741097 3382599 conversation.go:146] "Starting iteration" iteration=0
```
By the way, I run it through proxychains (`proxychains ./kubectl-ai`), which may be causing the problem, but it works most of the time. I tried:
```sh
proxychains curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:generateContent?key=GEMINI_API_KEY" \
  -H 'Content-Type: application/json' \
  -X POST \
  -d '{
    "contents": [{
      "parts": [{"text": "Explain how AI works"}]
    }]
  }'
```
and it worked without any problem.
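A plausible explanation, offered as an assumption rather than something confirmed in this thread: proxychains works by LD_PRELOAD-hooking libc functions such as connect(), and curl calls into libc, so its traffic gets redirected. kubectl-ai is a Go binary (the log shows it was built with gc go1.24.1), and on Linux Go issues syscalls directly instead of going through libc, so proxychains may fail to intercept some of its connections; those then dial Google directly and hit the i/o timeout. Go's net/http honors the standard proxy environment variables via its default transport, which sidesteps the hooking problem entirely. A minimal sketch, assuming kubectl-ai uses Go's default HTTP transport and a local SOCKS5 proxy (substitute whatever your proxychains.conf points at):

```sh
# Sketch, not a confirmed fix: let Go's HTTP client route through the
# proxy itself instead of relying on proxychains' LD_PRELOAD hook.
# socks5://127.0.0.1:1080 is a placeholder address.
HTTPS_PROXY=socks5://127.0.0.1:1080 ./kubectl-ai
```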
Can you please try without proxychains and confirm whether it works for you?
I'm curious about the benefits of running with proxychains.
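For reference, a minimal check along those lines (the query is just an example):

```sh
# Run once with no proxy layer at all to see whether the timeout persists.
./kubectl-ai "list pods"
```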