🔒 Add timeout to requests.get calls in SkillFetcher#2
Conversation
Added a 10-second timeout to all HTTP GET requests in `claw2manus/fetcher.py` to prevent the application from hanging indefinitely when fetching remote skills or discovering authors. Affected methods:

- `fetch_skill_from_github`
- `fetch_skill_from_clawhub_website`
- `discover_author_via_github`

Co-authored-by: frostmute <989225+frostmute@users.noreply.github.com>
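The change described above follows a standard pattern: pass `timeout` to every `requests.get` call so a stalled server raises an exception instead of blocking forever. A minimal sketch of that pattern (the `fetch` helper and constant names here are illustrative, not part of `SkillFetcher`):

```python
import requests

# Hypothetical constants for illustration; the PR itself uses a plain timeout=10.
CONNECT_TIMEOUT = 3.05  # seconds to establish the TCP connection
READ_TIMEOUT = 10       # seconds to wait for each chunk of the response

def fetch(url, headers=None):
    """GET a URL with bounded connect/read timeouts instead of hanging forever."""
    response = requests.get(url, headers=headers,
                            timeout=(CONNECT_TIMEOUT, READ_TIMEOUT))
    response.raise_for_status()  # surface HTTP errors as exceptions
    return response
```

Callers should be prepared to catch `requests.exceptions.Timeout` (or the broader `requests.exceptions.RequestException`) when the remote host is unreachable.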
👋 Jules, reporting for duty! I'm here to lend a hand with this pull request. When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down. I'll push a commit with your requested changes shortly after. Please note there might be a delay between these steps, but rest assured I'm on the job!

For more direct control, you can switch me to Reactive Mode. When this mode is on, I will only act on comments where you specifically mention me with

New to Jules? Learn more at jules.google/docs. For security, I will only act on instructions from the user who triggered this task.
Code Review
This pull request introduces timeouts to HTTP requests in claw2manus/fetcher.py to prevent indefinite hanging. The review feedback suggests using a tuple for the timeout parameter to separate connection and read timeouts, defining these values as class-level constants for better maintainability, and adding a mandatory User-Agent header for GitHub Search API requests to prevent potential 403 Forbidden errors.
```diff
  headers = {"Accept": "application/vnd.github.v3+json"}
  try:
-     response = requests.get(url, headers=headers)
+     response = requests.get(url, headers=headers, timeout=10)
```
The GitHub Search API requires a User-Agent header for all requests. Without it, the API will likely return a 403 Forbidden error. It is recommended to include a descriptive User-Agent (e.g., the name of your application). Additionally, using a tuple for the timeout parameter is recommended for better granularity.
Suggested change:

```diff
- response = requests.get(url, headers=headers, timeout=10)
+ response = requests.get(url, headers={**headers, "User-Agent": "claw2manus"}, timeout=(3.05, 10))
```
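The header-merge idiom in this suggestion can be shown in isolation. Note that `"claw2manus"` as the User-Agent string is the reviewer's example value, not a confirmed project convention:

```python
# Base headers as used for the GitHub API call in fetcher.py.
headers = {"Accept": "application/vnd.github.v3+json"}

# Merge in a User-Agent without mutating the original dict; GitHub's API
# may reject requests that lack one with 403 Forbidden.
request_headers = {**headers, "User-Agent": "claw2manus"}

# The actual call would then be (url omitted here):
# response = requests.get(url, headers=request_headers, timeout=(3.05, 10))
```

The `{**headers, ...}` spread creates a new dict, so the shared `headers` constant stays unchanged for other call sites.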
```diff
  url = self.CLAW_HUB_RAW_GITHUB_URL.format(author=author, name=name)
  try:
-     response = requests.get(url)
+     response = requests.get(url, timeout=10)
```
Using a single value for timeout sets both the connection and read timeouts to that value. It is recommended to use a tuple (e.g., (3.05, 10)) to specify separate timeouts. The connection timeout of 3.05 is slightly larger than a multiple of 3 (the default TCP retransmission window), which is a common best practice. Also, consider defining this timeout as a class-level constant to avoid repeating the magic number 10 across multiple methods.
Suggested change:

```diff
- response = requests.get(url, timeout=10)
+ response = requests.get(url, timeout=(3.05, 10))
```
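The reviewer's class-level constant idea could look like the sketch below. The class and method names are hypothetical stand-ins for `SkillFetcher`, which is not shown in full in this diff:

```python
import requests

class SkillFetcherSketch:
    """Illustrative only: one class-level constant replaces the repeated
    magic number 10 across fetch methods."""

    # (connect, read) timeouts in seconds; 3.05 sits just above a multiple
    # of 3, the default TCP retransmission window.
    REQUEST_TIMEOUT = (3.05, 10)

    def _get(self, url, **kwargs):
        # Single choke point: every HTTP call in the class goes through here,
        # so tuning the timeout later means editing one line.
        return requests.get(url, timeout=self.REQUEST_TIMEOUT, **kwargs)
```

Each of the three fetch methods would then call `self._get(url, ...)` instead of `requests.get` directly.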
```diff
  url = self.CLAW_HUB_WEBSITE_URL.format(name=name)
  try:
-     response = requests.get(url)
+     response = requests.get(url, timeout=10)
```
🎯 What: The vulnerability fixed is the absence of a timeout parameter in `requests.get` calls within `claw2manus/fetcher.py`.

⚠️ Risk: If a remote server is slow or unresponsive, the application could hang indefinitely, leading to resource exhaustion and potential Denial-of-Service (DoS) scenarios.

🛡️ Solution: Added `timeout=10` to all three `requests.get` invocations in the `SkillFetcher` class.

PR created automatically by Jules for task 7957280771597398848 started by @frostmute