feat: add qBittorrent/seedbox support with private tracker handling #714
giorgiobrullo wants to merge 24 commits into Viren070:main from
Conversation
Important: Review skipped. Draft detected. Please check the settings in the CodeRabbit UI. You can disable this status message in the settings. Use the checkbox below for a quick retry:
Walkthrough
Adds deterministic placeholder-hash handling for private, hash-less torrents; tracks private indexers in Prowlarr searches; optionally excludes private torrents from debrid services; persists placeholder→real-hash mappings in StremThru; adds qBittorrent support and propagates new fields through types, API responses and UI.
Sequence Diagram(s)
sequenceDiagram
participant User
participant ProwlarrAPI as Prowlarr API
participant Addon as Prowlarr Addon
participant Processor as Torrent Processor / Debrid Utils
participant StremThru as StremThru Service
participant Cache as Hash Mapping Cache
User->>ProwlarrAPI: search request
ProwlarrAPI-->>Addon: results + privateIndexerIds
Addon->>Processor: annotate results with private flag
Processor->>Processor: if EXCLUDE_PRIVATE_TRACKERS && private -> skip (unless qbittorrent)
alt private & hash-less
Processor->>Processor: generate deterministic placeholder hash
Processor-->>User: torrent with placeholderHash=true
else public or has hash
Processor-->>User: normal torrent
end
User->>StremThru: request stream (may include placeholder hash)
StremThru->>Cache: lookup placeholder→real mapping
alt mapping found
Cache-->>StremThru: real hash
StremThru->>StremThru: use real hash for add/check
else not found
StremThru->>StremThru: add torrent with downloadUrl/magnet
StremThru->>Cache: store mapping if real hash returned
end
StremThru-->>User: playback link or downloading state
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
✨ Finishing touches: 🧪 Generate unit tests (beta)
@coderabbitai review
✅ Actions performed: Review triggered.
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
packages/core/src/parser/utils.ts (1)
97-120: ⚠️ Potential issue | 🟡 Minor
Preserve curly apostrophes when keepApostrophes is enabled.
At the moment, typographic apostrophes (e.g. ’) are still removed, so the option is only partially honoured. Normalising them to ASCII before filtering will keep the behaviour consistent.
Suggested fix
```diff
   for (const char of ['♪', '♫', '★', '☆', '♡', '♥', '-']) {
     cleaned = cleaned.replaceAll(char, ' ');
   }
+  cleaned = cleaned.replace(/[’‘]/g, "'");
   const specialCharPattern = options?.keepApostrophes
     ? /[^\p{L}\p{N}\s']/gu
     : /[^\p{L}\p{N}\s]/gu;
```
packages/core/src/builtins/prowlarr/addon.ts (1)
262-292: ⚠️ Potential issue | 🟠 Major
Deduplication can drop private-tracker flags.
When duplicate hashes/URLs appear, the first result wins. If the first is public and a later duplicate comes from a private indexer, the private flag is lost and the torrent may incorrectly be treated as public. For private trackers this is risky because it can allow debrid processing against policy.
Suggested fix
```diff
-    const seenTorrents = new Set<string>();
+    const seenTorrents = new Map<string, UnprocessedTorrent>();
     const torrents: UnprocessedTorrent[] = [];
     for (const result of results) {
       const magnetUrl = result.guid?.includes('magnet:') ? result.guid : undefined;
@@
       const infoHash = validateInfoHash(
         result.infoHash ||
           (magnetUrl ? extractInfoHashFromMagnet(magnetUrl) : undefined)
       );
       if (!infoHash && !downloadUrl) continue;
-      if (seenTorrents.has(infoHash ?? downloadUrl!)) continue;
-      seenTorrents.add(infoHash ?? downloadUrl!);
-
-      const isPrivate = privateIndexerIds.has(result.indexerId) ? true : undefined;
-
-      torrents.push({
+      const key = infoHash ?? downloadUrl!;
+      const isPrivate = privateIndexerIds.has(result.indexerId) ? true : undefined;
+      const existing = seenTorrents.get(key);
+      if (existing) {
+        if (isPrivate && !existing.private) existing.private = true;
+        continue;
+      }
+
+      const torrent: UnprocessedTorrent = {
         hash: infoHash,
         downloadUrl: downloadUrl,
         sources: magnetUrl ? extractTrackersFromMagnet(magnetUrl) : [],
         seeders: result.seeders,
         title: result.title,
         size: result.size,
         indexer: result.indexer,
         type: 'torrent',
-        private: isPrivate,
-      });
+        private: isPrivate,
+      };
+      seenTorrents.set(key, torrent);
+      torrents.push(torrent);
     }
```
🤖 Fix all issues with AI agents
In `@packages/core/src/debrid/stremthru.ts`:
- Around line 410-413: The cache write for the null sentinel using
StremThruInterface.playbackLinkCache.set(cacheKey, null, 60) is fire-and-forget
and can surface unhandled rejections if the cache (Redis/SQL) is down; update
both occurrences (the one near the null-return path and the second occurrence
around the 520–526 area) to either await the call or wrap it in a try/catch that
consumes/logs errors so failures don't bubble up—specifically find the calls to
StremThruInterface.playbackLinkCache.set with cacheKey and null and change them
to an awaited call or a try { await
StremThruInterface.playbackLinkCache.set(...) } catch (err) { /* handle or log
*/ } to prevent unhandled promise rejections.
In `@packages/core/src/presets/stremthru.ts`:
- Around line 13-26: The current QBITTORRENT_SERVICE serializer builds a
pipe-delimited string (base) and appends pathMapping without guarding against
pipe characters or an empty "to" side; update the function that constructs base
(using constants.QBITTORRENT_SERVICE, credentials, base, and mapping) to
validate that credentials.url, credentials.username, credentials.password,
credentials.fileBaseUrl and the pathMapping string do NOT contain the '|'
delimiter (throw a clear Error if they do), require mapping to contain exactly
one ':' and ensure colonIdx is neither 0 nor mapping.length - 1 (i.e., both
"from" and "to" are non-empty), and throw descriptive errors on failure so
callers fail fast.
🧹 Nitpick comments (3)
packages/core/src/builtins/utils/debrid.ts (1)
148-161: Consider adding error handling for individual hash resolution failures.
The sequential await in the loop is acceptable given the expected low volume of placeholder hashes. However, if resolveHash throws for one torrent, it will fail the entire batch. Consider wrapping individual resolutions in try-catch to allow other torrents to proceed.
♻️ Proposed fix with error handling
```diff
   if (debridService.resolveHash) {
     for (const torrent of torrents) {
       if (torrent.placeholderHash) {
-        const realHash = await debridService.resolveHash(torrent.hash);
-        if (realHash !== torrent.hash) {
-          torrent.hash = realHash;
-          torrent.placeholderHash = false;
+        try {
+          const realHash = await debridService.resolveHash(torrent.hash);
+          if (realHash !== torrent.hash) {
+            torrent.hash = realHash;
+            torrent.placeholderHash = false;
+          }
+        } catch (error) {
+          logger.debug(`Failed to resolve placeholder hash for torrent: ${torrent.title}`, { error });
         }
       }
     }
   }
```
packages/core/src/debrid/stremthru.ts (2)
43-52: Confirm hash‑mapping uses a persistent cache store.
store is undefined when Env.REDIS_URI is set; if Cache defaults to memory in that case, mappings won’t survive restarts. Please confirm the default behaviour or make the store explicit.
💡 Suggested tweak (if the default isn’t Redis)
```diff
   private static hashMapping = Cache.getInstance<string, string>(
     'st:hash-map',
     5000,
-    Env.REDIS_URI ? undefined : 'sql'
+    Env.REDIS_URI ? 'redis' : 'sql'
   );
```
317-360: Align “permanent” mapping semantics with the 1‑year TTL.
The comment says the mapping is permanent, but the TTL is 365 days. Either update the wording or make the TTL explicit (e.g., a named constant/config) so the behaviour is intentional.
@coderabbitai full review
✅ Actions performed: Full review triggered.
Actionable comments posted: 4
🤖 Fix all issues with AI agents
In `@packages/core/src/builtins/utils/debrid.ts`:
- Around line 148-160: The loop in processTorrentsForDebridService currently
calls debridService.resolveHash for each torrent without handling exceptions, so
a thrown error will abort the whole function; wrap the per-torrent call to
debridService.resolveHash in a try/catch inside the for (const torrent of
torrents) loop and on error log or record the failure and leave
torrent.placeholderHash and torrent.hash unchanged (i.e., continue to next
torrent) so a single resolveHash failure doesn't drop the entire service;
reference the resolve call (debridService.resolveHash) and the torrent fields
(torrent.hash, torrent.placeholderHash) to locate where to add the guard.
In `@packages/core/src/debrid/stremthru.ts`:
- Around line 286-299: StremThruInterface._resolveHash currently calls
hashMapping.get and will throw if the cache backend errors; wrap the await
StremThruInterface.hashMapping.get(`${serviceName}:${hash}`) call in a
try/catch, log the failure with logger.warn or logger.error (include serviceName
and hash) and return the original hash on any exception so the function always
resolves to a string; keep the existing logger.debug for successful resolves and
ensure the function signature Promise<string> is preserved.
In `@packages/core/src/utils/constants.ts`:
- Around line 778-784: Add explicit validation to reject qBittorrent passwords
containing the pipe character: inside the configuration validation path (e.g.,
validateService or validateOption) check for service id
constants.QBITTORRENT_SERVICE and credential id 'password', verify the value is
a string and throw an Error if value.includes('|') with a clear message like
"qBittorrent password must not contain '|'" so the bad credential is rejected at
validation time rather than failing at runtime.
In `@packages/frontend/src/components/menu/services.tsx`:
- Around line 215-223: The private-tracker warning is currently computed from
hasProwlarr/hasDebridService (used in showPrivateTrackerWarning) but doesn't
respect the backend BUILTIN_DEBRID_EXCLUDE_PRIVATE_TRACKERS policy; update the
frontend to gate the warning on a server-provided flag (or soften copy) by
wiring a new boolean from the server (e.g. add
serverPolicy?.builtinDebridExcludePrivateTrackers to the shape returned with
userData or settings) and change the logic to: const showPrivateTrackerWarning =
hasProwlarr && hasDebridService &&
serverPolicy?.builtinDebridExcludePrivateTrackers; if exposing the flag from
backend isn't possible now, instead change the copy to a conditional/softer
message and only show when hasProwlarr && hasDebridService (and indicate
exclusion is conditional). Ensure you update any types/interfaces for
userData/settings to include the new serverPolicy flag and use the same symbol
names (hasProwlarr, hasDebridService, showPrivateTrackerWarning) so the change
is easy to locate.
🧹 Nitpick comments (4)
packages/core/src/builtins/base/debrid.ts (1)
190-206: Consider removing the type cast if spread satisfies all required Torrent properties.
The as Torrent cast on line 198 works, but since placeholderHash?: boolean is already declared in the Torrent interface and you're explicitly setting both hash and placeholderHash on the spread object, the cast may be redundant if TypeScript can infer the resulting type. Verify whether removing it causes any type errors; if not, the cast can be eliminated for cleaner code.
packages/core/src/parser/streamExpression.ts (1)
789-809: Remove the duplicate service-argument validation block.
The services.length/typeof check appears twice back-to-back, which adds noise and risks future divergence. A single validation block is sufficient.
Proposed tidy-up
```diff
-      } else if (
-        services.length === 0 ||
-        services.some((s) => typeof s !== 'string')
-      ) {
-        throw new Error(
-          'You must provide one or more service string parameters'
-        );
-      } else if (
+      } else if (
```
packages/core/src/builtins/prowlarr/api.ts (1)
151-185: Consider normalising category order in the cache key.
If callers send the same set in a different order, you’ll fragment cache entries unnecessarily. Sorting once keeps cache hits consistent.
Normalise categories once
```diff
-    const cacheKey = `${this.baseUrl}:${type}:${query}:${indexerIds.join(',')}:${categories?.join(',') ?? ''}:${limit}:${offset}`;
+    const normalisedCategories = categories?.slice().sort((a, b) => a - b);
+    const cacheKey = `${this.baseUrl}:${type}:${query}:${indexerIds.join(',')}:${normalisedCategories?.join(',') ?? ''}:${limit}:${offset}`;
...
-      ...(categories?.length && { categories }),
+      ...(normalisedCategories?.length && {
+        categories: normalisedCategories,
+      }),
```
packages/core/src/debrid/stremthru.ts (1)
256-260: Consider logging cache delete failures for debuggability.
The empty .catch(() => {}) silently swallows errors, which prevents unhandled rejections but makes troubleshooting harder if cache operations consistently fail.
♻️ Suggested improvement
```diff
   private checkCacheDelete(hash: string): void {
     StremThruInterface.checkCache
       .delete(`${this.serviceName}:${getSimpleTextHash(hash)}`)
-      .catch(() => {});
+      .catch((e) =>
+        logger.debug('Failed to delete check cache entry', { hash, error: e })
+      );
   }
```
@coderabbitai review
✅ Actions performed: Review triggered.
@coderabbitai review
✅ Actions performed: Review triggered.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@packages/core/src/debrid/stremthru.ts`:
- Around line 399-417: The current logic sets streamingWhileDownloading based on
hasStreamableFiles (magnetDownload.files?.some(f => f.link)), but the chosen
file may lack a link and cause an error; after you determine the selected file
(the variable that represents the file being requested/selected), add a guard to
check its link (e.g., selectedFile?.link) and if it is missing, return undefined
(or the same "downloading" fallback) instead of proceeding—update the flow
around streamingWhileDownloading, hasStreamableFiles, cacheAndPlay and the file
selection logic so missing links on the selected file are handled gracefully.
🧹 Nitpick comments (1)
packages/core/src/debrid/stremthru.ts (1)
256-260: Consider logging cache invalidation failures.
Swallowing delete errors can hide stale availability issues; a debug log keeps the failure visible without breaking the flow.
Suggested tweak
```diff
   private checkCacheDelete(hash: string): void {
     StremThruInterface.checkCache
       .delete(`${this.serviceName}:${getSimpleTextHash(hash)}`)
-      .catch(() => {});
+      .catch((err) =>
+        logger.debug(
+          `Failed to invalidate instant-availability cache for ${hash}: ${err.message}`
+        )
+      );
   }
```
|
@coderabbitai review |
✅ Actions performedReview triggered.
@coderabbitai full review
✅ Actions performed: Full review triggered.
Note: I noticed a small typo in your mention ("coderabbittai"), but I understood your intent! For future reference, my username is @coderabbitai.
✅ Actions performed: Full review triggered.
@coderabbitai Any outstanding issues?
🚨 Critical Blocker
1. Dependency Issue - StremThru PR: Your PR description states this depends on "StremThru PR
Adds first-class qBittorrent support for StremThru Store and StremThru presets, allowing seedbox users to stream via the qBittorrent Web UI. Credential format: url, username, password, fileBaseUrl — encoded as a pipe-delimited token for StremThru's store_token field.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
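For illustration, a minimal TypeScript sketch of how such a pipe-delimited token could be assembled; the QbitCredentials shape and the buildStoreToken helper are assumptions for the example, not the actual AIOStreams code.

```typescript
// Illustrative only: assembles a pipe-delimited token from qBittorrent
// credentials. Field names mirror the description above; the helper name
// and the validation are assumptions, not the real implementation.
interface QbitCredentials {
  url: string;
  username: string;
  password: string;
  fileBaseUrl: string;
}

function buildStoreToken(creds: QbitCredentials): string {
  const parts = [creds.url, creds.username, creds.password, creds.fileBaseUrl];
  // The delimiter cannot appear inside any field, so reject it up front.
  for (const part of parts) {
    if (part.includes('|')) {
      throw new Error("Credential fields must not contain '|'");
    }
  }
  return parts.join('|');
}

// buildStoreToken({ url: 'http://qbit:8080', username: 'admin',
//   password: 'secret', fileBaseUrl: 'https://files.example.com' })
// -> 'http://qbit:8080|admin|secret|https://files.example.com'
```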
Explain the use case (private trackers, dedicated IP, ratio), how it works (sequential download, never auto-deletes), and give practical examples for Docker network setups.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Cover both remote seedbox (provider's HTTPS file access URL) and self-hosted (nginx serving download directory) cases.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Assumes upstream support is merged and StremThru handles qBittorrent out of the box.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Adds qbittorrent as a valid service ID in the SEL service() function, and adds a privateTracker() filter to select streams from private trackers.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Private tracker search results from Prowlarr often lack info hashes. Previously, AIOStreams eagerly downloaded the .torrent via Prowlarr's /download endpoint during browse just to extract the hash — counting as a snatch on private trackers and risking ratio/HnR issues. Now, torrents without a hash get a deterministic placeholder (SHA-1 of the download URL) and a `placeholderHash` flag. The actual .torrent is only downloaded at resolve time via addTorrent(downloadUrl) when the user selects the stream. Public tracker behavior is unchanged.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
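As a rough sketch of the placeholder scheme described here (SHA-1 of the download URL), assuming Node's built-in crypto module; the function name is illustrative, not the actual AIOStreams helper.

```typescript
import { createHash } from 'crypto';

// Illustrative sketch: derive a stable 40-character hex placeholder from the
// Prowlarr download URL, so the same result always maps to the same hash
// without ever downloading the .torrent during browse.
function placeholderHashFromUrl(downloadUrl: string): string {
  return createHash('sha1').update(downloadUrl).digest('hex');
}

// Same URL in -> same placeholder out, which keeps dedup and caching stable
// across repeated searches.
```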
…type
Prowlarr searches were sent to all indexers regardless of their supported categories, causing movie-only trackers like PTP to receive TV queries (e.g. S01E01) and return errors / rate limits. Now passes category 2000 (Movies) for movie searches and 5000 (TV) for series, so Prowlarr only queries indexers that support the content type.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…hash torrents
Since we skip .torrent downloads for placeholder-hash torrents, the private flag (normally read from the .torrent file) was never set. This broke the privateTracker() SEL filter for those torrents. Now reads the indexer's privacy field from the Prowlarr API and sets private: true on torrents from private/semi-private indexers during search, so the flag flows through the entire pipeline regardless of whether the .torrent was downloaded.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Adds BUILTIN_DEBRID_EXCLUDE_PRIVATE_TRACKERS env var (default: true) that filters private tracker torrents before they reach debrid services. qBittorrent is exempt since it runs on user-controlled infrastructure. Shows an info banner on the services page when the user has both Prowlarr and debrid services enabled, explaining the protection and recommending qBittorrent for private tracker content.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
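A hedged sketch of the gating this commit describes; the function and field names are illustrative, only the env var name and the qBittorrent exemption come from the commit message.

```typescript
// Sketch only: drop private-tracker torrents before they reach a debrid
// service, unless the target service is qBittorrent (user-controlled
// infrastructure). Names other than the env var are assumptions.
interface TorrentLike {
  title: string;
  private?: boolean;
}

function filterForService(
  torrents: TorrentLike[],
  serviceId: string,
  excludePrivateTrackers: boolean // mirrors BUILTIN_DEBRID_EXCLUDE_PRIVATE_TRACKERS
): TorrentLike[] {
  if (!excludePrivateTrackers || serviceId === 'qbittorrent') {
    return torrents;
  }
  return torrents.filter((t) => t.private !== true);
}
```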
…eries
Add `debrid` boolean to SERVICE_DETAILS and StatusResponseSchema so the frontend can determine service types from the backend instead of a hardcoded list. Also move title cleaning into buildQueries so each addon controls it — Prowlarr passes keepApostrophes:true so indexer searches like "Madea's Destination Wedding" keep proper punctuation.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
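A sketch of what the data-driven `debrid` flag might look like; only the `debrid` boolean on SERVICE_DETAILS is taken from the commit message, the surrounding shapes and example entries are assumptions.

```typescript
// Illustrative shape only: a `debrid` flag on each service definition lets
// the frontend derive service types from the /status response instead of a
// hardcoded list of debrid service ids.
interface ServiceDetail {
  id: string;
  name: string;
  debrid: boolean;
}

const SERVICE_DETAILS: Record<string, ServiceDetail> = {
  realdebrid: { id: 'realdebrid', name: 'Real-Debrid', debrid: true },
  qbittorrent: { id: 'qbittorrent', name: 'qBittorrent', debrid: false },
};

// Frontend-side check with no hardcoded list:
const hasDebridService = (enabledIds: string[]) =>
  enabledIds.some((id) => SERVICE_DETAILS[id]?.debrid);
```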
…t cache checks
Placeholder hashes (SHA-1 of download URL) broke three things for private tracker torrents on qBittorrent: cache checks always missed, the polling loop couldn't find the torrent by hash, and placeholderHash wasn't passed to the resolve endpoint. Fix by caching the real hash returned by addTorrent and resolving placeholders before cache checks.
…pping immediately
After a successful resolve, the instant availability cache entry for the resolved hash is now deleted so the next browse gets fresh status from StremThru. The placeholder→real hash mapping is now written with forceWrite to avoid the 2-second buffer window where get() couldn't see buffered writes.
…ly file links
Stores like qBittorrent (sequential download) and Torbox return file links before the torrent is fully downloaded. When cacheAndPlay is enabled, proceed to stream immediately instead of waiting for 100% completion. If link generation fails (store doesn't support partial serving), gracefully fall back to the "downloading" page. Also fixes the polling loop: listMagnets doesn't return file entries, so preserve the initial file list from addMagnet/addTorrent.
- Scope placeholder hashes to private tracker torrents only (was applying to all hash-less torrents, making the .torrent download block dead code)
- Decouple StremThruInterface from the generic debrid pipeline: add optional resolveHash() to the DebridService interface, remove the direct import (see the sketch below)
- Use constants.QBITTORRENT_SERVICE instead of a hardcoded string
- Increase hash mapping TTL to 1 year (mapping is permanent, liveness checked separately via checkMagnets)
- Fix || undefined idiom to explicit ternary in the Prowlarr addon
- Fix misleading comment about addTorrent decision logic
- Document pipe character limitation in the qBittorrent password field
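A minimal sketch of the optional resolveHash() hook mentioned above; interface members besides resolveHash are illustrative, not the real DebridService definition.

```typescript
// Sketch: the generic pipeline only sees an optional resolveHash() on the
// service interface; only StremThru needs to implement it.
interface DebridService {
  serviceName: string;
  // Optional: map a placeholder hash to the real info hash, if known.
  resolveHash?(hash: string): Promise<string>;
}

async function resolveIfPossible(
  service: DebridService,
  hash: string
): Promise<string> {
  // Services without the hook simply keep the hash they were given.
  return service.resolveHash ? service.resolveHash(hash) : hash;
}
```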
…atus
Show contextual message: "will be routed through qBittorrent" when qBit is enabled, "consider using qBittorrent" when it's not. Use the QBITTORRENT_SERVICE constant instead of a hardcoded string.
Normalize typographic apostrophes (U+2018, U+2019, U+2032, U+0060) to straight ASCII in cleanTitle when keepApostrophes is enabled, fixing Prowlarr searches for titles like "Madea's". Add .catch() handlers to unawaited playbackLinkCache.set() calls to prevent unhandled promise rejections.
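A small sketch of the apostrophe normalisation described here, assuming the listed code points; the helper name is illustrative.

```typescript
// Sketch: map typographic apostrophes to ASCII before stripping special
// characters, so keepApostrophes keeps "Madea's" intact in indexer queries.
function normalizeApostrophes(title: string): string {
  // U+2018 left quote, U+2019 right quote, U+2032 prime, U+0060 backtick
  return title.replace(/[\u2018\u2019\u2032\u0060]/g, "'");
}

// normalizeApostrophes('Madea’s Destination Wedding')
// -> "Madea's Destination Wedding"
```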
Wrap per-torrent resolveHash calls in debrid.ts so a single cache failure doesn't abort the entire service's processing. Add try/catch in _resolveHash so a cache backend error returns the original hash instead of throwing.
Expose BUILTIN_DEBRID_EXCLUDE_PRIVATE_TRACKERS via the /status endpoint so the frontend only shows the private tracker warning when the server actually has the protection enabled.
…g-while-downloading
When streaming a partially-downloaded torrent, the selected file may not have a link yet even though other files do. Return undefined to show the downloading page instead of throwing an error.
Force-pushed from a8e4895 to 060e2c8
This PR adds support for using qBittorrent as a service backend through StremThru, with a focus on making private trackers work properly in the Stremio ecosystem.
Why
Debrid services are incompatible with private trackers because of shared IPs, no ratio management, custom clients, fake reporting, etc. By running a seedbox with qBittorrent (either self-hosted or remote), you get the infrastructure to handle these torrents. This bridges AIOStreams to that setup through StremThru's qBittorrent store.
What's included
qBittorrent as a service
New service definition with credential fields for the qBit URL, username, password, and a file server URL (since qBit doesn't serve files over HTTP itself). There's also an optional path mapping field for Docker setups where qBit's internal save path differs from the file server's directory layout (/container/path:/host/path). The service is added to the StremThru preset with an info alert explaining the private tracker rationale and how it differs from debrid.
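For illustration, a sketch of how the /container/path:/host/path mapping might be parsed and applied, including the validation the review above suggests (exactly one ':', both sides non-empty); names are assumptions, not the actual preset code.

```typescript
// Illustrative sketch: parse the optional "from:to" path mapping and apply
// it when translating qBittorrent save paths to file-server paths.
function parsePathMapping(mapping: string): { from: string; to: string } {
  const idx = mapping.indexOf(':');
  const hasSecondColon = mapping.indexOf(':', idx + 1) !== -1;
  if (idx <= 0 || idx === mapping.length - 1 || hasSecondColon) {
    throw new Error(
      "Path mapping must be exactly 'from:to' with both sides non-empty"
    );
  }
  return { from: mapping.slice(0, idx), to: mapping.slice(idx + 1) };
}

function remapPath(savePath: string, mapping?: string): string {
  if (!mapping) return savePath;
  const { from, to } = parsePathMapping(mapping);
  // Only rewrite paths that actually live under the mapped prefix.
  return savePath.startsWith(from) ? to + savePath.slice(from.length) : savePath;
}
```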
Private tracker handling
Torrents from private trackers are automatically excluded from debrid services by a new env var (BUILTIN_DEBRID_EXCLUDE_PRIVATE_TRACKERS, default true) to prevent account bans. qBittorrent is exempt since it runs on user-controlled infrastructure. From what I can gather, debrid use is forbidden or a grey area on all the PTs I looked at, so I think the sensible choice is to disable this kind of usage by default and make it opt-in for the brave.
On the Prowlarr side, the private flag is determined from indexer privacy metadata, and Newznab categories are passed through to filter indexers by content type. During browse, .torrent files are not downloaded for private trackers; instead we generate deterministic placeholder SHA-1 hashes from the download URL to avoid unnecessary snatches. The actual .torrent is only fetched at resolve time when the user selects a stream.
Placeholder hash resolution
When qBit adds a torrent via download URL, StremThru returns the real info hash. We cache the placeholder-to-real hash mapping persistently (SQL/Redis) so that future cache checks resolve correctly. The mapping is permanent since a download URL always corresponds to the same info hash, while actual availability is checked live via checkMagnets on each browse. Hash resolution is exposed through the DebridService interface so the generic debrid pipeline doesn't need to know about StremThru internals. Stale cache entries are invalidated after resolve.
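A hedged sketch of the mapping flow described above: persist placeholder→real once the store returns the real hash, and fall back to the original hash if the cache backend errors. The store interface and names are assumptions, not the actual StremThru integration.

```typescript
// Sketch only: a persistent key/value store holding placeholder -> real
// info hash mappings, keyed per service.
interface HashMappingStore {
  get(key: string): Promise<string | undefined>;
  set(key: string, value: string, ttlSeconds: number): Promise<void>;
}

const ONE_YEAR_SECONDS = 365 * 24 * 60 * 60;

async function recordRealHash(
  store: HashMappingStore,
  serviceName: string,
  placeholderHash: string,
  realHash: string
): Promise<void> {
  // Only write when the store actually returned a different, real hash.
  if (realHash && realHash !== placeholderHash) {
    await store.set(`${serviceName}:${placeholderHash}`, realHash, ONE_YEAR_SECONDS);
  }
}

async function resolvePlaceholder(
  store: HashMappingStore,
  serviceName: string,
  hash: string
): Promise<string> {
  try {
    return (await store.get(`${serviceName}:${hash}`)) ?? hash;
  } catch {
    // A cache backend error should never break resolution; fall back.
    return hash;
  }
}
```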
Streaming while downloading
When StremThru reports that a torrent has file links available before the download is complete, we now return those links for playback immediately instead of waiting for the full download. This enables streaming while the seedbox is still downloading. The pacing and progress logic lives in StremThru's proxy layer.
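A small sketch of the guard described here and in the review: only stream early if the selected file already has a link, otherwise fall back to the downloading state. Field names are illustrative.

```typescript
// Sketch: pick a playback link only when the *selected* file has one; other
// files having links is not enough to start playback.
interface StoreFile {
  name: string;
  size: number;
  link?: string;
}

function pickPlaybackLink(
  files: StoreFile[],
  selectedFileName: string
): string | undefined {
  const selected = files.find((f) => f.name === selectedFileName);
  // Returning undefined lets the caller show the "downloading" page
  // instead of throwing on a missing link.
  return selected?.link;
}
```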
Stream expression language
The service() filter now includes qbittorrent, and there's a new privateTracker() filter that lets users write rules to differentiate private vs public tracker streams. Debrid eligibility is now data-driven per service (from the constants definition) rather than hardcoded, which also powers a contextual UI warning about private tracker protection.
Prowlarr search fix
Apostrophes in search queries are now preserved, so searches like "Madea's Destination Wedding" keep proper punctuation instead of stripping it. This was breaking results on some indexers. Bundled here because it came up while testing Prowlarr with private trackers.
Dependencies
This relies on StremThru PR #461 which adds qBittorrent as a store backend. The core functionality (adding torrents, checking cache, serving files) works with the base store integration. The streaming-while-downloading feature depends on StremThru's paced proxy support, which is still under discussion with the maintainer. If that doesn't land, playback would just wait for the full download to complete before starting.
This code was written using AI-assisted development tools, with human direction. I'll be doing a full manual review before marking this as ready.