## Walkthrough

Adds a new Connector Workbench: CLI command, in-process coordinator, HTTP server + optional React UI, engine, in-memory storage, debug/transport/replay tools, introspection, schema inference, snapshots/baselines, task tracker, test generator, and related frontend API and assets.
## Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant CLI as CLI (workbench)
    participant WB as Workbench
    participant Server as HTTP Server
    participant Engine as Engine
    participant Plugin as Connector Plugin
    participant Storage as MemoryStorage
    participant Debug as DebugStore
    CLI->>WB: Start(listen, enableUI)
    WB->>Server: Start HTTP server and register routes
    User->>Server: POST /connectors/{id}/install
    Server->>WB: InstallConnector(instanceID)
    WB->>Engine: Install and RunOneCycle
    Engine->>Plugin: FetchNext (paginated)
    Plugin-->>Engine: Page of items
    Engine->>Storage: Store fetched items
    Engine->>Debug: LogPluginCall / LogPluginResult
    Server-->>User: 200 OK (install + initial fetch)
```
```mermaid
sequenceDiagram
    autonumber
    participant UI as Browser UI
    participant Server as HTTP Server
    participant Snap as SnapshotManager
    participant TG as TestGenerator
    participant FS as Filesystem
    UI->>Server: POST /connectors/{id}/tests/generate
    Server->>Snap: List snapshots grouped by operation
    Snap-->>Server: grouped snapshots
    Server->>TG: Generate(snapshots)
    TG->>FS: writeGeneratedFiles(outputDir)
    FS-->>Server: files written
    Server-->>UI: GenerateResult (file info)
```
## Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120 minutes
## Pre-merge checks

- ❌ Failed checks: 2 (1 warning, 1 inconclusive)
- ✅ Passed checks: 1
Actionable comments posted: 12
Note: Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.
🤖 Fix all issues with AI agents
In `@tools/workbench/baseline.go`:
- Around line 229-247: The comparison in BaselineManager.compareAccount
currently compares pointer addresses for DefaultAsset (base.DefaultAsset !=
curr.DefaultAsset), which yields false positives; change this to a nil-aware
value comparison similar to the Name logic: treat nil vs non-nil as different,
and if both non-nil compare *base.DefaultAsset != *curr.DefaultAsset, then
append a Change with Field "default_asset" and proper OldValue/NewValue
(dereferenced values or nil) so that identical string contents stored in
different allocations are not reported as changed.
- Around line 292-309: The comparePayment function dereferences Amount
(*big.Int) without nil checks which can panic; update comparePayment to first
handle nils (e.g., if base.Amount==nil && curr.Amount==nil -> no change; if one
is nil -> append Change with OldValue/NewValue as "<nil>" or "0" per project
convention; otherwise call Amount.Cmp) and apply the same nil-safe logic in
compareBalance (look for compareBalance and its Amount/Available/Locked *big.Int
comparisons) so no Cmp is called on a nil pointer; reference the comparePayment
and compareBalance functions and the PSPPayment.Amount (and any balance
Amount/Available/Locked fields) when making the change.
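Following up on the two `baseline.go` items above, a minimal sketch of the nil-aware comparisons they describe; the helper names `stringPtrDiffers` and `bigIntDiffers` are hypothetical, not part of the PR:

```go
package workbench

import "math/big"

// stringPtrDiffers treats nil vs non-nil as a difference and otherwise
// compares the pointed-to values, never the pointer addresses.
func stringPtrDiffers(a, b *string) bool {
	if (a == nil) != (b == nil) {
		return true
	}
	return a != nil && *a != *b
}

// bigIntDiffers compares two *big.Int amounts without ever calling
// Cmp on a nil receiver or argument.
func bigIntDiffers(a, b *big.Int) bool {
	if (a == nil) != (b == nil) {
		return true
	}
	return a != nil && a.Cmp(b) != 0
}
```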
In `@tools/workbench/debug.go`:
- Around line 88-92: The data race is caused by nextID() incrementing
DebugStore.idCounter without synchronization and being called before acquiring
the mutex in addEntry, LogPluginCall, and LogHTTPRequest; fix by either (A)
moving the nextID() invocation inside the DebugStore mutex-protected section in
addEntry, LogPluginCall, and LogHTTPRequest so idCounter++ happens while the
lock is held, or (B) convert idCounter to an atomic.Int64 and update nextID() to
use atomic.AddInt64(&d.idCounter, 1) (or its Go atomic.Int64 method) so
increments are safe concurrently; choose one approach and update references to
idCounter and nextID accordingly in the DebugStore type and the three callers.
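Option (B) above, sketched with `sync/atomic`; the surrounding fields are stubs standing in for the real `DebugStore`:

```go
package workbench

import "sync/atomic"

type DebugStore struct {
	idCounter atomic.Int64 // was a plain int64 incremented without synchronization
	// ... mutex, entries, etc.
}

// nextID is now safe to call from any goroutine, with or without
// the store's mutex held.
func (d *DebugStore) nextID() int64 {
	return d.idCounter.Add(1)
}
```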
In `@tools/workbench/introspect.go`:
- Around line 252-260: The HasPrefix check on fullPath is unsafe because sibling
directories can share prefixes; replace it with a robust check using
filepath.Rel (or ensure both paths have a trailing separator) to verify fullPath
is inside i.basePath: compute relToBase := filepath.Rel(i.basePath, fullPath)
and return an error if Rel returns an error or relToBase starts with ".." (or
equals ".."), referencing the variables/methods fullPath, i.basePath, relPath,
filepath.Join, filepath.Abs and strings.HasPrefix (remove or replace the current
strings.HasPrefix usage with the filepath.Rel based check).
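A sketch of the `filepath.Rel`-based containment check described above; it assumes both paths have already been made absolute with `filepath.Abs`, and the function name is illustrative:

```go
package workbench

import (
	"fmt"
	"path/filepath"
	"strings"
)

// ensureInside rejects any fullPath that escapes basePath, including
// sibling directories that merely share a string prefix.
func ensureInside(basePath, fullPath string) error {
	rel, err := filepath.Rel(basePath, fullPath)
	if err != nil {
		return err
	}
	if rel == ".." || strings.HasPrefix(rel, ".."+string(filepath.Separator)) {
		return fmt.Errorf("path %q is outside base %q", fullPath, basePath)
	}
	return nil
}
```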
In `@tools/workbench/server.go`:
- Around line 825-835: In handleHTTPCaptureStatus, transport may be nil so
directly accessing transport.MaxBodySize causes a nil-pointer panic; change the
response to compute maxBodySize := 0 (or another safe default) and if transport
!= nil set maxBodySize = transport.MaxBodySize, then return that variable in
s.jsonResponse instead of accessing transport.MaxBodySize unconditionally; keep
the existing enabled logic using transport.IsEnabled() only when transport !=
nil.
- Around line 177-215: The SPA currently only serves /ui/assets/* and /ui/ so
client-side routes like /ui/connectors/123 404; update the router registration
when s.enableUI is true to add a catch-all for non-asset /ui/* paths that serves
the embedded index.html (use the same uiFS/uiContent and fs.ReadFile logic used
for /ui/), ensuring it does not intercept /ui/assets/* (use a more specific
assets route first or check the path prefix). Modify the code around the
existing r.Handle("/ui/assets/*", ...), r.Get("/ui", ...), and r.Get("/ui/",
...) handlers to add a new handler (e.g., r.Get for "/ui/{rest:.*}" or
r.PathPrefix("/ui/") with a guard) that returns index.html and sets Content-Type
to text/html; leave s.handleUIFallback unchanged for root redirect logic.
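For the first `server.go` item above (the nil transport), the guard could look roughly like this; `DebugTransport` is stubbed here, and the real field set may differ:

```go
package workbench

type DebugTransport struct {
	MaxBodySize int
	enabled     bool
}

func (t *DebugTransport) IsEnabled() bool { return t.enabled }

// captureStatus returns safe defaults when no transport is configured,
// instead of dereferencing a nil pointer.
func captureStatus(transport *DebugTransport) map[string]any {
	enabled := false
	maxBodySize := 0
	if transport != nil {
		enabled = transport.IsEnabled()
		maxBodySize = transport.MaxBodySize
	}
	return map[string]any{"enabled": enabled, "max_body_size": maxBodySize}
}
```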
In `@tools/workbench/testgen.go`:
- Around line 337-351: The extractURLPattern function returns the path including
query strings, causing mismatches with r.URL.Path; update extractURLPattern to
strip off any query string or fragment when returning the path: after obtaining
the path portion (the string returned from the first '/' onward), look for '?'
or '#' and truncate the path at the earliest occurrence so the returned value is
just the path (e.g., "/v1/charges"); keep the rest of the existing logic and
reference extractURLPattern to locate where to apply this truncation so
matchURLPattern comparisons against r.URL.Path succeed.
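The truncation this prompt describes is a few lines; `strings.IndexAny` finds the earlier of `?` or `#` (the function name here is illustrative):

```go
package workbench

import "strings"

// trimPathOnly cuts a captured URL at the first '?' or '#', so
// "/v1/charges?limit=10" becomes "/v1/charges" and matches r.URL.Path.
func trimPathOnly(path string) string {
	if i := strings.IndexAny(path, "?#"); i >= 0 {
		return path[:i]
	}
	return path
}
```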
In `@tools/workbench/transport.go`:
- Around line 87-96: The code currently uses io.ReadAll on req.Body and
resp.Body (captured into entry.RequestBody / entry.ResponseBody) which can OOM
for large payloads; change both reads to use io.LimitReader with a limit of
t.MaxBodySize+1 (or similar) so you only buffer at most the configured cap, then
read that limited reader, close and restore the original bodies as before, and
pass the truncated bytes to t.truncateBody/entry assignment; update the
request-side block that reads req.Body and the response-side block that reads
resp.Body to use io.LimitReader and ensure you still replace req.Body/resp.Body
with io.NopCloser of the preserved bytes stream so the rest of the pipeline can
consume the body.
- Around line 113-134: The code currently truncates the HTTP response into
entry.ResponseBody via t.truncateBody(bodyBytes) and then passes
[]byte(entry.ResponseBody) to t.Schemas.InferFromJSON, which breaks JSON for
responses > MaxBodySize; fix by hoisting bodyBytes out of the resp.Body
nil-check so the original full bodyBytes is still in scope, keep restoring
resp.Body with io.NopCloser(bytes.NewReader(bodyBytes)) and setting
entry.ResponseBody = t.truncateBody(bodyBytes) for storage, but call
t.Schemas.InferFromJSON with the original bodyBytes (not the truncated
entry.ResponseBody); use existing symbols resp.Body, bodyBytes,
entry.ResponseBody, t.truncateBody, t.Schemas.InferFromJSON and
extractOperationFromURL to locate and implement the change.
In `@tools/workbench/ui/src/api.ts`:
- Around line 594-632: globalApi is missing replay-related methods that
correspond to the exported types (ReplayRequest, ReplayResponse,
ResponseComparison, HeaderDiff) and backend endpoints; add methods on the
globalApi client to call the backend routes: createReplay (POST /replay taking
ReplayRequest -> ReplayResponse), listReplayHistory (GET /replay/history ->
ReplayResponse[]), getReplayById (GET /replay/{id} -> ReplayResponse),
createReplayFromCapture (POST /replay/from-capture/{id} -> ReplayResponse),
compareReplay (GET /replay/compare/{id} -> ResponseComparison), dryRunReplay
(POST /replay/dry-run taking ReplayRequest -> ReplayResponse or comparison), and
getReplayCurl (GET /replay/curl/{id} -> string); implement each method using the
same request/response patterns as other globalApi methods (proper HTTP verb,
path, JSON body/response, and types), and export them from the module so callers
can use the exported interfaces.
In `@tools/workbench/ui/src/main.tsx`:
- Around line 7-13: The BrowserRouter in main.tsx is missing a basename matching
the server and Vite build (served under /ui/), causing client-side routes to be
resolved against the full path; update the BrowserRouter element (where
BrowserRouter is used to wrap <App />) to include basename="/ui" so React Router
treats /ui as the app root and routes like /some-route are resolved correctly.
In `@tools/workbench/workbench.go`:
- Around line 244-246: CreateConnector is overwriting w.transport.Schemas for
each connector so only the last connector receives inferred schemas; instead
register per-connector SchemaManagers into a dispatching layer (e.g., build a
Composite/Dispatching SchemaManager and register each connector's schemas there)
rather than assigning w.transport.Schemas = schemas. Modify CreateConnector to
add the new connector's SchemaManager into a map keyed by connector ID/host/path
and replace the single transport.Schemas with a dispatching SchemaManager that
routes lookups per-request (by connectorID/host/path) so all connectors receive
auto-inference instead of being overwritten.
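One possible shape for that dispatching layer, sketched against an assumed interface. The PR's `SchemaManager` is a concrete type, and this shows the simplest broadcast variant rather than per-host routing, so treat it as a starting point:

```go
package workbench

import "sync"

type schemaInferrer interface {
	InferFromJSON(operation, path, method string, body []byte)
}

// CompositeSchemas fans captured responses out to every registered
// connector's manager, so creating a new connector no longer
// overwrites the previous one.
type CompositeSchemas struct {
	mu       sync.RWMutex
	managers map[string]schemaInferrer // keyed by connector ID
}

func (c *CompositeSchemas) Register(connectorID string, m schemaInferrer) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.managers == nil {
		c.managers = make(map[string]schemaInferrer)
	}
	c.managers[connectorID] = m
}

func (c *CompositeSchemas) InferFromJSON(operation, path, method string, body []byte) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	for _, m := range c.managers {
		m.InferFromJSON(operation, path, method, body)
	}
}
```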
🟡 Minor comments (14)
**tools/workbench/tasks.go** (299-341) — ⚠️ Potential issue | 🟡 Minor

`AddChildTask` only searches root-level nodes for the parent — nested parents won't be found. The parent lookup at lines 309-314 iterates only `t.rootTree`. If the parent task is itself a child (nested), it won't match and the method returns `nil`. This differs from `findOrCreateNode`, which searches recursively. Consider reusing the recursive search pattern or extracting a shared `findNodeByType` helper.

Proposed fix: recursive parent search

```diff
-	// Find parent
-	var parent *TaskNode
-	for _, node := range t.rootTree {
-		if node.Type == parentTypStr {
-			parent = node
-			break
-		}
-	}
+	// Find parent (recursive)
+	var findParent func([]*TaskNode) *TaskNode
+	findParent = func(nodes []*TaskNode) *TaskNode {
+		for _, node := range nodes {
+			if node.Type == parentTypStr {
+				return node
+			}
+			if found := findParent(node.Children); found != nil {
+				return found
+			}
+		}
+		return nil
+	}
+	parent := findParent(t.rootTree)
```

**tools/workbench/tasks.go** (86-93) — ⚠️ Potential issue | 🟡 Minor

`stopChan` is never closed — `WaitForStep` can block indefinitely. `stopChan` is initialized in `NewTaskTracker` but no method in this file ever closes it. If step mode is enabled and `WaitForStep` is blocking, there is no way to unblock it for graceful shutdown, potentially leaking the goroutine. Consider adding a `Stop()` method that closes `stopChan` (guarded by `sync.Once`).

Proposed addition — a field and method:

```diff
 type TaskTracker struct {
 	mu sync.RWMutex
+	stopOnce sync.Once
 	// ...
 }
+
+// Stop signals all waiting goroutines to exit.
+func (t *TaskTracker) Stop() {
+	t.stopOnce.Do(func() { close(t.stopChan) })
+}
```

**tools/workbench/ui/vite.config.ts** (7-10) — ⚠️ Potential issue | 🟡 Minor

Change `emptyDirOnStart` to `emptyOutDir` — the standard Vite build option for clearing the output directory before building. The Vite 5.0.0 (your current version) build option is `emptyOutDir`, not `emptyDirOnStart`. The latter is not recognized by Vite and will be silently ignored.

**tools/workbench/testgen.go** (232-318) — ⚠️ Potential issue | 🟡 Minor

Template does not escape Go string-breaking characters in snapshot data. If a snapshot's `Name`, `URLPattern`, or `Description` contains backticks, double quotes, or newlines, the generated Go source will have syntax errors. The `format.Source` fallback (lines 152-160) catches this but produces unusable output. Consider sanitizing template inputs or using `%q`-style quoting in the template for string literals.

**tools/workbench/README.md** (17-23) — ⚠️ Potential issue | 🟡 Minor

Add language identifiers to fenced code blocks to satisfy markdownlint. Several code blocks containing ASCII diagrams lack a language specifier (flagged by markdownlint MD040). Use `text` for these blocks. Applies to lines 17, 27, 50, and 161 as well.

````diff
-```
+```text
 payments binary
````

**tools/workbench/engine.go** (717-728) — ⚠️ Potential issue | 🟡 Minor

Data race: `state.HasMore` read outside the mutex. `state.HasMore` is read at line 727 after the lock is released at line 725, but another goroutine could concurrently modify it (lines 750–755 acquire the lock to write). This is a data race. Consider reading `HasMore` while the lock is held.

Suggested fix

```diff
 e.mu.Lock()
 state := e.accountsFetchState
 if state == nil {
 	state = &fetchState{HasMore: true}
 	e.accountsFetchState = state
 }
 currentState := state.State
+hasMore := state.HasMore
 e.mu.Unlock()

-if !state.HasMore {
+if !hasMore {
 	return &models.FetchNextAccountsResponse{HasMore: false}, nil
 }
```

The same pattern occurs in `FetchPaymentsOnePage` (line 780), `FetchBalancesOnePage` (line 833), and `FetchExternalAccountsOnePage` (line 886).

**tools/workbench/engine.go** (285-296) — ⚠️ Potential issue | 🟡 Minor

`resp.HasMore` accessed before error check — fragile if plugin returns pointer type. Lines 285–291 read `resp.HasMore` to pass to `CompleteTask`, but the `err != nil` check only comes at line 293. If the plugin ever returns a nil/invalid response alongside an error, this will pass an undefined value (or panic for pointer return types). The same pattern is repeated in every `executeFetch*Task` method.

Suggested fix (apply to all fetch task methods)

```diff
 if e.tasks != nil && exec != nil {
 	itemCount := 0
+	hasMore := false
 	if err == nil {
 		itemCount = len(resp.Accounts)
+		hasMore = resp.HasMore
 	}
-	e.tasks.CompleteTask(exec, itemCount, resp.HasMore, err)
+	e.tasks.CompleteTask(exec, itemCount, hasMore, err)
 }
```

**tools/workbench/schema.go** (248-256) — ⚠️ Potential issue | 🟡 Minor

`json.Marshal`/`json.Unmarshal` errors silently ignored in `SaveBaseline`. Line 249 discards the marshal error and line 251 discards the unmarshal error. If either fails, the baseline will contain zero-valued fields. Same issue in `SaveAllBaselines` (lines 265–267).

Suggested fix

```diff
 // Deep copy the schema
-data, _ := json.Marshal(schema)
+data, err := json.Marshal(schema)
+if err != nil {
+	return fmt.Errorf("failed to marshal schema: %w", err)
+}
 var baseline InferredSchema
-json.Unmarshal(data, &baseline)
+if err := json.Unmarshal(data, &baseline); err != nil {
+	return fmt.Errorf("failed to unmarshal schema copy: %w", err)
+}
 baseline.ID = fmt.Sprintf("baseline-%s-%d", operation, time.Now().UnixNano())
```

**tools/workbench/replay.go** (345-365) — ⚠️ Potential issue | 🟡 Minor

`DryRun` doesn't apply modifications, unlike `ReplayFromCapture`. `DryRun` resolves the original capture when `OriginalID` is set and `Method` is empty, but doesn't merge any user-supplied field overrides onto the resolved request. By contrast, `ReplayFromCapture` applies a `modifications` struct. If the intent is to preview what `ReplayFromCapture` would send, this divergence is a functional gap — the preview won't match the actual replay when modifications are provided.

**tools/workbench/server.go** (1725-1742) — ⚠️ Potential issue | 🟡 Minor

`Content-Disposition` filename in `handleExportBaseline` uses unsanitized `id`. Line 1740 injects `id` (from URL param) directly into the `Content-Disposition` header. If `id` contains characters like `"` or newlines, it could break the header or enable header injection. Consider sanitizing or quoting the value.

**tools/workbench/snapshots.go** (327-342) — ⚠️ Potential issue | 🟡 Minor

`sanitizeFilename` returns an empty string for empty or all-unsafe input. If `name` is `""` or consists entirely of replaced characters (e.g. `"///"`), the result is `""` or `"___"` respectively — the empty case produces a file named `.json`, which is a hidden file on Unix and likely unintended. Consider adding a fallback:

Suggested fix

```diff
 func sanitizeFilename(name string) string {
 	// Replace unsafe characters
 	replacer := strings.NewReplacer(
 		"/", "_", "\\", "_", ":", "_", "*", "_", "?", "_",
 		"\"", "_", "<", "_", ">", "_", "|", "_", " ", "_",
 	)
-	return replacer.Replace(strings.ToLower(name))
+	result := replacer.Replace(strings.ToLower(name))
+	if result == "" {
+		return "unnamed"
+	}
+	return result
 }
```

**tools/workbench/snapshots.go** (232-260) — ⚠️ Potential issue | 🟡 Minor

Filename collisions in `ExportToDir` will silently overwrite snapshots. If two snapshots have names that sanitize to the same string (e.g. `"Fetch Accounts"` and `"fetch/accounts"`), the second write overwrites the first without warning, causing silent data loss. Consider appending a disambiguator (e.g. a short ID suffix) to the filename.

Suggested fix

```diff
+seen := make(map[string]int)
 for _, s := range snapshots {
 	filename := sanitizeFilename(s.Name) + ".json"
+	if count, ok := seen[filename]; ok {
+		filename = sanitizeFilename(s.Name) + fmt.Sprintf("_%d", count) + ".json"
+	}
+	seen[sanitizeFilename(s.Name)+".json"]++
 	path := filepath.Join(dir, filename)
```

**tools/workbench/replay.go** (296-329) — ⚠️ Potential issue | 🟡 Minor

Header comparison uses empty-string check instead of map-key lookup — misclassifies headers with empty values. `origVal == ""` (line 309) doesn't distinguish between "key absent from map" and "key present with empty value". A header that legitimately has an empty value would be treated as missing. Use the comma-ok idiom to check map membership:

Suggested fix

```diff
 for key := range allKeys {
-	origVal := original.ResponseHeaders[key]
-	replayVal := replay.Headers[key]
+	origVal, origOk := original.ResponseHeaders[key]
+	replayVal, replayOk := replay.Headers[key]

-	if origVal == "" && replayVal != "" {
+	if !origOk && replayOk {
 		comparison.HeaderDiffs = append(comparison.HeaderDiffs, HeaderDiff{
 			Key:         key,
 			ReplayValue: replayVal,
 			Type:        "added",
 		})
-	} else if origVal != "" && replayVal == "" {
+	} else if origOk && !replayOk {
 		comparison.HeaderDiffs = append(comparison.HeaderDiffs, HeaderDiff{
 			Key:           key,
 			OriginalValue: origVal,
 			Type:          "removed",
 		})
 	} else if origVal != replayVal {
```

**tools/workbench/server.go** (50-57) — ⚠️ Potential issue | 🟡 Minor

`AllowedOrigins: ["*"]` combined with `AllowCredentials: true` violates the CORS specification and won't work. Per the Fetch Standard, `Access-Control-Allow-Origin: *` is only valid for non-credentialed requests. For credentialed requests, the server must return an explicit origin. Browsers will reject the response if both `*` and credentials are present.

The go-chi/cors middleware does not automatically handle this combination — it removed the "reflect origin" workaround due to security concerns. This configuration will not function as intended.

Use one of these approaches instead, or drop `AllowCredentials: true` entirely if credentials aren't needed:

- Specify explicit origins: `AllowedOrigins: []string{"https://app.example.com"}`
- Use origin patterns: `AllowedOrigins: []string{"https://*"}`
- Use `AllowOriginFunc` to conditionally allow origins and retain `AllowCredentials: true`
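A sketch of the third option with go-chi/cors; the origin allow-list here is purely illustrative and would come from configuration in practice:

```go
package workbench

import (
	"net/http"

	"github.com/go-chi/chi/v5"
	"github.com/go-chi/cors"
)

func newRouter() chi.Router {
	r := chi.NewRouter()
	r.Use(cors.Handler(cors.Options{
		// Vet each origin explicitly so AllowCredentials stays valid.
		AllowOriginFunc: func(r *http.Request, origin string) bool {
			return origin == "http://localhost:5173" || origin == "http://localhost:8080"
		},
		AllowedMethods:   []string{"GET", "POST", "PUT", "DELETE", "OPTIONS"},
		AllowCredentials: true,
	}))
	return r
}
```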
🧹 Nitpick comments (27)
**tools/workbench/ui/.gitignore** (1-1)

Consider adding `dist/` to `.gitignore` unless build output is intentionally committed. The Vite config sets `outDir: 'dist'`, which generates build artifacts. If `dist/` is embedded for serving (e.g., via `go:embed`), this is fine as-is. Otherwise, it should be ignored.

```bash
#!/bin/bash
# Check if dist/ is tracked or if there's a go:embed directive referencing it
rg -rn 'go:embed' --type=go -g '*workbench*'
fd -t d 'dist' --base-directory tools/workbench/ui
```

**tools/workbench/debug.go** (13-21)

Unused constant: `DebugEntryTypeResponse` is defined but never referenced. The `"response"` type is not used in any logging method, and the frontend TypeScript `DebugEntry` type union also omits it. Consider removing it to avoid confusion.

**tools/workbench/tasks.go** (143-149)

`GetTaskTree` and `GetCurrentTask` return mutable pointers under read-lock. `GetTaskTree` (line 148) returns the internal `[]*TaskNode` slice and `GetCurrentTask` (line 173) returns the internal `*TaskNode` pointer. Callers can mutate node fields without holding the lock, creating a race with methods like `CompleteTask` that modify these same nodes. For a dev tool this is low-risk, but worth noting if the workbench evolves to serve concurrent UI requests.

**tools/workbench/tasks.go** (208-251)

`CompleteTask` updates a `TaskExecution` passed by pointer from outside the lock boundary. `StartTask` returns `&exec` (line 205) which the caller later passes to `CompleteTask`. Between these calls, the `exec` value lives outside any lock. `CompleteTask` then mutates it (lines 214-217) and reads `t.currentTask` (line 240). This is safe only if the same goroutine calls both in sequence — document or enforce this contract.

**tools/workbench/ui/src/index.css** (453-456)

Consider preserving a visible focus indicator for keyboard navigation. `outline: none` removes the default focus ring. The border-color change alone may be hard to spot. A subtle `box-shadow` or `outline` with an offset would improve keyboard usability at minimal effort.

```diff
 input:focus, select:focus, textarea:focus {
-  outline: none;
+  outline: 2px solid var(--fm-emerald-500);
+  outline-offset: -1px;
   border-color: var(--fm-emerald-500);
 }
```

**tools/workbench/ui/src/index.css** (6-6)

External font import requires internet connectivity. The Google Fonts import will fail in air-gapped or offline development environments. This is acceptable for a dev tool but worth noting — if offline support becomes important, the fonts could be self-hosted.

**tools/workbench/ui/src/api.ts** (3-16)

`Content-Type: application/json` is unconditionally set on all requests, including GETs. GET requests with a `Content-Type` header can cause issues with certain proxies or CORS preflight behaviors. Consider only setting it when a body is present.

Suggested approach

```diff
 async function fetchJSON<T>(url: string, options?: RequestInit): Promise<T> {
+  const headers: Record<string, string> = { ...options?.headers as Record<string, string> };
+  if (options?.body) {
+    headers['Content-Type'] = headers['Content-Type'] ?? 'application/json';
+  }
   const res = await fetch(API_BASE + url, {
     ...options,
-    headers: {
-      'Content-Type': 'application/json',
-      ...options?.headers,
-    },
+    headers,
   });
```

**tools/workbench/ui/src/api.ts** (634-836)

Consider reducing the legacy `api` wrapper boilerplate. ~200 lines repeat the same `if (!selectedConnectorId) throw …; return connectorApi(selectedConnectorId).method(…)` guard. A `Proxy` or a small helper could eliminate this duplication while preserving the same runtime behavior.

Also, `selectedConnectorId` as a mutable module-level `let` is fragile in React — it sits outside the React lifecycle, so components won't re-render when it changes. If the UI grows, consider lifting this into React context or a lightweight store.

Example: Proxy-based approach to eliminate boilerplate

```ts
function withConnectorGuard<T extends Record<string, (...args: any[]) => any>>(
  factory: (id: string) => T
): T {
  return new Proxy({} as T, {
    get(_target, prop: string) {
      return (...args: any[]) => {
        if (!selectedConnectorId) throw new Error('No connector selected');
        return (factory(selectedConnectorId) as any)[prop](...args);
      };
    },
  });
}

export const api = {
  ...globalApi,
  ...withConnectorGuard(connectorApi),
};
```

**cmd/workbench.go** (129-134)

`wb.Stop` uses an unbounded context — process can hang on shutdown. If `Stop` blocks (e.g., the HTTP server doesn't drain), there's no timeout and no way to force exit (a second SIGINT is also consumed silently by the channel). Consider using a deadline and handling a second signal for a hard exit.

Suggested improvement

```diff
 go func() {
 	<-sigChan
 	fmt.Println("\nShutting down...")
 	cancel()
-	wb.Stop(context.Background())
+	stopCtx, stopCancel := context.WithTimeout(context.Background(), 5*time.Second)
+	defer stopCancel()
+	wb.Stop(stopCtx)
+	// Force exit on second signal
+	<-sigChan
+	fmt.Println("\nForce exit")
+	os.Exit(1)
 }()
```

**cmd/workbench.go** (168-188)

Provider list output order is non-deterministic. Map iteration in Go is unordered, so running `--list-providers` twice can produce different output. Sorting the keys improves UX.

Suggested fix

```diff
 func printProviders() error {
 	configs := registry.GetConfigs(true) // include debug connectors

+	providers := make([]string, 0, len(configs))
+	for provider := range configs {
+		providers = append(providers, provider)
+	}
+	sort.Strings(providers)
+
 	fmt.Println("Available connector providers:")
 	fmt.Println()
-	for provider, config := range configs {
+	for _, provider := range providers {
+		config := configs[provider]
 		fmt.Printf("  %s\n", provider)
```

**cmd/workbench.go** (190-192)

Remove empty `init()` function. This is dead code. The comment says it's added to root in `NewRootCommand`, which happens in `cmd/root.go` via `newWorkbench()` — the `init()` serves no purpose.

```diff
-func init() {
-	// This will be added to root in NewRootCommand
-}
```

**tools/workbench/testgen.go** (48-65)

Duplicate snapshot listing and grouping logic across `Generate` and `GenerateFixturesOnly`. Both methods perform the same `List("", nil)` check followed by `GroupByOperation()` and fixture iteration. Consider extracting a shared helper to keep it DRY. Also applies to: 353-371.

**tools/workbench/testgen.go** (258-270)

Generated `setupMockServer` iterates a map — pattern match order is non-deterministic. If multiple handlers share overlapping patterns (e.g., `/v1/accounts` and `/v1/accounts/*`), the first match depends on Go's random map iteration. Since the generated test typically registers one handler per `t.Run`, this is low risk, but the template itself exposes the pitfall if someone registers multiple handlers.

**tools/workbench/engine.go** (924-945)

`CreateTransfer`/`CreatePayout` return `&resp` even when `err != nil`. When the plugin returns an error, the caller receives both a pointer to a potentially zero-value response and the error. Idiomatic Go would return `nil` on error to prevent the caller from accidentally using an invalid response.

Suggested fix (applies to both methods)

```diff
-	return &resp, err
+	if err != nil {
+		return nil, err
+	}
+	return &resp, nil
```

**tools/workbench/engine.go** (106-141)

`Install` holds the write lock across the entire plugin call. `e.mu.Lock()` is held while calling `e.plugin.Install(ctx, ...)`, which may be a blocking network call. This blocks all concurrent reads (`IsRunning`, `GetFetchStatus`, etc.) for the duration. `RunOneCycle` explicitly releases the lock before doing work — consider the same pattern here.

**tools/workbench/workbench.go** (283-304)

`instance.Installed` check at line 290 is not protected by `connMu`. `GetConnector` returns the pointer under `connMu.RLock`, but the subsequent `.Installed` read (line 290) and `.engine.Install()` (line 294) happen outside any lock. Two concurrent `InstallConnector` calls for the same ID could both pass the check and call `engine.Install` twice. The write at line 299 is protected, but the damage is done. For a dev tool this is unlikely but worth noting — a simple `connMu.RLock` around the check or a CAS pattern would close the gap.

**tools/workbench/storage.go** (384-400)

`Export` shares internal map references with the caller — mutations alias. The returned `StorageSnapshot` directly references the internal maps (`s.accounts`, `s.payments`, etc.). Any mutation by the caller (e.g., during JSON marshaling with concurrent writes, or intentional modification) will affect internal storage state. Since this is a dev tool, the risk is low, but a shallow copy of each map would make this safe:

Suggested fix (one map shown, apply to all)

```diff
+	accounts := make(map[string]models.PSPAccount, len(s.accounts))
+	for k, v := range s.accounts {
+		accounts[k] = v
+	}
+
 	return StorageSnapshot{
-		Accounts: s.accounts,
+		Accounts: accounts,
 		// ... same for other maps
```

**tools/workbench/storage.go** (248-255)

`StoreOthers` always appends — can grow unbounded with duplicate data. `s.others[name] = append(s.others[name], others...)` accumulates all items without deduplication. Repeated fetch cycles or retries will keep growing the slice with duplicate entries. Consider whether this should replace rather than append, or deduplicate.

**tools/workbench/baseline.go** (396-408)

Redundant `Sprintf` and custom `join` — use `strings.Join` directly. Line 396 wraps `join(...)` in `fmt.Sprintf("%s", ...)` which is a no-op for strings. The `join` helper at lines 399–408 reimplements `strings.Join` from the standard library (already imported).

Suggested fix

```diff
-	return fmt.Sprintf("%s", join(parts, " | "))
+	return strings.Join(parts, " | ")
```

And remove the `join` function entirely.

**tools/workbench/snapshots.go** (58-89)

`baseDir` field is set but never used. `SetBaseDir` stores a value in `m.baseDir`, but neither `ExportToDir` nor `ImportFromDir` reference it — they both take an explicit `dir` parameter. This is dead state. Either remove `baseDir`/`SetBaseDir`, or have `ExportToDir`/`ImportFromDir` default to `m.baseDir` when the caller passes an empty `dir`.

**tools/workbench/snapshots.go** (262-291)

`ImportFromDir` silently swallows file-read and JSON-unmarshal errors. Lines 277-278 and 282-284 `continue` past errors with no logging or reporting. A user importing a directory with a malformed file gets a silent partial import with no indication that some files were skipped. At minimum, consider collecting and returning the skipped filenames or errors so the caller can surface them.

**tools/workbench/replay.go** (226-237)

`GetReplayByID` returns a pointer into the live history slice — potential stale reference. `&r.history[i]` is returned under `RLock`, but once the lock is released, `addToHistory` can shift or reallocate the backing array, making the pointer point at stale data. For a dev tool the impact is low, but returning a copy would be safer.

Suggested fix

```diff
 func (r *Replayer) GetReplayByID(id string) *ReplayResponse {
 	r.mu.RLock()
 	defer r.mu.RUnlock()
 	for i := range r.history {
 		if r.history[i].ID == id {
-			return &r.history[i]
+			resp := r.history[i] // copy
+			return &resp
 		}
 	}
 	return nil
 }
```

**tools/workbench/server.go** (475-498)

Decode errors from `fetchRequest` are silently swallowed. In all four fetch handlers, `json.NewDecoder(r.Body).Decode(&req)` results are discarded (e.g., line 484). If the client sends malformed JSON, the request proceeds with a zero-value `fromPayload`, which may produce confusing behavior rather than a clear 400 error.

Suggested fix (example for handleFetchAccounts, apply similarly to the others)

```diff
 var req fetchRequest
-if r.Body != nil {
-	json.NewDecoder(r.Body).Decode(&req)
+if r.Body != nil && r.ContentLength > 0 {
+	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
+		s.errorResponse(w, http.StatusBadRequest, "invalid request body: "+err.Error())
+		return
+	}
 }
```

Also applies to: 500-523, 525-548, 550-573.

**tools/workbench/server.go** (267-271)

`json.NewEncoder(w).Encode` error is ignored. If encoding fails (e.g., unencodable types), the client receives a partial/corrupt JSON response with a 200 status (since `WriteHeader` was already called on line 269). Consider logging the error at minimum.

**tools/workbench/server.go** (1367-1428)

`handleExportSnapshots` and `handleImportSnapshots` accept arbitrary filesystem paths from the request body. The `directory` field is passed directly to `ExportToDir`/`ImportFromDir` without any path validation or sandboxing. While this is a local dev tool, an accidental or malicious request could write to or read from unintended locations (e.g., `"/tmp/../../etc"`). Consider restricting the path to a configured base directory or at least resolving and validating the absolute path.

**tools/workbench/server.go** (735-756)

`handleImport` reads the entire request body into memory without a size limit. `io.ReadAll(r.Body)` on line 742 will consume unbounded memory if a large payload is sent. Consider using `io.LimitReader` to cap the body size.

Suggested fix

```diff
-	body, err := io.ReadAll(r.Body)
+	const maxImportSize = 10 << 20 // 10 MB
+	body, err := io.ReadAll(io.LimitReader(r.Body, maxImportSize))
```

**tools/workbench/server.go** (42-223)

`Start()` is a blocking call with no `ReadHeaderTimeout` — susceptible to slowloris. The `http.Server` on lines 217-220 has no read/write/header timeouts configured. For a local dev tool this is low risk, but adding `ReadHeaderTimeout` is a good practice.
```go
func (m *BaselineManager) comparePayment(base, curr models.PSPPayment) []Change {
	var changes []Change

	if base.Amount.Cmp(curr.Amount) != 0 {
		changes = append(changes, Change{Field: "amount", OldValue: base.Amount.String(), NewValue: curr.Amount.String()})
	}
	if base.Asset != curr.Asset {
		changes = append(changes, Change{Field: "asset", OldValue: base.Asset, NewValue: curr.Asset})
	}
	if base.Status != curr.Status {
		changes = append(changes, Change{Field: "status", OldValue: base.Status, NewValue: curr.Status})
	}
	if base.Type != curr.Type {
		changes = append(changes, Change{Field: "type", OldValue: base.Type, NewValue: curr.Type})
	}

	return changes
}
```
Nil *big.Int dereference in comparePayment will panic.
Line 295: base.Amount.Cmp(curr.Amount) panics if either Amount is nil. PSPPayment.Amount is *big.Int, and a payment with a nil amount (e.g., from a PSP that doesn't always provide it) will crash the comparison. The same issue applies in compareBalance at line 362.
Suggested fix
```diff
-	if base.Amount.Cmp(curr.Amount) != 0 {
+	if (base.Amount == nil) != (curr.Amount == nil) ||
+		(base.Amount != nil && curr.Amount != nil && base.Amount.Cmp(curr.Amount) != 0) {
+		oldVal := "<nil>"
+		newVal := "<nil>"
+		if base.Amount != nil {
+			oldVal = base.Amount.String()
+		}
+		if curr.Amount != nil {
+			newVal = curr.Amount.String()
+		}
-		changes = append(changes, Change{Field: "amount", OldValue: base.Amount.String(), NewValue: curr.Amount.String()})
+		changes = append(changes, Change{Field: "amount", OldValue: oldVal, NewValue: newVal})
 	}
```
```go
// Serve embedded UI
if s.enableUI {
	// Try to serve from embedded filesystem
	uiContent, err := fs.Sub(uiFS, "ui/dist")
	if err == nil {
		// Serve static assets
		fileServer := http.FileServer(http.FS(uiContent))

		// Handle /ui/assets/* for static files
		r.Handle("/ui/assets/*", http.StripPrefix("/ui", fileServer))

		// Handle /ui and /ui/ to serve index.html
		r.Get("/ui", func(w http.ResponseWriter, r *http.Request) {
			http.Redirect(w, r, "/ui/", http.StatusMovedPermanently)
		})
		r.Get("/ui/", func(w http.ResponseWriter, req *http.Request) {
			indexHTML, err := fs.ReadFile(uiContent, "index.html")
			if err != nil {
				http.Error(w, "UI not available", http.StatusNotFound)
				return
			}
			w.Header().Set("Content-Type", "text/html")
			w.Write(indexHTML)
		})
	}

	// Root redirects to /ui/ if UI is enabled, otherwise fallback
	r.Get("/", func(w http.ResponseWriter, req *http.Request) {
		if uiContent, err := fs.Sub(uiFS, "ui/dist"); err == nil {
			if _, err := fs.ReadFile(uiContent, "index.html"); err == nil {
				http.Redirect(w, req, "/ui/", http.StatusTemporaryRedirect)
				return
			}
		}
		s.handleUIFallback(w, req)
	})
} else {
	r.Get("/", s.handleUIFallback)
}
```
SPA routing: /ui/* sub-paths (e.g., /ui/connectors/123) will 404.
Only /ui/assets/* and /ui/ are explicitly handled. Any client-side route like /ui/connectors/123 that doesn't match /ui/assets/* will result in a 404, breaking browser refresh on SPA routes. A catch-all that serves index.html for non-asset /ui/* paths is typically needed.
Suggested approach

```diff
 r.Handle("/ui/assets/*", http.StripPrefix("/ui", fileServer))

+// Catch-all for SPA client-side routes
+r.Get("/ui/*", func(w http.ResponseWriter, req *http.Request) {
+	indexHTML, err := fs.ReadFile(uiContent, "index.html")
+	if err != nil {
+		http.Error(w, "UI not available", http.StatusNotFound)
+		return
+	}
+	w.Header().Set("Content-Type", "text/html")
+	w.Write(indexHTML)
+})
+
 r.Get("/ui", func(w http.ResponseWriter, r *http.Request) {
```

📝 Committable suggestion
```go
// Serve embedded UI
if s.enableUI {
	// Try to serve from embedded filesystem
	uiContent, err := fs.Sub(uiFS, "ui/dist")
	if err == nil {
		// Serve static assets
		fileServer := http.FileServer(http.FS(uiContent))
		// Handle /ui/assets/* for static files
		r.Handle("/ui/assets/*", http.StripPrefix("/ui", fileServer))
		// Catch-all for SPA client-side routes
		r.Get("/ui/*", func(w http.ResponseWriter, req *http.Request) {
			indexHTML, err := fs.ReadFile(uiContent, "index.html")
			if err != nil {
				http.Error(w, "UI not available", http.StatusNotFound)
				return
			}
			w.Header().Set("Content-Type", "text/html")
			w.Write(indexHTML)
		})
		// Handle /ui and /ui/ to serve index.html
		r.Get("/ui", func(w http.ResponseWriter, r *http.Request) {
			http.Redirect(w, r, "/ui/", http.StatusMovedPermanently)
		})
		r.Get("/ui/", func(w http.ResponseWriter, req *http.Request) {
			indexHTML, err := fs.ReadFile(uiContent, "index.html")
			if err != nil {
				http.Error(w, "UI not available", http.StatusNotFound)
				return
			}
			w.Header().Set("Content-Type", "text/html")
			w.Write(indexHTML)
		})
	}
	// Root redirects to /ui/ if UI is enabled, otherwise fallback
	r.Get("/", func(w http.ResponseWriter, req *http.Request) {
		if uiContent, err := fs.Sub(uiFS, "ui/dist"); err == nil {
			if _, err := fs.ReadFile(uiContent, "index.html"); err == nil {
				http.Redirect(w, req, "/ui/", http.StatusTemporaryRedirect)
				return
			}
		}
		s.handleUIFallback(w, req)
	})
} else {
	r.Get("/", s.handleUIFallback)
}
```
```go
// Capture request body
if req.Body != nil && req.Body != http.NoBody {
	bodyBytes, err := io.ReadAll(req.Body)
	if err == nil {
		req.Body.Close()
		// Restore the body for the actual request
		req.Body = io.NopCloser(bytes.NewReader(bodyBytes))
		entry.RequestBody = t.truncateBody(bodyBytes)
	}
}
```
Unbounded io.ReadAll on request and response bodies — OOM risk.
Both request body (line 89) and response body (line 115) are fully read into memory with no size limit. The MaxBodySize truncation only applies when storing the string, not when reading. A multi-GB response body will be fully buffered before truncation.
Consider using io.LimitReader to cap reads:
Suggested fix (response body example)

```diff
 if resp.Body != nil {
-	bodyBytes, err := io.ReadAll(resp.Body)
+	bodyBytes, err := io.ReadAll(io.LimitReader(resp.Body, int64(t.MaxBodySize)+1))
 	if err == nil {
+		// Read and discard the remainder so the connection can be reused
+		io.Copy(io.Discard, resp.Body)
 		resp.Body.Close()
 		resp.Body = io.NopCloser(bytes.NewReader(bodyBytes))
 		entry.ResponseBody = t.truncateBody(bodyBytes)
 	}
 }
```

The same approach applies to the request body read at line 89 — though for request bodies this is less critical, response bodies from external APIs can be arbitrarily large.

Also applies to: 113-122.
```go
// Capture response body
if resp.Body != nil {
	bodyBytes, err := io.ReadAll(resp.Body)
	if err == nil {
		resp.Body.Close()
		// Restore the body for the caller
		resp.Body = io.NopCloser(bytes.NewReader(bodyBytes))
		entry.ResponseBody = t.truncateBody(bodyBytes)
	}
}

t.Debug.LogHTTPRequest(entry)

// Auto-infer schema from JSON responses
if t.Schemas != nil && resp.StatusCode >= 200 && resp.StatusCode < 300 {
	contentType := resp.Header.Get("Content-Type")
	if strings.Contains(contentType, "application/json") && entry.ResponseBody != "" {
		// Extract operation from URL path
		operation := extractOperationFromURL(req.URL.Path)
		t.Schemas.InferFromJSON(operation, req.URL.Path, req.Method, []byte(entry.ResponseBody))
	}
}
```
Schema inference receives truncated body — silently fails for responses > 64 KB.
At line 120, the response body is truncated to MaxBodySize for storage. Then at line 132, entry.ResponseBody (the truncated string) is converted back to []byte and passed to InferFromJSON. For any JSON response larger than 64 KB, this will be invalid JSON and schema inference will silently fail (return value is discarded).
Use the original bodyBytes for schema inference instead:
Suggested fix

```diff
 // Auto-infer schema from JSON responses
 if t.Schemas != nil && resp.StatusCode >= 200 && resp.StatusCode < 300 {
 	contentType := resp.Header.Get("Content-Type")
-	if strings.Contains(contentType, "application/json") && entry.ResponseBody != "" {
+	if strings.Contains(contentType, "application/json") && len(bodyBytes) > 0 {
 		// Extract operation from URL path
 		operation := extractOperationFromURL(req.URL.Path)
-		t.Schemas.InferFromJSON(operation, req.URL.Path, req.Method, []byte(entry.ResponseBody))
+		t.Schemas.InferFromJSON(operation, req.URL.Path, req.Method, bodyBytes)
 	}
 }
```
}Note: this requires hoisting bodyBytes into scope outside the if resp.Body != nil block.
```ts
// Replay types
export interface ReplayRequest {
  original_id?: string;
  method: string;
  url: string;
  headers?: Record<string, string>;
  body?: string;
}

export interface ReplayResponse {
  id: string;
  timestamp: string;
  duration: number;
  original_id?: string;
  request: ReplayRequest;
  status_code: number;
  status: string;
  headers?: Record<string, string>;
  body?: string;
  error?: string;
}

export interface ResponseComparison {
  original_id: string;
  replay_id: string;
  status_match: boolean;
  original_status: number;
  replay_status: number;
  body_match: boolean;
  body_diff?: string;
  header_diffs?: HeaderDiff[];
}

export interface HeaderDiff {
  key: string;
  original_value?: string;
  replay_value?: string;
  type: 'added' | 'removed' | 'changed';
}
```
Add missing replay API methods to match the exported types and backend endpoints.
The ReplayRequest, ReplayResponse, ResponseComparison, and HeaderDiff interfaces are exported, but globalApi lacks corresponding replay methods. The backend exposes multiple replay endpoints (/replay, /replay/history, /replay/{id}, /replay/from-capture/{id}, /replay/compare/{id}, /replay/dry-run, /replay/curl/{id}), so the frontend API client should provide methods to call them.
```tsx
ReactDOM.createRoot(document.getElementById('root')!).render(
  <React.StrictMode>
    <BrowserRouter>
      <App />
    </BrowserRouter>
  </React.StrictMode>,
)
```
Add basename to BrowserRouter to match server mount path.
The Go backend serves the UI under /ui/ (line 500 in server.go) and Vite builds with base: '/ui/', but BrowserRouter lacks a matching basename prop. This causes client-side routing to fail—React Router will attempt to match routes against the full path (e.g., /ui/some-route) instead of treating /ui/ as the app root. Add basename="/ui" to fix this mismatch.
Suggested change

```diff
-    <BrowserRouter>
+    <BrowserRouter basename="/ui">
```

📝 Committable suggestion
```tsx
ReactDOM.createRoot(document.getElementById('root')!).render(
  <React.StrictMode>
    <BrowserRouter basename="/ui">
      <App />
    </BrowserRouter>
  </React.StrictMode>,
)
```
```go
// Link schemas to transport for auto-inference
w.transport.Schemas = schemas
```
w.transport.Schemas is overwritten per connector — only the last connector gets auto-inference.
Each call to CreateConnector sets w.transport.Schemas = schemas (line 245), overwriting any previously-linked SchemaManager. In multi-connector mode, only the most recently created connector will have its schemas populated from captured HTTP traffic. This contradicts the multi-connector design.
Consider a dispatch layer (e.g., a composite SchemaManager or per-request routing by host/path) so all connectors benefit from schema inference.
Also, why is schema stored in the transport if it's also stored in the connector instance?
@laouji, good question! The schema is in both places for different purposes:
- `ConnectorInstance.schemas` (line 73) — the storage for that connector's inferred schemas
- `w.transport.Schemas` (line 245) — a pointer telling the `DebugTransport` where to write newly inferred schemas when it captures HTTP responses
The DebugTransport (from transport.go) automatically infers JSON schemas from HTTP responses it intercepts. It needs to know which SchemaManager to call InferFromJSON on. That's why it has a Schemas field.
The architectural flaw is that w.transport is shared across all connectors, but it can only point to one SchemaManager at a time. Each call to CreateConnector overwrites w.transport.Schemas = schemas, so only the most recently created connector receives auto-inferred schemas from HTTP traffic. The earlier connectors lose the benefit.
This ties directly to the bot's earlier comment: in multi-connector mode, you need a dispatching layer (e.g., a composite SchemaManager that routes by connector ID, host, or path) instead of a single pointer, so all active connectors benefit from schema auto-inference simultaneously.
```go
plugin, err := registry.GetPlugin(connectorID, logger, "wise", name, config)
```

## Usage
Let's put it higher? Also, you can specify that you can just run `payments workbench` and install the connector via the UI.
```console
INFO Web UI available at http://127.0.0.1:8080

# 2. In another terminal, install and run
$ curl -X POST localhost:8080/api/install
```
If the workbench command is used with a provider and a valid config, it will automatically call Install. So this command should not be run then.
## Limitations

- **No persistence** - Data is lost on restart (by design)
- **No webhooks** - Webhook endpoints are mocked
I feel it would be nice to overcome this limitation at some point. Some connectors rely a lot on webhooks.
Actionable comments posted: 5
🤖 Fix all issues with AI agents
In `@tools/workbench/debug.go`:
- Around line 104-111: The LogError method on DebugStore calls err.Error()
unconditionally which panics when err is nil; update LogError (and the
DebugEntry construction) to guard against a nil error by checking err == nil and
using an empty string (or a clear "<nil>" placeholder) for the DebugEntry.Error
field instead of calling err.Error(), then call addEntry as before; reference
DebugStore.LogError, DebugEntry, DebugEntryTypeError and addEntry so the
nil-check is added around the Error field population.
In `@tools/workbench/server.go`:
- Around line 1382-1411: The export/import/test handlers currently accept
arbitrary filesystem paths (handleExportSnapshots, handleImportSnapshots,
handleGenerateTests) and pass them directly to conn.snapshots.ExportToDir /
ImportFromDir and test generation (output_dir); restrict and validate the
incoming directory by requiring a configured workspace root on Server (e.g.,
s.workspaceRoot), reject empty or absolute paths, clean the path
(filepath.Clean), join it to the workspace root with filepath.Join, then verify
the resulting path has the workspace root as a prefix (to prevent ../ traversal)
before calling ExportToDir/ImportFromDir or writing tests; if the server has no
workspace root configured, return a 400/403 error and refuse to process the
request.
- Around line 51-57: The CORS config passed to r.Use(cors.Handler(...)) is
invalid because AllowedOrigins: []string{"*"} cannot be used with
AllowCredentials: true; update the cors.Options in the r.Use(cors.Handler(...))
call to either (A) remove AllowCredentials or set it to false, or (B) replace
the wildcard AllowedOrigins with an explicit origin list (e.g., read from
config/env) so Access-Control-Allow-Origin is a specific origin when
AllowCredentials remains true; make this change in the cors.Options struct where
AllowedOrigins and AllowCredentials are defined.
- Around line 2077-2085: The code calls pmt.Amount.String() and
bal.Amount.String() without nil checks which will panic if PSPPayment.Amount or
PSPBalance.Amount is nil; update the transaction and balance handlers to guard
these fields (check pmt.Amount != nil and bal.Amount != nil) and use a safe
fallback string (e.g., "0") when nil, replacing direct pmt.Amount.String() and
bal.Amount.String() usages with a small helper or inline conditional that
returns the fallback if the pointer is nil; ensure the same pattern is applied
for both the transaction construction (using pmt.Amount) and the balance
construction (using bal.Amount).
In `@tools/workbench/workbench.go`:
- Around line 29-31: Remove the unused package-level variable
globalDebugTransport (type *DebugTransport) and its assignment: find the global
declaration "var globalDebugTransport *DebugTransport" and any places that set
globalDebugTransport (the assignment around the initialization point) and either
delete both the declaration and assignment or convert the assignment into a
local variable used where needed (e.g., create a local debugTransport within the
initializer or the function that currently assigns it), ensuring no references
to globalDebugTransport remain and builds/tests pass.
🧹 Nitpick comments (10)
tools/workbench/baseline.go (2)
405-417: Replace custom `join` and redundant `Sprintf` with `strings.Join`.
`fmt.Sprintf("%s", join(parts, " | "))` wraps a string in a no-op format call, and the `join` helper reimplements `strings.Join`.

Suggested simplification

-	return fmt.Sprintf("%s", join(parts, " | "))
+	return strings.Join(parts, " | ")
 }
-
-func join(parts []string, sep string) string {
-	if len(parts) == 0 {
-		return ""
-	}
-	result := parts[0]
-	for i := 1; i < len(parts); i++ {
-		result += sep + parts[i]
-	}
-	return result
-}
92-118: `SaveBaseline` accesses storage under its own write lock — verify no nested locking.
`m.storage.GetAccounts()` etc. are called while `m.mu` is held. If `MemoryStorage` methods also acquire locks, the ordering is fine (no cycle), but consider reading storage data before acquiring `m.mu` to minimize lock hold time, similar to how `CompareWithCurrent` already does it.

tools/workbench/workbench.go (2)
159-168: Double `Stop()` is safe, but the `Start`/`Stop` lifecycle is single-use.
If `Stop` is invoked more than once (e.g., signal handler + deferred cleanup), the second call returns early at line 163 because the first call already set `w.running` to `false`, so `close(w.stopChan)` is never reached twice. However, if `Start` is called again after `Stop`, `w.stopChan` is already closed and `Wait()` will not block. Consider re-creating `stopChan` in `Start` or documenting the single-use lifecycle.
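If a restartable lifecycle is wanted, a minimal sketch could look like this; the field and mutex names are assumptions based on the review, not the PR's actual layout:

package workbench

import (
	"context"
	"errors"
	"sync"
)

// Workbench fields here are illustrative stand-ins.
type Workbench struct {
	mu       sync.Mutex
	running  bool
	stopChan chan struct{}
}

// Start allocates a fresh stopChan per run, making the lifecycle reusable.
func (w *Workbench) Start(ctx context.Context) error {
	w.mu.Lock()
	defer w.mu.Unlock()
	if w.running {
		return errors.New("workbench already running")
	}
	w.stopChan = make(chan struct{})
	w.running = true
	return nil
}

// Stop is idempotent: a second call sees running == false and returns early.
func (w *Workbench) Stop(ctx context.Context) error {
	w.mu.Lock()
	defer w.mu.Unlock()
	if !w.running {
		return nil
	}
	w.running = false
	close(w.stopChan)
	return nil
}

// Wait blocks on the channel owned by the current run.
func (w *Workbench) Wait() {
	w.mu.Lock()
	ch := w.stopChan
	w.mu.Unlock()
	if ch != nil {
		<-ch
	}
}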
106-128: `New()` installs a global HTTP transport as a side effect of construction.
`InstallGlobalTransport` at Line 120 replaces `http.DefaultTransport` immediately upon calling `New()`, even before `Start()`. This affects all HTTP clients in the process. For a dev tool this is likely intentional, but worth noting that creating a `Workbench` that is never started will still leave the global transport replaced. If `New` returns an error later or the workbench is discarded, the original transport is never restored.
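One cheap way to make the swap reversible, as a sketch: have the install helper return a restore hook that `New` can call on error and `Stop` on shutdown. The helper name and signature are illustrative:

package workbench

import "net/http"

// installGlobalTransport swaps http.DefaultTransport for the debug transport
// and hands back a restore func that puts the original back.
func installGlobalTransport(debug http.RoundTripper) (restore func()) {
	original := http.DefaultTransport
	http.DefaultTransport = debug
	return func() { http.DefaultTransport = original }
}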
tools/workbench/introspect.go (3)
268-274: Minor: `string(content)` is computed twice.
Line 272 converts `content` to string for line counting, and the same conversion already happened at Line 271 (`Content: string(content)`). Store it once.

Suggested fix

+	text := string(content)
 	sf := &SourceFile{
 		Path:     relPath,
 		Name:     filepath.Base(relPath),
-		Content:  string(content),
-		Lines:    strings.Count(string(content), "\n") + 1,
+		Content:  text,
+		Lines:    strings.Count(text, "\n") + 1,
 		Language: detectLanguage(relPath),
 	}
376-386: Hardcoded Plugin method list may drift from the actual interface.
If `models.Plugin` gains or removes methods, `findImplementedMethods` won't reflect the change. Consider generating this list from the interface definition or adding a comment/test to keep it in sync.
400-450: Prefer `filepath.WalkDir` over `filepath.Walk`.
`filepath.WalkDir` (Go 1.16+) avoids an extra `os.Lstat` per entry and is the recommended replacement. Same applies to `SearchCode` at Line 464.

tools/workbench/debug.go (1)
220-246: `GetEntries` with `limit <= 0` returns the entire store, not zero entries.
The condition `limit <= 0 || limit > len(d.entries)` at Line 225 clamps to `len(d.entries)` when limit is 0. A caller passing `limit=0` to mean "none" would get everything. If this is intentional, a brief doc comment clarifying the behavior would help.

tools/workbench/server.go (2)
750-771: Unbounded `io.ReadAll(r.Body)` — no request body size limit.
`handleImport` (Line 757) and `handleImportBaseline` (Line 1766) read the entire request body into memory with no size cap. A large payload could exhaust memory. Consider wrapping with `io.LimitReader`.

Example

-	body, err := io.ReadAll(r.Body)
+	body, err := io.ReadAll(io.LimitReader(r.Body, 10<<20)) // 10 MB limit

Also applies to: 1759-1779
490-513: Decode errors from fetch request bodies are silently swallowed.
In `handleFetchAccounts` and siblings, `json.NewDecoder(r.Body).Decode(&req)` errors are ignored (Lines 499, 524, 549, 574). A malformed JSON body silently proceeds with a zero-value `fromPayload`, which may cause confusing downstream behavior. Consider returning a 400 on decode failure.

Also applies to: 515-538, 540-563, 565-588
// LogError adds an error entry.
func (d *DebugStore) LogError(operation string, err error) {
	d.addEntry(DebugEntry{
		Type:      DebugEntryTypeError,
		Operation: operation,
		Error:     err.Error(),
	})
}
LogError will panic if err is nil.
Line 109 calls err.Error() unconditionally. If a caller passes a nil error, this panics.
Suggested guard
func (d *DebugStore) LogError(operation string, err error) {
+ if err == nil {
+ return
+ }
d.addEntry(DebugEntry{
Type: DebugEntryTypeError,
Operation: operation,
Error: err.Error(),
})
}
🤖 Prompt for AI Agents
In `@tools/workbench/debug.go` around lines 104 - 111, The LogError method on
DebugStore calls err.Error() unconditionally which panics when err is nil;
update LogError (and the DebugEntry construction) to guard against a nil error
by checking err == nil and using an empty string (or a clear "<nil>"
placeholder) for the DebugEntry.Error field instead of calling err.Error(), then
call addEntry as before; reference DebugStore.LogError, DebugEntry,
DebugEntryTypeError and addEntry so the nil-check is added around the Error
field population.
r.Use(cors.Handler(cors.Options{
	AllowedOrigins:   []string{"*"},
	AllowedMethods:   []string{"GET", "POST", "PUT", "DELETE", "OPTIONS"},
	AllowedHeaders:   []string{"*"},
	ExposedHeaders:   []string{"Link"},
	AllowCredentials: true,
	MaxAge:           300,
CORS: AllowedOrigins: ["*"] with AllowCredentials: true is invalid per the CORS spec.
Browsers reject responses where Access-Control-Allow-Origin: * is combined with Access-Control-Allow-Credentials: true. Either set specific origins or disable AllowCredentials.
Suggested fix for a dev tool
r.Use(cors.Handler(cors.Options{
AllowedOrigins: []string{"*"},
AllowedMethods: []string{"GET", "POST", "PUT", "DELETE", "OPTIONS"},
AllowedHeaders: []string{"*"},
ExposedHeaders: []string{"Link"},
- AllowCredentials: true,
+ AllowCredentials: false,
MaxAge: 300,
}))
🤖 Prompt for AI Agents
In `@tools/workbench/server.go` around lines 51 - 57, The CORS config passed to
r.Use(cors.Handler(...)) is invalid because AllowedOrigins: []string{"*"} cannot
be used with AllowCredentials: true; update the cors.Options in the
r.Use(cors.Handler(...)) call to either (A) remove AllowCredentials or set it to
false, or (B) replace the wildcard AllowedOrigins with an explicit origin list
(e.g., read from config/env) so Access-Control-Allow-Origin is a specific origin
when AllowCredentials remains true; make this change in the cors.Options struct
where AllowedOrigins and AllowCredentials are defined.
func (s *Server) handleExportSnapshots(w http.ResponseWriter, r *http.Request) {
	conn := s.getConnector(r)
	if conn == nil || conn.snapshots == nil {
		s.errorResponse(w, http.StatusBadRequest, "snapshots not available")
		return
	}

	var req struct {
		Directory string `json:"directory"`
	}
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		s.errorResponse(w, http.StatusBadRequest, "invalid request: "+err.Error())
		return
	}

	if req.Directory == "" {
		s.errorResponse(w, http.StatusBadRequest, "directory is required")
		return
	}

	if err := conn.snapshots.ExportToDir(req.Directory); err != nil {
		s.errorResponse(w, http.StatusInternalServerError, err.Error())
		return
	}

	s.jsonResponse(w, http.StatusOK, map[string]string{
		"status":    "exported",
		"directory": req.Directory,
	})
}
Server-side path traversal: export/import snapshot handlers accept arbitrary filesystem paths.
handleExportSnapshots (Line 1402) and handleImportSnapshots (Line 1433) accept a directory from the request body and directly pass it to ExportToDir / ImportFromDir. A malicious or accidental request can read from or write to any directory accessible to the process (e.g., {"directory": "/etc"}). The same concern applies to handleGenerateTests (Line 1486) with output_dir.
Even for a dev tool, this is risky if the workbench is bound to a non-loopback address. Consider validating the path against an allowlist or restricting to a configured workspace root.
Also applies to: 1413-1443
🤖 Prompt for AI Agents
In `@tools/workbench/server.go` around lines 1382 - 1411, The export/import/test
handlers currently accept arbitrary filesystem paths (handleExportSnapshots,
handleImportSnapshots, handleGenerateTests) and pass them directly to
conn.snapshots.ExportToDir / ImportFromDir and test generation (output_dir);
restrict and validate the incoming directory by requiring a configured workspace
root on Server (e.g., s.workspaceRoot), reject empty or absolute paths, clean
the path (filepath.Clean), join it to the workspace root with filepath.Join,
then verify the resulting path has the workspace root as a prefix (to prevent
../ traversal) before calling ExportToDir/ImportFromDir or writing tests; if the
server has no workspace root configured, return a 400/403 error and refuse to
process the request.
tx := map[string]interface{}{
	"id":        pmt.Reference,
	"createdAt": pmt.CreatedAt.Format(time.RFC3339),
	"updatedAt": pmt.CreatedAt.Format(time.RFC3339), // Use createdAt as updatedAt if not available
	"currency":  pmt.Asset,
	"amount":    pmt.Amount.String(),
	"type":      mapPaymentTypeToGeneric(pmt.Type),
	"status":    mapPaymentStatusToGeneric(pmt.Status),
}
Nil *big.Int dereference in generic handlers.
pmt.Amount.String() (Line 2082) and bal.Amount.String() (Line 1986) will panic if Amount is nil. The PSPPayment.Amount and PSPBalance.Amount fields are *big.Int and can be nil.
Suggested guard for the transaction handler
- "amount": pmt.Amount.String(),
+ "amount": func() string { if pmt.Amount != nil { return pmt.Amount.String() }; return "0" }(),Apply the same pattern for bal.Amount.String() at Line 1986.
Also applies to: 1984-1988
🤖 Prompt for AI Agents
In `@tools/workbench/server.go` around lines 2077 - 2085, The code calls
pmt.Amount.String() and bal.Amount.String() without nil checks which will panic
if PSPPayment.Amount or PSPBalance.Amount is nil; update the transaction and
balance handlers to guard these fields (check pmt.Amount != nil and bal.Amount
!= nil) and use a safe fallback string (e.g., "0") when nil, replacing direct
pmt.Amount.String() and bal.Amount.String() usages with a small helper or inline
conditional that returns the fallback if the pointer is nil; ensure the same
pattern is applied for both the transaction construction (using pmt.Amount) and
the balance construction (using bal.Amount).
laouji left a comment
I don't have time to review everything in detail, but I'm not super enthusiastic about the kind of stateful hell that reimplementing a database in memory implies.
There's lots of optimistic locking throughout, which is not very concurrency-friendly. In a dev environment I understand this is less of a problem, but if that's the case, why have multi-connector handling at all?
I fear that having such a stateful application, where state is not very carefully managed, will mean that it'll be hard for a developer to know "is this a bug in my connector?" or "is this a bug in the cache in the workbench?"
Is the motivation for not having an external database that "docker is too difficult"?
Going back to my suggestion about connectors working sort of like "lambdas", can we instead imagine a way to develop connectors that doesn't involve storing state at all?
Maybe if a developer just needs to check that the HTTP requests against the PSP work, we can effectively install/uninstall connectors in each request and instead manage things like cursors as part of request parameters that the developer can manipulate at will.
This stateless input/output approach would also make testing new connectors simpler -> we could design a test suite devs could use to validate that a newly developed connector works (provided they have the API keys).
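To illustrate the proposal, a stateless endpoint could look roughly like the sketch below. Every name here is hypothetical and the plugin wiring is elided; the point is that the cursor travels in the request and comes back in the response, so the server keeps nothing between calls:

package workbench

import (
	"encoding/json"
	"net/http"
)

// statelessFetchRequest carries everything the server needs for one page;
// nothing is cached between calls. All field names here are illustrative.
type statelessFetchRequest struct {
	Provider string          `json:"provider"`
	Config   json.RawMessage `json:"config"`
	Cursor   json.RawMessage `json:"cursor"` // opaque plugin state, echoed back
	PageSize int             `json:"page_size"`
}

type statelessFetchResponse struct {
	Items      []json.RawMessage `json:"items"`
	NextCursor json.RawMessage   `json:"next_cursor"`
	HasMore    bool              `json:"has_more"`
}

// handleStatelessFetch would install the plugin, fetch one page, return the
// advanced cursor, and discard the plugin; no server-side state survives.
func handleStatelessFetch(w http.ResponseWriter, r *http.Request) {
	var req statelessFetchRequest
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	// Plugin wiring elided: it would reuse registry.GetPlugin with req.Provider
	// and req.Config, call the fetch once with req.Cursor, then drop the plugin.
	resp := statelessFetchResponse{NextCursor: req.Cursor, HasMore: false}
	_ = json.NewEncoder(w).Encode(resp)
}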
// getConnector retrieves the connector from request context
func (s *Server) getConnector(r *http.Request) *ConnectorInstance {
	conn, _ := r.Context().Value(connectorCtxKey).(*ConnectorInstance)
I'm apprehensive about storing something fetched from an internal cache in context. Context is designed more for a top-down approach (e.g., the caller can share things with its children).
I don't have a full overview of how you are using connectors between requests, but I think this is likely to cause more headaches and bizarre side effects than it's worth.
I recommend just giving handlers direct access to the cache.
Good point - refactored the generic server handlers to get the connector directly from the workbench instead of storing it in context. The middleware now only handles auth validation, and handlers call s.getGenericConnector() which fetches directly from s.workbench.GetGenericServerConnector().
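For illustration, the refactored shape looks roughly like this; the types below are minimal stand-ins, not the PR's real definitions:

package workbench

import "net/http"

// Minimal stand-ins for the real types, just to show the shape.
type ConnectorInstance struct{ ID string }

type workbenchAPI interface {
	GetGenericServerConnector() *ConnectorInstance
}

type Server struct{ workbench workbenchAPI }

// getGenericConnector reads the connector straight from the workbench; no
// request-context smuggling, so handlers see exactly where the value comes from.
func (s *Server) getGenericConnector() *ConnectorInstance {
	return s.workbench.GetGenericServerConnector()
}

func (s *Server) handleGenericAccounts(w http.ResponseWriter, r *http.Request) {
	conn := s.getGenericConnector()
	if conn == nil {
		http.Error(w, "no generic connector configured", http.StatusBadRequest)
		return
	}
	w.WriteHeader(http.StatusOK) // real handler would serialize conn's cached accounts
}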
	originalTransport http.RoundTripper

	// Multi-connector management
	connectors map[string]*ConnectorInstance
Any chance you can reuse or expand on the connectors.Manager which is already used to manage loading / unloading connectors into memory from the registry?
Prefer to keep it separate for now - the workbench has different lifecycle needs (interactive/ephemeral connectors, no polling period validation, extra state like engine/storage/introspector per instance). The actual shared code is just the registry.GetPlugin() call which is already reused. If plugin loading gets more complex we can revisit.
If the primary purpose of this workbench is to develop a connector, perhaps it's fine to simplify it and only support one connector at a time.
// Link schemas to transport for auto-inference
w.transport.Schemas = schemas
Also, why is schema stored in the transport if it's also stored in the connector instance?
		return
	}

	if err := conn.engine.RunOneCycle(r.Context()); err != nil {
Considering that this is a recursive function, it's likely to time out for test accounts that have a lot of accounts / data in general.
The underlying for loops should probably add ctx.Done() checks so they can exit early when the client stops listening.
Fixed - added ctx.Done() checks at the start of each fetch loop iteration (accounts, payments, balances, external accounts, others). The loops now exit early with ctx.Err() if the context is cancelled.
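The resulting loop shape is roughly the following; this is a sketch with the plugin call abstracted behind a callback:

package workbench

import "context"

// fetchAll pages through a PSP API but checks for cancellation before each
// page, so a client disconnect or shutdown stops the crawl early.
func fetchAll(ctx context.Context, fetchPage func(context.Context) (hasMore bool, err error)) error {
	for {
		select {
		case <-ctx.Done():
			return ctx.Err() // exit early instead of finishing all pages
		default:
		}
		hasMore, err := fetchPage(ctx)
		if err != nil {
			return err
		}
		if !hasMore {
			return nil
		}
	}
}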
}

// TaskTracker tracks task execution for the workbench.
type TaskTracker struct {
The design of this interface seems way too bloated and a bit convoluted.
Seems fair that a task tracker might have a pointer to its own task executions, but then for some reason each task node also has a list of executions attached to it that never seems to be referenced anywhere else?
There is tons of locking, and some functions return pointers, which means that callers can actually modify in-flight data without acquiring the necessary locks.
Actionable comments posted: 6
🤖 Fix all issues with AI agents
In `@tools/workbench/ui/src/index.css`:
- Around line 276-337: Add a clear keyboard focus indicator by defining a
:focus-visible rule for the base button selector (and ensure it applies across
variants like .btn-primary, .btn-secondary, .btn-danger, .btn-small, .btn-icon)
that is not shown when the button is disabled; for example add a
button:focus-visible:not(:disabled) rule that sets a 2px solid outline using an
appropriate CSS variable (e.g., var(--term-green) or a neutral accessible color)
and an outline-offset (e.g., 2px) so keyboard users get a visible focus ring
while hover styles remain unchanged.
- Around line 345-356: The CSS for the pre selector currently uses overflow-x:
hidden which silently clips long unbreakable tokens; change overflow-x to auto
in the pre rule so horizontal scrollbars appear for long tokens (keep the
existing pre properties like white-space: pre-wrap, word-break: break-word,
overflow-wrap: break-word, color, line-height, etc.), ensuring full values
(URLs, base64, UUIDs) remain inspectable during debugging.
- Around line 517-520: The current focus rule for input:focus, select:focus,
textarea:focus removes the browser focus ring (outline: none) and only changes
border-color, which is hard to perceive; update that rule to retain or replace
the outline with a clearly visible focus ring by adding a high-contrast
box-shadow (e.g., a 0 0 0 3px or similar ring using var(--fm-emerald-500) with
transparency) and/or restoring a visible outline so keyboard users get a strong
WCAG-friendly indicator; ensure the change targets the existing selectors
(input:focus, select:focus, textarea:focus) and preserves the border-color
change while adding the box-shadow/outline for accessibility.
In `@tools/workbench/workbench.go`:
- Around line 146-150: The goroutine calling w.server.Start() swallows startup
errors (e.g., port-in-use) so callers can't detect failure; modify the code
around w.server.Start() to propagate the first startup error back to the caller
by creating an error channel (or using an errgroup) that the goroutine sends to
if Start() returns a non-nil error (and not http.ErrServerClosed), and have the
surrounding function wait on that channel (with a timeout or context) before
returning; keep the existing w.logger.Errorf logging but ensure the error is
also sent on the channel so callers of the function can observe and handle
startup failures.
- Around line 217-221: The code uses req.Name as instanceID and may silently
overwrite an existing connector, leaking resources; update CreateConnector so
when req.Name is provided it checks for an existing key in w.connectors and
rejects the request (return an error) instead of overwriting; perform this
duplicate check and insertion while holding the write lock w.connMu (move the
uniqueness check into the critical section around w.connMu.Lock()/Unlock() or
hold the lock for the entire creation) to avoid the TOCTOU race and ensure no
existing connector is evicted without cleanup.
- Around line 287-307: The read/write race on instance.Installed in
InstallConnector/UninstallConnector allows two callers to race; fix by taking
the write lock (w.connMu.Lock()) around the check-and-set of instance.Installed
so only one goroutine can claim the operation, then release the lock before
calling the long-running engine.Install/engine.Uninstall, and if the engine call
fails revert the Installed flag under the lock; apply the same pattern to
UninstallConnector using instance.engine.Uninstall and ensure you reference
w.connMu, instance.Installed, InstallConnector and UninstallConnector when
making the change.
🧹 Nitpick comments (5)
tools/workbench/ui/src/index.css (3)
6-6: External font import may break offline/air-gapped usage.
This is a local dev workbench tool, but `@import url(...)` fetches fonts from Google at runtime, making the UI degrade when offline or behind a restrictive proxy. The `var(--term-font)` and `var(--term-font-sans)` stacks already include solid system-font fallbacks, so this will still work — but the visual fidelity drop may confuse users.
Consider bundling the fonts as local assets (or simply relying on the system fallbacks) so the tool is fully self-contained.
166-170: `box-sizing` reset doesn't cover pseudo-elements.
`*::before` and `*::after` inherit the default `content-box`. The standard modern reset includes them:

Suggested tweak

-* {
+*,
+*::before,
+*::after {
   box-sizing: border-box;
   margin: 0;
   padding: 0;
 }
187-204: Scrollbar styling is WebKit-only; Firefox will show default scrollbars.
You can add the standard properties alongside the WebKit prefixes for cross-browser parity:

Add to the body or html rule

/* Firefox */
* {
  scrollbar-width: thin;
  scrollbar-color: var(--term-border) var(--term-bg);
}

tools/workbench/workbench.go (2)
361-371: `ListConnectors` returns interior pointers — callers can race with writers.
The returned `[]*ConnectorInstance` elements point to the same objects stored in `w.connectors`. Any concurrent mutation (e.g., `InstallConnector` toggling `Installed`) while a caller reads these pointers (e.g., JSON serialization in an HTTP handler) is a data race.
Consider returning value copies or a dedicated read-only view struct.
434-440: Error from `registry.GetPluginType` silently discarded.
If the provider string doesn't match a registered plugin (shouldn't happen here since it comes from `GetConfigs`, but the contract isn't guaranteed), the error is dropped and `pluginType` defaults to zero. Consider at least logging the error.
button {
  cursor: pointer;
  font-family: var(--term-font-sans);
  font-size: 13px;
  border: 1px solid var(--term-border);
  border-radius: var(--term-radius);
  padding: 8px 16px;
  font-weight: 500;
  transition: all 0.15s ease;
  text-transform: uppercase;
  letter-spacing: 0.5px;
}

button:disabled {
  opacity: 0.4;
  cursor: not-allowed;
}

.btn-primary {
  background: var(--term-green-bg);
  color: var(--term-green);
  border-color: var(--term-green-dim);
}

.btn-primary:hover:not(:disabled) {
  background: var(--term-green-dim);
  color: var(--term-bg);
}

.btn-secondary {
  background: var(--term-bg-elevated);
  color: var(--term-text);
  border-color: var(--term-border);
}

.btn-secondary:hover:not(:disabled) {
  background: var(--term-bg-hover);
  border-color: var(--term-text-dim);
}

.btn-danger {
  background: var(--term-red-bg);
  color: var(--term-red);
  border-color: var(--term-red-dim);
}

.btn-danger:hover:not(:disabled) {
  background: var(--term-red-dim);
  color: var(--term-bg);
}

.btn-small {
  padding: 3px 8px;
  font-size: 12px;
}

.btn-icon {
  padding: 4px 8px;
  display: inline-flex;
  align-items: center;
  gap: 4px;
}
Buttons lack visible focus indicators for keyboard navigation.
The base button rule and all variants define :hover states but no :focus-visible style. Keyboard users will see no visual cue when tabbing through controls. The browser's default outline is suppressed by many resets/normalizers, and the dark background makes the default one hard to see regardless.
Suggested addition
button:focus-visible {
outline: 2px solid var(--term-green);
outline-offset: 2px;
}

🤖 Prompt for AI Agents
In `@tools/workbench/ui/src/index.css` around lines 276 - 337, Add a clear
keyboard focus indicator by defining a :focus-visible rule for the base button
selector (and ensure it applies across variants like .btn-primary,
.btn-secondary, .btn-danger, .btn-small, .btn-icon) that is not shown when the
button is disabled; for example add a button:focus-visible:not(:disabled) rule
that sets a 2px solid outline using an appropriate CSS variable (e.g.,
var(--term-green) or a neutral accessible color) and an outline-offset (e.g.,
2px) so keyboard users get a visible focus ring while hover styles remain
unchanged.
pre {
  background: var(--term-bg);
  border: 1px solid var(--term-border-dim);
  border-radius: var(--term-radius);
  padding: var(--term-gap-lg);
  overflow-x: hidden;
  white-space: pre-wrap;
  word-break: break-word;
  overflow-wrap: break-word;
  color: var(--term-text);
  line-height: 1.6;
}
overflow-x: hidden on <pre> silently clips content.
With pre-wrap and break-word most text will wrap, but long unbreakable tokens (base64, URLs, UUIDs) can still overflow and get invisibly clipped. For a dev/debug tool where inspecting full values matters, overflow-x: auto would surface a scrollbar instead of hiding data.
Suggested tweak
pre {
background: var(--term-bg);
border: 1px solid var(--term-border-dim);
border-radius: var(--term-radius);
padding: var(--term-gap-lg);
- overflow-x: hidden;
+ overflow-x: auto;
white-space: pre-wrap;🤖 Prompt for AI Agents
In `@tools/workbench/ui/src/index.css` around lines 345 - 356, The CSS for the pre
selector currently uses overflow-x: hidden which silently clips long unbreakable
tokens; change overflow-x to auto in the pre rule so horizontal scrollbars
appear for long tokens (keep the existing pre properties like white-space:
pre-wrap, word-break: break-word, overflow-wrap: break-word, color, line-height,
etc.), ensuring full values (URLs, base64, UUIDs) remain inspectable during
debugging.
input:focus, select:focus, textarea:focus {
  outline: none;
  border-color: var(--fm-emerald-500);
}
Input focus state relies solely on a subtle border-color change after removing the outline.
outline: none suppresses the browser's default focus ring. The replacement — a border-color shift to var(--fm-emerald-500) — is a single-pixel change that can be hard to perceive, especially for users with low vision. Adding a box-shadow ring (or keeping an outline) alongside the border change would provide a stronger, WCAG-friendly focus indicator.
Suggested enhancement
input:focus, select:focus, textarea:focus {
outline: none;
border-color: var(--fm-emerald-500);
+ box-shadow: 0 0 0 2px oklch(0.6433 0.028 201.8 / 0.3);
}
🤖 Prompt for AI Agents
In `@tools/workbench/ui/src/index.css` around lines 517 - 520, The current focus
rule for input:focus, select:focus, textarea:focus removes the browser focus
ring (outline: none) and only changes border-color, which is hard to perceive;
update that rule to retain or replace the outline with a clearly visible focus
ring by adding a high-contrast box-shadow (e.g., a 0 0 0 3px or similar ring
using var(--fm-emerald-500) with transparency) and/or restoring a visible
outline so keyboard users get a strong WCAG-friendly indicator; ensure the
change targets the existing selectors (input:focus, select:focus,
textarea:focus) and preserves the border-color change while adding the
box-shadow/outline for accessibility.
go func() {
	if err := w.server.Start(); err != nil && err != http.ErrServerClosed {
		w.logger.Errorf("HTTP server error: %v", err)
	}
}()
HTTP server startup errors are silently swallowed.
If the port is already in use, Start() returns nil (success) while the goroutine only logs the listen error. The caller has no way to detect the failure.
A common pattern is to use a channel or errgroup to propagate the first startup error back before Start returns.
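A sketch of the channel variant, written against plain net/http rather than the workbench's own Server type:

package workbench

import (
	"errors"
	"net/http"
	"time"
)

// startHTTP runs srv in a goroutine but surfaces an immediate listen failure
// (e.g., port already in use) to the caller instead of only logging it.
func startHTTP(srv *http.Server) error {
	errCh := make(chan error, 1)
	go func() {
		if err := srv.ListenAndServe(); err != nil && !errors.Is(err, http.ErrServerClosed) {
			errCh <- err
		}
	}()
	select {
	case err := <-errCh:
		return err // bind failed fast; report it from Start
	case <-time.After(250 * time.Millisecond):
		return nil // listener is up, or at least didn't fail immediately
	}
}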
🤖 Prompt for AI Agents
In `@tools/workbench/workbench.go` around lines 146 - 150, The goroutine calling
w.server.Start() swallows startup errors (e.g., port-in-use) so callers can't
detect failure; modify the code around w.server.Start() to propagate the first
startup error back to the caller by creating an error channel (or using an
errgroup) that the goroutine sends to if Start() returns a non-nil error (and
not http.ErrServerClosed), and have the surrounding function wait on that
channel (with a timeout or context) before returning; keep the existing
w.logger.Errorf logging but ensure the error is also sent on the channel so
callers of the function can observe and handle startup failures.
// Generate instance ID
instanceID := uuid.New().String()[:8]
if req.Name != "" {
	instanceID = req.Name
}
Duplicate instance ID silently overwrites existing connector, leaking resources.
When req.Name is provided, it's used directly as the map key with no uniqueness check. A second CreateConnector with the same name overwrites the first entry, orphaning its engine, storage, plugin, and other components (no cleanup is performed on the evicted instance).
Proposed fix: reject duplicates
instanceID := uuid.New().String()[:8]
if req.Name != "" {
instanceID = req.Name
}
+
+ w.connMu.RLock()
+ if _, exists := w.connectors[instanceID]; exists {
+ w.connMu.RUnlock()
+ return nil, fmt.Errorf("connector instance %q already exists", instanceID)
+ }
+ w.connMu.RUnlock()Note: there's still a small TOCTOU window between the check and the later w.connMu.Lock() insert. For full safety, move the duplicate check inside the write-lock block at lines 277-279, or hold the write lock across the entire creation (acceptable for a dev tool).
Also applies to: 277-279
🤖 Prompt for AI Agents
In `@tools/workbench/workbench.go` around lines 217 - 221, The code uses req.Name
as instanceID and may silently overwrite an existing connector, leaking
resources; update CreateConnector so when req.Name is provided it checks for an
existing key in w.connectors and rejects the request (return an error) instead
of overwriting; perform this duplicate check and insertion while holding the
write lock w.connMu (move the uniqueness check into the critical section around
w.connMu.Lock()/Unlock() or hold the lock for the entire creation) to avoid the
TOCTOU race and ensure no existing connector is evicted without cleanup.
func (w *Workbench) InstallConnector(ctx context.Context, instanceID string) error {
	instance := w.GetConnector(instanceID)
	if instance == nil {
		return fmt.Errorf("connector %s not found", instanceID)
	}

	if instance.Installed {
		return fmt.Errorf("connector %s is already installed", instanceID)
	}

	if err := instance.engine.Install(ctx); err != nil {
		return fmt.Errorf("failed to install connector: %w", err)
	}

	w.connMu.Lock()
	instance.Installed = true
	w.connMu.Unlock()

	w.logger.Infof("Installed connector %s", instanceID)
	return nil
}
Data race on instance.Installed: read without lock, TOCTOU between check and update.
GetConnector (line 288) returns a pointer and releases the read lock immediately. The subsequent read of instance.Installed (line 293) and the later write (line 302) are separated by an unlocked engine.Install call. Two concurrent InstallConnector calls for the same instance can both pass the guard and double-install. The same pattern applies to UninstallConnector (lines 310-330).
Go's race detector will flag the unsynchronized read of Installed at line 293.
Sketch: hold write lock for the flag check + update, release around the engine call
func (w *Workbench) InstallConnector(ctx context.Context, instanceID string) error {
- instance := w.GetConnector(instanceID)
- if instance == nil {
- return fmt.Errorf("connector %s not found", instanceID)
- }
-
- if instance.Installed {
- return fmt.Errorf("connector %s is already installed", instanceID)
- }
-
- if err := instance.engine.Install(ctx); err != nil {
- return fmt.Errorf("failed to install connector: %w", err)
- }
-
w.connMu.Lock()
- instance.Installed = true
+ instance, ok := w.connectors[instanceID]
+ if !ok {
+ w.connMu.Unlock()
+ return fmt.Errorf("connector %s not found", instanceID)
+ }
+ if instance.Installed {
+ w.connMu.Unlock()
+ return fmt.Errorf("connector %s is already installed", instanceID)
+ }
+ instance.Installed = true // optimistic set; revert on failure
w.connMu.Unlock()
+ if err := instance.engine.Install(ctx); err != nil {
+ w.connMu.Lock()
+ instance.Installed = false
+ w.connMu.Unlock()
+ return fmt.Errorf("failed to install connector: %w", err)
+ }
+
w.logger.Infof("Installed connector %s", instanceID)
return nil
}

Apply the analogous pattern to UninstallConnector.
🤖 Prompt for AI Agents
In `@tools/workbench/workbench.go` around lines 287 - 307, The read/write race on
instance.Installed in InstallConnector/UninstallConnector allows two callers to
race; fix by taking the write lock (w.connMu.Lock()) around the check-and-set of
instance.Installed so only one goroutine can claim the operation, then release
the lock before calling the long-running engine.Install/engine.Uninstall, and if
the engine call fails revert the Installed flag under the lock; apply the same
pattern to UninstallConnector using instance.engine.Uninstall and ensure you
reference w.connMu, instance.Installed, InstallConnector and UninstallConnector
when making the change.
…hting

- Replace connector dropdown with Slack-style sidebar for connector selection
- Add URL-based routing with react-router-dom for browser navigation support
- Add JSON syntax highlighting with react-syntax-highlighter
- Rework State tab layout to list/detail view with filtering
- Update terminology from "create" to "instantiate" for connectors
- Fix duplicate selection bug in lists (ID generation collision)
- Unify data table row heights across tabs
- Increase code block font sizes for readability
- Hide navigation tabs on homepage when no connector selected
- Make CW logo clickable to return to home view

Co-authored-by: Cursor <cursoragent@cursor.com>
Fields containing 'privateKey', 'certificate', or 'pem' in their name now use a textarea instead of single-line input, preserving newlines in PEM content that would otherwise be stripped.

Co-authored-by: Cursor <cursoragent@cursor.com>
- Fix data race on idCounter using atomic.Int64
- Fix path traversal vulnerability in introspect.go
- Fix pointer comparison for DefaultAsset in baseline.go

Co-authored-by: Cursor <cursoragent@cursor.com>
Expose the generic connector REST API from the workbench so staging services can install a "generic connector" pointing to the local workbench, enabling end-to-end integration testing.

- Add /generic/* routes (accounts, balances, beneficiaries, transactions)
- Return cached data from storage instead of making fresh API calls
- Add connector selection and optional API key authentication
- Add UI panel in Debug tab for configuration

Co-authored-by: Cursor <cursoragent@cursor.com>
Change imports from pkg/registry to internal/connectors/plugins/registry so the workbench can be merged before the pkg refactoring. The internal registry will automatically delegate to pkg/registry once that refactor lands.

Also adds light/dark mode toggle to the UI with system preference detection.

Co-authored-by: Cursor <cursoragent@cursor.com>
- Add ctx.Done() checks in fetch loops to exit early on client disconnect
- Refactor generic server to get connector directly instead of via context, providing clearer data flow as suggested in review

Co-authored-by: Cursor <cursoragent@cursor.com>
d8e25f6 to ff33ff7
Actionable comments posted: 15
🤖 Fix all issues with AI agents
In `@tools/workbench/engine.go`:
- Around line 998-1014: The FetchStatus struct is missing fields for
ExternalAccounts and Others, causing GetFetchStatus to return incomplete info;
update the FetchStatus type to include fields mirroring Accounts (for
ExternalAccounts) and Payments/Balances style (for Others) so it can represent
has_more/pages_fetched/total_items for externalAccountsState and
othersFetchState; then adjust GetFetchStatus to populate these new
ExternalAccounts and Others fields from externalAccountsState and
othersFetchState respectively (ensure JSON tags like "external_accounts" and
"others" match existing API conventions).
- Around line 60-65: The othersFetchState map is allocated in NewEngine and
cleared in ResetFetchState but no FetchOthersOnePage function exists; add a
FetchOthersOnePage method following the same pattern as
FetchAccountsOnePage/FetchPaymentsOnePage: accept context, fromPayload ID
(string) and page parameters consistent with the other OnePage methods, look up
or create an entry in engine.othersFetchState keyed by the provided
name/fromPayload ID (matching how othersFetchState is declared as
map[string]map[string]*fetchState), use the shared fetchState logic to call the
underlying API, update and return the page results, and ensure it
clears/initializes the nested map when missing; reference othersFetchState,
FetchOthersOnePage, NewEngine, ResetFetchState and fetchState when implementing
to mirror existing behavior.
- Around line 751-796: The fetch functions (e.g., FetchAccountsOnePage) read
fetchState.HasMore after releasing e.mu, causing a data race; hold the mutex
while inspecting and copying needed fields from the shared fetchState (e.g.,
HasMore and State) into local variables, then release the lock before making the
plugin call and before updating state again; specifically, inside
FetchAccountsOnePage lock e.mu, check if state == nil (initialize if needed),
read localHasMore := state.HasMore and localState := state.State, and if
!localHasMore return early while still holding the lock (or after copying), then
unlock and proceed to call e.plugin.FetchNextAccounts, and after the call lock
e.mu again to write resp.NewState, resp.HasMore, PagesFetched, TotalItems; apply
the same pattern to FetchPaymentsOnePage, FetchBalancesOnePage, and
FetchExternalAccountsOnePage to avoid races with ResetFetchState and concurrent
fetches.
In `@tools/workbench/README.md`:
- Around line 17-23: Update the fenced code blocks that contain ASCII-art
diagrams (for example the block starting with "payments binary" and the other
diagram blocks containing the same ASCII-art) to include a language specifier by
changing their opening fence from ``` to ```text so the linter recognizes them
as text; ensure you modify all diagram blocks (the ones matching the ASCII-art
layout shown in the diff) consistently.
In `@tools/workbench/replay.go`:
- Around line 345-365: DryRun currently only fills missing fields from the
captured request when req.Method == "" and thus ignores any modifications in
ReplayRequest; update DryRun (in function DryRun for type Replayer) to mirror
ReplayFromCapture's merge behavior: when req.OriginalID != "" fetch original via
r.debug.GetHTTPRequestByID, error if nil, then build the result by taking
modified values from req when present (Method, URL, Body) and merging headers
using copyHeaders + overlay req.Headers so request-specified headers override
captured ones; return that merged ReplayRequest so DryRun previews the actual
replayed request.
- Around line 140-151: The code reads the entire HTTP response into memory using
io.ReadAll(httpResp.Body) before truncating, which can OOM for large bodies;
change the read to use an io.LimitReader capped at 256*1024+1 so you never
allocate more than the cap and can still detect truncation, then set
response.Body to the truncated string plus "\n... [truncated]" if the read
indicates the body exceeded the cap; update the body read site that assigns
bodyBytes (used with httpResp.Body and response.Body) and preserve the existing
error assignment to response.Error if the read fails.
- Around line 367-392: In Replayer.CreateCurlCommand, req.Method and req.URL are
inserted verbatim and can allow shell injection; apply the same single-quote
escaping used for headers/body (strings.ReplaceAll(value, "'", "'\\''")) and
wrap the method and URL in single quotes (e.g., "-X 'escapedMethod'" and "
'escapedURL'") so both are safe when composed into the curl string; update the
-X write (currently writing req.Method) and the final URL write (currently
writing req.URL) to use escaped versions via strings.ReplaceAll before writing
to the buffer.
In `@tools/workbench/schema.go`:
- Around line 26-38: The FieldSchema.Required field is never set; update the
schema inference to mark Required = true when a field appears in every sample by
comparing FieldSchema.SeenCount against the total sample count: pass the
totalSamples (or context) into inferField (or into the top-level
inferSchema/merge logic) and after aggregating counts set field.Required =
(field.SeenCount == totalSamples); for nested objects/array items apply the same
check recursively and ensure any code that merges schemas (e.g.,
mergeFieldSchemas or the caller that aggregates inferField results) preserves
and updates Required based on the global sample count.
- Around line 239-256: SaveBaseline (and likewise SaveAllBaselines) currently
discards errors from json.Marshal/json.Unmarshal when doing a deep copy, which
can produce silently corrupt zero-value baselines; update SaveBaseline to check
and handle both errors (capture the error returned by json.Marshal and return it
or wrap it with context, then capture the error from json.Unmarshal and
return/wrap it) before assigning baseline.ID and storing into
m.baselines[operation]; apply the same pattern in SaveAllBaselines so that on
any marshal/unmarshal failure the method returns an error and does not store an
invalid baseline.
In `@tools/workbench/server.go`:
- Around line 750-771: The import handlers read the entire request body
unbounded (handleImport and handleImportBaseline), which can cause OOM; change
them to wrap r.Body with a size limiter (e.g., use http.MaxBytesReader or
io.LimitReader with a sensible max bytes constant) before reading/decoding,
handle the error for requests that exceed the limit (return 413 Payload Too
Large or a clear error via s.errorResponse), and ensure you still close the body
and only pass the limited reader to json.Unmarshal or json.Decoder before
calling conn.storage.Import to avoid processing oversized payloads.
In `@tools/workbench/snapshots.go`:
- Around line 245-257: The loop in ExportToDir that builds filenames using
sanitizeFilename(s.Name) can overwrite files when two snapshots sanitize to the
same name; change the filename generation in the loop (where sanitizeFilename,
s.Name and s.ID are used) to include a unique component such as the snapshot ID
or the loop index (e.g., fmt.Sprintf("%s-%s.json", sanitizeFilename(s.Name),
s.ID) or include the index) so each output file is unique, and optionally check
for existing path collisions before writing.
In `@tools/workbench/storage.go`:
- Around line 384-400: The Export function currently returns references to
MemoryStorage's internal maps so the returned StorageSnapshot is not isolated;
change Export (in MemoryStorage.Export) to deep-copy each map field (Accounts,
Payments, Balances, ExternalAccounts, Others, States, TasksTree, WebhookConfigs)
while holding s.mu.RLock so the snapshot contains newly allocated maps with
copied entries (and keep ExportedAt as the snapshot time). Likewise update
Import (MemoryStorage.Import) to not assign incoming StorageSnapshot maps
directly to s.*; instead acquire s.mu.Lock and allocate new maps for s.accounts,
s.payments, s.balances, s.externalAccounts, s.others, s.states, s.tasksTree,
s.webhookConfigs and copy entries from the snapshot into those new maps so the
caller's snapshot and the live MemoryStorage do not share underlying map
references.
In `@tools/workbench/ui/src/api.ts`:
- Line 216: The getStatus function is calling the connector endpoint with a
trailing slash which is inconsistent with other methods (install, uninstall,
reset) and may hit a different backend route; change getStatus to use
`${prefix}` (no trailing slash) so it matches the other endpoints and backend
expectations — update the getStatus arrow function that returns
fetchJSON<ConnectorStatus> to use `${prefix}` instead of `${prefix}/`.
In `@tools/workbench/ui/vite.config.ts`:
- Around line 7-10: Replace the Vite build option emptyDirOnStart with the Vite
5-compatible emptyOutDir in the build config: locate the build object in the
Vite config (where outDir is set to 'dist') and change the property name from
emptyDirOnStart to emptyOutDir (set to true) so the dist directory is actually
cleaned before builds; ensure no other code references emptyDirOnStart remains.
In `@tools/workbench/workbench.go`:
- Around line 332-352: DeleteConnector currently holds connMu while calling
instance.engine.Uninstall, which can block other ops; change the flow to: under
connMu lock locate and remove the connector from the connectors map (using
w.connectors and instance := w.connectors[instanceID]), then release the lock;
after unlocking, if the removed instance had Installed == true and
instance.engine != nil call instance.engine.Uninstall(ctx) and log any errors;
finally log the deletion (w.logger.Infof). Ensure you still return an error if
the instance was not found (keep the initial not-found check) and do not hold
connMu during the outbound Uninstall call.
🧹 Nitpick comments (23)
tools/workbench/ui/.gitignore (1)
1-1: Consider adding common ignores for a Vite/React project.
The `.gitignore` only covers `node_modules/`. You may also want to ignore `dist/` (unless it's intentionally committed for Go embedding), `.env`, and other build artifacts.

Suggested additions

 node_modules/
+dist/
+.env
+.env.local

tools/workbench/ui/src/index.css (1)
6-6: External Google Fonts import may break offline development.
The workbench is meant to be a lightweight local dev tool. Loading fonts from `fonts.googleapis.com` requires internet access and introduces a third-party network dependency. Consider bundling the fonts locally or adding a fallback that degrades gracefully when offline — the fallback font stacks on lines 39–40 will work, but there will be a noticeable FOUT or layout shift while the external request times out.

tools/workbench/ui/src/api.ts (2)
3-16: `Content-Type: application/json` is set unconditionally, including on GET/DELETE requests with no body.
This is benign for most servers but technically incorrect for bodyless requests. More importantly, if a future caller passes a non-JSON body (e.g., `FormData`), this header will override the correct content type.

Set Content-Type only when there's a body

 async function fetchJSON<T>(url: string, options?: RequestInit): Promise<T> {
   const res = await fetch(API_BASE + url, {
     ...options,
     headers: {
-      'Content-Type': 'application/json',
+      ...(options?.body ? { 'Content-Type': 'application/json' } : {}),
       ...options?.headers,
     },
   });
654-856: The legacy `api` object recreates `connectorApi(...)` on every single method call.
Each connector-scoped method in the `api` export instantiates a new `connectorApi` object just to call one function. For a dev tool this won't cause real performance issues, but it's needlessly repetitive. Consider caching the connector API instance when `setSelectedConnector` is called.

Cache the connector API instance

 export let selectedConnectorId: string | null = null;
+let _cachedConnectorApi: ReturnType<typeof connectorApi> | null = null;

 export function setSelectedConnector(id: string | null) {
   selectedConnectorId = id;
+  _cachedConnectorApi = id ? connectorApi(id) : null;
 }

+function getSelectedApi(): ReturnType<typeof connectorApi> {
+  if (!selectedConnectorId || !_cachedConnectorApi) throw new Error('No connector selected');
+  return _cachedConnectorApi;
+}

Then each method simplifies to, e.g.:

install: () => getSelectedApi().install(),

cmd/workbench.go (4)
122-134: Signal handler calls `wb.Stop` with an unbounded `context.Background()`.
If `wb.Stop` blocks (e.g., waiting for in-flight HTTP requests to drain), the process will hang indefinitely after Ctrl+C. Use a context with a timeout so a second signal or timeout forces exit.

Add a shutdown timeout

 go func() {
 	<-sigChan
 	fmt.Println("\nShutting down...")
 	cancel()
-	wb.Stop(context.Background())
+	stopCtx, stopCancel := context.WithTimeout(context.Background(), 5*time.Second)
+	defer stopCancel()
+	wb.Stop(stopCtx)
 }()

Don't forget to add `"time"` to the imports.
136-160: Connector creation uses `ctx` that may already be cancelled by the signal handler.
There's a small race: if a signal arrives between `wb.Start(ctx)` returning and `wb.CreateConnector(ctx, ...)`, the context is already cancelled and the connector won't be created. For a dev tool this is acceptable, but worth noting since the user would see a confusing error rather than a clean shutdown.
190-192: Remove the empty `init()` function.
The comment says "This will be added to root in NewRootCommand" but the function body is empty. It serves no purpose and is dead code.

Remove it

-func init() {
-	// This will be added to root in NewRootCommand
-}
168-188: Provider listing output order is non-deterministic.
`registry.GetConfigs` returns a `map`, and iterating over it with `range` produces a random order each run. For a CLI tool, sorted output improves readability and is easier to scan.

Sort providers before printing

+import "sort"
+
 func printProviders() error {
 	configs := registry.GetConfigs(true)
+	providers := make([]string, 0, len(configs))
+	for provider := range configs {
+		providers = append(providers, provider)
+	}
+	sort.Strings(providers)
+
 	fmt.Println("Available connector providers:")
 	fmt.Println()
-	for provider, config := range configs {
+	for _, provider := range providers {
+		config := configs[provider]
 		fmt.Printf("  %s\n", provider)

tools/workbench/responses.go (1)
69-89: `Scheme` field may serialize as `"UNKNOWN"` instead of being omitted.
`pay.Scheme.String()` on the zero value likely returns a non-empty string (e.g., `"UNKNOWN"`), so the `omitempty` tag on Line 31 won't trigger. If the intent is to omit the field when the scheme is unknown, you'd need an explicit check similar to how `handleGenericTransactions` (in `server.go`, Line 2086) guards `PAYMENT_SCHEME_UNKNOWN`.
Low-priority, the UI can just ignore `"UNKNOWN"`.

tools/workbench/testgen.go (2)
83-102: Fixture filename collisions when multiple operations share the same sanitized name.
`sanitizeFilename(snap.Operation)` combined with the index is scoped per-group iteration, but fixture filenames are collected into a flat list. If two different operations sanitize to the same string (e.g., `"fetch/accounts"` → `"fetch_accounts"` and `"fetch_accounts"` → `"fetch_accounts"`), their fixtures will collide.
Low risk in practice, but worth noting.
232-318: Generated test template: `w.Write(fixture)` return value is ignored.
Inside the generated template (Line 297), `w.Write(fixture)` ignores the error return. This will appear in every generated test file and may trigger linter warnings. Minor since these are test helpers, but worth a `//nolint` or `_, _ = w.Write(fixture)` in the template.

tools/workbench/transport.go (1)
139-185: `extractOperationFromURL` only handles single-digit version prefixes.
Line 148 checks `len(part) == 2 && part[0] == 'v'` — this correctly skips `v1`…`v9` but misses `v10`, `v11`, etc. For a heuristic in a dev tool this is unlikely to cause problems, but a small regex or relaxed check would be more robust.
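A relaxed check could be as small as this sketch (the helper name is made up):

package workbench

import "regexp"

// versionSegment matches "v1", "v10", "v2024", and so on, letting the
// operation extractor skip any numeric version prefix, not just single digits.
var versionSegment = regexp.MustCompile(`^v\d+$`)

func isVersionSegment(part string) bool {
	return versionSegment.MatchString(part)
}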
tools/workbench/baseline.go (2)
402-417: Redundant `join` helper — use `strings.Join` from the standard library.
The custom `join` function duplicates `strings.Join`. Additionally, `fmt.Sprintf("%s", ...)` on Line 405 is a no-op wrapper around a string.

Proposed fix

-	return fmt.Sprintf("%s", join(parts, " | "))
+	return strings.Join(parts, " | ")
 }
-
-func join(parts []string, sep string) string {
-	if len(parts) == 0 {
-		return ""
-	}
-	result := parts[0]
-	for i := 1; i < len(parts); i++ {
-		result += sep + parts[i]
-	}
-	return result
-}
- return fmt.Sprintf("%s", join(parts, " | ")) + return strings.Join(parts, " | ") } - -func join(parts []string, sep string) string { - if len(parts) == 0 { - return "" - } - result := parts[0] - for i := 1; i < len(parts); i++ { - result += sep + parts[i] - } - return result -}
92-118:SaveBaselinecallsm.storage.Get*while holdingm.mu.Lock().If
MemoryStoragemethods ever call back intoBaselineManager, this creates a deadlock risk. In practice this is likely fine (storage has its own independent lock), but consider fetching data before acquiring the lock since only the map insertion needs it.Proposed restructuring
func (m *BaselineManager) SaveBaseline(name string) (*Baseline, error) { - m.mu.Lock() - defer m.mu.Unlock() - accounts := m.storage.GetAccounts() payments := m.storage.GetPayments() balances := m.storage.GetBalances() externalAccounts := m.storage.GetExternalAccounts() baseline := &Baseline{ // ... same as before } + m.mu.Lock() m.baselines[baseline.ID] = baseline + m.mu.Unlock() + return baseline, nil }tools/workbench/snapshots.go (1)
263-291: `ImportFromDir` silently swallows read and parse errors.

Lines 278 and 284 `continue` past errors without logging or accumulating them. The caller only sees the count of successfully imported snapshots with no indication of failures. Consider at minimum logging a warning or returning a list of failed filenames alongside the count.
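A sketch of the accumulate-and-return variant; the `ImportError` type, the `loadOne` callback, and the second return value are illustrative additions, not the existing signature:

```go
// ImportError pairs a snapshot filename with the reason it was skipped.
type ImportError struct {
	File string
	Err  error
}

// importSnapshots reports failures instead of dropping them; loadOne
// stands in for the existing read+unmarshal step for one file.
func importSnapshots(paths []string, loadOne func(string) error) (imported int, failed []ImportError) {
	for _, p := range paths {
		if err := loadOne(p); err != nil {
			failed = append(failed, ImportError{File: p, Err: err})
			continue
		}
		imported++
	}
	return imported, failed
}
```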
tools/workbench/server.go (2)

490-513: Fetch handler decode errors are silently swallowed.

In all four `handleFetch*` handlers (Lines 499, 524, 549, 574), `json.NewDecoder(r.Body).Decode(&req)` errors are ignored. If the body contains malformed JSON, the request proceeds with a zero-value `fetchRequest` (nil `FromPayload`), which may produce confusing results instead of a clear error. Consider at least logging or returning the error when the body is non-empty.

Also applies to: 515-538, 540-563, 565-588
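A sketch of a shared decode step the four handlers could call — an empty body still decodes to the zero value, but malformed JSON becomes a 400. `fetchRequest` is the handlers' existing request type; the usual `encoding/json`, `errors`, `fmt`, `io`, and `net/http` imports are assumed:

```go
func decodeFetchRequest(w http.ResponseWriter, r *http.Request) (fetchRequest, bool) {
	var req fetchRequest
	// Decode returns io.EOF for an empty body, which we treat as "no payload".
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil && !errors.Is(err, io.EOF) {
		http.Error(w, fmt.Sprintf("invalid request body: %v", err), http.StatusBadRequest)
		return req, false
	}
	return req, true
}
```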
1847-1878: API key comparison is not constant-time.

Line 1870 uses `!=` for API key comparison, which is vulnerable to timing attacks. For a local dev tool this is acceptable, but if the generic server is exposed on a network, consider `subtle.ConstantTimeCompare`.
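A minimal sketch with `crypto/subtle`, assuming `provided` is the header value and `apiKey` the configured secret (names illustrative):

```go
// subtle.ConstantTimeCompare returns 1 only when both slices are equal,
// in time independent of where the inputs first differ.
if subtle.ConstantTimeCompare([]byte(provided), []byte(apiKey)) != 1 {
	http.Error(w, "unauthorized", http.StatusUnauthorized)
	return
}
```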
tools/workbench/engine.go (2)

291-303: `resp.HasMore` accessed before the error check — misleading in the error path.

When `err != nil`, `resp` is the zero-value struct, so `resp.HasMore` is `false` and `len(resp.Accounts)` is `0`. This won't panic, but passing `resp.HasMore` into `CompleteTask` on the error path is semantically misleading. The same pattern repeats in every `executeFetch*Task` method (lines 389–400, 473–484, 557–568, 648–658).

Consider reordering so the error check happens before `CompleteTask`, or guard `resp.HasMore` explicitly:

Suggested reordering

```diff
-	// Complete task tracking
-	if e.tasks != nil && exec != nil {
-		itemCount := 0
-		if err == nil {
-			itemCount = len(resp.Accounts)
-		}
-		e.tasks.CompleteTask(exec, itemCount, resp.HasMore, err)
-	}
-
-	if err != nil {
+	if err != nil {
+		if e.tasks != nil && exec != nil {
+			e.tasks.CompleteTask(exec, 0, false, err)
+		}
 		e.debug.LogError("fetch_accounts", err)
 		return fmt.Errorf("fetch accounts failed: %w", err)
 	}
+
+	// Complete task tracking
+	if e.tasks != nil && exec != nil {
+		e.tasks.CompleteTask(exec, len(resp.Accounts), resp.HasMore, nil)
+	}
```
245-341: Extensive duplication across `executeFetch*Task` methods.

The five `executeFetch*Task` methods follow an identical structure: paginated loop with context check, task tracking, plugin call, debug logging, storage, state change tracking, and state saving. Only the plugin method name, response field, and storage call differ. Consider extracting a generic helper to reduce ~400 lines of near-identical code.

Also applies to: 343-425, 427-509, 511-593
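One possible shape for such a helper, sketched with Go generics (a free function, since Go methods cannot take type parameters; all names are illustrative and the per-type task/debug hooks are folded into the callbacks):

```go
// runPaginatedFetch drives the shared loop; only the per-type page fetch
// and store callbacks differ between the executeFetch*Task methods.
func runPaginatedFetch[T any](
	ctx context.Context,
	state json.RawMessage,
	fetchPage func(context.Context, json.RawMessage) (items []T, newState json.RawMessage, hasMore bool, err error),
	store func([]T),
) (json.RawMessage, error) {
	for {
		if err := ctx.Err(); err != nil {
			return state, err
		}
		items, newState, hasMore, err := fetchPage(ctx, state)
		if err != nil {
			return state, err
		}
		if len(items) > 0 {
			store(items)
		}
		state = newState
		if !hasMore {
			return state, nil
		}
	}
}
```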
tools/workbench/introspect.go (3)
456-497: `SearchCode` reads entire files into memory without a size guard.

`os.ReadFile(path)` at line 472 will read arbitrarily large files. A connector directory could contain large generated or vendored files. Consider adding a file-size check (e.g., skip files > 1 MB) to avoid unexpected memory pressure.

Suggested guard

```diff
 filepath.Walk(i.basePath, func(path string, info os.FileInfo, err error) error {
 	if err != nil || info.IsDir() {
 		return nil
 	}
 	if !strings.HasSuffix(path, ".go") {
 		return nil
 	}
+	if info.Size() > 1<<20 { // skip files > 1MB
+		return nil
+	}
 	content, err := os.ReadFile(path)
```
400-450: Symlinked files could escape `basePath`.

`filepath.Walk` does not descend into symlinked directories, but it still reports symlinked files, and `os.ReadFile` follows the link — so a symlinked `.go` file inside the connector directory pointing outside `basePath` would cause `findImplementedMethods` (and `SearchCode`) to read files outside the intended scope. Consider checking `info.Mode()&os.ModeSymlink != 0` to skip symlinks, or at minimum validate each `path` stays within `basePath`.
131-185: `GetInfo` returns `nil` error even when sub-operations fail.

`GetCapabilities` (line 141) and `GetConfig` (line 154) errors are silently discarded. If the provider isn't registered, the caller gets a `ConnectorInfo` with empty capabilities and config, with no indication that something went wrong. This is acceptable if intentional, but consider logging a warning via the introspector (it currently has no logger) so developers aren't confused by empty results.

tools/workbench/storage.go (1)
351-357: `GetWebhookConfigs` returns the internal slice directly — callers can mutate it.

Unlike `GetAccounts`/`GetPayments`, which return copies, `GetWebhookConfigs` returns the stored slice reference. If a caller appends or modifies elements, the storage is silently corrupted.

Suggested fix

```diff
 func (s *MemoryStorage) GetWebhookConfigs() []models.PSPWebhookConfig {
 	s.mu.RLock()
 	defer s.mu.RUnlock()
-	return s.webhookConfigs
+	result := make([]models.PSPWebhookConfig, len(s.webhookConfigs))
+	copy(result, s.webhookConfigs)
+	return result
 }
```
```go
	accountsFetchState    *fetchState
	paymentsFetchState    map[string]*fetchState            // keyed by fromPayload ID
	balancesFetchState    map[string]*fetchState
	externalAccountsState map[string]*fetchState
	othersFetchState      map[string]map[string]*fetchState // keyed by name, then fromPayload ID
}
```
FetchOthersOnePage is missing — othersFetchState is allocated but unused.
The othersFetchState field (line 64) is initialized in NewEngine (line 94) and cleared in ResetFetchState (line 991), but there is no FetchOthersOnePage method. Every other fetch type has a corresponding OnePage method. This looks like an oversight.
Do you want me to generate the FetchOthersOnePage implementation following the same pattern as the other OnePage methods?
Also applies to: 958-980
🤖 Prompt for AI Agents
In `@tools/workbench/engine.go` around lines 60 - 65, The othersFetchState map is
allocated in NewEngine and cleared in ResetFetchState but no FetchOthersOnePage
function exists; add a FetchOthersOnePage method following the same pattern as
FetchAccountsOnePage/FetchPaymentsOnePage: accept context, fromPayload ID
(string) and page parameters consistent with the other OnePage methods, look up
or create an entry in engine.othersFetchState keyed by the provided
name/fromPayload ID (matching how othersFetchState is declared as
map[string]map[string]*fetchState), use the shared fetchState logic to call the
underlying API, update and return the page results, and ensure it
clears/initializes the nested map when missing; reference othersFetchState,
FetchOthersOnePage, NewEngine, ResetFetchState and fetchState when implementing
to mirror existing behavior.
```go
// FetchAccountsOnePage fetches one page of accounts.
func (e *Engine) FetchAccountsOnePage(ctx context.Context, fromPayload json.RawMessage) (*models.FetchNextAccountsResponse, error) {
	e.mu.Lock()
	state := e.accountsFetchState
	if state == nil {
		state = &fetchState{HasMore: true}
		e.accountsFetchState = state
	}
	currentState := state.State
	e.mu.Unlock()

	if !state.HasMore {
		return &models.FetchNextAccountsResponse{HasMore: false}, nil
	}

	callID := e.debug.LogPluginCall("FetchNextAccounts", models.FetchNextAccountsRequest{
		FromPayload: fromPayload,
		State:       currentState,
		PageSize:    e.pageSize,
	})
	start := time.Now()

	resp, err := e.plugin.FetchNextAccounts(ctx, models.FetchNextAccountsRequest{
		FromPayload: fromPayload,
		State:       currentState,
		PageSize:    e.pageSize,
	})

	e.debug.LogPluginResult(callID, resp, time.Since(start), err)

	if err != nil {
		return nil, err
	}

	e.mu.Lock()
	state.State = resp.NewState
	state.HasMore = resp.HasMore
	state.PagesFetched++
	state.TotalItems += len(resp.Accounts)
	e.mu.Unlock()

	if len(resp.Accounts) > 0 {
		e.storage.StoreAccounts(resp.Accounts)
	}

	return &resp, nil
```
Data race: state.HasMore read outside the mutex.
At line 762, state.HasMore is read after the mutex is released at line 760. Another goroutine calling ResetFetchState (which nils out accountsFetchState under the lock) or a concurrent FetchAccountsOnePage (which writes state.HasMore at line 787) creates a race on the fetchState struct. The same pattern exists in FetchPaymentsOnePage (line 815), FetchBalancesOnePage (line 868), and FetchExternalAccountsOnePage (line 921).
Move the HasMore check inside the critical section:
Sketch for FetchAccountsOnePage
```diff
 e.mu.Lock()
 state := e.accountsFetchState
 if state == nil {
 	state = &fetchState{HasMore: true}
 	e.accountsFetchState = state
 }
 currentState := state.State
+hasMore := state.HasMore
 e.mu.Unlock()

-if !state.HasMore {
+if !hasMore {
 	return &models.FetchNextAccountsResponse{HasMore: false}, nil
 }
```
🤖 Prompt for AI Agents
In `@tools/workbench/engine.go` around lines 751 - 796, The fetch functions (e.g.,
FetchAccountsOnePage) read fetchState.HasMore after releasing e.mu, causing a
data race; hold the mutex while inspecting and copying needed fields from the
shared fetchState (e.g., HasMore and State) into local variables, then release
the lock before making the plugin call and before updating state again;
specifically, inside FetchAccountsOnePage lock e.mu, check if state == nil
(initialize if needed), read localHasMore := state.HasMore and localState :=
state.State, and if !localHasMore return early while still holding the lock (or
after copying), then unlock and proceed to call e.plugin.FetchNextAccounts, and
after the call lock e.mu again to write resp.NewState, resp.HasMore,
PagesFetched, TotalItems; apply the same pattern to FetchPaymentsOnePage,
FetchBalancesOnePage, and FetchExternalAccountsOnePage to avoid races with
ResetFetchState and concurrent fetches.
```go
type FetchStatus struct {
	Accounts struct {
		HasMore      bool `json:"has_more"`
		PagesFetched int  `json:"pages_fetched"`
		TotalItems   int  `json:"total_items"`
	} `json:"accounts"`
	Payments map[string]struct {
		HasMore      bool `json:"has_more"`
		PagesFetched int  `json:"pages_fetched"`
		TotalItems   int  `json:"total_items"`
	} `json:"payments"`
	Balances map[string]struct {
		HasMore      bool `json:"has_more"`
		PagesFetched int  `json:"pages_fetched"`
		TotalItems   int  `json:"total_items"`
	} `json:"balances"`
}
```
FetchStatus omits ExternalAccounts and Others.
FetchStatus reports status for Accounts, Payments, and Balances, but has no fields for ExternalAccounts or Others, even though both types are tracked via externalAccountsState and othersFetchState. This makes GetFetchStatus report incomplete information.
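A sketch of the missing fields, mirroring the existing layout; the nested map for `others` follows `othersFetchState`'s name-then-payload keying, and the JSON tags are assumptions chosen to match the existing conventions:

```go
	ExternalAccounts map[string]struct {
		HasMore      bool `json:"has_more"`
		PagesFetched int  `json:"pages_fetched"`
		TotalItems   int  `json:"total_items"`
	} `json:"external_accounts"`
	Others map[string]map[string]struct {
		HasMore      bool `json:"has_more"`
		PagesFetched int  `json:"pages_fetched"`
		TotalItems   int  `json:"total_items"`
	} `json:"others"`
```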
🤖 Prompt for AI Agents
In `@tools/workbench/engine.go` around lines 998 - 1014, The FetchStatus struct is
missing fields for ExternalAccounts and Others, causing GetFetchStatus to return
incomplete info; update the FetchStatus type to include fields mirroring
Accounts (for ExternalAccounts) and Payments/Balances style (for Others) so it
can represent has_more/pages_fetched/total_items for externalAccountsState and
othersFetchState; then adjust GetFetchStatus to populate these new
ExternalAccounts and Others fields from externalAccountsState and
othersFetchState respectively (ensure JSON tags like "external_accounts" and
"others" match existing API conventions).
```
payments binary
│
├── payments server      ← Full production mode (Temporal + PostgreSQL)
│
└── payments workbench   ← Lightweight dev mode (in-memory)
```
Add language specifiers to fenced code blocks.
Static analysis flags multiple code blocks without language specifiers (lines 17, 27, 50, 161). For ASCII-art diagrams, use `` ```text `` to satisfy the linter.
Example fix for the first block
````diff
-```
+```text
 payments binary
 │
````

Apply the same change to the diagram blocks at lines 27, 50, and 161.
🧰 Tools
🪛 markdownlint-cli2 (0.20.0)
[warning] 17-17: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🤖 Prompt for AI Agents
In `@tools/workbench/README.md` around lines 17 - 23, Update the fenced code
blocks that contain ASCII-art diagrams (for example the block starting with
"payments binary" and the other diagram blocks containing the same ASCII-art) to
include a language specifier by changing their opening fence from ``` to ```text
so the linter recognizes them as text; ensure you modify all diagram blocks (the
ones matching the ASCII-art layout shown in the diff) consistently.
```go
	// Read body
	bodyBytes, err := io.ReadAll(httpResp.Body)
	if err != nil {
		response.Error = fmt.Sprintf("failed to read response body: %v", err)
	} else {
		// Limit body size
		if len(bodyBytes) > 256*1024 {
			response.Body = string(bodyBytes[:256*1024]) + "\n... [truncated]"
		} else {
			response.Body = string(bodyBytes)
		}
	}
```
io.ReadAll reads the entire response body into memory before truncating.
The 256 KB display truncation at line 146 is good, but io.ReadAll at line 141 reads the full body first. A multi-GB response (e.g., replaying a file download endpoint) would cause OOM. Use io.LimitReader to cap the read:
Suggested fix
```diff
-	bodyBytes, err := io.ReadAll(httpResp.Body)
+	bodyBytes, err := io.ReadAll(io.LimitReader(httpResp.Body, 256*1024+1))
```

This way you never allocate more than ~256 KB, and you can still detect truncation by checking `len(bodyBytes) > 256*1024`.
🤖 Prompt for AI Agents
In `@tools/workbench/replay.go` around lines 140 - 151, The code reads the entire
HTTP response into memory using io.ReadAll(httpResp.Body) before truncating,
which can OOM for large bodies; change the read to use an io.LimitReader capped
at 256*1024+1 so you never allocate more than the cap and can still detect
truncation, then set response.Body to the truncated string plus "\n...
[truncated]" if the read indicates the body exceeded the cap; update the body
read site that assigns bodyBytes (used with httpResp.Body and response.Body) and
preserve the existing error assignment to response.Error if the read fails.
```go
	for _, s := range snapshots {
		filename := sanitizeFilename(s.Name) + ".json"
		path := filepath.Join(dir, filename)

		data, err := json.MarshalIndent(s, "", "  ")
		if err != nil {
			return fmt.Errorf("failed to marshal snapshot %s: %w", s.ID, err)
		}

		if err := os.WriteFile(path, data, 0644); err != nil {
			return fmt.Errorf("failed to write snapshot %s: %w", s.ID, err)
		}
	}
```
ExportToDir can silently overwrite snapshots that share the same sanitized name.
The filename is derived from s.Name (Line 246), which is user-provided. If two snapshots produce the same sanitized name, the second write overwrites the first with no warning. Consider incorporating the snapshot ID or an index into the filename.
Proposed fix
```diff
-		filename := sanitizeFilename(s.Name) + ".json"
+		filename := sanitizeFilename(s.Name) + "_" + s.ID[:8] + ".json"
```
+ filename := sanitizeFilename(s.Name) + "_" + s.ID[:8] + ".json"📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| for _, s := range snapshots { | |
| filename := sanitizeFilename(s.Name) + ".json" | |
| path := filepath.Join(dir, filename) | |
| data, err := json.MarshalIndent(s, "", " ") | |
| if err != nil { | |
| return fmt.Errorf("failed to marshal snapshot %s: %w", s.ID, err) | |
| } | |
| if err := os.WriteFile(path, data, 0644); err != nil { | |
| return fmt.Errorf("failed to write snapshot %s: %w", s.ID, err) | |
| } | |
| } | |
| for _, s := range snapshots { | |
| filename := sanitizeFilename(s.Name) + "_" + s.ID[:8] + ".json" | |
| path := filepath.Join(dir, filename) | |
| data, err := json.MarshalIndent(s, "", " ") | |
| if err != nil { | |
| return fmt.Errorf("failed to marshal snapshot %s: %w", s.ID, err) | |
| } | |
| if err := os.WriteFile(path, data, 0644); err != nil { | |
| return fmt.Errorf("failed to write snapshot %s: %w", s.ID, err) | |
| } | |
| } |
🤖 Prompt for AI Agents
In `@tools/workbench/snapshots.go` around lines 245 - 257, The loop in ExportToDir
that builds filenames using sanitizeFilename(s.Name) can overwrite files when
two snapshots sanitize to the same name; change the filename generation in the
loop (where sanitizeFilename, s.Name and s.ID are used) to include a unique
component such as the snapshot ID or the loop index (e.g.,
fmt.Sprintf("%s-%s.json", sanitizeFilename(s.Name), s.ID) or include the index)
so each output file is unique, and optionally check for existing path collisions
before writing.
```go
// Export exports all storage data as a snapshot.
func (s *MemoryStorage) Export() StorageSnapshot {
	s.mu.RLock()
	defer s.mu.RUnlock()

	return StorageSnapshot{
		Accounts:         s.accounts,
		Payments:         s.payments,
		Balances:         s.balances,
		ExternalAccounts: s.externalAccounts,
		Others:           s.others,
		States:           s.states,
		TasksTree:        s.tasksTree,
		WebhookConfigs:   s.webhookConfigs,
		ExportedAt:       time.Now(),
	}
}
```
Export returns shared map references — snapshot is not isolated from live storage.
StorageSnapshot fields point to the same underlying maps as MemoryStorage. Any subsequent Store* call mutates both the live storage and the "snapshot." This defeats the purpose of exporting a point-in-time snapshot.
The same issue applies to Import (lines 403–433) in reverse: the imported maps become the live storage, so the caller's snapshot object mutates along with it.
Suggested fix for Export (deep-copy maps)
```diff
 func (s *MemoryStorage) Export() StorageSnapshot {
 	s.mu.RLock()
 	defer s.mu.RUnlock()

+	accounts := make(map[string]models.PSPAccount, len(s.accounts))
+	for k, v := range s.accounts {
+		accounts[k] = v
+	}
+	payments := make(map[string]models.PSPPayment, len(s.payments))
+	for k, v := range s.payments {
+		payments[k] = v
+	}
+	balances := make(map[string]models.PSPBalance, len(s.balances))
+	for k, v := range s.balances {
+		balances[k] = v
+	}
+	extAccounts := make(map[string]models.PSPAccount, len(s.externalAccounts))
+	for k, v := range s.externalAccounts {
+		extAccounts[k] = v
+	}
+	others := make(map[string][]models.PSPOther, len(s.others))
+	for k, v := range s.others {
+		cp := make([]models.PSPOther, len(v))
+		copy(cp, v)
+		others[k] = cp
+	}
+	states := make(map[string]json.RawMessage, len(s.states))
+	for k, v := range s.states {
+		states[k] = v
+	}
+
 	return StorageSnapshot{
-		Accounts:         s.accounts,
-		Payments:         s.payments,
-		Balances:         s.balances,
-		ExternalAccounts: s.externalAccounts,
-		Others:           s.others,
-		States:           s.states,
+		Accounts:         accounts,
+		Payments:         payments,
+		Balances:         balances,
+		ExternalAccounts: extAccounts,
+		Others:           others,
+		States:           states,
 		TasksTree:        s.tasksTree,
 		WebhookConfigs:   s.webhookConfigs,
 		ExportedAt:       time.Now(),
 	}
 }
```

Apply the same pattern to `Import` — copy the incoming maps rather than assigning them directly.
🤖 Prompt for AI Agents
In `@tools/workbench/storage.go` around lines 384 - 400, The Export function
currently returns references to MemoryStorage's internal maps so the returned
StorageSnapshot is not isolated; change Export (in MemoryStorage.Export) to
deep-copy each map field (Accounts, Payments, Balances, ExternalAccounts,
Others, States, TasksTree, WebhookConfigs) while holding s.mu.RLock so the
snapshot contains newly allocated maps with copied entries (and keep ExportedAt
as the snapshot time). Likewise update Import (MemoryStorage.Import) to not
assign incoming StorageSnapshot maps directly to s.*; instead acquire s.mu.Lock
and allocate new maps for s.accounts, s.payments, s.balances,
s.externalAccounts, s.others, s.states, s.tasksTree, s.webhookConfigs and copy
entries from the snapshot into those new maps so the caller's snapshot and the
live MemoryStorage do not share underlying map references.
```ts
  return {
    // Status
    getStatus: () => fetchJSON<ConnectorStatus>(`${prefix}/`),
```
Trailing slash inconsistency in connector status endpoint.
getStatus uses `${prefix}/` (with trailing slash) while all other endpoints like install, uninstall, reset, etc. don't have a trailing slash. This can cause issues if the backend treats /connectors/{id}/ and /connectors/{id} differently.
Suggested fix
```diff
-    getStatus: () => fetchJSON<ConnectorStatus>(`${prefix}/`),
+    getStatus: () => fetchJSON<ConnectorStatus>(prefix),
```

Also on line 221:

```diff
-    delete: () => fetchJSON<{ status: string }>(`${prefix}/`, { method: 'DELETE' }),
+    delete: () => fetchJSON<{ status: string }>(prefix, { method: 'DELETE' }),
```
🤖 Prompt for AI Agents
In `@tools/workbench/ui/src/api.ts` at line 216, The getStatus function is calling
the connector endpoint with a trailing slash which is inconsistent with other
methods (install, uninstall, reset) and may hit a different backend route;
change getStatus to use `${prefix}` (no trailing slash) so it matches the other
endpoints and backend expectations — update the getStatus arrow function that
returns fetchJSON<ConnectorStatus> to use `${prefix}` instead of `${prefix}/`.
```ts
  build: {
    outDir: 'dist',
    emptyDirOnStart: true,
  },
```
🧩 Analysis chain
🏁 Script executed:
```sh
cat tools/workbench/ui/package.json | jq '.dependencies.vite // .devDependencies.vite // "not found"'
```

Repository: formancehq/payments
Length of output: 71
Replace emptyDirOnStart with emptyOutDir for Vite 5 compatibility.
The project uses Vite 5.0.0, but emptyDirOnStart was introduced in Vite 6. This option will be silently ignored, preventing the dist directory from being cleaned before builds. Use emptyOutDir: true instead, which is the correct option for Vite 5.
Current code
```ts
  build: {
    outDir: 'dist',
    emptyDirOnStart: true,
  },
```
🤖 Prompt for AI Agents
In `@tools/workbench/ui/vite.config.ts` around lines 7 - 10, Replace the Vite
build option emptyDirOnStart with the Vite 5-compatible emptyOutDir in the build
config: locate the build object in the Vite config (where outDir is set to
'dist') and change the property name from emptyDirOnStart to emptyOutDir (set to
true) so the dist directory is actually cleaned before builds; ensure no other
code references emptyDirOnStart remains.
```go
// DeleteConnector removes a connector instance.
func (w *Workbench) DeleteConnector(ctx context.Context, instanceID string) error {
	w.connMu.Lock()
	defer w.connMu.Unlock()

	instance, ok := w.connectors[instanceID]
	if !ok {
		return fmt.Errorf("connector %s not found", instanceID)
	}

	// Uninstall if installed
	if instance.Installed && instance.engine != nil {
		if err := instance.engine.Uninstall(ctx); err != nil {
			w.logger.Errorf("Error uninstalling connector %s during delete: %v", instanceID, err)
		}
	}

	delete(w.connectors, instanceID)
	w.logger.Infof("Deleted connector %s", instanceID)
	return nil
}
```
DeleteConnector holds connMu while calling engine.Uninstall, which makes an outbound plugin call.
engine.Uninstall (line 344) calls the plugin's Uninstall method, which may make network calls. Holding the write lock on connMu for the duration blocks all GetConnector, ListConnectors, and other connector operations. Consider uninstalling outside the lock (look up and remove from map, then uninstall the removed instance).
Suggested restructure
```diff
 func (w *Workbench) DeleteConnector(ctx context.Context, instanceID string) error {
 	w.connMu.Lock()
-	defer w.connMu.Unlock()
-
 	instance, ok := w.connectors[instanceID]
 	if !ok {
+		w.connMu.Unlock()
 		return fmt.Errorf("connector %s not found", instanceID)
 	}
-
-	// Uninstall if installed
-	if instance.Installed && instance.engine != nil {
-		if err := instance.engine.Uninstall(ctx); err != nil {
-			w.logger.Errorf("Error uninstalling connector %s during delete: %v", instanceID, err)
-		}
-	}
-
 	delete(w.connectors, instanceID)
+	w.connMu.Unlock()
+
+	if instance.Installed && instance.engine != nil {
+		if err := instance.engine.Uninstall(ctx); err != nil {
+			w.logger.Errorf("Error uninstalling connector %s during delete: %v", instanceID, err)
+		}
+	}
+
 	w.logger.Infof("Deleted connector %s", instanceID)
 	return nil
 }
```

🤖 Prompt for AI Agents
In `@tools/workbench/workbench.go` around lines 332 - 352, DeleteConnector
currently holds connMu while calling instance.engine.Uninstall, which can block
other ops; change the flow to: under connMu lock locate and remove the connector
from the connectors map (using w.connectors and instance :=
w.connectors[instanceID]), then release the lock; after unlocking, if the
removed instance had Installed == true and instance.engine != nil call
instance.engine.Uninstall(ctx) and log any errors; finally log the deletion
(w.logger.Infof). Ensure you still return an error if the instance was not found
(keep the initial not-found check) and do not hold connMu during the outbound
Uninstall call.
The //go:embed ui/dist/* directive requires files to exist at build time. Add exception to .gitignore and commit the built UI distribution files. Co-authored-by: Cursor <cursoragent@cursor.com>
- Remove unused globalDebugTransport variable - Remove unused parseIntParam function - Fix printf format for models.TaskType (use %v instead of %s) - Add error return value handling for various function calls Co-authored-by: Cursor <cursoragent@cursor.com>
Co-authored-by: Cursor <cursoragent@cursor.com>
Co-authored-by: Cursor <cursoragent@cursor.com>
- Add error handling for Body.Close() calls - Remove unnecessary fmt.Sprintf wrapper - Simplify hex character check condition Co-authored-by: Cursor <cursoragent@cursor.com>
Co-authored-by: Cursor <cursoragent@cursor.com>
Actionable comments posted: 4
🤖 Fix all issues with AI agents
In `@tools/workbench/engine.go`:
- Around line 711-722: The WebhookBaseUrl is hardcoded in the CreateWebhooks
request inside the CreateWebhooks call and log (see the WebhookBaseUrl field in
the models.CreateWebhooksRequest passed to e.plugin.CreateWebhooks and
e.debug.LogPluginCall); change it to use the engine's configured listen address
instead of "http://localhost:8080/webhooks" (e.g. derive the base from an
existing field like e.listenAddr or e.config.ListenAddr, format it with the
"/webhooks" path, and use that value in both the debug.LogPluginCall and
e.plugin.CreateWebhooks requests) so the webhook URL respects the --listen
configuration.
In `@tools/workbench/replay.go`:
- Around line 271-332: Replayer.Compare never sets ResponseComparison.BodyDiff,
so when original.ResponseBody != replay.Body callers only get BodyMatch=false
with no diff; update Replayer.Compare (the Compare method) to populate
comparison.BodyDiff whenever original.ResponseBody != replay.Body by computing
at least the lengths and the first differing byte/character offset (e.g.,
compare original.ResponseBody vs replay.Body to record OriginalSize, ReplaySize
and FirstDiffOffset or a short snippet around that offset), and attach that data
to comparison.BodyDiff before returning so callers can see how the bodies
differ.
- Around line 86-155: The Replay function currently issues HTTP requests to any
user-supplied URL (in ReplayRequest) which allows SSRF to
internal/metadata/localhost addresses; add URL validation in Replayer.Replay
before creating the request (i.e., before http.NewRequestWithContext) by parsing
req.URL, extracting the host, resolving/inspecting the IP(s) and blocking
private, loopback, link-local and cloud metadata ranges (e.g., 169.254.169.254),
or alternatively enforce an allowlist of permitted hostnames/IPs configurable on
Replayer (and log a warning when a request is rejected); ensure the check covers
hostnames that resolve to private IPs and apply the same logic when req.URL is
an IP literal or contains a port.
In `@tools/workbench/workbench.go`:
- Around line 155-190: The Stop method signals shutdown too early by closing
w.stopChan before cleanup, which causes Wait() callers to unblock while server
shutdown, connector Uninstall (conn.engine.Uninstall) and RestoreGlobalTransport
remain in progress; to fix, move the close(w.stopChan) call to the very end of
Workbench.Stop after the server.Stop(ctx), the loop that uninstalls connectors
(iterating w.connectors and calling conn.engine.Uninstall), and
RestoreGlobalTransport(w.originalTransport) have completed, preserving locking
around w.mu and w.connMu as currently used and ensuring w.running is set to
false before closing the channel.
🧹 Nitpick comments (8)
cmd/workbench.go (1)
190-192: Remove the empty `init()` function.

The comment says "This will be added to root in NewRootCommand", but the function body is empty. If the command is wired elsewhere, this stub is dead code and adds noise.
Proposed fix
```diff
-func init() {
-	// This will be added to root in NewRootCommand
-}
```

tools/workbench/baseline.go (1)
408-417: Use `strings.Join` instead of a custom `join` function.

The standard library already provides `strings.Join`, which does exactly this. The `strings` package is already imported.

Proposed fix

```diff
-	return join(parts, " | ")
+	return strings.Join(parts, " | ")
 }
-
-func join(parts []string, sep string) string {
-	if len(parts) == 0 {
-		return ""
-	}
-	result := parts[0]
-	for i := 1; i < len(parts); i++ {
-		result += sep + parts[i]
-	}
-	return result
-}
```

tools/workbench/engine.go (1)
245-341: Significant code duplication across all `executeFetch*Task` methods.

`executeFetchAccountsTask`, `executeFetchPaymentsTask`, `executeFetchBalancesTask`, `executeFetchExternalAccountsTask`, and `executeFetchOthersTask` share nearly identical structure — context cancellation check, task tracking, step-mode wait, debug logging, plugin call, result storage, state tracking, and child task execution. Consider extracting a generic fetch loop helper parameterized by the plugin call and storage function to reduce ~400 lines of near-duplicate code.

tools/workbench/server.go (1)
232-238: No read/write timeouts on the HTTP server.

The `http.Server` is created without `ReadTimeout`, `WriteTimeout`, or `ReadHeaderTimeout`. For a dev tool this is low-risk, but if exposed beyond localhost, slow clients could hold connections indefinitely.

Proposed fix

```diff
 s.server = &http.Server{
-	Addr:    s.addr,
-	Handler: r,
+	Addr:              s.addr,
+	Handler:           r,
+	ReadHeaderTimeout: 10 * time.Second,
+	ReadTimeout:       30 * time.Second,
+	WriteTimeout:      60 * time.Second,
 }
```

tools/workbench/introspect.go (1)
25-53: Hardcoded relative path guessing is fragile.

`NewIntrospector` walks a fixed list of relative paths to find connector sources. If the working directory doesn't match any expected layout, `basePath` silently remains empty. This is acceptable for a dev tool, but consider accepting an optional explicit source path to improve UX when the heuristic fails.

tools/workbench/workbench.go (2)
102-123: `InstallGlobalTransport` in the constructor mutates `http.DefaultTransport` — side-effect at construction time.

`New` is a constructor, but it immediately installs a global HTTP transport (line 116). This means merely creating a `Workbench` value (even in tests or for inspection) hijacks all outbound HTTP traffic in the process. Consider deferring the global transport installation to `Start()`, so the side-effect is tied to the explicit lifecycle action rather than object construction.
425-445: Silently swallowing `GetPluginType` error.

Line 431 discards the error from `registry.GetPluginType(provider)`. If the registry has a config entry but no matching plugin type (e.g., a registration bug), this silently returns a zero-value `PluginType`. For a developer-facing workbench, surfacing this as a warning would aid debugging.

Proposed improvement

```diff
 for provider, config := range configs {
-	pluginType, _ := registry.GetPluginType(provider)
+	pluginType, err := registry.GetPluginType(provider)
+	if err != nil {
+		w.logger.Errorf("Failed to get plugin type for %s: %v", provider, err)
+	}
 	result = append(result, AvailableConnector{
```

tools/workbench/replay.go (1)
199-207: History trimming retains the backing array — minor memory concern.
`r.history = r.history[1:]` reslices without copying, so the underlying array only grows. For `maxHistory=100` this is negligible, but if the limit is ever raised significantly, consider using a ring buffer or copying to a fresh slice.
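The copy variant is a small change — a sketch, assuming the history slice and cap keep their current names on the `Replayer`:

```go
// Copy the survivors into a fresh slice so the old backing array can be freed.
if len(r.history) > r.maxHistory {
	trimmed := make([]ReplayResponse, r.maxHistory)
	copy(trimmed, r.history[len(r.history)-r.maxHistory:])
	r.history = trimmed
}
```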
```go
	callID := e.debug.LogPluginCall("CreateWebhooks", models.CreateWebhooksRequest{
		FromPayload:    fromPayload,
		ConnectorID:    e.connectorID.String(),
		WebhookBaseUrl: "http://localhost:8080/webhooks",
	})
	start := time.Now()

	resp, err := e.plugin.CreateWebhooks(ctx, models.CreateWebhooksRequest{
		FromPayload:    fromPayload,
		ConnectorID:    e.connectorID.String(),
		WebhookBaseUrl: "http://localhost:8080/webhooks",
	})
```
Hardcoded webhook base URL ignores configured listen address.
WebhookBaseUrl is hardcoded to "http://localhost:8080/webhooks" on Lines 714 and 721. If the user configures a different listen address via --listen, the webhook URL will be wrong. Consider deriving this from the engine's configuration or passing the actual listen address.
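A sketch of deriving it instead; `e.listenAddr` is a hypothetical field that would need to be plumbed into the `Engine`:

```go
// Build the webhook base URL from the configured listen address rather
// than a hardcoded host:port.
webhookBaseURL := fmt.Sprintf("http://%s/webhooks", e.listenAddr)
req := models.CreateWebhooksRequest{
	FromPayload:    fromPayload,
	ConnectorID:    e.connectorID.String(),
	WebhookBaseUrl: webhookBaseURL,
}
```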
🤖 Prompt for AI Agents
In `@tools/workbench/engine.go` around lines 711 - 722, The WebhookBaseUrl is
hardcoded in the CreateWebhooks request inside the CreateWebhooks call and log
(see the WebhookBaseUrl field in the models.CreateWebhooksRequest passed to
e.plugin.CreateWebhooks and e.debug.LogPluginCall); change it to use the
engine's configured listen address instead of "http://localhost:8080/webhooks"
(e.g. derive the base from an existing field like e.listenAddr or
e.config.ListenAddr, format it with the "/webhooks" path, and use that value in
both the debug.LogPluginCall and e.plugin.CreateWebhooks requests) so the
webhook URL respects the --listen configuration.
```go
// Replay executes a request and returns the response.
func (r *Replayer) Replay(ctx context.Context, req ReplayRequest) (*ReplayResponse, error) {
	// Validate request
	if req.Method == "" {
		return nil, fmt.Errorf("method is required")
	}
	if req.URL == "" {
		return nil, fmt.Errorf("URL is required")
	}

	// Create HTTP request
	var body io.Reader
	if req.Body != "" {
		body = strings.NewReader(req.Body)
	}

	httpReq, err := http.NewRequestWithContext(ctx, req.Method, req.URL, body)
	if err != nil {
		return nil, fmt.Errorf("failed to create request: %w", err)
	}

	// Set headers
	for key, value := range req.Headers {
		httpReq.Header.Set(key, value)
	}

	// Execute request
	start := time.Now()
	httpResp, err := r.client.Do(httpReq)
	duration := time.Since(start)

	response := &ReplayResponse{
		ID:         uuid.New().String(),
		Timestamp:  time.Now(),
		Duration:   duration,
		OriginalID: req.OriginalID,
		Request:    req,
	}

	if err != nil {
		response.Error = err.Error()
		r.addToHistory(*response)
		return response, nil // Return response with error, not error
	}
	defer func() { _ = httpResp.Body.Close() }()

	// Capture response
	response.StatusCode = httpResp.StatusCode
	response.Status = httpResp.Status
	response.Headers = make(map[string]string)
	for key, values := range httpResp.Header {
		response.Headers[key] = strings.Join(values, ", ")
	}

	// Read body
	bodyBytes, err := io.ReadAll(httpResp.Body)
	if err != nil {
		response.Error = fmt.Sprintf("failed to read response body: %v", err)
	} else {
		// Limit body size
		if len(bodyBytes) > 256*1024 {
			response.Body = string(bodyBytes[:256*1024]) + "\n... [truncated]"
		} else {
			response.Body = string(bodyBytes)
		}
	}

	r.addToHistory(*response)
	return response, nil
}
```
SSRF: Replay accepts arbitrary URLs with no validation or restriction.
Replay takes any user-supplied URL and issues an HTTP request from the server process. Even in a dev tool, this allows requests to internal services, cloud metadata endpoints (e.g., 169.254.169.254), or localhost services. Consider at minimum logging a warning, or adding an allowlist/denylist for target hosts.
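A minimal pre-flight check is sketched below; `validateReplayTarget` is an illustrative name, `net.IP.IsPrivate` requires Go 1.17+, and a resolve-then-dial gap (TOCTOU) remains that a stricter implementation would close via a custom dialer:

```go
// validateReplayTarget resolves the target host and rejects loopback,
// private, and link-local ranges (which covers the 169.254.169.254
// metadata endpoint). Hostnames resolving to several IPs are rejected
// if any of them is blocked.
func validateReplayTarget(rawURL string) error {
	u, err := url.Parse(rawURL)
	if err != nil {
		return fmt.Errorf("invalid URL: %w", err)
	}
	ips, err := net.LookupIP(u.Hostname())
	if err != nil {
		return fmt.Errorf("cannot resolve %q: %w", u.Hostname(), err)
	}
	for _, ip := range ips {
		if ip.IsLoopback() || ip.IsPrivate() || ip.IsLinkLocalUnicast() {
			return fmt.Errorf("refusing to replay against restricted address %s", ip)
		}
	}
	return nil
}
```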
🤖 Prompt for AI Agents
In `@tools/workbench/replay.go` around lines 86 - 155, The Replay function
currently issues HTTP requests to any user-supplied URL (in ReplayRequest) which
allows SSRF to internal/metadata/localhost addresses; add URL validation in
Replayer.Replay before creating the request (i.e., before
http.NewRequestWithContext) by parsing req.URL, extracting the host,
resolving/inspecting the IP(s) and blocking private, loopback, link-local and
cloud metadata ranges (e.g., 169.254.169.254), or alternatively enforce an
allowlist of permitted hostnames/IPs configurable on Replayer (and log a warning
when a request is rejected); ensure the check covers hostnames that resolve to
private IPs and apply the same logic when req.URL is an IP literal or contains a
port.
```go
// Compare compares a replay response with the original captured request.
func (r *Replayer) Compare(replayID string) (*ResponseComparison, error) {
	replay := r.GetReplayByID(replayID)
	if replay == nil {
		return nil, fmt.Errorf("replay not found: %s", replayID)
	}

	if replay.OriginalID == "" {
		return nil, fmt.Errorf("replay has no original request to compare with")
	}

	original := r.debug.GetHTTPRequestByID(replay.OriginalID)
	if original == nil {
		return nil, fmt.Errorf("original request not found: %s", replay.OriginalID)
	}

	comparison := &ResponseComparison{
		OriginalID:     replay.OriginalID,
		ReplayID:       replayID,
		OriginalStatus: original.ResponseStatus,
		ReplayStatus:   replay.StatusCode,
		StatusMatch:    original.ResponseStatus == replay.StatusCode,
		BodyMatch:      original.ResponseBody == replay.Body,
	}

	// Compare headers
	allKeys := make(map[string]bool)
	for k := range original.ResponseHeaders {
		allKeys[k] = true
	}
	for k := range replay.Headers {
		allKeys[k] = true
	}

	for key := range allKeys {
		origVal := original.ResponseHeaders[key]
		replayVal := replay.Headers[key]

		if origVal == "" && replayVal != "" {
			comparison.HeaderDiffs = append(comparison.HeaderDiffs, HeaderDiff{
				Key:         key,
				ReplayValue: replayVal,
				Type:        "added",
			})
		} else if origVal != "" && replayVal == "" {
			comparison.HeaderDiffs = append(comparison.HeaderDiffs, HeaderDiff{
				Key:           key,
				OriginalValue: origVal,
				Type:          "removed",
			})
		} else if origVal != replayVal {
			comparison.HeaderDiffs = append(comparison.HeaderDiffs, HeaderDiff{
				Key:           key,
				OriginalValue: origVal,
				ReplayValue:   replayVal,
				Type:          "changed",
			})
		}
	}

	return comparison, nil
}
```
BodyDiff is never populated — comparison reports mismatch but no diff detail.
comparison.BodyDiff (line 258) is declared with omitempty but is never set anywhere in Compare. When bodies don't match, the caller gets body_match: false with no indication of how they differ. This limits the usefulness of the comparison feature. Consider at minimum showing a size difference or the first divergent byte offset.
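A sketch of a cheap summary along those lines; `summarizeBodyDiff` is a hypothetical helper, and whether its output fits the existing `BodyDiff` field's shape is an assumption:

```go
// summarizeBodyDiff reports sizes and the first diverging byte offset;
// call it only when the bodies are known to differ.
func summarizeBodyDiff(original, replay string) string {
	n := len(original)
	if len(replay) < n {
		n = len(replay)
	}
	offset := n // if one body is a prefix of the other, they diverge at its end
	for i := 0; i < n; i++ {
		if original[i] != replay[i] {
			offset = i
			break
		}
	}
	return fmt.Sprintf("original %d bytes, replay %d bytes, first difference at byte %d",
		len(original), len(replay), offset)
}
```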
🤖 Prompt for AI Agents
In `@tools/workbench/replay.go` around lines 271 - 332, Replayer.Compare never
sets ResponseComparison.BodyDiff, so when original.ResponseBody != replay.Body
callers only get BodyMatch=false with no diff; update Replayer.Compare (the
Compare method) to populate comparison.BodyDiff whenever original.ResponseBody
!= replay.Body by computing at least the lengths and the first differing
byte/character offset (e.g., compare original.ResponseBody vs replay.Body to
record OriginalSize, ReplaySize and FirstDiffOffset or a short snippet around
that offset), and attach that data to comparison.BodyDiff before returning so
callers can see how the bodies differ.
```go
func (w *Workbench) Stop(ctx context.Context) error {
	w.mu.Lock()
	if !w.running {
		w.mu.Unlock()
		return nil
	}
	w.running = false
	close(w.stopChan)
	w.mu.Unlock()

	w.logger.Info("Stopping workbench...")

	// Stop HTTP server
	if err := w.server.Stop(ctx); err != nil {
		w.logger.Errorf("Error stopping HTTP server: %v", err)
	}

	// Uninstall all connectors
	w.connMu.Lock()
	for id, conn := range w.connectors {
		if conn.Installed && conn.engine != nil {
			if err := conn.engine.Uninstall(ctx); err != nil {
				w.logger.Errorf("Error uninstalling connector %s: %v", id, err)
			}
		}
	}
	w.connMu.Unlock()

	// Restore original HTTP transport
	if w.originalTransport != nil {
		RestoreGlobalTransport(w.originalTransport)
		w.logger.Info("HTTP debug transport removed")
	}

	return nil
}
```
Stop closes stopChan before uninstalling connectors — Wait() callers unblock prematurely.
At line 162, close(w.stopChan) signals Wait() to return. But the actual cleanup (server shutdown at line 168, connector uninstall at lines 173-181, transport restore at lines 184-187) happens afterward. Any caller using Wait() to know when shutdown is complete will proceed while resources are still being torn down.
Move close(w.stopChan) to the end of Stop, after all cleanup is finished.
Proposed fix
```diff
 func (w *Workbench) Stop(ctx context.Context) error {
 	w.mu.Lock()
 	if !w.running {
 		w.mu.Unlock()
 		return nil
 	}
 	w.running = false
-	close(w.stopChan)
 	w.mu.Unlock()

 	w.logger.Info("Stopping workbench...")

 	// Stop HTTP server
 	if err := w.server.Stop(ctx); err != nil {
 		w.logger.Errorf("Error stopping HTTP server: %v", err)
 	}

 	// Uninstall all connectors
 	w.connMu.Lock()
 	for id, conn := range w.connectors {
 		if conn.Installed && conn.engine != nil {
 			if err := conn.engine.Uninstall(ctx); err != nil {
 				w.logger.Errorf("Error uninstalling connector %s: %v", id, err)
 			}
 		}
 	}
 	w.connMu.Unlock()

 	// Restore original HTTP transport
 	if w.originalTransport != nil {
 		RestoreGlobalTransport(w.originalTransport)
 		w.logger.Info("HTTP debug transport removed")
 	}

+	close(w.stopChan)
 	return nil
 }
```

🤖 Prompt for AI Agents
In `@tools/workbench/workbench.go` around lines 155 - 190, The Stop method signals
shutdown too early by closing w.stopChan before cleanup, which causes Wait()
callers to unblock while server shutdown, connector Uninstall
(conn.engine.Uninstall) and RestoreGlobalTransport remain in progress; to fix,
move the close(w.stopChan) call to the very end of Workbench.Stop after the
server.Stop(ctx), the loop that uninstalls connectors (iterating w.connectors
and calling conn.engine.Uninstall), and
RestoreGlobalTransport(w.originalTransport) have completed, preserving locking
around w.mu and w.connMu as currently used and ensuring w.running is set to
false before closing the channel.
Codecov Report

❌ Patch coverage is

Additional details and impacted files

```
@@            Coverage Diff             @@
##             main     #641      +/-   ##
==========================================
- Coverage   62.72%   62.54%   -0.19%
==========================================
  Files         819      820       +1
  Lines       36740    36848     +108
==========================================
  Hits        23046    23046
- Misses      12156    12264     +108
  Partials     1538     1538
```

☔ View full report in Codecov by Sentry.
The //go:embed directive needs at least one file to exist. Instead of committing the entire built UI, commit only a .gitkeep placeholder. The server has fallback code that displays instructions when the full UI isn't available. Co-authored-by: Cursor <cursoragent@cursor.com>
No description provided.