feat: compressed mint photon api #2198
base: main
Conversation
📝 Walkthrough
This PR introduces compressed mint indexer API support, adds bootstrap orchestration with async shutdown handling and multi-path RPC selection in forester, refines compressible state calculations using rent-based logic, implements chunked transaction processing for mint compression, and extends the Photon API with mint query models while updating tests for dynamic epoch targeting and structured program configuration.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as Forester Bootstrap
    participant RPC as RPC/Photon
    participant Tracker as MintAccountTracker
    participant DB as DashMap Storage
    Client->>RPC: Create HTTP client
    Client->>RPC: Detect localhost (is_localhost)
    alt Localhost
        Client->>RPC: getProgramAccounts (standard API)
    else Remote
        Client->>RPC: getProgramAccountsV2 (V2 API)
    end
    Client->>Client: Setup shutdown listener (Arc<AtomicBool>)
    loop For each fetched account
        Client->>Tracker: update_from_account(pubkey, data, lamports)
        Tracker->>Tracker: Validate account type & PDA
        Tracker->>Tracker: Deserialize & calculate_compressible_slot
        Tracker->>DB: Insert MintAccountState
    end
    Client->>Client: Poll shutdown signal
    Client->>RPC: Abort ongoing operations
    Client->>Client: Return BootstrapResult {pages, fetched, inserted}
```

```mermaid
sequenceDiagram
    participant Compress as compress_batch
    participant Builder as Instruction Builder
    participant Tx as Transaction
    participant Chain as Blockchain
    participant Tracker as MintAccountTracker
    Compress->>Compress: Split input into MAX_INSTRUCTIONS_PER_TX chunks
    loop For each chunk
        Compress->>Builder: Build instructions (in parallel)
        Builder->>Compress: Return serialized instructions
        Compress->>Tx: Create transaction from chunk
        Compress->>Chain: Submit transaction
        Chain->>Compress: Await confirmation
        alt Confirmed
            Compress->>Tracker: Remove confirmed MintAccountStates
            Compress->>Compress: Collect signature
        else Not Confirmed
            Compress->>Compress: Log warning, keep in tracker
        end
    end
    Compress->>Compress: Return Vec<Signature>
```
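The chunking flow in the second diagram reduces to a simple pattern: split the work into fixed-size chunks, submit one transaction per chunk, and collect only confirmed signatures. A minimal, runnable sketch of that pattern (MAX_INSTRUCTIONS_PER_TX matches the diagram; send_chunk is a hypothetical stand-in for instruction building and submission, not the PR's actual code):

```rust
// Sketch of the per-chunk submission loop from the diagram above.
// send_chunk stands in for "build instructions + create_and_send_transaction".
const MAX_INSTRUCTIONS_PER_TX: usize = 4;

fn send_chunk(chunk: &[u64]) -> Result<String, String> {
    Ok(format!("sig-for-{chunk:?}")) // pretend every chunk confirms
}

fn compress_in_chunks(items: &[u64]) -> Vec<String> {
    let mut signatures = Vec::new();
    for chunk in items.chunks(MAX_INSTRUCTIONS_PER_TX) {
        match send_chunk(chunk) {
            // Confirmed: collect the signature (the real code also removes
            // the chunk's MintAccountStates from the tracker here).
            Ok(sig) => signatures.push(sig),
            // Not confirmed: log and keep the accounts in the tracker.
            Err(e) => eprintln!("chunk failed, keeping in tracker: {e}"),
        }
    }
    signatures
}

fn main() {
    let items: Vec<u64> = (0..10).collect();
    assert_eq!(compress_in_chunks(&items).len(), 3); // chunks of 4, 4, 2
}
```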
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Possibly related PRs
Suggested labels
Suggested reviewers
Principal Engineer's Notes
This is a substantive, multi-layered change that deserves careful review across four distinct domains:
1. Forester Bootstrap & Compression ⚙️ — The new
2. SDK Indexer API Surface 📡 — Adding three new trait methods is good API expansion, but note:
3. Type Safety & Photon Models 🏗️ — The generated Photon API models look solid. However:
4. Test Refactoring 🧪 — The shift to dynamic epoch targeting in
Read CLAUDE.md for context on how testing and validation fit into the broader architecture.
🚥 Pre-merge checks: ✅ 3 passed
Actionable comments posted: 17
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
forester/src/config.rs (1)
365-399: Include the offending entry in `compressible_pda_programs` parse errors. When multiple entries are provided, returning only the parser error can make it hard to spot which value failed. Consider echoing the raw entry alongside the error.
Proposed refinement
```diff
-            .map(|s| {
-                s.parse::<crate::compressible::config::PdaProgramConfig>()
-                    .map_err(|e| ConfigError::InvalidArguments {
-                        field: "compressible_pda_programs",
-                        invalid_values: vec![e],
-                    })
-            })
+            .map(|s| {
+                let raw = s.clone();
+                s.parse::<crate::compressible::config::PdaProgramConfig>()
+                    .map_err(|e| ConfigError::InvalidArguments {
+                        field: "compressible_pda_programs",
+                        invalid_values: vec![format!("{raw}: {e}")],
+                    })
+            })
```
forester/tests/test_compressible_ctoken.rs (1)
252-270: Wait for subscription-driven tracker updates before asserting. These assertions can race the websocket-driven tracker updates (airdrop + compression), which will intermittently flake under load. Add a small polling loop with timeout before asserting lamports and remaining accounts.
🐛 Proposed fix (apply the same pattern after compression)
```diff
-    let accounts = tracker.get_all_token_accounts();
-    assert_eq!(accounts[0].lamports, lamports + 10_000_000);
+    let accounts = tokio::time::timeout(Duration::from_secs(5), async {
+        loop {
+            let accounts = tracker.get_all_token_accounts();
+            if accounts[0].lamports == lamports + 10_000_000 {
+                break accounts;
+            }
+            sleep(Duration::from_millis(200)).await;
+        }
+    })
+    .await
+    .expect("Tracker did not observe lamports update in time");
+    assert_eq!(accounts[0].lamports, lamports + 10_000_000);
```
Also applies to: 334-341
forester/src/compressible/subscriber.rs (1)
176-225: Handle account closures by invoking `handle_removal`.
`programSubscribe` sends updates for closed accounts with `lamports == 0` (often empty data). The current path always tries to decode and `handle_update`, which leaves stale entries in trackers. Add a guard to call `handle_removal` when lamports are zero or data is empty.
🐛 Proposed fix
```diff
-        // Decode Base64 account data
+        // Account closed? Remove it from the tracker
+        if response.value.account.lamports == 0 {
+            self.handler.handle_removal(&pubkey);
+            return;
+        }
+
+        // Decode Base64 account data
         use solana_account_decoder::UiAccountData;
```
🤖 Fix all issues with AI agents
In `@forester/Cargo.toml`:
- Line 46: The crate currently pins hex = "0.4" in its Cargo.toml; replace that
explicit version with a workspace-managed dependency by changing the dependency
entry to use workspace = true (e.g., hex = { workspace = true }) and ensure the
workspace root contains a [workspace.dependencies] entry for hex (adding hex =
"0.4" there if missing) so all workspace crates share the same hex version.
In `@forester/src/compressible/bootstrap_helpers.rs`:
- Around line 126-161: The get_current_slot function duplicates timeout, HTTP
and RPC error handling; refactor it to call the existing send_rpc_request helper
instead of performing its own client.post/.send/.json and timeout logic. Build
the same JSON payload inside get_current_slot, call send_rpc_request(rpc_url,
payload, RPC_REQUEST_TIMEOUT, client) (or the send_rpc_request signature used in
the repo), then extract the slot via result.get("result").and_then(|v|
v.as_u64()).ok_or_else(|| anyhow::anyhow!("Failed to extract slot from
response")); remove the timeout block and the duplicated error mapping so slot
fetching reuses send_rpc_request’s consistent HTTP status and RPC error
handling.
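One possible shape for that refactor, sketched below; the send_rpc_request signature is an assumption inferred from the prompt text, not the repo's actual helper:

```rust
// Hypothetical sketch only: delegates HTTP status and RPC error handling
// to the shared helper instead of duplicating timeout/error mapping.
async fn get_current_slot(client: &reqwest::Client, rpc_url: &str) -> anyhow::Result<u64> {
    let payload = serde_json::json!({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "getSlot",
        "params": []
    });
    // Assumed signature; match whatever send_rpc_request actually takes.
    let result = send_rpc_request(client, rpc_url, payload).await?;
    result
        .get("result")
        .and_then(|v| v.as_u64())
        .ok_or_else(|| anyhow::anyhow!("Failed to extract slot from response"))
}
```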
In `@forester/src/compressible/config.rs`:
- Around line 35-38: Update the doc comments that currently mention
`discriminator_hex` and `--pda-program` to reflect that parsing/serde uses
base58 and the CLI flag is `--compressible-pda-program`; specifically edit the
top comment block for the compressible PDA program configuration (the doc above
the compressible PDA config struct/type in config.rs) and the other occurrences
around lines 148-151 to say "base58" instead of "discriminator_hex" and to
reference `--compressible-pda-program` instead of `--pda-program`, so the
documentation matches the actual parsing and CLI wiring.
In `@forester/src/compressible/ctoken/state.rs`:
- Around line 77-115: The method update_from_token must skip mint/non-token
accounts so the tracker only stores actual token accounts; add an early guard in
update_from_token that detects when the provided Token (ctoken) represents a
mint (not a token account) and return Ok(()) without inserting; use whatever
discriminator the Token type provides (e.g., an is_mint()/is_account() check or
pattern match) to decide this, keeping the rest of the logic
(calculate_compressible_slot, building CTokenAccountState, and calling
self.insert) unchanged; this prevents get_ready_to_compress from including mint
accounts.
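A sketch of the guard shape this asks for; the is_mint_account discriminator check is hypothetical and should be replaced with whatever the real Token type actually exposes:

```rust
// Hypothetical guard: skip mint accounts so the tracker only holds token
// accounts and get_ready_to_compress never yields mints.
pub fn update_from_token(&self, pubkey: Pubkey, token: &Token) -> anyhow::Result<()> {
    if token.is_mint_account() {
        return Ok(()); // mints are tracked by the mint tracker, not here
    }
    // ... existing logic unchanged: calculate_compressible_slot,
    // build CTokenAccountState, self.insert(...)
    Ok(())
}
```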
In `@forester/src/compressible/mint/compressor.rs`:
- Around line 52-121: compress_batch currently builds all CompressAndCloseMint
instructions and sends them in a single transaction (using instruction_futures
-> instructions -> rpc.create_and_send_transaction), which can exceed Solana
transaction size/compute limits; fix by chunking mint_states into smaller
batches (e.g., max_instructions_per_tx constant) and for each chunk build
instructions via the existing instruction_futures logic and call
create_and_send_transaction per chunk, collecting/returning the final
Signature(s) or changing the method to return a Vec<Signature> or the last
signature, and ensure errors from create_and_send_transaction are mapped as they
are now; update compress_batch signature/return accordingly and reuse existing
variables (rpc_pool, payer_keypair, MintActionParams,
create_mint_action_instruction) so only batching logic is added.
- Around line 183-186: The call to
futures::stream::iter(compression_futures).buffer_unordered(max_concurrent) can
panic if max_concurrent is 0; clamp the concurrency to at least 1 before calling
buffer_unordered (e.g., replace uses of max_concurrent with
max_concurrent.max(1) or create let concurrency = max_concurrent.max(1) and pass
that) in the compressor.rs spot where compression_futures and max_concurrent are
used, and apply the same guard to the PDA compressor and any other
buffer_unordered call that accepts user-supplied concurrency values.
In `@forester/src/compressible/pda/compressor.rs`:
- Around line 189-276: The current implementation builds and sends a single
batched instruction using account_states (via build_compress_accounts_idempotent
and create_and_send_transaction), which can exceed Solana transaction/account
limits; modify compressible logic to chunk account_states into safe-sized
batches (e.g., configurable MAX_BATCH_SIZE) and for each chunk repeat deriving
compressed_addresses, fetching compressed_accounts/hashes, calling
get_validity_proof, building the ix with build_compress_accounts_idempotent, and
sending via create_and_send_transaction, aggregating/returning the signatures
(or return the last signature) and ensure errors are handled per-chunk;
reference account_states, derive_address, compressed_addresses,
compressed_accounts, hashes, proof_with_context,
build_compress_accounts_idempotent, and create_and_send_transaction when
implementing the chunking.
- Around line 165-168: Clamp the concurrency value passed into buffer_unordered
to at least 1 to avoid runtime panics: compute a local capped value (e.g., let
cap = std::cmp::max(1, max_concurrent_batches as usize) or std::cmp::max(1,
max_concurrent_sends as usize) depending on the site) and replace usages of
buffer_unordered(max_concurrent) / buffer_unordered(max_concurrent_batches) with
buffer_unordered(cap). Apply the same pattern where you iterate over
compression_futures / mint compression futures / send transaction futures /
epoch_manager futures so the variables used by buffer_unordered are never zero.
In `@forester/src/compressible/pda/types.rs`:
- Around line 1-28: Add PartialEq and Eq derives to PdaAccountState so it can be
compared in tests; update the struct declaration for PdaAccountState (the pub
struct PdaAccountState { ... }) to include #[derive(Clone, Debug, PartialEq,
Eq)] and ensure no fields (Pubkey or u64) prevent deriving Eq/PartialEq.
In `@forester/src/epoch_manager.rs`:
- Around line 2035-2055: The extra pda_tracker.remove(&account_state.pubkey)
call in the results processing loop duplicates removal already performed by
PdaCompressor::compress_batch_concurrent (and similarly the MintCompressor
counterpart mentioned for lines 2110–2123); remove the redundant remove call(s)
here so that only the compressor implementation owns tracker mutation, leaving
the match arms to only log success/error and update counters (e.g., keep
total_compressed increment on Ok, but delete the pda_tracker.remove invocation),
and ensure the compressor functions remain responsible for calling
pda_tracker.remove when they succeed.
In `@forester/src/lib.rs`:
- Around line 241-259: The current code only spawns the CToken bootstrap when
shutdown_bootstrap is Some, skipping tracker initialization when it's None;
always spawn the tokio task and pass the optional shutdown signal through.
Extract rpc_url and clone ctoken_tracker as before, capture shutdown_bootstrap
(the Option) into a local variable (or clone/move it into the closure) and call
compressible::bootstrap_ctoken_accounts(rpc_url, tracker_clone,
shutdown_bootstrap_option).await inside tokio::spawn, logging errors or
completion as before so the bootstrap runs regardless of whether the shutdown
receiver is present.
In `@forester/tests/test_compressible_mint.rs`:
- Around line 138-148: Replace the current multi-step checks that call
rpc.get_account(mint_pda), deserialize via Mint::deserialize and individually
assert mint.metadata.mint_decompressed with a single assert_eq against an
expected Mint instance: deserialize the on‑chain account into `mint` (using
Mint::deserialize), extract any runtime values (PDAs, timestamps, lamports,
etc.) you need, construct an `expected_mint: Mint` using those runtime values,
then assert_eq!(mint, expected_mint); factor the construction into a small
helper (e.g., build_expected_mint) to keep the test DRY and reuse later in this
file.
- Around line 57-69: The comment and the test disagree about
CreateMintParams.rent_payment semantics: the comment says CreateMint
auto-decompresses when rent_payment > 0 but the test sets rent_payment: 0;
update one to match the intended behavior. Either change the comment to say
"rent_payment == 0 => immediately compressible for testing" or set
CreateMintParams { ... rent_payment: 1, ... } to trigger auto-decompression;
adjust the inline comment accordingly so CreateMintParams and its rent_payment
comment are consistent.
In `@forester/tests/test_compressible_pda.rs`:
- Around line 323-329: Wrap the calls to get_create_accounts_proof (which
internally calls get_validity_proof) in a retry loop at the four test call
sites: inside test_compressible_pda_bootstrap,
test_compressible_pda_compression, and both places in
test_compressible_pda_subscription; perform up to 3 attempts with a 5-second
backoff between attempts (use tokio::time::sleep(Duration::from_secs(5))). On
each attempt await get_create_accounts_proof and break on success; if it errors,
log or store the error and retry until attempts exhausted, then return or expect
the final error so the test still fails if all retries fail. Ensure you only
change the invocation sites (wrapping the existing call) and preserve the
original arguments and behavior.
In `@sdk-libs/client/src/indexer/options.rs`:
- Around line 116-158: GetCompressedMintsByAuthorityOptions currently leaves
authority_type as None which relies on server defaults; change the Rust default
so clients default to MintAuthorityType::Both. Update the Default implementation
or the GetCompressedMintsByAuthorityOptions::new() constructor to set
authority_type = Some(MintAuthorityType::Both) (and keep the existing Optional
type/signature), or adjust docs to explicitly state None means Both—preferably
set Some(MintAuthorityType::Both) in new()/Default to make client behavior
explicit; reference GetCompressedMintsByAuthorityOptions, new(), Default,
authority_type, and MintAuthorityType::Both when making the change.
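A sketch of the explicit default; fields other than authority_type are assumptions about the struct's shape:

```rust
// Hypothetical Default making the client-side behavior explicit rather
// than relying on the server's interpretation of None.
impl Default for GetCompressedMintsByAuthorityOptions {
    fn default() -> Self {
        Self {
            cursor: None, // assumed field
            limit: None,  // assumed field
            authority_type: Some(MintAuthorityType::Both),
        }
    }
}
```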
In `@sdk-libs/client/src/local_test_validator.rs`:
- Around line 10-14: The upgradeable_programs field currently uses a tuple
Vec<(String, String, String)> which is unclear; replace it with a named struct
(e.g., UpgradeableProgramConfig with fields program_id, program_path,
upgrade_authority) and change the type of upgradeable_programs in
LightValidatorConfig to Vec<UpgradeableProgramConfig>, updating any construction
and pattern matching sites that reference upgradeable_programs or the tuple
order, while leaving sbf_programs as Vec<(String,String)> if desired.
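One possible shape for the named struct (field names follow the prompt; the rest of LightValidatorConfig is elided):

```rust
// Hypothetical replacement for the (String, String, String) tuple so call
// sites can no longer mix up the argument order.
#[derive(Clone, Debug)]
pub struct UpgradeableProgramConfig {
    pub program_id: String,
    pub program_path: String,
    pub upgrade_authority: String,
}

pub struct LightValidatorConfig {
    pub sbf_programs: Vec<(String, String)>,
    pub upgradeable_programs: Vec<UpgradeableProgramConfig>,
    // ... remaining fields unchanged
}
```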
In `@sdk-libs/program-test/src/indexer/test_indexer.rs`:
- Around line 999-1022: Replace the todo!() panics in get_compressed_mint,
get_compressed_mint_by_pda, and get_compressed_mints_by_authority with a
deterministic Err(IndexerError) return so callers get an error result instead of
a panic; follow the existing error pattern used elsewhere in the indexer (e.g.,
return an IndexerError variant such as UnimplementedOperation or a new
IndexerError::new("get_compressed_mint not implemented") consistent with how
other unimplemented RPCs are reported) and include the method name in the error
message to aid diagnostics.
```rust
async fn get_compressed_mint(
    &self,
    _address: Address,
    _config: Option<IndexerRpcConfig>,
) -> Result<Response<Option<CompressedMint>>, IndexerError> {
    todo!("get_compressed_mint not implemented")
}

async fn get_compressed_mint_by_pda(
    &self,
    _mint_pda: &Pubkey,
    _config: Option<IndexerRpcConfig>,
) -> Result<Response<Option<CompressedMint>>, IndexerError> {
    todo!("get_compressed_mint_by_pda not implemented")
}

async fn get_compressed_mints_by_authority(
    &self,
    _authority: &Pubkey,
    _options: Option<GetCompressedMintsByAuthorityOptions>,
    _config: Option<IndexerRpcConfig>,
) -> Result<Response<ItemsWithCursor<CompressedMint>>, IndexerError> {
    todo!("get_compressed_mints_by_authority not implemented")
}
```
Avoid todo!() panics in compressed-mint stub methods.
These three trait-required methods will panic if called during testing or local development. Instead, return a deterministic error that allows test code to handle the unimplemented operation gracefully.
Replace todo!("...") with an appropriate IndexerError variant (e.g., a structured error or similar pattern used elsewhere in the indexer for unimplemented methods) so callers receive a proper error result rather than a panic.
Actionable comments posted: 8
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
forester/src/compressible/ctoken/compressor.rs (1)
145-152: Verify the sentinel used for `delegate_index` when no delegate is present.
Lines 145-152 use `unwrap_or(0)`. If `0` is not the explicit "none" sentinel for `CompressAndCloseIndices`, this can incorrectly point to the first packed account (often the output queue), causing instruction failure or misbinding.
Run the following to confirm how `delegate_index` is interpreted and which sentinel is expected:
```bash
#!/bin/bash
# Inspect CompressAndCloseIndices and delegate_index usage
rg -n "CompressAndCloseIndices|delegate_index" -S
rg -n "compress_and_close|delegate_index" -S programs/ program-libs/ sdk-libs/ forester/ || true
```
🤖 Fix all issues with AI agents
In `@forester/src/compressible/mint/bootstrap.rs`:
- Around line 18-106: The bootstrap_mint_accounts function duplicates common
scaffolding (shutdown flag setup, reqwest client, localhost branching, logging)
found across ctoken/mint/pda bootstraps; extract that into a shared helper
(e.g., run_bootstrap or BootstrapRunner) that accepts rpc_url, program_id
(Pubkey), filters (Option<Vec<serde_json::Value>>), shutdown_rx
(Option<oneshot::Receiver<()>>), a processing closure (FnMut(RawAccountData) ->
bool) and a label for logs, and then call that helper from
bootstrap_mint_accounts while keeping only mint-specific bits
(MINT_ACCOUNT_TYPE_FILTER, ACCOUNT_TYPE_OFFSET, the process_account closure, and
calls to tracker.update_from_account) and replace direct calls to
bootstrap_standard_api/bootstrap_v2_api with the helper’s internal selection
logic; ensure the helper returns the same tuple shapes (pages/fetched/inserted)
so existing call sites can be adapted with minimal changes.
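A possible signature for that shared helper; RawAccountData and the return shape are assumptions taken from the prompt, not from the diff:

```rust
// Hypothetical shared scaffolding for the ctoken/mint/pda bootstraps.
pub struct BootstrapOutcome {
    pub pages: usize,
    pub fetched: usize,
    pub inserted: usize,
}

pub async fn run_bootstrap<F>(
    rpc_url: &str,
    program_id: Pubkey,
    filters: Option<Vec<serde_json::Value>>,
    shutdown_rx: Option<tokio::sync::oneshot::Receiver<()>>,
    process_account: F,
    label: &str,
) -> anyhow::Result<BootstrapOutcome>
where
    F: FnMut(RawAccountData) -> bool,
{
    // Shared shutdown-flag wiring, reqwest client setup, localhost vs V2
    // API selection, and logging would live here.
    todo!("shared selection logic")
}
```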
In `@forester/src/compressible/mint/state.rs`:
- Around line 18-32: The compressible-slot math in calculate_compressible_slot
is off-by-one: use the first unpaid epoch as the compressible boundary. Update
calculate_compressible_slot to compute compressible_slot as
(get_last_funded_epoch(...) + 1) * SLOTS_PER_EPOCH (referencing
calculate_compressible_slot, get_last_funded_epoch and SLOTS_PER_EPOCH) so
is_ready_to_compress (which checks current_slot > compressible_slot) only
returns true once the account has entered an unpaid epoch.
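A worked boundary check of the intended math, with an illustrative epoch length (SLOTS_PER_EPOCH = 100 is made up for the example):

```rust
// With last funded epoch 4 covering slots 400..=499, the first unpaid
// epoch starts at slot 500, so that is the compressible boundary.
const SLOTS_PER_EPOCH: u64 = 100;

fn compressible_slot(last_funded_epoch: u64) -> u64 {
    (last_funded_epoch + 1) * SLOTS_PER_EPOCH
}

fn is_ready_to_compress(current_slot: u64, last_funded_epoch: u64) -> bool {
    current_slot > compressible_slot(last_funded_epoch)
}

fn main() {
    assert!(!is_ready_to_compress(499, 4)); // last paid slot of epoch 4
    assert!(!is_ready_to_compress(500, 4)); // boundary slot, strict > check
    assert!(is_ready_to_compress(501, 4)); // inside the unpaid epoch
}
```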
In `@forester/src/compressible/pda/bootstrap.rs`:
- Around line 99-106: The memcmp filter built for the discriminator (see
discriminator_base58, filters, and program_config.discriminator) omits an
explicit encoding field; update the JSON filter to include "encoding": "base58"
alongside "offset" and "bytes" so RPCs that require explicit encoding receive
base58, matching the pattern used in mint/bootstrap.rs.
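For reference, the filter shape with the explicit encoding field might look like this (the offset and discriminator value are placeholders, not real values from the program):

```rust
fn main() {
    let discriminator_base58 = "11111111"; // placeholder, not a real discriminator
    let filter = serde_json::json!({
        "memcmp": {
            "offset": 0, // placeholder offset
            "bytes": discriminator_base58,
            "encoding": "base58"
        }
    });
    println!("{filter}");
}
```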
In `@forester/src/compressible/pda/compressor.rs`:
- Around line 284-305: The batch transaction path logs a warning when
rpc.confirm_transaction(signature) returns false but still returns
Ok(signature), causing callers (like compress()) to treat it as success; update
the compress_accounts_idempotent flow so that when confirmed is false you return
an Err (e.g., anyhow::anyhow!("Batch transaction not confirmed: {}", signature))
instead of Ok(signature), keep the tracing::warn log, and ensure you only remove
entries from self.tracker when confirmed is true (no tracker changes on the
error path); use the same error crate/type used elsewhere in this file to match
existing error handling.
In `@forester/tests/test_compressible_mint.rs`:
- Around line 354-439: Replace the if/else that checks ready_accounts with a
precondition assert and then run the compression logic unconditionally: remove
the surrounding if !ready_accounts.is_empty() { ... } else { panic!(...) }
wrapper and add assert!(!ready_accounts.is_empty(), "Mint should be ready to
compress with rent_payment=0"); above the existing code that creates
MintCompressor::new(...), calls
compressor.compress_batch(&ready_accounts).await, polls
rpc.get_account(mint_pda), and exercises
rpc.get_compressed_account/get_compressed_mint/get_compressed_mint_by_pda so the
body no longer needs nesting or the else panic.
In `@forester/tests/test_compressible_pda.rs`:
- Around line 403-411: Extract the repeated polling pattern into a reusable
async helper like wait_for_condition(timeout: Duration, poll_interval: Duration,
condition: F) -> bool (as suggested) and replace the manual loops (e.g., the
loop that uses start = tokio::time::Instant::now(); while start.elapsed() <
timeout { if !tracker.is_empty() { ... }
sleep(Duration::from_millis(500)).await; }) with calls to that helper passing
the 60s timeout, 500ms interval and a closure that checks tracker.is_empty();
ensure the helper uses tokio::time::Instant and tokio::time::sleep so tests
remain async-compatible and update other occurrences (lines referenced) to call
wait_for_condition to remove duplication and centralize timeout behavior.
In `@program-libs/compressible/Cargo.toml`:
- Line 38: The solana-rent dependency in Cargo.toml is declared unconditionally
but is only used as a mock fallback when not on-chain (see the
#[cfg(not(all(...)))] guard in src/rent/mod.rs), so mark the dependency optional
by adding optional = true to the solana-rent entry in Cargo.toml while not tying
it to any feature flags (do not add it to the solana feature or other features);
keep all existing uses and cfg guards unchanged so solana-rent remains available
for tests/local dev but not forced for consumers.
In `@sdk-libs/client/src/local_test_validator.rs`:
- Around line 71-78: The loop that appends upgradeable program args (iterating
config.upgradeable_programs and calling path.push_str with
upgradeable_program.program_path) can break if paths contain spaces/special
chars; update the push to quote or shell-escape each argument (e.g., wrap
program_path/program_id/upgrade_authority in quotes or use a shell-escaping
helper like shell_escape::escape) so the constructed command string safely
preserves spaces and special characters, and ensure the module imports the
escape helper if you choose shell_escape.
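A small sketch of the shell_escape approach (assumes adding the shell-escape crate as a dependency; output shown for Unix):

```rust
use std::borrow::Cow;

fn main() {
    let program_path = "/tmp/my programs/counter.so"; // note the space
    let mut cmd = String::from("--upgradeable-program ");
    // shell_escape::escape quotes only when the value needs it.
    cmd.push_str(&shell_escape::escape(Cow::from(program_path)));
    println!("{cmd}"); // --upgradeable-program '/tmp/my programs/counter.so'
}
```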
♻️ Duplicate comments (7)
forester/tests/test_compressible_pda.rs (1)
323-329: Add retry/backoff around `get_create_accounts_proof` to avoid CI flakiness. This call to `get_create_accounts_proof` (which internally calls `get_validity_proof`) is susceptible to transient indexer failures, as evidenced by the pipeline failure. The same issue exists at lines 511, 773, and 835. Consider wrapping these calls with retry logic (max 3 attempts, 5-second backoff) to improve CI stability. Based on learnings, CI should run batched tree transfer tests with retry logic due to known flakiness.
🔧 Suggested retry wrapper
```rust
async fn get_create_accounts_proof_with_retry<R: Rpc + Indexer>(
    rpc: &R,
    program_id: &Pubkey,
    inputs: Vec<CreateAccountsProofInput>,
) -> Result<CreateAccountsProofResult, Box<dyn std::error::Error>> {
    let max_attempts = 3;
    let mut last_error = None;
    for attempt in 1..=max_attempts {
        match get_create_accounts_proof(rpc, program_id, inputs.clone()).await {
            Ok(result) => return Ok(result),
            Err(e) => {
                println!("Attempt {}/{} failed: {:?}", attempt, max_attempts, e);
                last_error = Some(e);
                if attempt < max_attempts {
                    sleep(Duration::from_secs(5)).await;
                }
            }
        }
    }
    Err(last_error.unwrap().into())
}
```
sdk-libs/program-test/src/indexer/test_indexer.rs (1)
999-1023: Avoid `todo!()` panics in compressed-mint stubs. These methods will panic if the new Indexer APIs are invoked during tests; return a deterministic `IndexerError` instead so callers can handle the unimplemented path.
🔧 Proposed fix
```diff
@@
     async fn get_compressed_mint(
         &self,
         _address: Address,
         _config: Option<IndexerRpcConfig>,
     ) -> Result<Response<Option<CompressedMint>>, IndexerError> {
-        todo!("get_compressed_mint not implemented")
+        Err(IndexerError::NotImplemented(
+            "get_compressed_mint not implemented".to_string(),
+        ))
     }
@@
     async fn get_compressed_mint_by_pda(
         &self,
         _mint_pda: &Pubkey,
         _config: Option<IndexerRpcConfig>,
     ) -> Result<Response<Option<CompressedMint>>, IndexerError> {
-        todo!("get_compressed_mint_by_pda not implemented")
+        Err(IndexerError::NotImplemented(
+            "get_compressed_mint_by_pda not implemented".to_string(),
+        ))
     }
@@
     async fn get_compressed_mints_by_authority(
         &self,
         _authority: &Pubkey,
         _authority_type: MintAuthorityType,
         _options: Option<GetCompressedMintsByAuthorityOptions>,
         _config: Option<IndexerRpcConfig>,
     ) -> Result<Response<ItemsWithCursor<CompressedMint>>, IndexerError> {
-        todo!("get_compressed_mints_by_authority not implemented")
+        Err(IndexerError::NotImplemented(
+            "get_compressed_mints_by_authority not implemented".to_string(),
+        ))
     }
```
241-259: CToken bootstrap is skipped when `shutdown_bootstrap` is `None`. That leaves the ctokens tracker without historical state while PDA/Mint bootstraps still run. Spawn the task unconditionally and pass the optional shutdown receiver through.
🔧 Suggested fix
```diff
@@
-    // Spawn bootstrap task for ctokens
-    if let Some(shutdown_bootstrap_rx) = shutdown_bootstrap {
-        let tracker_clone = ctoken_tracker.clone();
-        let rpc_url = config.external_services.rpc_url.clone();
-
-        tokio::spawn(async move {
-            if let Err(e) = compressible::bootstrap_ctoken_accounts(
-                rpc_url,
-                tracker_clone,
-                Some(shutdown_bootstrap_rx),
-            )
-            .await
-            {
-                tracing::error!("CToken bootstrap failed: {:?}", e);
-            } else {
-                tracing::info!("CToken bootstrap complete");
-            }
-        });
-    }
+    // Spawn bootstrap task for ctokens
+    let tracker_clone = ctoken_tracker.clone();
+    let rpc_url = config.external_services.rpc_url.clone();
+    let shutdown_bootstrap_rx = shutdown_bootstrap;
+
+    tokio::spawn(async move {
+        if let Err(e) = compressible::bootstrap_ctoken_accounts(
+            rpc_url,
+            tracker_clone,
+            shutdown_bootstrap_rx,
+        )
+        .await
+        {
+            tracing::error!("CToken bootstrap failed: {:?}", e);
+        } else {
+            tracing::info!("CToken bootstrap complete");
+        }
+    });
```
forester/src/compressible/mint/compressor.rs (1)
208-211: Prevent panic when `max_concurrent == 0`.
`buffer_unordered(0)` will panic; clamp or validate before calling.
🔧 Proposed fix
```diff
-    let results: Vec<_> = futures::stream::iter(compression_futures)
-        .buffer_unordered(max_concurrent)
+    let concurrency = max_concurrent.max(1);
+    let results: Vec<_> = futures::stream::iter(compression_futures)
+        .buffer_unordered(concurrency)
```
forester/src/compressible/pda/compressor.rs (1)
165-168: Guard `max_concurrent` against zero before `buffer_unordered`.
Lines 165-168 can panic if `max_concurrent == 0`; this was already flagged earlier.
2035-2055: Redundant tracker removal after `compress_batch_concurrent`. This was flagged in a prior review: both `PdaCompressor::compress_batch_concurrent` and `MintCompressor::compress_batch_concurrent` already remove successful accounts from their trackers internally. The explicit `pda_tracker.remove(&account_state.pubkey)` (line 2043) and `mint_tracker.remove(&mint_state.pubkey)` (line 2115) calls duplicate that work, causing unnecessary DashMap writes.
Also applies to: 2110-2123
forester/tests/test_compressible_mint.rs (1)
91-104: Comment contradicts actual `rent_payment` value. Line 91 states "CreateMint auto-decompresses (rent_payment > 0)" but line 102 sets `rent_payment: 0`. This inconsistency was flagged previously. Either update the comment to reflect that `rent_payment=0` means "immediately compressible for testing" or adjust the value if decompression behavior differs.
📝 Suggested fix
```diff
-    // Build params - CreateMint auto-decompresses (rent_payment > 0)
+    // Build params - rent_payment=0 makes mint immediately compressible for testing
     let params = CreateMintParams {
```
```rust
let timeout = Duration::from_secs(60);

while start.elapsed() < timeout {
    if !tracker.is_empty() {
        println!("Bootstrap found {} PDA accounts", tracker.len());
        break;
    }
    sleep(Duration::from_millis(500)).await;
}
```
🧹 Nitpick | 🔵 Trivial
Consider extracting the polling timeout pattern.
The 60-second timeout with 500ms polling interval appears in multiple places (here, line 814, line 875, line 942). Consider extracting this into a helper to reduce duplication and ensure consistent timeout behavior across tests.
```rust
async fn wait_for_condition<F, Fut>(
    timeout: Duration,
    poll_interval: Duration,
    mut condition: F,
) -> bool
where
    F: FnMut() -> Fut,
    Fut: std::future::Future<Output = bool>,
{
    let start = tokio::time::Instant::now();
    while start.elapsed() < timeout {
        if condition().await {
            return true;
        }
        sleep(poll_interval).await;
    }
    false
}
```
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@forester/src/compressible/bootstrap_helpers.rs`:
- Around line 117-123: Replace uses of the deprecated base64::decode(...) with
the engine-based API so the code compiles under base64 v0.20+; for example,
change the call that assigns data from base64::decode(data_str) (the variable
named data in bootstrap_helpers.rs) to use
base64::engine::general_purpose::STANDARD.decode(data_str) (or import
base64::Engine and call STANDARD.decode(data_str)), preserve the existing error
handling and debug log that references pubkey, and apply the same refactor to
other occurrences in subscriber.rs and tree_data_sync.rs so all base64 decoding
uses the engine-based API.
In `@forester/src/compressible/pda/compressor.rs`:
- Around line 78-103: The code currently deserializes the config with
LightConfig::try_from_slice (skipping SDK checks) and uses a hardcoded
derive_pda(program_id, 0); replace the raw deserialization with the SDK
validator by calling LightConfig::load_checked on the fetched config account
(the account returned for config_pda) so the SDK can enforce config_bump == 0,
verify the PDA derivation matches config_pda, and apply any other constraints;
keep using LightConfig::derive_pda(program_id, 0) to locate config_pda, fetch
the account via rpc.get_account(...) as you already do, then pass that account
into LightConfig::load_checked and propagate any error instead of using
LightConfig::try_from_slice.
♻️ Duplicate comments (2)
forester/src/compressible/pda/compressor.rs (1)
165-169: Potential runtime panic with `buffer_unordered(0)`.
If `max_concurrent` is passed as `0`, `buffer_unordered(0)` will panic at runtime. This value can originate from configuration (e.g., environment variables), so defensive clamping is warranted.
🔧 Proposed fix
```diff
     // Execute concurrently with limit
+    let concurrency = max_concurrent.max(1);
     let results: Vec<_> = futures::stream::iter(compression_futures)
-        .buffer_unordered(max_concurrent)
+        .buffer_unordered(concurrency)
         .collect()
         .await;
```
forester/tests/test_compressible_mint.rs (1)
92-104: Comment doesn't match the code behavior. Line 92 says "CreateMint auto-decompresses (rent_payment > 0)" but line 103 sets `rent_payment: 0`. The code works correctly for testing (immediately compressible), but the comment is misleading.
📝 Proposed fix
```diff
-    // Build params - CreateMint auto-decompresses (rent_payment > 0)
+    // Build params - rent_payment=0 makes the mint immediately compressible for testing
     let params = CreateMintParams {
```
```rust
let data = match base64::decode(data_str) {
    Ok(bytes) => bytes,
    Err(e) => {
        debug!("Failed to decode base64 for account {}: {:?}", pubkey, e);
        return None;
    }
};
```
🧹 Nitpick | 🔵 Trivial
🧩 Analysis chain
🏁 Scripts executed against Lightprotocol/light-protocol:
```bash
cat forester/Cargo.toml | grep -A 5 "base64"
cat -n forester/src/compressible/bootstrap_helpers.rs | sed -n '110,130p'
find . -name "CLAUDE.md" -type f
head -100 Cargo.toml | grep -A 10 "\[workspace"
cat Cargo.toml | grep -B 5 -A 5 "base64"
cat CLAUDE.md | head -150
cat forester/CLAUDE.md
grep -r "base64::" forester/src/ | head -20
cat Cargo.toml | grep -A 2 'base64 = "0.13"'
```
Consider this refactoring when upgrading to base64 0.20+.
The base64::decode API is still the standard in the current version (0.13), but it was deprecated in base64 v0.20+. When you eventually upgrade the dependency, update to the engine-based API:
```diff
-    let data = match base64::decode(data_str) {
+    use base64::{Engine as _, engine::general_purpose::STANDARD};
+    let data = match STANDARD.decode(data_str) {
```
Apply the same pattern to other instances in subscriber.rs and tree_data_sync.rs for consistency.
Force-pushed from c499bd8 to 0093038
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
forester/src/compressible/ctoken/state.rs (1)
93-101: Avoid leaving stale state when slot calculation fails.
On failure you now return `Ok(())` without updating or removing the existing entry, so a previously tracked account can remain with an outdated `compressible_slot` and still be considered for compression. Consider removing the entry (or bubbling the error) to avoid acting on stale data.
🛠️ Suggested fix (remove stale entry on failure)
```diff
             Err(e) => {
                 warn!(
                     "Failed to calculate compressible slot for {}: {}",
                     pubkey, e
                 );
+                self.remove(&pubkey);
                 return Ok(());
             }
```
forester/src/compressible/bootstrap_helpers.rs (1)
223-225: Consider extending localhost detection for edge cases. The current check covers the common cases (`localhost`, `127.0.0.1`), but you may want to also handle:

- `0.0.0.0` (sometimes used for local binding)
- `[::1]` or `::1` (IPv6 localhost)

This is a minor edge case; the current implementation works for typical local development setups.
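A self-contained sketch of the extended check (the real helper's signature is assumed; a naive substring match is used here, and parsing the URL host would be stricter):

```rust
// Covers the extra local hosts mentioned above. A naive contains() check;
// parsing the URL host would avoid false positives on longer IPv6 addresses.
fn is_localhost(rpc_url: &str) -> bool {
    ["localhost", "127.0.0.1", "0.0.0.0", "[::1]"]
        .iter()
        .any(|host| rpc_url.contains(host))
}

fn main() {
    assert!(is_localhost("http://127.0.0.1:8899"));
    assert!(is_localhost("http://[::1]:8899"));
    assert!(!is_localhost("https://mainnet.example.com"));
}
```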
forester/tests/test_compressible_mint.rs (1)
321-343: Prefer `build_expected_mint` here to keep assertions consistent.
This reduces duplication and keeps the expected-state logic centralized.
♻️ Suggested refactor
```diff
-    // Build expected Mint
-    let expected_mint = Mint {
-        base: BaseMint {
-            mint_authority: Some(payer.pubkey().to_bytes().into()),
-            supply: 0,
-            decimals: 9,
-            is_initialized: true,
-            freeze_authority: None,
-        },
-        metadata: MintMetadata {
-            version: metadata_version,
-            mint_decompressed: true,
-            mint: mint_pda.to_bytes().into(),
-            mint_signer: mint_seed.pubkey().to_bytes(),
-            bump,
-        },
-        reserved: [0u8; 16],
-        account_type: ACCOUNT_TYPE_MINT,
-        compression,
-        extensions: None,
-    };
+    // Build expected Mint
+    let expected_mint = build_expected_mint(
+        &payer.pubkey(),
+        9,
+        &mint_pda,
+        &mint_seed.pubkey().to_bytes(),
+        bump,
+        metadata_version,
+        compression,
+    );
```
🤖 Fix all issues with AI agents
In `@forester/src/compressible/bootstrap_helpers.rs`:
- Around line 399-405: The spawned shutdown listener (tokio::spawn using
rx.await and shutdown_flag_clone.store) can leak if run_bootstrap returns early;
change the approach to either (1) race the bootstrap work against the shutdown
signal using tokio::select! (pass shutdown_rx into
bootstrap_standard_api/bootstrap_v2_api and check it there) so no background
task is spawned, or (2) keep the JoinHandle from tokio::spawn and explicitly
abort or await it before run_bootstrap returns (drop/abort the handle on all
return paths) to ensure the spawned task is cleaned up; update the code paths
that currently use the shutdown_rx + shutdown_flag_clone + rx.await spawn block
accordingly.
In `@forester/src/compressible/mint/compressor.rs`:
- Around line 153-170: The code currently pushes every chunk's signature into
signatures regardless of `confirmed`; update the logic in the function handling
the batching (the loop that uses `confirmed`, `chunk`, `chunk_idx`, `signature`,
and `self.tracker.remove`) so that you only push `signature` into the
`signatures` Vec when `confirmed` is true (i.e., move the
`signatures.push(signature)` inside the `if confirmed { ... }` block and keep
`self.tracker.remove(&mint_state.pubkey)` behavior unchanged), ensuring only
confirmed signatures are returned to callers.
In `@forester/tests/test_compressible_ctoken.rs`:
- Around line 528-532: The current loop uses .expect(&format!(...)) which
allocates even on success; change the lookup chain (the iterator .find(...)
calling .expect(...)) to defer allocation by replacing the expect/format call
with unwrap_or_else that panics only on the failure path (e.g., use
.unwrap_or_else(|| panic!("Bootstrap should have found account {}", pubkey))).
Update the code around expected_pubkeys, accounts, and account_state to use this
unwrap_or_else approach so the formatted string is only created when the lookup
fails.
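The difference in a runnable nutshell (the values here are stand-ins for the test's pubkeys):

```rust
fn main() {
    let expected_pubkeys = vec![1u8, 2, 3];
    let pubkey = 2u8;
    // format! only runs on the panic path, unlike .expect(&format!(...)),
    // which allocates the message even when the lookup succeeds.
    let account_state = expected_pubkeys
        .iter()
        .find(|p| **p == pubkey)
        .unwrap_or_else(|| panic!("Bootstrap should have found account {}", pubkey));
    assert_eq!(*account_state, 2);
}
```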
In `@program-libs/CLAUDE.md`:
- Around line 63-72: Add a blank line between the "## Testing" heading and the
fenced code block so the triple-backtick block is preceded by an empty line;
locate the "## Testing" section in CLAUDE.md and insert a single blank line
before the code fence (the ``` bash block containing the three cargo test lines)
to follow Markdown best practices.
- Around line 1-4: Replace the current paragraph summary in CLAUDE.md with a 2–5
bullet "Summary" section, and add "Used In" and "Navigation" headings with brief
bullet lists; keep the existing high-level description but reformat it into the
header structure ("## Summary", "## Used In", "## Navigation", plus any existing
high-level sections) so the file matches the CLAUDE.md template and provides
clear links to on-chain programs, sdk-libs, and relevant tests or crate docs.
♻️ Duplicate comments (3)
forester/src/compressible/pda/compressor.rs (1)
174-178: Guard against `max_concurrent == 0` to prevent `buffer_unordered` panic.
This was flagged in a previous review and addressed in `mint/compressor.rs` (lines 191-198), but the same fix is missing here. `buffer_unordered(0)` panics at runtime.
🔧 Proposed fix
```diff
     // Execute concurrently with limit
+    let concurrency = max_concurrent.max(1);
     let results: Vec<_> = futures::stream::iter(compression_futures)
-        .buffer_unordered(max_concurrent)
+        .buffer_unordered(concurrency)
         .collect()
         .await;
```
sdk-libs/client/src/local_test_validator.rs (1)
71-77: Consider shell-escaping upgradeable program args.
If any path contains spaces/special characters, `sh -c` will misparse the command line. This is low-risk for test infra but still worth hardening if paths can vary.
sdk-libs/program-test/src/indexer/test_indexer.rs (1)
999-1023: Replace `todo!()` with deterministic errors to avoid test panics.
These stub methods will panic if called; return an `IndexerError` instead so callers can handle unimplemented paths.
✅ Safer stub implementation
```diff
     async fn get_compressed_mint(
         &self,
         _address: Address,
         _config: Option<IndexerRpcConfig>,
     ) -> Result<Response<Option<CompressedMint>>, IndexerError> {
-        todo!("get_compressed_mint not implemented")
+        Err(IndexerError::CustomError(
+            "get_compressed_mint not implemented".to_string(),
+        ))
     }

     async fn get_compressed_mint_by_pda(
         &self,
         _mint_pda: &Pubkey,
         _config: Option<IndexerRpcConfig>,
     ) -> Result<Response<Option<CompressedMint>>, IndexerError> {
-        todo!("get_compressed_mint_by_pda not implemented")
+        Err(IndexerError::CustomError(
+            "get_compressed_mint_by_pda not implemented".to_string(),
+        ))
     }

     async fn get_compressed_mints_by_authority(
         &self,
         _authority: &Pubkey,
         _authority_type: MintAuthorityType,
         _options: Option<GetCompressedMintsByAuthorityOptions>,
         _config: Option<IndexerRpcConfig>,
     ) -> Result<Response<ItemsWithCursor<CompressedMint>>, IndexerError> {
-        todo!("get_compressed_mints_by_authority not implemented")
+        Err(IndexerError::CustomError(
+            "get_compressed_mints_by_authority not implemented".to_string(),
+        ))
     }
```
- Implemented `get_compressed_mint` and `get_compressed_mint_by_pda` methods in the PhotonIndexer.
- Added `get_compressed_mints_by_authority` method to retrieve compressed mints based on authority.
- Introduced new data structures: `CompressedMint`, `MintData`, and `CompressedMintList` to handle compressed mint data.
- Updated Photon API with new request and response models for compressed mint operations.
- Enhanced error handling for API responses related to compressed mints.
- Updated tests to cover new functionality for compressed mint retrieval.
…files; refactor mint compression logic to handle batching and improve error handling; enhance test cases for mint creation and compression
…ons to remove Option wrapper and set default value
…ction to use into() for conversion
…int compression test with improved assertions and logging
…and use it for mint metadata
…e balance after rent exemption and compression cost
Force-pushed from 7671bfc to e2663c4
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
program-libs/CLAUDE.md (1)
68-70: Include `--all-features` for light-batched-merkle-tree tests.
The testing guidance here should match the standard unit test command set for program-libs; batched-merkle-tree also needs `--all-features`. As per coding guidelines, keep CLAUDE.md accurate. Based on learnings, update the command.
✅ Proposed doc fix
```diff
-cargo test -p light-batched-merkle-tree
+cargo test -p light-batched-merkle-tree --all-features
```
forester/src/compressible/pda/compressor.rs (1)
227-243: Consider adding error context for individual account failures in parallel fetch.
When `try_join_all` fails, you lose visibility into which specific compressed account caused the failure. For debugging larger batches, capturing which address failed would be helpful.
♻️ Optional improvement
```diff
 let compressed_account_futures = compressed_addresses.iter().map(|addr| {
     let rpc_clone = self.rpc_pool.clone();
     let addr = *addr;
     async move {
         let rpc = rpc_clone.get_connection().await?;
         rpc.get_compressed_account(addr, None)
             .await
-            .map_err(|e| anyhow::anyhow!("Failed to get compressed account: {:?}", e))?
+            .map_err(|e| anyhow::anyhow!("Failed to get compressed account {:?}: {:?}", addr, e))?
             .value
-            .ok_or_else(|| anyhow::anyhow!("Compressed account not found: {:?}", addr))
+            .ok_or_else(|| anyhow::anyhow!("Compressed account not found for address: {:?}", addr))
     }
 });
```
forester/src/compressible/ctoken/state.rs (1)
93-101: Silent failure on compressible slot calculation may mask upstream issues.
Changing from error propagation to `warn!` + `Ok(())` means callers won't know when accounts fail to be tracked due to missing extensions or invalid rent data. This is fine for graceful degradation, but consider whether you want metrics or alerting on these "soft failures" in production, especially if many accounts start hitting this path unexpectedly. If this is intentional (e.g., you expect some accounts to lack the Compressible extension), a brief inline comment documenting that expectation would help future readers.
🤖 Fix all issues with AI agents
In `@forester/package.json`:
- Line 6: The package.json "build" npm script currently runs "cargo build" which
generates artifacts under target/debug, but NX expects outputs at
{workspaceRoot}/target/release; update the "build" script to run cargo build
--release (i.e., change the "build" script entry in package.json) so outputs
land in target/release, or alternatively change the NX configuration outputs to
point to {workspaceRoot}/target/debug to match the existing "build"
script—modify either the package.json "build" script or the NX outputs setting
to make them consistent.
In `@forester/src/compressible/bootstrap_helpers.rs`:
- Around line 415-473: Both branches of the RPC bootstrap (the is_localhost
branch calling bootstrap_standard_api and the else branch calling
bootstrap_v2_api) repeat the same post-call steps: aborting
shutdown_listener_handle, unwrapping the api_result, logging with label, and
constructing a BootstrapResult; extract that shared logic into a small helper or
closure (e.g., a function handle_bootstrap_result) that takes the api_result (or
its Ok tuple) and label and performs handle.abort(), info! logging, and
BootstrapResult construction, then call that helper from both places to remove
duplication while keeping bootstrap_standard_api and bootstrap_v2_api calls
intact.
In `@forester/tests/test_compressible_mint.rs`:
- Around line 759-767: The test currently finds a mint by comparing
m.mint.decimals == 9 which can collide with other mints; update the lookup in
the mints_by_authority query to use a unique identifier (e.g., compare m.address
or m.mint.compression_address against the known compression_address_1) so the
assert targets the specific mint under test (adjust the variable referenced in
the closure used by found_mint and keep the subsequent assert the same).
In `@sdk-libs/client/src/indexer/types.rs`:
- Around line 1058-1061: The doc comment for the struct field `extensions` is
incorrect: it states "Serialized extensions (base64 encoded)" but the field
`extensions: Option<Vec<u8>>` actually contains decoded bytes (base64 decoding
occurs in the `TryFrom` implementation). Update the comment on `extensions` in
types.rs to reflect that it stores decoded/serialized bytes (e.g., "Serialized
extensions (decoded bytes)") and, if helpful, add a brief note pointing to the
`TryFrom` implementation that performs base64 decoding.
In `@sdk-libs/photon-api/src/models/_get_compressed_mint_post_request_params.rs`:
- Around line 22-28: The current GetCompressedMintPostRequestParams::new()
constructs an instance with both address and mint_pda as None which allows
invalid requests; replace it with a checked constructor that enforces exactly
one selector is set (either address or mint_pda). Implement either two explicit
constructors (e.g., GetCompressedMintPostRequestParams::with_address(address:
String) and ::with_mint_pda(mint_pda: String)) or a single checked factory
(e.g., ::try_new(address: Option<String>, mint_pda: Option<String>) ->
Result<Self, Error>) that returns an error if both or neither are provided;
update callers to use the new constructors and remove/privatize the unsafe
public new() to prevent creating malformed params.
♻️ Duplicate comments (2)
forester/src/compressible/pda/compressor.rs (1)
176-180: `buffer_unordered(0)` will panic at runtime if `max_concurrent` is zero.
This issue was flagged in a previous review and remains unaddressed. If `max_concurrent` (derived from `max_concurrent_batches` config) is set to 0 via environment variable, this will cause a runtime panic. Guard with `.max(1)` before passing to `buffer_unordered()`.
🔧 Proposed fix
```diff
     // Execute concurrently with limit
+    let concurrency = max_concurrent.max(1);
     let results: Vec<_> = futures::stream::iter(compression_futures)
-        .buffer_unordered(max_concurrent)
+        .buffer_unordered(concurrency)
         .collect()
         .await;
```
999-1023: Avoid `todo!()` panics for the new compressed-mint methods. These should return a deterministic `Err(IndexerError::...)` so tests can handle the unimplemented path gracefully instead of crashing.
forester/package.json (Outdated)
```json
  "version": "0.3.0",
  "license": "GPL-3.0",
  "scripts": {
    "build": "cargo build",
```
Build mode doesn't match nx output directory.
The build script runs cargo build which produces artifacts in target/debug by default, but the nx configuration declares outputs at {workspaceRoot}/target/release. This mismatch means nx caching won't work correctly—it'll look for outputs that don't exist.
Either update the build script to use release mode, or adjust the nx outputs to match the actual build location.
♻️ Option A: Build in release mode
- "build": "cargo build",
+ "build": "cargo build --release",♻️ Option B: Point nx to debug output
"build": {
"outputs": [
- "{workspaceRoot}/target/release"
+ "{workspaceRoot}/target/debug"
]
}Also applies to: 16-19
```rust
let result = if is_localhost(rpc_url) {
    debug!("Detected localhost, using standard getProgramAccounts");
    let api_result = bootstrap_standard_api(
        &client,
        rpc_url,
        program_id,
        filters,
        Some(&shutdown_flag),
        process_fn,
    )
    .await;

    // Abort shutdown listener before returning (success or error)
    if let Some(handle) = shutdown_listener_handle {
        handle.abort();
    }

    let (fetched, inserted) = api_result?;

    info!(
        "{} bootstrap complete: {} fetched, {} inserted",
        label, fetched, inserted
    );

    BootstrapResult {
        pages: 1,
        fetched,
        inserted,
    }
} else {
    debug!("Using getProgramAccountsV2 with pagination");
    let api_result = bootstrap_v2_api(
        &client,
        rpc_url,
        program_id,
        filters,
        Some(&shutdown_flag),
        process_fn,
    )
    .await;

    // Abort shutdown listener before returning (success or error)
    if let Some(handle) = shutdown_listener_handle {
        handle.abort();
    }

    let (pages, fetched, inserted) = api_result?;

    info!(
        "{} bootstrap complete: {} pages, {} fetched, {} inserted",
        label, pages, fetched, inserted
    );

    BootstrapResult {
        pages,
        fetched,
        inserted,
    }
};
```
🧹 Nitpick | 🔵 Trivial
Consider extracting common post-bootstrap logic.
Both branches share the same pattern: abort handle → unwrap result → log → construct BootstrapResult. While the current duplication is acceptable for readability, you could extract this into a small closure or helper if you anticipate adding more API variants.
♻️ Optional refactor to reduce duplication
```diff
+ // Helper to finalize bootstrap
+ let finalize = |handle: Option<tokio::task::JoinHandle<()>>,
+                 api_result: Result<(usize, usize, usize), _>,
+                 label: &str| -> Result<BootstrapResult> {
+     if let Some(h) = handle {
+         h.abort();
+     }
+     let (pages, fetched, inserted) = api_result?;
+     info!(
+         "{} bootstrap complete: {} pages, {} fetched, {} inserted",
+         label, pages, fetched, inserted
+     );
+     Ok(BootstrapResult { pages, fetched, inserted })
+ };
+
 let result = if is_localhost(rpc_url) {
     debug!("Detected localhost, using standard getProgramAccounts");
-    let api_result = bootstrap_standard_api(...)
+    let api_result = bootstrap_standard_api(...)
         .await;
-    // ... current logic
+    finalize(shutdown_listener_handle, api_result.map(|(f, i)| (1, f, i)), label)?
 } else {
     // similar for V2 path
 };
```
In `@forester/src/compressible/bootstrap_helpers.rs` around lines 415 - 473, Both
branches of the RPC bootstrap (the is_localhost branch calling
bootstrap_standard_api and the else branch calling bootstrap_v2_api) repeat the
same post-call steps: aborting shutdown_listener_handle, unwrapping the
api_result, logging with label, and constructing a BootstrapResult; extract that
shared logic into a small helper or closure (e.g., a function
handle_bootstrap_result) that takes the api_result (or its Ok tuple) and label
and performs handle.abort(), info! logging, and BootstrapResult construction,
then call that helper from both places to remove duplication while keeping
bootstrap_standard_api and bootstrap_v2_api calls intact.
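As a concrete sketch of that extraction as a free function rather than a closure (illustrative only: it assumes anyhow::Result as the error type and the BootstrapResult struct shown above; the crate's own Result alias would slot in the same way):

use tokio::task::JoinHandle;
use tracing::info;

// Aborts the shutdown listener, propagates the API error if any, logs,
// and builds the BootstrapResult: the steps both branches repeat.
fn finalize_bootstrap(
    handle: Option<JoinHandle<()>>,
    api_result: anyhow::Result<(usize, usize, usize)>,
    label: &str,
) -> anyhow::Result<BootstrapResult> {
    if let Some(h) = handle {
        h.abort();
    }
    let (pages, fetched, inserted) = api_result?;
    info!(
        "{} bootstrap complete: {} pages, {} fetched, {} inserted",
        label, pages, fetched, inserted
    );
    Ok(BootstrapResult { pages, fetched, inserted })
}

The standard-API branch would then call finalize_bootstrap(shutdown_listener_handle, api_result.map(|(f, i)| (1, f, i)), label) to normalize its two-field result to the three-field shape.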
/// Whether the mint has been decompressed
pub mint_decompressed: bool,
/// Serialized extensions (base64 encoded)
pub extensions: Option<Vec<u8>>,
🧹 Nitpick | 🔵 Trivial
Inaccurate doc comment: field holds decoded bytes, not base64.
The comment says "Serialized extensions (base64 encoded)" but the field stores decoded Vec<u8>. The base64 decoding happens in the TryFrom implementation (line 1089). Consider updating the comment to reflect the actual field contents.
📝 Suggested doc comment fix
- /// Serialized extensions (base64 encoded)
+ /// Serialized extensions (decoded bytes from the API's base64 representation)
  pub extensions: Option<Vec<u8>>,

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
/// Whether the mint has been decompressed
pub mint_decompressed: bool,
/// Serialized extensions (decoded bytes from the API's base64 representation)
pub extensions: Option<Vec<u8>>,
🤖 Prompt for AI Agents
In `@sdk-libs/client/src/indexer/types.rs` around lines 1058 - 1061, The doc
comment for the struct field `extensions` is incorrect: it states "Serialized
extensions (base64 encoded)" but the field `extensions: Option<Vec<u8>>`
actually contains decoded bytes (base64 decoding occurs in the `TryFrom`
implementation). Update the comment on `extensions` in types.rs to reflect that
it stores decoded/serialized bytes (e.g., "Serialized extensions (decoded
bytes)") and, if helpful, add a brief note pointing to the `TryFrom`
implementation that performs base64 decoding.
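For readers tracing the decoding path, a minimal sketch of the kind of conversion the comment refers to, assuming the base64 crate's standard engine (the real TryFrom in types.rs may be shaped differently):

use base64::{engine::general_purpose::STANDARD, Engine as _};

// The API ships extensions as a base64 string; the struct stores the
// decoded bytes, so decoding happens during conversion.
fn decode_extensions(raw: Option<&str>) -> Result<Option<Vec<u8>>, base64::DecodeError> {
    raw.map(|s| STANDARD.decode(s)).transpose()
}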
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
forester/src/compressible/ctoken/state.rs (1)

93-101: Avoid stale tracker entries when compressible-slot calc fails.
Returning Ok(()) without updating/removing leaves any previously tracked state in place, which can schedule compression at the wrong slot. Consider removing the existing entry (or propagating the error) on failure.

🔧 Suggested fix

  Err(e) => {
      warn!(
          "Failed to calculate compressible slot for {}: {}",
          pubkey, e
      );
+     // Avoid stale entries if this account was previously tracked.
+     self.remove(&pubkey);
      return Ok(());
  }

forester/tests/test_compressible_ctoken.rs (2)

100-156: Add a timeout around registration/active phase waits to avoid long hangs.
When current_slot is already in the active phase, the new logic waits for the next epoch's registration start with no upper bound. That can stall CI for a long time if epochs are long or slot progress is slow. A bounded wait (with a clear error) keeps the test deterministic.
Consider guarding the two while rpc.get_slot() < ... loops with a timeout and failing fast if exceeded.

🔧 Suggested timeout guard

+ let wait_deadline = tokio::time::Instant::now() + Duration::from_secs(60);
  // Wait for registration phase
  while rpc.get_slot().await? < register_phase_start {
+     if tokio::time::Instant::now() > wait_deadline {
+         return Err("Timed out waiting for registration phase".into());
+     }
      sleep(Duration::from_millis(400)).await;
  }

+ let wait_deadline = tokio::time::Instant::now() + Duration::from_secs(60);
  // Wait for active phase
  while rpc.get_slot().await? < active_phase_start {
+     if tokio::time::Instant::now() > wait_deadline {
+         return Err("Timed out waiting for active phase".into());
+     }
      sleep(Duration::from_millis(400)).await;
  }

53-67: Mitigate airdrop rate-limit flakiness with retry/backoff.
CI already failed with "airdrop lamports request failed (rate limit)". Add a small retry helper with backoff to make these tests robust.

🔧 Example retry helper

+ async fn airdrop_with_retry<R: Rpc>(
+     rpc: &mut R,
+     pubkey: &Pubkey,
+     lamports: u64,
+ ) -> Result<(), Box<dyn std::error::Error>> {
+     let mut delay = Duration::from_millis(200);
+     for attempt in 1..=5 {
+         match rpc.airdrop_lamports(pubkey, lamports).await {
+             Ok(_) => return Ok(()),
+             Err(e) if attempt < 5 => {
+                 tracing::warn!("Airdrop failed (attempt {}): {:?}. Retrying...", attempt, e);
+                 sleep(delay).await;
+                 delay *= 2;
+             }
+             Err(e) => return Err(e.into()),
+         }
+     }
+     Ok(())
+ }

- rpc.airdrop_lamports(&governance_pubkey, 1_000_000_000)
-     .await?;
+ airdrop_with_retry(rpc, &governance_pubkey, 1_000_000_000).await?;

- rpc.airdrop_lamports(&forester_pubkey, 10_000_000_000)
-     .await?;
+ airdrop_with_retry(rpc, &forester_pubkey, 10_000_000_000).await?;

- rpc.airdrop_lamports(&payer.pubkey(), 10_000_000_000)
-     .await
-     .expect("Failed to airdrop lamports");
+ airdrop_with_retry(&mut rpc, &payer.pubkey(), 10_000_000_000)
+     .await
+     .expect("Failed to airdrop lamports");

Also applies to: 231-233
🤖 Fix all issues with AI agents
In `@forester/tests/test_compressible_mint.rs`:
- Around line 398-419: The test currently treats a mint as closed only if
rpc.get_account(mint_pda) returns None; update the loop that queries get_account
(variable mint_after) to also treat Some(account) with account.lamports == 0 as
closed (set account_closed = true and break) so zero‑lamport leftover accounts
are accepted as closed (same pattern used in the ctoken test).
In `@sdk-libs/photon-api/src/models/compressed_mint_list.rs`:
- Around line 11-19: The CompressedMintList struct derives Serialize and
Deserialize but the serde traits are not imported; add an explicit import for
serde::Serialize and serde::Deserialize at the top of the file so the derives on
the CompressedMintList struct (and its fields like items:
Vec<models::CompressedMint>, cursor: Option<String>) compile correctly.
♻️ Duplicate comments (2)

sdk-libs/client/src/local_test_validator.rs (1)

71-76: Consider shell-escaping upgradeable program args.
This is still vulnerable to spaces/special chars in paths. Quoting or shell_escape would make the command safer.

🔧 Example quoting

- path.push_str(&format!(
-     " --upgradeable-program {} {} {}",
-     upgradeable_program.program_id,
-     upgradeable_program.program_path,
-     upgradeable_program.upgrade_authority
- ));
+ path.push_str(&format!(
+     " --upgradeable-program {} \"{}\" {}",
+     upgradeable_program.program_id,
+     upgradeable_program.program_path,
+     upgradeable_program.upgrade_authority
+ ));

sdk-libs/program-test/src/indexer/test_indexer.rs (1)

999-1022: Avoid todo!() panics in the new compressed mint stubs.
These will still panic if invoked; return a deterministic IndexerError instead so tests can handle unimplemented behavior gracefully.
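If the crate route is preferred over manual quoting, a small sketch of the shell_escape variant hinted at above, assuming the shell-escape crate (the function name is illustrative):

use std::borrow::Cow;

// Escapes only the path, the field most likely to contain spaces;
// program IDs and authorities are base58 and safe as-is.
fn upgradeable_program_arg(program_id: &str, program_path: &str, authority: &str) -> String {
    format!(
        " --upgradeable-program {} {} {}",
        program_id,
        shell_escape::escape(Cow::from(program_path)),
        authority
    )
}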
+ // Wait for account to be closed
+ let start = tokio::time::Instant::now();
+ let timeout = Duration::from_secs(30);
+ let mut account_closed = false;

- // Verify compressed mint still exists in the merkle tree
- let compressed_after = rpc
-     .get_compressed_account(compression_address, None)
-     .await
-     .unwrap()
-     .value;
- assert!(
-     compressed_after.is_some(),
-     "Compressed mint should still exist after compression"
- );
-
- println!("Mint compression test completed successfully!");
- } else {
-     panic!("Mint should be ready to compress with rent_payment=0");
- }
+ while start.elapsed() < timeout {
+     let mint_after = rpc
+         .get_account(mint_pda)
+         .await
+         .expect("Failed to query mint account");
+     if mint_after.is_none() {
+         account_closed = true;
+         println!("Mint account closed successfully!");
+         break;
+     }
+     sleep(Duration::from_millis(500)).await;
+ }
+
+ assert!(
+     account_closed,
+     "Mint account should be closed after compression"
+ );
Handle “closed” mints that remain with 0 lamports.
Compression might leave a zero‑lamport account rather than deleting it outright. The current check treats only None as closed, which can cause false failures. Consider treating lamports == 0 as closed too (same pattern as the ctoken test).
🔧 Suggested check
- if mint_after.is_none() {
+ if mint_after.is_none()
+     || mint_after.as_ref().map(|a| a.lamports) == Some(0)
+ {
      account_closed = true;
      println!("Mint account closed successfully!");
      break;
  }

🤖 Prompt for AI Agents
In `@forester/tests/test_compressible_mint.rs` around lines 398 - 419, The test
currently treats a mint as closed only if rpc.get_account(mint_pda) returns
None; update the loop that queries get_account (variable mint_after) to also
treat Some(account) with account.lamports == 0 as closed (set account_closed =
true and break) so zero‑lamport leftover accounts are accepted as closed (same
pattern used in the ctoken test).
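Since the same closed-check recurs across the ctoken and mint tests, it could live in one helper; a small sketch assuming solana_sdk's Account type (the helper name is illustrative):

use solana_sdk::account::Account;

// A missing account or a zero-lamport leftover both count as closed.
fn is_account_closed(account: &Option<Account>) -> bool {
    account.as_ref().map_or(true, |a| a.lamports == 0)
}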
use crate::models;

#[derive(Clone, Default, Debug, PartialEq, Serialize, Deserialize)]
pub struct CompressedMintList {
    #[serde(rename = "items")]
    pub items: Vec<models::CompressedMint>,
    #[serde(rename = "cursor", skip_serializing_if = "Option::is_none")]
    pub cursor: Option<String>,
}
Same missing serde import pattern.
Like the previous model, this file derives Serialize and Deserialize without importing them. If there's a crate-level prelude that handles this, disregard; otherwise, add the import.
🔧 Proposed fix
  use crate::models;
+ use serde::{Deserialize, Serialize};

  #[derive(Clone, Default, Debug, PartialEq, Serialize, Deserialize)]

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
use crate::models;
use serde::{Deserialize, Serialize};

#[derive(Clone, Default, Debug, PartialEq, Serialize, Deserialize)]
pub struct CompressedMintList {
    #[serde(rename = "items")]
    pub items: Vec<models::CompressedMint>,
    #[serde(rename = "cursor", skip_serializing_if = "Option::is_none")]
    pub cursor: Option<String>,
}
🤖 Prompt for AI Agents
In `@sdk-libs/photon-api/src/models/compressed_mint_list.rs` around lines 11 - 19,
The CompressedMintList struct derives Serialize and Deserialize but the serde
traits are not imported; add an explicit import for serde::Serialize and
serde::Deserialize at the top of the file so the derives on the
CompressedMintList struct (and its fields like items:
Vec<models::CompressedMint>, cursor: Option<String>) compile correctly.
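An alternative to adding the import is to fully qualify the derive paths, which avoids touching the use section of generated files; a sketch:

use crate::models;

// Fully qualified derive paths remove the need for a separate serde import.
#[derive(Clone, Default, Debug, PartialEq, serde::Serialize, serde::Deserialize)]
pub struct CompressedMintList {
    #[serde(rename = "items")]
    pub items: Vec<models::CompressedMint>,
    #[serde(rename = "cursor", skip_serializing_if = "Option::is_none")]
    pub cursor: Option<String>,
}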
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
forester/tests/test_compressible_mint.rs (2)
322-341: Consider using the build_expected_mint helper for consistency.
You've defined the build_expected_mint helper at lines 26-58 and use it in test_compressible_mint_bootstrap, but here you're constructing the expected Mint inline. Using the helper keeps things DRY and ensures any future changes to the Mint structure only need updating in one place.

♻️ Suggested refactor

- // Build expected Mint
- let expected_mint = Mint {
-     base: BaseMint {
-         mint_authority: Some(payer.pubkey().to_bytes().into()),
-         supply: 0,
-         decimals: 9,
-         is_initialized: true,
-         freeze_authority: None,
-     },
-     metadata: MintMetadata {
-         version: metadata_version,
-         mint_decompressed: true,
-         mint: mint_pda.to_bytes().into(),
-         mint_signer: mint_seed.pubkey().to_bytes(),
-         bump,
-     },
-     reserved: [0u8; 16],
-     account_type: ACCOUNT_TYPE_MINT,
-     compression,
-     extensions: None,
- };
+ // Build expected Mint using helper
+ let expected_mint = build_expected_mint(
+     &payer.pubkey(),
+     9,
+     &mint_pda,
+     &mint_seed.pubkey().to_bytes(),
+     bump,
+     metadata_version,
+     compression,
+ );

641-652: Same zero-lamport handling needed here.
Consistent with the other test, this loop should also treat zero-lamport accounts as closed.

🔧 Suggested fix

  while start.elapsed() < timeout {
      let mint_after = rpc
          .get_account(mint_pda_1)
          .await
          .expect("Failed to query mint account");
-     if mint_after.is_none() {
+     if mint_after.is_none()
+         || mint_after.as_ref().map(|a| a.lamports) == Some(0)
+     {
          account_closed = true;
          println!("First mint account closed successfully!");
          break;
      }
      sleep(Duration::from_millis(500)).await;
  }

♻️ Duplicate comments (1)

forester/tests/test_compressible_mint.rs (1)

403-414: Handle zero-lamport accounts as "closed".
A past review noted that compression might leave a zero-lamport account rather than deleting it outright. The current check only treats None as closed. Treating lamports == 0 as closed too makes the test more robust.

🔧 Suggested fix

  while start.elapsed() < timeout {
      let mint_after = rpc
          .get_account(mint_pda)
          .await
          .expect("Failed to query mint account");
-     if mint_after.is_none() {
+     if mint_after.is_none()
+         || mint_after.as_ref().map(|a| a.lamports) == Some(0)
+     {
          account_closed = true;
          println!("Mint account closed successfully!");
          break;
      }
      sleep(Duration::from_millis(500)).await;
  }
Summary by CodeRabbit
New Features
Bug Fixes
Tests