This document describes the performance benchmarking infrastructure and optimization strategies for CommitLabs contracts.
Performance benchmarking is integrated into the development workflow to ensure contracts remain gas-efficient and performant. The benchmarking system measures:
- Gas Usage: Cost of executing each function
- Execution Time: Time taken for function execution
- Storage Costs: Cost of storage operations
- Cross-Contract Calls: Cost of invoking other contracts
- Batch Operations: Efficiency of batch operations
```bash
# Run all benchmarks
bash scripts/benchmark.sh

# Run specific contract benchmarks
cargo test --package commitment_core --features benchmark --release
cargo test --package commitment_nft --features benchmark --release
cargo test --package attestation_engine --features benchmark --release
cargo test --package allocation_logic --features benchmark --release
```

Benchmarks run automatically in CI/CD on every push and pull request. Results are saved to `benchmarks/results/` with timestamps.
Benchmark results are stored in `benchmarks/results/` with the following naming convention:

- `benchmark_YYYYMMDD_HHMMSS.log` - Full benchmark output
- `summary_YYYYMMDD_HHMMSS.md` - Summary report
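The contents of `scripts/benchmark.sh` are not shown here, but a minimal sketch that follows the naming convention above might look like this (the package list mirrors the commands shown earlier; treat the script body as an assumption, not the actual implementation):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Timestamp shared by both output files, e.g. 20250101_120000
STAMP="$(date +%Y%m%d_%H%M%S)"
OUT_DIR="benchmarks/results"
mkdir -p "$OUT_DIR"
LOG="$OUT_DIR/benchmark_${STAMP}.log"
SUMMARY="$OUT_DIR/summary_${STAMP}.md"

# Run each contract's benchmark suite in release mode, appending to the log.
for pkg in commitment_core commitment_nft attestation_engine allocation_logic; do
  echo "== $pkg ==" >> "$LOG"
  # Guarded so the sketch degrades gracefully outside the real repo
  if command -v cargo >/dev/null; then
    cargo test --package "$pkg" --features benchmark --release >> "$LOG" 2>&1 || true
  fi
done

# Distill the full log into a short markdown summary
{
  echo "# Benchmark summary ($STAMP)"
  grep -E '^(==|test result:)' "$LOG" || true
} > "$SUMMARY"
```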
- Reduced Storage Reads: Optimized `create_commitment` to read `TotalCommitments` and `TotalValueLocked` once instead of multiple times
- Batch Storage Operations: Combined related storage reads/writes where possible
- Efficient Key Generation: Optimized commitment ID generation to minimize string operations
- Hot Path Optimization: Optimized frequently called functions like `get_commitment` and `check_violations`
- Calculation Efficiency: Reduced redundant calculations in violation checking
- Early Returns: Added early returns in validation functions to avoid unnecessary processing
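The "read once" and "early return" techniques above can be sketched in plain Rust. The `Storage` type below is a stand-in that counts reads so the effect of caching is observable; the real contracts read ledger storage, and the key names and signature are taken from the list above, not from the actual code:

```rust
use std::collections::HashMap;

/// Stand-in for contract storage that counts reads, so the effect of
/// caching is observable. The real contracts use ledger storage.
struct Storage {
    data: HashMap<&'static str, i128>,
    reads: u32,
}

impl Storage {
    fn get(&mut self, key: &'static str) -> i128 {
        self.reads += 1; // each storage read costs gas on-chain
        *self.data.get(key).unwrap_or(&0)
    }
    fn set(&mut self, key: &'static str, value: i128) {
        self.data.insert(key, value);
    }
}

/// Sketch of the cached pattern: read TotalCommitments and TotalValueLocked
/// once, validate with an early return, then write the updated values back.
fn create_commitment(storage: &mut Storage, amount: i128) -> Result<u64, &'static str> {
    // Early return: reject invalid input before touching storage at all
    if amount <= 0 {
        return Err("amount must be positive");
    }
    // Single read of each counter, cached in locals for the rest of the call
    let total_commitments = storage.get("TotalCommitments");
    let total_value_locked = storage.get("TotalValueLocked");

    let id = total_commitments as u64 + 1;
    storage.set("TotalCommitments", total_commitments + 1);
    storage.set("TotalValueLocked", total_value_locked + amount);
    Ok(id)
}

fn main() {
    let mut storage = Storage { data: HashMap::new(), reads: 0 };
    let id = create_commitment(&mut storage, 500).unwrap();
    // Exactly two storage reads, however often the totals are used afterwards
    println!("id={id} reads={}", storage.reads);
}
```

The payoff is the invariant in the final comment: the read count stays constant no matter how much logic consumes the cached values.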
`commitment_core`:

- `initialize`: Initial contract setup
- `create_commitment`: Creating new commitments (includes token transfer and NFT mint)
- `get_commitment`: Reading commitment data
- `check_violations`: Checking rule violations
- `settle`: Settling commitments at maturity
- `early_exit`: Early exit with penalty

`commitment_nft`:

- `initialize`: Contract initialization
- `mint`: Minting new NFTs
- `get_metadata`: Reading NFT metadata
- `owner_of`: Getting NFT owner
- `balance_of`: Getting owner's balance
- `transfer`: Transferring NFTs

`attestation_engine`:

- `initialize`: Contract initialization
- `attest`: Recording attestations
- `get_attestations`: Retrieving attestations
- `calculate_compliance_score`: Calculating compliance scores
- `verify_compliance`: Verifying commitment compliance

`allocation_logic`:

- `initialize`: Contract initialization
- `register_pool`: Registering liquidity pools
- `allocate`: Allocating funds to pools
- `get_allocation`: Retrieving allocation data
- `get_pool`: Getting pool information
- `rebalance`: Rebalancing allocations
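The benchmark suites themselves are not shown; one hedged sketch of how a `--features benchmark` test might time a hot-path function with `std::time::Instant` is below. The function body is a placeholder, and wall-clock timing only catches gross regressions; on-chain gas remains the authoritative metric:

```rust
use std::time::Instant;

/// Stand-in for a hot-path read like get_commitment; the real function
/// reads ledger storage inside the contract test environment.
fn get_commitment(id: u64) -> u64 {
    // placeholder work so there is something to measure
    (0..64).fold(id, |acc, i| acc.wrapping_add(acc.rotate_left(i)))
}

/// Time `iters` calls and return the average nanoseconds per call.
fn bench<F: FnMut()>(iters: u32, mut f: F) -> u128 {
    let start = Instant::now();
    for _ in 0..iters {
        f();
    }
    start.elapsed().as_nanos() / iters as u128
}

// In the real suite this would be a #[test] gated behind
// #[cfg(feature = "benchmark")] so plain `cargo test` skips it.
fn main() {
    let avg = bench(10_000, || {
        // black_box keeps the optimizer from deleting the measured call
        std::hint::black_box(get_commitment(std::hint::black_box(42)));
    });
    println!("get_commitment: ~{avg} ns/call (only meaningful in --release)");
}
```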
- Run benchmarks in release mode: Always use the `--release` flag for accurate gas measurements
- Compare before/after: Always compare benchmark results before and after optimizations
- Track trends: Monitor gas usage over time to catch regressions
- Test edge cases: Benchmark with various input sizes and edge cases
- Document changes: Document any significant performance changes
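The before/after comparison can be scripted. The summary format below (`function: <n> gas` lines) is an assumption for illustration only; adapt the parsing to whatever the real summary files contain:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Two fabricated summaries purely for illustration; in practice these are
# the before/after files from benchmarks/results/.
cat > before.md <<'EOF'
create_commitment: 120000 gas
get_commitment: 8000 gas
EOF
cat > after.md <<'EOF'
create_commitment: 95000 gas
get_commitment: 8100 gas
EOF

# Join the summaries on function name and print the per-function delta
join <(sort before.md) <(sort after.md) |
  awk '{ printf "%s %+d gas (%s -> %s)\n", $1, $4 - $2, $2, $4 }' |
  tee delta.txt
```

For the fabricated numbers above this prints `create_commitment: -25000 gas (120000 -> 95000)` and `get_commitment: +100 gas (8000 -> 8100)`, making improvements and regressions visible at a glance.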
- Batch Operations: Implement batch versions of operations where possible
- Storage Layout: Optimize storage key structures for efficient access
- Cross-Contract Calls: Minimize cross-contract calls in hot paths
- String Operations: Reduce string concatenation and manipulation
- Vector Operations: Optimize vector operations for large datasets
- Event Emission: Batch events where possible
- Code Size: Reduce contract size through code optimization
- Compilation: Optimize compilation flags for smaller WASM size
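On the compilation point, the Cargo release-profile settings commonly used for small, deterministic WASM contract builds look roughly like this (these are conventional values, not taken from this repo's `Cargo.toml`; verify against your toolchain):

```toml
# Cargo.toml - release profile tuned for small WASM output
[profile.release]
opt-level = "z"        # optimize for size rather than speed
lto = true             # link-time optimization removes unused code
codegen-units = 1      # a single codegen unit enables more optimization
panic = "abort"        # drop unwinding machinery from the binary
strip = "symbols"      # strip symbol names from the artifact
overflow-checks = true # keep arithmetic safety even in release builds
debug = 0              # no debug info in the WASM
```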
- Benchmarks run automatically in CI/CD
- Results are compared against baseline metrics
- Performance regressions trigger alerts
- Optimization opportunities are tracked in issues
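One way a regression alert can work is a threshold check whose nonzero exit status fails the CI job. The file format and 10% threshold below are assumptions for illustration; the two input files are fabricated in place so the sketch is self-contained:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical "function gas" measurements; in CI the baseline comes from
# the repo and current.txt from the fresh benchmark run.
cat > baseline.txt <<'EOF'
create_commitment 120000
get_commitment 8000
EOF
cat > current.txt <<'EOF'
create_commitment 121000
get_commitment 9500
EOF

THRESHOLD_PCT=10  # alert when a function's gas grows by more than 10%

# Flag any function whose gas grew past the threshold; the nonzero exit
# status is what makes the CI job raise the alert.
join baseline.txt current.txt |
  awk -v t="$THRESHOLD_PCT" '{
    pct = ($3 - $2) * 100 / $2
    if (pct > t) { printf "REGRESSION %s: +%.1f%% (%d -> %d)\n", $1, pct, $2, $3; bad = 1 }
  } END { exit bad }' > regressions.txt || echo "performance regression detected"
```

With these numbers, `get_commitment` (+18.75%) is flagged while `create_commitment` (+0.83%) passes.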
- Automated Performance Reports: Generate detailed performance reports
- Gas Cost Tracking: Track gas costs over time
- Optimization Suggestions: Automated suggestions for optimization
- Performance Budgets: Set and enforce performance budgets
- Comparison Tools: Tools to compare benchmark results across versions